Fix links [ci skip]

Ines Montani 2019-02-17 22:25:50 +01:00
parent 04b4df0ec9
commit 212ff359ef
6 changed files with 14 additions and 16 deletions


@@ -295,7 +295,7 @@ If you've trained your own model, for example for
 convenient to deploy, we recommend wrapping it as a Python package.
 For more information and a detailed guide on how to package your model, see the
-documentation on [saving and loading models](/usage/training#saving-loading).
+documentation on [saving and loading models](/usage/saving-loading#models).
 ## Using models in production {#production}


@@ -18,10 +18,10 @@ spaCy makes it very easy to create your own pipelines consisting of reusable
 components – this includes spaCy's default tagger, parser and entity recognizer,
 but also your own custom processing functions. A pipeline component can be added
 to an already existing `nlp` object, specified when initializing a `Language`
-class, or defined within a [model package](/usage/training#saving-loading).
+class, or defined within a [model package](/usage/saving-loading#models).
 When you load a model, spaCy first consults the model's
-[`meta.json`](/usage/training#saving-loading). The meta typically includes the
+[`meta.json`](/usage/saving-loading#models). The meta typically includes the
 model details, the ID of a language class, and an optional list of pipeline
 components. spaCy then does the following:

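As an aside, a pipeline component as described in the hunk above is just a callable that receives and returns a `Doc`. A minimal sketch of adding one to a blank `nlp` object, assuming spaCy v3's string-based registration API (which postdates this commit); the component name `lowercase_logger` is illustrative:

```python
import spacy
from spacy.language import Language

# Hypothetical component name; any callable registered this way
# can be added to a pipeline by name.
@Language.component("lowercase_logger")
def lowercase_logger(doc):
    # a component receives the Doc and must return it
    return doc

nlp = spacy.blank("en")           # blank English Language class
nlp.add_pipe("lowercase_logger")  # add the custom component
print(nlp.pipe_names)
```

In spaCy v2, the era of this commit, the callable itself was passed to `nlp.add_pipe` instead of a registered string name.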

@@ -379,7 +379,7 @@ import Serialization101 from 'usage/101/\_serialization.md'
 <Infobox title="📖 Saving and loading">
 To learn more about how to **save and load your own models**, see the usage
-guide on [saving and loading](/usage/training#saving-loading).
+guide on [saving and loading](/usage/saving-loading#models).
 </Infobox>
@@ -655,7 +655,7 @@ new_doc = Doc(Vocab()).from_disk("/tmp/customer_feedback_627.bin")
 <Infobox>
 **API:** [`Language`](/api/language), [`Doc`](/api/doc) **Usage:**
-[Saving and loading models](/usage/models#saving-loading)
+[Saving and loading models](/usage/saving-loading#models)
 </Infobox>
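The `Doc.from_disk` pattern shown in the hunk header above can be rounded out into a full save/load cycle. A minimal sketch, assuming a recent spaCy; the example text and temp path are ours:

```python
import pathlib
import tempfile

import spacy
from spacy.tokens import Doc
from spacy.vocab import Vocab

nlp = spacy.blank("en")
doc = nlp("The customer was happy with the new feature")

path = pathlib.Path(tempfile.mkdtemp()) / "feedback.bin"
doc.to_disk(path)                       # serialize the Doc to disk
new_doc = Doc(Vocab()).from_disk(path)  # load it into a fresh Doc
print(new_doc.text)
```

The serialized bytes carry the strings the `Doc` needs, which is why it can be deserialized into a brand-new, empty `Vocab`.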
@@ -690,7 +690,7 @@ print("Sentiment", doc.sentiment)
 <Infobox>
 **API:** [`Matcher`](/api/matcher) **Usage:**
-[Rule-based matching](/usage/linguistic-features#rule-based-matching)
+[Rule-based matching](/usage/rule-based-matching)
 </Infobox>


@@ -7,12 +7,11 @@ menu:
   - ['Tagger & Parser', 'tagger-parser']
   - ['Text Classification', 'textcat']
   - ['Tips and Advice', 'tips']
-  - ['Saving & Loading', 'saving-loading']
 ---
 This guide describes how to train new statistical models for spaCy's
 part-of-speech tagger, named entity recognizer and dependency parser. Once the
-model is trained, you can then [save and load](/usage/models#saving-loading) it.
+model is trained, you can then [save and load](/usage/saving-loading#models) it.
 ## Training basics {#basics}
@@ -97,8 +96,7 @@ may end up with the following entities, some correct, some incorrect.
 | Spotify steps up Asia expansion | Spotify | `0` | `8` | `ORG` | ✅ |
 | Spotify steps up Asia expansion | Asia | `17` | `21` | `NORP` | ❌ |
-Alternatively, the
-[rule-based matcher](/usage/linguistic-features#rule-based-matching) can be a
+Alternatively, the [rule-based matcher](/usage/rule-based-matching) can be a
 useful tool to extract tokens or combinations of tokens, as well as their start
 and end index in a document. In this case, we'll extract mentions of Google and
 assume they're an `ORG`.
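That extraction can be sketched with the `Matcher`: find every "Google" token and read off its character offsets. A minimal sketch, assuming spaCy v3's `Matcher.add` signature (which postdates this commit); the pattern and example text are ours:

```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab)
# one-token pattern matching "Google" case-insensitively;
# the pattern name "ORG" is our label, not a trained entity type
matcher.add("ORG", [[{"LOWER": "google"}]])

doc = nlp("Google rebrands its ads platform; Google Maps is unaffected")
matches = matcher(doc)
for match_id, start, end in matches:
    span = doc[start:end]
    print(span.text, span.start_char, span.end_char, "ORG")
```

The `(start_char, end_char, label)` triples printed here are exactly the shape of the training annotations in the table above.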
@@ -473,9 +471,9 @@ for hotels with high ratings for their wifi offerings.
 > To achieve even better accuracy, try merging multi-word tokens and entities
 > specific to your domain into one token before parsing your text. You can do
 > this by running the entity recognizer or
-> [rule-based matcher](/usage/linguistic-features#rule-based-matching) to find
-> relevant spans, and merging them using
-> [`Doc.retokenize`](/api/doc#retokenize). You could even add your own custom
+> [rule-based matcher](/usage/rule-based-matching) to find relevant spans, and
+> merging them using [`Doc.retokenize`](/api/doc#retokenize). You could even add
+> your own custom
 > [pipeline component](/usage/processing-pipelines#custom-components) to do this
 > automatically – just make sure to add it `before='parser'`.
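The merging step in the tip above can be sketched with `Doc.retokenize`; the example sentence and span are ours, assuming a recent spaCy:

```python
import spacy

nlp = spacy.blank("en")
doc = nlp("The New York Times reports record growth")
print([t.text for t in doc])  # one token per word

# merge the multi-word name into a single token before parsing
with doc.retokenize() as retokenizer:
    retokenizer.merge(doc[1:4])  # the span "New York Times"

print([t.text for t in doc])  # "New York Times" is now one token
```

In a real pipeline the span would come from the entity recognizer or matcher rather than a hard-coded slice.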


@@ -265,7 +265,7 @@ language, you can import the class directly, e.g.
 **API:** [`spacy.load`](/api/top-level#spacy.load),
 [`Language.to_disk`](/api/language#to_disk) **Usage:**
 [Models](/usage/models#usage),
-[Saving and loading](/usage/training#saving-loading)
+[Saving and loading](/usage/saving-loading#models)
 </Infobox>
@@ -337,7 +337,7 @@ patterns.
 <Infobox>
 **API:** [`Matcher`](/api/matcher), [`PhraseMatcher`](/api/phrasematcher)
-**Usage:** [Rule-based matching](/usage/linguistic-features#rule-based-matching)
+**Usage:** [Rule-based matching](/usage/rule-based-matching)
 </Infobox>


@@ -384,7 +384,7 @@
 "id": "matcher-explorer",
 "title": "Rule-based Matcher Explorer",
 "slogan": "Test spaCy's rule-based Matcher by creating token patterns interactively",
-"description": "Test spaCy's rule-based `Matcher` by creating token patterns interactively and running them over your text. Each token can set multiple attributes like text value, part-of-speech tag or boolean flags. The token-based view lets you explore how spaCy processes your text and why your pattern matches, or why it doesn't. For more details on rule-based matching, see the [documentation](https://spacy.io/usage/linguistic-features#rule-based-matching).",
+"description": "Test spaCy's rule-based `Matcher` by creating token patterns interactively and running them over your text. Each token can set multiple attributes like text value, part-of-speech tag or boolean flags. The token-based view lets you explore how spaCy processes your text and why your pattern matches, or why it doesn't. For more details on rule-based matching, see the [documentation](https://spacy.io/usage/rule-based-matching).",
 "image": "https://explosion.ai/assets/img/demos/matcher.png",
 "thumb": "https://i.imgur.com/rPK4AGt.jpg",
 "url": "https://explosion.ai/demos/matcher",