Fix missing ids

Ines Montani 2019-03-14 17:56:53 +01:00
parent cffe63ea24
commit cbcba699dd
9 changed files with 23 additions and 22 deletions

View File

@@ -113,7 +113,7 @@ default regular expressions with your own in the language's `Defaults`.
 </Infobox>
-### Creating a `Language` subclass {#language-subclass}
+### Creating a language subclass {#language-subclass}
 Language-specific code and resources should be organized into a sub-package of
 spaCy, named according to the language's
@@ -614,7 +614,7 @@ require models to be trained from labeled examples. The word vectors, word
 probabilities and word clusters also require training, although these can be
 trained from unlabeled text, which tends to be much easier to collect.
-### Creating a vocabulary file
+### Creating a vocabulary file {#vocab-file}
 spaCy expects that common words will be cached in a [`Vocab`](/api/vocab)
 instance. The vocabulary caches lexical features. spaCy loads the vocabulary
@@ -644,7 +644,7 @@ If you don't have a large sample of text available, you can also convert word
 vectors produced by a variety of other tools into spaCy's format. See the docs
 on [converting word vectors](/usage/vectors-similarity#converting) for details.
-### Creating or converting a training corpus
+### Creating or converting a training corpus {#training-corpus}
 The easiest way to train spaCy's tagger, parser, entity recognizer or text
 categorizer is to use the [`spacy train`](/api/cli#train) command-line utility.

View File

@@ -29,7 +29,7 @@ Here's a quick comparison of the functionalities offered by spaCy,
 | Entity linking | ❌ | ❌ | ❌ |
 | Coreference resolution | ❌ | ❌ | ✅ |
-### When should I use what?
+### When should I use what? {#comparison-usage}
 Natural Language Understanding is an active area of research and development, so
 there are many different tools or technologies catering to different use-cases.

View File

@@ -28,7 +28,7 @@ import QuickstartInstall from 'widgets/quickstart-install.js'
 ## Installation instructions {#installation}
-### pip
+### pip {#pip}
 Using pip, spaCy releases are available as source packages and binary wheels (as
 of v2.0.13).
@@ -58,7 +58,7 @@ source .env/bin/activate
 pip install spacy
 ```
-### conda
+### conda {#conda}
 Thanks to our great community, we've been able to re-add conda support. You can
 also install spaCy via `conda-forge`:
@@ -194,7 +194,7 @@ official distributions these are:
 | Python 3.4 | Visual Studio 2010 |
 | Python 3.5+ | Visual Studio 2015 |
-### Run tests
+### Run tests {#run-tests}
 spaCy comes with an
 [extensive test suite](https://github.com/explosion/spaCy/tree/master/spacy/tests).
@@ -418,7 +418,7 @@ either of these, clone your repository again.
 </Accordion>
-## Changelog
+## Changelog {#changelog}
 import Changelog from 'widgets/changelog.js'

View File

@@ -111,7 +111,7 @@ print(nlp.pipe_names)
 # ['tagger', 'parser', 'ner']
 ```
-### Built-in pipeline components
+### Built-in pipeline components {#built-in}
 spaCy ships with several built-in pipeline components that are also available in
 the `Language.factories`. This means that you can initialize them by calling

View File

@@ -22,7 +22,7 @@ the changes, see [this table](/usage/v2#incompat) and the notes on
 </Infobox>
-### Serializing the pipeline
+### Serializing the pipeline {#pipeline}
 When serializing the pipeline, keep in mind that this will only save out the
 **binary data for the individual components** to allow spaCy to restore them
@@ -361,7 +361,7 @@ In theory, the entry point mechanism also lets you overwrite built-in factories
 including the tokenizer. By default, spaCy will output a warning in these
 cases, to prevent accidental overwrites and unintended results.
-#### Advanced components with settings
+#### Advanced components with settings {#advanced-cfg}
 The `**cfg` keyword arguments that the factory receives are passed down all the
 way from `spacy.load`. This means that the factory can respond to custom

View File

@@ -14,7 +14,7 @@ faster runtime, and many bug fixes, v2.1 also introduces experimental support
 for some exciting new NLP innovations. For the full changelog, see the
 [release notes on GitHub](https://github.com/explosion/spaCy/releases/tag/v2.1.0).
-### BERT/ULMFit/Elmo-style pre-training {tag="experimental"}
+### BERT/ULMFit/Elmo-style pre-training {#pretraining tag="experimental"}
 > #### Example
 >
@@ -39,7 +39,7 @@ it.
 </Infobox>
-### Extended match pattern API
+### Extended match pattern API {#matcher-api}
 > #### Example
 >
@@ -67,7 +67,7 @@ values.
 </Infobox>
-### Easy rule-based entity recognition
+### Easy rule-based entity recognition {#entity-ruler}
 > #### Example
 >
@@ -91,7 +91,7 @@ flexibility.
 </Infobox>
-### Phrase matching with other attributes
+### Phrase matching with other attributes {#phrasematcher}
 > #### Example
 >
@@ -115,7 +115,7 @@ or `POS` for finding sequences of the same part-of-speech tags.
 </Infobox>
-### Retokenizer for merging and splitting
+### Retokenizer for merging and splitting {#retokenizer}
 > #### Example
 >
@@ -142,7 +142,7 @@ deprecated.
 </Infobox>
-### Components and languages via entry points
+### Components and languages via entry points {#entry-points}
 > #### Example
 >
@@ -169,7 +169,7 @@ is required.
 </Infobox>
-### Improved documentation
+### Improved documentation {#docs}
 Although it looks pretty much the same, we've rebuilt the entire documentation
 using [Gatsby](https://www.gatsbyjs.org/) and [MDX](https://mdxjs.com/). It's
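The `{#...}` markers added in the hunks above are explicit heading ids in the MDX sources, so a heading's anchor stays stable even if its text changes. As a rough sketch only (a hypothetical `parseHeading` helper, not the site's actual remark/MDX plugin), such a marker could be split off a heading line like this:

```javascript
// Hypothetical sketch: split an explicit id marker off a markdown heading,
// e.g. "### pip {#pip}" or "### Pre-training {#pretraining tag="experimental"}".
// Not the site's real implementation, just an illustration of the syntax.
function parseHeading(line) {
    const match = line.match(/^(#+)\s+(.*?)(?:\s+\{(#[\w-]+)?([^}]*)\})?\s*$/)
    if (!match) return null
    const [, hashes, text, id, rest] = match
    return {
        level: hashes.length,               // number of leading #
        text,                               // heading text without the marker
        id: id ? id.slice(1) : null,        // explicit id, if one was given
        attrs: rest ? rest.trim() : '',     // extras like tag="experimental"
    }
}

console.log(parseHeading('### pip {#pip}'))
// { level: 3, text: 'pip', id: 'pip', attrs: '' }
```

Headings without a marker, like `## Changelog` before this commit, would come back with `id: null`, which is exactly the "missing id" case this commit fixes.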

View File

@@ -50,7 +50,7 @@ const Quickstart = ({ data, title, description, id, children }) => {
         <Section id={id}>
             <div className={classes.root}>
                 {title && (
-                    <H2 className={classes.title}>
+                    <H2 className={classes.title} name={id}>
                         <a href={`#${id}`}>{title}</a>
                     </H2>
                 )}

View File

@@ -83,6 +83,7 @@ const Permalink = ({ id, children }) =>
 const Headline = ({
     Component,
     id,
+    name,
     new: version,
     model,
     tag,
@@ -100,7 +101,7 @@ const Headline = ({
     })
     const tags = tag ? tag.split(',').map(t => t.trim()) : []
     return (
-        <Component id={id} className={headingClassNames}>
+        <Component id={id} name={name} className={headingClassNames}>
             <Permalink id={id}>{children} </Permalink>
             {tags.map((tag, i) => (
                 <Tag spaced key={i}>
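The new `name` prop threaded through `Headline` ends up on the rendered heading element alongside its `id`, so fragment links resolve against either attribute. A minimal sketch of the rendered markup (a hypothetical `renderHeading` helper, not the actual React component):

```javascript
// Hypothetical illustration of the markup this change produces:
// the heading carries an id (and optionally a name) plus a permalink,
// so fragment URLs like /usage#installation can resolve to it.
function renderHeading(level, id, name, text) {
    const nameAttr = name ? ` name="${name}"` : ''
    return `<h${level} id="${id}"${nameAttr}><a href="#${id}">${text}</a></h${level}>`
}

console.log(renderHeading(2, 'installation', 'installation', 'Installation instructions'))
// <h2 id="installation" name="installation"><a href="#installation">Installation instructions</a></h2>
```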

View File

@@ -95,10 +95,10 @@ const Changelog = () => {
         error
     ) : isLoading ? null : (
         <>
-            <H3>Stable Releases</H3>
+            <H3 id="changelog-stable">Stable Releases</H3>
             <ChangelogTable data={releases} />
-            <H3>Pre-Releases</H3>
+            <H3 id="changelog-pre">Pre-Releases</H3>
             <p>
                 Pre-releases include alpha and beta versions, as well as release candidates. They