Mirror of https://github.com/explosion/spaCy.git, synced 2024-12-24 17:06:29 +03:00
Fix missing ids

This commit is contained in: parent cffe63ea24, commit cbcba699dd
@@ -113,7 +113,7 @@ default regular expressions with your own in the language's `Defaults`.
 
 </Infobox>
 
-### Creating a `Language` subclass {#language-subclass}
+### Creating a language subclass {#language-subclass}
 
 Language-specific code and resources should be organized into a sub-package of
 spaCy, named according to the language's
@@ -614,7 +614,7 @@ require models to be trained from labeled examples. The word vectors, word
 probabilities and word clusters also require training, although these can be
 trained from unlabeled text, which tends to be much easier to collect.
 
-### Creating a vocabulary file
+### Creating a vocabulary file {#vocab-file}
 
 spaCy expects that common words will be cached in a [`Vocab`](/api/vocab)
 instance. The vocabulary caches lexical features. spaCy loads the vocabulary
@@ -644,7 +644,7 @@ If you don't have a large sample of text available, you can also convert word
 vectors produced by a variety of other tools into spaCy's format. See the docs
 on [converting word vectors](/usage/vectors-similarity#converting) for details.
 
-### Creating or converting a training corpus
+### Creating or converting a training corpus {#training-corpus}
 
 The easiest way to train spaCy's tagger, parser, entity recognizer or text
 categorizer is to use the [`spacy train`](/api/cli#train) command-line utility.
@@ -29,7 +29,7 @@ Here's a quick comparison of the functionalities offered by spaCy,
 | Entity linking | ❌ | ❌ | ❌ |
 | Coreference resolution | ❌ | ❌ | ✅ |
 
-### When should I use what?
+### When should I use what? {#comparison-usage}
 
 Natural Language Understanding is an active area of research and development, so
 there are many different tools or technologies catering to different use-cases.
@@ -28,7 +28,7 @@ import QuickstartInstall from 'widgets/quickstart-install.js'
 
 ## Installation instructions {#installation}
 
-### pip
+### pip {#pip}
 
 Using pip, spaCy releases are available as source packages and binary wheels (as
 of v2.0.13).
@@ -58,7 +58,7 @@ source .env/bin/activate
 pip install spacy
 ```
 
-### conda
+### conda {#conda}
 
 Thanks to our great community, we've been able to re-add conda support. You can
 also install spaCy via `conda-forge`:
@@ -194,7 +194,7 @@ official distributions these are:
 | Python 3.4 | Visual Studio 2010 |
 | Python 3.5+ | Visual Studio 2015 |
 
-### Run tests
+### Run tests {#run-tests}
 
 spaCy comes with an
 [extensive test suite](https://github.com/explosion/spaCy/tree/master/spacy/tests).
@@ -418,7 +418,7 @@ either of these, clone your repository again.
 
 </Accordion>
 
-## Changelog
+## Changelog {#changelog}
 
 import Changelog from 'widgets/changelog.js'
 
@@ -111,7 +111,7 @@ print(nlp.pipe_names)
 # ['tagger', 'parser', 'ner']
 ```
 
-### Built-in pipeline components
+### Built-in pipeline components {#built-in}
 
 spaCy ships with several built-in pipeline components that are also available in
 the `Language.factories`. This means that you can initialize them by calling
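The registry idea behind `Language.factories` mentioned in the hunk above can be sketched in plain Python. This is only an illustration of the name-to-factory pattern; it is not spaCy's actual code, and the component name is an invented stand-in.

```python
# Minimal sketch of a name -> factory registry, mimicking the idea behind
# Language.factories. Illustration only; not spaCy's implementation.
factories = {}

def register_factory(name):
    """Decorator that registers a component factory under a name."""
    def decorator(func):
        factories[name] = func
        return func
    return decorator

@register_factory("tagger")
def create_tagger(nlp=None, **cfg):
    # A real factory would build a trained component; this stub returns a
    # component that passes the doc through unchanged.
    return lambda doc: doc

# Components can now be initialized by looking them up by name:
component = factories["tagger"]()
```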
@@ -22,7 +22,7 @@ the changes, see [this table](/usage/v2#incompat) and the notes on
 
 </Infobox>
 
-### Serializing the pipeline
+### Serializing the pipeline {#pipeline}
 
 When serializing the pipeline, keep in mind that this will only save out the
 **binary data for the individual components** to allow spaCy to restore them –
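The point made in the hunk above, that serializing saves out only each component's binary data, can be illustrated with a toy pickle-based sketch. This is not spaCy's serialization format; the component names and weights below are invented placeholders.

```python
import pickle

# Toy sketch: serialize each component's data to a separate binary blob,
# mirroring the idea that only per-component binary data is saved out.
pipeline_data = {
    "tagger": {"weights": [0.1, 0.2]},
    "parser": {"weights": [0.3]},
}
blobs = {name: pickle.dumps(data) for name, data in pipeline_data.items()}

# Restoring reverses the process, component by component:
restored = {name: pickle.loads(blob) for name, blob in blobs.items()}
```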
@@ -361,7 +361,7 @@ In theory, the entry point mechanism also lets you overwrite built-in factories
 – including the tokenizer. By default, spaCy will output a warning in these
 cases, to prevent accidental overwrites and unintended results.
 
-#### Advanced components with settings
+#### Advanced components with settings {#advanced-cfg}
 
 The `**cfg` keyword arguments that the factory receives are passed down all the
 way from `spacy.load`. This means that the factory can respond to custom
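The `**cfg` pass-through described in the hunk above can be sketched in plain Python. This is a hypothetical illustration of a factory responding to custom settings, not spaCy's internals; the component and the `threshold` setting are made-up names.

```python
# Hypothetical sketch of a component factory that receives **cfg settings
# passed down from the loader -- illustrative only, not spaCy's internals.

def make_custom_component(nlp=None, **cfg):
    """Build a component, respecting custom settings from **cfg."""
    threshold = cfg.get("threshold", 0.5)  # made-up setting name

    def component(doc):
        # A real component would modify the Doc; here we just record the
        # setting on a plain list standing in for a Doc.
        doc.append(("threshold", threshold))
        return doc

    return component

# The loader would forward user config all the way to the factory:
pipe = make_custom_component(threshold=0.75)
doc = pipe([])  # [] stands in for a Doc object
```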
@@ -14,7 +14,7 @@ faster runtime, and many bug fixes, v2.1 also introduces experimental support
 for some exciting new NLP innovations. For the full changelog, see the
 [release notes on GitHub](https://github.com/explosion/spaCy/releases/tag/v2.1.0).
 
-### BERT/ULMFit/Elmo-style pre-training {tag="experimental"}
+### BERT/ULMFit/Elmo-style pre-training {#pretraining tag="experimental"}
 
 > #### Example
 >
@@ -39,7 +39,7 @@ it.
 
 </Infobox>
 
-### Extended match pattern API
+### Extended match pattern API {#matcher-api}
 
 > #### Example
 >
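The extended pattern syntax that the heading in the hunk above refers to adds set membership (`IN`, `NOT_IN`) and rich comparison operators to token patterns. A sketch of what such patterns look like as plain Python data; the specific words and values are invented examples:

```python
# Token patterns using the v2.1 extended syntax. Patterns are plain lists
# of dicts, one dict per token. Example values are invented.
greeting_pattern = [
    {"LOWER": {"IN": ["hello", "hi", "hey"]}},  # any of these words
    {"IS_PUNCT": True, "OP": "?"},              # optionally followed by punctuation
]
long_token_pattern = [
    {"LENGTH": {">=": 10}},                     # tokens of 10 or more characters
]
```

Since these are plain dicts and lists, they can be built, inspected, or validated programmatically before being handed to the matcher.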
@@ -67,7 +67,7 @@ values.
 
 </Infobox>
 
-### Easy rule-based entity recognition
+### Easy rule-based entity recognition {#entity-ruler}
 
 > #### Example
 >
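Rule-based entity recognition takes patterns that pair a label with either a phrase string or a token pattern. A sketch of that pattern format as plain Python data; the example labels and values are invented:

```python
# EntityRuler-style patterns: a label plus either a phrase (string) or a
# token pattern (list of dicts). Example values are invented.
patterns = [
    {"label": "ORG", "pattern": "Apple"},
    {"label": "GPE", "pattern": [{"LOWER": "san"}, {"LOWER": "francisco"}]},
]
```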
@@ -91,7 +91,7 @@ flexibility.
 
 </Infobox>
 
-### Phrase matching with other attributes
+### Phrase matching with other attributes {#phrasematcher}
 
 > #### Example
 >
@@ -115,7 +115,7 @@ or `POS` for finding sequences of the same part-of-speech tags.
 
 </Infobox>
 
-### Retokenizer for merging and splitting
+### Retokenizer for merging and splitting {#retokenizer}
 
 > #### Example
 >
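The merge semantics behind the retokenizer heading above can be pictured on a plain list of token strings. This is only an illustration of what merging does, not spaCy's retokenizer API:

```python
def merge_span(tokens, start, end, separator=" "):
    """Replace tokens[start:end] with one merged token (illustration only)."""
    merged = separator.join(tokens[start:end])
    return tokens[:start] + [merged] + tokens[end:]

tokens = ["New", "York", "is", "big"]
tokens = merge_span(tokens, 0, 2)
# tokens is now ["New York", "is", "big"]
```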
@@ -142,7 +142,7 @@ deprecated.
 
 </Infobox>
 
-### Components and languages via entry points
+### Components and languages via entry points {#entry-points}
 
 > #### Example
 >
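The entry point mechanism named in the hunk above is declared through setuptools. Below is a sketch of a `setup.py` fragment exposing a component factory; the package, module, and function names are placeholders, and the `spacy_factories` group name is an assumption about the v2.1 entry point groups.

```python
# setup.py fragment (shown for illustration, not executed here).
# Package, module, and function names are placeholders.
from setuptools import setup

setup(
    name="custom_component_package",
    entry_points={
        "spacy_factories": [
            "my_component = custom_component_package:create_component",
        ],
    },
)
```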
@@ -169,7 +169,7 @@ is required.
 
 </Infobox>
 
-### Improved documentation
+### Improved documentation {#docs}
 
 Although it looks pretty much the same, we've rebuilt the entire documentation
 using [Gatsby](https://www.gatsbyjs.org/) and [MDX](https://mdxjs.com/). It's
@@ -50,7 +50,7 @@ const Quickstart = ({ data, title, description, id, children }) => {
         <Section id={id}>
             <div className={classes.root}>
                 {title && (
-                    <H2 className={classes.title}>
+                    <H2 className={classes.title} name={id}>
                         <a href={`#${id}`}>{title}</a>
                     </H2>
                 )}
@@ -83,6 +83,7 @@ const Permalink = ({ id, children }) =>
 const Headline = ({
     Component,
     id,
+    name,
     new: version,
     model,
     tag,
@@ -100,7 +101,7 @@ const Headline = ({
     })
     const tags = tag ? tag.split(',').map(t => t.trim()) : []
     return (
-        <Component id={id} className={headingClassNames}>
+        <Component id={id} name={name} className={headingClassNames}>
             <Permalink id={id}>{children} </Permalink>
             {tags.map((tag, i) => (
                 <Tag spaced key={i}>
@@ -95,10 +95,10 @@ const Changelog = () => {
         error
     ) : isLoading ? null : (
         <>
-            <H3>Stable Releases</H3>
+            <H3 id="changelog-stable">Stable Releases</H3>
             <ChangelogTable data={releases} />
 
-            <H3>Pre-Releases</H3>
+            <H3 id="changelog-pre">Pre-Releases</H3>
 
             <p>
                 Pre-releases include alpha and beta versions, as well as release candidates. They