diff --git a/website/docs/usage/_benchmarks-models.md b/website/docs/usage/_benchmarks-models.md index 4b25418b5..5b193d3a4 100644 --- a/website/docs/usage/_benchmarks-models.md +++ b/website/docs/usage/_benchmarks-models.md @@ -1,10 +1,10 @@ import { Help } from 'components/typography'; import Link from 'components/link' - +
-| System | Parser | Tagger | NER | WPS
CPU words per second on CPU, higher is better | WPS
GPU words per second on GPU, higher is better | +| Pipeline | Parser | Tagger | NER | WPS
CPU words per second on CPU, higher is better | WPS
GPU words per second on GPU, higher is better | | ---------------------------------------------------------- | -----: | -----: | ---: | ------------------------------------------------------------------: | -----------------------------------------------------------------: | | [`en_core_web_trf`](/models/en#en_core_web_trf) (spaCy v3) | | | | | 6k | | [`en_core_web_lg`](/models/en#en_core_web_lg) (spaCy v3) | | | | | | @@ -21,10 +21,10 @@ import { Help } from 'components/typography'; import Link from 'components/link'
-| Named Entity Recognition Model | OntoNotes | CoNLL '03 | +| Named Entity Recognition System | OntoNotes | CoNLL '03 | | ------------------------------------------------------------------------------ | --------: | --------: | | spaCy RoBERTa (2020) | | 92.2 | -| spaCy CNN (2020) | | 88.4 | +| spaCy CNN (2020) | 85.3 | 88.4 | | spaCy CNN (2017) | 86.4 | | | [Stanza](https://stanfordnlp.github.io/stanza/) (StanfordNLP)1 | 88.8 | 92.1 | | Flair2 | 89.7 | 93.1 | diff --git a/website/docs/usage/embeddings-transformers.md b/website/docs/usage/embeddings-transformers.md index d61172a5b..b00760e62 100644 --- a/website/docs/usage/embeddings-transformers.md +++ b/website/docs/usage/embeddings-transformers.md @@ -235,8 +235,6 @@ The `Transformer` component sets the [`Doc._.trf_data`](/api/transformer#custom_attributes) extension attribute, which lets you access the transformers outputs at runtime. - - ```cli $ python -m spacy download en_core_trf_lg ``` diff --git a/website/docs/usage/facts-figures.md b/website/docs/usage/facts-figures.md index 743dae74d..a31559b04 100644 --- a/website/docs/usage/facts-figures.md +++ b/website/docs/usage/facts-figures.md @@ -63,7 +63,7 @@ import Benchmarks from 'usage/\_benchmarks-models.md'
-| System | UAS | LAS | +| Dependency Parsing System | UAS | LAS | | ------------------------------------------------------------------------------ | ---: | ---: | | spaCy RoBERTa (2020)1 | 96.8 | 95.0 | | spaCy CNN (2020)1 | 93.7 | 91.8 | diff --git a/website/docs/usage/linguistic-features.md b/website/docs/usage/linguistic-features.md index 914e18acb..d9a894398 100644 --- a/website/docs/usage/linguistic-features.md +++ b/website/docs/usage/linguistic-features.md @@ -1654,9 +1654,12 @@ The [`SentenceRecognizer`](/api/sentencerecognizer) is a simple statistical component that only provides sentence boundaries. Along with being faster and smaller than the parser, its primary advantage is that it's easier to train because it only requires annotated sentence boundaries rather than full -dependency parses. - - +dependency parses. spaCy's [trained pipelines](/models) include both a parser +and a trained sentence segmenter, which is +[disabled](/usage/processing-pipelines#disabling) by default. If you only need +sentence boundaries and no parser, you can use the `enable` and `disable` +arguments on [`spacy.load`](/api/top-level#spacy.load) to enable the senter and +disable the parser. > #### senter vs. parser > diff --git a/website/docs/usage/processing-pipelines.md b/website/docs/usage/processing-pipelines.md index 97806dc2a..dbf0881ac 100644 --- a/website/docs/usage/processing-pipelines.md +++ b/website/docs/usage/processing-pipelines.md @@ -253,8 +253,6 @@ different mechanisms you can use: Disabled and excluded component names can be provided to [`spacy.load`](/api/top-level#spacy.load) as a list. - - > #### 💡 Optional pipeline components > > The `disable` mechanism makes it easy to distribute pipeline packages with @@ -262,6 +260,11 @@ Disabled and excluded component names can be provided to > your pipeline may include a statistical _and_ a rule-based component for > sentence segmentation, and you can choose which one to run depending on your > use case. 
+> +> For example, spaCy's [trained pipelines](/models) like +> [`en_core_web_sm`](/models/en#en_core_web_sm) contain both a `parser` and +> `senter` that perform sentence segmentation, but the `senter` is disabled by +> default. ```python # Load the pipeline without the entity recognizer diff --git a/website/docs/usage/projects.md b/website/docs/usage/projects.md index 8e093e8d6..6d5746308 100644 --- a/website/docs/usage/projects.md +++ b/website/docs/usage/projects.md @@ -733,7 +733,10 @@ workflows, but only one can be tracked by DVC. The Prodigy integration will require a nightly version of Prodigy that supports -spaCy v3+. +spaCy v3+. You can already use annotations created with Prodigy in spaCy v3 by +exporting your data with +[`data-to-spacy`](https://prodi.gy/docs/recipes#data-to-spacy) and running +[`spacy convert`](/api/cli#convert) to convert it to the binary format.
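As a companion to the note above about optional pipeline components, here is a small sketch of the disable mechanism itself. It assumes only that spaCy v3 is installed; the components used are illustrative, not the ones shipped in `en_core_web_sm`.

```python
import spacy

nlp = spacy.blank("en")
nlp.add_pipe("sentencizer")
nlp.add_pipe("entity_ruler")

# Disable a component: it stays in the pipeline package but is skipped
# when the pipeline runs, and can be re-enabled with nlp.enable_pipe.
nlp.disable_pipe("entity_ruler")

print(nlp.pipe_names)       # only the components that will actually run
print(nlp.component_names)  # all components, including disabled ones
print(nlp.disabled)         # names of the currently disabled components
```

This is the same mechanism trained pipelines use to ship a `senter` alongside the `parser` with only one of them active by default.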