import { Help } from 'components/typography'; import Link from 'components/link'

<figure>

| Pipeline | Parser | Tagger | NER |
| ---------------------------------------------------------- | -----: | -----: | ---: |
| [`en_core_web_trf`](/models/en#en_core_web_trf) (spaCy v3) | 95.2 | 97.8 | 89.9 |
| [`en_core_web_lg`](/models/en#en_core_web_lg) (spaCy v3) | 91.9 | 97.4 | 85.5 |
| `en_core_web_lg` (spaCy v2) | 91.9 | 97.2 | 85.5 |

<figcaption class="caption">

<!-- TODO: add speed benchmarks -->

**Full pipeline accuracy** on the
[OntoNotes 5.0](https://catalog.ldc.upenn.edu/LDC2013T19) corpus (reported on
the development set).

</figcaption>

</figure>
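
The accuracy columns above can be reproduced with spaCy's built-in evaluation.
The snippet below is a minimal sketch, assuming the OntoNotes development set
has already been converted to spaCy's binary `.spacy` format; the path
`./onto-dev.spacy` is a placeholder, not a file shipped with spaCy:

```python
import spacy
from spacy.training import Corpus

# Load one of the pipelines listed in the table above (spaCy v3).
nlp = spacy.load("en_core_web_trf")

# Placeholder path: the OntoNotes development set, converted to spaCy's
# binary training format beforehand (e.g. with `spacy convert`).
corpus = Corpus("./onto-dev.spacy")
examples = list(corpus(nlp))

# nlp.evaluate returns a dict of scores, including tag accuracy,
# attachment scores for the parser and precision/recall/F for NER.
scores = nlp.evaluate(examples)
print(scores["tag_acc"], scores["dep_uas"], scores["dep_las"], scores["ents_f"])
```
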
<figure>

| Named Entity Recognition System | OntoNotes | CoNLL '03 |
| -------------------------------- | --------: | --------: |
| spaCy RoBERTa (2020) | 89.7 | 91.6 |
| Stanza (StanfordNLP)<sup>1</sup> | 88.8 | 92.1 |
| Flair<sup>2</sup> | 89.7 | 93.1 |

<figcaption class="caption">

**Named entity recognition accuracy** on the
[OntoNotes 5.0](https://catalog.ldc.upenn.edu/LDC2013T19) and
[CoNLL-2003](https://www.aclweb.org/anthology/W03-0419.pdf) corpora. See
[NLP-progress](http://nlpprogress.com/english/named_entity_recognition.html) for
more results. Project template:
[`benchmarks/ner_conll03`](%%GITHUB_PROJECTS/benchmarks/ner_conll03). **1.**
[Qi et al. (2020)](https://arxiv.org/pdf/2003.07082.pdf). **2.**
[Akbik et al. (2018)](https://www.aclweb.org/anthology/C18-1139/).

</figcaption>

</figure>
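
The spaCy RoBERTa results come from pipelines trained and evaluated with the
project template linked above, which handles data conversion, training and
scoring. As a minimal sketch of inspecting NER scores for such a trained
pipeline (the paths below are placeholders, not part of the template itself):

```python
import spacy
from spacy.training import Corpus

# Placeholder paths: a pipeline trained by the project template and the
# converted development split it is evaluated on.
nlp = spacy.load("./training/model-best")
examples = list(Corpus("./corpus/dev.spacy")(nlp))

scores = nlp.evaluate(examples)
# Overall NER precision, recall and F-score, plus a per-label breakdown.
print(scores["ents_p"], scores["ents_r"], scores["ents_f"])
for label, prf in scores["ents_per_type"].items():
    print(label, prf["p"], prf["r"], prf["f"])
```
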