Update docs [ci skip]

This commit is contained in:
Ines Montani 2020-09-23 22:02:31 +02:00
parent c8bda92243
commit 02008e9a55
3 changed files with 39 additions and 22 deletions

@@ -5,20 +5,15 @@ import { Help } from 'components/typography'; import Link from 'components/link'
<figure>
| System | Parser | Tagger | NER | WPS<br />CPU <Help>words per second on CPU, higher is better</Help> | WPS<br/>GPU <Help>words per second on GPU, higher is better</Help> |
| ------------------------------------------------------------------------- | ----------------: | ----------------: | ---: | ------------------------------------------------------------------: | -----------------------------------------------------------------: |
| [`en_core_web_trf`](/models/en#en_core_web_trf) (spaCy v3) | | | | | 6k |
| [`en_core_web_lg`](/models/en#en_core_web_lg) (spaCy v3) | | | | | |
| `en_core_web_lg` (spaCy v2) | 91.9 | 97.2 | 85.9 | 10k | |
| [Stanza](https://stanfordnlp.github.io/stanza/) (StanfordNLP)<sup>1</sup> | _n/a_<sup>2</sup> | _n/a_<sup>2</sup> | 88.8 | 234 | 2k |
| <Link to="https://github.com/flairNLP/flair" hideIcon>Flair</Link> | - | 97.9 | 89.3 | | |
<figcaption class="caption">
**Accuracy and speed on the
[OntoNotes 5.0](https://catalog.ldc.upenn.edu/LDC2013T19) corpus.**<br />**1. **
[Qi et al. (2020)](https://arxiv.org/pdf/2003.07082.pdf). **2. ** _Coming soon_:
Qi et al. don't report parsing and tagging results on OntoNotes. We're working
on training Stanza on this corpus to allow direct comparison.
</figcaption>
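The WPS columns report raw throughput: how many words the pipeline processes per second of wall-clock time. As a rough, self-contained sketch of how such a number is derived (using `str.split` as a toy stand-in for a real `nlp(text)` call, so no model download is required):

```python
import time

def words_per_second(texts, tokenize):
    """Process texts and return throughput in words per second.

    `tokenize` is a stand-in for a real pipeline call such as `nlp(text)`;
    `str.split` below is a toy tokenizer used purely for illustration.
    """
    start = time.perf_counter()
    n_words = sum(len(tokenize(text)) for text in texts)
    elapsed = time.perf_counter() - start
    return n_words / elapsed

wps = words_per_second(["the cat sat on the mat"] * 1000, str.split)
```

With a real spaCy pipeline you would batch texts through `nlp.pipe` instead of calling the pipeline one text at a time, since batching dominates throughput in practice.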
@@ -26,19 +21,22 @@ on training Stanza on this corpus to allow direct comparison.
<figure>
| System | POS | UAS | LAS |
| ------------------------------------------------------------------------------ | ---: | ---: | ---: |
| spaCy RoBERTa (2020) | 98.0 | 96.8 | 95.0 |
| spaCy CNN (2020) | | | |
| [Mrini et al.](https://khalilmrini.github.io/Label_Attention_Layer.pdf) (2019) | 97.3 | 97.4 | 96.3 |
| [Zhou and Zhao](https://www.aclweb.org/anthology/P19-1230/) (2019) | 97.3 | 97.2 | 95.7 |

<figcaption class="caption">
**Accuracy on the Penn Treebank.** See
[NLP-progress](http://nlpprogress.com/english/dependency_parsing.html) for more
results. For spaCy's evaluation, see the
[project template](https://github.com/explosion/projects/tree/v3/benchmarks/parsing_penn_treebank).
</figcaption>

</figure>

<figure>

| Named Entity Recognition Model                                                 | OntoNotes | CoNLL '03 |
| ------------------------------------------------------------------------------ | --------: | --------: |
| spaCy RoBERTa (2020)                                                           |           |           |
| spaCy CNN (2020)                                                               |           |           |
| spaCy CNN (2017)                                                               |      86.4 |           |
| [Stanza](https://stanfordnlp.github.io/stanza/) (StanfordNLP)<sup>1</sup>      |      88.8 |           |
| <Link to="https://github.com/flairNLP/flair" hideIcon>Flair</Link><sup>2</sup> |      89.7 |           |

<figcaption class="caption">
**Named entity recognition accuracy** on the
[OntoNotes 5.0](https://catalog.ldc.upenn.edu/LDC2013T19) and
[CoNLL-2003](https://www.aclweb.org/anthology/W03-0419.pdf) corpora. See
[NLP-progress](http://nlpprogress.com/english/named_entity_recognition.html) for
more results. **1. ** [Qi et al. (2020)](https://arxiv.org/pdf/2003.07082.pdf).
**2. ** [Akbik et al. (2018)](https://www.aclweb.org/anthology/C18-1139/)
</figcaption>
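The NER scores above are span-level F-measures. As a rough, self-contained sketch (not spaCy's own `Scorer`), micro-averaged precision, recall and F1 over exact-match `(start, end, label)` entity spans can be computed like this:

```python
def entity_f1(gold, pred):
    """Micro P/R/F1 over sets of (start, end, label) entity spans."""
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)  # exact-match true positives
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

# One of two predicted spans is correct, one of two gold spans is found:
p, r, f = entity_f1(
    gold={(0, 5, "ORG"), (10, 16, "GPE")},
    pred={(0, 5, "ORG"), (20, 26, "PERSON")},
)
# p = r = f = 0.5
```

Published benchmarks differ in details such as partial-match credit and label sets, which is one reason scores on the same corpus are not always directly comparable.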

@@ -61,6 +61,25 @@ import Benchmarks from 'usage/\_benchmarks-models.md'
<Benchmarks />
<figure>
| System | UAS | LAS |
| ------------------------------------------------------------------------------ | ---: | ---: |
| spaCy RoBERTa (2020) | 96.8 | 95.0 |
| spaCy CNN (2020) | 93.7 | 91.8 |
| [Mrini et al.](https://khalilmrini.github.io/Label_Attention_Layer.pdf) (2019) | 97.4 | 96.3 |
| [Zhou and Zhao](https://www.aclweb.org/anthology/P19-1230/) (2019) | 97.2 | 95.7 |
<figcaption class="caption">
**Accuracy on the Penn Treebank.** See
[NLP-progress](http://nlpprogress.com/english/dependency_parsing.html) for more
results.
</figcaption>
</figure>
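UAS and LAS count the tokens whose predicted head (UAS), or head and dependency label together (LAS), match the gold-standard parse. A minimal sketch over hypothetical data (real evaluation additionally handles punctuation conventions and sentence alignment):

```python
def uas_las(gold, pred):
    """gold, pred: per-token lists of (head_index, dep_label) pairs."""
    assert len(gold) == len(pred)
    uas = sum(g[0] == p[0] for g, p in zip(gold, pred)) / len(gold)
    las = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    return uas, las

gold = [(1, "nsubj"), (1, "ROOT"), (1, "dobj")]
pred = [(1, "nsubj"), (1, "ROOT"), (1, "prep")]  # right head, wrong label
uas, las = uas_las(gold, pred)
# uas = 1.0, las = 2/3
```

By construction LAS can never exceed UAS, which matches the pattern in the table above.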
<Project id="benchmarks/parsing_penn_treebank">
The easiest way to reproduce spaCy's benchmarks on the Penn Treebank is to clone

@@ -297,7 +297,7 @@ const Landing = ({ data }) => {
to run.
</p>
<p>
<Button to="/usage/facts-figures#benchmarks">More results</Button>
</p>
</LandingCol>