mirror of https://github.com/explosion/spaCy.git (synced 2025-08-04 20:30:24 +03:00)

Make it clearer that benchmark accuracy is a replacement for evaluate

parent 2387802ee7
commit 91b0f0ff8c
@@ -1136,8 +1136,19 @@ $ python -m spacy pretrain [config_path] [output_dir] [--code] [--resume-path] [
 
 ## evaluate {id="evaluate",version="2",tag="command"}
 
-Evaluate a trained pipeline. Expects a loadable spaCy pipeline (package name or
-path) and evaluation data in the
+The `evaluate` subcommand is superseded by
+[`spacy benchmark accuracy`](#benchmark-accuracy). `evaluate` is provided as an
+alias to `benchmark accuracy` for compatibility.
+
+## benchmark {id="benchmark", version="3.5"}
+
+The `spacy benchmark` CLI includes commands for benchmarking the accuracy and
+speed of your spaCy pipelines.
+
+### accuracy {id="benchmark-accuracy", version="3.5", tag="command"}
+
+Evaluate the accuracy of a trained pipeline. Expects a loadable spaCy pipeline
+(package name or path) and evaluation data in the
 [binary `.spacy` format](/api/data-formats#binary-training). The
 `--gold-preproc` option sets up the evaluation examples with gold-standard
 sentences and tokens for the predictions. Gold preprocessing helps the
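The alias relationship this commit documents (`spacy evaluate` dispatching to the same logic as `spacy benchmark accuracy`) is a common CLI backwards-compatibility pattern. As a hedged illustration of that general pattern only — this is not spaCy's actual CLI code, and `run`, `build_parser`, and `benchmark_accuracy` are hypothetical names — an argparse sketch might look like this:

```python
import argparse


def benchmark_accuracy(model: str, data_path: str) -> str:
    # Stand-in for the real evaluation logic; returns a label for demonstration.
    return f"benchmark accuracy on {model} with {data_path}"


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="spacy-sketch")
    sub = parser.add_subparsers(dest="command", required=True)

    # New canonical command: `benchmark accuracy`.
    bench = sub.add_parser("benchmark").add_subparsers(dest="subcommand", required=True)
    acc = bench.add_parser("accuracy")
    acc.add_argument("model")
    acc.add_argument("data_path")
    acc.set_defaults(func=benchmark_accuracy)

    # Backwards-compatible alias: `evaluate` dispatches to the same function.
    ev = sub.add_parser("evaluate")
    ev.add_argument("model")
    ev.add_argument("data_path")
    ev.set_defaults(func=benchmark_accuracy)
    return parser


def run(argv: list[str]) -> str:
    args = build_parser().parse_args(argv)
    return args.func(args.model, args.data_path)
```

With this shape, both spellings reach the same implementation, so existing scripts that call the old subcommand keep working unchanged.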
@@ -1148,7 +1159,7 @@ skew. To render a sample of dependency parses in a HTML file using the
 `--displacy-path` argument.
 
 ```bash
-$ python -m spacy evaluate [model] [data_path] [--output] [--code] [--gold-preproc] [--gpu-id] [--displacy-path] [--displacy-limit]
+$ python -m spacy benchmark accuracy [model] [data_path] [--output] [--code] [--gold-preproc] [--gpu-id] [--displacy-path] [--displacy-limit]
 ```
 
 | Name | Description |
@@ -1164,15 +1175,6 @@ $ python -m spacy evaluate [model] [data_path] [--output] [--code] [--gold-prepr
 | `--help`, `-h` | Show help message and available arguments. ~~bool (flag)~~ |
 | **CREATES** | Training results and optional metrics and visualizations. |
 
-## benchmark {id="benchmark", version="3.5"}
-
-The `spacy benchmark` CLI includes commands for benchmarking the accuracy and
-speed of your spaCy pipelines.
-
-### accuracy {id="benchmark-accuracy", version="3.5", tag="command"}
-
-This subcommand is an alias for [`spacy evaluate`](#evaluate).
-
 ### speed {id="benchmark-speed", version="3.5", tag="command"}
 
 Benchmark the speed of a trained pipeline with a 95% confidence interval.
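The `benchmark speed` docs mention reporting a 95% confidence interval around the measured speed. As a hedged sketch of the general statistical technique only (a normal-approximation interval over repeated timing samples — not spaCy's actual implementation; `mean_with_ci95` is a hypothetical helper), the interval can be computed like this:

```python
import statistics


def mean_with_ci95(samples: list[float]) -> tuple[float, float]:
    """Return (mean, half_width) of a normal-approximation 95% CI.

    `samples` would be e.g. words-per-second measurements from repeated
    benchmark runs of the same pipeline.
    """
    n = len(samples)
    mean = statistics.fmean(samples)
    # Standard error of the mean; 1.96 is the z-value for a 95% interval.
    sem = statistics.stdev(samples) / n ** 0.5
    return mean, 1.96 * sem
```

The reported speed would then read as `mean ± half_width`; more runs shrink the interval as the standard error falls with the square root of the sample count.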