Mirror of https://github.com/explosion/spaCy.git
Commit 319eb508b5
* Add a `spacy evaluate speed` subcommand. This subcommand reports the mean batch performance of a model on a data set with a 95% confidence interval. For reliability, it first performs some warmup rounds, then measures performance on batches of randomly shuffled documents. To avoid having too many top-level spaCy commands, `speed` is a subcommand of `evaluate`, and accuracy evaluation is moved to its own `evaluate accuracy` subcommand.
* Fix import cycle
* Restore `spacy evaluate`, make `spacy benchmark speed` an alias
* Add documentation for `spacy benchmark`
* CREATES -> PRINTS
* WPS -> words/s
* Disable formatting of the benchmark speed arguments
* Fail with an error message when trying to speed-benchmark an empty corpus
* Make it clearer that `benchmark accuracy` is a replacement for `evaluate`
* Fix docstring webpage reference
* tests: check `evaluate` output against `benchmark accuracy`
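The measurement described above (warmup rounds, shuffled batches, mean words/s with a 95% confidence interval) can be approximated with a short standalone script. The sketch below is illustrative only and is not the command's actual implementation; the pipeline name `en_core_web_sm`, the sample texts, the batch size, and the normal-approximation confidence interval are all assumptions.

```python
import random
import statistics
import time

import spacy

# Illustrative sketch only: the real `spacy benchmark speed` command reads a
# .spacy corpus and has its own batching and argument handling. The pipeline
# name, texts, batch size, and CI formula here are assumptions for the example.
nlp = spacy.load("en_core_web_sm")
texts = [
    "Apple is looking at buying a U.K. startup for $1 billion.",
    "Autonomous cars shift insurance liability toward manufacturers.",
    "San Francisco considers banning sidewalk delivery robots.",
] * 100

BATCH_SIZE = 32
WARMUP_ROUNDS = 3
MEASURE_ROUNDS = 10


def run_once(batch):
    """Process the texts once and return the measured words per second."""
    start = time.perf_counter()
    n_words = sum(len(doc) for doc in nlp.pipe(batch, batch_size=BATCH_SIZE))
    return n_words / (time.perf_counter() - start)


# Warmup rounds so caches and lazy initialization don't skew the measurement.
for _ in range(WARMUP_ROUNDS):
    run_once(texts)

# Measure on freshly shuffled documents each round.
samples = []
for _ in range(MEASURE_ROUNDS):
    random.shuffle(texts)
    samples.append(run_once(texts))

mean_wps = statistics.mean(samples)
# Normal-approximation 95% CI over the per-round samples (assumed formula).
half_width = 1.96 * statistics.stdev(samples) / len(samples) ** 0.5
print(f"{mean_wps:.0f} words/s (95% CI ±{half_width:.0f})")
```

From the command line, the feature is exposed as `spacy benchmark speed`, with `spacy benchmark accuracy` taking over from `spacy evaluate`; the exact arguments are covered by the `spacy benchmark` documentation added in this change.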
Directory listing at this commit (including the new `benchmark_speed.py` module):

project/
templates/
__init__.py
_util.py
apply.py
assemble.py
benchmark_speed.py
convert.py
debug_config.py
debug_data.py
debug_diff.py
debug_model.py
download.py
evaluate.py
find_threshold.py
info.py
init_config.py
init_pipeline.py
package.py
pretrain.py
profile.py
train.py
validate.py