spaCy/spacy/cli
Adriane Boyd 2bcceb80c4
Refactor the Scorer to improve flexibility (#5731)
* Refactor the Scorer to improve flexibility

Refactor the `Scorer` to improve flexibility for arbitrary pipeline
components.

* Individual pipeline components provide their own `evaluate` methods
that score a list of `Example`s and return a dictionary of scores
* `Scorer` is initialized either:
  * with a provided pipeline containing components to be scored
  * with a default pipeline containing the built-in statistical
    components (senter, tagger, morphologizer, parser, ner)
* `Scorer.score` evaluates a list of `Example`s and returns a dictionary
combining the scores provided by the components in the pipeline
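
A minimal sketch of the new flow described above. Import paths follow the
released spaCy v3 layout; at this commit `Example` may still live in
`spacy.gold`, so treat the imports as an assumption:

```python
import spacy
from spacy.scorer import Scorer
from spacy.training import Example  # assumption: v3-style path

nlp = spacy.blank("en")
# A toy gold-standard example: predicted doc vs. reference annotations.
pred_doc = nlp("Apple is looking at buying a startup")
example = Example.from_dict(
    pred_doc,
    {"words": ["Apple", "is", "looking", "at", "buying", "a", "startup"]},
)

# Score a whole list of Examples in one call; no cumulative state is kept.
scorer = Scorer(nlp)            # or Scorer() for the default pipeline
scores = scorer.score([example])
print(scores.get("token_acc"))
```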

Significant differences:

* `tags_acc` is renamed to `tag_acc` to be consistent with `token_acc`
and the new `morph_acc`, `pos_acc`, and `lemma_acc`
* Scoring is no longer cumulative: `Scorer.score` scores a list of
examples rather than a single example and does not retain any state
about previously scored examples
* PRF values in the returned scores are no longer multiplied by 100
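
To illustrate the differences in the list above, a hypothetical evaluation
helper (the key names come from this PR; the function itself is not spaCy
API):

```python
def evaluate_dev(scorer, dev_examples):
    # New: one call over the whole list, no state retained between calls.
    scores = scorer.score(dev_examples)
    # Renamed keys ("tag_acc", "dep_uas", "dep_las"); PRF and accuracy
    # values are now in the 0-1 range rather than multiplied by 100.
    return scores["tag_acc"], scores["dep_uas"], scores["dep_las"]
```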

* Add kwargs to Morphologizer.evaluate

* Create generalized scoring methods in Scorer

* Generalized static scoring methods are added to `Scorer`
  * Methods require an attribute (either on Token or Doc) that is
used to key the returned scores
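
A sketch of how the generalized static scorers can be combined. The method
names (`score_token_attr`, `score_spans`) are those of released spaCy v3 and
may differ slightly at this commit:

```python
from spacy.scorer import Scorer

def score_tags_and_ents(examples):
    scores = {}
    # Token attribute "tag" keys the result as "tag_acc".
    scores.update(Scorer.score_token_attr(examples, "tag"))
    # Doc attribute "ents" keys the results as "ents_p/r/f" and "ents_per_type".
    scores.update(Scorer.score_spans(examples, "ents"))
    return scores
```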

Naming differences:

* `uas`, `las`, and `las_per_type` in the scores dict are renamed to
`dep_uas`, `dep_las`, and `dep_las_per_type`

Scoring differences:

* `Doc.sents` is now scored as spans rather than on sentence-initial
token positions so that `Doc.sents` and `Doc.ents` can be scored with
the same method (this lowers scores since a single incorrect sentence
start results in two incorrect spans)
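
A toy example of the effect: a 6-token doc whose gold sentence boundary is
after token 2, with the predicted boundary after token 3. Spans are
(start, end) token offsets:

```python
gold_sents = {(0, 3), (3, 6)}
pred_sents = {(0, 4), (4, 6)}
tp = len(gold_sents & pred_sents)    # 0: both predicted spans are wrong
precision = tp / len(pred_sents)     # 0.0
recall = tp / len(gold_sents)        # 0.0
# Scored on sentence-initial positions instead (gold {0, 3} vs. pred {0, 4}),
# one of the two starts would still match, giving 0.5 rather than 0.0.
```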

* Simplify / extend hasattr check for eval method

* Add hasattr check to tokenizer scoring
* Simplify to hasattr check for component scoring
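
The dispatch pattern this enables, as a hypothetical helper rather than the
exact `Language.evaluate` implementation (it uses the `score()` name from the
later rename commit below): any pipeline component exposing a `score()` method
contributes to the combined scores dict.

```python
def collect_scores(nlp, examples):
    scores = {}
    if hasattr(nlp.tokenizer, "score"):
        scores.update(nlp.tokenizer.score(examples))
    for name, component in nlp.pipeline:
        if hasattr(component, "score"):
            scores.update(component.score(examples))
    return scores
```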

* Reset Example alignment if docs are set

Reset the Example alignment if either doc is set in case the
tokenization has changed.
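
Illustrative only (the real `Example` is a Cython class): setting either doc
invalidates the cached alignment so it is recomputed against the new
tokenization on next access.

```python
class ExampleSketch:
    def __init__(self, predicted, reference):
        self._predicted = predicted
        self._reference = reference
        self._alignment = None

    @property
    def predicted(self):
        return self._predicted

    @predicted.setter
    def predicted(self, doc):
        self._predicted = doc
        self._alignment = None  # tokenization may have changed; realign lazily
```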

* Add PRF tokenization scoring for tokens as spans

Add PRF scores for tokens as character spans. The scores are:

* token_acc: # correct tokens / # gold tokens
* token_p/r/f: PRF for (token.idx, token.idx + len(token))
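
A plain-Python sketch of the boundary PRF computation, mirroring the
`(token.idx, token.idx + len(token))` keying described above (not the exact
`Scorer.score_tokenization` code):

```python
def token_prf(gold_doc, pred_doc):
    gold = {(t.idx, t.idx + len(t)) for t in gold_doc}
    pred = {(t.idx, t.idx + len(t)) for t in pred_doc}
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return {"token_p": p, "token_r": r, "token_f": f}
```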

* Add docstring to Scorer.score_tokenization

* Rename component.evaluate() to component.score()

* Update Scorer API docs

* Update scoring for positive_label in textcat

* Fix TextCategorizer.score kwargs

* Update Language.evaluate docs

* Update score names in default config
2020-07-25 12:53:02 +02:00
project Update CLI commands to use one shared util file 2020-07-10 17:57:40 +02:00
__init__.py Merge branch 'develop' into feature/refactor-config-args 2020-07-10 20:51:52 +02:00
_util.py Refactor pipeline components, config and language data (#5759) 2020-07-22 13:42:59 +02:00
convert.py Update command docstrings and docs 2020-07-12 13:53:49 +02:00
debug_data.py Refactor pipeline components, config and language data (#5759) 2020-07-22 13:42:59 +02:00
debug_model.py Use consistent --gpu-id option name 2020-07-22 16:53:41 +02:00
download.py Update CLI commands to use one shared util file 2020-07-10 17:57:40 +02:00
evaluate.py Refactor the Scorer to improve flexibility (#5731) 2020-07-25 12:53:02 +02:00
info.py Refactor pipeline components, config and language data (#5759) 2020-07-22 13:42:59 +02:00
init_model.py Update command docstrings and docs 2020-07-12 13:53:49 +02:00
package.py Refactor pipeline components, config and language data (#5759) 2020-07-22 13:42:59 +02:00
pretrain.py Use consistent --gpu-id option name 2020-07-22 16:53:41 +02:00
profile.py Refactor pipeline components, config and language data (#5759) 2020-07-22 13:42:59 +02:00
train.py Refactor the Scorer to improve flexibility (#5731) 2020-07-25 12:53:02 +02:00
validate.py Update CLI commands to use one shared util file 2020-07-10 17:57:40 +02:00