Mirror of https://github.com/explosion/spaCy.git, synced 2024-11-11 04:08:09 +03:00
Commit 2bcceb80c4
Refactor the Scorer to improve flexibility

Refactor the `Scorer` to improve flexibility for arbitrary pipeline components.

* Individual pipeline components provide their own `evaluate` methods that score a list of `Example`s and return a dictionary of scores
* `Scorer` is initialized either:
  * with a provided pipeline containing the components to be scored, or
  * with a default pipeline containing the built-in statistical components (senter, tagger, morphologizer, parser, ner)
* `Scorer.score` evaluates a list of `Example`s and returns a dictionary of scores referring to the scores provided by the components in the pipeline

Significant differences:

* `tags_acc` is renamed to `tag_acc` to be consistent with `token_acc` and the new `morph_acc`, `pos_acc`, and `lemma_acc`
* Scoring is no longer cumulative: `Scorer.score` scores a list of examples rather than a single example and does not retain any state about previously scored examples
* PRF values in the returned scores are no longer multiplied by 100

* Add kwargs to `Morphologizer.evaluate`
* Create generalized scoring methods in `Scorer`:
  * generalized static scoring methods are added to `Scorer`
  * the methods require an attribute (either on `Token` or `Doc`) that is used to key the returned scores (see the second sketch below)

Naming differences:

* `uas`, `las`, and `las_per_type` in the scores dict are renamed to `dep_uas`, `dep_las`, and `dep_las_per_type`

Scoring differences:

* `Doc.sents` is now scored as spans rather than on sentence-initial token positions, so that `Doc.sents` and `Doc.ents` can be scored with the same method (this lowers scores, since a single incorrect sentence start results in two incorrect spans)

* Simplify / extend the `hasattr` check for the eval method
* Add a `hasattr` check to tokenizer scoring
* Simplify to a `hasattr` check for component scoring
* Reset the `Example` alignment if either doc is set, in case the tokenization has changed
* Add PRF tokenization scoring for tokens as character spans (see the first sketch below). The scores are:
  * `token_acc`: # correct tokens / # gold tokens
  * `token_p`/`token_r`/`token_f`: PRF for `(token.idx, token.idx + len(token))`
* Add a docstring to `Scorer.score_tokenization`
* Rename `component.evaluate()` to `component.score()`
* Update the `Scorer` API docs
* Update scoring for `positive_label` in textcat
* Fix `TextCategorizer.score` kwargs
* Update the `Language.evaluate` docs
* Update the score names in the default config
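The `token_acc` and `token_p`/`token_r`/`token_f` scores described above reduce to set operations over character spans. Below is a minimal, self-contained sketch of that idea; the function name `score_token_spans` and its plain `(idx, text)` tuple inputs are illustrative stand-ins (the real methods operate on spaCy `Example` objects), so treat this as an assumption-laden sketch rather than the library's API.

```python
# A minimal sketch of span-based PRF tokenization scoring, assuming tokens
# are given as (idx, text) tuples. score_token_spans is a hypothetical
# stand-in for spaCy's Example-based scoring methods.
from typing import Dict, List, Set, Tuple

def char_spans(tokens: List[Tuple[int, str]]) -> Set[Tuple[int, int]]:
    """Map each token to its character span (token.idx, token.idx + len(token))."""
    return {(idx, idx + len(text)) for idx, text in tokens}

def score_token_spans(pred: List[Tuple[int, str]],
                      gold: List[Tuple[int, str]]) -> Dict[str, float]:
    pred_spans, gold_spans = char_spans(pred), char_spans(gold)
    tp = len(pred_spans & gold_spans)  # spans that match exactly
    fp = len(pred_spans - gold_spans)  # predicted spans with no gold match
    fn = len(gold_spans - pred_spans)  # gold spans that were missed
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    # Fractions in [0, 1]: per the refactor, scores are no longer * 100.
    return {
        "token_acc": tp / len(gold_spans) if gold_spans else 0.0,  # correct / gold
        "token_p": p,
        "token_r": r,
        "token_f": f,
    }

# "Don't stop": gold splits it as Do|n't|stop, the prediction as Don|'t|stop.
gold = [(0, "Do"), (2, "n't"), (6, "stop")]
pred = [(0, "Don"), (3, "'t"), (6, "stop")]
print(score_token_spans(pred, gold))  # only "stop" matches: all four scores are 1/3
```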
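The generalized static scoring methods keyed by attribute can be pictured the same way: one span-based scorer handles any span attribute and prefixes its score names with that attribute. In the sketch below, the function `score_attr_spans` and its pre-extracted `(start, end)` span sets are hypothetical simplifications of the real `Scorer` methods; the example also shows why scoring `Doc.sents` as spans lowers scores, since one misplaced sentence start invalidates both adjacent spans.

```python
# Hedged sketch of an attribute-keyed span scorer, assuming spans are
# pre-extracted as (start, end) sets per doc. score_attr_spans is a
# hypothetical simplification of the Scorer's generalized static methods.
from typing import Dict, List, Set, Tuple

Span = Tuple[int, int]

def score_attr_spans(pred_docs: List[Set[Span]],
                     gold_docs: List[Set[Span]],
                     attr: str) -> Dict[str, float]:
    # Scores a whole list of examples in one call; no state is retained
    # between calls (scoring is no longer cumulative).
    tp = fp = fn = 0
    for pred, gold in zip(pred_docs, gold_docs):
        tp += len(pred & gold)
        fp += len(pred - gold)
        fn += len(gold - pred)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    # The attribute keys the returned scores: sents_p, sents_r, sents_f, ...
    return {f"{attr}_p": p, f"{attr}_r": r, f"{attr}_f": f}

# Two gold sentences; the second sentence start is predicted late, so
# *both* predicted spans fail to match, even though only one boundary moved.
gold = [{(0, 10), (11, 25)}]
pred = [{(0, 15), (16, 25)}]
print(score_attr_spans(pred, gold, "sents"))
# {'sents_p': 0.0, 'sents_r': 0.0, 'sents_f': 0.0}
```

Because the same method can score `Doc.ents`, calling it with `attr="ents"` would return `ents_p`/`ents_r`/`ents_f` instead, which is what lets `Doc.sents` and `Doc.ents` share one implementation.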