* Add doc.cats to spacy.gold at the paragraph level
Support `doc.cats` as `"cats": [{"label": string, "value": number}]` in
the spacy JSON training format at the paragraph level (a sketch of the
format appears below this changelog).
* `spacy.gold.docs_to_json()` writes `doc.cats`
* `GoldCorpus` reads in cats in each `GoldParse`
* Update instances of gold_tuples to handle cats
Update iteration over gold_tuples / gold_parses to handle addition of
cats at the paragraph level.
* Add textcat to train CLI
* Add textcat options to train CLI
* Add textcat labels in `TextCategorizer.begin_training()`
* Add textcat evaluation to `Scorer`:
  * For binary exclusive classes with provided label: F1 for label
  * For 2+ exclusive classes: F1 macro average
  * For multilabel (not exclusive): ROC AUC macro average (currently
    relying on sklearn)
* Provide user info on textcat evaluation settings and potential
  incompatibilities
* Provide pipeline to Scorer in `Language.evaluate` for textcat config
* Customize train CLI output to include only metrics relevant to current
pipeline
* Add textcat evaluation to evaluate CLI
* Fix handling of unset arguments and config params
Fix handling of unset arguments and model config parameters in Scorer
initialization.
* Temporarily add sklearn requirement
* Remove sklearn version number
* Improve Scorer handling of models without textcats
* Fix Scorer handling of models without textcats
* Update Scorer output for Python 2.7
* Modify `inf` in Scorer for Python 2.7
* Auto-format
Also make small adjustments so that auto-formatting with `black` is easier and produces nicer results
* Move error message to Errors
* Update documentation
* Add cats to annotation JSON format [ci skip]
* Fix tpl flag and docs [ci skip]
* Switch to internal roc_auc_score
Switch to internal `roc_auc_score()` adapted from scikit-learn.
* Add ROCAUCScore tests and improve errors/warnings
* Add tests for ROCAUCScore and `roc_auc_score`
* Add missing error for data with only positive or only negative values
* Remove unnecessary warnings and errors
* Make reduced roc_auc_score functions private
Because most of the checks and warnings have been stripped for the
internal functions and access is only intended through `ROCAUCScore`,
make the functions for roc_auc_score adapted from scikit-learn private.
* Check that data corresponds with multilabel flag
Check that the training instances correspond with the multilabel flag,
adding the multilabel flag if required.
* Add textcat score to early stopping check
* Add more checks to debug-data for textcat
* Add example training data for textcat
* Add more checks to textcat train CLI
* Check configuration when extending base model
* Fix typos
* Update textcat example data
* Provide licensing details and licenses for data
* Remove two labels with no positive instances from jigsaw-toxic-comment data
Co-authored-by: Ines Montani <ines@ines.io>
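As a rough illustration of the paragraph-level `cats` entry described in the first item above, here is a minimal sketch of one paragraph in the JSON training format, written as the Python dict that `spacy.gold.docs_to_json()` would serialize. The `POSITIVE`/`NEGATIVE` labels and the surrounding keys are illustrative assumptions, not an exhaustive schema:

```python
# Sketch of a single paragraph entry in spaCy's JSON training format.
# Only the paragraph-level "cats" field is the subject of this change;
# the labels and other values shown here are illustrative assumptions.
paragraph = {
    "raw": "This movie was a delight from start to finish.",
    "sentences": [],  # token-level annotations, unchanged by this change
    "cats": [
        {"label": "POSITIVE", "value": 1.0},
        {"label": "NEGATIVE", "value": 0.0},
    ],
}
```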
| title | teaser | tag | source |
|---|---|---|---|
| Scorer | Compute evaluation scores | class | spacy/scorer.py |
The `Scorer` computes and stores evaluation scores. It's typically created by
`Language.evaluate`.
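A minimal sketch of obtaining a `Scorer` this way, assuming `nlp` is a loaded pipeline and `dev_data` is a list of `(text, annotations)` pairs in the usual spaCy v2 training format (both names are placeholders):

```python
# Evaluate a pipeline on development data; Language.evaluate returns
# a Scorer with the accumulated scores. `nlp` and `dev_data` are
# placeholders for a loaded pipeline and gold-annotated examples.
scorer = nlp.evaluate(dev_data)
print(scorer.scores)
```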
## Scorer.__init__

Create a new `Scorer`.

Example:

```python
from spacy.scorer import Scorer

scorer = Scorer()
```
| Name         | Type     | Description                                                  |
| ------------ | -------- | ------------------------------------------------------------ |
| `eval_punct` | bool     | Evaluate the dependency attachments to and from punctuation. |
| **RETURNS**  | `Scorer` | The newly created object.                                    |
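For instance, to also count dependency attachments to and from punctuation when scoring (a one-line sketch):

```python
scorer = Scorer(eval_punct=True)
```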
## Scorer.score

Update the evaluation scores from a single `Doc` / `GoldParse` pair.

Example:

```python
scorer = Scorer()
scorer.score(doc, gold)
```
| Name           | Type        | Description                                                                                                           |
| -------------- | ----------- | --------------------------------------------------------------------------------------------------------------------- |
| `doc`          | `Doc`       | The predicted annotations.                                                                                              |
| `gold`         | `GoldParse` | The correct annotations.                                                                                                |
| `verbose`      | bool        | Print debugging information.                                                                                            |
| `punct_labels` | tuple       | Dependency labels for punctuation. Used to evaluate dependency attachments to punctuation if `eval_punct` is `True`.    |
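A slightly fuller sketch, assuming the small English model `en_core_web_sm` is installed; the sentence and its gold BILUO entity tags are made-up reference annotations:

```python
import spacy
from spacy.gold import GoldParse
from spacy.scorer import Scorer

nlp = spacy.load("en_core_web_sm")  # assumes this model is installed
doc = nlp("Apple is looking at buying U.K. startup")
# One gold BILUO entity tag per token (made-up reference annotations)
gold = GoldParse(doc, entities=["U-ORG", "O", "O", "O", "O", "U-GPE", "O"])

scorer = Scorer()
scorer.score(doc, gold)
print(scorer.ents_p, scorer.ents_r, scorer.ents_f)
```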
## Properties

| Name                                   | Type  | Description                                                                                                                                                                     |
| -------------------------------------- | ----- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `token_acc`                            | float | Tokenization accuracy.                                                                                                                                                             |
| `tags_acc`                             | float | Part-of-speech tag accuracy (fine-grained tags, i.e. `Token.tag`).                                                                                                                 |
| `uas`                                  | float | Unlabelled dependency score.                                                                                                                                                       |
| `las`                                  | float | Labelled dependency score.                                                                                                                                                         |
| `ents_p`                               | float | Named entity accuracy (precision).                                                                                                                                                 |
| `ents_r`                               | float | Named entity accuracy (recall).                                                                                                                                                    |
| `ents_f`                               | float | Named entity accuracy (F-score).                                                                                                                                                   |
| `ents_per_type` (introduced in v2.1.5) | dict  | Scores per entity label. Keyed by label, mapped to a dict of `p`, `r` and `f` scores.                                                                                              |
| `textcat_score` (introduced in v2.2)   | float | F-score on the positive label for binary exclusive classes, macro-averaged F-score for 3+ exclusive classes, macro-averaged AUC ROC score for multilabel (`-1` if undefined).     |
| `textcats_per_cat` (introduced in v2.2)| dict  | Scores per textcat label, keyed by label.                                                                                                                                          |
| `scores`                               | dict  | All scores, keyed by type.                                                                                                                                                         |
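As a hedged sketch of reading these properties after scoring textcat output, assuming each predicted `Doc` has `doc.cats` set and each `GoldParse` was created with a `cats` annotation (the iterables are placeholders):

```python
# Accumulate scores over predicted/gold pairs, then read the textcat
# properties. `predicted_docs` and `gold_parses` are placeholders.
scorer = Scorer()
for doc, gold in zip(predicted_docs, gold_parses):
    scorer.score(doc, gold)

print(scorer.textcat_score)     # summary score, per the table above
print(scorer.textcats_per_cat)  # per-label breakdown
print(scorer.scores)            # all metrics, keyed by name
```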