| title | teaser | tag | source |
| --- | --- | --- | --- |
| Scorer | Compute evaluation scores | class | `spacy/scorer.py` |
The `Scorer` computes and stores evaluation scores. It's typically created by `Language.evaluate`.
## `Scorer.__init__`

Create a new `Scorer`.
#### Example

```python
from spacy.scorer import Scorer

scorer = Scorer()
```
| Name | Type | Description |
| --- | --- | --- |
| `eval_punct` | bool | Evaluate the dependency attachments to and from punctuation. |
| **RETURNS** | `Scorer` | The newly created object. |
## `Scorer.score`

Update the evaluation scores from a single `Doc` / `GoldParse` pair.

#### Example

```python
scorer = Scorer()
scorer.score(doc, gold)
```
| Name | Type | Description |
| --- | --- | --- |
| `doc` | `Doc` | The predicted annotations. |
| `gold` | `GoldParse` | The correct annotations. |
| `verbose` | bool | Print debugging information. |
| `punct_labels` | tuple | Dependency labels for punctuation. Used to evaluate dependency attachments to punctuation if `eval_punct` is `True`. |
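Conceptually, `score` compares the predicted annotations against the gold ones. For the dependency metrics, UAS counts tokens whose head is correct, while LAS additionally requires the correct dependency label. A minimal sketch of the two definitions, assuming per-token `(head_index, dep_label)` pairs (the `uas_las` helper is hypothetical, not spaCy's API):

```python
# Hypothetical sketch of the UAS/LAS definitions; not spaCy's implementation.

def uas_las(predicted, gold):
    """Each parse is a list of (head_index, dep_label) pairs, one per token."""
    assert len(predicted) == len(gold)
    # UAS: fraction of tokens with the correct head.
    head_hits = sum(ph == gh for (ph, _), (gh, _) in zip(predicted, gold))
    # LAS: fraction of tokens with the correct head *and* label.
    label_hits = sum(p == g for p, g in zip(predicted, gold))
    return 100.0 * head_hits / len(gold), 100.0 * label_hits / len(gold)

pred = [(1, "nsubj"), (1, "ROOT"), (1, "dobj")]
gold = [(1, "nsubj"), (1, "ROOT"), (1, "obj")]
uas, las = uas_las(pred, gold)  # heads all match; one label differs
```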
## Properties
| Name | Type | Description |
| --- | --- | --- |
| `token_acc` | float | Tokenization accuracy. |
| `tags_acc` | float | Part-of-speech tag accuracy (fine-grained tags, i.e. `Token.tag`). |
| `uas` | float | Unlabelled dependency score. |
| `las` | float | Labelled dependency score. |
| `ents_p` | float | Named entity accuracy (precision). |
| `ents_r` | float | Named entity accuracy (recall). |
| `ents_f` | float | Named entity accuracy (F-score). |
| `ents_per_type` (v2.1.5) | dict | Scores per entity label. Keyed by label, mapped to a dict of `p`, `r` and `f` scores. |
| `textcat_score` (v2.2) | float | F-score on the positive label for binary classification with exclusive classes, macro-averaged F-score for 3+ exclusive classes, macro-averaged AUC ROC score for multilabel classification (`-1` if undefined). |
| `textcats_per_cat` (v2.2) | dict | Scores per textcat label, keyed by label. |
| `las_per_type` (v2.2.3) | dict | Labelled dependency scores, keyed by label. |
| `scores` | dict | All scores, keyed by type. |
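The entity scores `ents_p`, `ents_r` and `ents_f` are standard precision, recall and F-score over predicted vs. gold entity spans, where a span only counts as correct on an exact match of boundaries and label. A minimal sketch of those definitions, assuming entities as `(start, end, label)` tuples (the `prf` helper is hypothetical, not spaCy's API):

```python
# Hypothetical sketch of span-level precision/recall/F-score;
# not spaCy's implementation.

def prf(predicted, gold):
    """Entities are (start, end, label) tuples; spans count only on exact match."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)  # true positives: exactly matching spans
    p = tp / len(predicted) if predicted else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return 100 * p, 100 * r, 100 * f

pred = [(0, 2, "PERSON"), (5, 7, "ORG")]
gold = [(0, 2, "PERSON"), (5, 7, "GPE"), (9, 10, "DATE")]
p, r, f = prf(pred, gold)  # one of two predictions correct, one of three gold found
```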