---
title: Scorer
teaser: Compute evaluation scores
tag: class
source: spacy/scorer.py
---

The `Scorer` computes evaluation scores. It's typically created by
`Language.evaluate`. In addition, the `Scorer` provides a number of evaluation
methods for evaluating `Token` and `Doc` attributes.
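
As a quick sketch of the typical workflow: the scores usually come from
`Language.evaluate`, which constructs a `Scorer` internally. The pipeline name
and the `dev_data` list of `(text, annotations)` pairs below are hypothetical
and only serve as an illustration.

```python
import spacy
from spacy.training import Example

nlp = spacy.load("en_core_web_sm")  # assumes this model is installed

# Hypothetical list of (text, annotation dict) pairs
dev_data = [
    ("Apple is looking at buying a U.K. startup.",
     {"entities": [(0, 5, "ORG"), (29, 33, "GPE")]}),
]

examples = [
    Example.from_dict(nlp.make_doc(text), annots) for text, annots in dev_data
]
# Language.evaluate creates a Scorer internally and returns its scores
scores = nlp.evaluate(examples)
print(scores["ents_f"])
```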

## Scorer.__init__

Create a new `Scorer`.

> #### Example
>
> ```python
> import spacy
> from spacy.scorer import Scorer
>
> # Default scoring pipeline
> scorer = Scorer()
>
> # Provided scoring pipeline
> nlp = spacy.load("en_core_web_sm")
> scorer = Scorer(nlp)
> ```

| Name | Description |
| ---- | ----------- |
| `nlp` | The pipeline to use for scoring, where each pipeline component may provide a scoring method. If none is provided, then a default pipeline is constructed using the `default_lang` and `default_pipeline` settings. ~~Optional[Language]~~ |
| `default_lang` | The language to use for a default pipeline if `nlp` is not provided. Defaults to `xx`. ~~str~~ |
| `default_pipeline` | The pipeline components to use for a default pipeline if `nlp` is not provided. Defaults to `("senter", "tagger", "morphologizer", "parser", "ner", "textcat")`. ~~Iterable[str]~~ |
| _keyword-only_ | |
| `**kwargs` | Any additional settings to pass on to the individual scoring methods. ~~Any~~ |

## Scorer.score

Calculate the scores for a list of `Example` objects using the scoring methods
provided by the components in the pipeline.

The returned `Dict` contains the scores provided by the individual pipeline
components. For the scoring methods provided by the `Scorer` and used by the
core pipeline components, the individual score names start with the `Token` or
`Doc` attribute being scored:

- `token_acc`, `token_p`, `token_r`, `token_f`
- `sents_p`, `sents_r`, `sents_f`
- `tag_acc`
- `pos_acc`
- `morph_acc`, `morph_micro_p`, `morph_micro_r`, `morph_micro_f`, `morph_per_feat`
- `lemma_acc`
- `dep_uas`, `dep_las`, `dep_las_per_type`
- `ents_p`, `ents_r`, `ents_f`, `ents_per_type`
- `spans_sc_p`, `spans_sc_r`, `spans_sc_f`
- `cats_score` (depends on config, description provided in `cats_score_desc`),
  `cats_micro_p`, `cats_micro_r`, `cats_micro_f`, `cats_macro_p`,
  `cats_macro_r`, `cats_macro_f`, `cats_macro_auc`, `cats_f_per_type`,
  `cats_auc_per_type`

> #### Example
>
> ```python
> scorer = Scorer()
> scores = scorer.score(examples)
> ```

| Name | Description |
| ---- | ----------- |
| `examples` | The `Example` objects holding both the predictions and the correct gold-standard annotations. ~~Iterable[Example]~~ |
| **RETURNS** | A dictionary of scores. ~~Dict[str, Union[float, Dict[str, float]]]~~ |
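
The per-type scores are nested dictionaries keyed by label. A minimal sketch,
assuming an existing `nlp` pipeline with an entity recognizer, a prepared
`examples` list, and that each per-type entry holds `p`/`r`/`f` keys:

```python
from spacy.scorer import Scorer

scorer = Scorer(nlp)  # assumes an existing `nlp` pipeline
scores = scorer.score(examples)

# Top-level scores are floats, per-type scores are nested dicts keyed by label
print(scores["ents_f"])
for label, prf in scores["ents_per_type"].items():
    print(label, prf["p"], prf["r"], prf["f"])
```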

## Scorer.score_tokenization

Scores the tokenization:

- `token_acc`: number of correct tokens / number of gold tokens
- `token_p`, `token_r`, `token_f`: precision, recall and F-score for token
  character spans

Docs with `has_unknown_spaces` are skipped during scoring.

> #### Example
>
> ```python
> scores = Scorer.score_tokenization(examples)
> ```

| Name | Description |
| ---- | ----------- |
| `examples` | The `Example` objects holding both the predictions and the correct gold-standard annotations. ~~Iterable[Example]~~ |
| **RETURNS** | A dictionary containing the scores `token_acc`, `token_p`, `token_r` and `token_f`. ~~Dict[str, float]~~ |
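
For a self-contained sketch, an `Example` can be built directly from a
predicted `Doc` and a gold tokenization. The sentence and the gold word/space
lists below are made up purely for illustration:

```python
import spacy
from spacy.training import Example
from spacy.scorer import Scorer

nlp = spacy.blank("en")
pred_doc = nlp("I like N.Y.")

# Hypothetical gold standard that tokenizes "N.Y." differently
example = Example.from_dict(
    pred_doc,
    {
        "words": ["I", "like", "N", ".", "Y", "."],
        "spaces": [True, True, False, False, False, False],
    },
)

scores = Scorer.score_tokenization([example])
print(scores["token_acc"], scores["token_f"])
```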

## Scorer.score_token_attr

Scores a single token attribute. Tokens with missing values in the reference
doc are skipped during scoring.

> #### Example
>
> ```python
> scores = Scorer.score_token_attr(examples, "pos")
> print(scores["pos_acc"])
> ```

| Name | Description |
| ---- | ----------- |
| `examples` | The `Example` objects holding both the predictions and the correct gold-standard annotations. ~~Iterable[Example]~~ |
| `attr` | The attribute to score. ~~str~~ |
| _keyword-only_ | |
| `getter` | Defaults to `getattr`. If provided, `getter(token, attr)` should return the value of the attribute for an individual `Token`. ~~Callable[[Token, str], Any]~~ |
| `missing_values` | Attribute values to treat as missing annotation in the reference annotation. Defaults to `{0, None, ""}`. ~~Set[Any]~~ |
| **RETURNS** | A dictionary containing the score `{attr}_acc`. ~~Dict[str, float]~~ |
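
The `getter` makes it possible to score values that aren't built-in token
attributes, for example a custom extension attribute. A minimal sketch,
assuming a hypothetical extension named `my_tag` that has been annotated on
both the predicted and reference tokens:

```python
from spacy.tokens import Token
from spacy.scorer import Scorer

# Hypothetical custom extension holding a string tag per token
Token.set_extension("my_tag", default="", force=True)

def ext_getter(token, attr):
    # Look up the extension attribute instead of a built-in one
    return token._.get(attr)

scores = Scorer.score_token_attr(examples, "my_tag", getter=ext_getter)
print(scores["my_tag_acc"])
```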

## Scorer.score_token_attr_per_feat

Scores a single token attribute in the Universal Dependencies FEATS format,
with the scores broken down per individual feature. Tokens with missing values
in the reference doc are skipped during scoring.

> #### Example
>
> ```python
> scores = Scorer.score_token_attr_per_feat(examples, "morph")
> print(scores["morph_per_feat"])
> ```

| Name | Description |
| ---- | ----------- |
| `examples` | The `Example` objects holding both the predictions and the correct gold-standard annotations. ~~Iterable[Example]~~ |
| `attr` | The attribute to score. ~~str~~ |
| _keyword-only_ | |
| `getter` | Defaults to `getattr`. If provided, `getter(token, attr)` should return the value of the attribute for an individual `Token`. ~~Callable[[Token, str], Any]~~ |
| `missing_values` | Attribute values to treat as missing annotation in the reference annotation. Defaults to `{0, None, ""}`. ~~Set[Any]~~ |
| **RETURNS** | A dictionary containing the micro PRF scores under the key `{attr}_micro_p/r/f` and the per-feature PRF scores under `{attr}_per_feat`. ~~Dict[str, Dict[str, float]]~~ |
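
A sketch of reading the per-feature breakdown, assuming a prepared `examples`
list with morphological annotation and that each per-feature entry is a dict
with `p`/`r`/`f` keys:

```python
from spacy.scorer import Scorer

scores = Scorer.score_token_attr_per_feat(examples, "morph")
# Micro-averaged scores over all features
print(scores["morph_micro_f"])
# Per-feature breakdown, e.g. for UD features such as Number or Tense
for feat, prf in scores["morph_per_feat"].items():
    print(feat, prf["f"])
```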

## Scorer.score_spans

Returns PRF scores for labeled or unlabeled spans.

> #### Example
>
> ```python
> scores = Scorer.score_spans(examples, "ents")
> print(scores["ents_f"])
> ```

| Name | Description |
| ---- | ----------- |
| `examples` | The `Example` objects holding both the predictions and the correct gold-standard annotations. ~~Iterable[Example]~~ |
| `attr` | The attribute to score. ~~str~~ |
| _keyword-only_ | |
| `getter` | Defaults to `getattr`. If provided, `getter(doc, attr)` should return the `Span` objects for an individual `Doc`. ~~Callable[[Doc, str], Iterable[Span]]~~ |
| `has_annotation` | Defaults to `None`. If provided, `has_annotation(doc)` should return whether a `Doc` has annotation for this `attr`. Docs without annotation are skipped for scoring purposes. ~~Optional[Callable[[Doc], bool]]~~ |
| `labeled` | Defaults to `True`. If set to `False`, two spans will be considered equal if their start and end match, irrespective of their label. ~~bool~~ |
| `allow_overlap` | Defaults to `False`. Whether or not to allow overlapping spans. If set to `False`, the alignment will automatically resolve conflicts. ~~bool~~ |
| **RETURNS** | A dictionary containing the PRF scores under the keys `{attr}_p`, `{attr}_r`, `{attr}_f` and the per-type PRF scores under `{attr}_per_type`. ~~Dict[str, Union[float, Dict[str, float]]]~~ |
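
Spans stored in `doc.spans` can be scored by passing a custom `getter` and
`has_annotation` callback. A minimal sketch, assuming both the predicted and
reference docs carry a hypothetical span group named `"sc"` and that
overlapping spans should be allowed:

```python
from spacy.scorer import Scorer

key = "sc"  # hypothetical span group key
scores = Scorer.score_spans(
    examples,
    key,
    getter=lambda doc, attr: doc.spans[attr] if attr in doc.spans else [],
    has_annotation=lambda doc: key in doc.spans,
    allow_overlap=True,
)
print(scores[f"{key}_f"], scores[f"{key}_per_type"])
```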

## Scorer.score_deps

Calculate the UAS, LAS, and LAS per type scores for dependency parses. Tokens
with missing values for the `attr` (typically `dep`) are skipped during
scoring.

> #### Example
>
> ```python
> def dep_getter(token, attr):
>     dep = getattr(token, attr)
>     dep = token.vocab.strings.as_string(dep).lower()
>     return dep
>
> scores = Scorer.score_deps(
>     examples,
>     "dep",
>     getter=dep_getter,
>     ignore_labels=("p", "punct")
> )
> print(scores["dep_uas"], scores["dep_las"])
> ```

| Name | Description |
| ---- | ----------- |
| `examples` | The `Example` objects holding both the predictions and the correct gold-standard annotations. ~~Iterable[Example]~~ |
| `attr` | The attribute to score. ~~str~~ |
| _keyword-only_ | |
| `getter` | Defaults to `getattr`. If provided, `getter(token, attr)` should return the value of the attribute for an individual `Token`. ~~Callable[[Token, str], Any]~~ |
| `head_attr` | The attribute containing the head token. ~~str~~ |
| `head_getter` | Defaults to `getattr`. If provided, `head_getter(token, attr)` should return the head for an individual `Token`. ~~Callable[[Token, str], Token]~~ |
| `ignore_labels` | Labels to ignore while scoring (e.g. `"punct"`). ~~Iterable[str]~~ |
| `missing_values` | Attribute values to treat as missing annotation in the reference annotation. Defaults to `{0, None, ""}`. ~~Set[Any]~~ |
| **RETURNS** | A dictionary containing the scores: `{attr}_uas`, `{attr}_las`, and `{attr}_las_per_type`. ~~Dict[str, Union[float, Dict[str, float]]]~~ |

## Scorer.score_cats

Calculate PRF and ROC AUC scores for a doc-level attribute that is a dict
containing scores for each label, like `Doc.cats`. The returned dictionary
contains the following scores:

- `{attr}_micro_p`, `{attr}_micro_r` and `{attr}_micro_f`: each instance across
  each label is weighted equally
- `{attr}_macro_p`, `{attr}_macro_r` and `{attr}_macro_f`: the average values
  across evaluations per label
- `{attr}_f_per_type` and `{attr}_auc_per_type`: each contains a dictionary of
  scores, keyed by label
- A final `{attr}_score` and corresponding `{attr}_score_desc` (text
  description)

The reported `{attr}_score` depends on the classification properties:

- binary exclusive with positive label: `{attr}_score` is set to the F-score of
  the positive label
- 3+ exclusive classes, macro-averaged F-score: `{attr}_score = {attr}_macro_f`
- multilabel, macro-averaged AUC: `{attr}_score = {attr}_macro_auc`

> #### Example
>
> ```python
> labels = ["LABEL_A", "LABEL_B", "LABEL_C"]
> scores = Scorer.score_cats(
>     examples,
>     "cats",
>     labels=labels
> )
> print(scores["cats_macro_auc"])
> ```

| Name | Description |
| ---- | ----------- |
| `examples` | The `Example` objects holding both the predictions and the correct gold-standard annotations. ~~Iterable[Example]~~ |
| `attr` | The attribute to score. ~~str~~ |
| _keyword-only_ | |
| `getter` | Defaults to `getattr`. If provided, `getter(doc, attr)` should return the cats for an individual `Doc`. ~~Callable[[Doc, str], Dict[str, float]]~~ |
| `labels` | The set of possible labels. Defaults to `[]`. ~~Iterable[str]~~ |
| `multi_label` | Whether the attribute allows multiple labels. Defaults to `True`. ~~bool~~ |
| `positive_label` | The positive label for a binary task with exclusive classes. Defaults to `None`. ~~Optional[str]~~ |
| **RETURNS** | A dictionary containing the scores, with inapplicable scores as `None`. ~~Dict[str, Optional[float]]~~ |
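
For the binary, mutually exclusive case described above, a sketch with
hypothetical `POSITIVE`/`NEGATIVE` labels, where `cats_score` then reports the
F-score of the positive label:

```python
from spacy.scorer import Scorer

scores = Scorer.score_cats(
    examples,
    "cats",
    labels=["POSITIVE", "NEGATIVE"],
    multi_label=False,
    positive_label="POSITIVE",
)
# cats_score_desc explains which metric cats_score refers to
print(scores["cats_score"], scores["cats_score_desc"])
```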

## Scorer.score_links

Returns PRF scores for predicted links on the entity level. To disentangle the
performance of the NEL from the NER, this method only evaluates NEL links for
entities that overlap between the gold reference and the predictions.

> #### Example
>
> ```python
> scores = Scorer.score_links(
>     examples,
>     negative_labels=["NIL", ""]
> )
> print(scores["nel_micro_f"])
> ```

| Name | Description |
| ---- | ----------- |
| `examples` | The `Example` objects holding both the predictions and the correct gold-standard annotations. ~~Iterable[Example]~~ |
| _keyword-only_ | |
| `negative_labels` | The string values that refer to no annotation (e.g. `"NIL"`). ~~Iterable[str]~~ |
| **RETURNS** | A dictionary containing the scores. ~~Dict[str, Optional[float]]~~ |

## get_ner_prf

Compute micro-PRF and per-entity PRF scores for a sequence of examples.

| Name | Description |
| ---- | ----------- |
| `examples` | The `Example` objects holding both the predictions and the correct gold-standard annotations. ~~Iterable[Example]~~ |
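
A minimal sketch, assuming the function is imported from `spacy.scorer` and
that the returned keys follow the `ents_*` naming listed under `Scorer.score`:

```python
from spacy.scorer import get_ner_prf

scores = get_ner_prf(examples)
print(scores["ents_f"])
print(scores["ents_per_type"])
```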

## score_coref_clusters

Returns LEA (Moosavi and Strube, 2016) PRF scores for coreference clusters.

Note that this scoring function is not yet included in spaCy core; for details,
see the `CoreferenceResolver` docs.

> #### Example
>
> ```python
> scores = score_coref_clusters(
>     examples,
>     span_cluster_prefix="coref_clusters",
> )
> print(scores["coref_f"])
> ```

| Name | Description |
| ---- | ----------- |
| `examples` | The `Example` objects holding both the predictions and the correct gold-standard annotations. ~~Iterable[Example]~~ |
| _keyword-only_ | |
| `span_cluster_prefix` | The prefix used for spans representing coreference clusters. ~~str~~ |
| **RETURNS** | A dictionary containing the scores. ~~Dict[str, Optional[float]]~~ |

## score_span_predictions

Return accuracy for reconstructions of spans from single tokens. Only exactly
correct predictions are counted as correct; there is no partial credit for
near misses. Used by the `SpanResolver`.

Note that this scoring function is not yet included in spaCy core; for details,
see the `SpanResolver` docs.

> #### Example
>
> ```python
> scores = score_span_predictions(
>     examples,
>     output_prefix="coref_clusters",
> )
> print(scores["span_coref_clusters_accuracy"])
> ```

| Name | Description |
| ---- | ----------- |
| `examples` | The `Example` objects holding both the predictions and the correct gold-standard annotations. ~~Iterable[Example]~~ |
| _keyword-only_ | |
| `output_prefix` | The prefix used for spans representing the final predicted spans. ~~str~~ |
| **RETURNS** | A dictionary containing the scores. ~~Dict[str, Optional[float]]~~ |