
---
title: EntityRuler
new: 2.1
teaser: 'Pipeline component for rule-based named entity recognition'
api_string_name: entity_ruler
api_trainable: false
---

As of spaCy v4, there is no separate `EntityRuler` class. The entity ruler is implemented as a special case of the `SpanRuler` component.

See the migration guide below for differences between the v3 `EntityRuler` and v4 `SpanRuler` implementations of the `entity_ruler` component.

See the `SpanRuler` API docs for the full API.

The entity ruler lets you add spans to the `Doc.ents` using token-based rules or exact phrase matches. It can be combined with the statistical `EntityRecognizer` to boost accuracy, or used on its own to implement a purely rule-based entity recognition system. For usage examples, see the docs on rule-based entity recognition.
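
For instance, a minimal pipeline that relies only on the entity ruler might look like the following sketch (the patterns and example text are purely illustrative):

```python
import spacy

nlp = spacy.blank("en")
ruler = nlp.add_pipe("entity_ruler")

# One phrase pattern (exact text match) and one token-based pattern
patterns = [
    {"label": "ORG", "pattern": "Apple"},
    {"label": "GPE", "pattern": [{"LOWER": "san"}, {"LOWER": "francisco"}]},
]
ruler.add_patterns(patterns)

doc = nlp("Apple is opening its first big office in San Francisco.")
print([(ent.text, ent.label_) for ent in doc.ents])
# [('Apple', 'ORG'), ('San Francisco', 'GPE')]
```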

## Assigned Attributes

This component assigns predictions in basically the same way as the `EntityRecognizer`.

Predictions can be accessed under `Doc.ents` as a tuple. Each label will also be reflected in each underlying token, where it is saved in the `Token.ent_type` and `Token.ent_iob` fields. Note that by definition each token can only have one label.

When setting `Doc.ents` to create training data, all the spans must be valid and non-overlapping, or an error will be thrown.

| Location          | Value                                                            |
| ----------------- | ---------------------------------------------------------------- |
| `Doc.ents`        | The annotated spans. `Tuple[Span]`                               |
| `Token.ent_iob`   | An enum encoding of the IOB part of the named entity tag. `int`  |
| `Token.ent_iob_`  | The IOB part of the named entity tag. `str`                      |
| `Token.ent_type`  | The label part of the named entity tag (hash). `int`             |
| `Token.ent_type_` | The label part of the named entity tag. `str`                    |
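
Concretely, both views of the annotation are available once the ruler has run, for example (reusing the small pipeline sketched above):

```python
doc = nlp("Apple is opening its first big office in San Francisco.")

# Span-level view of the annotated entities
print([(ent.text, ent.label_) for ent in doc.ents])

# Token-level view: the IOB part and the label part of the entity tag per token
print([(token.text, token.ent_iob_, token.ent_type_) for token in doc])
```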

## Config and implementation

The default config is defined by the pipeline component factory and describes how the component should be configured. You can override its settings via the `config` argument on `nlp.add_pipe` or in your `config.cfg` for training.

#### Example

```python
config = {
    "phrase_matcher_attr": None,
    "validate": True,
    "overwrite_ents": False,
    "ent_id_sep": "||",
}
nlp.add_pipe("entity_ruler", config=config)
```
| Setting               | Description                                                                                                                                  |
| --------------------- | -------------------------------------------------------------------------------------------------------------------------------------------- |
| `phrase_matcher_attr` | Optional attribute name to match on for the internal `PhraseMatcher`, e.g. `LOWER` to match on the lowercase token text. Defaults to `None`. `Optional[Union[int, str]]` |
| `validate`            | Whether patterns should be validated (passed to the `Matcher` and `PhraseMatcher`). Defaults to `False`. `bool`                               |
| `overwrite_ents`      | If existing entities are present, e.g. entities added by the model, overwrite them with matches if necessary. Defaults to `False`. `bool`     |
| `ent_id_sep`          | Separator used internally for entity IDs. Defaults to `"\|\|"`. `str`                                                                         |
| `scorer`              | The scoring method. Defaults to `spacy.scorer.get_ner_prf`. `Optional[Callable]`                                                              |

## Migrating from v3

### Loading patterns

Unlike the v3 `EntityRuler`, the `SpanRuler` cannot load patterns on initialization with `SpanRuler(patterns=patterns)` or directly from a JSONL file path with `SpanRuler.from_disk(jsonl_path)`. Patterns should be loaded from the JSONL file separately and then added through `SpanRuler.initialize` or `SpanRuler.add_patterns`.

```diff
 ruler = nlp.get_pipe("entity_ruler")
- ruler.from_disk("patterns.jsonl")
+ import srsly
+ patterns = srsly.read_jsonl("patterns.jsonl")
+ ruler.add_patterns(patterns)
```
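
Alternatively, if you want the patterns added as part of component initialization rather than with `add_patterns`, something along these lines should work (a sketch based on `SpanRuler.initialize`; the file path is only an example):

```python
import srsly

ruler = nlp.add_pipe("entity_ruler")
patterns = list(srsly.read_jsonl("patterns.jsonl"))
# initialize() clears any existing patterns before adding the new ones
ruler.initialize(lambda: [], nlp=nlp, patterns=patterns)
```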

### Saving patterns

`SpanRuler.to_disk` always saves the full component data to a directory and does not include an option to save the patterns to a single JSONL file.

```diff
 ruler = nlp.get_pipe("entity_ruler")
- ruler.to_disk("patterns.jsonl")
+ import srsly
+ srsly.write_jsonl("patterns.jsonl", ruler.patterns)
```
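
If you want the full component serialized rather than just the patterns, the directory-based methods are still available (a small sketch; the directory name is arbitrary):

```python
ruler = nlp.get_pipe("entity_ruler")
# Saves the full component data to a directory ...
ruler.to_disk("./entity_ruler")
# ... which can be loaded back into the component later
ruler.from_disk("./entity_ruler")
```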

### Accessing token and phrase patterns

The separate token patterns and phrase patterns are no longer accessible under `ruler.token_patterns` or `ruler.phrase_patterns`. You can access the combined patterns in their original format using the property `SpanRuler.patterns`.
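
For example, the combined patterns can be inspected or re-saved directly from that property (a small sketch):

```python
ruler = nlp.get_pipe("entity_ruler")
# ruler.patterns is a list of dicts in the original {"label": ..., "pattern": ...} format
for pattern in ruler.patterns:
    print(pattern["label"], pattern["pattern"])
```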

### Removing patterns by ID

`SpanRuler.remove` removes by label rather than ID. To remove by ID, use `SpanRuler.remove_by_id`:

```diff
 ruler = nlp.get_pipe("entity_ruler")
- ruler.remove("id")
+ ruler.remove_by_id("id")
```
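
For example, assuming a pattern was added with an `id` (the pattern below is only illustrative):

```python
ruler = nlp.get_pipe("entity_ruler")
ruler.add_patterns([{"label": "ORG", "pattern": "Apple", "id": "apple"}])

# remove() takes a label, remove_by_id() takes the pattern ID
ruler.remove_by_id("apple")
```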