---
title: SpanRuler
tag: class
source: spacy/pipeline/span_ruler.py
new: 3.3
teaser: 'Pipeline component for rule-based span and named entity recognition'
api_string_name: span_ruler
api_trainable: false
---

The span ruler lets you add spans to `Doc.spans` and/or `Doc.ents` using token-based rules or exact phrase matches. For usage examples, see the docs on rule-based span matching.

## Assigned Attributes

Matches will be saved to `Doc.spans[spans_key]` as a `SpanGroup` and/or to `Doc.ents`, where the annotation is saved in the `Token.ent_type` and `Token.ent_iob` fields.

| Location               | Value                                                           |
| ---------------------- | --------------------------------------------------------------- |
| `Doc.spans[spans_key]` | The annotated spans. `SpanGroup`                                 |
| `Doc.ents`             | The annotated spans. `Tuple[Span]`                               |
| `Token.ent_iob`        | An enum encoding of the IOB part of the named entity tag. `int`  |
| `Token.ent_iob_`       | The IOB part of the named entity tag. `str`                      |
| `Token.ent_type`       | The label part of the named entity tag (hash). `int`             |
| `Token.ent_type_`      | The label part of the named entity tag. `str`                    |
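
To see where matches end up in practice, here is a minimal sketch. It assumes a blank English pipeline and uses the default `"ruler"` spans key; `annotate_ents` is switched on so that matches are also copied to `Doc.ents`.

```python
import spacy

nlp = spacy.blank("en")
# Save matches to doc.spans["ruler"] (the default key) and also to doc.ents.
ruler = nlp.add_pipe("span_ruler", config={"annotate_ents": True})
ruler.add_patterns([{"label": "ORG", "pattern": "Apple"}])

doc = nlp("A text about Apple.")
print([(span.text, span.label_) for span in doc.spans["ruler"]])  # [('Apple', 'ORG')]
print([(ent.text, ent.label_) for ent in doc.ents])               # [('Apple', 'ORG')]
```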

## Config and implementation

The default config is defined by the pipeline component factory and describes how the component should be configured. You can override its settings via the `config` argument on `nlp.add_pipe` or in your `config.cfg`.

Example:

```python
config = {
   "spans_key": "my_spans",
   "validate": True,
   "overwrite": False,
}
nlp.add_pipe("span_ruler", config=config)
```

| Setting | Description |
| --- | --- |
| `spans_key` | The spans key to save the spans under. If `None`, no spans are saved. Defaults to `"ruler"`. `Optional[str]` |
| `spans_filter` | The optional method to filter spans before they are assigned to `doc.spans`. Defaults to `None`. `Optional[Callable[[Iterable[Span], Iterable[Span]], List[Span]]]` |
| `annotate_ents` | Whether to save spans to `doc.ents`. Defaults to `False`. `bool` |
| `ents_filter` | The method to filter spans before they are assigned to `doc.ents`. Defaults to `util.filter_chain_spans`. `Callable[[Iterable[Span], Iterable[Span]], List[Span]]` |
| `phrase_matcher_attr` | Token attribute to match on, passed to the internal `PhraseMatcher` as `attr`. Defaults to `None`. `Optional[Union[int, str]]` |
| `matcher_fuzzy_compare` (new in v3.5) | The fuzzy comparison method, passed on to the internal `Matcher`. Defaults to `spacy.matcher.levenshtein.levenshtein_compare`. `Callable` |
| `validate` | Whether patterns should be validated, passed to `Matcher` and `PhraseMatcher` as `validate`. Defaults to `False`. `bool` |
| `overwrite` | Whether to remove any existing spans under `Doc.spans[spans_key]` if `spans_key` is set, or to remove any ents under `Doc.ents` if `annotate_ents` is set. Defaults to `True`. `bool` |
| `scorer` | The scoring method. Defaults to `Scorer.score_spans` for `Doc.spans[spans_key]` with overlapping spans allowed. `Optional[Callable]` |

```python
%%GITHUB_SPACY/spacy/pipeline/span_ruler.py
```
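
The `matcher_fuzzy_compare` setting expects a registered callable when it is configured via `config.cfg` or the `config` argument of `nlp.add_pipe`. The sketch below shows how a custom comparison function could be plugged in; the registry name `"my_fuzzy_compare.v1"` and the toy comparison logic are purely illustrative, and the callable's signature is assumed to mirror the built-in default, `spacy.matcher.levenshtein.levenshtein_compare`.

```python
import spacy
from spacy.util import registry

@registry.misc("my_fuzzy_compare.v1")  # hypothetical registry name for this sketch
def make_fuzzy_compare():
    def fuzzy_compare(input_text: str, pattern_text: str, fuzzy: int = -1) -> bool:
        # Toy comparison: case-insensitive equality. The built-in default uses
        # Levenshtein edit distance instead.
        return input_text.lower() == pattern_text.lower()

    return fuzzy_compare

nlp = spacy.blank("en")
ruler = nlp.add_pipe(
    "span_ruler",
    config={"matcher_fuzzy_compare": {"@misc": "my_fuzzy_compare.v1"}},
)
```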

## SpanRuler.\_\_init\_\_

Initialize the span ruler. If patterns are supplied here, they need to be a list of dictionaries with a `"label"` and `"pattern"` key. A pattern can either be a token pattern (list) or a phrase pattern (string). For example: `{"label": "ORG", "pattern": "Apple"}`.

Example:

```python
# Construction via add_pipe
ruler = nlp.add_pipe("span_ruler")

# Construction from class
from spacy.pipeline import SpanRuler
ruler = SpanRuler(nlp, overwrite=True)
```

| Name | Description |
| --- | --- |
| `nlp` | The shared `nlp` object to pass the vocab to the matchers and process phrase patterns. `Language` |
| `name` | Instance name of the current pipeline component. Typically passed in automatically from the factory when the component is added. Used to disable the current span ruler while creating phrase patterns with the `nlp` object. `str` |
| _keyword-only_ | |
| `spans_key` | The spans key to save the spans under. If `None`, no spans are saved. Defaults to `"ruler"`. `Optional[str]` |
| `spans_filter` | The optional method to filter spans before they are assigned to `doc.spans`. Defaults to `None`. `Optional[Callable[[Iterable[Span], Iterable[Span]], List[Span]]]` |
| `annotate_ents` | Whether to save spans to `doc.ents`. Defaults to `False`. `bool` |
| `ents_filter` | The method to filter spans before they are assigned to `doc.ents`. Defaults to `util.filter_chain_spans`. `Callable[[Iterable[Span], Iterable[Span]], List[Span]]` |
| `phrase_matcher_attr` | Token attribute to match on, passed to the internal `PhraseMatcher` as `attr`. Defaults to `None`. `Optional[Union[int, str]]` |
| `matcher_fuzzy_compare` (new in v3.5) | The fuzzy comparison method, passed on to the internal `Matcher`. Defaults to `spacy.matcher.levenshtein.levenshtein_compare`. `Callable` |
| `validate` | Whether patterns should be validated, passed to `Matcher` and `PhraseMatcher` as `validate`. Defaults to `False`. `bool` |
| `overwrite` | Whether to remove any existing spans under `Doc.spans[spans_key]` if `spans_key` is set, or to remove any ents under `Doc.ents` if `annotate_ents` is set. Defaults to `True`. `bool` |
| `scorer` | The scoring method. Defaults to `Scorer.score_spans` for `Doc.spans[spans_key]` with overlapping spans allowed. `Optional[Callable]` |

## SpanRuler.initialize

Initialize the component with data. This method is typically called by `Language.initialize` before training and can be used to load in rules from a pattern file. You can customize the arguments it receives via the `[initialize.components]` block in the config. Any existing patterns are removed on initialization.

Example:

```python
span_ruler = nlp.add_pipe("span_ruler")
span_ruler.initialize(lambda: [], nlp=nlp, patterns=patterns)
```

```ini
### config.cfg
[initialize.components.span_ruler]

[initialize.components.span_ruler.patterns]
@readers = "srsly.read_jsonl.v1"
path = "corpus/span_ruler_patterns.jsonl"
```

| Name | Description |
| --- | --- |
| `get_examples` | Function that returns gold-standard annotations in the form of `Example` objects. Not used by the `SpanRuler`. `Callable[[], Iterable[Example]]` |
| _keyword-only_ | |
| `nlp` | The current `nlp` object. Defaults to `None`. `Optional[Language]` |
| `patterns` | The list of patterns. Defaults to `None`. `Optional[Sequence[Dict[str, Union[str, List[Dict[str, Any]]]]]]` |
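
For reference, the pattern file read by `srsly.read_jsonl.v1` is newline-delimited JSON with one pattern per line. A small sketch for producing such a file with `srsly` (the path matches the example config above; the concrete patterns are illustrative):

```python
import srsly

patterns = [
    {"label": "ORG", "pattern": "Apple"},
    {"label": "GPE", "pattern": [{"lower": "san"}, {"lower": "francisco"}]},
]
# Writes one JSON object per line to corpus/span_ruler_patterns.jsonl.
srsly.write_jsonl("corpus/span_ruler_patterns.jsonl", patterns)
```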

## SpanRuler.\_\_len\_\_

The number of all patterns added to the span ruler.

Example:

```python
ruler = nlp.add_pipe("span_ruler")
assert len(ruler) == 0
ruler.add_patterns([{"label": "ORG", "pattern": "Apple"}])
assert len(ruler) == 1
```

| Name | Description |
| --- | --- |
| **RETURNS** | The number of patterns. `int` |

## SpanRuler.\_\_contains\_\_

Whether a label is present in the patterns.

Example:

```python
ruler = nlp.add_pipe("span_ruler")
ruler.add_patterns([{"label": "ORG", "pattern": "Apple"}])
assert "ORG" in ruler
assert "PERSON" not in ruler
```

| Name | Description |
| --- | --- |
| `label` | The label to check. `str` |
| **RETURNS** | Whether the span ruler contains the label. `bool` |

## SpanRuler.\_\_call\_\_

Find matches in the `Doc` and add them to `doc.spans[spans_key]` and/or `doc.ents`. Typically, this happens automatically after the component has been added to the pipeline using `nlp.add_pipe`. If the span ruler was initialized with `overwrite=True`, existing spans and entities will be removed.

Example:

```python
ruler = nlp.add_pipe("span_ruler")
ruler.add_patterns([{"label": "ORG", "pattern": "Apple"}])

doc = nlp("A text about Apple.")
spans = [(span.text, span.label_) for span in doc.spans["ruler"]]
assert spans == [("Apple", "ORG")]
```

| Name | Description |
| --- | --- |
| `doc` | The `Doc` object to process, e.g. the `Doc` in the pipeline. `Doc` |
| **RETURNS** | The modified `Doc` with added spans/entities. `Doc` |

## SpanRuler.add_patterns

Add patterns to the span ruler. A pattern can either be a token pattern (list of dicts) or a phrase pattern (string). For more details, see the usage guide on rule-based matching.

Example:

```python
patterns = [
    {"label": "ORG", "pattern": "Apple"},
    {"label": "GPE", "pattern": [{"lower": "san"}, {"lower": "francisco"}]}
]
ruler = nlp.add_pipe("span_ruler")
ruler.add_patterns(patterns)
```

| Name | Description |
| --- | --- |
| `patterns` | The patterns to add. `List[Dict[str, Union[str, List[dict]]]]` |

## SpanRuler.remove

Remove patterns by label from the span ruler. A ValueError is raised if the label does not exist in any patterns.

Example:

```python
patterns = [{"label": "ORG", "pattern": "Apple", "id": "apple"}]
ruler = nlp.add_pipe("span_ruler")
ruler.add_patterns(patterns)
ruler.remove("ORG")
```

| Name | Description |
| --- | --- |
| `label` | The label of the pattern rule. `str` |
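
Because unknown labels raise a `ValueError`, a defensive removal can first check label membership via `__contains__`, as in this short sketch:

```python
ruler = nlp.add_pipe("span_ruler")
ruler.add_patterns([{"label": "ORG", "pattern": "Apple"}])
if "GPE" in ruler:       # no "GPE" patterns were added, so nothing is removed
    ruler.remove("GPE")
```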

## SpanRuler.remove_by_id

Remove patterns by ID from the span ruler. A ValueError is raised if the ID does not exist in any patterns.

Example:

```python
patterns = [{"label": "ORG", "pattern": "Apple", "id": "apple"}]
ruler = nlp.add_pipe("span_ruler")
ruler.add_patterns(patterns)
ruler.remove_by_id("apple")
```

| Name | Description |
| --- | --- |
| `pattern_id` | The ID of the pattern rule. `str` |

## SpanRuler.clear

Remove all patterns from the span ruler.

Example:

```python
patterns = [{"label": "ORG", "pattern": "Apple", "id": "apple"}]
ruler = nlp.add_pipe("span_ruler")
ruler.add_patterns(patterns)
ruler.clear()
```

## SpanRuler.to_disk

Save the span ruler patterns to a directory. The patterns will be saved as newline-delimited JSON (JSONL).

Example:

```python
ruler = nlp.add_pipe("span_ruler")
ruler.to_disk("/path/to/span_ruler")
```

| Name | Description |
| --- | --- |
| `path` | A path to a directory, which will be created if it doesn't exist. Paths may be either strings or `Path`-like objects. `Union[str, Path]` |

## SpanRuler.from_disk

Load the span ruler from a path.

Example:

```python
ruler = nlp.add_pipe("span_ruler")
ruler.from_disk("/path/to/span_ruler")
```

| Name | Description |
| --- | --- |
| `path` | A path to a directory. Paths may be either strings or `Path`-like objects. `Union[str, Path]` |
| **RETURNS** | The modified `SpanRuler` object. `SpanRuler` |

## SpanRuler.to_bytes

Serialize the span ruler to a bytestring.

Example:

```python
ruler = nlp.add_pipe("span_ruler")
ruler_bytes = ruler.to_bytes()
```

| Name | Description |
| --- | --- |
| **RETURNS** | The serialized patterns. `bytes` |

## SpanRuler.from_bytes

Load the pipe from a bytestring. Modifies the object in place and returns it.

Example:

```python
ruler_bytes = ruler.to_bytes()
ruler = nlp.add_pipe("span_ruler")
ruler.from_bytes(ruler_bytes)
```

| Name | Description |
| --- | --- |
| `bytes_data` | The bytestring to load. `bytes` |
| **RETURNS** | The modified `SpanRuler` object. `SpanRuler` |

## SpanRuler.labels

All labels present in the match patterns.

| Name | Description |
| --- | --- |
| **RETURNS** | The string labels. `Tuple[str, ...]` |
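
For illustration, a short sketch combining `add_patterns` with the `labels` property:

```python
ruler = nlp.add_pipe("span_ruler")
ruler.add_patterns([{"label": "ORG", "pattern": "Apple"}])
assert "ORG" in ruler.labels
```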

## SpanRuler.ids

All IDs present in the id property of the match patterns.

| Name | Description |
| --- | --- |
| **RETURNS** | The string IDs. `Tuple[str, ...]` |

## SpanRuler.patterns

All patterns that were added to the span ruler.

| Name | Description |
| --- | --- |
| **RETURNS** | The original patterns, one dictionary per pattern. `List[Dict[str, Union[str, dict]]]` |
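
The `patterns` property returns the patterns in the form they were added in, so it can be used to inspect or re-serialize them. A minimal sketch:

```python
ruler = nlp.add_pipe("span_ruler")
ruler.add_patterns([{"label": "ORG", "pattern": "Apple"}])
# One dictionary per pattern, as originally added.
print(ruler.patterns)
```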

## Attributes

| Name | Description |
| --- | --- |
| `key` | The spans key that spans are saved under. `Optional[str]` |
| `matcher` | The underlying matcher used to process token patterns. `Matcher` |
| `phrase_matcher` | The underlying phrase matcher used to process phrase patterns. `PhraseMatcher` |
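
A short sketch of these attributes, assuming the default configuration shown earlier (so the spans key is `"ruler"`):

```python
ruler = nlp.add_pipe("span_ruler")
assert ruler.key == "ruler"        # default spans_key
print(type(ruler.matcher))         # the underlying token Matcher
print(type(ruler.phrase_matcher))  # the underlying PhraseMatcher
```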