spaCy/website/docs/api/phrasematcher.md
Paul O'Leary McCann 698b8b495f
Update/remove old Matcher syntax (#11370)
2022-08-30 15:40:31 +02:00


---
title: PhraseMatcher
teaser: Match sequences of tokens, based on documents
tag: class
source: spacy/matcher/phrasematcher.pyx
new: 2
---

The `PhraseMatcher` lets you efficiently match large terminology lists. While the `Matcher` lets you match sequences based on lists of token descriptions, the `PhraseMatcher` accepts match patterns in the form of `Doc` objects. See the usage guide for examples.

## PhraseMatcher.\_\_init\_\_

Create the rule-based `PhraseMatcher`. Setting a different `attr` to match on will change the token attributes that will be compared to determine a match. By default, the incoming `Doc` is checked for sequences of tokens with the same `ORTH` value, i.e. the verbatim token text. Matching on the attribute `LOWER` will result in case-insensitive matching, since only the lowercase token texts are compared. In theory, it's also possible to match on sequences of the same part-of-speech tags or dependency labels.

If `validate=True` is set, additional validation is performed when patterns are added. At the moment, it will check whether a `Doc` has attributes assigned that aren't necessary to produce the matches (for example, part-of-speech tags if the `PhraseMatcher` matches on the token text). Since this can often lead to significantly worse performance when creating the patterns, a `UserWarning` will be shown.

> #### Example
>
> ```python
> from spacy.matcher import PhraseMatcher
> matcher = PhraseMatcher(nlp.vocab)
> ```

| Name                                    | Description                                                                                           |
| --------------------------------------- | ----------------------------------------------------------------------------------------------------- |
| `vocab`                                 | The vocabulary object, which must be shared with the documents the matcher will operate on. ~~Vocab~~ |
| `attr` <Tag variant="new">2.1</Tag>     | The token attribute to match on. Defaults to `ORTH`, i.e. the verbatim token text. ~~Union[int, str]~~ |
| `validate` <Tag variant="new">2.1</Tag> | Validate patterns added to the matcher. ~~bool~~                                                      |
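To illustrate the `attr` option described above, here is a small sketch of case-insensitive matching with `attr="LOWER"`. The `"PRODUCT"` key and the sentences are made up for illustration; `spacy.blank("en")` is used as an assumption so the snippet runs without a trained pipeline, which works because `LOWER` is derived directly from the token text:

```python
import spacy
from spacy.matcher import PhraseMatcher

# A blank pipeline suffices: LOWER is computed from the token text,
# so no trained components (tagger, parser) are required.
nlp = spacy.blank("en")
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
matcher.add("PRODUCT", [nlp("iphone x")])

doc = nlp("I just bought an iPhone X yesterday")
matches = matcher(doc)
for match_id, start, end in matches:
    # Matches despite the different casing of "iPhone X".
    print(nlp.vocab.strings[match_id], "->", doc[start:end].text)
```

With the default `attr="ORTH"`, the lowercase pattern `"iphone x"` would not match the differently cased text.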

## PhraseMatcher.\_\_call\_\_

Find all token sequences matching the supplied patterns on the `Doc` or `Span`.

> #### Example
>
> ```python
> from spacy.matcher import PhraseMatcher
>
> matcher = PhraseMatcher(nlp.vocab)
> matcher.add("OBAMA", [nlp("Barack Obama")])
> doc = nlp("Barack Obama lifts America one last time in emotional farewell")
> matches = matcher(doc)
> ```

| Name                                  | Description                                                                                                                                                                                                                                                                                   |
| ------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `doclike`                             | The `Doc` or `Span` to match over. ~~Union[Doc, Span]~~                                                                                                                                                                                                                                         |
| _keyword-only_                        |                                                                                                                                                                                                                                                                                                 |
| `as_spans` <Tag variant="new">3</Tag> | Instead of tuples, return a list of `Span` objects of the matches, with the `match_id` assigned as the span label. Defaults to `False`. ~~bool~~                                                                                                                                                |
| **RETURNS**                           | A list of `(match_id, start, end)` tuples, describing the matches. A match tuple describes a span `doc[start:end]`. The `match_id` is the ID of the added match pattern. If `as_spans` is set to `True`, a list of `Span` objects is returned instead. ~~Union[List[Tuple[int, int, int]], List[Span]]~~ |

Because spaCy stores all strings as integers, the `match_id` you get back will be an integer, too, but you can always get the string representation by looking it up in the vocabulary's `StringStore`, i.e. `nlp.vocab.strings`:

```python
match_id_string = nlp.vocab.strings[match_id]
```
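To show the tuple output and the `as_spans` option side by side, here is a small sketch; `spacy.blank("en")` is assumed so it runs without a trained pipeline:

```python
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.blank("en")
matcher = PhraseMatcher(nlp.vocab)
matcher.add("OBAMA", [nlp("Barack Obama")])
doc = nlp("Barack Obama lifts America one last time in emotional farewell")

# Default: a list of (match_id, start, end) tuples.
matches = matcher(doc)

# With as_spans=True: Span objects, with the match ID as the span label.
spans = matcher(doc, as_spans=True)
for span in spans:
    print(span.text, span.label_)
```

The span form is convenient when you want to work with the matched text directly, e.g. to add it to `doc.ents`.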

## PhraseMatcher.\_\_len\_\_

Get the number of rules added to the matcher. Note that this only returns the number of rules (identical to the number of IDs), not the number of individual patterns.

> #### Example
>
> ```python
> matcher = PhraseMatcher(nlp.vocab)
> assert len(matcher) == 0
> matcher.add("OBAMA", [nlp("Barack Obama")])
> assert len(matcher) == 1
> ```

| Name        | Description              |
| ----------- | ------------------------ |
| **RETURNS** | The number of rules. ~~int~~ |
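To make the distinction between rules and patterns concrete, here is a runnable sketch (assuming `spacy.blank("en")` so no trained pipeline is needed): two patterns added under one key still count as a single rule.

```python
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.blank("en")
matcher = PhraseMatcher(nlp.vocab)
# Two patterns, but they share the "HEALTH" key, so len() reports 1.
matcher.add("HEALTH", [nlp("health care reform"), nlp("healthcare reform")])
print(len(matcher))
```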

## PhraseMatcher.\_\_contains\_\_

Check whether the matcher contains rules for a match ID.

> #### Example
>
> ```python
> matcher = PhraseMatcher(nlp.vocab)
> assert "OBAMA" not in matcher
> matcher.add("OBAMA", [nlp("Barack Obama")])
> assert "OBAMA" in matcher
> ```

| Name        | Description                                               |
| ----------- | --------------------------------------------------------- |
| `key`       | The match ID. ~~str~~                                     |
| **RETURNS** | Whether the matcher contains rules for this match ID. ~~bool~~ |

## PhraseMatcher.add

Add a rule to the matcher, consisting of an ID key, one or more patterns, and an optional callback function to act on the matches. The callback function will receive the arguments `matcher`, `doc`, `i` and `matches`. If a pattern already exists for the given ID, the patterns will be extended. An `on_match` callback will be overwritten.

> #### Example
>
> ```python
> def on_match(matcher, doc, id, matches):
>     print('Matched!', matches)
>
> matcher = PhraseMatcher(nlp.vocab)
> matcher.add("OBAMA", [nlp("Barack Obama")], on_match=on_match)
> matcher.add("HEALTH", [nlp("health care reform"), nlp("healthcare reform")], on_match=on_match)
> doc = nlp("Barack Obama urges Congress to find courage to defend his healthcare reforms")
> matches = matcher(doc)
> ```

| Name           | Description                                                                                                                                    |
| -------------- | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| `key`          | An ID for the thing you're matching. ~~str~~                                                                                                    |
| `docs`         | `Doc` objects of the phrases to match. ~~List[Doc]~~                                                                                            |
| _keyword-only_ |                                                                                                                                                 |
| `on_match`     | Callback function to act on matches. Takes the arguments `matcher`, `doc`, `i` and `matches`. ~~Optional[Callable[[Matcher, Doc, int, List[tuple]], Any]]~~ |
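The note above about patterns being extended for an existing ID can be sketched like this (the `"GPE"` key and sentence are illustrative, and `spacy.blank("en")` is assumed so the snippet runs without a trained pipeline):

```python
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.blank("en")
matcher = PhraseMatcher(nlp.vocab)
matcher.add("GPE", [nlp("New York")])
# Adding under the same key extends the existing patterns
# rather than replacing them.
matcher.add("GPE", [nlp("San Francisco")])

doc = nlp("Flights from New York to San Francisco")
matches = matcher(doc)
# One rule ("GPE"), but both of its patterns match.
print(len(matcher), len(matches))
```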

## PhraseMatcher.remove

Remove a rule from the matcher by match ID. A `KeyError` is raised if the key does not exist.

> #### Example
>
> ```python
> matcher = PhraseMatcher(nlp.vocab)
> matcher.add("OBAMA", [nlp("Barack Obama")])
> assert "OBAMA" in matcher
> matcher.remove("OBAMA")
> assert "OBAMA" not in matcher
> ```

| Name  | Description                     |
| ----- | ------------------------------- |
| `key` | The ID of the match rule. ~~str~~ |
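A sketch of the `KeyError` behavior on a missing key, assuming `spacy.blank("en")`:

```python
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.blank("en")
matcher = PhraseMatcher(nlp.vocab)
matcher.add("OBAMA", [nlp("Barack Obama")])
matcher.remove("OBAMA")

# A second removal fails: the key no longer exists.
try:
    matcher.remove("OBAMA")
    removed_twice = True
except KeyError:
    removed_twice = False
print(removed_twice)
```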