---
title: Matcher
teaser: Match sequences of tokens, based on pattern rules
tag: class
source: spacy/matcher/matcher.pyx
---
The `Matcher` lets you find words and phrases using rules describing their token attributes. Rules can refer to token annotations (like the text or part-of-speech tags), as well as lexical attributes like `Token.is_punct`. Applying the matcher to a `Doc` gives you access to the matched tokens in context. For in-depth examples and workflows for combining rules and statistical models, see the usage guide on rule-based matching.
## Pattern format

### Example

```python
[
  {"LOWER": "i"},
  {"LEMMA": {"IN": ["like", "love"]}},
  {"POS": "NOUN", "OP": "+"}
]
```
A pattern added to the `Matcher` consists of a list of dictionaries. Each dictionary describes one token and its attributes. The available token pattern keys correspond to a number of `Token` attributes. The supported attributes for rule-based matching are:
| Attribute | Description |
| --- | --- |
| `ORTH` | The exact verbatim text of a token. |
| `TEXT` 2.1 | The exact verbatim text of a token. |
| `NORM` | The normalized form of the token text. |
| `LOWER` | The lowercase form of the token text. |
| `LENGTH` | The length of the token text. |
| `IS_ALPHA`, `IS_ASCII`, `IS_DIGIT` | Token text consists of alphabetic characters, ASCII characters, digits. |
| `IS_LOWER`, `IS_UPPER`, `IS_TITLE` | Token text is in lowercase, uppercase, titlecase. |
| `IS_PUNCT`, `IS_SPACE`, `IS_STOP` | Token is punctuation, whitespace, stop word. |
| `IS_SENT_START` | Token is start of sentence. |
| `LIKE_NUM`, `LIKE_URL`, `LIKE_EMAIL` | Token text resembles a number, URL, email. |
| `SPACY` | Token has a trailing space. |
| `POS`, `TAG`, `MORPH`, `DEP`, `LEMMA`, `SHAPE` | The token's simple and extended part-of-speech tag, morphological analysis, dependency label, lemma, shape. |
| `ENT_TYPE` | The token's entity label. |
| `ENT_IOB` | The IOB part of the token's entity tag. |
| `ENT_ID` | The token's entity ID (`ent_id`). |
| `ENT_KB_ID` | The token's entity knowledge base ID (`ent_kb_id`). |
| `_` 2.1 | Properties in custom extension attributes. |
| `OP` | Operator or quantifier to determine how often to match a token pattern. |
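Attributes from the table above can be combined freely within a single token dictionary. As a rough sketch using only lexical attributes (the `"PERCENTAGE"` key and the sample sentence are illustrative, not taken from this page):

```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")  # lexical attributes don't require a trained pipeline
matcher = Matcher(nlp.vocab)

# Match a number-like token followed by the word "percent"
pattern = [{"LIKE_NUM": True}, {"LOWER": "percent"}]
matcher.add("PERCENTAGE", [pattern])

doc = nlp("Turnout rose by 7 percent this year.")
for match_id, start, end in matcher(doc):
    print(doc[start:end].text)  # expected: "7 percent"
```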
Operators and quantifiers define how often a token pattern should be matched:
### Example [ {"POS": "ADJ", "OP": "*"}, {"POS": "NOUN", "OP": "+"} {"POS": "PROPN", "OP": "{2}"} ]
| OP | Description |
| --- | --- |
| `!` | Negate the pattern, by requiring it to match exactly 0 times. |
| `?` | Make the pattern optional, by allowing it to match 0 or 1 times. |
| `+` | Require the pattern to match 1 or more times. |
| `*` | Allow the pattern to match 0 or more times. |
| `{n}` | Require the pattern to match exactly _n_ times. |
| `{n,m}` | Require the pattern to match at least _n_ but not more than _m_ times. |
| `{n,}` | Require the pattern to match at least _n_ times. |
| `{,m}` | Require the pattern to match at most _m_ times. |
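To illustrate how a quantifier behaves in practice, here is a small sketch (the `"TWO_PROPN"` key and the sentence are illustrative assumptions; a pipeline with a POS tagger and a spaCy version that supports the curly-brace quantifiers are assumed):

```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.load("en_core_web_sm")  # assumed to be installed
matcher = Matcher(nlp.vocab)

# "{2}" requires exactly two consecutive proper nouns
matcher.add("TWO_PROPN", [[{"POS": "PROPN", "OP": "{2}"}]])

doc = nlp("She moved from New York to San Francisco.")
for match_id, start, end in matcher(doc):
    print(doc[start:end].text)  # e.g. "New York", "San Francisco"
```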
Token patterns can also map to a dictionary of properties instead of a single value, to indicate whether the expected value is a member of a list or how it compares to another value.
### Example [ {"LEMMA": {"IN": ["like", "love", "enjoy"]}}, {"POS": "PROPN", "LENGTH": {">=": 10}}, ]
| Attribute | Description |
| --- | --- |
| `IN` | Attribute value is member of a list. |
| `NOT_IN` | Attribute value is not member of a list. |
| `IS_SUBSET` | Attribute value (for `MORPH` or custom list attributes) is a subset of a list. |
| `IS_SUPERSET` | Attribute value (for `MORPH` or custom list attributes) is a superset of a list. |
| `INTERSECTS` | Attribute value (for `MORPH` or custom list attributes) has a non-empty intersection with a list. |
| `==`, `>=`, `<=`, `>`, `<` | Attribute value is equal, greater or equal, smaller or equal, greater or smaller. |
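As a rough sketch of how these property dictionaries are used in a running matcher (the key name and sample text are illustrative assumptions):

```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")  # only lexical attributes are used here
matcher = Matcher(nlp.vocab)

# An intensifier from a fixed list, followed by a token of at least 7 characters
pattern = [
    {"LOWER": {"IN": ["really", "very"]}},
    {"LENGTH": {">=": 7}},
]
matcher.add("INTENSIFIED_LONG_WORD", [pattern])

doc = nlp("The movie was really fantastic and very long.")
for match_id, start, end in matcher(doc):
    print(doc[start:end].text)  # expected: "really fantastic"
```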
## Matcher.\_\_init\_\_

Create the rule-based `Matcher`. If `validate=True` is set, all patterns added to the matcher will be validated against a JSON schema and a `MatchPatternError` is raised if problems are found. Those can include incorrect types (e.g. a string where an integer is expected) or unexpected property names.
### Example

```python
from spacy.matcher import Matcher

matcher = Matcher(nlp.vocab)
```
| Name | Description |
| --- | --- |
| `vocab` | The vocabulary object, which must be shared with the documents the matcher will operate on. |
| `validate` 2.1 | Validate all patterns added to this matcher. |
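As a sketch of what validation catches, assuming the `MatchPatternError` import path and that a string value for `LENGTH` fails the schema (both are assumptions, not verbatim from this page):

```python
import spacy
from spacy.matcher import Matcher
from spacy.errors import MatchPatternError  # assumed import path for the error class

nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab, validate=True)

try:
    # "LENGTH" expects an integer or a comparison dict, so a plain string should fail
    matcher.add("BAD_PATTERN", [[{"LENGTH": "ten"}]])
except MatchPatternError as err:
    print("Invalid pattern:", err)
```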
## Matcher.\_\_call\_\_

Find all token sequences matching the supplied patterns on the `Doc` or `Span`.

Note that if a single label has multiple patterns associated with it, the returned matches don't provide a way to tell which pattern was responsible for the match.
### Example

```python
from spacy.matcher import Matcher

matcher = Matcher(nlp.vocab)
pattern = [{"LOWER": "hello"}, {"LOWER": "world"}]
matcher.add("HelloWorld", [pattern])
doc = nlp("hello world!")
matches = matcher(doc)
```
| Name | Description |
| --- | --- |
| `doclike` | The `Doc` or `Span` to match over. |
| _keyword-only_ | |
| `as_spans` 3 | Instead of tuples, return a list of `Span` objects of the matches, with the `match_id` assigned as the span label. Defaults to `False`. |
| `allow_missing` 3 | Whether to skip checks for missing annotation for attributes included in patterns. Defaults to `False`. |
| `with_alignments` 3.0.6 | Return match alignment information as part of the match tuple as `List[int]` with the same length as the matched span. Each entry denotes the corresponding index of the token in the pattern. If `as_spans` is set to `True`, this setting is ignored. Defaults to `False`. |
| **RETURNS** | A list of `(match_id, start, end)` tuples, describing the matches. A match tuple describes a span `doc[start:end]`. The `match_id` is the ID of the added match pattern. If `as_spans` is set to `True`, a list of `Span` objects is returned instead. |
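Since the match tuples only carry the integer `match_id`, a common follow-up is to resolve it back to the string key via the shared vocab, or to request `Span` objects directly. A minimal sketch, continuing the example above (the variable names are assumed from that example):

```python
# Resolve each integer match_id back to the string key it was added under
for match_id, start, end in matches:
    print(nlp.vocab.strings[match_id], doc[start:end].text)

# Alternatively, ask for labeled Span objects directly
for span in matcher(doc, as_spans=True):
    print(span.label_, span.text)
```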
## Matcher.\_\_len\_\_

Get the number of rules added to the matcher. Note that this only returns the number of rules (identical to the number of IDs), not the number of individual patterns.
### Example

```python
matcher = Matcher(nlp.vocab)
assert len(matcher) == 0
matcher.add("Rule", [[{"ORTH": "test"}]])
assert len(matcher) == 1
```
| Name | Description |
| --- | --- |
| **RETURNS** | The number of rules. |
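To make the distinction between rules and patterns concrete, here is a small sketch (the `"GREETING"` key and its patterns are illustrative assumptions):

```python
matcher = Matcher(nlp.vocab)
patterns = [
    [{"LOWER": "hello"}],
    [{"LOWER": "hi"}],
]
# Two patterns, but one shared key, so the matcher still reports a single rule
matcher.add("GREETING", patterns)
assert len(matcher) == 1
```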
## Matcher.\_\_contains\_\_

Check whether the matcher contains rules for a match ID.

### Example

```python
matcher = Matcher(nlp.vocab)
assert "Rule" not in matcher
matcher.add("Rule", [[{"ORTH": "test"}]])
assert "Rule" in matcher
```

| Name | Description |
| --- | --- |
| `key` | The match ID. |
| **RETURNS** | Whether the matcher contains rules for this match ID. |
## Matcher.add

Add a rule to the matcher, consisting of an ID key, one or more patterns, and an optional callback function to act on the matches. The callback function will receive the arguments `matcher`, `doc`, `i` and `matches`. If a pattern already exists for the given ID, the patterns will be extended. An `on_match` callback will be overwritten.
### Example

```python
def on_match(matcher, doc, id, matches):
    print('Matched!', matches)

matcher = Matcher(nlp.vocab)
patterns = [
    [{"LOWER": "hello"}, {"LOWER": "world"}],
    [{"ORTH": "Google"}, {"ORTH": "Maps"}],
]
matcher.add("TEST_PATTERNS", patterns, on_match=on_match)
doc = nlp("HELLO WORLD on Google Maps.")
matches = matcher(doc)
```
| Name | Description |
| --- | --- |
| `match_id` | An ID for the thing you're matching. |
| `patterns` | Match pattern. A pattern consists of a list of dicts, where each dict describes a token. |
| _keyword-only_ | |
| `on_match` | Callback function to act on matches. Takes the arguments `matcher`, `doc`, `i` and `matches`. |
| `greedy` 3 | Optional filter for greedy matches. Can either be `"FIRST"` or `"LONGEST"`. |
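The `greedy` filter changes which of several overlapping matches for the same key are kept. A rough sketch of the difference (the `"FOXES"` key and the toy text are illustrative assumptions):

```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")
doc = nlp("fox fox fox")

# Without a filter, every span satisfying "+" is returned, including overlaps
matcher = Matcher(nlp.vocab)
matcher.add("FOXES", [[{"LOWER": "fox", "OP": "+"}]])
print(len(matcher(doc)))  # several overlapping matches

# With greedy="LONGEST", overlapping matches are reduced to the longest ones
matcher = Matcher(nlp.vocab)
matcher.add("FOXES", [[{"LOWER": "fox", "OP": "+"}]], greedy="LONGEST")
print([doc[start:end].text for _, start, end in matcher(doc)])  # expected: ["fox fox fox"]
```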
## Matcher.remove

Remove a rule from the matcher. A `KeyError` is raised if the match ID does not exist.
### Example

```python
matcher.add("Rule", [[{"ORTH": "test"}]])
assert "Rule" in matcher
matcher.remove("Rule")
assert "Rule" not in matcher
```
| Name | Description |
| --- | --- |
| `key` | The ID of the match rule. |
## Matcher.get

Retrieve the pattern stored for a key. Returns the rule as an `(on_match, patterns)` tuple containing the callback and available patterns.
### Example

```python
matcher.add("Rule", [[{"ORTH": "test"}]])
on_match, patterns = matcher.get("Rule")
```
| Name | Description |
| --- | --- |
| `key` | The ID of the match rule. |
| **RETURNS** | The rule, as an `(on_match, patterns)` tuple. |
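Because both the callback and the stored patterns are returned, the tuple can be used to inspect a rule or re-register it elsewhere. A minimal sketch, continuing the example above (the second matcher is an illustrative assumption):

```python
# Copy a rule into a second matcher using the stored callback and patterns
on_match, patterns = matcher.get("Rule")
other_matcher = Matcher(nlp.vocab)
other_matcher.add("Rule", patterns, on_match=on_match)
assert "Rule" in other_matcher
```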