# cython: infer_types=True, profile=True
from collections import defaultdict
from typing import List

from preshed.maps cimport map_clear, map_get, map_init, map_iter, map_set

import warnings

from ..attrs cimport DEP, LEMMA, MORPH, POS, TAG
from ..attrs import IDS
from ..tokens.span cimport Span
from ..tokens.token cimport Token
from ..typedefs cimport attr_t
from ..errors import Errors, Warnings
from ..schemas import TokenPattern


cdef class PhraseMatcher:
    """Efficiently match large terminology lists. While the `Matcher` matches
    sequences based on lists of token descriptions, the `PhraseMatcher` accepts
    match patterns in the form of `Doc` objects.

    DOCS: https://spacy.io/api/phrasematcher
    USAGE: https://spacy.io/usage/rule-based-matching#phrasematcher

    Adapted from FlashText: https://github.com/vi3k6i5/flashtext
    MIT License (see `LICENSE`)
    Copyright (c) 2017 Vikash Singh (vikash.duliajan@gmail.com)
    """
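The FlashText-style trie can be sketched in plain Python. This is an illustrative model only: the hypothetical `TERMINAL` sentinel stands in for the C-level `_terminal_hash`, and the real matcher stores token-attribute hashes in preshed maps rather than dict keys.

```python
# Minimal sketch of the FlashText-style trie the PhraseMatcher builds.
# TERMINAL is a stand-in for the C-level `_terminal_hash` sentinel.
TERMINAL = object()

def trie_add(trie, tokens, key):
    """Walk/extend the trie along `tokens`, then record `key` at a terminal node."""
    node = trie
    for token in tokens:
        node = node.setdefault(token, {})
    node.setdefault(TERMINAL, set()).add(key)

trie = {}
trie_add(trie, ["new", "york"], "GPE")
trie_add(trie, ["new", "york", "city"], "GPE")

# "new york" ends at a terminal node; "new" alone does not.
node = trie["new"]["york"]
assert TERMINAL in node and "GPE" in node[TERMINAL]
assert TERMINAL not in trie["new"]
```

Because patterns share prefixes in the trie, adding "new york city" after "new york" reuses the existing "new" → "york" path and only appends one node.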
    def __init__(self, Vocab vocab, attr="ORTH", validate=False):
        """Initialize the PhraseMatcher.

        vocab (Vocab): The shared vocabulary.
        attr (int / str): Token attribute to match on.
        validate (bool): Perform additional validation when patterns are added.

        DOCS: https://spacy.io/api/phrasematcher#init
        """
        self.vocab = vocab
        self._callbacks = {}
        self._docs = defaultdict(set)
        self._validate = validate

        self.mem = Pool()
        self.c_map = <MapStruct*>self.mem.alloc(1, sizeof(MapStruct))
        self._terminal_hash = 826361138722620965
        map_init(self.mem, self.c_map, 8)

        if isinstance(attr, (int, long)):
            self.attr = attr
        else:
            if attr is None:
                attr = "ORTH"
            attr = attr.upper()
            if attr == "TEXT":
                attr = "ORTH"
            if attr == "IS_SENT_START":
                attr = "SENT_START"
            if attr.lower() not in TokenPattern().dict():
                raise ValueError(Errors.E152.format(attr=attr))
            self.attr = IDS.get(attr)
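The attribute normalization in `__init__` reduces to a few aliasing steps. A plain-Python sketch (the function name is hypothetical; the real code then maps the string through `IDS` to an integer):

```python
def normalize_attr(attr):
    """Mirror the string aliasing done in __init__."""
    if attr is None:
        attr = "ORTH"
    attr = attr.upper()
    if attr == "TEXT":
        attr = "ORTH"        # TEXT is an alias for the verbatim token text
    if attr == "IS_SENT_START":
        attr = "SENT_START"
    return attr

assert normalize_attr(None) == "ORTH"
assert normalize_attr("text") == "ORTH"
assert normalize_attr("is_sent_start") == "SENT_START"
assert normalize_attr("lemma") == "LEMMA"
```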
|
2019-02-07 11:42:25 +03:00
|
|
|
|
|
|
|
    def __len__(self):
        """Get the number of match IDs added to the matcher.

        RETURNS (int): The number of rules.

        DOCS: https://spacy.io/api/phrasematcher#len
        """
        return len(self._callbacks)
|
2019-02-07 11:42:25 +03:00
|
|
|
|
|
|
|
    def __contains__(self, key):
        """Check whether the matcher contains rules for a match ID.

        key (str): The match ID.
        RETURNS (bool): Whether the matcher contains rules for this match ID.

        DOCS: https://spacy.io/api/phrasematcher#contains
        """
        return key in self._callbacks
|
2019-02-07 11:42:25 +03:00
|
|
|
|
|
|
|
    def __reduce__(self):
        data = (self.vocab, self._docs, self._callbacks, self.attr)
        return (unpickle_matcher, data, None, None)
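`__reduce__` hands pickle a reconstructor plus the pure-Python state, so the C-level trie is rebuilt on load rather than serialized. The same pattern in miniature (the class and helper are hypothetical, not spaCy API):

```python
import pickle

def _rebuild(state):
    """Reconstructor invoked by pickle; rebuild derived structures here."""
    m = Mini()
    m.state = dict(state)
    return m

class Mini:
    def __init__(self):
        self.state = {}
    def __reduce__(self):
        # (callable, args): pickle calls _rebuild(self.state) on load
        return (_rebuild, (self.state,))

m = Mini()
m.state["GPE"] = [("new", "york")]
m2 = pickle.loads(pickle.dumps(m))
assert m2.state == {"GPE": [("new", "york")]}
```

This is why `unpickle_matcher` at module level re-adds every spec through `_add_from_arrays`: only the serializable inputs travel through pickle.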
    def remove(self, key):
        """Remove a rule from the matcher by match ID. A KeyError is raised if
        the key does not exist.

        key (str): The match ID.

        DOCS: https://spacy.io/api/phrasematcher#remove
        """
        if key not in self._docs:
            raise KeyError(key)
        cdef MapStruct* current_node
        cdef MapStruct* terminal_map
        cdef MapStruct* node_pointer
        cdef void* result
        cdef key_t terminal_key
        cdef void* value
        cdef int c_i = 0
        cdef vector[MapStruct*] path_nodes
        cdef vector[key_t] path_keys
        cdef key_t key_to_remove
        for keyword in sorted(self._docs[key], key=lambda x: len(x), reverse=True):
            current_node = self.c_map
            path_nodes.clear()
            path_keys.clear()
            for token in keyword:
                result = map_get(current_node, token)
                if result:
                    path_nodes.push_back(current_node)
                    path_keys.push_back(token)
                    current_node = <MapStruct*>result
                else:
                    # if token is not found, break out of the loop
                    current_node = NULL
                    break
            path_nodes.push_back(current_node)
            path_keys.push_back(self._terminal_hash)
            # remove the tokens from the trie node if there are no other
            # keywords with them
            result = map_get(current_node, self._terminal_hash)
            if current_node != NULL and result:
                terminal_map = <MapStruct*>result
                terminal_keys = []
                c_i = 0
                while map_iter(terminal_map, &c_i, &terminal_key, &value):
                    terminal_keys.append(self.vocab.strings[terminal_key])
                # if this is the only remaining key, remove unnecessary paths
                if terminal_keys == [key]:
                    while not path_nodes.empty():
                        node_pointer = path_nodes.back()
                        path_nodes.pop_back()
                        key_to_remove = path_keys.back()
                        path_keys.pop_back()
                        result = map_get(node_pointer, key_to_remove)
                        if node_pointer.filled == 1:
                            map_clear(node_pointer, key_to_remove)
                            self.mem.free(result)
                        else:
                            # more than one key means more than one path;
                            # delete only the unneeded path and keep the others
                            map_clear(node_pointer, key_to_remove)
                            self.mem.free(result)
                            break
                # otherwise simply remove the key
                else:
                    result = map_get(current_node, self._terminal_hash)
                    if result:
                        map_clear(<MapStruct*>result, self.vocab.strings[key])

        del self._callbacks[key]
        del self._docs[key]
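The path-tracking and bottom-up pruning in `remove` can be sketched with the dict-trie model (helper and `TERMINAL` sentinel are hypothetical; the real code tracks C pointers in `path_nodes`/`path_keys` vectors):

```python
TERMINAL = object()   # stand-in for the C-level terminal sentinel

def trie_remove(trie, tokens, key):
    """Delete `key` for `tokens`, pruning now-unused trie nodes bottom-up."""
    path = [trie]
    node = trie
    for token in tokens:
        node = node[token]
        path.append(node)
    node[TERMINAL].discard(key)
    if not node[TERMINAL]:
        del node[TERMINAL]
    # walk the recorded path from the deepest node back toward the root,
    # deleting any child that has become empty
    for parent, token in zip(reversed(path[:-1]), reversed(tokens)):
        if not parent[token]:
            del parent[token]

trie = {"new": {"york": {TERMINAL: {"GPE"}}}}
trie_remove(trie, ["new", "york"], "GPE")
assert trie == {}   # both nodes were pruned

# a node still used by another key survives
trie2 = {"a": {TERMINAL: {"X", "Y"}}}
trie_remove(trie2, ["a"], "X")
assert trie2 == {"a": {TERMINAL: {"Y"}}}
```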
    def _add_from_arrays(self, key, specs, *, on_match=None):
        """Add a preprocessed list of specs, with an optional callback.

        key (str): The match ID.
        specs (List[List[int]]): A list of lists of hashes to match.
        on_match (callable): Callback executed on match.
        """
        cdef MapStruct* current_node
        cdef MapStruct* internal_node
        cdef void* result

        self._callbacks[key] = on_match
        for spec in specs:
            self._docs[key].add(tuple(spec))

            current_node = self.c_map
            for token in spec:
                if token == self._terminal_hash:
                    warnings.warn(Warnings.W021)
                    break
                result = <MapStruct*>map_get(current_node, token)
                if not result:
                    internal_node = <MapStruct*>self.mem.alloc(1, sizeof(MapStruct))
                    map_init(self.mem, internal_node, 8)
                    map_set(self.mem, current_node, token, internal_node)
                    result = internal_node
                current_node = <MapStruct*>result
            result = <MapStruct*>map_get(current_node, self._terminal_hash)
            if not result:
                internal_node = <MapStruct*>self.mem.alloc(1, sizeof(MapStruct))
                map_init(self.mem, internal_node, 8)
                map_set(self.mem, current_node, self._terminal_hash, internal_node)
                result = internal_node
            map_set(self.mem, <MapStruct*>result, self.vocab.strings[key], NULL)
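`self._docs` maps each key to a `set` of token-array tuples, so re-adding an identical pattern is a no-op. The bookkeeping side of `_add_from_arrays` sketched with `defaultdict(set)` (helper name hypothetical):

```python
from collections import defaultdict

docs = defaultdict(set)   # key -> set of hash tuples, as in self._docs

def add_specs(docs, key, specs):
    for spec in specs:
        docs[key].add(tuple(spec))   # tuples are hashable; duplicates collapse

add_specs(docs, "GPE", [[101, 202], [101, 202], [303]])
assert docs["GPE"] == {(101, 202), (303,)}
```

Converting each spec to a tuple is what makes set membership (and therefore pickling round-trips through `unpickle_matcher`) deterministic.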
    def add(self, key, docs, *, on_match=None):
        """Add a match-rule to the phrase-matcher. A match-rule consists of: an ID
        key, a list of one or more patterns, and (optionally) an on_match callback.

        key (str): The match ID.
        docs (list): List of `Doc` objects representing match patterns.
        on_match (callable): Callback executed on match.

        If any of the input Docs are invalid, no internal state will be updated.

        DOCS: https://spacy.io/api/phrasematcher#add
        """
        if isinstance(docs, Doc):
            raise ValueError(Errors.E179.format(key=key))
        if docs is None or not isinstance(docs, List):
            raise ValueError(Errors.E948.format(name="PhraseMatcher", arg_type=type(docs)))
        if on_match is not None and not hasattr(on_match, "__call__"):
            raise ValueError(Errors.E171.format(name="PhraseMatcher", arg_type=type(on_match)))

        _ = self.vocab[key]
        specs = []

        for doc in docs:
            # check the type before calling len(), so a non-Doc item fails
            # with the intended error rather than a generic TypeError
            if not isinstance(doc, Doc):
                raise ValueError(Errors.E4000.format(type=type(doc)))
            if len(doc) == 0:
                continue

            attrs = (TAG, POS, MORPH, LEMMA, DEP)
            has_annotation = {attr: doc.has_annotation(attr) for attr in attrs}
            for attr in attrs:
                if self.attr == attr and not has_annotation[attr]:
                    if attr == TAG:
                        pipe = "tagger"
                    elif attr in (POS, MORPH):
                        pipe = "morphologizer or tagger+attribute_ruler"
                    elif attr == LEMMA:
                        pipe = "lemmatizer"
                    elif attr == DEP:
                        pipe = "parser"
                    error_msg = Errors.E155.format(pipe=pipe, attr=self.vocab.strings.as_string(attr))
                    raise ValueError(error_msg)
            if self._validate and any(has_annotation.values()) \
                    and self.attr not in attrs:
                string_attr = self.vocab.strings[self.attr]
                warnings.warn(Warnings.W012.format(key=key, attr=string_attr))
            specs.append(self._convert_to_array(doc))

        self._add_from_arrays(key, specs, on_match=on_match)
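Because `add` collects every converted spec before calling `_add_from_arrays`, one invalid pattern leaves the matcher completely untouched: nothing is committed until the whole batch validates. The same all-or-nothing pattern in miniature (names hypothetical):

```python
def add_all_or_nothing(store, key, items):
    """Validate every item first; mutate `store` only if all pass."""
    specs = []
    for item in items:
        if not isinstance(item, list):
            raise ValueError(f"expected list, got {type(item).__name__}")
        specs.append(tuple(item))
    # commit only after full validation succeeded
    store.setdefault(key, set()).update(specs)

store = {}
try:
    add_all_or_nothing(store, "GPE", [[1, 2], "oops"])
except ValueError:
    pass
assert store == {}   # nothing was committed
add_all_or_nothing(store, "GPE", [[1, 2]])
assert store == {"GPE": {(1, 2)}}
```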
    def __call__(self, object doclike, *, as_spans=False):
        """Find all sequences matching the supplied patterns on the `Doc`.

        doclike (Doc or Span): The document to match over.
        as_spans (bool): Return Span objects with labels instead of (match_id,
            start, end) tuples.
        RETURNS (list): A list of `(match_id, start, end)` tuples,
            describing the matches. A match tuple describes a span
            `doc[start:end]`. The `match_id` is an integer. If as_spans is set
            to True, a list of Span objects is returned.

        DOCS: https://spacy.io/api/phrasematcher#call
        """
        matches = []
        if doclike is None or len(doclike) == 0:
            # if doc is empty or None just return empty list
            return matches
        if isinstance(doclike, Doc):
            doc = doclike
            start_idx = 0
            end_idx = len(doc)
        elif isinstance(doclike, Span):
            doc = doclike.doc
            start_idx = doclike.start
            end_idx = doclike.end
        else:
            raise ValueError(Errors.E195.format(good="Doc or Span", got=type(doclike).__name__))

        cdef vector[SpanC] c_matches
        self.find_matches(doc, start_idx, end_idx, &c_matches)
        for i in range(c_matches.size()):
            matches.append((c_matches[i].label, c_matches[i].start, c_matches[i].end))
        for i, (ent_id, start, end) in enumerate(matches):
            on_match = self._callbacks.get(self.vocab.strings[ent_id])
            if on_match is not None:
                on_match(self, doc, i, matches)
        if as_spans:
            return [Span(doc, start, end, label=key) for key, start, end in matches]
        else:
            return matches
    cdef void find_matches(self, Doc doc, int start_idx, int end_idx, vector[SpanC] *matches) nogil:
        cdef MapStruct* current_node = self.c_map
        cdef int start = 0
        cdef int idx = start_idx
        cdef int idy = start_idx
        cdef key_t key
        cdef void* value
        cdef int i = 0
        cdef SpanC ms
        cdef void* result
        while idx < end_idx:
            start = idx
            token = Token.get_struct_attr(&doc.c[idx], self.attr)
            # look for sequences from this position
            result = map_get(current_node, token)
            if result:
                current_node = <MapStruct*>result
                idy = idx + 1
                while idy < end_idx:
                    result = map_get(current_node, self._terminal_hash)
                    if result:
                        i = 0
                        while map_iter(<MapStruct*>result, &i, &key, &value):
                            ms = make_spanstruct(key, start, idy)
                            matches.push_back(ms)
                    inner_token = Token.get_struct_attr(&doc.c[idy], self.attr)
                    result = map_get(current_node, inner_token)
                    if result:
                        current_node = <MapStruct*>result
                        idy += 1
                    else:
                        break
                else:
                    # end of doc reached
                    result = map_get(current_node, self._terminal_hash)
                    if result:
                        i = 0
                        while map_iter(<MapStruct*>result, &i, &key, &value):
                            ms = make_spanstruct(key, start, idy)
                            matches.push_back(ms)
            current_node = self.c_map
            idx += 1
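`find_matches` restarts a trie descent at every token position and emits a match each time a terminal node is reached, so overlapping matches like "new york" inside "new york city" are all reported. A pure-Python equivalent over a plain token list (names and `TERMINAL` sentinel hypothetical; the real code iterates preshed maps under `nogil`):

```python
TERMINAL = object()   # stand-in for the terminal sentinel

def find_matches(trie, tokens):
    """Return (key, start, end) for every stored sequence found in `tokens`."""
    matches = []
    for start in range(len(tokens)):
        node = trie
        end = start
        # descend while the next token continues a stored pattern
        while end < len(tokens) and tokens[end] in node:
            node = node[tokens[end]]
            end += 1
            for key in node.get(TERMINAL, ()):
                matches.append((key, start, end))
    return matches

trie = {"new": {"york": {TERMINAL: {"GPE"},
                         "city": {TERMINAL: {"GPE"}}}}}
tokens = ["in", "new", "york", "city"]
assert find_matches(trie, tokens) == [("GPE", 1, 3), ("GPE", 1, 4)]
```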
    def pipe(self, stream, batch_size=1000, return_matches=False, as_tuples=False):
        """Match a stream of documents, yielding them in turn. Deprecated as of
        spaCy v3.0.
        """
        warnings.warn(Warnings.W105.format(matcher="PhraseMatcher"), DeprecationWarning)
        if as_tuples:
            for doc, context in stream:
                matches = self(doc)
                if return_matches:
                    yield ((doc, matches), context)
                else:
                    yield (doc, context)
        else:
            for doc in stream:
                matches = self(doc)
                if return_matches:
                    yield (doc, matches)
                else:
                    yield doc
    def _convert_to_array(self, Doc doc):
        return [Token.get_struct_attr(&doc.c[i], self.attr) for i in range(len(doc))]

def unpickle_matcher(vocab, docs, callbacks, attr):
    matcher = PhraseMatcher(vocab, attr=attr)
    for key, specs in docs.items():
        callback = callbacks.get(key, None)
        matcher._add_from_arrays(key, specs, on_match=callback)
    return matcher

cdef SpanC make_spanstruct(attr_t label, int start, int end) nogil:
    cdef SpanC spanc
    spanc.label = label
    spanc.start = start
    spanc.end = end
    return spanc