Run PhraseMatcher on Spans (#6918)

* Add regression test

* Run PhraseMatcher on Spans

* Add test for PhraseMatcher on Spans and Docs

* Add SCA

* Add test with 3 matches in Doc, 1 match in Span

* Update docs

* Use doc.length for find_matches in tokenizer

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
Peter Baumann 2021-02-10 07:43:32 -05:00 committed by GitHub
parent a96f850cb7
commit 61b04a70d5
7 changed files with 174 additions and 13 deletions
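
In practical terms, the change lets the `PhraseMatcher` be called on a `Span` as well as a `Doc`. A minimal usage sketch (illustrative only, assuming a blank English pipeline; the example text mirrors the new tests):

```python
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.blank("en")
matcher = PhraseMatcher(nlp.vocab)
matcher.add("SPACY", [nlp("Spans and Docs")])

doc = nlp("I like Spans and Docs in my input, and nothing else.")
span = doc[:8]  # "I like Spans and Docs in my input"

# The matcher now accepts a Span as well as a Doc; match offsets are
# token indices into the parent Doc in both cases.
print(matcher(doc))   # [(<match_id>, 2, 5)]
print(matcher(span))  # [(<match_id>, 2, 5)]
```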

.github/contributors/peter-exos.md (new file)

@@ -0,0 +1,106 @@
# spaCy contributor agreement
This spaCy Contributor Agreement (**"SCA"**) is based on the
[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
The SCA applies to any contribution that you make to any product or project
managed by us (the **"project"**), and sets out the intellectual property rights
you grant to us in the contributed materials. The term **"us"** shall mean
[ExplosionAI GmbH](https://explosion.ai/legal). The term
**"you"** shall mean the person or entity identified below.
If you agree to be bound by these terms, fill in the information requested
below and include the filled-in version with your first pull request, under the
folder [`.github/contributors/`](/.github/contributors/). The name of the file
should be your GitHub username, with the extension `.md`. For example, the user
example_user would create the file `.github/contributors/example_user.md`.
Read this agreement carefully before signing. These terms and conditions
constitute a binding legal agreement.
## Contributor Agreement
1. The term "contribution" or "contributed materials" means any source code,
object code, patch, tool, sample, graphic, specification, manual,
documentation, or any other material posted or submitted by you to the project.
2. With respect to any worldwide copyrights, or copyright applications and
registrations, in your contribution:
* you hereby assign to us joint ownership, and to the extent that such
assignment is or becomes invalid, ineffective or unenforceable, you hereby
grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
royalty-free, unrestricted license to exercise all rights under those
copyrights. This includes, at our option, the right to sublicense these same
rights to third parties through multiple levels of sublicensees or other
licensing arrangements;
* you agree that each of us can do all things in relation to your
contribution as if each of us were the sole owners, and if one of us makes
a derivative work of your contribution, the one who makes the derivative
work (or has it made) will be the sole owner of that derivative work;
* you agree that you will not assert any moral rights in your contribution
against us, our licensees or transferees;
* you agree that we may register a copyright in your contribution and
exercise all ownership rights associated with it; and
* you agree that neither of us has any duty to consult with, obtain the
consent of, pay or render an accounting to the other for any use or
distribution of your contribution.
3. With respect to any patents you own, or that you can license without payment
to any third party, you hereby grant to us a perpetual, irrevocable,
non-exclusive, worldwide, no-charge, royalty-free license to:
* make, have made, use, sell, offer to sell, import, and otherwise transfer
your contribution in whole or in part, alone or in combination with or
included in any product, work or materials arising out of the project to
which your contribution was submitted, and
* at our option, to sublicense these same rights to third parties through
multiple levels of sublicensees or other licensing arrangements.
4. Except as set out above, you keep all right, title, and interest in your
contribution. The rights that you grant to us under these terms are effective
on the date you first submitted a contribution to us, even if your submission
took place before the date you sign these terms.
5. You covenant, represent, warrant and agree that:
* Each contribution that you submit is and shall be an original work of
authorship and you can legally grant the rights set out in this SCA;
* to the best of your knowledge, each contribution will not violate any
third party's copyrights, trademarks, patents, or other intellectual
property rights; and
* each contribution shall be in compliance with U.S. export control laws and
other applicable export and import laws. You agree to notify us if you
become aware of any circumstance which would make any of the foregoing
representations inaccurate in any respect. We may publicly disclose your
participation in the project, including the fact that you have signed the SCA.
6. This SCA is governed by the laws of the State of California and applicable
U.S. Federal law. Any choice of law rules will not apply.
7. Please place an “x” on one of the applicable statements below. Please do NOT
mark both statements:
* [ ] I am signing on behalf of myself as an individual and no other person
or entity, including my employer, has or will have rights with respect to my
contributions.
* [x] I am signing on behalf of my employer or a legal entity and I have the
actual authority to contractually bind that entity.
## Contributor Details
| Field | Entry |
|------------------------------- | -------------------- |
| Name | Peter Baumann |
| Company name (if applicable) | Exos Financial |
| Title or role (if applicable) | data scientist |
| Date | Feb 1st, 2021 |
| GitHub username | peter-exos |
| Website (optional) | |

spacy/matcher/phrasematcher.pxd

@@ -18,4 +18,4 @@ cdef class PhraseMatcher:
    cdef Pool mem
    cdef key_t _terminal_hash
-    cdef void find_matches(self, Doc doc, vector[SpanC] *matches) nogil
+    cdef void find_matches(self, Doc doc, int start_idx, int end_idx, vector[SpanC] *matches) nogil

spacy/matcher/phrasematcher.pyx

@@ -230,10 +230,10 @@ cdef class PhraseMatcher:
            result = internal_node
        map_set(self.mem, <MapStruct*>result, self.vocab.strings[key], NULL)

-    def __call__(self, doc, *, as_spans=False):
+    def __call__(self, object doclike, *, as_spans=False):
        """Find all sequences matching the supplied patterns on the `Doc`.

-        doc (Doc): The document to match over.
+        doclike (Doc or Span): The document to match over.
        as_spans (bool): Return Span objects with labels instead of (match_id,
            start, end) tuples.
        RETURNS (list): A list of `(match_id, start, end)` tuples,
@@ -244,12 +244,22 @@ cdef class PhraseMatcher:
        DOCS: https://spacy.io/api/phrasematcher#call
        """
        matches = []
-        if doc is None or len(doc) == 0:
+        if doclike is None or len(doclike) == 0:
            # if doc is empty or None just return empty list
            return matches
+        if isinstance(doclike, Doc):
+            doc = doclike
+            start_idx = 0
+            end_idx = len(doc)
+        elif isinstance(doclike, Span):
+            doc = doclike.doc
+            start_idx = doclike.start
+            end_idx = doclike.end
+        else:
+            raise ValueError(Errors.E195.format(good="Doc or Span", got=type(doclike).__name__))
        cdef vector[SpanC] c_matches
-        self.find_matches(doc, &c_matches)
+        self.find_matches(doc, start_idx, end_idx, &c_matches)
        for i in range(c_matches.size()):
            matches.append((c_matches[i].label, c_matches[i].start, c_matches[i].end))
        for i, (ent_id, start, end) in enumerate(matches):
@@ -261,17 +271,17 @@ cdef class PhraseMatcher:
        else:
            return matches

-    cdef void find_matches(self, Doc doc, vector[SpanC] *matches) nogil:
+    cdef void find_matches(self, Doc doc, int start_idx, int end_idx, vector[SpanC] *matches) nogil:
        cdef MapStruct* current_node = self.c_map
        cdef int start = 0
-        cdef int idx = 0
-        cdef int idy = 0
+        cdef int idx = start_idx
+        cdef int idy = start_idx
        cdef key_t key
        cdef void* value
        cdef int i = 0
        cdef SpanC ms
        cdef void* result
-        while idx < doc.length:
+        while idx < end_idx:
            start = idx
            token = Token.get_struct_attr(&doc.c[idx], self.attr)
            # look for sequences from this position
@@ -279,7 +289,7 @@ cdef class PhraseMatcher:
            if result:
                current_node = <MapStruct*>result
                idy = idx + 1
-                while idy < doc.length:
+                while idy < end_idx:
                    result = map_get(current_node, self._terminal_hash)
                    if result:
                        i = 0
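
Taken together, these hunks mean `find_matches` only starts and ends matches inside the `[start_idx, end_idx)` token window: the whole document for a `Doc`, the span boundaries for a `Span`. A simplified pure-Python sketch of that bounded trie walk (illustrative only; the dict-based trie, the `__terminal__` key and the function name are not part of the actual Cython code, which walks a preshed `MapStruct` over hashed token attributes):

```python
def find_matches_sketch(trie, tokens, start_idx, end_idx):
    matches = []
    for idx in range(start_idx, end_idx):             # matches must start inside the window
        node, idy = trie, idx
        while idy < end_idx and tokens[idy] in node:  # ...and must end inside it, too
            node = node[tokens[idy]]
            idy += 1
            if "__terminal__" in node:                # a complete pattern ends here
                matches.append((node["__terminal__"], idx, idy))
    return matches

trie = {"Spans": {"and": {"Docs": {"__terminal__": "SPACY"}}}}
tokens = "I like Spans and Docs in my input".split()
print(find_matches_sketch(trie, tokens, 0, len(tokens)))  # [('SPACY', 2, 5)]
print(find_matches_sketch(trie, tokens, 3, len(tokens)))  # [] -- window starts after "Spans"
```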

spacy/tests/matcher/test_phrase_matcher.py

@@ -323,3 +323,33 @@ def test_phrase_matcher_deprecated(en_vocab):
@pytest.mark.parametrize("attr", ["SENT_START", "IS_SENT_START"])
def test_phrase_matcher_sent_start(en_vocab, attr):
    _ = PhraseMatcher(en_vocab, attr=attr)  # noqa: F841
+
+
+def test_span_in_phrasematcher(en_vocab):
+    """Ensure that PhraseMatcher accepts Span and Doc as input"""
+    doc = Doc(en_vocab,
+              words=["I", "like", "Spans", "and", "Docs", "in", "my", "input",
+                     ",", "and", "nothing", "else", "."])
+    span = doc[:8]
+    pattern = Doc(en_vocab, words=["Spans", "and", "Docs"])
+    matcher = PhraseMatcher(en_vocab)
+    matcher.add("SPACY", [pattern])
+    matches_doc = matcher(doc)
+    matches_span = matcher(span)
+    assert len(matches_doc) == 1
+    assert len(matches_span) == 1
+
+
+def test_span_v_doc_in_phrasematcher(en_vocab):
+    """Ensure that PhraseMatcher only returns matches in the input Span, not the whole Doc"""
+    doc = Doc(en_vocab,
+              words=["I", "like", "Spans", "and", "Docs", "in", "my", "input", ",",
+                     "Spans", "and", "Docs", "in", "my", "matchers", ",",
+                     "and", "Spans", "and", "Docs", "everywhere", "."])
+    span = doc[9:15]  # second clause: "Spans and Docs in my matchers"
+    pattern = Doc(en_vocab, words=["Spans", "and", "Docs"])
+    matcher = PhraseMatcher(en_vocab)
+    matcher.add("SPACY", [pattern])
+    matches_doc = matcher(doc)
+    matches_span = matcher(span)
+    assert len(matches_doc) == 3
+    assert len(matches_span) == 1

New regression test file

@@ -0,0 +1,15 @@
+from spacy.tokens import Doc
+from spacy.matcher import PhraseMatcher
+
+
+def test_span_in_phrasematcher(en_vocab):
+    """Ensure that PhraseMatcher accepts Span as input"""
+    doc = Doc(en_vocab,
+              words=["I", "like", "Spans", "and", "Docs", "in", "my", "input",
+                     ",", "and", "nothing", "else", "."])
+    span = doc[:8]
+    pattern = Doc(en_vocab, words=["Spans", "and", "Docs"])
+    matcher = PhraseMatcher(en_vocab)
+    matcher.add("SPACY", [pattern])
+    matches = matcher(span)
+    assert matches

spacy/tokenizer.pyx

@@ -245,7 +245,7 @@ cdef class Tokenizer:
        cdef int offset
        cdef int modified_doc_length
        # Find matches for special cases
-        self._special_matcher.find_matches(doc, &c_matches)
+        self._special_matcher.find_matches(doc, 0, doc.length, &c_matches)
        # Skip processing if no matches
        if c_matches.size() == 0:
            return True

website/docs/api/phrasematcher.md

@@ -44,7 +44,7 @@ be shown.
## PhraseMatcher.\_\_call\_\_ {#call tag="method"}

-Find all token sequences matching the supplied patterns on the `Doc`.
+Find all token sequences matching the supplied patterns on the `Doc` or `Span`.

> #### Example
>
@@ -59,7 +59,7 @@ Find all token sequences matching the supplied patterns on the `Doc`.
| Name                                  | Description |
| ------------------------------------- | ----------- |
-| `doc`                                 | The document to match over. ~~Doc~~ |
+| `doclike`                             | The `Doc` or `Span` to match over. ~~Union[Doc, Span]~~ |
| _keyword-only_                        |  |
| `as_spans` <Tag variant="new">3</Tag> | Instead of tuples, return a list of [`Span`](/api/span) objects of the matches, with the `match_id` assigned as the span label. Defaults to `False`. ~~bool~~ |
| **RETURNS**                           | A list of `(match_id, start, end)` tuples, describing the matches. A match tuple describes a span `doc[start:end]`. The `match_id` is the ID of the added match pattern. If `as_spans` is set to `True`, a list of `Span` objects is returned instead. ~~Union[List[Tuple[int, int, int]], List[Span]]~~ |
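
For completeness, the documented `as_spans` option combines with the new `Span` input as expected; a short illustrative snippet (not part of the diff, assuming a blank English pipeline):

```python
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.blank("en")
matcher = PhraseMatcher(nlp.vocab)
matcher.add("SPACY", [nlp("Spans and Docs")])

doc = nlp("I like Spans and Docs in my input, Spans and Docs in my matchers.")
second_clause = doc[9:15]  # "Spans and Docs in my matchers"

# Only the match inside the span is returned, as a labelled Span object.
for match in matcher(second_clause, as_spans=True):
    print(match.text, match.label_)  # "Spans and Docs" "SPACY"
```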