Add (noun chunks) syntax iterators for Danish (#6246)
* add syntax iterators for Danish
* add test noun chunks for Danish syntax iterators
* add contributor agreement
* update da syntax iterators to remove nested chunks
* add tests for da noun chunks
* fix test
* add missing import
* fix example
* prevent overlapping noun chunks by tracking the end index of the previous noun chunk span

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
This commit is contained in:
parent
6f7e7d88b9
commit
e3222fdec9
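The overlap guard described in the commit message comes down to tracking the last token index of the previously emitted chunk and skipping any candidate that starts at or before it. A minimal, self-contained sketch of that pattern (the candidate spans below are illustrative, not taken from the commit):

    # Candidate chunks as (left, right) inclusive token indices, in document order.
    candidates = [(0, 2), (1, 3), (4, 5)]
    chunks = []
    prev_right = -1
    for left, right in candidates:
        if left <= prev_right:  # span overlaps the previous chunk, so drop it
            continue
        chunks.append((left, right))
        prev_right = right
    print(chunks)  # [(0, 2), (4, 5)]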
.github/contributors/ophelielacroix.md (new file, 106 lines, vendored)
@@ -0,0 +1,106 @@

# spaCy contributor agreement

This spaCy Contributor Agreement (**"SCA"**) is based on the
[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
The SCA applies to any contribution that you make to any product or project
managed by us (the **"project"**), and sets out the intellectual property rights
you grant to us in the contributed materials. The term **"us"** shall mean
[ExplosionAI GmbH](https://explosion.ai/legal). The term
**"you"** shall mean the person or entity identified below.

If you agree to be bound by these terms, fill in the information requested
below and include the filled-in version with your first pull request, under the
folder [`.github/contributors/`](/.github/contributors/). The name of the file
should be your GitHub username, with the extension `.md`. For example, the user
example_user would create the file `.github/contributors/example_user.md`.

Read this agreement carefully before signing. These terms and conditions
constitute a binding legal agreement.

## Contributor Agreement

1. The term "contribution" or "contributed materials" means any source code,
object code, patch, tool, sample, graphic, specification, manual,
documentation, or any other material posted or submitted by you to the project.

2. With respect to any worldwide copyrights, or copyright applications and
registrations, in your contribution:

    * you hereby assign to us joint ownership, and to the extent that such
    assignment is or becomes invalid, ineffective or unenforceable, you hereby
    grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
    royalty-free, unrestricted license to exercise all rights under those
    copyrights. This includes, at our option, the right to sublicense these same
    rights to third parties through multiple levels of sublicensees or other
    licensing arrangements;

    * you agree that each of us can do all things in relation to your
    contribution as if each of us were the sole owners, and if one of us makes
    a derivative work of your contribution, the one who makes the derivative
    work (or has it made) will be the sole owner of that derivative work;

    * you agree that you will not assert any moral rights in your contribution
    against us, our licensees or transferees;

    * you agree that we may register a copyright in your contribution and
    exercise all ownership rights associated with it; and

    * you agree that neither of us has any duty to consult with, obtain the
    consent of, pay or render an accounting to the other for any use or
    distribution of your contribution.

3. With respect to any patents you own, or that you can license without payment
to any third party, you hereby grant to us a perpetual, irrevocable,
non-exclusive, worldwide, no-charge, royalty-free license to:

    * make, have made, use, sell, offer to sell, import, and otherwise transfer
    your contribution in whole or in part, alone or in combination with or
    included in any product, work or materials arising out of the project to
    which your contribution was submitted, and

    * at our option, to sublicense these same rights to third parties through
    multiple levels of sublicensees or other licensing arrangements.

4. Except as set out above, you keep all right, title, and interest in your
contribution. The rights that you grant to us under these terms are effective
on the date you first submitted a contribution to us, even if your submission
took place before the date you sign these terms.

5. You covenant, represent, warrant and agree that:

    * each contribution that you submit is and shall be an original work of
    authorship and you can legally grant the rights set out in this SCA;

    * to the best of your knowledge, each contribution will not violate any
    third party's copyrights, trademarks, patents, or other intellectual
    property rights; and

    * each contribution shall be in compliance with U.S. export control laws and
    other applicable export and import laws. You agree to notify us if you
    become aware of any circumstance which would make any of the foregoing
    representations inaccurate in any respect. We may publicly disclose your
    participation in the project, including the fact that you have signed the SCA.

6. This SCA is governed by the laws of the State of California and applicable
U.S. Federal law. Any choice of law rules will not apply.

7. Please place an “x” on one of the applicable statements below. Please do NOT
mark both statements:

    * [X] I am signing on behalf of myself as an individual and no other person
    or entity, including my employer, has or will have rights with respect to my
    contributions.

    * [ ] I am signing on behalf of my employer or a legal entity and I have the
    actual authority to contractually bind that entity.

## Contributor Details

| Field                         | Entry           |
|-------------------------------|-----------------|
| Name                          | Ophélie Lacroix |
| Company name (if applicable)  |                 |
| Title or role (if applicable) |                 |
| Date                          |                 |
| GitHub username               | ophelielacroix  |
| Website (optional)            |                 |
spacy/lang/da/__init__.py (modified)

@@ -7,6 +7,7 @@ from .stop_words import STOP_WORDS
 from .lex_attrs import LEX_ATTRS
 from .morph_rules import MORPH_RULES
 from ..tag_map import TAG_MAP
+from .syntax_iterators import SYNTAX_ITERATORS
 
 from ..tokenizer_exceptions import BASE_EXCEPTIONS
 from ...language import Language
@@ -24,6 +25,7 @@ class DanishDefaults(Language.Defaults):
     suffixes = TOKENIZER_SUFFIXES
     tag_map = TAG_MAP
     stop_words = STOP_WORDS
+    syntax_iterators = SYNTAX_ITERATORS
 
 
 class Danish(Language):
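With syntax_iterators registered on DanishDefaults, doc.noun_chunks on a parsed Danish Doc is served by the new iterator. A quick way to confirm the registration (a sketch assuming a spaCy v2.x install that includes this commit):

    from spacy.lang.da import Danish

    # The language defaults expose the registered iterator; it yields
    # (start, end, label) triples over a parsed Doc.
    chunker = Danish.Defaults.syntax_iterators["noun_chunks"]
    print(chunker.__name__)  # noun_chunks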
spacy/lang/da/syntax_iterators.py (new file, 81 lines)

@@ -0,0 +1,81 @@
# coding: utf8
from __future__ import unicode_literals

from ...symbols import NOUN, PROPN, PRON, VERB, AUX
from ...errors import Errors


def noun_chunks(doclike):
    def is_verb_token(tok):
        return tok.pos in [VERB, AUX]

    def next_token(tok):
        try:
            return tok.nbor()
        except IndexError:
            return None

    def get_left_bound(doc, root):
        left_bound = root
        for tok in reversed(list(root.lefts)):
            if tok.dep in np_left_deps:
                left_bound = tok
        return left_bound

    def get_right_bound(doc, root):
        right_bound = root
        for tok in root.rights:
            if tok.dep in np_right_deps:
                right = get_right_bound(doc, tok)
                # If a verb or a stop dep (e.g. punctuation) occurs between
                # the root and the candidate right edge, don't extend the chunk.
                if list(
                    filter(
                        lambda t: is_verb_token(t) or t.dep in stop_deps,
                        doc[root.i : right.i],
                    )
                ):
                    break
                else:
                    right_bound = right
        return right_bound

    def get_bounds(doc, root):
        return get_left_bound(doc, root), get_right_bound(doc, root)

    doc = doclike.doc

    if not doc.is_parsed:
        raise ValueError(Errors.E029)

    if not len(doc):
        return

    left_labels = [
        "det",
        "fixed",
        "nmod:poss",
        "amod",
        "flat",
        "goeswith",
        "nummod",
        "appos",
    ]
    right_labels = ["fixed", "nmod:poss", "amod", "flat", "goeswith", "nummod", "appos"]
    stop_labels = ["punct"]

    np_label = doc.vocab.strings.add("NP")
    np_left_deps = [doc.vocab.strings.add(label) for label in left_labels]
    np_right_deps = [doc.vocab.strings.add(label) for label in right_labels]
    stop_deps = [doc.vocab.strings.add(label) for label in stop_labels]

    chunks = []
    # Track the last token index of the previous chunk so chunks never overlap.
    prev_right = -1
    for token in doclike:
        if token.pos in [PROPN, NOUN, PRON]:
            left, right = get_bounds(doc, token)
            if left.i <= prev_right:
                continue
            yield left.i, right.i + 1, np_label
            prev_right = right.i


SYNTAX_ITERATORS = {"noun_chunks": noun_chunks}
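In use, this iterator backs doc.noun_chunks. A hypothetical end-to-end example (assumes a spaCy v2.x install that includes this commit plus a trained Danish parser; the model name is illustrative):

    import spacy

    nlp = spacy.load("da_core_news_sm")  # illustrative model name
    doc = nlp("Påfugle er de smukkeste fugle.")
    for chunk in doc.noun_chunks:  # served by noun_chunks() above
        print(chunk.text)
    # Expected, per the tests below: "Påfugle", "de smukkeste fugle"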
spacy/tests/lang/da/test_noun_chunks.py (new file, 61 lines)

@@ -0,0 +1,61 @@
# coding: utf-8
from __future__ import unicode_literals

import pytest

from ...util import get_doc


def test_noun_chunks_is_parsed(da_tokenizer):
    """Test that noun_chunks raises a ValueError for 'da' if the Doc is not
    parsed. To check this, we construct a Doc here and force is_parsed to
    False to make sure the noun chunks don't run.
    """
    doc = da_tokenizer("Det er en sætning")
    doc.is_parsed = False
    with pytest.raises(ValueError):
        list(doc.noun_chunks)


DA_NP_TEST_EXAMPLES = [
    (
        "Hun elsker at plukker frugt.",
        ["PRON", "VERB", "PART", "VERB", "NOUN", "PUNCT"],
        ["nsubj", "ROOT", "mark", "obj", "obj", "punct"],
        [1, 0, 1, -2, -1, -4],
        ["Hun", "frugt"],
    ),
    (
        "Påfugle er de smukkeste fugle.",
        ["NOUN", "AUX", "DET", "ADJ", "NOUN", "PUNCT"],
        ["nsubj", "cop", "det", "amod", "ROOT", "punct"],
        [4, 3, 2, 1, 0, -1],
        ["Påfugle", "de smukkeste fugle"],
    ),
    (
        "Rikke og Jacob Jensen glæder sig til en hyggelig skovtur",
        ["PROPN", "CCONJ", "PROPN", "PROPN", "VERB", "PRON", "ADP", "DET", "ADJ", "NOUN"],
        ["nsubj", "cc", "conj", "flat", "ROOT", "obj", "case", "det", "amod", "obl"],
        [4, 1, -2, -1, 0, -1, 3, 2, 1, -5],
        ["Rikke", "Jacob Jensen", "sig", "en hyggelig skovtur"],
    ),
]


@pytest.mark.parametrize(
    "text,pos,deps,heads,expected_noun_chunks", DA_NP_TEST_EXAMPLES
)
def test_da_noun_chunks(da_tokenizer, text, pos, deps, heads, expected_noun_chunks):
    tokens = da_tokenizer(text)

    assert len(heads) == len(pos)
    doc = get_doc(
        tokens.vocab, words=[t.text for t in tokens], heads=heads, deps=deps, pos=pos
    )

    noun_chunks = list(doc.noun_chunks)
    assert len(noun_chunks) == len(expected_noun_chunks)
    for i, np in enumerate(noun_chunks):
        assert np.text == expected_noun_chunks[i]