Mirror of https://github.com/explosion/spaCy.git (synced 2025-02-11 17:10:36 +03:00)

Commit 0bb28e1cdf: Merge branch 'develop' into nightly.spacy.io

.github/contributors/delzac.md (vendored, new file, 106 lines)

@@ -0,0 +1,106 @@
# spaCy contributor agreement

This spaCy Contributor Agreement (**"SCA"**) is based on the
[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
The SCA applies to any contribution that you make to any product or project
managed by us (the **"project"**), and sets out the intellectual property rights
you grant to us in the contributed materials. The term **"us"** shall mean
[ExplosionAI GmbH](https://explosion.ai/legal). The term
**"you"** shall mean the person or entity identified below.

If you agree to be bound by these terms, fill in the information requested
below and include the filled-in version with your first pull request, under the
folder [`.github/contributors/`](/.github/contributors/). The name of the file
should be your GitHub username, with the extension `.md`. For example, the user
example_user would create the file `.github/contributors/example_user.md`.

Read this agreement carefully before signing. These terms and conditions
constitute a binding legal agreement.

## Contributor Agreement

1. The term "contribution" or "contributed materials" means any source code,
object code, patch, tool, sample, graphic, specification, manual,
documentation, or any other material posted or submitted by you to the project.

2. With respect to any worldwide copyrights, or copyright applications and
registrations, in your contribution:

    * you hereby assign to us joint ownership, and to the extent that such
    assignment is or becomes invalid, ineffective or unenforceable, you hereby
    grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
    royalty-free, unrestricted license to exercise all rights under those
    copyrights. This includes, at our option, the right to sublicense these same
    rights to third parties through multiple levels of sublicensees or other
    licensing arrangements;

    * you agree that each of us can do all things in relation to your
    contribution as if each of us were the sole owners, and if one of us makes
    a derivative work of your contribution, the one who makes the derivative
    work (or has it made) will be the sole owner of that derivative work;

    * you agree that you will not assert any moral rights in your contribution
    against us, our licensees or transferees;

    * you agree that we may register a copyright in your contribution and
    exercise all ownership rights associated with it; and

    * you agree that neither of us has any duty to consult with, obtain the
    consent of, pay or render an accounting to the other for any use or
    distribution of your contribution.

3. With respect to any patents you own, or that you can license without payment
to any third party, you hereby grant to us a perpetual, irrevocable,
non-exclusive, worldwide, no-charge, royalty-free license to:

    * make, have made, use, sell, offer to sell, import, and otherwise transfer
    your contribution in whole or in part, alone or in combination with or
    included in any product, work or materials arising out of the project to
    which your contribution was submitted, and

    * at our option, to sublicense these same rights to third parties through
    multiple levels of sublicensees or other licensing arrangements.

4. Except as set out above, you keep all right, title, and interest in your
contribution. The rights that you grant to us under these terms are effective
on the date you first submitted a contribution to us, even if your submission
took place before the date you sign these terms.

5. You covenant, represent, warrant and agree that:

    * each contribution that you submit is and shall be an original work of
    authorship and you can legally grant the rights set out in this SCA;

    * to the best of your knowledge, each contribution will not violate any
    third party's copyrights, trademarks, patents, or other intellectual
    property rights; and

    * each contribution shall be in compliance with U.S. export control laws and
    other applicable export and import laws. You agree to notify us if you
    become aware of any circumstance which would make any of the foregoing
    representations inaccurate in any respect. We may publicly disclose your
    participation in the project, including the fact that you have signed the SCA.

6. This SCA is governed by the laws of the State of California and applicable
U.S. Federal law. Any choice of law rules will not apply.

7. Please place an “x” on one of the applicable statements below. Please do NOT
mark both statements:

    * [x] I am signing on behalf of myself as an individual and no other person
    or entity, including my employer, has or will have rights with respect to my
    contributions.

    * [ ] I am signing on behalf of my employer or a legal entity and I have the
    actual authority to contractually bind that entity.

## Contributor Details

| Field                          | Entry                |
| ------------------------------ | -------------------- |
| Name                           | Matthew Chin         |
| Company name (if applicable)   |                      |
| Title or role (if applicable)  |                      |
| Date                           | 2020-09-22           |
| GitHub username                | delzac               |
| Website (optional)             |                      |
.github/contributors/florijanstamenkovic.md (vendored, new file, 106 lines)

@@ -0,0 +1,106 @@
(Same spaCy contributor agreement text as in delzac.md above, followed by the contributor details.)
## Contributor Details

| Field                          | Entry                |
| ------------------------------ | -------------------- |
| Name                           | Florijan Stamenkovic |
| Company name (if applicable)   |                      |
| Title or role (if applicable)  |                      |
| Date                           | 2020-10-05           |
| GitHub username                | florijanstamenkovic  |
| Website (optional)             |                      |
.github/contributors/zaibacu.md (vendored, new file, 106 lines)

@@ -0,0 +1,106 @@
(Same spaCy contributor agreement text as in delzac.md above, followed by the contributor details.)
## Contributor Details

| Field                          | Entry                |
| ------------------------------ | -------------------- |
| Name                           | Šarūnas Navickas     |
| Company name (if applicable)   | TokenMill            |
| Title or role (if applicable)  | Data Engineer        |
| Date                           | 2020-09-24           |
| GitHub username                | zaibacu              |
| Website (optional)             |                      |
@@ -1,5 +1,6 @@
 from .tokenizer_exceptions import TOKENIZER_EXCEPTIONS
 from .stop_words import STOP_WORDS
+from .syntax_iterators import SYNTAX_ITERATORS
 from .lex_attrs import LEX_ATTRS
 from ...language import Language


@@ -8,6 +9,7 @@ class TurkishDefaults(Language.Defaults):
     tokenizer_exceptions = TOKENIZER_EXCEPTIONS
     lex_attr_getters = LEX_ATTRS
     stop_words = STOP_WORDS
+    syntax_iterators = SYNTAX_ITERATORS


 class Turkish(Language):
@@ -32,6 +32,36 @@ _num_words = [
 ]


+_ordinal_words = [
+    "birinci",
+    "ikinci",
+    "üçüncü",
+    "dördüncü",
+    "beşinci",
+    "altıncı",
+    "yedinci",
+    "sekizinci",
+    "dokuzuncu",
+    "onuncu",
+    "yirminci",
+    "otuzuncu",
+    "kırkıncı",
+    "ellinci",
+    "altmışıncı",
+    "yetmişinci",
+    "sekseninci",
+    "doksanıncı",
+    "yüzüncü",
+    "bininci",
+    "milyonuncu",
+    "milyarıncı",
+    "trilyonuncu",
+    "katrilyonuncu",
+    "kentilyonuncu",
+]
+
+_ordinal_endings = ("inci", "ıncı", "nci", "ncı", "uncu", "üncü")
+
 def like_num(text):
     if text.startswith(("+", "-", "±", "~")):
         text = text[1:]
@@ -42,8 +72,20 @@ def like_num(text):
         num, denom = text.split("/")
         if num.isdigit() and denom.isdigit():
             return True
-    if text.lower() in _num_words:
+
+    text_lower = text.lower()
+
+    # Check cardinal number
+    if text_lower in _num_words:
         return True
+
+    # Check ordinal number
+    if text_lower in _ordinal_words:
+        return True
+    if text_lower.endswith(_ordinal_endings):
+        if text_lower[:-3].isdigit() or text_lower[:-4].isdigit():
+            return True
+
     return False
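The effect of the change is easiest to see by calling `like_num` directly. A minimal check, mirroring the values exercised in the new `test_text.py` further down:

```python
from spacy.lang.tr.lex_attrs import like_num

# Cardinals still match via _num_words
assert like_num("iki")
# Spelled-out ordinals are looked up in _ordinal_words
assert like_num("üçüncü")
# Digit + ordinal suffix forms are matched via _ordinal_endings
assert like_num("8inci")
assert like_num("100üncü")
```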
spacy/lang/tr/syntax_iterators.py (new file, 59 lines)

@@ -0,0 +1,59 @@
# coding: utf8
from __future__ import unicode_literals

from ...symbols import NOUN, PROPN, PRON
from ...errors import Errors


def noun_chunks(doclike):
    """
    Detect base noun phrases from a dependency parse. Works on both Doc and Span.
    """
    # Please see documentation for Turkish NP structure
    labels = [
        "nsubj",
        "iobj",
        "obj",
        "obl",
        "appos",
        "orphan",
        "dislocated",
        "ROOT",
    ]
    doc = doclike.doc  # Ensure works on both Doc and Span.
    if not doc.has_annotation("DEP"):
        raise ValueError(Errors.E029)

    np_deps = [doc.vocab.strings.add(label) for label in labels]
    conj = doc.vocab.strings.add("conj")
    flat = doc.vocab.strings.add("flat")
    np_label = doc.vocab.strings.add("NP")

    def extend_right(w):  # Playing a trick for flat
        rindex = w.i + 1
        for rdep in doc[w.i].rights:  # Extend the span to right if there is a flat
            if rdep.dep == flat and rdep.pos in (NOUN, PROPN):
                rindex = rdep.i + 1
            else:
                break
        return rindex

    prev_end = len(doc) + 1
    for i, word in reversed(list(enumerate(doclike))):
        if word.pos not in (NOUN, PROPN, PRON):
            continue
        # Prevent nested chunks from being produced
        if word.i >= prev_end:
            continue
        if word.dep in np_deps:
            prev_end = word.left_edge.i
            yield word.left_edge.i, extend_right(word), np_label
        elif word.dep == conj:
            cc_token = word.left_edge
            prev_end = cc_token.i
            # Shave off cc tokens from the NP
            yield cc_token.right_edge.i + 1, extend_right(word), np_label


SYNTAX_ITERATORS = {"noun_chunks": noun_chunks}
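With `syntax_iterators` registered on `TurkishDefaults` (see the `__init__` hunk above), `Doc.noun_chunks` works for Turkish as soon as dependency annotation is present. A minimal sketch in the style of the tests below, supplying the parse by hand instead of running a trained parser:

```python
from spacy.lang.tr import Turkish
from spacy.tokens import Doc

nlp = Turkish()
words = ["sarı", "kedi"]  # "yellow cat"
# Hand-annotated POS, heads and dependency labels stand in for a parser here
doc = Doc(nlp.vocab, words=words, pos=["ADJ", "NOUN"], heads=[1, 1], deps=["amod", "ROOT"])
print([chunk.text for chunk in doc.noun_chunks])  # ['sarı kedi']
```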
@@ -239,6 +239,9 @@ def th_tokenizer():
 def tr_tokenizer():
     return get_lang_class("tr")().tokenizer


+@pytest.fixture(scope="session")
+def tr_vocab():
+    return get_lang_class("tr").Defaults.create_vocab()
+
+
 @pytest.fixture(scope="session")
 def tt_tokenizer():
@@ -606,3 +606,16 @@ def test_doc_init_iob():
     ents = [0, "B-", "O", "I-PERSON", "I-GPE"]
     with pytest.raises(ValueError):
         doc = Doc(Vocab(), words=words, ents=ents)
+
+
+@pytest.mark.xfail
+def test_doc_set_ents_spans(en_tokenizer):
+    doc = en_tokenizer("Some text about Colombia and the Czech Republic")
+    spans = [Span(doc, 3, 4, label="GPE"), Span(doc, 6, 8, label="GPE")]
+    with doc.retokenize() as retokenizer:
+        for span in spans:
+            retokenizer.merge(span)
+    # If this line is uncommented, it works:
+    # print(spans)
+    doc.ents = spans
+    assert [ent.text for ent in doc.ents] == ["Colombia", "Czech Republic"]
spacy/tests/lang/tr/test_noun_chunks.py (new file, 12 lines)

@@ -0,0 +1,12 @@
import pytest


def test_noun_chunks_is_parsed(tr_tokenizer):
    """Test that noun_chunks raises a ValueError for the 'tr' language if the Doc is
    not parsed. We construct a Doc with a new Vocab here, without dependency
    annotation, to make sure the noun chunk iterator doesn't run.
    """
    doc = tr_tokenizer("Dün seni gördüm.")
    with pytest.raises(ValueError):
        list(doc.noun_chunks)
spacy/tests/lang/tr/test_parser.py (new file, 570 lines)

@@ -0,0 +1,570 @@
from spacy.tokens import Doc


def test_tr_noun_chunks_amod_simple(tr_tokenizer):
    text = "sarı kedi"
    heads = [1, 1]
    deps = ["amod", "ROOT"]
    pos = ["ADJ", "NOUN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "sarı kedi "


def test_tr_noun_chunks_nmod_simple(tr_tokenizer):
    text = "arkadaşımın kedisi"  # my friend's cat
    heads = [1, 1]
    deps = ["nmod", "ROOT"]
    pos = ["NOUN", "NOUN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "arkadaşımın kedisi "


def test_tr_noun_chunks_determiner_simple(tr_tokenizer):
    text = "O kedi"  # that cat
    heads = [1, 1]
    deps = ["det", "ROOT"]
    pos = ["DET", "NOUN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "O kedi "


def test_tr_noun_chunks_nmod_amod(tr_tokenizer):
    text = "okulun eski müdürü"
    heads = [2, 2, 2]
    deps = ["nmod", "amod", "ROOT"]
    pos = ["NOUN", "ADJ", "NOUN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "okulun eski müdürü "


def test_tr_noun_chunks_one_det_one_adj_simple(tr_tokenizer):
    text = "O sarı kedi"
    heads = [2, 2, 2]
    deps = ["det", "amod", "ROOT"]
    pos = ["DET", "ADJ", "NOUN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "O sarı kedi "


def test_tr_noun_chunks_two_adjs_simple(tr_tokenizer):
    text = "beyaz tombik kedi"
    heads = [2, 2, 2]
    deps = ["amod", "amod", "ROOT"]
    pos = ["ADJ", "ADJ", "NOUN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "beyaz tombik kedi "


def test_tr_noun_chunks_one_det_two_adjs_simple(tr_tokenizer):
    text = "o beyaz tombik kedi"
    heads = [3, 3, 3, 3]
    deps = ["det", "amod", "amod", "ROOT"]
    pos = ["DET", "ADJ", "ADJ", "NOUN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "o beyaz tombik kedi "


def test_tr_noun_chunks_nmod_two(tr_tokenizer):
    text = "kızın saçının rengi"
    heads = [1, 2, 2]
    deps = ["nmod", "nmod", "ROOT"]
    pos = ["NOUN", "NOUN", "NOUN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "kızın saçının rengi "


def test_tr_noun_chunks_chain_nmod_with_adj(tr_tokenizer):
    text = "ev sahibinin tatlı köpeği"
    heads = [1, 3, 3, 3]
    deps = ["nmod", "nmod", "amod", "ROOT"]
    pos = ["NOUN", "NOUN", "ADJ", "NOUN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "ev sahibinin tatlı köpeği "


def test_tr_noun_chunks_chain_nmod_with_acl(tr_tokenizer):
    text = "ev sahibinin gelen köpeği"
    heads = [1, 3, 3, 3]
    deps = ["nmod", "nmod", "acl", "ROOT"]
    pos = ["NOUN", "NOUN", "VERB", "NOUN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "ev sahibinin gelen köpeği "


def test_tr_noun_chunks_chain_nmod_head_with_amod_acl(tr_tokenizer):
    text = "arabanın kırdığım sol aynası"
    heads = [3, 3, 3, 3]
    deps = ["nmod", "acl", "amod", "ROOT"]
    pos = ["NOUN", "VERB", "ADJ", "NOUN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "arabanın kırdığım sol aynası "


def test_tr_noun_chunks_nmod_three(tr_tokenizer):
    text = "güney Afrika ülkelerinden Mozambik"
    heads = [1, 2, 3, 3]
    deps = ["nmod", "nmod", "nmod", "ROOT"]
    pos = ["NOUN", "PROPN", "NOUN", "PROPN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "güney Afrika ülkelerinden Mozambik "


def test_tr_noun_chunks_det_amod_nmod(tr_tokenizer):
    text = "bazı eski oyun kuralları"
    heads = [3, 3, 3, 3]
    deps = ["det", "nmod", "nmod", "ROOT"]
    pos = ["DET", "ADJ", "NOUN", "NOUN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "bazı eski oyun kuralları "


def test_tr_noun_chunks_acl_simple(tr_tokenizer):
    text = "bahçesi olan okul"
    heads = [2, 0, 2]
    deps = ["acl", "cop", "ROOT"]
    pos = ["NOUN", "AUX", "NOUN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "bahçesi olan okul "


def test_tr_noun_chunks_acl_verb(tr_tokenizer):
    text = "sevdiğim sanatçılar"
    heads = [1, 1]
    deps = ["acl", "ROOT"]
    pos = ["VERB", "NOUN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "sevdiğim sanatçılar "


def test_tr_noun_chunks_acl_nmod(tr_tokenizer):
    text = "en sevdiğim ses sanatçısı"
    heads = [1, 3, 3, 3]
    deps = ["advmod", "acl", "nmod", "ROOT"]
    pos = ["ADV", "VERB", "NOUN", "NOUN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "en sevdiğim ses sanatçısı "


def test_tr_noun_chunks_acl_nmod(tr_tokenizer):
    text = "bildiğim bir turizm şirketi"
    heads = [3, 3, 3, 3]
    deps = ["acl", "det", "nmod", "ROOT"]
    pos = ["VERB", "DET", "NOUN", "NOUN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "bildiğim bir turizm şirketi "


def test_tr_noun_chunks_np_recursive_nsubj_to_root(tr_tokenizer):
    text = "Simge'nin okuduğu kitap"
    heads = [1, 2, 2]
    deps = ["nsubj", "acl", "ROOT"]
    pos = ["PROPN", "VERB", "NOUN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "Simge'nin okuduğu kitap "


def test_tr_noun_chunks_np_recursive_nsubj_attached_to_pron_root(tr_tokenizer):
    text = "Simge'nin konuşabileceği birisi"
    heads = [1, 2, 2]
    deps = ["nsubj", "acl", "ROOT"]
    pos = ["PROPN", "VERB", "PRON"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "Simge'nin konuşabileceği birisi "


def test_tr_noun_chunks_np_recursive_nsubj_in_subnp(tr_tokenizer):
    text = "Simge'nin yarın gideceği yer"
    heads = [2, 2, 3, 3]
    deps = ["nsubj", "obl", "acl", "ROOT"]
    pos = ["PROPN", "NOUN", "VERB", "NOUN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "Simge'nin yarın gideceği yer "


def test_tr_noun_chunks_np_recursive_two_nmods(tr_tokenizer):
    text = "ustanın kapısını degiştireceği çamasır makinası"
    heads = [2, 2, 4, 4, 4]
    deps = ["nsubj", "obj", "acl", "nmod", "ROOT"]
    pos = ["NOUN", "NOUN", "VERB", "NOUN", "NOUN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "ustanın kapısını degiştireceği çamasır makinası "


def test_tr_noun_chunks_np_recursive_four_nouns(tr_tokenizer):
    text = "kızına piyano dersi verdiğim hanım"
    heads = [3, 2, 3, 4, 4]
    deps = ["obl", "nmod", "obj", "acl", "ROOT"]
    pos = ["NOUN", "NOUN", "NOUN", "VERB", "NOUN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "kızına piyano dersi verdiğim hanım "


def test_tr_noun_chunks_np_recursive_no_nmod(tr_tokenizer):
    text = "içine birkaç çiçek konmuş olan bir vazo"
    heads = [3, 2, 3, 6, 3, 6, 6]
    deps = ["obl", "det", "nsubj", "acl", "aux", "det", "ROOT"]
    pos = ["ADP", "DET", "NOUN", "VERB", "AUX", "DET", "NOUN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "içine birkaç çiçek konmuş olan bir vazo "


def test_tr_noun_chunks_np_recursive_long_two_acls(tr_tokenizer):
    text = "içine Simge'nin bahçesinden toplanmış birkaç çiçeğin konmuş olduğu bir vazo"
    heads = [6, 2, 3, 5, 5, 6, 9, 6, 9, 9]
    deps = ["obl", "nmod", "obl", "acl", "det", "nsubj", "acl", "aux", "det", "ROOT"]
    pos = ["ADP", "PROPN", "NOUN", "VERB", "DET", "NOUN", "VERB", "AUX", "DET", "NOUN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "içine Simge'nin bahçesinden toplanmış birkaç çiçeğin konmuş olduğu bir vazo "


def test_tr_noun_chunks_two_nouns_in_nmod(tr_tokenizer):
    text = "kız ve erkek çocuklar"
    heads = [3, 2, 0, 3]
    deps = ["nmod", "cc", "conj", "ROOT"]
    pos = ["NOUN", "CCONJ", "NOUN", "NOUN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "kız ve erkek çocuklar "


def test_tr_noun_chunks_two_nouns_in_nmod(tr_tokenizer):
    text = "tatlı ve gürbüz çocuklar"
    heads = [3, 2, 0, 3]
    deps = ["amod", "cc", "conj", "ROOT"]
    pos = ["ADJ", "CCONJ", "NOUN", "NOUN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "tatlı ve gürbüz çocuklar "


def test_tr_noun_chunks_conj_simple(tr_tokenizer):
    text = "Sen ya da ben"
    heads = [0, 3, 1, 0]
    deps = ["ROOT", "cc", "fixed", "conj"]
    pos = ["PRON", "CCONJ", "CCONJ", "PRON"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 2
    assert chunks[0].text_with_ws == "ben "
    assert chunks[1].text_with_ws == "Sen "


def test_tr_noun_chunks_conj_three(tr_tokenizer):
    text = "sen, ben ve ondan"
    heads = [0, 2, 0, 4, 0]
    deps = ["ROOT", "punct", "conj", "cc", "conj"]
    pos = ["PRON", "PUNCT", "PRON", "CCONJ", "PRON"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 3
    assert chunks[0].text_with_ws == "ondan "
    assert chunks[1].text_with_ws == "ben "
    assert chunks[2].text_with_ws == "sen "


def test_tr_noun_chunks_conj_three(tr_tokenizer):
    text = "ben ya da sen ya da onlar"
    heads = [0, 3, 1, 0, 6, 4, 3]
    deps = ["ROOT", "cc", "fixed", "conj", "cc", "fixed", "conj"]
    pos = ["PRON", "CCONJ", "CCONJ", "PRON", "CCONJ", "CCONJ", "PRON"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 3
    assert chunks[0].text_with_ws == "onlar "
    assert chunks[1].text_with_ws == "sen "
    assert chunks[2].text_with_ws == "ben "


def test_tr_noun_chunks_conj_and_adj_phrase(tr_tokenizer):
    text = "ben ve akıllı çocuk"
    heads = [0, 3, 3, 0]
    deps = ["ROOT", "cc", "amod", "conj"]
    pos = ["PRON", "CCONJ", "ADJ", "NOUN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 2
    assert chunks[0].text_with_ws == "akıllı çocuk "
    assert chunks[1].text_with_ws == "ben "


def test_tr_noun_chunks_conj_fixed_adj_phrase(tr_tokenizer):
    text = "ben ya da akıllı çocuk"
    heads = [0, 4, 1, 4, 0]
    deps = ["ROOT", "cc", "fixed", "amod", "conj"]
    pos = ["PRON", "CCONJ", "CCONJ", "ADJ", "NOUN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 2
    assert chunks[0].text_with_ws == "akıllı çocuk "
    assert chunks[1].text_with_ws == "ben "


def test_tr_noun_chunks_conj_subject(tr_tokenizer):
    text = "Sen ve ben iyi anlaşıyoruz"
    heads = [4, 2, 0, 2, 4]
    deps = ["nsubj", "cc", "conj", "adv", "ROOT"]
    pos = ["PRON", "CCONJ", "PRON", "ADV", "VERB"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 2
    assert chunks[0].text_with_ws == "ben "
    assert chunks[1].text_with_ws == "Sen "


def test_tr_noun_chunks_conj_noun_head_verb(tr_tokenizer):
    text = "Simge babasını görmüyormuş, annesini değil"
    heads = [2, 2, 2, 4, 2, 4]
    deps = ["nsubj", "obj", "ROOT", "punct", "conj", "aux"]
    pos = ["PROPN", "NOUN", "VERB", "PUNCT", "NOUN", "AUX"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 3
    assert chunks[0].text_with_ws == "annesini "
    assert chunks[1].text_with_ws == "babasını "
    assert chunks[2].text_with_ws == "Simge "


def test_tr_noun_chunks_flat_simple(tr_tokenizer):
    text = "New York"
    heads = [0, 0]
    deps = ["ROOT", "flat"]
    pos = ["PROPN", "PROPN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "New York "


def test_tr_noun_chunks_flat_names_and_title(tr_tokenizer):
    text = "Gazi Mustafa Kemal"
    heads = [1, 1, 1]
    deps = ["nmod", "ROOT", "flat"]
    pos = ["PROPN", "PROPN", "PROPN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "Gazi Mustafa Kemal "


def test_tr_noun_chunks_flat_names_and_title(tr_tokenizer):
    text = "Ahmet Vefik Paşa"
    heads = [2, 0, 2]
    deps = ["nmod", "flat", "ROOT"]
    pos = ["PROPN", "PROPN", "PROPN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "Ahmet Vefik Paşa "


def test_tr_noun_chunks_flat_name_lastname_and_title(tr_tokenizer):
    text = "Cumhurbaşkanı Ahmet Necdet Sezer"
    heads = [1, 1, 1, 1]
    deps = ["nmod", "ROOT", "flat", "flat"]
    pos = ["NOUN", "PROPN", "PROPN", "PROPN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "Cumhurbaşkanı Ahmet Necdet Sezer "


def test_tr_noun_chunks_flat_in_nmod(tr_tokenizer):
    text = "Ahmet Sezer adında bir ögrenci"
    heads = [2, 0, 4, 4, 4]
    deps = ["nmod", "flat", "nmod", "det", "ROOT"]
    pos = ["PROPN", "PROPN", "NOUN", "DET", "NOUN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "Ahmet Sezer adında bir ögrenci "


def test_tr_noun_chunks_flat_and_chain_nmod(tr_tokenizer):
    text = "Batı Afrika ülkelerinden Sierra Leone"
    heads = [1, 2, 3, 3, 3]
    deps = ["nmod", "nmod", "nmod", "ROOT", "flat"]
    pos = ["NOUN", "PROPN", "NOUN", "PROPN", "PROPN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 1
    assert chunks[0].text_with_ws == "Batı Afrika ülkelerinden Sierra Leone "


def test_tr_noun_chunks_two_flats_conjed(tr_tokenizer):
    text = "New York ve Sierra Leone"
    heads = [0, 0, 3, 0, 3]
    deps = ["ROOT", "flat", "cc", "conj", "flat"]
    pos = ["PROPN", "PROPN", "CCONJ", "PROPN", "PROPN"]
    tokens = tr_tokenizer(text)
    doc = Doc(
        tokens.vocab, words=[t.text for t in tokens], pos=pos, heads=heads, deps=deps
    )
    chunks = list(doc.noun_chunks)
    assert len(chunks) == 2
    assert chunks[0].text_with_ws == "Sierra Leone "
    assert chunks[1].text_with_ws == "New York "
spacy/tests/lang/tr/test_text.py (new file, 29 lines)

@@ -0,0 +1,29 @@
import pytest
from spacy.lang.tr.lex_attrs import like_num


@pytest.mark.parametrize(
    "word",
    [
        "bir",
        "iki",
        "dört",
        "altı",
        "milyon",
        "100",
        "birinci",
        "üçüncü",
        "beşinci",
        "100üncü",
        "8inci",
    ],
)
def test_tr_lex_attrs_like_number_cardinal_ordinal(word):
    assert like_num(word)


@pytest.mark.parametrize("word", ["beş", "yedi", "yedinci", "birinci"])
def test_tr_lex_attrs_capitals(word):
    assert like_num(word)
    assert like_num(word.upper())
spacy/tests/regression/test_issue6207.py (new file, 15 lines)

@@ -0,0 +1,15 @@
from spacy.util import filter_spans


def test_issue6207(en_tokenizer):
    doc = en_tokenizer("zero one two three four five six")

    # Make spans
    s1 = doc[:4]
    s2 = doc[3:6]  # overlaps with s1
    s3 = doc[5:7]  # overlaps with s2, not s1

    result = filter_spans((s1, s2, s3))
    assert s1 in result
    assert s2 not in result
    assert s3 in result
@@ -1018,7 +1018,7 @@ def filter_spans(spans: Iterable["Span"]) -> List["Span"]:
         # Check for end - 1 here because boundaries are inclusive
         if span.start not in seen_tokens and span.end - 1 not in seen_tokens:
             result.append(span)
         seen_tokens.update(range(span.start, span.end))
     result = sorted(result, key=lambda span: span.start)
     return result
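For reference, a short sketch of how `filter_spans` behaves on the overlapping spans used in the regression test above: longer spans are kept first, and remaining spans survive only if they do not overlap anything already selected.

```python
import spacy
from spacy.util import filter_spans

nlp = spacy.blank("en")
doc = nlp("zero one two three four five six")
s1, s2, s3 = doc[:4], doc[3:6], doc[5:7]
# s2 overlaps the longer s1 and is dropped; s3 only overlaps s2, so it stays
print([span.text for span in filter_spans([s1, s2, s3])])
# ['zero one two three', 'five six']
```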
@@ -643,7 +643,7 @@ Debug a Thinc [`Model`](https://thinc.ai/docs/api-model) by running it on a
 sample text and checking how it updates its internal weights and parameters.

 ```cli
-$ python -m spacy debug model [config_path] [component] [--layers] [-DIM] [-PAR] [-GRAD] [-ATTR] [-P0] [-P1] [-P2] [P3] [--gpu-id]
+$ python -m spacy debug model [config_path] [component] [--layers] [--dimensions] [--parameters] [--gradients] [--attributes] [--print-step0] [--print-step1] [--print-step2] [--print-step3] [--gpu-id]
 ```

 <Accordion title="Example outputs" spaced>
@@ -232,7 +232,9 @@ transformers as subnetworks directly, you can also use them via the

 The `Transformer` component sets the
 [`Doc._.trf_data`](/api/transformer#custom_attributes) extension attribute,
-which lets you access the transformers outputs at runtime.
+which lets you access the transformers outputs at runtime. The trained
+transformer-based [pipelines](/models) provided by spaCy end on `_trf`, e.g.
+[`en_core_web_trf`](/models/en#en_core_web_trf).

 ```cli
 $ python -m spacy download en_core_web_trf
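As a rough sketch of what that attribute gives you at runtime (assuming `spacy-transformers` and the `en_core_web_trf` pipeline are installed; field names may differ slightly across `spacy-transformers` versions):

```python
import spacy

nlp = spacy.load("en_core_web_trf")
doc = nlp("spaCy v3 ships transformer-based pipelines.")
# TransformerData bundles the wordpiece batch, output tensors and token alignment
trf_data = doc._.trf_data
print([tensor.shape for tensor in trf_data.tensors])
```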
@@ -1656,9 +1656,10 @@ because it only requires annotated sentence boundaries rather than full
 dependency parses. spaCy's [trained pipelines](/models) include both a parser
 and a trained sentence segmenter, which is
 [disabled](/usage/processing-pipelines#disabling) by default. If you only need
-sentence boundaries and no parser, you can use the `enable` and `disable`
-arguments on [`spacy.load`](/api/top-level#spacy.load) to enable the senter and
-disable the parser.
+sentence boundaries and no parser, you can use the `exclude` or `disable`
+argument on [`spacy.load`](/api/top-level#spacy.load) to load the pipeline
+without the parser and then enable the sentence recognizer explicitly with
+[`nlp.enable_pipe`](/api/language#enable_pipe).

 > #### senter vs. parser
 >

@@ -1670,7 +1671,8 @@ disable the parser.
 ### {executable="true"}
 import spacy

-nlp = spacy.load("en_core_web_sm", enable=["senter"], disable=["parser"])
+nlp = spacy.load("en_core_web_sm", exclude=["parser"])
+nlp.enable_pipe("senter")
 doc = nlp("This is a sentence. This is another sentence.")
 for sent in doc.sents:
     print(sent.text)

@@ -1734,7 +1736,7 @@ nlp = spacy.load("en_core_web_sm")
 doc = nlp(text)
 print("Before:", [sent.text for sent in doc.sents])

-@Language.component("set_custom_coundaries")
+@Language.component("set_custom_boundaries")
 def set_custom_boundaries(doc):
     for token in doc[:-1]:
         if token.text == "...":
@@ -1159,7 +1159,8 @@ class DebugComponent:
         self.logger.info(f"Pipeline: {nlp.pipe_names}")

     def __call__(self, doc: Doc) -> Doc:
-        self.logger.debug(f"Doc: {len(doc)} tokens, is_tagged: {doc.is_tagged}")
+        is_tagged = doc.has_annotation("TAG")
+        self.logger.debug(f"Doc: {len(doc)} tokens, is tagged: {is_tagged}")
         return doc

 nlp = spacy.load("en_core_web_sm")
@@ -167,6 +167,7 @@ rule-based matching are:
 | `IS_ALPHA`, `IS_ASCII`, `IS_DIGIT` | Token text consists of alphabetic characters, ASCII characters, digits. ~~bool~~ |
 | `IS_LOWER`, `IS_UPPER`, `IS_TITLE` | Token text is in lowercase, uppercase, titlecase. ~~bool~~ |
 | `IS_PUNCT`, `IS_SPACE`, `IS_STOP` | Token is punctuation, whitespace, stop word. ~~bool~~ |
+| `IS_SENT_START` | Token is start of sentence. ~~bool~~ |
 | `LIKE_NUM`, `LIKE_URL`, `LIKE_EMAIL` | Token text resembles a number, URL, email. ~~bool~~ |
 | `POS`, `TAG`, `MORPH`, `DEP`, `LEMMA`, `SHAPE` | The token's simple and extended part-of-speech tag, morphological analysis, dependency label, lemma, shape. ~~str~~ |
 | `ENT_TYPE` | The token's entity label. ~~str~~ |

@@ -837,7 +838,7 @@ nlp = spacy.load("en_core_web_sm")
 matcher = Matcher(nlp.vocab)

 # Add pattern for valid hashtag, i.e. '#' plus any ASCII token
-matcher.add("HASHTAG", None, [{"ORTH": "#"}, {"IS_ASCII": True}])
+matcher.add("HASHTAG", [[{"ORTH": "#"}, {"IS_ASCII": True}]])

 # Register token extension
 Token.set_extension("is_hashtag", default=False)
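To show the rest of that pattern in action with the v3 `Matcher.add` signature, a small sketch (the sample text and the final print are illustrative and not part of the docs diff):

```python
import spacy
from spacy.matcher import Matcher
from spacy.tokens import Token

nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab)
# v3 signature: a list of pattern lists, with no on_match callback in second position
matcher.add("HASHTAG", [[{"ORTH": "#"}, {"IS_ASCII": True}]])
Token.set_extension("is_hashtag", default=False)

doc = nlp("Real-time updates at #spacy and #nlproc")
for match_id, start, end in matcher(doc):
    for token in doc[start:end]:
        token._.is_hashtag = True
print([t.text for t in doc if t._.is_hashtag])
```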
@@ -285,6 +285,7 @@ add to your pipeline and customize for your use case:
 | [`Lemmatizer`](/api/lemmatizer) | Standalone component for rule-based and lookup lemmatization. |
 | [`AttributeRuler`](/api/attributeruler) | Component for setting token attributes using match patterns. |
 | [`Transformer`](/api/transformer) | Component for using [transformer models](/usage/embeddings-transformers) in your pipeline, accessing outputs and aligning tokens. Provided via [`spacy-transformers`](https://github.com/explosion/spacy-transformers). |
+| [`TrainablePipe`](/api/pipe) | Base class for trainable pipeline components. |

 <Infobox title="Details & Documentation" emoji="📖" list>
@@ -396,8 +397,8 @@ type-check model definitions.
 For data validation, spaCy v3.0 adopts
 [`pydantic`](https://github.com/samuelcolvin/pydantic). It also powers the data
 validation of Thinc's [config system](https://thinc.ai/docs/usage-config), which
-lets you register **custom functions with typed arguments**, reference them
-in your config and see validation errors if the argument values don't match.
+lets you register **custom functions with typed arguments**, reference them in
+your config and see validation errors if the argument values don't match.

 <Infobox title="Details & Documentation" emoji="📖" list>
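A small sketch of that registration pattern, with a made-up function name and config block for illustration: a function registered with typed arguments can be referenced from a config string, and passing a value of the wrong type raises a validation error when the block is resolved.

```python
import spacy
from thinc.api import Config

@spacy.registry.misc("example_span_limit.v1")
def create_span_limit(max_length: int):
    # max_length is typed, so e.g. max_length = "five" in the config fails validation
    def is_short_enough(span) -> bool:
        return len(span) <= max_length
    return is_short_enough

cfg = Config().from_str("""
[span_limit]
@misc = "example_span_limit.v1"
max_length = 5
""")
resolved = spacy.registry.resolve(cfg)
print(resolved["span_limit"])
```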
@@ -2542,6 +2542,42 @@
         "author_links": {
             "github": "abchapman93"
         }
+    },
+    {
+        "id": "rita-dsl",
+        "title": "RITA DSL",
+        "slogan": "Domain Specific Language for creating language rules",
+        "github": "zaibacu/rita-dsl",
+        "description": "A Domain Specific Language (DSL) for building language patterns. These can be later compiled into spaCy patterns, pure regex, or any other format",
+        "pip": "rita-dsl",
+        "thumb": "https://raw.githubusercontent.com/zaibacu/rita-dsl/master/docs/assets/logo-100px.png",
+        "code_language": "python",
+        "code_example": [
+            "import spacy",
+            "from rita.shortcuts import setup_spacy",
+            "",
+            "rules = \"\"\"",
+            "cuts = {\"fitted\", \"wide-cut\"}",
+            "lengths = {\"short\", \"long\", \"calf-length\", \"knee-length\"}",
+            "fabric_types = {\"soft\", \"airy\", \"crinkled\"}",
+            "fabrics = {\"velour\", \"chiffon\", \"knit\", \"woven\", \"stretch\"}",
+            "",
+            "{IN_LIST(cuts)?, IN_LIST(lengths), WORD(\"dress\")}->MARK(\"DRESS_TYPE\")",
+            "{IN_LIST(lengths), IN_LIST(cuts), WORD(\"dress\")}->MARK(\"DRESS_TYPE\")",
+            "{IN_LIST(fabric_types)?, IN_LIST(fabrics)}->MARK(\"DRESS_FABRIC\")",
+            "\"\"\"",
+            "",
+            "nlp = spacy.load(\"en\")",
+            "setup_spacy(nlp, rules_string=rules)",
+            "r = nlp(\"She was wearing a short wide-cut dress\")",
+            "print(list([{\"label\": e.label_, \"text\": e.text} for e in r.ents]))"
+        ],
+        "category": ["standalone"],
+        "tags": ["dsl", "language-patterns", "language-rules", "nlp"],
+        "author": "Šarūnas Navickas",
+        "author_links": {
+            "github": "zaibacu"
+        }
     }
 ],