Merge remote-tracking branch 'upstream/master' into spacy.io

svlandeg committed on 2021-07-20 16:14:16 +02:00 (commit f4f270940c)
54 changed files with 1600 additions and 357 deletions

106
.github/contributors/jmyerston.md vendored Normal file

@ -0,0 +1,106 @@
# spaCy contributor agreement
This spaCy Contributor Agreement (**"SCA"**) is based on the
[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
The SCA applies to any contribution that you make to any product or project
managed by us (the **"project"**), and sets out the intellectual property rights
you grant to us in the contributed materials. The term **"us"** shall mean
[ExplosionAI GmbH](https://explosion.ai/legal). The term
**"you"** shall mean the person or entity identified below.
If you agree to be bound by these terms, fill in the information requested
below and include the filled-in version with your first pull request, under the
folder [`.github/contributors/`](/.github/contributors/). The name of the file
should be your GitHub username, with the extension `.md`. For example, the user
example_user would create the file `.github/contributors/example_user.md`.
Read this agreement carefully before signing. These terms and conditions
constitute a binding legal agreement.
## Contributor Agreement
1. The term "contribution" or "contributed materials" means any source code,
object code, patch, tool, sample, graphic, specification, manual,
documentation, or any other material posted or submitted by you to the project.
2. With respect to any worldwide copyrights, or copyright applications and
registrations, in your contribution:
* you hereby assign to us joint ownership, and to the extent that such
assignment is or becomes invalid, ineffective or unenforceable, you hereby
grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
royalty-free, unrestricted license to exercise all rights under those
copyrights. This includes, at our option, the right to sublicense these same
rights to third parties through multiple levels of sublicensees or other
licensing arrangements;
* you agree that each of us can do all things in relation to your
contribution as if each of us were the sole owners, and if one of us makes
a derivative work of your contribution, the one who makes the derivative
work (or has it made) will be the sole owner of that derivative work;
* you agree that you will not assert any moral rights in your contribution
against us, our licensees or transferees;
* you agree that we may register a copyright in your contribution and
exercise all ownership rights associated with it; and
* you agree that neither of us has any duty to consult with, obtain the
consent of, pay or render an accounting to the other for any use or
distribution of your contribution.
3. With respect to any patents you own, or that you can license without payment
to any third party, you hereby grant to us a perpetual, irrevocable,
non-exclusive, worldwide, no-charge, royalty-free license to:
* make, have made, use, sell, offer to sell, import, and otherwise transfer
your contribution in whole or in part, alone or in combination with or
included in any product, work or materials arising out of the project to
which your contribution was submitted, and
* at our option, to sublicense these same rights to third parties through
multiple levels of sublicensees or other licensing arrangements.
4. Except as set out above, you keep all right, title, and interest in your
contribution. The rights that you grant to us under these terms are effective
on the date you first submitted a contribution to us, even if your submission
took place before the date you sign these terms.
5. You covenant, represent, warrant and agree that:
* Each contribution that you submit is and shall be an original work of
authorship and you can legally grant the rights set out in this SCA;
* to the best of your knowledge, each contribution will not violate any
third party's copyrights, trademarks, patents, or other intellectual
property rights; and
* each contribution shall be in compliance with U.S. export control laws and
other applicable export and import laws. You agree to notify us if you
become aware of any circumstance which would make any of the foregoing
representations inaccurate in any respect. We may publicly disclose your
participation in the project, including the fact that you have signed the SCA.
6. This SCA is governed by the laws of the State of California and applicable
U.S. Federal law. Any choice of law rules will not apply.
7. Please place an “x” on one of the applicable statements below. Please do NOT
mark both statements:
* [x] I am signing on behalf of myself as an individual and no other person
or entity, including my employer, has or will have rights with respect to my
contributions.
* [ ] I am signing on behalf of my employer or a legal entity and I have the
actual authority to contractually bind that entity.
## Contributor Details
| Field | Entry |
| ----------------------------- | ----------------------------------- |
| Name | Jacobo Myerston |
| Company name (if applicable) | University of California, San Diego |
| Title or role (if applicable) | Academic |
| Date | 07/05/2021 |
| GitHub username | jmyerston |
| Website (optional) | diogenet.ucsd.edu |

106
.github/contributors/julien-talkair.md vendored Normal file

@ -0,0 +1,106 @@
# spaCy contributor agreement
This spaCy Contributor Agreement (**"SCA"**) is based on the
[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
The SCA applies to any contribution that you make to any product or project
managed by us (the **"project"**), and sets out the intellectual property rights
you grant to us in the contributed materials. The term **"us"** shall mean
[ExplosionAI GmbH](https://explosion.ai/legal). The term
**"you"** shall mean the person or entity identified below.
If you agree to be bound by these terms, fill in the information requested
below and include the filled-in version with your first pull request, under the
folder [`.github/contributors/`](/.github/contributors/). The name of the file
should be your GitHub username, with the extension `.md`. For example, the user
example_user would create the file `.github/contributors/example_user.md`.
Read this agreement carefully before signing. These terms and conditions
constitute a binding legal agreement.
## Contributor Agreement
1. The term "contribution" or "contributed materials" means any source code,
object code, patch, tool, sample, graphic, specification, manual,
documentation, or any other material posted or submitted by you to the project.
2. With respect to any worldwide copyrights, or copyright applications and
registrations, in your contribution:
* you hereby assign to us joint ownership, and to the extent that such
assignment is or becomes invalid, ineffective or unenforceable, you hereby
grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
royalty-free, unrestricted license to exercise all rights under those
copyrights. This includes, at our option, the right to sublicense these same
rights to third parties through multiple levels of sublicensees or other
licensing arrangements;
* you agree that each of us can do all things in relation to your
contribution as if each of us were the sole owners, and if one of us makes
a derivative work of your contribution, the one who makes the derivative
work (or has it made) will be the sole owner of that derivative work;
* you agree that you will not assert any moral rights in your contribution
against us, our licensees or transferees;
* you agree that we may register a copyright in your contribution and
exercise all ownership rights associated with it; and
* you agree that neither of us has any duty to consult with, obtain the
consent of, pay or render an accounting to the other for any use or
distribution of your contribution.
3. With respect to any patents you own, or that you can license without payment
to any third party, you hereby grant to us a perpetual, irrevocable,
non-exclusive, worldwide, no-charge, royalty-free license to:
* make, have made, use, sell, offer to sell, import, and otherwise transfer
your contribution in whole or in part, alone or in combination with or
included in any product, work or materials arising out of the project to
which your contribution was submitted, and
* at our option, to sublicense these same rights to third parties through
multiple levels of sublicensees or other licensing arrangements.
4. Except as set out above, you keep all right, title, and interest in your
contribution. The rights that you grant to us under these terms are effective
on the date you first submitted a contribution to us, even if your submission
took place before the date you sign these terms.
5. You covenant, represent, warrant and agree that:
* Each contribution that you submit is and shall be an original work of
authorship and you can legally grant the rights set out in this SCA;
* to the best of your knowledge, each contribution will not violate any
third party's copyrights, trademarks, patents, or other intellectual
property rights; and
* each contribution shall be in compliance with U.S. export control laws and
other applicable export and import laws. You agree to notify us if you
become aware of any circumstance which would make any of the foregoing
representations inaccurate in any respect. We may publicly disclose your
participation in the project, including the fact that you have signed the SCA.
6. This SCA is governed by the laws of the State of California and applicable
U.S. Federal law. Any choice of law rules will not apply.
7. Please place an “x” on one of the applicable statements below. Please do NOT
mark both statements:
* [ ] I am signing on behalf of myself as an individual and no other person
or entity, including my employer, has or will have rights with respect to my
contributions.
* [x] I am signing on behalf of my employer or a legal entity and I have the
actual authority to contractually bind that entity.
## Contributor Details
| Field | Entry |
|------------------------------- | -------------------- |
| Name | Julien Rossi |
| Company name (if applicable) | TalkAir BV |
| Title or role (if applicable) | CTO, Partner |
| Date | June 28 2021 |
| GitHub username | julien-talkair |
| Website (optional) | |

106
.github/contributors/thomashacker.md vendored Normal file

@ -0,0 +1,106 @@
# spaCy contributor agreement
This spaCy Contributor Agreement (**"SCA"**) is based on the
[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
The SCA applies to any contribution that you make to any product or project
managed by us (the **"project"**), and sets out the intellectual property rights
you grant to us in the contributed materials. The term **"us"** shall mean
[ExplosionAI GmbH](https://explosion.ai/legal). The term
**"you"** shall mean the person or entity identified below.
If you agree to be bound by these terms, fill in the information requested
below and include the filled-in version with your first pull request, under the
folder [`.github/contributors/`](/.github/contributors/). The name of the file
should be your GitHub username, with the extension `.md`. For example, the user
example_user would create the file `.github/contributors/example_user.md`.
Read this agreement carefully before signing. These terms and conditions
constitute a binding legal agreement.
## Contributor Agreement
1. The term "contribution" or "contributed materials" means any source code,
object code, patch, tool, sample, graphic, specification, manual,
documentation, or any other material posted or submitted by you to the project.
2. With respect to any worldwide copyrights, or copyright applications and
registrations, in your contribution:
* you hereby assign to us joint ownership, and to the extent that such
assignment is or becomes invalid, ineffective or unenforceable, you hereby
grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
royalty-free, unrestricted license to exercise all rights under those
copyrights. This includes, at our option, the right to sublicense these same
rights to third parties through multiple levels of sublicensees or other
licensing arrangements;
* you agree that each of us can do all things in relation to your
contribution as if each of us were the sole owners, and if one of us makes
a derivative work of your contribution, the one who makes the derivative
work (or has it made) will be the sole owner of that derivative work;
* you agree that you will not assert any moral rights in your contribution
against us, our licensees or transferees;
* you agree that we may register a copyright in your contribution and
exercise all ownership rights associated with it; and
* you agree that neither of us has any duty to consult with, obtain the
consent of, pay or render an accounting to the other for any use or
distribution of your contribution.
3. With respect to any patents you own, or that you can license without payment
to any third party, you hereby grant to us a perpetual, irrevocable,
non-exclusive, worldwide, no-charge, royalty-free license to:
* make, have made, use, sell, offer to sell, import, and otherwise transfer
your contribution in whole or in part, alone or in combination with or
included in any product, work or materials arising out of the project to
which your contribution was submitted, and
* at our option, to sublicense these same rights to third parties through
multiple levels of sublicensees or other licensing arrangements.
4. Except as set out above, you keep all right, title, and interest in your
contribution. The rights that you grant to us under these terms are effective
on the date you first submitted a contribution to us, even if your submission
took place before the date you sign these terms.
5. You covenant, represent, warrant and agree that:
* Each contribution that you submit is and shall be an original work of
authorship and you can legally grant the rights set out in this SCA;
* to the best of your knowledge, each contribution will not violate any
third party's copyrights, trademarks, patents, or other intellectual
property rights; and
* each contribution shall be in compliance with U.S. export control laws and
other applicable export and import laws. You agree to notify us if you
become aware of any circumstance which would make any of the foregoing
representations inaccurate in any respect. We may publicly disclose your
participation in the project, including the fact that you have signed the SCA.
6. This SCA is governed by the laws of the State of California and applicable
U.S. Federal law. Any choice of law rules will not apply.
7. Please place an “x” on one of the applicable statements below. Please do NOT
mark both statements:
* [x] I am signing on behalf of myself as an individual and no other person
or entity, including my employer, has or will have rights with respect to my
contributions.
* [ ] I am signing on behalf of my employer or a legal entity and I have the
actual authority to contractually bind that entity.
## Contributor Details
| Field | Entry |
|------------------------------- | -------------------- |
| Name | Edward Schmuhl |
| Company name (if applicable) | |
| Title or role (if applicable) | |
| Date | 09.07.2021 |
| GitHub username | thomashacker |
| Website (optional) | |


@ -9,6 +9,7 @@ on:
jobs:
autoblack:
if: github.repository_owner == 'explosion'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2

12
.pre-commit-config.yaml Normal file

@ -0,0 +1,12 @@
repos:
- repo: https://github.com/ambv/black
rev: 21.6b0
hooks:
- id: black
language_version: python3.7
- repo: https://gitlab.com/pycqa/flake8
rev: 3.9.2
hooks:
- id: flake8
args:
- "--config=setup.cfg"


@ -177,6 +177,15 @@ tools installed.
**⚠️ Note that formatting and linting is currently only possible for Python
modules in `.py` files, not Cython modules in `.pyx` and `.pxd` files.**
### Pre-Commit Hooks
After cloning the repo and installing the packages from `requirements.txt`, enter the repo folder and run `pre-commit install`.
Each time a `git commit` is initiated, `black` and `flake8` will run automatically on the modified files only.
If a hook reports an error, or if `black` modifies a file, the affected file needs to be `git add`-ed once again and a new
`git commit` has to be issued.
### Code formatting
[`black`](https://github.com/ambv/black) is an opinionated Python code


@ -5,7 +5,7 @@ requires = [
"cymem>=2.0.2,<2.1.0",
"preshed>=3.0.2,<3.1.0",
"murmurhash>=0.28.0,<1.1.0",
"thinc>=8.0.7,<8.1.0",
"thinc>=8.0.8,<8.1.0",
"blis>=0.4.0,<0.8.0",
"pathy",
"numpy>=1.15.0",


@ -2,7 +2,7 @@
spacy-legacy>=3.0.7,<3.1.0
cymem>=2.0.2,<2.1.0
preshed>=3.0.2,<3.1.0
thinc>=8.0.7,<8.1.0
thinc>=8.0.8,<8.1.0
blis>=0.4.0,<0.8.0
ml_datasets>=0.2.0,<0.3.0
murmurhash>=0.28.0,<1.1.0
@ -22,6 +22,7 @@ setuptools
packaging>=20.0
typing_extensions>=3.7.4.1,<4.0.0.0; python_version < "3.8"
# Development dependencies
pre-commit>=2.13.0
cython>=0.25,<3.0
pytest>=5.2.0
pytest-timeout>=1.3.0,<2.0.0


@ -37,14 +37,14 @@ setup_requires =
cymem>=2.0.2,<2.1.0
preshed>=3.0.2,<3.1.0
murmurhash>=0.28.0,<1.1.0
thinc>=8.0.7,<8.1.0
thinc>=8.0.8,<8.1.0
install_requires =
# Our libraries
spacy-legacy>=3.0.7,<3.1.0
murmurhash>=0.28.0,<1.1.0
cymem>=2.0.2,<2.1.0
preshed>=3.0.2,<3.1.0
thinc>=8.0.7,<8.1.0
thinc>=8.0.8,<8.1.0
blis>=0.4.0,<0.8.0
wasabi>=0.8.1,<1.1.0
srsly>=2.4.1,<3.0.0


@ -5,7 +5,7 @@ import sys
# set library-specific custom warning handling before doing anything else
from .errors import setup_default_warnings
setup_default_warnings()
setup_default_warnings() # noqa: E402
# These are imported as part of the API
from thinc.api import prefer_gpu, require_gpu, require_cpu # noqa: F401


@ -1,6 +1,6 @@
# fmt: off
__title__ = "spacy"
__version__ = "3.1.0"
__version__ = "3.1.1"
__download_url__ = "https://github.com/explosion/spacy-models/releases/download"
__compatibility__ = "https://raw.githubusercontent.com/explosion/spacy-models/master/compatibility.json"
__projects__ = "https://github.com/explosion/projects"


@ -139,6 +139,9 @@ def package(
readme = generate_readme(meta)
create_file(readme_path, readme)
create_file(package_path / model_name_v / "README.md", readme)
msg.good("Generated README.md from meta.json")
else:
msg.info("Using existing README.md from pipeline directory")
imports = []
for code_path in code_paths:
imports.append(code_path.stem)
@ -202,8 +205,9 @@ def get_meta(
"url": "",
"license": "MIT",
}
meta.update(existing_meta)
nlp = util.load_model_from_path(Path(model_path))
meta.update(nlp.meta)
meta.update(existing_meta)
meta["spacy_version"] = util.get_model_version_range(about.__version__)
meta["vectors"] = {
"width": nlp.vocab.vectors_length,


@ -864,6 +864,9 @@ class Errors:
E1018 = ("Knowledge base for component '{name}' is not set. "
"Make sure either `nel.initialize` or `nel.set_kb` "
"is called with a `kb_loader` function.")
E1019 = ("`noun_chunks` requires the pos tagging, which requires a "
"statistical model to be installed and loaded. For more info, see "
"the documentation:\nhttps://spacy.io/usage/models")
# Deprecated model shortcuts, only used in errors and warnings


@ -1,16 +1,11 @@
from .tokenizer_exceptions import TOKENIZER_EXCEPTIONS, TOKEN_MATCH
from .stop_words import STOP_WORDS
from .syntax_iterators import SYNTAX_ITERATORS
from .lex_attrs import LEX_ATTRS
from ...language import Language
class AzerbaijaniDefaults(Language.Defaults):
tokenizer_exceptions = TOKENIZER_EXCEPTIONS
lex_attr_getters = LEX_ATTRS
stop_words = STOP_WORDS
token_match = TOKEN_MATCH
syntax_iterators = SYNTAX_ITERATORS
class Azerbaijani(Language):


@ -0,0 +1,18 @@
from .tokenizer_exceptions import TOKENIZER_EXCEPTIONS
from .stop_words import STOP_WORDS
from .lex_attrs import LEX_ATTRS
from ...language import Language
class AncientGreekDefaults(Language.Defaults):
tokenizer_exceptions = TOKENIZER_EXCEPTIONS
lex_attr_getters = LEX_ATTRS
stop_words = STOP_WORDS
class AncientGreek(Language):
lang = "grc"
Defaults = AncientGreekDefaults
__all__ = ["AncientGreek"]


@ -0,0 +1,17 @@
"""
Example sentences to test spaCy and its language models.
>>> from spacy.lang.grc.examples import sentences
>>> docs = nlp.pipe(sentences)
"""
sentences = [
"ἐρᾷ μὲν ἁγνὸς οὐρανὸς τρῶσαι χθόνα, ἔρως δὲ γαῖαν λαμβάνει γάμου τυχεῖν·",
"εὐδαίμων Χαρίτων καὶ Μελάνιππος ἔφυ, θείας ἁγητῆρες ἐφαμερίοις φιλότατος.",
"ὃ μὲν δὴ ἀπόστολος ἐς τὴν Μίλητον ἦν.",
"Θρασύβουλος δὲ σαφέως προπεπυσμένος πάντα λόγον καὶ εἰδὼς τὰ Ἀλυάττης μέλλοι ποιήσειν μηχανᾶται τοιάδε.",
"φιλόπαις δ' ἦν ἐκμανῶς καὶ Ἀλέξανδρος ὁ βασιλεύς.",
"Ἀντίγονος ὁ βασιλεὺς ἐπεκώμαζε τῷ Ζήνωνι",
"αὐτὰρ ὃ δεύτατος ἦλθεν ἄναξ ἀνδρῶν Ἀγαμέμνων ἕλκος ἔχων",
]

314
spacy/lang/grc/lex_attrs.py Normal file

@ -0,0 +1,314 @@
from ...attrs import LIKE_NUM
_num_words = [
# CARDINALS
"εἷς",
"ἑνός",
"ἑνί",
"ἕνα",
"μία",
"μιᾶς",
"μιᾷ",
"μίαν",
"ἕν",
"δύο",
"δυοῖν",
"τρεῖς",
"τριῶν",
"τρισί",
"τρία",
"τέτταρες",
"τεττάρων",
"τέτταρσι",
"τέτταρα",
"τέτταρας",
"πέντε",
"ἕξ",
"ἑπτά",
"ὀκτώ",
"ἐννέα",
"δέκα",
"ἕνδεκα",
"δώδεκα",
"πεντεκαίδεκα",
"ἑκκαίδεκα",
"ἑπτακαίδεκα",
"ὀκτωκαίδεκα",
"ἐννεακαίδεκα",
"εἴκοσι",
"τριάκοντα",
"τετταράκοντα",
"πεντήκοντα",
"ἑξήκοντα",
"ἑβδομήκοντα",
"ὀγδοήκοντα",
"ἐνενήκοντα",
"ἑκατόν",
"διακόσιοι",
"διακοσίων",
"διακοσιᾶν",
"διακοσίους",
"διακοσίοις",
"διακόσια",
"διακόσιαι",
"διακοσίαις",
"διακοσίαισι",
"διηκόσιοι",
"διηκοσίων",
"διηκοσιέων",
"διακοσίας",
"διηκόσια",
"διηκόσιαι",
"διηκοσίας",
"τριακόσιοι",
"τριακοσίων",
"τριακοσιᾶν",
"τριακοσίους",
"τριακοσίοις",
"τριακόσια",
"τριακόσιαι",
"τριακοσίαις",
"τριακοσίαισι",
"τριακοσιέων",
"τριακοσίας",
"τριηκόσια",
"τριηκοσίας",
"τριηκόσιοι",
"τριηκοσίοισιν",
"τριηκοσίους",
"τριηκοσίων",
"τετρακόσιοι",
"τετρακοσίων",
"τετρακοσιᾶν",
"τετρακοσίους",
"τετρακοσίοις",
"τετρακόσια",
"τετρακόσιαι",
"τετρακοσίαις",
"τετρακοσίαισι",
"τετρακοσιέων",
"τετρακοσίας",
"πεντακόσιοι",
"πεντακοσίων",
"πεντακοσιᾶν",
"πεντακοσίους",
"πεντακοσίοις",
"πεντακόσια",
"πεντακόσιαι",
"πεντακοσίαις",
"πεντακοσίαισι",
"πεντακοσιέων",
"πεντακοσίας",
"ἑξακόσιοι",
"ἑξακοσίων",
"ἑξακοσιᾶν",
"ἑξακοσίους",
"ἑξακοσίοις",
"ἑξακόσια",
"ἑξακόσιαι",
"ἑξακοσίαις",
"ἑξακοσίαισι",
"ἑξακοσιέων",
"ἑξακοσίας",
"ἑπτακόσιοι",
"ἑπτακοσίων",
"ἑπτακοσιᾶν",
"ἑπτακοσίους",
"ἑπτακοσίοις",
"ἑπτακόσια",
"ἑπτακόσιαι",
"ἑπτακοσίαις",
"ἑπτακοσίαισι",
"ἑπτακοσιέων",
"ἑπτακοσίας",
"ὀκτακόσιοι",
"ὀκτακοσίων",
"ὀκτακοσιᾶν",
"ὀκτακοσίους",
"ὀκτακοσίοις",
"ὀκτακόσια",
"ὀκτακόσιαι",
"ὀκτακοσίαις",
"ὀκτακοσίαισι",
"ὀκτακοσιέων",
"ὀκτακοσίας",
"ἐνακόσιοι",
"ἐνακοσίων",
"ἐνακοσιᾶν",
"ἐνακοσίους",
"ἐνακοσίοις",
"ἐνακόσια",
"ἐνακόσιαι",
"ἐνακοσίαις",
"ἐνακοσίαισι",
"ἐνακοσιέων",
"ἐνακοσίας",
"χίλιοι",
"χιλίων",
"χιλιῶν",
"χιλίους",
"χιλίοις",
"χίλιαι",
"χιλίας",
"χιλίαις",
"χίλια",
"χίλι",
"δισχίλιοι",
"δισχιλίων",
"δισχιλιῶν",
"δισχιλίους",
"δισχιλίοις",
"δισχίλιαι",
"δισχιλίας",
"δισχιλίαις",
"δισχίλια",
"δισχίλι",
"τρισχίλιοι",
"τρισχιλίων",
"τρισχιλιῶν",
"τρισχιλίους",
"τρισχιλίοις",
"τρισχίλιαι",
"τρισχιλίας",
"τρισχιλίαις",
"τρισχίλια",
"τρισχίλι",
"μύριοι",
"μύριοί",
"μυρίων",
"μυρίοις",
"μυρίους",
"μύριαι",
"μυρίαις",
"μυρίας",
"μύρια",
"δισμύριοι",
"δισμύριοί",
"δισμυρίων",
"δισμυρίοις",
"δισμυρίους",
"δισμύριαι",
"δισμυρίαις",
"δισμυρίας",
"δισμύρια",
"δεκακισμύριοι",
"δεκακισμύριοί",
"δεκακισμυρίων",
"δεκακισμυρίοις",
"δεκακισμυρίους",
"δεκακισμύριαι",
"δεκακισμυρίαις",
"δεκακισμυρίας",
"δεκακισμύρια",
# ANCIENT GREEK NUMBERS (1-100)
"α",
"β",
"γ",
"δ",
"ε",
"ϛ",
"ζ",
"η",
"θ",
"ι",
"ια",
"ιβ",
"ιγ",
"ιδ",
"ιε",
"ιϛ",
"ιζ",
"ιη",
"ιθ",
"κ",
"κα",
"κβ",
"κγ",
"κδ",
"κε",
"κϛ",
"κζ",
"κη",
"κθ",
"λ",
"λα",
"λβ",
"λγ",
"λδ",
"λε",
"λϛ",
"λζ",
"λη",
"λθ",
"μ",
"μα",
"μβ",
"μγ",
"μδ",
"με",
"μϛ",
"μζ",
"μη",
"μθ",
"ν",
"να",
"νβ",
"νγ",
"νδ",
"νε",
"νϛ",
"νζ",
"νη",
"νθ",
"ξ",
"ξα",
"ξβ",
"ξγ",
"ξδ",
"ξε",
"ξϛ",
"ξζ",
"ξη",
"ξθ",
"ο",
"οα",
"οβ",
"ογ",
"οδ",
"οε",
"οϛ",
"οζ",
"οη",
"οθ",
"π",
"πα",
"πβ",
"πγ",
"πδ",
"πε",
"πϛ",
"πζ",
"πη",
"πθ",
"ϟ",
"ϟα",
"ϟβ",
"ϟγ",
"ϟδ",
"ϟε",
"ϟϛ",
"ϟζ",
"ϟη",
"ϟθ",
"ρ",
]
def like_num(text):
if text.lower() in _num_words:
return True
return False
LEX_ATTRS = {LIKE_NUM: like_num}
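A minimal usage sketch, assuming that registering the `grc` subpackage above makes `spacy.blank("grc")` available and that `like_num` is backed by this word list (the example tokens come from the list and the new test further down):

```python
import spacy

nlp = spacy.blank("grc")
for text in ["εἴκοσι", "λόγος"]:
    doc = nlp(text)
    print(doc[0].text, doc[0].like_num)
# expected: εἴκοσι True, λόγος False
```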


@ -0,0 +1,61 @@
STOP_WORDS = set(
"""
αὐτῷ αὐτοῦ αὐτῆς αὐτόν αὐτὸν αὐτῶν αὐτὸς αὐτὸ αὐτό αὐτός αὐτὴν αὐτοῖς αὐτοὺς αὔτ' αὐτὰ αὐτῇ αὐτὴ
αὐτὼ αὑταὶ καὐτὸς αὐτά αὑτός αὐτοῖσι αὐτοῖσιν αὑτὸς αὐτήν αὐτοῖσί αὐτοί αὐτοὶ αὐτοῖο αὐτάων αὐτὰς
αὐτέων αὐτώ αὐτάς αὐτούς αὐτή αὐταί αὐταὶ αὐτῇσιν τὠυτῷ τὠυτὸ ταὐτὰ ταύτῃ αὐτῇσι αὐτῇς αὐταῖς αὐτᾶς αὐτὰν ταὐτὸν
γε γ' γέ γὰρ γάρ δαῖτα δαιτὸς δαιτὶ δαὶ δαιτί δαῖτ' δαΐδας δαΐδων δἰ διὰ διά δὲ δ' δέ δὴ δή εἰ εἴ κεἰ κεἴ αἴ αἲ εἲ αἰ
ἐστί ἐστιν ὢν ἦν ἐστὶν ὦσιν εἶναι ὄντι εἰσιν ἐστι ὄντα οὖσαν ἦσαν ἔστι ὄντας ἐστὲ εἰσὶ εἶ ὤν οὖσαι ἔσται ἐσμὲν ἐστ' ἐστίν ἔστ' ἔσει ἦμεν εἰμι εἰσὶν ἦσθ'
ἐστὶ οὖσ' ἔστιν εἰμὶ εἴμ' ἐσθ' ᾖς στί εἴην εἶναί οὖσα κἄστ' εἴη ἦσθα εἰμ' ἔστω ὄντ' ἔσθ' ἔμμεναι ἔω ἐὼν ἐσσι ἔσσεται ἐστὸν ἔσαν ἔστων ἐόντα ἦεν ἐοῦσαν ἔην
ἔσσομαι εἰσί ἐστόν ἔσκεν ἐόντ' ἐών ἔσσεσθ' εἰσ' ἐόντες ἐόντε ἐσσεῖται εἰμεν ἔασιν ἔσκε ἔμεναι ἔσεσθαι ἔῃ εἰμὲν εἰσι ἐόντας ἔστε εἰς ἦτε εἰμί ἔσσεαι ἔμμεν
ἐοῦσα ἔμεν ᾖσιν ἐστε ἐόντι εἶεν ἔσσονται ἔησθα ἔσεσθε ἐσσί ἐοῦσ' ἔασι ἔα ἦα ἐόν ἔσσεσθαι ἔσομαι ἔσκον εἴης ἔωσιν εἴησαν ἐὸν ἐουσέων ἔσσῃ ἐούσης ἔσονται
ἐούσας ἐόντων ἐόντος ἐσομένην ἔστωσαν ἔωσι ἔας ἐοῦσαι ἣν εἰσίν ἤστην ὄντες ὄντων οὔσας οὔσαις ὄντος οὖσι οὔσης ἔσῃ ὂν ἐσμεν ἐσμέν οὖσιν ἐσομένους ἐσσόμεσθα
ἒς ἐς ἔς ἐν κεἰς εἲς κἀν ἔν κατὰ κατ' καθ' κατά κάτα κὰπ κὰκ κὰδ κὰρ κάρ κὰγ κὰμ καὶ καί μετὰ μεθ' μετ' μέτα μετά μέθ' μέτ' μὲν μέν μὴ
μή μη οὐκ οὒ οὐ οὐχ οὐχὶ κοὐ κοὐχ οὔ κοὐκ οὐχί οὐκὶ οὐδὲν οὐδεὶς οὐδέν κοὐδεὶς κοὐδὲν οὐδένα οὐδενὸς οὐδέν' οὐδενός οὐδενὶ
οὐδεμία οὐδείς οὐδεμίαν οὐδὲ οὐδ' κοὐδ' οὐδέ οὔτε οὔθ' οὔτέ τε οὔτ' οὕτως οὕτω οὕτῶ χοὔτως οὖν ὦν ὧν τοῦτο τοῦθ' τοῦτον τούτῳ
τούτοις ταύτας αὕτη ταῦτα οὗτος ταύτης ταύτην τούτων ταῦτ' τοῦτ' τούτου αὗται τούτους τοῦτό ταῦτά τούτοισι χαὔτη ταῦθ' χοὖτοι
τούτοισιν οὗτός οὗτοι τούτω τουτέων τοῦτὸν οὗτοί τοῦτου οὗτοὶ ταύτῃσι ταύταις ταυτὶ παρὰ παρ' πάρα παρά πὰρ παραὶ πάρ' περὶ
πέρι περί πρὸς πρός ποτ' ποτὶ προτὶ προτί πότι
σὸς σήν σὴν σὸν σόν σὰ σῶν σοῖσιν σός σῆς σῷ σαῖς σῇ σοῖς σοῦ σ' σὰν σά σὴ σὰς
σᾷ σοὺς σούς σοῖσι σῇς σῇσι σή σῇσιν σοὶ σου ὑμεῖς σὲ σύ σοι ὑμᾶς ὑμῶν ὑμῖν σε
σέ σὺ σέθεν σοί ὑμὶν σφῷν ὑμίν τοι τοὶ σφὼ ὔμμ' σφῶϊ σεῖο τ' σφῶϊν ὔμμιν σέο σευ σεῦ
ὔμμι ὑμέων τύνη ὑμείων τοί ὔμμες σεο τέ τεοῖο ὑμέας σὺν ξὺν σύν
θ' τί τι τις τινες τινα τινος τινὸς τινὶ τινῶν τίς τίνες τινὰς τιν' τῳ του τίνα τοῦ τῷ τινί τινά τίνος τινι τινας τινὰ τινων
τίν' τευ τέο τινές τεο τινὲς τεῷ τέῳ τινός τεῳ τισὶ
τοιαῦτα τοιοῦτον τοιοῦθ' τοιοῦτος τοιαύτην τοιαῦτ' τοιούτου τοιαῦθ' τοιαύτῃ τοιούτοις τοιαῦται τοιαῦτά τοιαύτη τοιοῦτοι τοιούτων τοιούτοισι
τοιοῦτο τοιούτους τοιούτῳ τοιαύτης τοιαύταις τοιαύτας τοιοῦτός τίνι τοῖσι τίνων τέων τέοισί τὰ τῇ τώ τὼ
ἀλλὰ ἀλλ' ἀλλά ἀπ' ἀπὸ κἀπ' ἀφ' τἀπὸ κἀφ' ἄπο ἀπό τὠπὸ τἀπ' ἄλλων ἄλλῳ ἄλλη ἄλλης ἄλλους ἄλλοις ἄλλον ἄλλο ἄλλου τἄλλα ἄλλα
ἄλλᾳ ἄλλοισιν τἄλλ' ἄλλ' ἄλλος ἄλλοισι κἄλλ' ἄλλοι ἄλλῃσι ἄλλόν ἄλλην ἄλλά ἄλλαι ἄλλοισίν ὧλλοι ἄλλῃ ἄλλας ἀλλέων τἆλλα ἄλλως
ἀλλάων ἄλλαις τἆλλ'
ἂν ἄν κἂν τἂν ἃν κεν κ' κέν κέ κε χ' ἄρα τἄρα ἄρ' τἄρ' ἄρ ῥα ῥά τὰρ ἄρά ἂρ
ἡμᾶς με ἐγὼ ἐμὲ μοι κἀγὼ ἡμῶν ἡμεῖς ἐμοὶ ἔγωγ' ἁμοὶ ἡμῖν μ' ἔγωγέ ἐγώ ἐμοί ἐμοῦ κἀμοῦ ἔμ' κἀμὲ ἡμὶν μου ἐμέ ἔγωγε νῷν νὼ χἠμεῖς ἁμὲ κἀγώ κἀμοὶ χἠμᾶς
ἁγὼ ἡμίν κἄμ' ἔμοιγ' μοί τοὐμὲ ἄμμε ἐγὼν ἐμεῦ ἐμεῖο μευ ἔμοιγε ἄμμι μέ ἡμέας νῶϊ ἄμμιν ἧμιν ἐγών νῶΐ ἐμέθεν ἥμιν ἄμμες νῶι ἡμείων ἄμμ' ἡμέων ἐμέο
ἐκ ἔκ ἐξ κἀκ κ ἃκ κἀξ ἔξ εξ Ἐκ τἀμὰ ἐμοῖς τοὐμόν ἐμᾶς τοὐμὸν ἐμῶν ἐμὸς ἐμῆς ἐμῷ τὠμῷ ἐμὸν τἄμ' ἐμὴ ἐμὰς ἐμαῖς ἐμὴν ἐμόν ἐμὰ ἐμός ἐμοὺς ἐμῇ ἐμᾷ
οὑμὸς ἐμοῖν οὑμός κἀμὸν ἐμαὶ ἐμή ἐμάς ἐμοῖσι ἐμοῖσιν ἐμῇσιν ἐμῇσι ἐμῇς ἐμήν
ἔνι ἐνὶ εἰνὶ εἰν ἐμ ἐπὶ ἐπ' ἔπι ἐφ' κἀπὶ τἀπὶ ἐπί ἔφ' ἔπ' ἐὰν ἢν ἐάν ἤν ἄνπερ
αὑτοῖς αὑτὸν αὑτῷ ἑαυτοῦ αὑτόν αὑτῆς αὑτῶν αὑτοῦ αὑτὴν αὑτοῖν χαὐτοῦ αὑταῖς ἑωυτοῦ ἑωυτῇ ἑωυτὸν ἐωυτῷ ἑωυτῆς ἑωυτόν ἑωυτῷ
ἑωυτάς ἑωυτῶν ἑωυτοὺς ἑωυτοῖσι ἑαυτῇ ἑαυτούς αὑτοὺς ἑαυτῶν ἑαυτοὺς ἑαυτὸν ἑαυτῷ ἑαυτοῖς ἑαυτὴν ἑαυτῆς
ἔτι ἔτ' ἔθ' κἄτι ἠέ ἠὲ ἦε ἦέ τοὺς τὴν τὸ τῶν τὸν οἱ τοῖς ταῖς τῆς τὰς αἱ τό τὰν τᾶς τοῖσιν αἳ χὠ τήν τά τοῖν τάς
χοἰ χἠ τάν τᾶν οἳ οἵ τοῖο τόν τοῖιν τούς τάων ταὶ τῇς τῇσι τῇσιν αἵ τοῖό τοῖσίν ὅττί ταί Τὴν τῆ τῶ τάδε ὅδε τοῦδε τόδε τόνδ'
τάδ' τῆσδε τῷδε ὅδ' τῶνδ' τῇδ' τοῦδέ τῶνδε τόνδε τόδ' τοῦδ' τάσδε τήνδε τάσδ' τήνδ' ταῖσδέ τῇδε τῆσδ' τάνδ' τῷδ' τάνδε ἅδε τοῖσδ' ἥδ'
τᾷδέ τοῖσδε τούσδ' ἥδε τούσδε τώδ' ἅδ' οἵδ' τῶνδέ οἵδε τᾷδε τοῖσδεσσι τώδε τῇδέ τοῖσιδε αἵδε τοῦδὲ τῆδ' αἵδ' τοῖσδεσι ὃν ὃς οὗ ἅπερ
οὓς ἧς οἷς ἅσπερ χὦνπερ αἷς ὅς ἥπερ ἃς ὅσπερ ὅνπερ ὧνπερ ᾧπερ ὅν αἷν οἷσι ἇς ἅς οὕς ἥν οἷσιν ἕης ὅου ᾗς οἷσί οἷσίν τοῖσί ᾗσιν οἵπερ αἷσπερ
ὅστις ἥτις ὅτου ὅτοισι ἥντιν' ὅτῳ ὅντιν' ὅττι ἅσσά ὅτεῳ ὅτις ὅτιν' ὅτευ ἥντινα αἵτινές ὅντινα ἅσσα ᾧτινι οἵτινες ὅτι ἅτις ὅτ' ὑμὴ
ὑμήν ὑμὸν ὑπὲρ ὕπερ ὑπέρτερον ὑπεὶρ ὑπέρτατος ὑπὸ ὑπ' ὑφ' ὕπο ὑπαὶ ὑπό ὕπ' ὕφ'
ὣς ὡς ὥς ὧς ὥστ' ὥστε ὥσθ'
""".split()
)


@ -0,0 +1,115 @@
from ..tokenizer_exceptions import BASE_EXCEPTIONS
from ...symbols import ORTH, NORM
from ...util import update_exc
_exc = {}
for token in ["᾽Απ'", "᾽ΑΠ'", "ἀφ'", "᾽Αφ", "ἀπὸ"]:
_exc[token] = [{ORTH: token, NORM: "από"}]
for token in ["᾽Αλλ'", "ἀλλ'", "ἀλλὰ"]:
_exc[token] = [{ORTH: token, NORM: "ἀλλά"}]
for token in ["παρ'", "Παρ'", "παρὰ", "παρ"]:
_exc[token] = [{ORTH: token, NORM: "παρά"}]
for token in ["καθ'", "Καθ'", "κατ'", "Κατ'", "κατὰ"]:
_exc[token] = [{ORTH: token, NORM: "κατά"}]
for token in ["Ἐπ'", "ἐπ'", "ἐπὶ", "Εφ'", "εφ'"]:
_exc[token] = [{ORTH: token, NORM: "επί"}]
for token in ["Δι'", "δι'", "διὰ"]:
_exc[token] = [{ORTH: token, NORM: "διά"}]
for token in ["Ὑπ'", "ὑπ'", "ὑφ'"]:
_exc[token] = [{ORTH: token, NORM: "ὑπό"}]
for token in ["Μετ'", "μετ'", "μεθ'", "μετὰ"]:
_exc[token] = [{ORTH: token, NORM: "μετά"}]
for token in ["Μ'", "μ'", "μέ", "μὲ"]:
_exc[token] = [{ORTH: token, NORM: "με"}]
for token in ["Σ'", "σ'", "σέ", "σὲ"]:
_exc[token] = [{ORTH: token, NORM: "σε"}]
for token in ["Τ'", "τ'", "τέ", "τὲ"]:
_exc[token] = [{ORTH: token, NORM: "τε"}]
for token in ["Δ'", "δ'", "δὲ"]:
_exc[token] = [{ORTH: token, NORM: "δέ"}]
_other_exc = {
"μὲν": [{ORTH: "μὲν", NORM: "μέν"}],
"μὴν": [{ORTH: "μὴν", NORM: "μήν"}],
"τὴν": [{ORTH: "τὴν", NORM: "τήν"}],
"τὸν": [{ORTH: "τὸν", NORM: "τόν"}],
"καὶ": [{ORTH: "καὶ", NORM: "καί"}],
"καὐτός": [{ORTH: "κ", NORM: "καί"}, {ORTH: "αὐτός"}],
"καὐτὸς": [{ORTH: "κ", NORM: "καί"}, {ORTH: "αὐτὸς", NORM: "αὐτός"}],
"κοὐ": [{ORTH: "κ", NORM: "καί"}, {ORTH: "οὐ"}],
"χἡ": [{ORTH: "χ", NORM: "καί"}, {ORTH: ""}],
"χοἱ": [{ORTH: "χ", NORM: "καί"}, {ORTH: "οἱ"}],
"χἱκετεύετε": [{ORTH: "χ", NORM: "καί"}, {ORTH: "ἱκετεύετε"}],
"κἀν": [{ORTH: "κ", NORM: "καί"}, {ORTH: "ἀν", NORM: "ἐν"}],
"κἀγὼ": [{ORTH: "κἀ", NORM: "καί"}, {ORTH: "γὼ", NORM: "ἐγώ"}],
"κἀγώ": [{ORTH: "κἀ", NORM: "καί"}, {ORTH: "γώ", NORM: "ἐγώ"}],
"ἁγώ": [{ORTH: "", NORM: ""}, {ORTH: "γώ", NORM: "ἐγώ"}],
"ἁγὼ": [{ORTH: "", NORM: ""}, {ORTH: "γὼ", NORM: "ἐγώ"}],
"ἐγᾦδα": [{ORTH: "ἐγ", NORM: "ἐγώ"}, {ORTH: "ᾦδα", NORM: "οἶδα"}],
"ἐγᾦμαι": [{ORTH: "ἐγ", NORM: "ἐγώ"}, {ORTH: "ᾦμαι", NORM: "οἶμαι"}],
"κἀς": [{ORTH: "κ", NORM: "καί"}, {ORTH: "ἀς", NORM: "ἐς"}],
"κᾆτα": [{ORTH: "κ", NORM: "καί"}, {ORTH: "ᾆτα", NORM: "εἶτα"}],
"κεἰ": [{ORTH: "κ", NORM: "καί"}, {ORTH: "εἰ"}],
"κεἰς": [{ORTH: "κ", NORM: "καί"}, {ORTH: "εἰς"}],
"χὤτε": [{ORTH: "χ", NORM: "καί"}, {ORTH: "ὤτε", NORM: "ὅτε"}],
"χὤπως": [{ORTH: "χ", NORM: "καί"}, {ORTH: "ὤπως", NORM: "ὅπως"}],
"χὤτι": [{ORTH: "χ", NORM: "καί"}, {ORTH: "ὤτι", NORM: "ὅτι"}],
"χὤταν": [{ORTH: "χ", NORM: "καί"}, {ORTH: "ὤταν", NORM: "ὅταν"}],
"οὑμός": [{ORTH: "οὑ", NORM: ""}, {ORTH: "μός", NORM: "ἐμός"}],
"οὑμὸς": [{ORTH: "οὑ", NORM: ""}, {ORTH: "μὸς", NORM: "ἐμός"}],
"οὑμοί": [{ORTH: "οὑ", NORM: "οἱ"}, {ORTH: "μοί", NORM: "ἐμoί"}],
"οὑμοὶ": [{ORTH: "οὑ", NORM: "οἱ"}, {ORTH: "μοὶ", NORM: "ἐμoί"}],
"σοὔστι": [{ORTH: "σοὔ", NORM: "σοί"}, {ORTH: "στι", NORM: "ἐστι"}],
"σοὐστί": [{ORTH: "σοὐ", NORM: "σοί"}, {ORTH: "στί", NORM: "ἐστί"}],
"σοὐστὶ": [{ORTH: "σοὐ", NORM: "σοί"}, {ORTH: "στὶ", NORM: "ἐστί"}],
"μοὖστι": [{ORTH: "μοὖ", NORM: "μοί"}, {ORTH: "στι", NORM: "ἐστι"}],
"μοὔστι": [{ORTH: "μοὔ", NORM: "μοί"}, {ORTH: "στι", NORM: "ἐστι"}],
"τοὔνομα": [{ORTH: "τοὔ", NORM: "τό"}, {ORTH: "νομα", NORM: "ὄνομα"}],
"οὑν": [{ORTH: "οὑ", NORM: ""}, {ORTH: "ν", NORM: "ἐν"}],
"ὦνερ": [{ORTH: "", NORM: ""}, {ORTH: "νερ", NORM: "ἄνερ"}],
"ὦνδρες": [{ORTH: "", NORM: ""}, {ORTH: "νδρες", NORM: "ἄνδρες"}],
"προὔχων": [{ORTH: "προὔ", NORM: "πρό"}, {ORTH: "χων", NORM: "ἔχων"}],
"προὔχοντα": [{ORTH: "προὔ", NORM: "πρό"}, {ORTH: "χοντα", NORM: "ἔχοντα"}],
"ὥνεκα": [{ORTH: "", NORM: "οὗ"}, {ORTH: "νεκα", NORM: "ἕνεκα"}],
"θοἰμάτιον": [{ORTH: "θο", NORM: "τό"}, {ORTH: "ἰμάτιον"}],
"ὥνεκα": [{ORTH: "", NORM: "οὗ"}, {ORTH: "νεκα", NORM: "ἕνεκα"}],
"τὠληθές": [{ORTH: "τὠ", NORM: "τὸ"}, {ORTH: "ληθές", NORM: "ἀληθές"}],
"θἡμέρᾳ": [{ORTH: "θ", NORM: "τῇ"}, {ORTH: "ἡμέρᾳ"}],
"ἅνθρωπος": [{ORTH: "", NORM: ""}, {ORTH: "νθρωπος", NORM: "ἄνθρωπος"}],
"τἄλλα": [{ORTH: "τ", NORM: "τὰ"}, {ORTH: "ἄλλα"}],
"τἆλλα": [{ORTH: "τἆ", NORM: "τὰ"}, {ORTH: "λλα", NORM: "ἄλλα"}],
"ἁνήρ": [{ORTH: "", NORM: ""}, {ORTH: "νήρ", NORM: "ἀνήρ"}],
"ἁνὴρ": [{ORTH: "", NORM: ""}, {ORTH: "νὴρ", NORM: "ἀνήρ"}],
"ἅνδρες": [{ORTH: "", NORM: "οἱ"}, {ORTH: "νδρες", NORM: "ἄνδρες"}],
"ἁγαθαί": [{ORTH: "", NORM: "αἱ"}, {ORTH: "γαθαί", NORM: "ἀγαθαί"}],
"ἁγαθαὶ": [{ORTH: "", NORM: "αἱ"}, {ORTH: "γαθαὶ", NORM: "ἀγαθαί"}],
"ἁλήθεια": [{ORTH: "", NORM: ""}, {ORTH: "λήθεια", NORM: "ἀλήθεια"}],
"τἀνδρός": [{ORTH: "τ", NORM: "τοῦ"}, {ORTH: "ἀνδρός"}],
"τἀνδρὸς": [{ORTH: "τ", NORM: "τοῦ"}, {ORTH: "ἀνδρὸς", NORM: "ἀνδρός"}],
"τἀνδρί": [{ORTH: "τ", NORM: "τῷ"}, {ORTH: "ἀνδρί"}],
"τἀνδρὶ": [{ORTH: "τ", NORM: "τῷ"}, {ORTH: "ἀνδρὶ", NORM: "ἀνδρί"}],
"αὑτός": [{ORTH: "αὑ", NORM: ""}, {ORTH: "τός", NORM: "αὐτός"}],
"αὑτὸς": [{ORTH: "αὑ", NORM: ""}, {ORTH: "τὸς", NORM: "αὐτός"}],
"ταὐτοῦ": [{ORTH: "τ", NORM: "τοῦ"}, {ORTH: "αὐτοῦ"}],
}
_exc.update(_other_exc)
_exc_data = {}
_exc.update(_exc_data)
TOKENIZER_EXCEPTIONS = update_exc(BASE_EXCEPTIONS, _exc)
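A minimal sketch of what these exceptions do, assuming a blank `grc` pipeline: crasis forms are split into two tokens whose `NORM` values restore the underlying words (values taken from the table above).

```python
import spacy

nlp = spacy.blank("grc")
doc = nlp("κἀγώ")
print([(t.text, t.norm_) for t in doc])
# expected, per the exception table: [('κἀ', 'καί'), ('γώ', 'ἐγώ')]
```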


@ -1,12 +1,14 @@
from typing import Optional
from thinc.api import Model
from .stop_words import STOP_WORDS
from .lemmatizer import DutchLemmatizer
from .lex_attrs import LEX_ATTRS
from .tokenizer_exceptions import TOKENIZER_EXCEPTIONS
from .punctuation import TOKENIZER_PREFIXES, TOKENIZER_INFIXES
from .punctuation import TOKENIZER_SUFFIXES
from .lemmatizer import DutchLemmatizer
from .stop_words import STOP_WORDS
from .syntax_iterators import SYNTAX_ITERATORS
from .tokenizer_exceptions import TOKENIZER_EXCEPTIONS
from ...language import Language
@ -16,6 +18,7 @@ class DutchDefaults(Language.Defaults):
infixes = TOKENIZER_INFIXES
suffixes = TOKENIZER_SUFFIXES
lex_attr_getters = LEX_ATTRS
syntax_iterators = SYNTAX_ITERATORS
stop_words = STOP_WORDS


@ -0,0 +1,72 @@
from typing import Union, Iterator
from ...symbols import NOUN, PRON
from ...errors import Errors
from ...tokens import Doc, Span
def noun_chunks(doclike: Union[Doc, Span]) -> Iterator[Span]:
"""
Detect base noun phrases from a dependency parse. Works on Doc and Span.
The definition is inspired by https://www.nltk.org/book/ch07.html
Considers: [Noun + determiner / adjective] and also [Pronoun]
"""
# fmt: off
# labels = ["nsubj", "nsubj:pass", "obj", "iobj", "ROOT", "appos", "nmod", "nmod:poss"]
# fmt: on
doc = doclike.doc # Ensure works on both Doc and Span.
# Check for dependencies: POS, DEP
if not doc.has_annotation("POS"):
raise ValueError(Errors.E1019)
if not doc.has_annotation("DEP"):
raise ValueError(Errors.E029)
# See UD tags: https://universaldependencies.org/u/dep/index.html
# amod = adjectival modifier
# nmod:poss = possessive nominal modifier
# nummod = numeric modifier
# det = determiner
# det:poss = possessive determiner
noun_deps = [
doc.vocab.strings[label] for label in ["amod", "nmod:poss", "det", "det:poss"]
]
# nsubj = nominal subject
# nsubj:pass = passive nominal subject
pronoun_deps = [doc.vocab.strings[label] for label in ["nsubj", "nsubj:pass"]]
# Label NP for the Span to identify it as Noun-Phrase
span_label = doc.vocab.strings.add("NP")
# Only NOUNS and PRONOUNS matter
for i, word in enumerate(filter(lambda x: x.pos in [PRON, NOUN], doclike)):
# For NOUNS
# Pick children from syntactic parse (only those with certain dependencies)
if word.pos == NOUN:
# Some verbs are POS-tagged as nouns; if the word has an "nsubj" child,
# treat it as a verb and skip it
nsubjs = filter(
lambda x: x.dep == doc.vocab.strings["nsubj"], word.children
)
next_word = next(nsubjs, None)
if next_word is not None:
# We found some nsubj, so we skip this word. Otherwise, consider it a normal NOUN
continue
children = filter(lambda x: x.dep in noun_deps, word.children)
children_i = [c.i for c in children] + [word.i]
start_span = min(children_i)
end_span = max(children_i) + 1
yield start_span, end_span, span_label
# PRONOUNS only if it is the subject of a verb
elif word.pos == PRON:
if word.dep in pronoun_deps:
start_span = word.i
end_span = word.i + 1
yield start_span, end_span, span_label
SYNTAX_ITERATORS = {"noun_chunks": noun_chunks}
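A minimal sketch of the error path guarded above, mirroring the new `nl` noun-chunk test further down in this commit: without POS annotation the iterator raises `E1019`.

```python
from spacy.lang.nl import Dutch

nlp = Dutch()  # tokenizer only, no tagger or parser
doc = nlp("Haar vriend lacht luid.")
try:
    list(doc.noun_chunks)
except ValueError as err:
    print(err)  # E1019: noun_chunks requires POS tagging
```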


@ -12,8 +12,6 @@ PUNCT_RULES = {"«": '"', "»": '"'}
class RussianLemmatizer(Lemmatizer):
_morph = None
def __init__(
self,
vocab: Vocab,
@ -31,8 +29,8 @@ class RussianLemmatizer(Lemmatizer):
"The Russian lemmatizer mode 'pymorphy2' requires the "
"pymorphy2 library. Install it with: pip install pymorphy2"
) from None
if RussianLemmatizer._morph is None:
RussianLemmatizer._morph = MorphAnalyzer()
if getattr(self, "_morph", None) is None:
self._morph = MorphAnalyzer()
super().__init__(vocab, model, name, mode=mode, overwrite=overwrite)
def pymorphy2_lemmatize(self, token: Token) -> List[str]:


@ -7,8 +7,6 @@ from ...vocab import Vocab
class UkrainianLemmatizer(RussianLemmatizer):
_morph = None
def __init__(
self,
vocab: Vocab,
@ -27,6 +25,6 @@ class UkrainianLemmatizer(RussianLemmatizer):
"pymorphy2 library and dictionaries. Install them with: "
"pip install pymorphy2 pymorphy2-dicts-uk"
) from None
if UkrainianLemmatizer._morph is None:
UkrainianLemmatizer._morph = MorphAnalyzer(lang="uk")
if getattr(self, "_morph", None) is None:
self._morph = MorphAnalyzer(lang="uk")
super().__init__(vocab, model, name, mode=mode, overwrite=overwrite)
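Both lemmatizer changes swap a class-level `_morph` for an instance attribute. A generic sketch (not spaCy code) of why that matters when one lemmatizer subclasses the other:

```python
class SharedOnClass:
    _morph = None
    def __init__(self, analyzer):
        if SharedOnClass._morph is None:
            SharedOnClass._morph = analyzer   # first instance wins for all subclasses

class SubclassShared(SharedOnClass):
    pass

class PerInstance:
    def __init__(self, analyzer):
        if getattr(self, "_morph", None) is None:
            self._morph = analyzer            # each instance keeps its own analyzer

ru = SharedOnClass("ru-analyzer")
uk = SubclassShared("uk-analyzer")
assert uk._morph == "ru-analyzer"    # the subclass silently reuses the base analyzer

ru2, uk2 = PerInstance("ru-analyzer"), PerInstance("uk-analyzer")
assert uk2._morph == "uk-analyzer"   # the per-instance version stays separate
```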


@ -872,14 +872,14 @@ class Language:
DOCS: https://spacy.io/api/language#replace_pipe
"""
if name not in self.pipe_names:
if name not in self.component_names:
raise ValueError(Errors.E001.format(name=name, opts=self.pipe_names))
if hasattr(factory_name, "__call__"):
err = Errors.E968.format(component=repr(factory_name), name=name)
raise ValueError(err)
# We need to delegate to Language.add_pipe here instead of just writing
# to Language.pipeline to make sure the configs are handled correctly
pipe_index = self.pipe_names.index(name)
pipe_index = self.component_names.index(name)
self.remove_pipe(name)
if not len(self._components) or pipe_index == len(self._components):
# we have no components to insert before/after, or we're replacing the last component
@ -1447,7 +1447,7 @@ class Language:
) -> Iterator[Tuple[Doc, _AnyContext]]:
...
def pipe(
def pipe( # noqa: F811
self,
texts: Iterable[str],
*,
@ -1740,10 +1740,16 @@ class Language:
listeners_replaced = True
with warnings.catch_warnings():
warnings.filterwarnings("ignore", message="\\[W113\\]")
nlp.add_pipe(source_name, source=source_nlps[model], name=pipe_name)
nlp.add_pipe(
source_name, source=source_nlps[model], name=pipe_name
)
if model not in source_nlp_vectors_hashes:
source_nlp_vectors_hashes[model] = hash(source_nlps[model].vocab.vectors.to_bytes())
nlp.meta["_sourced_vectors_hashes"][pipe_name] = source_nlp_vectors_hashes[model]
source_nlp_vectors_hashes[model] = hash(
source_nlps[model].vocab.vectors.to_bytes()
)
nlp.meta["_sourced_vectors_hashes"][
pipe_name
] = source_nlp_vectors_hashes[model]
# Delete from cache if listeners were replaced
if listeners_replaced:
del source_nlps[model]
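A minimal sketch of what the `replace_pipe` change above enables, assuming that a disabled component appears in `component_names` but not in `pipe_names`:

```python
import spacy

nlp = spacy.blank("en")
nlp.add_pipe("sentencizer")
nlp.disable_pipe("sentencizer")
assert "sentencizer" not in nlp.pipe_names
assert "sentencizer" in nlp.component_names
# Checking component_names means the disabled component can still be
# replaced instead of raising E001.
nlp.replace_pipe("sentencizer", "sentencizer")
```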


@ -3,7 +3,7 @@ from thinc.api import chain, Maxout, LayerNorm, Softmax, Linear, zero_init, Mode
from thinc.api import MultiSoftmax, list2array
from thinc.api import to_categorical, CosineDistance, L2Distance
from ...util import registry
from ...util import registry, OOV_RANK
from ...errors import Errors
from ...attrs import ID
@ -70,6 +70,7 @@ def get_vectors_loss(ops, docs, prediction, distance):
# and look them up all at once. This prevents data copying.
ids = ops.flatten([doc.to_array(ID).ravel() for doc in docs])
target = docs[0].vocab.vectors.data[ids]
target[ids == OOV_RANK] = 0
d_target, loss = distance(prediction, target)
return loss, d_target
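A small numpy illustration of the masking idiom added above, assuming `OOV_RANK` is the sentinel row id assigned to out-of-vocabulary tokens, so their target vectors become zero instead of pointing at an arbitrary vector row:

```python
import numpy as np

OOV_RANK = np.iinfo(np.uint64).max     # assumed sentinel value for OOV tokens
ids = np.array([3, OOV_RANK, 7], dtype="uint64")
target = np.ones((3, 4), dtype="float32")
target[ids == OOV_RANK] = 0            # zero out rows belonging to OOV tokens
print(target[1])                       # [0. 0. 0. 0.]
```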


@ -78,6 +78,17 @@ def build_ngram_suggester(sizes: List[int]) -> Callable[[List[Doc]], Ragged]:
return ngram_suggester
@registry.misc("spacy.ngram_range_suggester.v1")
def build_ngram_range_suggester(
min_size: int, max_size: int
) -> Callable[[List[Doc]], Ragged]:
"""Suggest all spans of the given lengths between a given min and max value - both inclusive.
Spans are returned as a ragged array of integers. The array has two columns,
indicating the start and end position."""
sizes = range(min_size, max_size + 1)
return build_ngram_suggester(sizes)
@Language.factory(
"spancat",
assigns=["doc.spans"],


@ -222,7 +222,7 @@ class Tagger(TrainablePipe):
DOCS: https://spacy.io/api/tagger#get_loss
"""
validate_examples(examples, "Tagger.get_loss")
loss_func = SequenceCategoricalCrossentropy(names=self.labels, normalize=False)
loss_func = SequenceCategoricalCrossentropy(names=self.labels, normalize=False, neg_prefix="!")
# Convert empty tag "" to missing value None so that both misaligned
# tokens and tokens with missing annotation have the default missing
# value None.
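Passing `neg_prefix="!"` lets gold tags such as `"!N"` mark a tag as wrong for a token without committing to the correct one. A minimal sketch, mirroring the tagger test extended further down in this commit:

```python
from spacy.lang.en import English
from spacy.training import Example

nlp = English()
nlp.add_pipe("tagger")
train = [Example.from_dict(nlp.make_doc("I like green eggs"),
                           {"tags": ["N", "V", "J", "N"]})]
optimizer = nlp.initialize(get_examples=lambda: train)
# Negative annotation: the first token is NOT "N"; the other tags stay positive.
neg = Example.from_dict(nlp.make_doc("I like green eggs"),
                        {"tags": ["!N", "V", "J", "N"]})
nlp.update([neg], sgd=optimizer, losses={})
```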


@ -125,6 +125,11 @@ def ga_tokenizer():
return get_lang_class("ga")().tokenizer
@pytest.fixture(scope="session")
def grc_tokenizer():
return get_lang_class("grc")().tokenizer
@pytest.fixture(scope="session")
def gu_tokenizer():
return get_lang_class("gu")().tokenizer
@ -202,6 +207,11 @@ def ne_tokenizer():
return get_lang_class("ne")().tokenizer
@pytest.fixture(scope="session")
def nl_vocab():
return get_lang_class("nl")().vocab
@pytest.fixture(scope="session")
def nl_tokenizer():
return get_lang_class("nl")().tokenizer


@ -69,4 +69,4 @@ def test_create_with_heads_and_no_deps(vocab):
words = "I like ginger".split()
heads = list(range(len(words)))
with pytest.raises(ValueError):
doc = Doc(vocab, words=words, heads=heads)
Doc(vocab, words=words, heads=heads)


@ -0,0 +1,23 @@
import pytest
@pytest.mark.parametrize(
"text,match",
[
("ι", True),
("α", True),
("ϟα", True),
("ἑκατόν", True),
("ἐνακόσια", True),
("δισχίλια", True),
("μύρια", True),
("εἷς", True),
("λόγος", False),
(",", False),
("λβ", True),
],
)
def test_lex_attrs_like_number(grc_tokenizer, text, match):
tokens = grc_tokenizer(text)
assert len(tokens) == 1
assert tokens[0].like_num == match


@ -0,0 +1,209 @@
from spacy.tokens import Doc
import pytest
@pytest.fixture
def nl_sample(nl_vocab):
# TEXT :
# Haar vriend lacht luid. We kregen alweer ruzie toen we de supermarkt ingingen.
# Aan het begin van de supermarkt is al het fruit en de groentes. Uiteindelijk hebben we dan ook
# geen avondeten gekocht.
words = [
"Haar",
"vriend",
"lacht",
"luid",
".",
"We",
"kregen",
"alweer",
"ruzie",
"toen",
"we",
"de",
"supermarkt",
"ingingen",
".",
"Aan",
"het",
"begin",
"van",
"de",
"supermarkt",
"is",
"al",
"het",
"fruit",
"en",
"de",
"groentes",
".",
"Uiteindelijk",
"hebben",
"we",
"dan",
"ook",
"geen",
"avondeten",
"gekocht",
".",
]
heads = [
1,
2,
2,
2,
2,
6,
6,
6,
6,
13,
13,
12,
13,
6,
6,
17,
17,
24,
20,
20,
17,
24,
24,
24,
24,
27,
27,
24,
24,
36,
36,
36,
36,
36,
35,
36,
36,
36,
]
deps = [
"nmod:poss",
"nsubj",
"ROOT",
"advmod",
"punct",
"nsubj",
"ROOT",
"advmod",
"obj",
"mark",
"nsubj",
"det",
"obj",
"advcl",
"punct",
"case",
"det",
"obl",
"case",
"det",
"nmod",
"cop",
"advmod",
"det",
"ROOT",
"cc",
"det",
"conj",
"punct",
"advmod",
"aux",
"nsubj",
"advmod",
"advmod",
"det",
"obj",
"ROOT",
"punct",
]
pos = [
"PRON",
"NOUN",
"VERB",
"ADJ",
"PUNCT",
"PRON",
"VERB",
"ADV",
"NOUN",
"SCONJ",
"PRON",
"DET",
"NOUN",
"NOUN",
"PUNCT",
"ADP",
"DET",
"NOUN",
"ADP",
"DET",
"NOUN",
"AUX",
"ADV",
"DET",
"NOUN",
"CCONJ",
"DET",
"NOUN",
"PUNCT",
"ADJ",
"AUX",
"PRON",
"ADV",
"ADV",
"DET",
"NOUN",
"VERB",
"PUNCT",
]
return Doc(nl_vocab, words=words, heads=heads, deps=deps, pos=pos)
@pytest.fixture
def nl_reference_chunking():
# Using frog https://github.com/LanguageMachines/frog/ we obtain the following NOUN-PHRASES:
return [
"haar vriend",
"we",
"ruzie",
"we",
"de supermarkt",
"het begin",
"de supermarkt",
"het fruit",
"de groentes",
"we",
"geen avondeten",
]
def test_need_dep(nl_tokenizer):
"""
Test that noun_chunks raises a ValueError for the 'nl' language if the Doc is not parsed.
"""
txt = "Haar vriend lacht luid."
doc = nl_tokenizer(txt)
with pytest.raises(ValueError):
list(doc.noun_chunks)
def test_chunking(nl_sample, nl_reference_chunking):
"""
Test the noun chunks of a sample text. Uses a sample.
The sample text simulates a Doc object as would be produced by nl_core_news_md.
"""
chunks = [s.text.lower() for s in nl_sample.noun_chunks]
assert chunks == nl_reference_chunking


@ -4,12 +4,13 @@ from spacy.util import get_lang_class
# fmt: off
# Only include languages with no external dependencies
# excluded: ja, ru, th, uk, vi, zh
LANGUAGES = ["af", "ar", "bg", "bn", "ca", "cs", "da", "de", "el", "en", "es",
"et", "fa", "fi", "fr", "ga", "he", "hi", "hr", "hu", "id", "is",
"it", "kn", "lt", "lv", "nb", "nl", "pl", "pt", "ro", "si", "sk",
"sl", "sq", "sr", "sv", "ta", "te", "tl", "tn", "tr", "tt", "ur",
"yo"]
# excluded: ja, ko, th, vi, zh
LANGUAGES = ["af", "am", "ar", "az", "bg", "bn", "ca", "cs", "da", "de", "el",
"en", "es", "et", "eu", "fa", "fi", "fr", "ga", "gu", "he", "hi",
"hr", "hu", "hy", "id", "is", "it", "kn", "ky", "lb", "lt", "lv",
"mk", "ml", "mr", "nb", "ne", "nl", "pl", "pt", "ro", "ru", "sa",
"si", "sk", "sl", "sq", "sr", "sv", "ta", "te", "ti", "tl", "tn",
"tr", "tt", "uk", "ur", "xx", "yo"]
# fmt: on


@ -11,6 +11,7 @@ def test_build_dependencies():
"mock",
"flake8",
"hypothesis",
"pre-commit",
]
# ignore language-specific packages that shouldn't be installed by all
libs_ignore_setup = [


@ -329,8 +329,8 @@ def test_ner_constructor(en_vocab):
}
cfg = {"model": DEFAULT_NER_MODEL}
model = registry.resolve(cfg, validate=True)["model"]
ner_1 = EntityRecognizer(en_vocab, model, **config)
ner_2 = EntityRecognizer(en_vocab, model)
EntityRecognizer(en_vocab, model, **config)
EntityRecognizer(en_vocab, model)
def test_ner_before_ruler():


@ -224,8 +224,8 @@ def test_parser_constructor(en_vocab):
}
cfg = {"model": DEFAULT_PARSER_MODEL}
model = registry.resolve(cfg, validate=True)["model"]
parser_1 = DependencyParser(en_vocab, model, **config)
parser_2 = DependencyParser(en_vocab, model)
DependencyParser(en_vocab, model, **config)
DependencyParser(en_vocab, model)
@pytest.mark.parametrize("pipe_name", ["parser", "beam_parser"])


@ -74,7 +74,7 @@ def test_annotates_on_update():
nlp.add_pipe("assert_sents")
# When the pipeline runs, annotations are set
doc = nlp("This is a sentence.")
nlp("This is a sentence.")
examples = []
for text in ["a a", "b b", "c c"]:


@ -110,4 +110,4 @@ def test_lemmatizer_serialize(nlp):
assert doc2[0].lemma_ == "cope"
# Make sure that lemmatizer cache can be pickled
b = pickle.dumps(lemmatizer2)
pickle.dumps(lemmatizer2)


@ -52,7 +52,7 @@ def test_cant_add_pipe_first_and_last(nlp):
nlp.add_pipe("new_pipe", first=True, last=True)
@pytest.mark.parametrize("name", ["my_component"])
@pytest.mark.parametrize("name", ["test_get_pipe"])
def test_get_pipe(nlp, name):
with pytest.raises(KeyError):
nlp.get_pipe(name)
@ -62,7 +62,7 @@ def test_get_pipe(nlp, name):
@pytest.mark.parametrize(
"name,replacement,invalid_replacement",
[("my_component", "other_pipe", lambda doc: doc)],
[("test_replace_pipe", "other_pipe", lambda doc: doc)],
)
def test_replace_pipe(nlp, name, replacement, invalid_replacement):
with pytest.raises(ValueError):
@ -435,8 +435,8 @@ def test_update_with_annotates():
return component
c1 = Language.component(f"{name}1", func=make_component(f"{name}1"))
c2 = Language.component(f"{name}2", func=make_component(f"{name}2"))
Language.component(f"{name}1", func=make_component(f"{name}1"))
Language.component(f"{name}2", func=make_component(f"{name}2"))
components = set([f"{name}1", f"{name}2"])


@ -183,3 +183,24 @@ def test_ngram_suggester(en_tokenizer):
docs = [en_tokenizer(text) for text in ["", "", ""]]
ngrams = ngram_suggester(docs)
assert_equal(ngrams.lengths, [len(doc) for doc in docs])
def test_ngram_sizes(en_tokenizer):
# test that the range suggester works well
size_suggester = registry.misc.get("spacy.ngram_suggester.v1")(sizes=[1, 2, 3])
suggester_factory = registry.misc.get("spacy.ngram_range_suggester.v1")
range_suggester = suggester_factory(min_size=1, max_size=3)
docs = [
en_tokenizer(text) for text in ["a", "a b", "a b c", "a b c d", "a b c d e"]
]
ngrams_1 = size_suggester(docs)
ngrams_2 = range_suggester(docs)
assert_equal(ngrams_1.lengths, [1, 3, 6, 9, 12])
assert_equal(ngrams_1.lengths, ngrams_2.lengths)
assert_equal(ngrams_1.data, ngrams_2.data)
# one more variation
suggester_factory = registry.misc.get("spacy.ngram_range_suggester.v1")
range_suggester = suggester_factory(min_size=2, max_size=4)
ngrams_3 = range_suggester(docs)
assert_equal(ngrams_3.lengths, [0, 1, 3, 6, 9])


@ -182,6 +182,17 @@ def test_overfitting_IO():
assert_equal(batch_deps_1, batch_deps_2)
assert_equal(batch_deps_1, no_batch_deps)
# Try to unlearn the first 'N' tag with negative annotation
neg_ex = Example.from_dict(nlp.make_doc(test_text), {"tags": ["!N", "V", "J", "N"]})
for i in range(20):
losses = {}
nlp.update([neg_ex], sgd=optimizer, losses=losses)
# test the "untrained" tag
doc3 = nlp(test_text)
assert doc3[0].tag_ != "N"
def test_tagger_requires_labels():
nlp = English()


@ -69,9 +69,12 @@ def test_issue5082():
def test_issue5137():
@Language.factory("my_component")
factory_name = "test_issue5137"
pipe_name = "my_component"
@Language.factory(factory_name)
class MyComponent:
def __init__(self, nlp, name="my_component", categories="all_categories"):
def __init__(self, nlp, name=pipe_name, categories="all_categories"):
self.nlp = nlp
self.categories = categories
self.name = name
@ -86,13 +89,13 @@ def test_issue5137():
pass
nlp = English()
my_component = nlp.add_pipe("my_component")
my_component = nlp.add_pipe(factory_name, name=pipe_name)
assert my_component.categories == "all_categories"
with make_tempdir() as tmpdir:
nlp.to_disk(tmpdir)
overrides = {"components": {"my_component": {"categories": "my_categories"}}}
overrides = {"components": {pipe_name: {"categories": "my_categories"}}}
nlp2 = spacy.load(tmpdir, config=overrides)
assert nlp2.get_pipe("my_component").categories == "my_categories"
assert nlp2.get_pipe(pipe_name).categories == "my_categories"
def test_issue5141(en_vocab):


@ -0,0 +1,281 @@
from spacy.cli.evaluate import print_textcats_auc_per_cat, print_prf_per_type
from spacy.lang.en import English
from spacy.training import Example
from spacy.tokens.doc import Doc
from spacy.vocab import Vocab
from spacy.kb import KnowledgeBase
from spacy.pipeline._parser_internals.arc_eager import ArcEager
from spacy.util import load_config_from_str, load_config
from spacy.cli.init_config import fill_config
from thinc.api import Config
from wasabi import msg
from ..util import make_tempdir
def test_issue7019():
scores = {"LABEL_A": 0.39829102, "LABEL_B": 0.938298329382, "LABEL_C": None}
print_textcats_auc_per_cat(msg, scores)
scores = {
"LABEL_A": {"p": 0.3420302, "r": 0.3929020, "f": 0.49823928932},
"LABEL_B": {"p": None, "r": None, "f": None},
}
print_prf_per_type(msg, scores, name="foo", type="bar")
CONFIG_7029 = """
[nlp]
lang = "en"
pipeline = ["tok2vec", "tagger"]
[components]
[components.tok2vec]
factory = "tok2vec"
[components.tok2vec.model]
@architectures = "spacy.Tok2Vec.v1"
[components.tok2vec.model.embed]
@architectures = "spacy.MultiHashEmbed.v1"
width = ${components.tok2vec.model.encode:width}
attrs = ["NORM","PREFIX","SUFFIX","SHAPE"]
rows = [5000,2500,2500,2500]
include_static_vectors = false
[components.tok2vec.model.encode]
@architectures = "spacy.MaxoutWindowEncoder.v1"
width = 96
depth = 4
window_size = 1
maxout_pieces = 3
[components.tagger]
factory = "tagger"
[components.tagger.model]
@architectures = "spacy.Tagger.v1"
nO = null
[components.tagger.model.tok2vec]
@architectures = "spacy.Tok2VecListener.v1"
width = ${components.tok2vec.model.encode:width}
upstream = "*"
"""
def test_issue7029():
"""Test that an empty document doesn't mess up an entire batch."""
TRAIN_DATA = [
("I like green eggs", {"tags": ["N", "V", "J", "N"]}),
("Eat blue ham", {"tags": ["V", "J", "N"]}),
]
nlp = English.from_config(load_config_from_str(CONFIG_7029))
train_examples = []
for t in TRAIN_DATA:
train_examples.append(Example.from_dict(nlp.make_doc(t[0]), t[1]))
optimizer = nlp.initialize(get_examples=lambda: train_examples)
for i in range(50):
losses = {}
nlp.update(train_examples, sgd=optimizer, losses=losses)
texts = ["first", "second", "third", "fourth", "and", "then", "some", ""]
docs1 = list(nlp.pipe(texts, batch_size=1))
docs2 = list(nlp.pipe(texts, batch_size=4))
assert [doc[0].tag_ for doc in docs1[:-1]] == [doc[0].tag_ for doc in docs2[:-1]]
def test_issue7055():
"""Test that fill-config doesn't turn sourced components into factories."""
source_cfg = {
"nlp": {"lang": "en", "pipeline": ["tok2vec", "tagger"]},
"components": {
"tok2vec": {"factory": "tok2vec"},
"tagger": {"factory": "tagger"},
},
}
source_nlp = English.from_config(source_cfg)
with make_tempdir() as dir_path:
# We need to create a loadable source pipeline
source_path = dir_path / "test_model"
source_nlp.to_disk(source_path)
base_cfg = {
"nlp": {"lang": "en", "pipeline": ["tok2vec", "tagger", "ner"]},
"components": {
"tok2vec": {"source": str(source_path)},
"tagger": {"source": str(source_path)},
"ner": {"factory": "ner"},
},
}
base_cfg = Config(base_cfg)
base_path = dir_path / "base.cfg"
base_cfg.to_disk(base_path)
output_path = dir_path / "config.cfg"
fill_config(output_path, base_path, silent=True)
filled_cfg = load_config(output_path)
assert filled_cfg["components"]["tok2vec"]["source"] == str(source_path)
assert filled_cfg["components"]["tagger"]["source"] == str(source_path)
assert filled_cfg["components"]["ner"]["factory"] == "ner"
assert "model" in filled_cfg["components"]["ner"]
def test_issue7056():
"""Test that the Unshift transition works properly, and doesn't cause
sentence segmentation errors."""
vocab = Vocab()
ae = ArcEager(
vocab.strings, ArcEager.get_actions(left_labels=["amod"], right_labels=["pobj"])
)
doc = Doc(vocab, words="Severe pain , after trauma".split())
state = ae.init_batch([doc])[0]
ae.apply_transition(state, "S")
ae.apply_transition(state, "L-amod")
ae.apply_transition(state, "S")
ae.apply_transition(state, "S")
ae.apply_transition(state, "S")
ae.apply_transition(state, "R-pobj")
ae.apply_transition(state, "D")
ae.apply_transition(state, "D")
ae.apply_transition(state, "D")
assert not state.eol()
def test_partial_links():
# Test that having some entities on the doc without gold links doesn't crash
TRAIN_DATA = [
(
"Russ Cochran his reprints include EC Comics.",
{
"links": {(0, 12): {"Q2146908": 1.0}},
"entities": [(0, 12, "PERSON")],
"sent_starts": [1, -1, 0, 0, 0, 0, 0, 0],
},
)
]
nlp = English()
vector_length = 3
train_examples = []
for text, annotation in TRAIN_DATA:
doc = nlp(text)
train_examples.append(Example.from_dict(doc, annotation))
def create_kb(vocab):
# create artificial KB
mykb = KnowledgeBase(vocab, entity_vector_length=vector_length)
mykb.add_entity(entity="Q2146908", freq=12, entity_vector=[6, -4, 3])
mykb.add_alias("Russ Cochran", ["Q2146908"], [0.9])
return mykb
# Create and train the Entity Linker
entity_linker = nlp.add_pipe("entity_linker", last=True)
entity_linker.set_kb(create_kb)
optimizer = nlp.initialize(get_examples=lambda: train_examples)
for i in range(2):
losses = {}
nlp.update(train_examples, sgd=optimizer, losses=losses)
# adding additional components that are required for the entity_linker
nlp.add_pipe("sentencizer", first=True)
patterns = [
{"label": "PERSON", "pattern": [{"LOWER": "russ"}, {"LOWER": "cochran"}]},
{"label": "ORG", "pattern": [{"LOWER": "ec"}, {"LOWER": "comics"}]},
]
ruler = nlp.add_pipe("entity_ruler", before="entity_linker")
ruler.add_patterns(patterns)
# this will run the pipeline on the examples and shouldn't crash
results = nlp.evaluate(train_examples)
assert "PERSON" in results["ents_per_type"]
assert "PERSON" in results["nel_f_per_type"]
assert "ORG" in results["ents_per_type"]
assert "ORG" not in results["nel_f_per_type"]
def test_issue7065():
text = "Kathleen Battle sang in Mahler 's Symphony No. 8 at the Cincinnati Symphony Orchestra 's May Festival."
nlp = English()
nlp.add_pipe("sentencizer")
ruler = nlp.add_pipe("entity_ruler")
patterns = [
{
"label": "THING",
"pattern": [
{"LOWER": "symphony"},
{"LOWER": "no"},
{"LOWER": "."},
{"LOWER": "8"},
],
}
]
ruler.add_patterns(patterns)
doc = nlp(text)
sentences = [s for s in doc.sents]
assert len(sentences) == 2
sent0 = sentences[0]
ent = doc.ents[0]
assert ent.start < sent0.end < ent.end
assert sentences.index(ent.sent) == 0
def test_issue7065_b():
# Test that the NEL doesn't crash when an entity crosses a sentence boundary
nlp = English()
vector_length = 3
nlp.add_pipe("sentencizer")
text = "Mahler 's Symphony No. 8 was beautiful."
entities = [(0, 6, "PERSON"), (10, 24, "WORK")]
links = {
(0, 6): {"Q7304": 1.0, "Q270853": 0.0},
(10, 24): {"Q7304": 0.0, "Q270853": 1.0},
}
sent_starts = [1, -1, 0, 0, 0, 0, 0, 0, 0]
doc = nlp(text)
example = Example.from_dict(
doc, {"entities": entities, "links": links, "sent_starts": sent_starts}
)
train_examples = [example]
def create_kb(vocab):
# create artificial KB
mykb = KnowledgeBase(vocab, entity_vector_length=vector_length)
mykb.add_entity(entity="Q270853", freq=12, entity_vector=[9, 1, -7])
mykb.add_alias(
alias="No. 8",
entities=["Q270853"],
probabilities=[1.0],
)
mykb.add_entity(entity="Q7304", freq=12, entity_vector=[6, -4, 3])
mykb.add_alias(
alias="Mahler",
entities=["Q7304"],
probabilities=[1.0],
)
return mykb
# Create the Entity Linker component and add it to the pipeline
entity_linker = nlp.add_pipe("entity_linker", last=True)
entity_linker.set_kb(create_kb)
# train the NEL pipe
optimizer = nlp.initialize(get_examples=lambda: train_examples)
for i in range(2):
losses = {}
nlp.update(train_examples, sgd=optimizer, losses=losses)
# Add a custom rule-based component to mimic NER
patterns = [
{"label": "PERSON", "pattern": [{"LOWER": "mahler"}]},
{
"label": "WORK",
"pattern": [
{"LOWER": "symphony"},
{"LOWER": "no"},
{"LOWER": "."},
{"LOWER": "8"},
],
},
]
ruler = nlp.add_pipe("entity_ruler", before="entity_linker")
ruler.add_patterns(patterns)
# test the trained model - this should not throw E148
doc = nlp(text)
assert doc

@@ -1,12 +0,0 @@
from spacy.cli.evaluate import print_textcats_auc_per_cat, print_prf_per_type
from wasabi import msg
def test_issue7019():
scores = {"LABEL_A": 0.39829102, "LABEL_B": 0.938298329382, "LABEL_C": None}
print_textcats_auc_per_cat(msg, scores)
scores = {
"LABEL_A": {"p": 0.3420302, "r": 0.3929020, "f": 0.49823928932},
"LABEL_B": {"p": None, "r": None, "f": None},
}
print_prf_per_type(msg, scores, name="foo", type="bar")

@@ -1,66 +0,0 @@
from spacy.lang.en import English
from spacy.training import Example
from spacy.util import load_config_from_str
CONFIG = """
[nlp]
lang = "en"
pipeline = ["tok2vec", "tagger"]
[components]
[components.tok2vec]
factory = "tok2vec"
[components.tok2vec.model]
@architectures = "spacy.Tok2Vec.v1"
[components.tok2vec.model.embed]
@architectures = "spacy.MultiHashEmbed.v1"
width = ${components.tok2vec.model.encode:width}
attrs = ["NORM","PREFIX","SUFFIX","SHAPE"]
rows = [5000,2500,2500,2500]
include_static_vectors = false
[components.tok2vec.model.encode]
@architectures = "spacy.MaxoutWindowEncoder.v1"
width = 96
depth = 4
window_size = 1
maxout_pieces = 3
[components.tagger]
factory = "tagger"
[components.tagger.model]
@architectures = "spacy.Tagger.v1"
nO = null
[components.tagger.model.tok2vec]
@architectures = "spacy.Tok2VecListener.v1"
width = ${components.tok2vec.model.encode:width}
upstream = "*"
"""
TRAIN_DATA = [
("I like green eggs", {"tags": ["N", "V", "J", "N"]}),
("Eat blue ham", {"tags": ["V", "J", "N"]}),
]
def test_issue7029():
"""Test that an empty document doesn't mess up an entire batch."""
nlp = English.from_config(load_config_from_str(CONFIG))
train_examples = []
for t in TRAIN_DATA:
train_examples.append(Example.from_dict(nlp.make_doc(t[0]), t[1]))
optimizer = nlp.initialize(get_examples=lambda: train_examples)
for i in range(50):
losses = {}
nlp.update(train_examples, sgd=optimizer, losses=losses)
texts = ["first", "second", "third", "fourth", "and", "then", "some", ""]
docs1 = list(nlp.pipe(texts, batch_size=1))
docs2 = list(nlp.pipe(texts, batch_size=4))
assert [doc[0].tag_ for doc in docs1[:-1]] == [doc[0].tag_ for doc in docs2[:-1]]

@@ -1,40 +0,0 @@
from spacy.cli.init_config import fill_config
from spacy.util import load_config
from spacy.lang.en import English
from thinc.api import Config
from ..util import make_tempdir
def test_issue7055():
"""Test that fill-config doesn't turn sourced components into factories."""
source_cfg = {
"nlp": {"lang": "en", "pipeline": ["tok2vec", "tagger"]},
"components": {
"tok2vec": {"factory": "tok2vec"},
"tagger": {"factory": "tagger"},
},
}
source_nlp = English.from_config(source_cfg)
with make_tempdir() as dir_path:
# We need to create a loadable source pipeline
source_path = dir_path / "test_model"
source_nlp.to_disk(source_path)
base_cfg = {
"nlp": {"lang": "en", "pipeline": ["tok2vec", "tagger", "ner"]},
"components": {
"tok2vec": {"source": str(source_path)},
"tagger": {"source": str(source_path)},
"ner": {"factory": "ner"},
},
}
base_cfg = Config(base_cfg)
base_path = dir_path / "base.cfg"
base_cfg.to_disk(base_path)
output_path = dir_path / "config.cfg"
fill_config(output_path, base_path, silent=True)
filled_cfg = load_config(output_path)
assert filled_cfg["components"]["tok2vec"]["source"] == str(source_path)
assert filled_cfg["components"]["tagger"]["source"] == str(source_path)
assert filled_cfg["components"]["ner"]["factory"] == "ner"
assert "model" in filled_cfg["components"]["ner"]

@@ -1,24 +0,0 @@
from spacy.tokens.doc import Doc
from spacy.vocab import Vocab
from spacy.pipeline._parser_internals.arc_eager import ArcEager
def test_issue7056():
"""Test that the Unshift transition works properly, and doesn't cause
sentence segmentation errors."""
vocab = Vocab()
ae = ArcEager(
vocab.strings, ArcEager.get_actions(left_labels=["amod"], right_labels=["pobj"])
)
doc = Doc(vocab, words="Severe pain , after trauma".split())
state = ae.init_batch([doc])[0]
ae.apply_transition(state, "S")
ae.apply_transition(state, "L-amod")
ae.apply_transition(state, "S")
ae.apply_transition(state, "S")
ae.apply_transition(state, "S")
ae.apply_transition(state, "R-pobj")
ae.apply_transition(state, "D")
ae.apply_transition(state, "D")
ae.apply_transition(state, "D")
assert not state.eol()

@@ -1,54 +0,0 @@
from spacy.kb import KnowledgeBase
from spacy.training import Example
from spacy.lang.en import English
# fmt: off
TRAIN_DATA = [
("Russ Cochran his reprints include EC Comics.",
{"links": {(0, 12): {"Q2146908": 1.0}},
"entities": [(0, 12, "PERSON")],
"sent_starts": [1, -1, 0, 0, 0, 0, 0, 0]})
]
# fmt: on
def test_partial_links():
# Test that having some entities on the doc without gold links doesn't crash
nlp = English()
vector_length = 3
train_examples = []
for text, annotation in TRAIN_DATA:
doc = nlp(text)
train_examples.append(Example.from_dict(doc, annotation))
def create_kb(vocab):
# create artificial KB
mykb = KnowledgeBase(vocab, entity_vector_length=vector_length)
mykb.add_entity(entity="Q2146908", freq=12, entity_vector=[6, -4, 3])
mykb.add_alias("Russ Cochran", ["Q2146908"], [0.9])
return mykb
# Create and train the Entity Linker
entity_linker = nlp.add_pipe("entity_linker", last=True)
entity_linker.set_kb(create_kb)
optimizer = nlp.initialize(get_examples=lambda: train_examples)
for i in range(2):
losses = {}
nlp.update(train_examples, sgd=optimizer, losses=losses)
# adding additional components that are required for the entity_linker
nlp.add_pipe("sentencizer", first=True)
patterns = [
{"label": "PERSON", "pattern": [{"LOWER": "russ"}, {"LOWER": "cochran"}]},
{"label": "ORG", "pattern": [{"LOWER": "ec"}, {"LOWER": "comics"}]},
]
ruler = nlp.add_pipe("entity_ruler", before="entity_linker")
ruler.add_patterns(patterns)
# this will run the pipeline on the examples and shouldn't crash
results = nlp.evaluate(train_examples)
assert "PERSON" in results["ents_per_type"]
assert "PERSON" in results["nel_f_per_type"]
assert "ORG" in results["ents_per_type"]
assert "ORG" not in results["nel_f_per_type"]

@@ -1,97 +0,0 @@
from spacy.kb import KnowledgeBase
from spacy.lang.en import English
from spacy.training import Example
def test_issue7065():
text = "Kathleen Battle sang in Mahler 's Symphony No. 8 at the Cincinnati Symphony Orchestra 's May Festival."
nlp = English()
nlp.add_pipe("sentencizer")
ruler = nlp.add_pipe("entity_ruler")
patterns = [
{
"label": "THING",
"pattern": [
{"LOWER": "symphony"},
{"LOWER": "no"},
{"LOWER": "."},
{"LOWER": "8"},
],
}
]
ruler.add_patterns(patterns)
doc = nlp(text)
sentences = [s for s in doc.sents]
assert len(sentences) == 2
sent0 = sentences[0]
ent = doc.ents[0]
assert ent.start < sent0.end < ent.end
assert sentences.index(ent.sent) == 0
def test_issue7065_b():
# Test that the NEL doesn't crash when an entity crosses a sentence boundary
nlp = English()
vector_length = 3
nlp.add_pipe("sentencizer")
text = "Mahler 's Symphony No. 8 was beautiful."
entities = [(0, 6, "PERSON"), (10, 24, "WORK")]
links = {
(0, 6): {"Q7304": 1.0, "Q270853": 0.0},
(10, 24): {"Q7304": 0.0, "Q270853": 1.0},
}
sent_starts = [1, -1, 0, 0, 0, 0, 0, 0, 0]
doc = nlp(text)
example = Example.from_dict(
doc, {"entities": entities, "links": links, "sent_starts": sent_starts}
)
train_examples = [example]
def create_kb(vocab):
# create artificial KB
mykb = KnowledgeBase(vocab, entity_vector_length=vector_length)
mykb.add_entity(entity="Q270853", freq=12, entity_vector=[9, 1, -7])
mykb.add_alias(
alias="No. 8",
entities=["Q270853"],
probabilities=[1.0],
)
mykb.add_entity(entity="Q7304", freq=12, entity_vector=[6, -4, 3])
mykb.add_alias(
alias="Mahler",
entities=["Q7304"],
probabilities=[1.0],
)
return mykb
# Create the Entity Linker component and add it to the pipeline
entity_linker = nlp.add_pipe("entity_linker", last=True)
entity_linker.set_kb(create_kb)
# train the NEL pipe
optimizer = nlp.initialize(get_examples=lambda: train_examples)
for i in range(2):
losses = {}
nlp.update(train_examples, sgd=optimizer, losses=losses)
# Add a custom rule-based component to mimic NER
patterns = [
{"label": "PERSON", "pattern": [{"LOWER": "mahler"}]},
{
"label": "WORK",
"pattern": [
{"LOWER": "symphony"},
{"LOWER": "no"},
{"LOWER": "."},
{"LOWER": "8"},
],
},
]
ruler = nlp.add_pipe("entity_ruler", before="entity_linker")
ruler.add_patterns(patterns)
# test the trained model - this should not throw E148
doc = nlp(text)
assert doc

@@ -60,12 +60,6 @@ def taggers(en_vocab):
@pytest.mark.parametrize("Parser", test_parsers)
def test_serialize_parser_roundtrip_bytes(en_vocab, Parser):
config = {
"update_with_oracle_cut_size": 100,
"beam_width": 1,
"beam_update_prob": 1.0,
"beam_density": 0.0,
}
cfg = {"model": DEFAULT_PARSER_MODEL}
model = registry.resolve(cfg, validate=True)["model"]
parser = Parser(en_vocab, model)

@@ -440,7 +440,7 @@ def test_init_config(lang, pipeline, optimize, pretraining):
assert isinstance(config, Config)
if pretraining:
config["paths"]["raw_text"] = "my_data.jsonl"
nlp = load_model_from_config(config, auto_fill=True)
load_model_from_config(config, auto_fill=True)
def test_model_recommendations():

@@ -211,7 +211,7 @@ def test_empty_docs(model_func, kwargs):
def test_init_extract_spans():
model = extract_spans().initialize()
extract_spans().initialize()
def test_extract_spans_span_indices():

@ -71,10 +71,13 @@ def init_nlp(config: Config, *, use_gpu: int = -1) -> "Language":
nlp._link_components()
with nlp.select_pipes(disable=[*frozen_components, *resume_components]):
if T["max_epochs"] == -1:
sample_size = 100
logger.debug(
"Due to streamed train corpus, using only first 100 examples for initialization. If necessary, provide all labels in [initialize]. More info: https://spacy.io/api/cli#init_labels"
f"Due to streamed train corpus, using only first {sample_size} "
f"examples for initialization. If necessary, provide all labels "
f"in [initialize]. More info: https://spacy.io/api/cli#init_labels"
)
nlp.initialize(lambda: islice(train_corpus(nlp), 100), sgd=optimizer)
nlp.initialize(lambda: islice(train_corpus(nlp), sample_size), sgd=optimizer)
else:
nlp.initialize(lambda: train_corpus(nlp), sgd=optimizer)
logger.info(f"Initialized pipeline components: {nlp.pipe_names}")
@ -86,7 +89,6 @@ def init_nlp(config: Config, *, use_gpu: int = -1) -> "Language":
# Don't warn about components not in the pipeline
if listener not in nlp.pipe_names:
continue
if listener in frozen_components and name not in frozen_components:
logger.warning(Warnings.W087.format(name=name, listener=listener))
# We always check this regardless, in case user freezes tok2vec
@@ -154,6 +156,8 @@ def load_vectors_into_model(
logger.warning(Warnings.W112.format(name=name))
nlp.vocab.vectors = vectors_nlp.vocab.vectors
for lex in nlp.vocab:
lex.rank = nlp.vocab.vectors.key2row.get(lex.orth, OOV_RANK)
if add_strings:
# I guess we should add the strings from the vectors_nlp model?
# E.g. if someone does a similarity query, they might expect the strings.

@@ -451,3 +451,24 @@ integers. The array has two columns, indicating the start and end position.
| ----------- | -------------------------------------------------------------------------------------------------------------------- |
| `sizes` | The phrase lengths to suggest. For example, `[1, 2]` will suggest phrases consisting of 1 or 2 tokens. ~~List[int]~~ |
| **CREATES** | The suggester function. ~~Callable[[List[Doc]], Ragged]~~ |
### spacy.ngram_range_suggester.v1 {#ngram_range_suggester}
> #### Example Config
>
> ```ini
> [components.spancat.suggester]
> @misc = "spacy.ngram_range_suggester.v1"
> min_size = 2
> max_size = 4
> ```
Suggest all spans of at least length `min_size` and at most length `max_size`
(both inclusive). Spans are returned as a ragged array of integers. The array
has two columns, indicating the start and end position.
| Name | Description |
| ----------- | ------------------------------------------------------------ |
| `min_size`  | The minimal phrase length to suggest (inclusive). ~~int~~    |
| `max_size`  | The maximal phrase length to suggest (inclusive). ~~int~~    |
| **CREATES** | The suggester function. ~~Callable[[List[Doc]], Ragged]~~ |
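As a rough sanity check (not part of the documented API), the registered factory can be resolved from `registry.misc` and called on a `Doc` directly. The sketch below assumes a blank English pipeline and thinc's `Ragged` attributes `data` and `lengths`; it simply illustrates that both `min_size` and `max_size` bounds are inclusive.

```python
import spacy
from spacy import registry

nlp = spacy.blank("en")
doc = nlp("flat white with oat milk")  # 5 tokens

# Resolve the registered factory and build a suggester for 2- to 4-token spans
make_suggester = registry.misc.get("spacy.ngram_range_suggester.v1")
suggester = make_suggester(min_size=2, max_size=4)

spans = suggester([doc])  # thinc Ragged of (start, end) token offsets
print(spans.lengths)      # number of suggested spans for the doc
print(spans.data)         # 2-, 3- and 4-token spans: max_size is inclusive
```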