diff --git a/.github/contributors/jmyerston.md b/.github/contributors/jmyerston.md new file mode 100644 index 000000000..be5db5453 --- /dev/null +++ b/.github/contributors/jmyerston.md @@ -0,0 +1,106 @@ +# spaCy contributor agreement + +This spaCy Contributor Agreement (**"SCA"**) is based on the +[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf). +The SCA applies to any contribution that you make to any product or project +managed by us (the **"project"**), and sets out the intellectual property rights +you grant to us in the contributed materials. The term **"us"** shall mean +[ExplosionAI GmbH](https://explosion.ai/legal). The term +**"you"** shall mean the person or entity identified below. + +If you agree to be bound by these terms, fill in the information requested +below and include the filled-in version with your first pull request, under the +folder [`.github/contributors/`](/.github/contributors/). The name of the file +should be your GitHub username, with the extension `.md`. For example, the user +example_user would create the file `.github/contributors/example_user.md`. + +Read this agreement carefully before signing. These terms and conditions +constitute a binding legal agreement. + +## Contributor Agreement + +1. The term "contribution" or "contributed materials" means any source code, +object code, patch, tool, sample, graphic, specification, manual, +documentation, or any other material posted or submitted by you to the project. + +2. With respect to any worldwide copyrights, or copyright applications and +registrations, in your contribution: + + * you hereby assign to us joint ownership, and to the extent that such + assignment is or becomes invalid, ineffective or unenforceable, you hereby + grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge, + royalty-free, unrestricted license to exercise all rights under those + copyrights. This includes, at our option, the right to sublicense these same + rights to third parties through multiple levels of sublicensees or other + licensing arrangements; + + * you agree that each of us can do all things in relation to your + contribution as if each of us were the sole owners, and if one of us makes + a derivative work of your contribution, the one who makes the derivative + work (or has it made will be the sole owner of that derivative work; + + * you agree that you will not assert any moral rights in your contribution + against us, our licensees or transferees; + + * you agree that we may register a copyright in your contribution and + exercise all ownership rights associated with it; and + + * you agree that neither of us has any duty to consult with, obtain the + consent of, pay or render an accounting to the other for any use or + distribution of your contribution. + +3. With respect to any patents you own, or that you can license without payment +to any third party, you hereby grant to us a perpetual, irrevocable, +non-exclusive, worldwide, no-charge, royalty-free license to: + + * make, have made, use, sell, offer to sell, import, and otherwise transfer + your contribution in whole or in part, alone or in combination with or + included in any product, work or materials arising out of the project to + which your contribution was submitted, and + + * at our option, to sublicense these same rights to third parties through + multiple levels of sublicensees or other licensing arrangements. + +4. 
Except as set out above, you keep all right, title, and interest in your +contribution. The rights that you grant to us under these terms are effective +on the date you first submitted a contribution to us, even if your submission +took place before the date you sign these terms. + +5. You covenant, represent, warrant and agree that: + + * Each contribution that you submit is and shall be an original work of + authorship and you can legally grant the rights set out in this SCA; + + * to the best of your knowledge, each contribution will not violate any + third party's copyrights, trademarks, patents, or other intellectual + property rights; and + + * each contribution shall be in compliance with U.S. export control laws and + other applicable export and import laws. You agree to notify us if you + become aware of any circumstance which would make any of the foregoing + representations inaccurate in any respect. We may publicly disclose your + participation in the project, including the fact that you have signed the SCA. + +6. This SCA is governed by the laws of the State of California and applicable +U.S. Federal law. Any choice of law rules will not apply. + +7. Please place an “x” on one of the applicable statement below. Please do NOT +mark both statements: + + * [x] I am signing on behalf of myself as an individual and no other person + or entity, including my employer, has or will have rights with respect to my + contributions. + + * [ ] I am signing on behalf of my employer or a legal entity and I have the + actual authority to contractually bind that entity. + +## Contributor Details + +| Field | Entry | +| ----------------------------- | ----------------------------------- | +| Name | Jacobo Myerston | +| Company name (if applicable) | University of California, San Diego | +| Title or role (if applicable) | Academic | +| Date | 07/05/2021 | +| GitHub username | jmyerston | +| Website (optional) | diogenet.ucsd.edu | diff --git a/.github/contributors/julien-talkair.md b/.github/contributors/julien-talkair.md new file mode 100644 index 000000000..f8a1933b2 --- /dev/null +++ b/.github/contributors/julien-talkair.md @@ -0,0 +1,106 @@ +# spaCy contributor agreement + +This spaCy Contributor Agreement (**"SCA"**) is based on the +[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf). +The SCA applies to any contribution that you make to any product or project +managed by us (the **"project"**), and sets out the intellectual property rights +you grant to us in the contributed materials. The term **"us"** shall mean +[ExplosionAI GmbH](https://explosion.ai/legal). The term +**"you"** shall mean the person or entity identified below. + +If you agree to be bound by these terms, fill in the information requested +below and include the filled-in version with your first pull request, under the +folder [`.github/contributors/`](/.github/contributors/). The name of the file +should be your GitHub username, with the extension `.md`. For example, the user +example_user would create the file `.github/contributors/example_user.md`. + +Read this agreement carefully before signing. These terms and conditions +constitute a binding legal agreement. + +## Contributor Agreement + +1. The term "contribution" or "contributed materials" means any source code, +object code, patch, tool, sample, graphic, specification, manual, +documentation, or any other material posted or submitted by you to the project. + +2. 
With respect to any worldwide copyrights, or copyright applications and +registrations, in your contribution: + + * you hereby assign to us joint ownership, and to the extent that such + assignment is or becomes invalid, ineffective or unenforceable, you hereby + grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge, + royalty-free, unrestricted license to exercise all rights under those + copyrights. This includes, at our option, the right to sublicense these same + rights to third parties through multiple levels of sublicensees or other + licensing arrangements; + + * you agree that each of us can do all things in relation to your + contribution as if each of us were the sole owners, and if one of us makes + a derivative work of your contribution, the one who makes the derivative + work (or has it made will be the sole owner of that derivative work; + + * you agree that you will not assert any moral rights in your contribution + against us, our licensees or transferees; + + * you agree that we may register a copyright in your contribution and + exercise all ownership rights associated with it; and + + * you agree that neither of us has any duty to consult with, obtain the + consent of, pay or render an accounting to the other for any use or + distribution of your contribution. + +3. With respect to any patents you own, or that you can license without payment +to any third party, you hereby grant to us a perpetual, irrevocable, +non-exclusive, worldwide, no-charge, royalty-free license to: + + * make, have made, use, sell, offer to sell, import, and otherwise transfer + your contribution in whole or in part, alone or in combination with or + included in any product, work or materials arising out of the project to + which your contribution was submitted, and + + * at our option, to sublicense these same rights to third parties through + multiple levels of sublicensees or other licensing arrangements. + +4. Except as set out above, you keep all right, title, and interest in your +contribution. The rights that you grant to us under these terms are effective +on the date you first submitted a contribution to us, even if your submission +took place before the date you sign these terms. + +5. You covenant, represent, warrant and agree that: + + * Each contribution that you submit is and shall be an original work of + authorship and you can legally grant the rights set out in this SCA; + + * to the best of your knowledge, each contribution will not violate any + third party's copyrights, trademarks, patents, or other intellectual + property rights; and + + * each contribution shall be in compliance with U.S. export control laws and + other applicable export and import laws. You agree to notify us if you + become aware of any circumstance which would make any of the foregoing + representations inaccurate in any respect. We may publicly disclose your + participation in the project, including the fact that you have signed the SCA. + +6. This SCA is governed by the laws of the State of California and applicable +U.S. Federal law. Any choice of law rules will not apply. + +7. Please place an “x” on one of the applicable statement below. Please do NOT +mark both statements: + + * [ ] I am signing on behalf of myself as an individual and no other person + or entity, including my employer, has or will have rights with respect to my + contributions. + + * [x] I am signing on behalf of my employer or a legal entity and I have the + actual authority to contractually bind that entity. 
+ +## Contributor Details + +| Field | Entry | +|------------------------------- | -------------------- | +| Name | Julien Rossi | +| Company name (if applicable) | TalkAir BV | +| Title or role (if applicable) | CTO, Partner | +| Date | June 28 2021 | +| GitHub username | julien-talkair | +| Website (optional) | | diff --git a/.github/contributors/thomashacker.md b/.github/contributors/thomashacker.md new file mode 100644 index 000000000..d88727dc8 --- /dev/null +++ b/.github/contributors/thomashacker.md @@ -0,0 +1,106 @@ +# spaCy contributor agreement + +This spaCy Contributor Agreement (**"SCA"**) is based on the +[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf). +The SCA applies to any contribution that you make to any product or project +managed by us (the **"project"**), and sets out the intellectual property rights +you grant to us in the contributed materials. The term **"us"** shall mean +[ExplosionAI GmbH](https://explosion.ai/legal). The term +**"you"** shall mean the person or entity identified below. + +If you agree to be bound by these terms, fill in the information requested +below and include the filled-in version with your first pull request, under the +folder [`.github/contributors/`](/.github/contributors/). The name of the file +should be your GitHub username, with the extension `.md`. For example, the user +example_user would create the file `.github/contributors/example_user.md`. + +Read this agreement carefully before signing. These terms and conditions +constitute a binding legal agreement. + +## Contributor Agreement + +1. The term "contribution" or "contributed materials" means any source code, +object code, patch, tool, sample, graphic, specification, manual, +documentation, or any other material posted or submitted by you to the project. + +2. With respect to any worldwide copyrights, or copyright applications and +registrations, in your contribution: + + * you hereby assign to us joint ownership, and to the extent that such + assignment is or becomes invalid, ineffective or unenforceable, you hereby + grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge, + royalty-free, unrestricted license to exercise all rights under those + copyrights. This includes, at our option, the right to sublicense these same + rights to third parties through multiple levels of sublicensees or other + licensing arrangements; + + * you agree that each of us can do all things in relation to your + contribution as if each of us were the sole owners, and if one of us makes + a derivative work of your contribution, the one who makes the derivative + work (or has it made will be the sole owner of that derivative work; + + * you agree that you will not assert any moral rights in your contribution + against us, our licensees or transferees; + + * you agree that we may register a copyright in your contribution and + exercise all ownership rights associated with it; and + + * you agree that neither of us has any duty to consult with, obtain the + consent of, pay or render an accounting to the other for any use or + distribution of your contribution. + +3. 
With respect to any patents you own, or that you can license without payment +to any third party, you hereby grant to us a perpetual, irrevocable, +non-exclusive, worldwide, no-charge, royalty-free license to: + + * make, have made, use, sell, offer to sell, import, and otherwise transfer + your contribution in whole or in part, alone or in combination with or + included in any product, work or materials arising out of the project to + which your contribution was submitted, and + + * at our option, to sublicense these same rights to third parties through + multiple levels of sublicensees or other licensing arrangements. + +4. Except as set out above, you keep all right, title, and interest in your +contribution. The rights that you grant to us under these terms are effective +on the date you first submitted a contribution to us, even if your submission +took place before the date you sign these terms. + +5. You covenant, represent, warrant and agree that: + + * Each contribution that you submit is and shall be an original work of + authorship and you can legally grant the rights set out in this SCA; + + * to the best of your knowledge, each contribution will not violate any + third party's copyrights, trademarks, patents, or other intellectual + property rights; and + + * each contribution shall be in compliance with U.S. export control laws and + other applicable export and import laws. You agree to notify us if you + become aware of any circumstance which would make any of the foregoing + representations inaccurate in any respect. We may publicly disclose your + participation in the project, including the fact that you have signed the SCA. + +6. This SCA is governed by the laws of the State of California and applicable +U.S. Federal law. Any choice of law rules will not apply. + +7. Please place an “x” on one of the applicable statement below. Please do NOT +mark both statements: + + * [x] I am signing on behalf of myself as an individual and no other person + or entity, including my employer, has or will have rights with respect to my + contributions. + + * [ ] I am signing on behalf of my employer or a legal entity and I have the + actual authority to contractually bind that entity. + +## Contributor Details + +| Field | Entry | +|------------------------------- | -------------------- | +| Name | Edward Schmuhl | +| Company name (if applicable) | | +| Title or role (if applicable) | | +| Date | 09.07.2021 | +| GitHub username | thomashacker | +| Website (optional) | | diff --git a/.github/workflows/autoblack.yml b/.github/workflows/autoblack.yml index 5a2fd6210..8d0282650 100644 --- a/.github/workflows/autoblack.yml +++ b/.github/workflows/autoblack.yml @@ -9,6 +9,7 @@ on: jobs: autoblack: + if: github.repository_owner == 'explosion' runs-on: ubuntu-latest steps: - uses: actions/checkout@v2 diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml new file mode 100644 index 000000000..a7a12fd24 --- /dev/null +++ b/.pre-commit-config.yaml @@ -0,0 +1,12 @@ +repos: +- repo: https://github.com/ambv/black + rev: 21.6b0 + hooks: + - id: black + language_version: python3.7 +- repo: https://gitlab.com/pycqa/flake8 + rev: 3.9.2 + hooks: + - id: flake8 + args: + - "--config=setup.cfg" diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 682e5134c..3a94b9b67 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -177,6 +177,15 @@ tools installed. 
**⚠️ Note that formatting and linting is currently only possible for Python modules in `.py` files, not Cython modules in `.pyx` and `.pxd` files.** +### Pre-Commit Hooks + +After cloning the repo, after installing the packages from `requirements.txt`, enter the repo folder and run `pre-commit install`. +Each time a `git commit` is initiated, `black` and `flake8` will run automatically on the modified files only. + +In case of error, or when `black` modified a file, the modified file needs to be `git add` once again and a new +`git commit` has to be issued. + + ### Code formatting [`black`](https://github.com/ambv/black) is an opinionated Python code diff --git a/pyproject.toml b/pyproject.toml index 6d2dd2030..07091123a 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -5,7 +5,7 @@ requires = [ "cymem>=2.0.2,<2.1.0", "preshed>=3.0.2,<3.1.0", "murmurhash>=0.28.0,<1.1.0", - "thinc>=8.0.7,<8.1.0", + "thinc>=8.0.8,<8.1.0", "blis>=0.4.0,<0.8.0", "pathy", "numpy>=1.15.0", diff --git a/requirements.txt b/requirements.txt index 0749ec4e8..ad8c70318 100644 --- a/requirements.txt +++ b/requirements.txt @@ -2,7 +2,7 @@ spacy-legacy>=3.0.7,<3.1.0 cymem>=2.0.2,<2.1.0 preshed>=3.0.2,<3.1.0 -thinc>=8.0.7,<8.1.0 +thinc>=8.0.8,<8.1.0 blis>=0.4.0,<0.8.0 ml_datasets>=0.2.0,<0.3.0 murmurhash>=0.28.0,<1.1.0 @@ -22,6 +22,7 @@ setuptools packaging>=20.0 typing_extensions>=3.7.4.1,<4.0.0.0; python_version < "3.8" # Development dependencies +pre-commit>=2.13.0 cython>=0.25,<3.0 pytest>=5.2.0 pytest-timeout>=1.3.0,<2.0.0 diff --git a/setup.cfg b/setup.cfg index afc4c4ed1..1fa5b828d 100644 --- a/setup.cfg +++ b/setup.cfg @@ -37,14 +37,14 @@ setup_requires = cymem>=2.0.2,<2.1.0 preshed>=3.0.2,<3.1.0 murmurhash>=0.28.0,<1.1.0 - thinc>=8.0.7,<8.1.0 + thinc>=8.0.8,<8.1.0 install_requires = # Our libraries spacy-legacy>=3.0.7,<3.1.0 murmurhash>=0.28.0,<1.1.0 cymem>=2.0.2,<2.1.0 preshed>=3.0.2,<3.1.0 - thinc>=8.0.7,<8.1.0 + thinc>=8.0.8,<8.1.0 blis>=0.4.0,<0.8.0 wasabi>=0.8.1,<1.1.0 srsly>=2.4.1,<3.0.0 diff --git a/spacy/__init__.py b/spacy/__init__.py index f20c32eb5..ca47edc94 100644 --- a/spacy/__init__.py +++ b/spacy/__init__.py @@ -5,7 +5,7 @@ import sys # set library-specific custom warning handling before doing anything else from .errors import setup_default_warnings -setup_default_warnings() +setup_default_warnings() # noqa: E402 # These are imported as part of the API from thinc.api import prefer_gpu, require_gpu, require_cpu # noqa: F401 diff --git a/spacy/about.py b/spacy/about.py index 499133cc0..51154dc1a 100644 --- a/spacy/about.py +++ b/spacy/about.py @@ -1,6 +1,6 @@ # fmt: off __title__ = "spacy" -__version__ = "3.1.0" +__version__ = "3.1.1" __download_url__ = "https://github.com/explosion/spacy-models/releases/download" __compatibility__ = "https://raw.githubusercontent.com/explosion/spacy-models/master/compatibility.json" __projects__ = "https://github.com/explosion/projects" diff --git a/spacy/cli/package.py b/spacy/cli/package.py index 5dfe67296..342baa8ab 100644 --- a/spacy/cli/package.py +++ b/spacy/cli/package.py @@ -139,6 +139,9 @@ def package( readme = generate_readme(meta) create_file(readme_path, readme) create_file(package_path / model_name_v / "README.md", readme) + msg.good("Generated README.md from meta.json") + else: + msg.info("Using existing README.md from pipeline directory") imports = [] for code_path in code_paths: imports.append(code_path.stem) @@ -202,8 +205,9 @@ def get_meta( "url": "", "license": "MIT", } - meta.update(existing_meta) nlp = 
util.load_model_from_path(Path(model_path)) + meta.update(nlp.meta) + meta.update(existing_meta) meta["spacy_version"] = util.get_model_version_range(about.__version__) meta["vectors"] = { "width": nlp.vocab.vectors_length, diff --git a/spacy/errors.py b/spacy/errors.py index 2173dd58a..5651ab0fa 100644 --- a/spacy/errors.py +++ b/spacy/errors.py @@ -864,6 +864,9 @@ class Errors: E1018 = ("Knowledge base for component '{name}' is not set. " "Make sure either `nel.initialize` or `nel.set_kb` " "is called with a `kb_loader` function.") + E1019 = ("`noun_chunks` requires the pos tagging, which requires a " + "statistical model to be installed and loaded. For more info, see " + "the documentation:\nhttps://spacy.io/usage/models") # Deprecated model shortcuts, only used in errors and warnings diff --git a/spacy/lang/az/__init__.py b/spacy/lang/az/__init__.py index 6a4288d1e..2937e2ecf 100644 --- a/spacy/lang/az/__init__.py +++ b/spacy/lang/az/__init__.py @@ -1,16 +1,11 @@ -from .tokenizer_exceptions import TOKENIZER_EXCEPTIONS, TOKEN_MATCH from .stop_words import STOP_WORDS -from .syntax_iterators import SYNTAX_ITERATORS from .lex_attrs import LEX_ATTRS from ...language import Language class AzerbaijaniDefaults(Language.Defaults): - tokenizer_exceptions = TOKENIZER_EXCEPTIONS lex_attr_getters = LEX_ATTRS stop_words = STOP_WORDS - token_match = TOKEN_MATCH - syntax_iterators = SYNTAX_ITERATORS class Azerbaijani(Language): diff --git a/spacy/lang/grc/__init__.py b/spacy/lang/grc/__init__.py new file mode 100644 index 000000000..e29252da9 --- /dev/null +++ b/spacy/lang/grc/__init__.py @@ -0,0 +1,18 @@ +from .tokenizer_exceptions import TOKENIZER_EXCEPTIONS +from .stop_words import STOP_WORDS +from .lex_attrs import LEX_ATTRS +from ...language import Language + + +class AncientGreekDefaults(Language.Defaults): + tokenizer_exceptions = TOKENIZER_EXCEPTIONS + lex_attr_getters = LEX_ATTRS + stop_words = STOP_WORDS + + +class AncientGreek(Language): + lang = "grc" + Defaults = AncientGreekDefaults + + +__all__ = ["AncientGreek"] diff --git a/spacy/lang/grc/examples.py b/spacy/lang/grc/examples.py new file mode 100644 index 000000000..9c0bcb265 --- /dev/null +++ b/spacy/lang/grc/examples.py @@ -0,0 +1,17 @@ +""" +Example sentences to test spaCy and its language models. 
+ +>>> from spacy.lang.grc.examples import sentences +>>> docs = nlp.pipe(sentences) +""" + + +sentences = [ + "ἐρᾷ μὲν ἁγνὸς οὐρανὸς τρῶσαι χθόνα, ἔρως δὲ γαῖαν λαμβάνει γάμου τυχεῖν·", + "εὐδαίμων Χαρίτων καὶ Μελάνιππος ἔφυ, θείας ἁγητῆρες ἐφαμερίοις φιλότατος.", + "ὃ μὲν δὴ ἀπόστολος ἐς τὴν Μίλητον ἦν.", + "Θρασύβουλος δὲ σαφέως προπεπυσμένος πάντα λόγον καὶ εἰδὼς τὰ Ἀλυάττης μέλλοι ποιήσειν μηχανᾶται τοιάδε.", + "φιλόπαις δ' ἦν ἐκμανῶς καὶ Ἀλέξανδρος ὁ βασιλεύς.", + "Ἀντίγονος ὁ βασιλεὺς ἐπεκώμαζε τῷ Ζήνωνι", + "αὐτὰρ ὃ δεύτατος ἦλθεν ἄναξ ἀνδρῶν Ἀγαμέμνων ἕλκος ἔχων", +] diff --git a/spacy/lang/grc/lex_attrs.py b/spacy/lang/grc/lex_attrs.py new file mode 100644 index 000000000..0ab15e6fd --- /dev/null +++ b/spacy/lang/grc/lex_attrs.py @@ -0,0 +1,314 @@ +from ...attrs import LIKE_NUM + + +_num_words = [ + # CARDINALS + "εἷς", + "ἑνός", + "ἑνί", + "ἕνα", + "μία", + "μιᾶς", + "μιᾷ", + "μίαν", + "ἕν", + "δύο", + "δυοῖν", + "τρεῖς", + "τριῶν", + "τρισί", + "τρία", + "τέτταρες", + "τεττάρων", + "τέτταρσι", + "τέτταρα", + "τέτταρας", + "πέντε", + "ἕξ", + "ἑπτά", + "ὀκτώ", + "ἐννέα", + "δέκα", + "ἕνδεκα", + "δώδεκα", + "πεντεκαίδεκα", + "ἑκκαίδεκα", + "ἑπτακαίδεκα", + "ὀκτωκαίδεκα", + "ἐννεακαίδεκα", + "εἴκοσι", + "τριάκοντα", + "τετταράκοντα", + "πεντήκοντα", + "ἑξήκοντα", + "ἑβδομήκοντα", + "ὀγδοήκοντα", + "ἐνενήκοντα", + "ἑκατόν", + "διακόσιοι", + "διακοσίων", + "διακοσιᾶν", + "διακοσίους", + "διακοσίοις", + "διακόσια", + "διακόσιαι", + "διακοσίαις", + "διακοσίαισι", + "διηκόσιοι", + "διηκοσίων", + "διηκοσιέων", + "διακοσίας", + "διηκόσια", + "διηκόσιαι", + "διηκοσίας", + "τριακόσιοι", + "τριακοσίων", + "τριακοσιᾶν", + "τριακοσίους", + "τριακοσίοις", + "τριακόσια", + "τριακόσιαι", + "τριακοσίαις", + "τριακοσίαισι", + "τριακοσιέων", + "τριακοσίας", + "τριηκόσια", + "τριηκοσίας", + "τριηκόσιοι", + "τριηκοσίοισιν", + "τριηκοσίους", + "τριηκοσίων", + "τετρακόσιοι", + "τετρακοσίων", + "τετρακοσιᾶν", + "τετρακοσίους", + "τετρακοσίοις", + "τετρακόσια", + "τετρακόσιαι", + "τετρακοσίαις", + "τετρακοσίαισι", + "τετρακοσιέων", + "τετρακοσίας", + "πεντακόσιοι", + "πεντακοσίων", + "πεντακοσιᾶν", + "πεντακοσίους", + "πεντακοσίοις", + "πεντακόσια", + "πεντακόσιαι", + "πεντακοσίαις", + "πεντακοσίαισι", + "πεντακοσιέων", + "πεντακοσίας", + "ἑξακόσιοι", + "ἑξακοσίων", + "ἑξακοσιᾶν", + "ἑξακοσίους", + "ἑξακοσίοις", + "ἑξακόσια", + "ἑξακόσιαι", + "ἑξακοσίαις", + "ἑξακοσίαισι", + "ἑξακοσιέων", + "ἑξακοσίας", + "ἑπτακόσιοι", + "ἑπτακοσίων", + "ἑπτακοσιᾶν", + "ἑπτακοσίους", + "ἑπτακοσίοις", + "ἑπτακόσια", + "ἑπτακόσιαι", + "ἑπτακοσίαις", + "ἑπτακοσίαισι", + "ἑπτακοσιέων", + "ἑπτακοσίας", + "ὀκτακόσιοι", + "ὀκτακοσίων", + "ὀκτακοσιᾶν", + "ὀκτακοσίους", + "ὀκτακοσίοις", + "ὀκτακόσια", + "ὀκτακόσιαι", + "ὀκτακοσίαις", + "ὀκτακοσίαισι", + "ὀκτακοσιέων", + "ὀκτακοσίας", + "ἐνακόσιοι", + "ἐνακοσίων", + "ἐνακοσιᾶν", + "ἐνακοσίους", + "ἐνακοσίοις", + "ἐνακόσια", + "ἐνακόσιαι", + "ἐνακοσίαις", + "ἐνακοσίαισι", + "ἐνακοσιέων", + "ἐνακοσίας", + "χίλιοι", + "χιλίων", + "χιλιῶν", + "χιλίους", + "χιλίοις", + "χίλιαι", + "χιλίας", + "χιλίαις", + "χίλια", + "χίλι", + "δισχίλιοι", + "δισχιλίων", + "δισχιλιῶν", + "δισχιλίους", + "δισχιλίοις", + "δισχίλιαι", + "δισχιλίας", + "δισχιλίαις", + "δισχίλια", + "δισχίλι", + "τρισχίλιοι", + "τρισχιλίων", + "τρισχιλιῶν", + "τρισχιλίους", + "τρισχιλίοις", + "τρισχίλιαι", + "τρισχιλίας", + "τρισχιλίαις", + "τρισχίλια", + "τρισχίλι", + "μύριοι", + "μύριοί", + "μυρίων", + "μυρίοις", + "μυρίους", + "μύριαι", + "μυρίαις", + "μυρίας", + "μύρια", + "δισμύριοι", + "δισμύριοί", + "δισμυρίων", + 
"δισμυρίοις", + "δισμυρίους", + "δισμύριαι", + "δισμυρίαις", + "δισμυρίας", + "δισμύρια", + "δεκακισμύριοι", + "δεκακισμύριοί", + "δεκακισμυρίων", + "δεκακισμυρίοις", + "δεκακισμυρίους", + "δεκακισμύριαι", + "δεκακισμυρίαις", + "δεκακισμυρίας", + "δεκακισμύρια", + # ANCIENT GREEK NUMBERS (1-100) + "α", + "β", + "γ", + "δ", + "ε", + "ϛ", + "ζ", + "η", + "θ", + "ι", + "ια", + "ιβ", + "ιγ", + "ιδ", + "ιε", + "ιϛ", + "ιζ", + "ιη", + "ιθ", + "κ", + "κα", + "κβ", + "κγ", + "κδ", + "κε", + "κϛ", + "κζ", + "κη", + "κθ", + "λ", + "λα", + "λβ", + "λγ", + "λδ", + "λε", + "λϛ", + "λζ", + "λη", + "λθ", + "μ", + "μα", + "μβ", + "μγ", + "μδ", + "με", + "μϛ", + "μζ", + "μη", + "μθ", + "ν", + "να", + "νβ", + "νγ", + "νδ", + "νε", + "νϛ", + "νζ", + "νη", + "νθ", + "ξ", + "ξα", + "ξβ", + "ξγ", + "ξδ", + "ξε", + "ξϛ", + "ξζ", + "ξη", + "ξθ", + "ο", + "οα", + "οβ", + "ογ", + "οδ", + "οε", + "οϛ", + "οζ", + "οη", + "οθ", + "π", + "πα", + "πβ", + "πγ", + "πδ", + "πε", + "πϛ", + "πζ", + "πη", + "πθ", + "ϟ", + "ϟα", + "ϟβ", + "ϟγ", + "ϟδ", + "ϟε", + "ϟϛ", + "ϟζ", + "ϟη", + "ϟθ", + "ρ", +] + + +def like_num(text): + if text.lower() in _num_words: + return True + return False + + +LEX_ATTRS = {LIKE_NUM: like_num} diff --git a/spacy/lang/grc/stop_words.py b/spacy/lang/grc/stop_words.py new file mode 100644 index 000000000..cbb766a8c --- /dev/null +++ b/spacy/lang/grc/stop_words.py @@ -0,0 +1,61 @@ +STOP_WORDS = set( + """ +αὐτῷ αὐτοῦ αὐτῆς αὐτόν αὐτὸν αὐτῶν αὐτὸς αὐτὸ αὐτό αὐτός αὐτὴν αὐτοῖς αὐτοὺς αὔτ' αὐτὰ αὐτῇ αὐτὴ +αὐτὼ αὑταὶ καὐτὸς αὐτά αὑτός αὐτοῖσι αὐτοῖσιν αὑτὸς αὐτήν αὐτοῖσί αὐτοί αὐτοὶ αὐτοῖο αὐτάων αὐτὰς +αὐτέων αὐτώ αὐτάς αὐτούς αὐτή αὐταί αὐταὶ αὐτῇσιν τὠυτῷ τὠυτὸ ταὐτὰ ταύτῃ αὐτῇσι αὐτῇς αὐταῖς αὐτᾶς αὐτὰν ταὐτὸν + +γε γ' γέ γὰρ γάρ δαῖτα δαιτὸς δαιτὶ δαὶ δαιτί δαῖτ' δαΐδας δαΐδων δἰ διὰ διά δὲ δ' δέ δὴ δή εἰ εἴ κεἰ κεἴ αἴ αἲ εἲ αἰ + +ἐστί ἐστιν ὢν ἦν ἐστὶν ὦσιν εἶναι ὄντι εἰσιν ἐστι ὄντα οὖσαν ἦσαν ἔστι ὄντας ἐστὲ εἰσὶ εἶ ὤν ἦ οὖσαι ἔσται ἐσμὲν ἐστ' ἐστίν ἔστ' ὦ ἔσει ἦμεν εἰμι εἰσὶν ἦσθ' +ἐστὶ ᾖ οὖσ' ἔστιν εἰμὶ εἴμ' ἐσθ' ᾖς στί εἴην εἶναί οὖσα κἄστ' εἴη ἦσθα εἰμ' ἔστω ὄντ' ἔσθ' ἔμμεναι ἔω ἐὼν ἐσσι ἔσσεται ἐστὸν ἔσαν ἔστων ἐόντα ἦεν ἐοῦσαν ἔην +ἔσσομαι εἰσί ἐστόν ἔσκεν ἐόντ' ἐών ἔσσεσθ' εἰσ' ἐόντες ἐόντε ἐσσεῖται εἰμεν ἔασιν ἔσκε ἔμεναι ἔσεσθαι ἔῃ εἰμὲν εἰσι ἐόντας ἔστε εἰς ἦτε εἰμί ἔσσεαι ἔμμεν +ἐοῦσα ἔμεν ᾖσιν ἐστε ἐόντι εἶεν ἔσσονται ἔησθα ἔσεσθε ἐσσί ἐοῦσ' ἔασι ἔα ἦα ἐόν ἔσσεσθαι ἔσομαι ἔσκον εἴης ἔωσιν εἴησαν ἐὸν ἐουσέων ἔσσῃ ἐούσης ἔσονται +ἐούσας ἐόντων ἐόντος ἐσομένην ἔστωσαν ἔωσι ἔας ἐοῦσαι ἣν εἰσίν ἤστην ὄντες ὄντων οὔσας οὔσαις ὄντος οὖσι οὔσης ἔσῃ ὂν ἐσμεν ἐσμέν οὖσιν ἐσομένους ἐσσόμεσθα + +ἒς ἐς ἔς ἐν κεἰς εἲς κἀν ἔν κατὰ κατ' καθ' κατά κάτα κὰπ κὰκ κὰδ κὰρ κάρ κὰγ κὰμ καὶ καί μετὰ μεθ' μετ' μέτα μετά μέθ' μέτ' μὲν μέν μὴ + +μή μη οὐκ οὒ οὐ οὐχ οὐχὶ κοὐ κοὐχ οὔ κοὐκ οὐχί οὐκὶ οὐδὲν οὐδεὶς οὐδέν κοὐδεὶς κοὐδὲν οὐδένα οὐδενὸς οὐδέν' οὐδενός οὐδενὶ +οὐδεμία οὐδείς οὐδεμίαν οὐδὲ οὐδ' κοὐδ' οὐδέ οὔτε οὔθ' οὔτέ τε οὔτ' οὕτως οὕτω οὕτῶ χοὔτως οὖν ὦν ὧν τοῦτο τοῦθ' τοῦτον τούτῳ +τούτοις ταύτας αὕτη ταῦτα οὗτος ταύτης ταύτην τούτων ταῦτ' τοῦτ' τούτου αὗται τούτους τοῦτό ταῦτά τούτοισι χαὔτη ταῦθ' χοὖτοι +τούτοισιν οὗτός οὗτοι τούτω τουτέων τοῦτὸν οὗτοί τοῦτου οὗτοὶ ταύτῃσι ταύταις ταυτὶ παρὰ παρ' πάρα παρά πὰρ παραὶ πάρ' περὶ +πέρι περί πρὸς πρός ποτ' ποτὶ προτὶ προτί πότι + +σὸς σήν σὴν σὸν σόν σὰ σῶν σοῖσιν σός σῆς σῷ σαῖς σῇ σοῖς σοῦ σ' σὰν σά σὴ σὰς +σᾷ σοὺς σούς σοῖσι σῇς σῇσι σή σῇσιν σοὶ σου ὑμεῖς σὲ σύ σοι ὑμᾶς ὑμῶν ὑμῖν σε +σέ σὺ σέθεν σοί ὑμὶν σφῷν ὑμίν τοι τοὶ σφὼ ὔμμ' σφῶϊ σεῖο τ' σφῶϊν 
ὔμμιν σέο σευ σεῦ +ὔμμι ὑμέων τύνη ὑμείων τοί ὔμμες σεο τέ τεοῖο ὑμέας σὺν ξὺν σύν + +θ' τί τι τις τινες τινα τινος τινὸς τινὶ τινῶν τίς τίνες τινὰς τιν' τῳ του τίνα τοῦ τῷ τινί τινά τίνος τινι τινας τινὰ τινων +τίν' τευ τέο τινές τεο τινὲς τεῷ τέῳ τινός τεῳ τισὶ + +τοιαῦτα τοιοῦτον τοιοῦθ' τοιοῦτος τοιαύτην τοιαῦτ' τοιούτου τοιαῦθ' τοιαύτῃ τοιούτοις τοιαῦται τοιαῦτά τοιαύτη τοιοῦτοι τοιούτων τοιούτοισι +τοιοῦτο τοιούτους τοιούτῳ τοιαύτης τοιαύταις τοιαύτας τοιοῦτός τίνι τοῖσι τίνων τέων τέοισί τὰ τῇ τώ τὼ + +ἀλλὰ ἀλλ' ἀλλά ἀπ' ἀπὸ κἀπ' ἀφ' τἀπὸ κἀφ' ἄπο ἀπό τὠπὸ τἀπ' ἄλλων ἄλλῳ ἄλλη ἄλλης ἄλλους ἄλλοις ἄλλον ἄλλο ἄλλου τἄλλα ἄλλα +ἄλλᾳ ἄλλοισιν τἄλλ' ἄλλ' ἄλλος ἄλλοισι κἄλλ' ἄλλοι ἄλλῃσι ἄλλόν ἄλλην ἄλλά ἄλλαι ἄλλοισίν ὧλλοι ἄλλῃ ἄλλας ἀλλέων τἆλλα ἄλλως +ἀλλάων ἄλλαις τἆλλ' + +ἂν ἄν κἂν τἂν ἃν κεν κ' κέν κέ κε χ' ἄρα τἄρα ἄρ' τἄρ' ἄρ ῥα ῥά ῥ τὰρ ἄρά ἂρ + +ἡμᾶς με ἐγὼ ἐμὲ μοι κἀγὼ ἡμῶν ἡμεῖς ἐμοὶ ἔγωγ' ἁμοὶ ἡμῖν μ' ἔγωγέ ἐγώ ἐμοί ἐμοῦ κἀμοῦ ἔμ' κἀμὲ ἡμὶν μου ἐμέ ἔγωγε νῷν νὼ χἠμεῖς ἁμὲ κἀγώ κἀμοὶ χἠμᾶς +ἁγὼ ἡμίν κἄμ' ἔμοιγ' μοί τοὐμὲ ἄμμε ἐγὼν ἐμεῦ ἐμεῖο μευ ἔμοιγε ἄμμι μέ ἡμέας νῶϊ ἄμμιν ἧμιν ἐγών νῶΐ ἐμέθεν ἥμιν ἄμμες νῶι ἡμείων ἄμμ' ἡμέων ἐμέο +ἐκ ἔκ ἐξ κἀκ κ ἃκ κἀξ ἔξ εξ Ἐκ τἀμὰ ἐμοῖς τοὐμόν ἐμᾶς τοὐμὸν ἐμῶν ἐμὸς ἐμῆς ἐμῷ τὠμῷ ἐμὸν τἄμ' ἐμὴ ἐμὰς ἐμαῖς ἐμὴν ἐμόν ἐμὰ ἐμός ἐμοὺς ἐμῇ ἐμᾷ +οὑμὸς ἐμοῖν οὑμός κἀμὸν ἐμαὶ ἐμή ἐμάς ἐμοῖσι ἐμοῖσιν ἐμῇσιν ἐμῇσι ἐμῇς ἐμήν + +ἔνι ἐνὶ εἰνὶ εἰν ἐμ ἐπὶ ἐπ' ἔπι ἐφ' κἀπὶ τἀπὶ ἐπί ἔφ' ἔπ' ἐὰν ἢν ἐάν ἤν ἄνπερ + +αὑτοῖς αὑτὸν αὑτῷ ἑαυτοῦ αὑτόν αὑτῆς αὑτῶν αὑτοῦ αὑτὴν αὑτοῖν χαὐτοῦ αὑταῖς ἑωυτοῦ ἑωυτῇ ἑωυτὸν ἐωυτῷ ἑωυτῆς ἑωυτόν ἑωυτῷ +ἑωυτάς ἑωυτῶν ἑωυτοὺς ἑωυτοῖσι ἑαυτῇ ἑαυτούς αὑτοὺς ἑαυτῶν ἑαυτοὺς ἑαυτὸν ἑαυτῷ ἑαυτοῖς ἑαυτὴν ἑαυτῆς + +ἔτι ἔτ' ἔθ' κἄτι ἢ ἤ ἠέ ἠὲ ἦε ἦέ ἡ τοὺς τὴν τὸ τῶν τὸν ὁ ἁ οἱ τοῖς ταῖς τῆς τὰς αἱ τό τὰν τᾶς τοῖσιν αἳ χὠ τήν τά τοῖν τάς ὅ +χοἰ ἣ ἥ χἠ τάν τᾶν ὃ οἳ οἵ τοῖο τόν τοῖιν τούς τάων ταὶ τῇς τῇσι τῇσιν αἵ τοῖό τοῖσίν ὅττί ταί Τὴν τῆ τῶ τάδε ὅδε τοῦδε τόδε τόνδ' +τάδ' τῆσδε τῷδε ὅδ' τῶνδ' τῇδ' τοῦδέ τῶνδε τόνδε τόδ' τοῦδ' τάσδε τήνδε τάσδ' τήνδ' ταῖσδέ τῇδε τῆσδ' τάνδ' τῷδ' τάνδε ἅδε τοῖσδ' ἥδ' +τᾷδέ τοῖσδε τούσδ' ἥδε τούσδε τώδ' ἅδ' οἵδ' τῶνδέ οἵδε τᾷδε τοῖσδεσσι τώδε τῇδέ τοῖσιδε αἵδε τοῦδὲ τῆδ' αἵδ' τοῖσδεσι ὃν ἃ ὃς ᾧ οὗ ἅπερ +οὓς ἧς οἷς ἅσπερ ᾗ ἅ χὦνπερ ὣ αἷς ᾇ ὅς ἥπερ ἃς ὅσπερ ὅνπερ ὧνπερ ᾧπερ ὅν αἷν οἷσι ἇς ἅς ὥ οὕς ἥν οἷσιν ἕης ὅου ᾗς οἷσί οἷσίν τοῖσί ᾗσιν οἵπερ αἷσπερ +ὅστις ἥτις ὅτου ὅτοισι ἥντιν' ὅτῳ ὅντιν' ὅττι ἅσσά ὅτεῳ ὅτις ὅτιν' ὅτευ ἥντινα αἵτινές ὅντινα ἅσσα ᾧτινι οἵτινες ὅτι ἅτις ὅτ' ὑμὴ +ὑμήν ὑμὸν ὑπὲρ ὕπερ ὑπέρτερον ὑπεὶρ ὑπέρτατος ὑπὸ ὑπ' ὑφ' ὕπο ὑπαὶ ὑπό ὕπ' ὕφ' + + ὣς ὡς ὥς ὧς ὥστ' ὥστε ὥσθ' ὤ ὢ + + """.split() +) diff --git a/spacy/lang/grc/tokenizer_exceptions.py b/spacy/lang/grc/tokenizer_exceptions.py new file mode 100644 index 000000000..230a58fd2 --- /dev/null +++ b/spacy/lang/grc/tokenizer_exceptions.py @@ -0,0 +1,115 @@ +from ..tokenizer_exceptions import BASE_EXCEPTIONS +from ...symbols import ORTH, NORM +from ...util import update_exc + +_exc = {} + +for token in ["᾽Απ'", "᾽ΑΠ'", "ἀφ'", "᾽Αφ", "ἀπὸ"]: + _exc[token] = [{ORTH: token, NORM: "από"}] + +for token in ["᾽Αλλ'", "ἀλλ'", "ἀλλὰ"]: + _exc[token] = [{ORTH: token, NORM: "ἀλλά"}] + +for token in ["παρ'", "Παρ'", "παρὰ", "παρ"]: + _exc[token] = [{ORTH: token, NORM: "παρά"}] + +for token in ["καθ'", "Καθ'", "κατ'", "Κατ'", "κατὰ"]: + _exc[token] = [{ORTH: token, NORM: "κατά"}] + +for token in ["Ἐπ'", "ἐπ'", "ἐπὶ", "Εφ'", "εφ'"]: + _exc[token] = [{ORTH: token, NORM: "επί"}] + +for token in ["Δι'", "δι'", "διὰ"]: + _exc[token] = [{ORTH: token, NORM: 
"διά"}] + +for token in ["Ὑπ'", "ὑπ'", "ὑφ'"]: + _exc[token] = [{ORTH: token, NORM: "ὑπό"}] + +for token in ["Μετ'", "μετ'", "μεθ'", "μετὰ"]: + _exc[token] = [{ORTH: token, NORM: "μετά"}] + +for token in ["Μ'", "μ'", "μέ", "μὲ"]: + _exc[token] = [{ORTH: token, NORM: "με"}] + +for token in ["Σ'", "σ'", "σέ", "σὲ"]: + _exc[token] = [{ORTH: token, NORM: "σε"}] + +for token in ["Τ'", "τ'", "τέ", "τὲ"]: + _exc[token] = [{ORTH: token, NORM: "τε"}] + +for token in ["Δ'", "δ'", "δὲ"]: + _exc[token] = [{ORTH: token, NORM: "δέ"}] + + +_other_exc = { + "μὲν": [{ORTH: "μὲν", NORM: "μέν"}], + "μὴν": [{ORTH: "μὴν", NORM: "μήν"}], + "τὴν": [{ORTH: "τὴν", NORM: "τήν"}], + "τὸν": [{ORTH: "τὸν", NORM: "τόν"}], + "καὶ": [{ORTH: "καὶ", NORM: "καί"}], + "καὐτός": [{ORTH: "κ", NORM: "καί"}, {ORTH: "αὐτός"}], + "καὐτὸς": [{ORTH: "κ", NORM: "καί"}, {ORTH: "αὐτὸς", NORM: "αὐτός"}], + "κοὐ": [{ORTH: "κ", NORM: "καί"}, {ORTH: "οὐ"}], + "χἡ": [{ORTH: "χ", NORM: "καί"}, {ORTH: "ἡ"}], + "χοἱ": [{ORTH: "χ", NORM: "καί"}, {ORTH: "οἱ"}], + "χἱκετεύετε": [{ORTH: "χ", NORM: "καί"}, {ORTH: "ἱκετεύετε"}], + "κἀν": [{ORTH: "κ", NORM: "καί"}, {ORTH: "ἀν", NORM: "ἐν"}], + "κἀγὼ": [{ORTH: "κἀ", NORM: "καί"}, {ORTH: "γὼ", NORM: "ἐγώ"}], + "κἀγώ": [{ORTH: "κἀ", NORM: "καί"}, {ORTH: "γώ", NORM: "ἐγώ"}], + "ἁγώ": [{ORTH: "ἁ", NORM: "ἃ"}, {ORTH: "γώ", NORM: "ἐγώ"}], + "ἁγὼ": [{ORTH: "ἁ", NORM: "ἃ"}, {ORTH: "γὼ", NORM: "ἐγώ"}], + "ἐγᾦδα": [{ORTH: "ἐγ", NORM: "ἐγώ"}, {ORTH: "ᾦδα", NORM: "οἶδα"}], + "ἐγᾦμαι": [{ORTH: "ἐγ", NORM: "ἐγώ"}, {ORTH: "ᾦμαι", NORM: "οἶμαι"}], + "κἀς": [{ORTH: "κ", NORM: "καί"}, {ORTH: "ἀς", NORM: "ἐς"}], + "κᾆτα": [{ORTH: "κ", NORM: "καί"}, {ORTH: "ᾆτα", NORM: "εἶτα"}], + "κεἰ": [{ORTH: "κ", NORM: "καί"}, {ORTH: "εἰ"}], + "κεἰς": [{ORTH: "κ", NORM: "καί"}, {ORTH: "εἰς"}], + "χὤτε": [{ORTH: "χ", NORM: "καί"}, {ORTH: "ὤτε", NORM: "ὅτε"}], + "χὤπως": [{ORTH: "χ", NORM: "καί"}, {ORTH: "ὤπως", NORM: "ὅπως"}], + "χὤτι": [{ORTH: "χ", NORM: "καί"}, {ORTH: "ὤτι", NORM: "ὅτι"}], + "χὤταν": [{ORTH: "χ", NORM: "καί"}, {ORTH: "ὤταν", NORM: "ὅταν"}], + "οὑμός": [{ORTH: "οὑ", NORM: "ὁ"}, {ORTH: "μός", NORM: "ἐμός"}], + "οὑμὸς": [{ORTH: "οὑ", NORM: "ὁ"}, {ORTH: "μὸς", NORM: "ἐμός"}], + "οὑμοί": [{ORTH: "οὑ", NORM: "οἱ"}, {ORTH: "μοί", NORM: "ἐμoί"}], + "οὑμοὶ": [{ORTH: "οὑ", NORM: "οἱ"}, {ORTH: "μοὶ", NORM: "ἐμoί"}], + "σοὔστι": [{ORTH: "σοὔ", NORM: "σοί"}, {ORTH: "στι", NORM: "ἐστι"}], + "σοὐστί": [{ORTH: "σοὐ", NORM: "σοί"}, {ORTH: "στί", NORM: "ἐστί"}], + "σοὐστὶ": [{ORTH: "σοὐ", NORM: "σοί"}, {ORTH: "στὶ", NORM: "ἐστί"}], + "μοὖστι": [{ORTH: "μοὖ", NORM: "μοί"}, {ORTH: "στι", NORM: "ἐστι"}], + "μοὔστι": [{ORTH: "μοὔ", NORM: "μοί"}, {ORTH: "στι", NORM: "ἐστι"}], + "τοὔνομα": [{ORTH: "τοὔ", NORM: "τό"}, {ORTH: "νομα", NORM: "ὄνομα"}], + "οὑν": [{ORTH: "οὑ", NORM: "ὁ"}, {ORTH: "ν", NORM: "ἐν"}], + "ὦνερ": [{ORTH: "ὦ", NORM: "ὦ"}, {ORTH: "νερ", NORM: "ἄνερ"}], + "ὦνδρες": [{ORTH: "ὦ", NORM: "ὦ"}, {ORTH: "νδρες", NORM: "ἄνδρες"}], + "προὔχων": [{ORTH: "προὔ", NORM: "πρό"}, {ORTH: "χων", NORM: "ἔχων"}], + "προὔχοντα": [{ORTH: "προὔ", NORM: "πρό"}, {ORTH: "χοντα", NORM: "ἔχοντα"}], + "ὥνεκα": [{ORTH: "ὥ", NORM: "οὗ"}, {ORTH: "νεκα", NORM: "ἕνεκα"}], + "θοἰμάτιον": [{ORTH: "θο", NORM: "τό"}, {ORTH: "ἰμάτιον"}], + "ὥνεκα": [{ORTH: "ὥ", NORM: "οὗ"}, {ORTH: "νεκα", NORM: "ἕνεκα"}], + "τὠληθές": [{ORTH: "τὠ", NORM: "τὸ"}, {ORTH: "ληθές", NORM: "ἀληθές"}], + "θἡμέρᾳ": [{ORTH: "θ", NORM: "τῇ"}, {ORTH: "ἡμέρᾳ"}], + "ἅνθρωπος": [{ORTH: "ἅ", NORM: "ὁ"}, {ORTH: "νθρωπος", NORM: "ἄνθρωπος"}], + "τἄλλα": [{ORTH: "τ", NORM: "τὰ"}, {ORTH: 
"ἄλλα"}], + "τἆλλα": [{ORTH: "τἆ", NORM: "τὰ"}, {ORTH: "λλα", NORM: "ἄλλα"}], + "ἁνήρ": [{ORTH: "ἁ", NORM: "ὁ"}, {ORTH: "νήρ", NORM: "ἀνήρ"}], + "ἁνὴρ": [{ORTH: "ἁ", NORM: "ὁ"}, {ORTH: "νὴρ", NORM: "ἀνήρ"}], + "ἅνδρες": [{ORTH: "ἅ", NORM: "οἱ"}, {ORTH: "νδρες", NORM: "ἄνδρες"}], + "ἁγαθαί": [{ORTH: "ἁ", NORM: "αἱ"}, {ORTH: "γαθαί", NORM: "ἀγαθαί"}], + "ἁγαθαὶ": [{ORTH: "ἁ", NORM: "αἱ"}, {ORTH: "γαθαὶ", NORM: "ἀγαθαί"}], + "ἁλήθεια": [{ORTH: "ἁ", NORM: "ἡ"}, {ORTH: "λήθεια", NORM: "ἀλήθεια"}], + "τἀνδρός": [{ORTH: "τ", NORM: "τοῦ"}, {ORTH: "ἀνδρός"}], + "τἀνδρὸς": [{ORTH: "τ", NORM: "τοῦ"}, {ORTH: "ἀνδρὸς", NORM: "ἀνδρός"}], + "τἀνδρί": [{ORTH: "τ", NORM: "τῷ"}, {ORTH: "ἀνδρί"}], + "τἀνδρὶ": [{ORTH: "τ", NORM: "τῷ"}, {ORTH: "ἀνδρὶ", NORM: "ἀνδρί"}], + "αὑτός": [{ORTH: "αὑ", NORM: "ὁ"}, {ORTH: "τός", NORM: "αὐτός"}], + "αὑτὸς": [{ORTH: "αὑ", NORM: "ὁ"}, {ORTH: "τὸς", NORM: "αὐτός"}], + "ταὐτοῦ": [{ORTH: "τ", NORM: "τοῦ"}, {ORTH: "αὐτοῦ"}], +} + +_exc.update(_other_exc) + +_exc_data = {} + +_exc.update(_exc_data) + +TOKENIZER_EXCEPTIONS = update_exc(BASE_EXCEPTIONS, _exc) diff --git a/spacy/lang/nl/__init__.py b/spacy/lang/nl/__init__.py index 7fff3c3d2..5e95b4a8b 100644 --- a/spacy/lang/nl/__init__.py +++ b/spacy/lang/nl/__init__.py @@ -1,12 +1,14 @@ from typing import Optional + from thinc.api import Model -from .stop_words import STOP_WORDS +from .lemmatizer import DutchLemmatizer from .lex_attrs import LEX_ATTRS -from .tokenizer_exceptions import TOKENIZER_EXCEPTIONS from .punctuation import TOKENIZER_PREFIXES, TOKENIZER_INFIXES from .punctuation import TOKENIZER_SUFFIXES -from .lemmatizer import DutchLemmatizer +from .stop_words import STOP_WORDS +from .syntax_iterators import SYNTAX_ITERATORS +from .tokenizer_exceptions import TOKENIZER_EXCEPTIONS from ...language import Language @@ -16,6 +18,7 @@ class DutchDefaults(Language.Defaults): infixes = TOKENIZER_INFIXES suffixes = TOKENIZER_SUFFIXES lex_attr_getters = LEX_ATTRS + syntax_iterators = SYNTAX_ITERATORS stop_words = STOP_WORDS diff --git a/spacy/lang/nl/syntax_iterators.py b/spacy/lang/nl/syntax_iterators.py new file mode 100644 index 000000000..1959ba6e2 --- /dev/null +++ b/spacy/lang/nl/syntax_iterators.py @@ -0,0 +1,72 @@ +from typing import Union, Iterator + +from ...symbols import NOUN, PRON +from ...errors import Errors +from ...tokens import Doc, Span + + +def noun_chunks(doclike: Union[Doc, Span]) -> Iterator[Span]: + """ + Detect base noun phrases from a dependency parse. Works on Doc and Span. + The definition is inspired by https://www.nltk.org/book/ch07.html + Consider : [Noun + determinant / adjective] and also [Pronoun] + """ + # fmt: off + # labels = ["nsubj", "nsubj:pass", "obj", "iobj", "ROOT", "appos", "nmod", "nmod:poss"] + # fmt: on + doc = doclike.doc # Ensure works on both Doc and Span. 
+ + # Check for dependencies: POS, DEP + if not doc.has_annotation("POS"): + raise ValueError(Errors.E1019) + if not doc.has_annotation("DEP"): + raise ValueError(Errors.E029) + + # See UD tags: https://universaldependencies.org/u/dep/index.html + # amod = adjectival modifier + # nmod:poss = possessive nominal modifier + # nummod = numeric modifier + # det = determiner + # det:poss = possessive determiner + noun_deps = [ + doc.vocab.strings[label] for label in ["amod", "nmod:poss", "det", "det:poss"] + ] + + # nsubj = nominal subject + # nsubj:pass = passive nominal subject + pronoun_deps = [doc.vocab.strings[label] for label in ["nsubj", "nsubj:pass"]] + + # Label NP for the Span to identify it as Noun-Phrase + span_label = doc.vocab.strings.add("NP") + + # Only NOUNS and PRONOUNS matter + for i, word in enumerate(filter(lambda x: x.pos in [PRON, NOUN], doclike)): + # For NOUNS + # Pick children from syntactic parse (only those with certain dependencies) + if word.pos == NOUN: + # Some debugging. It happens that VERBS are POS-TAGGED as NOUNS + # We check if the word has a "nsubj", if it's the case, we eliminate it + nsubjs = filter( + lambda x: x.dep == doc.vocab.strings["nsubj"], word.children + ) + next_word = next(nsubjs, None) + if next_word is not None: + # We found some nsubj, so we skip this word. Otherwise, consider it a normal NOUN + continue + + children = filter(lambda x: x.dep in noun_deps, word.children) + children_i = [c.i for c in children] + [word.i] + + start_span = min(children_i) + end_span = max(children_i) + 1 + yield start_span, end_span, span_label + + # PRONOUNS only if it is the subject of a verb + elif word.pos == PRON: + if word.dep in pronoun_deps: + start_span = word.i + end_span = word.i + 1 + yield start_span, end_span, span_label + + +SYNTAX_ITERATORS = {"noun_chunks": noun_chunks} diff --git a/spacy/lang/ru/lemmatizer.py b/spacy/lang/ru/lemmatizer.py index 63aa94a36..399cd174c 100644 --- a/spacy/lang/ru/lemmatizer.py +++ b/spacy/lang/ru/lemmatizer.py @@ -12,8 +12,6 @@ PUNCT_RULES = {"«": '"', "»": '"'} class RussianLemmatizer(Lemmatizer): - _morph = None - def __init__( self, vocab: Vocab, @@ -31,8 +29,8 @@ class RussianLemmatizer(Lemmatizer): "The Russian lemmatizer mode 'pymorphy2' requires the " "pymorphy2 library. Install it with: pip install pymorphy2" ) from None - if RussianLemmatizer._morph is None: - RussianLemmatizer._morph = MorphAnalyzer() + if getattr(self, "_morph", None) is None: + self._morph = MorphAnalyzer() super().__init__(vocab, model, name, mode=mode, overwrite=overwrite) def pymorphy2_lemmatize(self, token: Token) -> List[str]: diff --git a/spacy/lang/uk/lemmatizer.py b/spacy/lang/uk/lemmatizer.py index e1fdf39fc..1fb030e06 100644 --- a/spacy/lang/uk/lemmatizer.py +++ b/spacy/lang/uk/lemmatizer.py @@ -7,8 +7,6 @@ from ...vocab import Vocab class UkrainianLemmatizer(RussianLemmatizer): - _morph = None - def __init__( self, vocab: Vocab, @@ -27,6 +25,6 @@ class UkrainianLemmatizer(RussianLemmatizer): "pymorphy2 library and dictionaries. 
Install them with: " "pip install pymorphy2 pymorphy2-dicts-uk" ) from None - if UkrainianLemmatizer._morph is None: - UkrainianLemmatizer._morph = MorphAnalyzer(lang="uk") + if getattr(self, "_morph", None) is None: + self._morph = MorphAnalyzer(lang="uk") super().__init__(vocab, model, name, mode=mode, overwrite=overwrite) diff --git a/spacy/language.py b/spacy/language.py index 927ed0037..589dca2bf 100644 --- a/spacy/language.py +++ b/spacy/language.py @@ -872,14 +872,14 @@ class Language: DOCS: https://spacy.io/api/language#replace_pipe """ - if name not in self.pipe_names: + if name not in self.component_names: raise ValueError(Errors.E001.format(name=name, opts=self.pipe_names)) if hasattr(factory_name, "__call__"): err = Errors.E968.format(component=repr(factory_name), name=name) raise ValueError(err) # We need to delegate to Language.add_pipe here instead of just writing # to Language.pipeline to make sure the configs are handled correctly - pipe_index = self.pipe_names.index(name) + pipe_index = self.component_names.index(name) self.remove_pipe(name) if not len(self._components) or pipe_index == len(self._components): # we have no components to insert before/after, or we're replacing the last component @@ -1447,7 +1447,7 @@ class Language: ) -> Iterator[Tuple[Doc, _AnyContext]]: ... - def pipe( + def pipe( # noqa: F811 self, texts: Iterable[str], *, @@ -1740,10 +1740,16 @@ class Language: listeners_replaced = True with warnings.catch_warnings(): warnings.filterwarnings("ignore", message="\\[W113\\]") - nlp.add_pipe(source_name, source=source_nlps[model], name=pipe_name) + nlp.add_pipe( + source_name, source=source_nlps[model], name=pipe_name + ) if model not in source_nlp_vectors_hashes: - source_nlp_vectors_hashes[model] = hash(source_nlps[model].vocab.vectors.to_bytes()) - nlp.meta["_sourced_vectors_hashes"][pipe_name] = source_nlp_vectors_hashes[model] + source_nlp_vectors_hashes[model] = hash( + source_nlps[model].vocab.vectors.to_bytes() + ) + nlp.meta["_sourced_vectors_hashes"][ + pipe_name + ] = source_nlp_vectors_hashes[model] # Delete from cache if listeners were replaced if listeners_replaced: del source_nlps[model] diff --git a/spacy/ml/models/multi_task.py b/spacy/ml/models/multi_task.py index d4d2d638b..97bef2d0e 100644 --- a/spacy/ml/models/multi_task.py +++ b/spacy/ml/models/multi_task.py @@ -3,7 +3,7 @@ from thinc.api import chain, Maxout, LayerNorm, Softmax, Linear, zero_init, Mode from thinc.api import MultiSoftmax, list2array from thinc.api import to_categorical, CosineDistance, L2Distance -from ...util import registry +from ...util import registry, OOV_RANK from ...errors import Errors from ...attrs import ID @@ -70,6 +70,7 @@ def get_vectors_loss(ops, docs, prediction, distance): # and look them up all at once. This prevents data copying. ids = ops.flatten([doc.to_array(ID).ravel() for doc in docs]) target = docs[0].vocab.vectors.data[ids] + target[ids == OOV_RANK] = 0 d_target, loss = distance(prediction, target) return loss, d_target diff --git a/spacy/pipeline/spancat.py b/spacy/pipeline/spancat.py index 524e3a659..8d1be06c3 100644 --- a/spacy/pipeline/spancat.py +++ b/spacy/pipeline/spancat.py @@ -78,6 +78,17 @@ def build_ngram_suggester(sizes: List[int]) -> Callable[[List[Doc]], Ragged]: return ngram_suggester +@registry.misc("spacy.ngram_range_suggester.v1") +def build_ngram_range_suggester( + min_size: int, max_size: int +) -> Callable[[List[Doc]], Ragged]: + """Suggest all spans of the given lengths between a given min and max value - both inclusive. 
+ Spans are returned as a ragged array of integers. The array has two columns, + indicating the start and end position.""" + sizes = range(min_size, max_size + 1) + return build_ngram_suggester(sizes) + + @Language.factory( "spancat", assigns=["doc.spans"], diff --git a/spacy/pipeline/tagger.pyx b/spacy/pipeline/tagger.pyx index 938131f6f..fa260bdd6 100644 --- a/spacy/pipeline/tagger.pyx +++ b/spacy/pipeline/tagger.pyx @@ -222,7 +222,7 @@ class Tagger(TrainablePipe): DOCS: https://spacy.io/api/tagger#get_loss """ validate_examples(examples, "Tagger.get_loss") - loss_func = SequenceCategoricalCrossentropy(names=self.labels, normalize=False) + loss_func = SequenceCategoricalCrossentropy(names=self.labels, normalize=False, neg_prefix="!") # Convert empty tag "" to missing value None so that both misaligned # tokens and tokens with missing annotation have the default missing # value None. diff --git a/spacy/tests/conftest.py b/spacy/tests/conftest.py index 8c450b154..a5dedcc87 100644 --- a/spacy/tests/conftest.py +++ b/spacy/tests/conftest.py @@ -125,6 +125,11 @@ def ga_tokenizer(): return get_lang_class("ga")().tokenizer +@pytest.fixture(scope="session") +def grc_tokenizer(): + return get_lang_class("grc")().tokenizer + + @pytest.fixture(scope="session") def gu_tokenizer(): return get_lang_class("gu")().tokenizer @@ -202,6 +207,11 @@ def ne_tokenizer(): return get_lang_class("ne")().tokenizer +@pytest.fixture(scope="session") +def nl_vocab(): + return get_lang_class("nl")().vocab + + @pytest.fixture(scope="session") def nl_tokenizer(): return get_lang_class("nl")().tokenizer diff --git a/spacy/tests/doc/test_creation.py b/spacy/tests/doc/test_creation.py index 6c9de8f07..6989b965f 100644 --- a/spacy/tests/doc/test_creation.py +++ b/spacy/tests/doc/test_creation.py @@ -69,4 +69,4 @@ def test_create_with_heads_and_no_deps(vocab): words = "I like ginger".split() heads = list(range(len(words))) with pytest.raises(ValueError): - doc = Doc(vocab, words=words, heads=heads) + Doc(vocab, words=words, heads=heads) diff --git a/spacy/tests/lang/grc/__init__.py b/spacy/tests/lang/grc/__init__.py new file mode 100644 index 000000000..e69de29bb diff --git a/spacy/tests/lang/grc/test_text.py b/spacy/tests/lang/grc/test_text.py new file mode 100644 index 000000000..5d8317c36 --- /dev/null +++ b/spacy/tests/lang/grc/test_text.py @@ -0,0 +1,23 @@ +import pytest + + +@pytest.mark.parametrize( + "text,match", + [ + ("ι", True), + ("α", True), + ("ϟα", True), + ("ἑκατόν", True), + ("ἐνακόσια", True), + ("δισχίλια", True), + ("μύρια", True), + ("εἷς", True), + ("λόγος", False), + (",", False), + ("λβ", True), + ], +) +def test_lex_attrs_like_number(grc_tokenizer, text, match): + tokens = grc_tokenizer(text) + assert len(tokens) == 1 + assert tokens[0].like_num == match diff --git a/spacy/tests/lang/nl/test_noun_chunks.py b/spacy/tests/lang/nl/test_noun_chunks.py new file mode 100644 index 000000000..73b501e4a --- /dev/null +++ b/spacy/tests/lang/nl/test_noun_chunks.py @@ -0,0 +1,209 @@ +from spacy.tokens import Doc +import pytest + + +@pytest.fixture +def nl_sample(nl_vocab): + # TEXT : + # Haar vriend lacht luid. We kregen alweer ruzie toen we de supermarkt ingingen. + # Aan het begin van de supermarkt is al het fruit en de groentes. Uiteindelijk hebben we dan ook + # geen avondeten gekocht. 
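+    # (English: Her friend laughs loudly. We got into an argument again when we
+    # entered the supermarket. All the fruit and vegetables are at the entrance of
+    # the supermarket. In the end we didn't buy any dinner after all.)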
+ words = [ + "Haar", + "vriend", + "lacht", + "luid", + ".", + "We", + "kregen", + "alweer", + "ruzie", + "toen", + "we", + "de", + "supermarkt", + "ingingen", + ".", + "Aan", + "het", + "begin", + "van", + "de", + "supermarkt", + "is", + "al", + "het", + "fruit", + "en", + "de", + "groentes", + ".", + "Uiteindelijk", + "hebben", + "we", + "dan", + "ook", + "geen", + "avondeten", + "gekocht", + ".", + ] + heads = [ + 1, + 2, + 2, + 2, + 2, + 6, + 6, + 6, + 6, + 13, + 13, + 12, + 13, + 6, + 6, + 17, + 17, + 24, + 20, + 20, + 17, + 24, + 24, + 24, + 24, + 27, + 27, + 24, + 24, + 36, + 36, + 36, + 36, + 36, + 35, + 36, + 36, + 36, + ] + deps = [ + "nmod:poss", + "nsubj", + "ROOT", + "advmod", + "punct", + "nsubj", + "ROOT", + "advmod", + "obj", + "mark", + "nsubj", + "det", + "obj", + "advcl", + "punct", + "case", + "det", + "obl", + "case", + "det", + "nmod", + "cop", + "advmod", + "det", + "ROOT", + "cc", + "det", + "conj", + "punct", + "advmod", + "aux", + "nsubj", + "advmod", + "advmod", + "det", + "obj", + "ROOT", + "punct", + ] + pos = [ + "PRON", + "NOUN", + "VERB", + "ADJ", + "PUNCT", + "PRON", + "VERB", + "ADV", + "NOUN", + "SCONJ", + "PRON", + "DET", + "NOUN", + "NOUN", + "PUNCT", + "ADP", + "DET", + "NOUN", + "ADP", + "DET", + "NOUN", + "AUX", + "ADV", + "DET", + "NOUN", + "CCONJ", + "DET", + "NOUN", + "PUNCT", + "ADJ", + "AUX", + "PRON", + "ADV", + "ADV", + "DET", + "NOUN", + "VERB", + "PUNCT", + ] + return Doc(nl_vocab, words=words, heads=heads, deps=deps, pos=pos) + + +@pytest.fixture +def nl_reference_chunking(): + # Using frog https://github.com/LanguageMachines/frog/ we obtain the following NOUN-PHRASES: + return [ + "haar vriend", + "we", + "ruzie", + "we", + "de supermarkt", + "het begin", + "de supermarkt", + "het fruit", + "de groentes", + "we", + "geen avondeten", + ] + + +def test_need_dep(nl_tokenizer): + """ + Test that noun_chunks raises Value Error for 'nl' language if Doc is not parsed. + """ + txt = "Haar vriend lacht luid." + doc = nl_tokenizer(txt) + + with pytest.raises(ValueError): + list(doc.noun_chunks) + + +def test_chunking(nl_sample, nl_reference_chunking): + """ + Test the noun chunks of a sample text. Uses a sample. + The sample text simulates a Doc object as would be produced by nl_core_news_md. 
+ """ + chunks = [s.text.lower() for s in nl_sample.noun_chunks] + assert chunks == nl_reference_chunking diff --git a/spacy/tests/lang/test_initialize.py b/spacy/tests/lang/test_initialize.py index 46f1f2bd1..36f4a75e0 100644 --- a/spacy/tests/lang/test_initialize.py +++ b/spacy/tests/lang/test_initialize.py @@ -4,12 +4,13 @@ from spacy.util import get_lang_class # fmt: off # Only include languages with no external dependencies -# excluded: ja, ru, th, uk, vi, zh -LANGUAGES = ["af", "ar", "bg", "bn", "ca", "cs", "da", "de", "el", "en", "es", - "et", "fa", "fi", "fr", "ga", "he", "hi", "hr", "hu", "id", "is", - "it", "kn", "lt", "lv", "nb", "nl", "pl", "pt", "ro", "si", "sk", - "sl", "sq", "sr", "sv", "ta", "te", "tl", "tn", "tr", "tt", "ur", - "yo"] +# excluded: ja, ko, th, vi, zh +LANGUAGES = ["af", "am", "ar", "az", "bg", "bn", "ca", "cs", "da", "de", "el", + "en", "es", "et", "eu", "fa", "fi", "fr", "ga", "gu", "he", "hi", + "hr", "hu", "hy", "id", "is", "it", "kn", "ky", "lb", "lt", "lv", + "mk", "ml", "mr", "nb", "ne", "nl", "pl", "pt", "ro", "ru", "sa", + "si", "sk", "sl", "sq", "sr", "sv", "ta", "te", "ti", "tl", "tn", + "tr", "tt", "uk", "ur", "xx", "yo"] # fmt: on diff --git a/spacy/tests/package/test_requirements.py b/spacy/tests/package/test_requirements.py index 82c39b72c..8e042c9cf 100644 --- a/spacy/tests/package/test_requirements.py +++ b/spacy/tests/package/test_requirements.py @@ -11,6 +11,7 @@ def test_build_dependencies(): "mock", "flake8", "hypothesis", + "pre-commit", ] # ignore language-specific packages that shouldn't be installed by all libs_ignore_setup = [ diff --git a/spacy/tests/parser/test_ner.py b/spacy/tests/parser/test_ner.py index ee9b6bf01..a30001b27 100644 --- a/spacy/tests/parser/test_ner.py +++ b/spacy/tests/parser/test_ner.py @@ -329,8 +329,8 @@ def test_ner_constructor(en_vocab): } cfg = {"model": DEFAULT_NER_MODEL} model = registry.resolve(cfg, validate=True)["model"] - ner_1 = EntityRecognizer(en_vocab, model, **config) - ner_2 = EntityRecognizer(en_vocab, model) + EntityRecognizer(en_vocab, model, **config) + EntityRecognizer(en_vocab, model) def test_ner_before_ruler(): diff --git a/spacy/tests/parser/test_parse.py b/spacy/tests/parser/test_parse.py index 1b0d9d256..b7575d063 100644 --- a/spacy/tests/parser/test_parse.py +++ b/spacy/tests/parser/test_parse.py @@ -224,8 +224,8 @@ def test_parser_constructor(en_vocab): } cfg = {"model": DEFAULT_PARSER_MODEL} model = registry.resolve(cfg, validate=True)["model"] - parser_1 = DependencyParser(en_vocab, model, **config) - parser_2 = DependencyParser(en_vocab, model) + DependencyParser(en_vocab, model, **config) + DependencyParser(en_vocab, model) @pytest.mark.parametrize("pipe_name", ["parser", "beam_parser"]) diff --git a/spacy/tests/pipeline/test_annotates_on_update.py b/spacy/tests/pipeline/test_annotates_on_update.py index c5288112d..869b8b874 100644 --- a/spacy/tests/pipeline/test_annotates_on_update.py +++ b/spacy/tests/pipeline/test_annotates_on_update.py @@ -74,7 +74,7 @@ def test_annotates_on_update(): nlp.add_pipe("assert_sents") # When the pipeline runs, annotations are set - doc = nlp("This is a sentence.") + nlp("This is a sentence.") examples = [] for text in ["a a", "b b", "c c"]: diff --git a/spacy/tests/pipeline/test_lemmatizer.py b/spacy/tests/pipeline/test_lemmatizer.py index 1bec8696c..0d2d3d6e5 100644 --- a/spacy/tests/pipeline/test_lemmatizer.py +++ b/spacy/tests/pipeline/test_lemmatizer.py @@ -110,4 +110,4 @@ def test_lemmatizer_serialize(nlp): assert doc2[0].lemma_ == "cope" # 
Make sure that lemmatizer cache can be pickled - b = pickle.dumps(lemmatizer2) + pickle.dumps(lemmatizer2) diff --git a/spacy/tests/pipeline/test_pipe_methods.py b/spacy/tests/pipeline/test_pipe_methods.py index e530cb5c4..87fd64307 100644 --- a/spacy/tests/pipeline/test_pipe_methods.py +++ b/spacy/tests/pipeline/test_pipe_methods.py @@ -52,7 +52,7 @@ def test_cant_add_pipe_first_and_last(nlp): nlp.add_pipe("new_pipe", first=True, last=True) -@pytest.mark.parametrize("name", ["my_component"]) +@pytest.mark.parametrize("name", ["test_get_pipe"]) def test_get_pipe(nlp, name): with pytest.raises(KeyError): nlp.get_pipe(name) @@ -62,7 +62,7 @@ def test_get_pipe(nlp, name): @pytest.mark.parametrize( "name,replacement,invalid_replacement", - [("my_component", "other_pipe", lambda doc: doc)], + [("test_replace_pipe", "other_pipe", lambda doc: doc)], ) def test_replace_pipe(nlp, name, replacement, invalid_replacement): with pytest.raises(ValueError): @@ -435,8 +435,8 @@ def test_update_with_annotates(): return component - c1 = Language.component(f"{name}1", func=make_component(f"{name}1")) - c2 = Language.component(f"{name}2", func=make_component(f"{name}2")) + Language.component(f"{name}1", func=make_component(f"{name}1")) + Language.component(f"{name}2", func=make_component(f"{name}2")) components = set([f"{name}1", f"{name}2"]) diff --git a/spacy/tests/pipeline/test_spancat.py b/spacy/tests/pipeline/test_spancat.py index 5345a4749..0364abf73 100644 --- a/spacy/tests/pipeline/test_spancat.py +++ b/spacy/tests/pipeline/test_spancat.py @@ -183,3 +183,24 @@ def test_ngram_suggester(en_tokenizer): docs = [en_tokenizer(text) for text in ["", "", ""]] ngrams = ngram_suggester(docs) assert_equal(ngrams.lengths, [len(doc) for doc in docs]) + + +def test_ngram_sizes(en_tokenizer): + # test that the range suggester works well + size_suggester = registry.misc.get("spacy.ngram_suggester.v1")(sizes=[1, 2, 3]) + suggester_factory = registry.misc.get("spacy.ngram_range_suggester.v1") + range_suggester = suggester_factory(min_size=1, max_size=3) + docs = [ + en_tokenizer(text) for text in ["a", "a b", "a b c", "a b c d", "a b c d e"] + ] + ngrams_1 = size_suggester(docs) + ngrams_2 = range_suggester(docs) + assert_equal(ngrams_1.lengths, [1, 3, 6, 9, 12]) + assert_equal(ngrams_1.lengths, ngrams_2.lengths) + assert_equal(ngrams_1.data, ngrams_2.data) + + # one more variation + suggester_factory = registry.misc.get("spacy.ngram_range_suggester.v1") + range_suggester = suggester_factory(min_size=2, max_size=4) + ngrams_3 = range_suggester(docs) + assert_equal(ngrams_3.lengths, [0, 1, 3, 6, 9]) diff --git a/spacy/tests/pipeline/test_tagger.py b/spacy/tests/pipeline/test_tagger.py index 37895e7c8..ec14b70da 100644 --- a/spacy/tests/pipeline/test_tagger.py +++ b/spacy/tests/pipeline/test_tagger.py @@ -182,6 +182,17 @@ def test_overfitting_IO(): assert_equal(batch_deps_1, batch_deps_2) assert_equal(batch_deps_1, no_batch_deps) + # Try to unlearn the first 'N' tag with negative annotation + neg_ex = Example.from_dict(nlp.make_doc(test_text), {"tags": ["!N", "V", "J", "N"]}) + + for i in range(20): + losses = {} + nlp.update([neg_ex], sgd=optimizer, losses=losses) + + # test the "untrained" tag + doc3 = nlp(test_text) + assert doc3[0].tag_ != "N" + def test_tagger_requires_labels(): nlp = English() diff --git a/spacy/tests/regression/test_issue5001-5500.py b/spacy/tests/regression/test_issue5001-5500.py index 9eefef2e5..bc9bcb982 100644 --- a/spacy/tests/regression/test_issue5001-5500.py +++ 
b/spacy/tests/regression/test_issue5001-5500.py @@ -69,9 +69,12 @@ def test_issue5082(): def test_issue5137(): - @Language.factory("my_component") + factory_name = "test_issue5137" + pipe_name = "my_component" + + @Language.factory(factory_name) class MyComponent: - def __init__(self, nlp, name="my_component", categories="all_categories"): + def __init__(self, nlp, name=pipe_name, categories="all_categories"): self.nlp = nlp self.categories = categories self.name = name @@ -86,13 +89,13 @@ def test_issue5137(): pass nlp = English() - my_component = nlp.add_pipe("my_component") + my_component = nlp.add_pipe(factory_name, name=pipe_name) assert my_component.categories == "all_categories" with make_tempdir() as tmpdir: nlp.to_disk(tmpdir) - overrides = {"components": {"my_component": {"categories": "my_categories"}}} + overrides = {"components": {pipe_name: {"categories": "my_categories"}}} nlp2 = spacy.load(tmpdir, config=overrides) - assert nlp2.get_pipe("my_component").categories == "my_categories" + assert nlp2.get_pipe(pipe_name).categories == "my_categories" def test_issue5141(en_vocab): diff --git a/spacy/tests/regression/test_issue7001-8000.py b/spacy/tests/regression/test_issue7001-8000.py new file mode 100644 index 000000000..5bb7cc08e --- /dev/null +++ b/spacy/tests/regression/test_issue7001-8000.py @@ -0,0 +1,281 @@ +from spacy.cli.evaluate import print_textcats_auc_per_cat, print_prf_per_type +from spacy.lang.en import English +from spacy.training import Example +from spacy.tokens.doc import Doc +from spacy.vocab import Vocab +from spacy.kb import KnowledgeBase +from spacy.pipeline._parser_internals.arc_eager import ArcEager +from spacy.util import load_config_from_str, load_config +from spacy.cli.init_config import fill_config +from thinc.api import Config +from wasabi import msg + +from ..util import make_tempdir + + +def test_issue7019(): + scores = {"LABEL_A": 0.39829102, "LABEL_B": 0.938298329382, "LABEL_C": None} + print_textcats_auc_per_cat(msg, scores) + scores = { + "LABEL_A": {"p": 0.3420302, "r": 0.3929020, "f": 0.49823928932}, + "LABEL_B": {"p": None, "r": None, "f": None}, + } + print_prf_per_type(msg, scores, name="foo", type="bar") + + +CONFIG_7029 = """ +[nlp] +lang = "en" +pipeline = ["tok2vec", "tagger"] + +[components] + +[components.tok2vec] +factory = "tok2vec" + +[components.tok2vec.model] +@architectures = "spacy.Tok2Vec.v1" + +[components.tok2vec.model.embed] +@architectures = "spacy.MultiHashEmbed.v1" +width = ${components.tok2vec.model.encode:width} +attrs = ["NORM","PREFIX","SUFFIX","SHAPE"] +rows = [5000,2500,2500,2500] +include_static_vectors = false + +[components.tok2vec.model.encode] +@architectures = "spacy.MaxoutWindowEncoder.v1" +width = 96 +depth = 4 +window_size = 1 +maxout_pieces = 3 + +[components.tagger] +factory = "tagger" + +[components.tagger.model] +@architectures = "spacy.Tagger.v1" +nO = null + +[components.tagger.model.tok2vec] +@architectures = "spacy.Tok2VecListener.v1" +width = ${components.tok2vec.model.encode:width} +upstream = "*" +""" + + +def test_issue7029(): + """Test that an empty document doesn't mess up an entire batch.""" + TRAIN_DATA = [ + ("I like green eggs", {"tags": ["N", "V", "J", "N"]}), + ("Eat blue ham", {"tags": ["V", "J", "N"]}), + ] + nlp = English.from_config(load_config_from_str(CONFIG_7029)) + train_examples = [] + for t in TRAIN_DATA: + train_examples.append(Example.from_dict(nlp.make_doc(t[0]), t[1])) + optimizer = nlp.initialize(get_examples=lambda: train_examples) + for i in range(50): + losses = {} + 
nlp.update(train_examples, sgd=optimizer, losses=losses) + texts = ["first", "second", "third", "fourth", "and", "then", "some", ""] + docs1 = list(nlp.pipe(texts, batch_size=1)) + docs2 = list(nlp.pipe(texts, batch_size=4)) + assert [doc[0].tag_ for doc in docs1[:-1]] == [doc[0].tag_ for doc in docs2[:-1]] + + +def test_issue7055(): + """Test that fill-config doesn't turn sourced components into factories.""" + source_cfg = { + "nlp": {"lang": "en", "pipeline": ["tok2vec", "tagger"]}, + "components": { + "tok2vec": {"factory": "tok2vec"}, + "tagger": {"factory": "tagger"}, + }, + } + source_nlp = English.from_config(source_cfg) + with make_tempdir() as dir_path: + # We need to create a loadable source pipeline + source_path = dir_path / "test_model" + source_nlp.to_disk(source_path) + base_cfg = { + "nlp": {"lang": "en", "pipeline": ["tok2vec", "tagger", "ner"]}, + "components": { + "tok2vec": {"source": str(source_path)}, + "tagger": {"source": str(source_path)}, + "ner": {"factory": "ner"}, + }, + } + base_cfg = Config(base_cfg) + base_path = dir_path / "base.cfg" + base_cfg.to_disk(base_path) + output_path = dir_path / "config.cfg" + fill_config(output_path, base_path, silent=True) + filled_cfg = load_config(output_path) + assert filled_cfg["components"]["tok2vec"]["source"] == str(source_path) + assert filled_cfg["components"]["tagger"]["source"] == str(source_path) + assert filled_cfg["components"]["ner"]["factory"] == "ner" + assert "model" in filled_cfg["components"]["ner"] + + +def test_issue7056(): + """Test that the Unshift transition works properly, and doesn't cause + sentence segmentation errors.""" + vocab = Vocab() + ae = ArcEager( + vocab.strings, ArcEager.get_actions(left_labels=["amod"], right_labels=["pobj"]) + ) + doc = Doc(vocab, words="Severe pain , after trauma".split()) + state = ae.init_batch([doc])[0] + ae.apply_transition(state, "S") + ae.apply_transition(state, "L-amod") + ae.apply_transition(state, "S") + ae.apply_transition(state, "S") + ae.apply_transition(state, "S") + ae.apply_transition(state, "R-pobj") + ae.apply_transition(state, "D") + ae.apply_transition(state, "D") + ae.apply_transition(state, "D") + assert not state.eol() + + +def test_partial_links(): + # Test that having some entities on the doc without gold links, doesn't crash + TRAIN_DATA = [ + ( + "Russ Cochran his reprints include EC Comics.", + { + "links": {(0, 12): {"Q2146908": 1.0}}, + "entities": [(0, 12, "PERSON")], + "sent_starts": [1, -1, 0, 0, 0, 0, 0, 0], + }, + ) + ] + nlp = English() + vector_length = 3 + train_examples = [] + for text, annotation in TRAIN_DATA: + doc = nlp(text) + train_examples.append(Example.from_dict(doc, annotation)) + + def create_kb(vocab): + # create artificial KB + mykb = KnowledgeBase(vocab, entity_vector_length=vector_length) + mykb.add_entity(entity="Q2146908", freq=12, entity_vector=[6, -4, 3]) + mykb.add_alias("Russ Cochran", ["Q2146908"], [0.9]) + return mykb + + # Create and train the Entity Linker + entity_linker = nlp.add_pipe("entity_linker", last=True) + entity_linker.set_kb(create_kb) + optimizer = nlp.initialize(get_examples=lambda: train_examples) + for i in range(2): + losses = {} + nlp.update(train_examples, sgd=optimizer, losses=losses) + + # adding additional components that are required for the entity_linker + nlp.add_pipe("sentencizer", first=True) + patterns = [ + {"label": "PERSON", "pattern": [{"LOWER": "russ"}, {"LOWER": "cochran"}]}, + {"label": "ORG", "pattern": [{"LOWER": "ec"}, {"LOWER": "comics"}]}, + ] + ruler = 
nlp.add_pipe("entity_ruler", before="entity_linker") + ruler.add_patterns(patterns) + + # this will run the pipeline on the examples and shouldn't crash + results = nlp.evaluate(train_examples) + assert "PERSON" in results["ents_per_type"] + assert "PERSON" in results["nel_f_per_type"] + assert "ORG" in results["ents_per_type"] + assert "ORG" not in results["nel_f_per_type"] + + +def test_issue7065(): + text = "Kathleen Battle sang in Mahler 's Symphony No. 8 at the Cincinnati Symphony Orchestra 's May Festival." + nlp = English() + nlp.add_pipe("sentencizer") + ruler = nlp.add_pipe("entity_ruler") + patterns = [ + { + "label": "THING", + "pattern": [ + {"LOWER": "symphony"}, + {"LOWER": "no"}, + {"LOWER": "."}, + {"LOWER": "8"}, + ], + } + ] + ruler.add_patterns(patterns) + + doc = nlp(text) + sentences = [s for s in doc.sents] + assert len(sentences) == 2 + sent0 = sentences[0] + ent = doc.ents[0] + assert ent.start < sent0.end < ent.end + assert sentences.index(ent.sent) == 0 + + +def test_issue7065_b(): + # Test that the NEL doesn't crash when an entity crosses a sentence boundary + nlp = English() + vector_length = 3 + nlp.add_pipe("sentencizer") + text = "Mahler 's Symphony No. 8 was beautiful." + entities = [(0, 6, "PERSON"), (10, 24, "WORK")] + links = { + (0, 6): {"Q7304": 1.0, "Q270853": 0.0}, + (10, 24): {"Q7304": 0.0, "Q270853": 1.0}, + } + sent_starts = [1, -1, 0, 0, 0, 0, 0, 0, 0] + doc = nlp(text) + example = Example.from_dict( + doc, {"entities": entities, "links": links, "sent_starts": sent_starts} + ) + train_examples = [example] + + def create_kb(vocab): + # create artificial KB + mykb = KnowledgeBase(vocab, entity_vector_length=vector_length) + mykb.add_entity(entity="Q270853", freq=12, entity_vector=[9, 1, -7]) + mykb.add_alias( + alias="No. 
8", + entities=["Q270853"], + probabilities=[1.0], + ) + mykb.add_entity(entity="Q7304", freq=12, entity_vector=[6, -4, 3]) + mykb.add_alias( + alias="Mahler", + entities=["Q7304"], + probabilities=[1.0], + ) + return mykb + + # Create the Entity Linker component and add it to the pipeline + entity_linker = nlp.add_pipe("entity_linker", last=True) + entity_linker.set_kb(create_kb) + # train the NEL pipe + optimizer = nlp.initialize(get_examples=lambda: train_examples) + for i in range(2): + losses = {} + nlp.update(train_examples, sgd=optimizer, losses=losses) + + # Add a custom rule-based component to mimick NER + patterns = [ + {"label": "PERSON", "pattern": [{"LOWER": "mahler"}]}, + { + "label": "WORK", + "pattern": [ + {"LOWER": "symphony"}, + {"LOWER": "no"}, + {"LOWER": "."}, + {"LOWER": "8"}, + ], + }, + ] + ruler = nlp.add_pipe("entity_ruler", before="entity_linker") + ruler.add_patterns(patterns) + # test the trained model - this should not throw E148 + doc = nlp(text) + assert doc diff --git a/spacy/tests/regression/test_issue7019.py b/spacy/tests/regression/test_issue7019.py deleted file mode 100644 index 53958b594..000000000 --- a/spacy/tests/regression/test_issue7019.py +++ /dev/null @@ -1,12 +0,0 @@ -from spacy.cli.evaluate import print_textcats_auc_per_cat, print_prf_per_type -from wasabi import msg - - -def test_issue7019(): - scores = {"LABEL_A": 0.39829102, "LABEL_B": 0.938298329382, "LABEL_C": None} - print_textcats_auc_per_cat(msg, scores) - scores = { - "LABEL_A": {"p": 0.3420302, "r": 0.3929020, "f": 0.49823928932}, - "LABEL_B": {"p": None, "r": None, "f": None}, - } - print_prf_per_type(msg, scores, name="foo", type="bar") diff --git a/spacy/tests/regression/test_issue7029.py b/spacy/tests/regression/test_issue7029.py deleted file mode 100644 index 8435b32e1..000000000 --- a/spacy/tests/regression/test_issue7029.py +++ /dev/null @@ -1,66 +0,0 @@ -from spacy.lang.en import English -from spacy.training import Example -from spacy.util import load_config_from_str - - -CONFIG = """ -[nlp] -lang = "en" -pipeline = ["tok2vec", "tagger"] - -[components] - -[components.tok2vec] -factory = "tok2vec" - -[components.tok2vec.model] -@architectures = "spacy.Tok2Vec.v1" - -[components.tok2vec.model.embed] -@architectures = "spacy.MultiHashEmbed.v1" -width = ${components.tok2vec.model.encode:width} -attrs = ["NORM","PREFIX","SUFFIX","SHAPE"] -rows = [5000,2500,2500,2500] -include_static_vectors = false - -[components.tok2vec.model.encode] -@architectures = "spacy.MaxoutWindowEncoder.v1" -width = 96 -depth = 4 -window_size = 1 -maxout_pieces = 3 - -[components.tagger] -factory = "tagger" - -[components.tagger.model] -@architectures = "spacy.Tagger.v1" -nO = null - -[components.tagger.model.tok2vec] -@architectures = "spacy.Tok2VecListener.v1" -width = ${components.tok2vec.model.encode:width} -upstream = "*" -""" - - -TRAIN_DATA = [ - ("I like green eggs", {"tags": ["N", "V", "J", "N"]}), - ("Eat blue ham", {"tags": ["V", "J", "N"]}), -] - - -def test_issue7029(): - """Test that an empty document doesn't mess up an entire batch.""" - nlp = English.from_config(load_config_from_str(CONFIG)) - train_examples = [] - for t in TRAIN_DATA: - train_examples.append(Example.from_dict(nlp.make_doc(t[0]), t[1])) - optimizer = nlp.initialize(get_examples=lambda: train_examples) - for i in range(50): - losses = {} - nlp.update(train_examples, sgd=optimizer, losses=losses) - texts = ["first", "second", "third", "fourth", "and", "then", "some", ""] - docs1 = list(nlp.pipe(texts, batch_size=1)) - 
docs2 = list(nlp.pipe(texts, batch_size=4)) - assert [doc[0].tag_ for doc in docs1[:-1]] == [doc[0].tag_ for doc in docs2[:-1]] diff --git a/spacy/tests/regression/test_issue7055.py b/spacy/tests/regression/test_issue7055.py deleted file mode 100644 index c7ddb0a75..000000000 --- a/spacy/tests/regression/test_issue7055.py +++ /dev/null @@ -1,40 +0,0 @@ -from spacy.cli.init_config import fill_config -from spacy.util import load_config -from spacy.lang.en import English -from thinc.api import Config - -from ..util import make_tempdir - - -def test_issue7055(): - """Test that fill-config doesn't turn sourced components into factories.""" - source_cfg = { - "nlp": {"lang": "en", "pipeline": ["tok2vec", "tagger"]}, - "components": { - "tok2vec": {"factory": "tok2vec"}, - "tagger": {"factory": "tagger"}, - }, - } - source_nlp = English.from_config(source_cfg) - with make_tempdir() as dir_path: - # We need to create a loadable source pipeline - source_path = dir_path / "test_model" - source_nlp.to_disk(source_path) - base_cfg = { - "nlp": {"lang": "en", "pipeline": ["tok2vec", "tagger", "ner"]}, - "components": { - "tok2vec": {"source": str(source_path)}, - "tagger": {"source": str(source_path)}, - "ner": {"factory": "ner"}, - }, - } - base_cfg = Config(base_cfg) - base_path = dir_path / "base.cfg" - base_cfg.to_disk(base_path) - output_path = dir_path / "config.cfg" - fill_config(output_path, base_path, silent=True) - filled_cfg = load_config(output_path) - assert filled_cfg["components"]["tok2vec"]["source"] == str(source_path) - assert filled_cfg["components"]["tagger"]["source"] == str(source_path) - assert filled_cfg["components"]["ner"]["factory"] == "ner" - assert "model" in filled_cfg["components"]["ner"] diff --git a/spacy/tests/regression/test_issue7056.py b/spacy/tests/regression/test_issue7056.py deleted file mode 100644 index e94a975d4..000000000 --- a/spacy/tests/regression/test_issue7056.py +++ /dev/null @@ -1,24 +0,0 @@ -from spacy.tokens.doc import Doc -from spacy.vocab import Vocab -from spacy.pipeline._parser_internals.arc_eager import ArcEager - - -def test_issue7056(): - """Test that the Unshift transition works properly, and doesn't cause - sentence segmentation errors.""" - vocab = Vocab() - ae = ArcEager( - vocab.strings, ArcEager.get_actions(left_labels=["amod"], right_labels=["pobj"]) - ) - doc = Doc(vocab, words="Severe pain , after trauma".split()) - state = ae.init_batch([doc])[0] - ae.apply_transition(state, "S") - ae.apply_transition(state, "L-amod") - ae.apply_transition(state, "S") - ae.apply_transition(state, "S") - ae.apply_transition(state, "S") - ae.apply_transition(state, "R-pobj") - ae.apply_transition(state, "D") - ae.apply_transition(state, "D") - ae.apply_transition(state, "D") - assert not state.eol() diff --git a/spacy/tests/regression/test_issue7062.py b/spacy/tests/regression/test_issue7062.py deleted file mode 100644 index 66bf09523..000000000 --- a/spacy/tests/regression/test_issue7062.py +++ /dev/null @@ -1,54 +0,0 @@ -from spacy.kb import KnowledgeBase -from spacy.training import Example -from spacy.lang.en import English - - -# fmt: off -TRAIN_DATA = [ - ("Russ Cochran his reprints include EC Comics.", - {"links": {(0, 12): {"Q2146908": 1.0}}, - "entities": [(0, 12, "PERSON")], - "sent_starts": [1, -1, 0, 0, 0, 0, 0, 0]}) -] -# fmt: on - - -def test_partial_links(): - # Test that having some entities on the doc without gold links, doesn't crash - nlp = English() - vector_length = 3 - train_examples = [] - for text, annotation in TRAIN_DATA: - doc 
= nlp(text) - train_examples.append(Example.from_dict(doc, annotation)) - - def create_kb(vocab): - # create artificial KB - mykb = KnowledgeBase(vocab, entity_vector_length=vector_length) - mykb.add_entity(entity="Q2146908", freq=12, entity_vector=[6, -4, 3]) - mykb.add_alias("Russ Cochran", ["Q2146908"], [0.9]) - return mykb - - # Create and train the Entity Linker - entity_linker = nlp.add_pipe("entity_linker", last=True) - entity_linker.set_kb(create_kb) - optimizer = nlp.initialize(get_examples=lambda: train_examples) - for i in range(2): - losses = {} - nlp.update(train_examples, sgd=optimizer, losses=losses) - - # adding additional components that are required for the entity_linker - nlp.add_pipe("sentencizer", first=True) - patterns = [ - {"label": "PERSON", "pattern": [{"LOWER": "russ"}, {"LOWER": "cochran"}]}, - {"label": "ORG", "pattern": [{"LOWER": "ec"}, {"LOWER": "comics"}]}, - ] - ruler = nlp.add_pipe("entity_ruler", before="entity_linker") - ruler.add_patterns(patterns) - - # this will run the pipeline on the examples and shouldn't crash - results = nlp.evaluate(train_examples) - assert "PERSON" in results["ents_per_type"] - assert "PERSON" in results["nel_f_per_type"] - assert "ORG" in results["ents_per_type"] - assert "ORG" not in results["nel_f_per_type"] diff --git a/spacy/tests/regression/test_issue7065.py b/spacy/tests/regression/test_issue7065.py deleted file mode 100644 index d40763c63..000000000 --- a/spacy/tests/regression/test_issue7065.py +++ /dev/null @@ -1,97 +0,0 @@ -from spacy.kb import KnowledgeBase -from spacy.lang.en import English -from spacy.training import Example - - -def test_issue7065(): - text = "Kathleen Battle sang in Mahler 's Symphony No. 8 at the Cincinnati Symphony Orchestra 's May Festival." - nlp = English() - nlp.add_pipe("sentencizer") - ruler = nlp.add_pipe("entity_ruler") - patterns = [ - { - "label": "THING", - "pattern": [ - {"LOWER": "symphony"}, - {"LOWER": "no"}, - {"LOWER": "."}, - {"LOWER": "8"}, - ], - } - ] - ruler.add_patterns(patterns) - - doc = nlp(text) - sentences = [s for s in doc.sents] - assert len(sentences) == 2 - sent0 = sentences[0] - ent = doc.ents[0] - assert ent.start < sent0.end < ent.end - assert sentences.index(ent.sent) == 0 - - -def test_issue7065_b(): - # Test that the NEL doesn't crash when an entity crosses a sentence boundary - nlp = English() - vector_length = 3 - nlp.add_pipe("sentencizer") - - text = "Mahler 's Symphony No. 8 was beautiful." - entities = [(0, 6, "PERSON"), (10, 24, "WORK")] - links = { - (0, 6): {"Q7304": 1.0, "Q270853": 0.0}, - (10, 24): {"Q7304": 0.0, "Q270853": 1.0}, - } - sent_starts = [1, -1, 0, 0, 0, 0, 0, 0, 0] - doc = nlp(text) - example = Example.from_dict( - doc, {"entities": entities, "links": links, "sent_starts": sent_starts} - ) - train_examples = [example] - - def create_kb(vocab): - # create artificial KB - mykb = KnowledgeBase(vocab, entity_vector_length=vector_length) - mykb.add_entity(entity="Q270853", freq=12, entity_vector=[9, 1, -7]) - mykb.add_alias( - alias="No. 
8", - entities=["Q270853"], - probabilities=[1.0], - ) - mykb.add_entity(entity="Q7304", freq=12, entity_vector=[6, -4, 3]) - mykb.add_alias( - alias="Mahler", - entities=["Q7304"], - probabilities=[1.0], - ) - return mykb - - # Create the Entity Linker component and add it to the pipeline - entity_linker = nlp.add_pipe("entity_linker", last=True) - entity_linker.set_kb(create_kb) - - # train the NEL pipe - optimizer = nlp.initialize(get_examples=lambda: train_examples) - for i in range(2): - losses = {} - nlp.update(train_examples, sgd=optimizer, losses=losses) - - # Add a custom rule-based component to mimick NER - patterns = [ - {"label": "PERSON", "pattern": [{"LOWER": "mahler"}]}, - { - "label": "WORK", - "pattern": [ - {"LOWER": "symphony"}, - {"LOWER": "no"}, - {"LOWER": "."}, - {"LOWER": "8"}, - ], - }, - ] - ruler = nlp.add_pipe("entity_ruler", before="entity_linker") - ruler.add_patterns(patterns) - - # test the trained model - this should not throw E148 - doc = nlp(text) - assert doc diff --git a/spacy/tests/serialize/test_serialize_pipeline.py b/spacy/tests/serialize/test_serialize_pipeline.py index 35cc22d24..c8162a690 100644 --- a/spacy/tests/serialize/test_serialize_pipeline.py +++ b/spacy/tests/serialize/test_serialize_pipeline.py @@ -60,12 +60,6 @@ def taggers(en_vocab): @pytest.mark.parametrize("Parser", test_parsers) def test_serialize_parser_roundtrip_bytes(en_vocab, Parser): - config = { - "update_with_oracle_cut_size": 100, - "beam_width": 1, - "beam_update_prob": 1.0, - "beam_density": 0.0, - } cfg = {"model": DEFAULT_PARSER_MODEL} model = registry.resolve(cfg, validate=True)["model"] parser = Parser(en_vocab, model) diff --git a/spacy/tests/test_cli.py b/spacy/tests/test_cli.py index 03bef3528..6f0fdcfa5 100644 --- a/spacy/tests/test_cli.py +++ b/spacy/tests/test_cli.py @@ -440,7 +440,7 @@ def test_init_config(lang, pipeline, optimize, pretraining): assert isinstance(config, Config) if pretraining: config["paths"]["raw_text"] = "my_data.jsonl" - nlp = load_model_from_config(config, auto_fill=True) + load_model_from_config(config, auto_fill=True) def test_model_recommendations(): diff --git a/spacy/tests/test_models.py b/spacy/tests/test_models.py index 33d394933..47540198a 100644 --- a/spacy/tests/test_models.py +++ b/spacy/tests/test_models.py @@ -211,7 +211,7 @@ def test_empty_docs(model_func, kwargs): def test_init_extract_spans(): - model = extract_spans().initialize() + extract_spans().initialize() def test_extract_spans_span_indices(): diff --git a/spacy/training/initialize.py b/spacy/training/initialize.py index 3cfd33f95..6051eef29 100644 --- a/spacy/training/initialize.py +++ b/spacy/training/initialize.py @@ -71,10 +71,13 @@ def init_nlp(config: Config, *, use_gpu: int = -1) -> "Language": nlp._link_components() with nlp.select_pipes(disable=[*frozen_components, *resume_components]): if T["max_epochs"] == -1: + sample_size = 100 logger.debug( - "Due to streamed train corpus, using only first 100 examples for initialization. If necessary, provide all labels in [initialize]. More info: https://spacy.io/api/cli#init_labels" + f"Due to streamed train corpus, using only first {sample_size} " + f"examples for initialization. If necessary, provide all labels " + f"in [initialize]. 
More info: https://spacy.io/api/cli#init_labels" + ) - nlp.initialize(lambda: islice(train_corpus(nlp), 100), sgd=optimizer) + nlp.initialize(lambda: islice(train_corpus(nlp), sample_size), sgd=optimizer) else: nlp.initialize(lambda: train_corpus(nlp), sgd=optimizer) logger.info(f"Initialized pipeline components: {nlp.pipe_names}") @@ -86,7 +89,6 @@ def init_nlp(config: Config, *, use_gpu: int = -1) -> "Language": # Don't warn about components not in the pipeline if listener not in nlp.pipe_names: continue - if listener in frozen_components and name not in frozen_components: logger.warning(Warnings.W087.format(name=name, listener=listener)) # We always check this regardless, in case user freezes tok2vec @@ -154,6 +156,8 @@ def load_vectors_into_model( logger.warning(Warnings.W112.format(name=name)) nlp.vocab.vectors = vectors_nlp.vocab.vectors + for lex in nlp.vocab: + lex.rank = nlp.vocab.vectors.key2row.get(lex.orth, OOV_RANK) if add_strings: # I guess we should add the strings from the vectors_nlp model? # E.g. if someone does a similarity query, they might expect the strings. diff --git a/website/docs/api/spancategorizer.md b/website/docs/api/spancategorizer.md index f26dba149..57395846d 100644 --- a/website/docs/api/spancategorizer.md +++ b/website/docs/api/spancategorizer.md @@ -451,3 +451,24 @@ integers. The array has two columns, indicating the start and end position. | ----------- | -------------------------------------------------------------------------------------------------------------------- | | `sizes` | The phrase lengths to suggest. For example, `[1, 2]` will suggest phrases consisting of 1 or 2 tokens. ~~List[int]~~ | | **CREATES** | The suggester function. ~~Callable[[List[Doc]], Ragged]~~ | + +### spacy.ngram_range_suggester.v1 {#ngram_range_suggester} + +> #### Example Config +> +> ```ini
+> [components.spancat.suggester]
+> @misc = "spacy.ngram_range_suggester.v1"
+> min_size = 2
+> max_size = 4
+> ```
+
+Suggest all spans of at least length `min_size` and at most length `max_size`
+(both inclusive). Spans are returned as a ragged array of integers. The array
+has two columns, indicating the start and end position.
+
+| Name | Description |
+| ----------- | ------------------------------------------------------------ |
+| `min_size` | The minimal phrase length to suggest (inclusive). ~~int~~ |
+| `max_size` | The maximal phrase length to suggest (inclusive). ~~int~~ |
+| **CREATES** | The suggester function. ~~Callable[[List[Doc]], Ragged]~~ |
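
For readers of the new `spancategorizer.md` section above, a short usage sketch may help. The snippet below is not part of this diff: it simply resolves the registered `spacy.ngram_range_suggester.v1` factory the same way `test_ngram_sizes` does and inspects its output; the blank English pipeline and the example texts are illustrative assumptions.

```python
# Minimal sketch (not from this PR): resolving the new range suggester from the
# registry and checking its output shape, mirroring test_ngram_sizes above.
import spacy
from spacy import registry

nlp = spacy.blank("en")  # assumption: a tokenizer-only pipeline is enough to build Docs
suggester = registry.misc.get("spacy.ngram_range_suggester.v1")(min_size=1, max_size=3)

docs = [nlp.make_doc(text) for text in ["a", "a b", "a b c"]]
spans = suggester(docs)  # Ragged array of (start, end) token offsets

# With both bounds inclusive, the docs yield 1, 3 and 6 candidate spans:
# a 1-token doc has 1 unigram; 2 tokens give 2 unigrams + 1 bigram; 3 tokens give 3 + 2 + 1.
print(spans.lengths)  # one count per doc, e.g. [1 3 6]
print(spans.data)     # all (start, end) pairs across the batch
```

As the test asserts, the range suggester with `min_size=1, max_size=3` produces exactly the same spans as `spacy.ngram_suggester.v1` with `sizes=[1, 2, 3]`; it is a convenience wrapper over the existing size-list suggester.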