# spaCy/spacy/tests/conftest.py
# coding: utf-8
from __future__ import unicode_literals
import pytest
from spacy.util import get_lang_class


def pytest_addoption(parser):
    try:
        parser.addoption("--slow", action="store_true", help="include slow tests")
        parser.addoption("--issue", action="store", help="test specific issues")
    # Options are already added, e.g. if conftest is copied in a build pipeline
    # and runs twice
    except ValueError:
        pass
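
# Illustrative invocations of the options registered above (the path and the
# issue number are hypothetical; how --issue is consumed is not shown in this
# file, but per action="store" it takes a value):
#
#     pytest spacy/tests --slow
#     pytest spacy/tests --issue 1234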


def pytest_runtest_setup(item):
    def getopt(opt):
        # When using 'pytest --pyargs spacy' to test an installed copy of
        # spacy, pytest skips running our pytest_addoption() hook. Later, when
        # we call getoption(), pytest raises an error, because it doesn't
        # recognize the option we're asking about. To avoid this, we need to
        # pass a default value. We default to False, i.e., we act like all the
        # options weren't given.
        return item.config.getoption("--%s" % opt, False)

    for opt in ["slow"]:
        if opt in item.keywords and not getopt(opt):
            pytest.skip("need --%s option to run" % opt)
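
# Illustrative usage (hypothetical test name, not part of this conftest): a
# test opts in to the --slow gate above by carrying the matching marker:
#
#     @pytest.mark.slow
#     def test_expensive_model_training():
#         ...
#
# Plain `pytest` skips it; `pytest --slow` runs it.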


# Fixtures for language tokenizers (languages sorted alphabetically)


@pytest.fixture(scope="module")
def tokenizer():
    return get_lang_class("xx").Defaults.create_tokenizer()
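
# Illustrative usage (hypothetical test, not part of this conftest): a test
# module requests one of the tokenizer fixtures below by name and calls it on
# a string:
#
#     def test_en_tokenizer_splits_punct(en_tokenizer):
#         tokens = en_tokenizer("Hello, world!")
#         assert [t.text for t in tokens] == ["Hello", ",", "world", "!"]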


@pytest.fixture(scope="session")
def am_tokenizer():
    return get_lang_class("am").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def ar_tokenizer():
    return get_lang_class("ar").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def bn_tokenizer():
    return get_lang_class("bn").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def ca_tokenizer():
    return get_lang_class("ca").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def cs_tokenizer():
    return get_lang_class("cs").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def da_tokenizer():
    return get_lang_class("da").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def de_tokenizer():
    return get_lang_class("de").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def el_tokenizer():
    return get_lang_class("el").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def en_tokenizer():
    return get_lang_class("en").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def en_vocab():
    return get_lang_class("en").Defaults.create_vocab()


@pytest.fixture(scope="session")
def en_parser(en_vocab):
    nlp = get_lang_class("en")(en_vocab)
    return nlp.create_pipe("parser")
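
# Illustrative usage (hypothetical test, not part of this conftest): en_vocab
# makes it easy to build a Doc directly, without running a full pipeline:
#
#     from spacy.tokens import Doc
#
#     def test_doc_from_vocab(en_vocab):
#         doc = Doc(en_vocab, words=["Hello", "world"])
#         assert doc[0].text == "Hello"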


@pytest.fixture(scope="session")
def es_tokenizer():
    return get_lang_class("es").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def eu_tokenizer():
    return get_lang_class("eu").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def fa_tokenizer():
    return get_lang_class("fa").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def fi_tokenizer():
    return get_lang_class("fi").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def fr_tokenizer():
    return get_lang_class("fr").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def ga_tokenizer():
    return get_lang_class("ga").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def gu_tokenizer():
    return get_lang_class("gu").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def he_tokenizer():
    return get_lang_class("he").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def hi_tokenizer():
    return get_lang_class("hi").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def hr_tokenizer():
    return get_lang_class("hr").Defaults.create_tokenizer()


@pytest.fixture
def hu_tokenizer():
    return get_lang_class("hu").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def id_tokenizer():
    return get_lang_class("id").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def it_tokenizer():
    return get_lang_class("it").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def ja_tokenizer():
    # Skip tests using this fixture if the optional dependency is missing
    pytest.importorskip("sudachipy")
    return get_lang_class("ja").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def ko_tokenizer():
    pytest.importorskip("natto")
    return get_lang_class("ko").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def lb_tokenizer():
    return get_lang_class("lb").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def lt_tokenizer():
    return get_lang_class("lt").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def mk_tokenizer():
    return get_lang_class("mk").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def ml_tokenizer():
    return get_lang_class("ml").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def nb_tokenizer():
    return get_lang_class("nb").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def ne_tokenizer():
    return get_lang_class("ne").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def nl_tokenizer():
    return get_lang_class("nl").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def pl_tokenizer():
    return get_lang_class("pl").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def pt_tokenizer():
    return get_lang_class("pt").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def ro_tokenizer():
    return get_lang_class("ro").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def ru_tokenizer():
    pytest.importorskip("pymorphy2")
    return get_lang_class("ru").Defaults.create_tokenizer()


@pytest.fixture
def ru_lemmatizer():
    pytest.importorskip("pymorphy2")
    return get_lang_class("ru").Defaults.create_lemmatizer()
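
# Note: unlike the session-scoped tokenizer fixtures, ru_lemmatizer uses the
# default function scope, so each test receives a fresh lemmatizer instance.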


@pytest.fixture(scope="session")
def sa_tokenizer():
    return get_lang_class("sa").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def sr_tokenizer():
    return get_lang_class("sr").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def sv_tokenizer():
    return get_lang_class("sv").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def th_tokenizer():
    pytest.importorskip("pythainlp")
    return get_lang_class("th").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def ti_tokenizer():
    return get_lang_class("ti").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def tr_tokenizer():
    return get_lang_class("tr").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def tr_vocab():
    return get_lang_class("tr").Defaults.create_vocab()


@pytest.fixture(scope="session")
def tt_tokenizer():
    return get_lang_class("tt").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def ky_tokenizer():
    return get_lang_class("ky").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def uk_tokenizer():
    pytest.importorskip("pymorphy2")
    pytest.importorskip("pymorphy2.lang")
    return get_lang_class("uk").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def ur_tokenizer():
    return get_lang_class("ur").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def yo_tokenizer():
    return get_lang_class("yo").Defaults.create_tokenizer()
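
# The Chinese tokenizer supports three word-segmentation setups (per the
# pkuseg PR, #5308): plain per-character splitting when both use_jieba and
# use_pkuseg are disabled, jieba (the default when installed), and pkuseg
# (enabled via pkuseg_model and use_pkuseg). The three fixtures below cover
# each setup.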


@pytest.fixture(scope="session")
def zh_tokenizer_char():
    return get_lang_class("zh").Defaults.create_tokenizer(
        config={"use_jieba": False, "use_pkuseg": False}
    )


@pytest.fixture(scope="session")
def zh_tokenizer_jieba():
    pytest.importorskip("jieba")
    return get_lang_class("zh").Defaults.create_tokenizer()


@pytest.fixture(scope="session")
def zh_tokenizer_pkuseg():
    pytest.importorskip("pkuseg")
    return get_lang_class("zh").Defaults.create_tokenizer(
        config={"pkuseg_model": "default", "use_jieba": False, "use_pkuseg": True}
    )


@pytest.fixture(scope="session")
def hy_tokenizer():
    return get_lang_class("hy").Defaults.create_tokenizer()