spaCy/spacy/tests/doc/test_morphanalysis.py
Adriane Boyd e962784531
Add Lemmatizer and simplify related components (#5848)
* Add Lemmatizer and simplify related components

* Add `Lemmatizer` pipe with `lookup` and `rule` modes using the
`Lookups` tables (a usage sketch follows this list).
* Reduce `Tagger` to a simple tagger that sets `Token.tag` (no pos or lemma)
* Reduce `Morphology` to only keep track of morph tags (no tag map, lemmatizer,
or morph rules)
* Remove lemmatizer from `Vocab`
* Adjust many many tests
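
For illustration only, a minimal sketch of how the new component might be added, assuming the released v3-style `nlp.add_pipe` config API and that the optional `spacy-lookups-data` package provides the English lemma tables; the example text is arbitrary:

    import spacy

    nlp = spacy.blank("en")
    # "lookup" mode only needs a lemma table; "rule" mode also uses
    # index/exceptions/rules tables plus POS tags set earlier in the pipeline
    nlp.add_pipe("lemmatizer", config={"mode": "lookup"})
    nlp.initialize()  # loads the required Lookups tables
    doc = nlp("She was reading")
    print([token.lemma_ for token in doc])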

Differences:

* No default lookup lemmas
* No special treatment of TAG required in `from_array` and similar methods
* Easier to modify labels in a `Tagger`
* No extra strings added from morphology / tag map

* Fix test

* Initial fix for Lemmatizer config/serialization

* Adjust init test to be more generic

* Adjust init test to force empty Lookups

* Add simple cache to rule-based lemmatizer

* Convert language-specific lemmatizers

Convert language-specific lemmatizers to component lemmatizers. Remove
previous lemmatizer class.

* Fix French and Polish lemmatizers

* Remove outdated UPOS conversions

* Update Russian lemmatizer init in tests

* Add minimal init/run tests for custom lemmatizers

* Add option to overwrite existing lemmas

* Update mode setting, lookup loading, and caching

* Make `mode` an immutable property
* Only enforce strict `load_lookups` for known supported modes
* Move caching into individual `_lemmatize` methods

* Implement `strict` handling when lang is not found in lookups

* Fix tables/lookups in make_lemmatizer

* Reallow provided lookups and allow for stricter checks

* Add lookups asset to all Lemmatizer pipe tests

* Rename lookups in lemmatizer init test

* Clean up merge

* Refactor lookup table loading

* Add helper `load_lemmatizer_lookups` that loads required and
optional lookups tables based on settings provided by a config.

Additional slight refactor of lookups:

* Add `Lookups.set_table` to set a table from a provided `Table`
* Reorder class definitions to be able to specify type as `Table`
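
As a hedged illustration of the refactored lookups API, assuming `Table.from_dict` exists alongside the new `Lookups.set_table`; the table name and entries are made up:

    from spacy.lookups import Lookups, Table

    # build a Table up front, then attach it to a Lookups container by name
    table = Table.from_dict({"geese": "goose", "was": "be"}, name="lemma_lookup")
    lookups = Lookups()
    lookups.set_table("lemma_lookup", table)
    assert lookups.get_table("lemma_lookup")["geese"] == "goose"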

* Move registry assets into test methods

* Refactor lookups tables config

Use class methods within `Lemmatizer` to provide the config for
particular modes and to load the lookups from a config.

* Add pipe and score to lemmatizer

* Simplify Tagger.score

* Add missing import

* Clean up imports and auto-format

* Remove unused kwarg

* Tidy up and auto-format

* Update docstrings for Lemmatizer

Additionally modify `is_base_form` API to take `Token` instead of
individual features.
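
A hedged sketch of what the new signature allows in a subclass, assuming the component is importable as `spacy.pipeline.Lemmatizer`; the class name and the condition are illustrative, not the actual English rules:

    from spacy.pipeline import Lemmatizer
    from spacy.tokens import Token

    class MyLemmatizer(Lemmatizer):
        def is_base_form(self, token: Token) -> bool:
            # the whole Token is available, so features can be read directly
            # instead of being passed in as univ_pos plus a morphology dict
            return token.pos_ == "VERB" and token.morph.get("VerbForm") == ["inf"]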

* Update docstrings

* Remove tag map values from Tagger.add_label
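
A hedged sketch of the simplified call, assuming the v3 "tagger" factory name; the label itself is arbitrary:

    import spacy

    nlp = spacy.blank("en")
    tagger = nlp.add_pipe("tagger")
    # labels are now plain strings; no tag map values dict accompanies them
    tagger.add_label("VBZ")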

* Update API docs

* Fix relative link in Lemmatizer API docs
2020-08-07 15:27:13 +02:00

import pytest


@pytest.fixture
def i_has(en_tokenizer):
    doc = en_tokenizer("I has")
    doc[0].morph_ = {"PronType": "prs"}
    doc[1].morph_ = {
        "VerbForm": "fin",
        "Tense": "pres",
        "Number": "sing",
        "Person": "three",
    }
    return doc


def test_token_morph_eq(i_has):
    assert i_has[0].morph is not i_has[0].morph
    assert i_has[0].morph == i_has[0].morph
    assert i_has[0].morph != i_has[1].morph


def test_token_morph_key(i_has):
    assert i_has[0].morph.key != 0
    assert i_has[1].morph.key != 0
    assert i_has[0].morph.key == i_has[0].morph.key
    assert i_has[0].morph.key != i_has[1].morph.key


def test_morph_props(i_has):
    assert i_has[0].morph.get("PronType") == ["prs"]
    assert i_has[1].morph.get("PronType") == []


def test_morph_iter(i_has):
    assert set(i_has[0].morph) == set(["PronType=prs"])
    assert set(i_has[1].morph) == set(
        ["Number=sing", "Person=three", "Tense=pres", "VerbForm=fin"]
    )


def test_morph_get(i_has):
    assert i_has[0].morph.get("PronType") == ["prs"]


def test_morph_set(i_has):
    assert i_has[0].morph.get("PronType") == ["prs"]
    # set by string
    i_has[0].morph_ = "PronType=unk"
    assert i_has[0].morph.get("PronType") == ["unk"]
    # set by string, fields are alphabetized
    i_has[0].morph_ = "PronType=123|NounType=unk"
    assert i_has[0].morph_ == "NounType=unk|PronType=123"
    # set by dict
    i_has[0].morph_ = {"AType": "123", "BType": "unk"}
    assert i_has[0].morph_ == "AType=123|BType=unk"
    # set by string with multiple values, fields and values are alphabetized
    i_has[0].morph_ = "BType=c|AType=b,a"
    assert i_has[0].morph_ == "AType=a,b|BType=c"
    # set by dict with multiple values, fields and values are alphabetized
    i_has[0].morph_ = {"AType": "b,a", "BType": "c"}
    assert i_has[0].morph_ == "AType=a,b|BType=c"


def test_morph_str(i_has):
    assert str(i_has[0].morph) == "PronType=prs"
    assert str(i_has[1].morph) == "Number=sing|Person=three|Tense=pres|VerbForm=fin"