mirror of https://github.com/explosion/spaCy.git
synced 2024-12-26 18:06:29 +03:00
7d8df69158
* Improve load_language_data helper
* WIP: Add Lookups implementation
* Start moving lemma data over to JSON
* WIP: move data over for more languages
* Convert more languages
* Fix lemmatizer fixtures in tests
* Finish conversion
* Auto-format JSON files
* Fix test for now
* Make sure tables are stored on instance
* Update docstrings
* Update docstrings and errors
* Update test
* Add Lookups.__len__
* Add serialization methods
* Add Lookups.remove_table
* Use msgpack for serialization to disk
* Fix file exists check
* Try using OrderedDict for everything
* Update .flake8 [ci skip]
* Try fixing serialization
* Update test_lookups.py
* Update test_serialize_vocab_strings.py
* Lookups / Tables now work

  This implements the stubs in the Lookups/Table classes. Currently this is in Cython but with no type declarations, so that could be improved.

* Add lookups to setup.py
* Actually add lookups pyx

  The previous commit added the old py file...

* Lookups work-in-progress
* Move from pyx back to py
* Add string-based lookups, fix serialization
* Update tests, language/lemmatizer to work with string lookups

  There are some outstanding issues here:
  - a pickling-related test fails due to the bloom filter
  - some custom lemmatizers (fr/nl at least) have issues

  More generally, there's a question of how to deal with the case where you have a string but want to use the lookup table. Currently the table allows access by string or id, but that's getting pretty awkward.

* Change lemmatizer lookup method to pass (orth, string)
* Fix token lookup
* Fix French lookup
* Fix lt lemmatizer test
* Fix Dutch lemmatizer
* Fix lemmatizer lookup test

  This was using a normal dict instead of a Table, so checks for the string instead of an integer key failed.

* Make uk/nl/ru lemmatizer lookup methods consistent

  The mentioned lemmatizers all have their own implementation of the `lookup` method, which accesses a `Lookups` table. The way that was called in `token.pyx` was changed, so this should be updated to have the same arguments as `lookup` in `lemmatizer.py` (specifically (orth/id, string)). Prior to this change tests weren't failing, but there would probably be issues with normal use of a model. More tests should probably be added.

  Additionally, the language-specific `lookup` implementations seem like they might not be needed, since they handle things like lower-casing that aren't actually language-specific.

* Make recently added Greek method compatible
* Remove redundant class/method

  Leftovers from a merge not cleaned up adequately.
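The commit message above describes a table that can be keyed by either a hashed ID or a raw string, with the lemmatizer's `lookup` receiving `(orth, string)` and falling back to the original string when no entry exists. A minimal self-contained sketch of that pattern follows; `MiniTable`, `strhash`, and `lookup` here are illustrative stand-ins, not spaCy's actual classes or hashing.

```python
def strhash(key):
    # Stand-in for spaCy's string-to-ID hashing: integers pass through,
    # strings are hashed to an integer key.
    return key if isinstance(key, int) else hash(key)

class MiniTable:
    """Illustrative table accepting either raw strings or integer IDs."""

    def __init__(self, data=None):
        self._map = {}
        for key, value in (data or {}).items():
            self.set(key, value)

    def set(self, key, value):
        # Everything is stored under the integer key.
        self._map[strhash(key)] = value

    def get(self, key, default=None):
        return self._map.get(strhash(key), default)

def lookup(table, orth, string):
    # Mirrors the (orth, string) lookup described above: try the ID,
    # and fall back to returning the surface string unchanged.
    return table.get(orth, string)

table = MiniTable(data={"dogs": "dog", "mice": "mouse"})
assert lookup(table, strhash("dogs"), "dogs") == "dog"        # known word
assert lookup(table, strhash("dogses"), "dogses") == "dogses"  # unknown word
```

The fallback-to-surface-string behavior is what the test file below checks with the out-of-vocabulary token "dogses".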
40 lines
901 B
Python
# coding: utf-8
from __future__ import unicode_literals

import pytest
from spacy.vocab import Vocab
from spacy.tokens import Doc
from spacy.lemmatizer import Lemmatizer
from spacy.lookups import Table


@pytest.fixture
def lemmatizer():
    lookup = Table(data={"dogs": "dog", "boxen": "box", "mice": "mouse"})
    return Lemmatizer(lookup=lookup)


@pytest.fixture
def vocab(lemmatizer):
    return Vocab(lemmatizer=lemmatizer)


def test_empty_doc(vocab):
    doc = Doc(vocab)
    assert len(doc) == 0


def test_single_word(vocab):
    doc = Doc(vocab, words=["a"])
    assert doc.text == "a "
    doc = Doc(vocab, words=["a"], spaces=[False])
    assert doc.text == "a"


def test_lookup_lemmatization(vocab):
    doc = Doc(vocab, words=["dogs", "dogses"])
    assert doc[0].text == "dogs"
    assert doc[0].lemma_ == "dog"
    assert doc[1].text == "dogses"
    assert doc[1].lemma_ == "dogses"