spaCy/spacy/tests/doc
Paul O'Leary McCann 7d8df69158 Bloom-filter backed Lookup Tables (#4268)
* Improve load_language_data helper

* WIP: Add Lookups implementation

* Start moving lemma data over to JSON

* WIP: move data over for more languages

* Convert more languages

* Fix lemmatizer fixtures in tests

* Finish conversion

* Auto-format JSON files

* Fix test for now

* Make sure tables are stored on instance

* Update docstrings

* Update docstrings and errors

* Update test

* Add Lookups.__len__

* Add serialization methods

* Add Lookups.remove_table

* Use msgpack for serialization to disk
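
A minimal sketch of the msgpack round trip, assuming the `srsly`
helper library spaCy uses for msgpack I/O; the filename and table
contents here are hypothetical:

```python
import srsly

# Hypothetical data: one named lookup table of lemma exceptions.
tables = {"lemma_lookup": {"feet": "foot", "geese": "goose"}}

# Write the tables to disk as msgpack and read them back.
srsly.write_msgpack("lookups.bin", tables)
assert srsly.read_msgpack("lookups.bin") == tables
```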

* Fix file exists check

* Try using OrderedDict for everything

* Update .flake8 [ci skip]

* Try fixing serialization

* Update test_lookups.py

* Update test_serialize_vocab_strings.py

* Lookups / Tables now work

This implements the stubs in the Lookups/Table classes. Currently this
is in Cython but with no type declarations, so that could be improved.
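
As a rough illustration of the intended design, here is a pure-Python
sketch; the real tables are backed by a proper bloom filter (a C-level
one), whereas this toy version packs the filter into a single 63-bit
mask with one hash function, and any names beyond the commits above
are assumptions:

```python
from collections import OrderedDict

class Table(OrderedDict):
    """Dict-like lookup table with a toy bloom filter for fast
    negative membership checks."""

    def __init__(self, name=None):
        OrderedDict.__init__(self)
        self.name = name
        self._bloom = 0  # one 63-bit word standing in for the filter

    def __setitem__(self, key, value):
        # Record the key in the filter, then store normally.
        self._bloom |= 1 << (hash(key) % 63)
        OrderedDict.__setitem__(self, key, value)

    def __contains__(self, key):
        # If the key's bit is unset, it was definitely never added,
        # so the dict probe can be skipped entirely.
        if not self._bloom & (1 << (hash(key) % 63)):
            return False
        return OrderedDict.__contains__(self, key)

class Lookups(object):
    """Container owning named tables, stored on the instance."""

    def __init__(self):
        self._tables = OrderedDict()

    def add_table(self, name, data=None):
        table = Table(name=name)
        for key, value in (data or {}).items():
            table[key] = value
        self._tables[name] = table
        return table

    def get_table(self, name):
        return self._tables[name]

    def remove_table(self, name):
        return self._tables.pop(name)

    def __len__(self):
        return len(self._tables)
```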

* Add lookups to setup.py

* Actually add lookups pyx

The previous commit added the old py file...

* Lookups work-in-progress

* Move from pyx back to py

* Add string based lookups, fix serialization

* Update tests, language/lemmatizer to work with string lookups

There are some outstanding issues here:

- a pickling-related test fails due to the bloom filter
- some custom lemmatizers (fr/nl at least) have issues

More generally, there's a question of how to deal with the case where
you have a string but want to use the lookup table. Currently the table
allows access by string or id, but that's getting pretty awkward.
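
One possible shape for that dual access, assuming spaCy's
`get_string_id` helper (which hashes a string to the 64-bit id the
`StringStore` uses) so string and id keys land on the same entry:

```python
from spacy.strings import get_string_id

class Table(dict):
    """Sketch: normalize string keys to their hash on the way in."""

    def __setitem__(self, key, value):
        if isinstance(key, str):
            key = get_string_id(key)
        dict.__setitem__(self, key, value)

    def __getitem__(self, key):
        if isinstance(key, str):
            key = get_string_id(key)
        return dict.__getitem__(self, key)

table = Table()
table["feet"] = "foot"
# Both the string and its id address the same entry.
assert table["feet"] == "foot"
assert table[get_string_id("feet")] == "foot"
```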

* Change lemmatizer lookup method to pass (orth, string)

* Fix token lookup

* Fix French lookup

* Fix lt lemmatizer test

* Fix Dutch lemmatizer

* Fix lemmatizer lookup test

This was using a normal dict instead of a Table, so checks failed: the
dict was keyed by the raw string where an integer key was expected.
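
Concretely, with the key-normalizing `Table` sketched above, the
mismatch looks like this:

```python
# A plain dict keeps the raw string key, while the Table stores the
# integer id, so a check against the integer only succeeds on the Table.
plain = {"feet": "foot"}
table = Table()
table["feet"] = "foot"

key = get_string_id("feet")
assert key in table       # stored under the integer id
assert key not in plain   # the plain dict is still keyed by string
```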

* Make uk/nl/ru lemmatizer lookup methods consistent

The lemmatizers mentioned above each have their own implementation of
the `lookup` method, which accesses a `Lookups` table. The way it is
called in `token.pyx` was changed, so these implementations should be
updated to take the same arguments as `lookup` in `lemmatizer.py`
(specifically `(orth/id, string)`).

Prior to this change tests weren't failing, but there would probably
be issues with normal use of a model. More tests should probably be
added.

Additionally, the language-specific `lookup` implementations seem like
they might not be needed, since they handle things like lower-casing
that aren't actually language specific.
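
A sketch of that shared signature, with assumed names (`Lemmatizer`
base class, a `lemma_lookup` table); the per-language overrides may
differ, but each should accept the same `(orth, string)` pair that
`token.pyx` now passes:

```python
class Lemmatizer(object):
    def __init__(self, lookups):
        self.lookups = lookups

    def lookup(self, orth, string):
        # Generic normalization such as lower-casing could live here
        # rather than in each language-specific override.
        table = self.lookups.get_table("lemma_lookup")
        if orth in table:
            return table[orth]
        # Fall back to returning the string unchanged.
        return string
```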

* Make recently added Greek method compatible

* Remove redundant class/method

Leftovers from a merge not cleaned up adequately.
2019-09-12 17:26:11 +02:00
__init__.py Rename "tokens" tests to "doc" 2017-01-11 18:59:01 +01:00
test_add_entities.py Tidy up and format remaining files 2018-11-30 17:43:08 +01:00
test_array.py Un-xfail test 2019-03-10 15:51:15 +01:00
test_creation.py Bloom-filter backed Lookup Tables (#4268) 2019-09-12 17:26:11 +02:00
test_doc_api.py Add Doc.lang and Doc.lang_ 2019-03-11 14:21:40 +01:00
test_morphanalysis.py Tidy up and auto-format 2019-03-08 13:28:53 +01:00
test_pickle_doc.py Tidy up and format remaining files 2018-11-30 17:43:08 +01:00
test_retokenize_merge.py Tidy up and auto-format 2019-09-11 11:38:22 +02:00
test_retokenize_split.py 💫 Support lexical attributes in retokenizer attrs (closes #2390) (#3325) 2019-02-24 21:13:51 +01:00
test_span.py Add util.filter_spans helper (#3686) 2019-05-08 02:33:40 +02:00
test_to_json.py 💫 Add token match pattern validation via JSON schemas (#3244) 2019-02-13 01:47:26 +11:00
test_token_api.py Fix token.conjuncts (closes #795) (#3392) 2019-03-11 17:05:45 +01:00
test_underscore.py 💫 Improve introspection of custom extension attributes (#3729) 2019-05-12 00:53:11 +02:00