spaCy/spacy/cli
adrianeboyd a5cd203284
Reduce stored lexemes data, move feats to lookups (#5238)
* Reduce stored lexemes data, move feats to lookups

* Move non-derivable lexeme features (`norm / cluster / prob`) to
`spacy-lookups-data` as lookups (see the sketch after this list)
  * Get/set `norm` in both lookups and `LexemeC`, serialize in lookups
  * Remove `cluster` and `prob` from `LexemeC`, get/set/serialize in
    lookups only
* Remove serialization of lexemes data as `vocab/lexemes.bin`
  * Remove `SerializedLexemeC`
  * Remove `Lexeme.to_bytes/from_bytes`
* Modify normalization exception loading:
  * Always create `Vocab.lookups` table `lexeme_norm` for
    normalization exceptions
  * Load base exceptions from `lang.norm_exceptions`, but load
    language-specific exceptions from lookups
  * Set `lex_attr_getter[NORM]` including new lookups table in
    `BaseDefaults.create_vocab()` and when deserializing `Vocab`
* Remove all cached lexemes when deserializing the vocab so that the new
  normalizations override any existing ones (this replaces the previous
  step, which overwrote all lexeme data with the deserialized data)
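
A minimal sketch of what the lookups-backed norms look like from user code,
assuming spaCy v2.3-style APIs and that `spacy-lookups-data` is installed so
the language-specific entries (e.g. "gonna") are actually present:

```python
import spacy

nlp = spacy.blank("en")
lookups = nlp.vocab.lookups

# The `lexeme_norm` table is now always created for normalization exceptions.
if lookups.has_table("lexeme_norm"):
    norm_table = lookups.get_table("lexeme_norm")
    # Dict-like access; the value is only present if the installed data provides it.
    print(norm_table.get("gonna"))

# `Lexeme.norm_` still works and reflects the lookups-backed NORM getter.
print(nlp.vocab["gonna"].norm_)
```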

* Skip English normalization test

Skip English normalization test because the data is now in
`spacy-lookups-data`.
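
A hedged illustration of how such a test can be skipped with pytest (the test
name and reason below are illustrative, not the actual ones in the spaCy test
suite):

```python
import pytest

@pytest.mark.skip(reason="norm exception data moved to spacy-lookups-data")
def test_en_norm_exceptions():
    ...
```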

* Remove norm exceptions

Moved to spacy-lookups-data.

* Move norm exceptions test to spacy-lookups-data

* Load extra lookups from spacy-lookups-data lazily

Load extra lookups (currently for cluster and prob) lazily from the
entry point `lg_extra` as `Vocab.lookups_extra`.
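
A rough sketch of what this means for user code, assuming the extra tables are
provided by `spacy-lookups-data` and that `lexeme_prob` is the table name (the
model name and table name here are illustrative assumptions):

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # any model packaged with extra lookups

# Extra lookups live on Vocab.lookups_extra and are only loaded when needed.
extra = nlp.vocab.lookups_extra
if extra.has_table("lexeme_prob"):  # assumed table name
    prob_table = extra.get_table("lexeme_prob")
    print(prob_table.get("the", 0.0))

# Lexeme.prob and Lexeme.cluster are now served from these lookups instead of
# values stored on LexemeC.
print(nlp.vocab["the"].prob)
```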

* Skip creating lexeme cache on load

To improve model loading times, do not create the full lexeme cache when
loading. The lexemes will be created on demand when processing.
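
A small sketch of the observable effect (the model name is an example; exact
counts depend on the model and data):

```python
import spacy

nlp = spacy.load("en_core_web_sm")
print(len(nlp.vocab))  # small right after loading: no precomputed lexeme cache

doc = nlp("The quick brown fox jumps over the lazy dog.")
print(len(nlp.vocab))  # larger: lexemes were created on demand during processing
```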

* Identify numeric values in Lexeme.set_attrs()

With the special case for `PROB` removed, also identify `float` values so
they are not converted through the `StringStore`.
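
A simplified sketch of the kind of check described here (plain Python, not the
actual Cython implementation in `Lexeme.set_attrs()`):

```python
from spacy.strings import StringStore

strings = StringStore()

def coerce_attr_value(value):
    # Numeric values (e.g. PROB-style floats) are used as-is; everything else
    # is interned through the StringStore and replaced by its 64-bit hash.
    if isinstance(value, (int, float)):
        return value
    return strings.add(value)

print(coerce_attr_value(-8.5))    # float passes through unchanged
print(coerce_attr_value("NOUN"))  # string becomes a hash ID
```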

* Skip lexeme cache init in from_bytes

* Unskip and update lookups tests for Python 3.6+

* Update vocab pickle to include lookups_extra
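
A hedged sketch of the round-trip this covers, assuming a fresh `Vocab`
initializes `lookups_extra` to an empty `Lookups` (the table name and contents
are illustrative):

```python
import pickle
from spacy.vocab import Vocab

vocab = Vocab()
vocab.lookups_extra.add_table("lexeme_cluster", {"the": 2})  # illustrative

# lookups_extra should survive the pickle round-trip alongside lookups.
vocab2 = pickle.loads(pickle.dumps(vocab))
assert vocab2.lookups_extra.has_table("lexeme_cluster")
```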

* Update vocab serialization tests

Check strings rather than lexemes, since lexemes are no longer initialized
automatically, and account for the addition of "_SP".
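
A hedged sketch of the adjusted check (mirroring the updated test with
illustrative strings; the exact extra entries can vary by version):

```python
from spacy.vocab import Vocab

strings = ["apple", "orange"]
vocab1 = Vocab(strings=strings)
vocab2 = Vocab().from_bytes(vocab1.to_bytes())

# Compare string store contents rather than lexemes; "_SP" is added via the
# default tag map.
assert all(s in vocab2.strings for s in strings + ["_SP"])
print(sorted(vocab2.strings))
```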

* Re-skip lookups test because of Python 3.5

* Skip PROB/float values in Lexeme.set_attrs

* Convert is_oov from lexeme flag to lex in vectors

Instead of being stored as a lexeme flag, `is_oov` now reports whether the
lexeme has a vector.
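
A short sketch of the new behavior ("en_core_web_md" is just an example of a
model with vectors):

```python
import spacy

nlp = spacy.load("en_core_web_md")
for word in ["apple", "afskfsd"]:
    lexeme = nlp.vocab[word]
    # With this change, is_oov is expected to equal `not lexeme.has_vector`.
    print(word, lexeme.has_vector, lexeme.is_oov)
```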

Co-authored-by: Matthew Honnibal <honnibal+gh@gmail.com>
2020-05-19 15:59:14 +02:00
converters Fix for Issue 4665 - conllu2json (#4953) 2020-02-03 13:01:48 +01:00
__init__.py Move UD scripts to bin 2019-03-20 01:19:34 +01:00
_schemas.py Store JSON schemas in Python and tidy up (#3235) 2019-02-07 19:44:31 +11:00
convert.py Auto-format [ci skip] 2019-10-24 16:21:08 +02:00
debug_data.py Various updates/additions to CLI scripts (#5362) 2020-04-29 12:56:46 +02:00
download.py Use latest wasabi 2019-11-04 02:38:45 +01:00
evaluate.py Various updates/additions to CLI scripts (#5362) 2020-04-29 12:56:46 +02:00
info.py Use latest wasabi 2019-11-04 02:38:45 +01:00
init_model.py Reduce stored lexemes data, move feats to lookups (#5238) 2020-05-19 15:59:14 +02:00
link.py Use latest wasabi 2019-11-04 02:38:45 +01:00
package.py Use latest wasabi 2019-11-04 02:38:45 +01:00
pretrain.py add tok2vec parameters to train script to facilitate init_tok2vec (#5021) 2020-02-16 17:16:41 +01:00
profile.py Restore tqdm imports (#4804) 2019-12-16 13:12:19 +01:00
train.py Reduce stored lexemes data, move feats to lookups (#5238) 2020-05-19 15:59:14 +02:00
validate.py Use latest wasabi 2019-11-04 02:38:45 +01:00