spaCy/spacy/lang/ja
Paul O'Leary McCann 1ee6541ab0
Moving Japanese tokenizer extra info to Token.morph (#8977)
* Use morph for extra Japanese tokenizer info

Previously, Japanese tokenizer info that didn't correspond to Token
fields was put in user data. Since spaCy core should avoid touching user
data, this moves most information to the Token.morph attribute. It also
adds the normalized form, which wasn't exposed before.
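
As a rough sketch (assuming a blank Japanese pipeline with SudachiPy and a
Sudachi dictionary installed), the relocated info can now be read straight
off the tokens; the exact feature names on `Token.morph` depend on the
tokenizer output:

```python
import spacy

# A blank "ja" pipeline uses the SudachiPy-based Japanese tokenizer.
nlp = spacy.blank("ja")
doc = nlp("日本語の文章を解析します。")
for token in doc:
    # Extra tokenizer output (e.g. reading/inflection features) is now
    # stored on token.morph instead of in Doc.user_data.
    print(token.text, token.morph)
```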

The subtokens, which are a list of full tokens, are still added to user
data, except with the default tokenizer granularity. With the default
tokenizer settings the subtokens are all None, so in this case the user
data is simply not set.
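
A hedged sketch of reading the subtokens under a non-default granularity; the
`split_mode` setting is part of the Japanese tokenizer config, while the
`"sub_tokens"` user data key is an assumption here and worth double-checking:

```python
from spacy.lang.ja import Japanese

# Use a coarser split mode; with the default settings the subtoken lists
# are all None and no user data entry is created at all.
nlp = Japanese.from_config({"nlp": {"tokenizer": {"split_mode": "C"}}})
doc = nlp("選挙管理委員会")
# Assumed user data key for the lists of full subtokens.
print(doc.user_data.get("sub_tokens"))
```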

* Update tests

Also adds a new test for norm data.

* Update docs

* Add Japanese morphologizer factory

Set the default to `extend=True` so that the morphologizer does not
clobber the values set by the tokenizer.
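
As an illustration only (the factory registered here should already apply this
default for Japanese pipelines), passing the setting explicitly shows the
intended behavior:

```python
import spacy

nlp = spacy.blank("ja")
# extend=True merges predicted features into the existing token.morph
# values instead of replacing them, so tokenizer-set features are kept.
nlp.add_pipe("morphologizer", config={"extend": True})
```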

* Use the norm_ field for normalized forms

Before this commit, normalized forms were put in the "norm" field in the
morph attributes. I am not sure why I did that instead of using the
token's norm_ field; I think I just forgot about it.
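
A minimal check of the change, assuming a blank `ja` pipeline: the normalized
form comes back from `Token.norm_`, and `Token.morph` no longer carries a
`norm` feature:

```python
import spacy

nlp = spacy.blank("ja")
doc = nlp("これはテストです。")
for token in doc:
    # The Sudachi normalized form now lives on the dedicated norm_
    # attribute; the old "norm" morph feature is no longer set.
    assert token.morph.get("norm") == []
    print(token.text, token.norm_)
```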

* Skip test if sudachipy is not installed

* Fix import

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
2021-10-01 19:19:26 +02:00
__init__.py Moving Japanese tokenizer extra info to Token.morph (#8977) 2021-10-01 19:19:26 +02:00
examples.py Tidy up and auto-format 2020-02-18 15:38:18 +01:00
stop_words.py Drop Python 2.7 and 3.5 (#4828) 2019-12-22 01:53:56 +01:00
syntax_iterators.py Tidy up and move noun_chunks, token_match, url_match 2020-07-22 22:18:46 +02:00
tag_bigram_map.py Tidy up and auto-format 2020-06-21 22:38:04 +02:00
tag_map.py Merge branch 'develop' into master-tmp 2020-06-20 15:52:00 +02:00
tag_orth_map.py Tidy up and auto-format 2020-06-21 22:38:04 +02:00