spaCy/spacy/lang
Paul O'Leary McCann 410fb7ee43
Add Japanese Model (#5544)
* Add more rules to deal with Japanese UD mappings

Japanese UD rules sometimes assign different UD tags to tokens with the
same underlying POS tag. The UD spec indicates these cases should be
disambiguated using the output of a tool called "comainu", but rules
alone are enough to get the right result.

These rules are taken from Ginza at time of writing, see #3756.
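As one concrete example of this kind of rule, the Unidic tag 連体詞 (adnominal) corresponds to several UD tags depending on the surface form. A minimal sketch, with hypothetical names and an abbreviated word list, not the actual rules from this PR:

```python
# Hypothetical sketch: one Unidic tag (連体詞, "adnominal") maps to
# different UD tags depending on the word itself.
DEMONSTRATIVES = {"この", "その", "あの", "どの"}  # "this", "that", ...

def resolve_rentaishi(surface):
    # Demonstrative adnominals are DET in UD Japanese; other
    # adnominals such as 大きな ("big") are treated as ADJ.
    if surface in DEMONSTRATIVES:
        return "DET"
    return "ADJ"
```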

* Add new tags from GSD

These are a few rare tags that aren't in Unidic but do appear in the GSD data.

* Add basic Japanese sentence segmentation

This code is taken from Ginza again.

* Add sentencizer quote handling

We could probably add more paired characters, but this will do for now.
Also includes some tests.
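The idea can be sketched as follows (hypothetical code with an illustrative character set, not the actual Ginza-derived implementation): a sentence-ending character only splits when we are not inside a paired quote, and a closing character with no matching opener is simply ignored.

```python
# Hypothetical sketch of quote-aware sentence splitting: 。/！/？ only
# end a sentence when no paired quote is currently open.
OPENERS = {"「": "」", "『": "』", "（": "）"}
CLOSERS = set(OPENERS.values())
SENT_ENDS = {"。", "！", "？"}

def split_sentences(text):
    sents, start, stack = [], 0, []
    for i, ch in enumerate(text):
        if ch in OPENERS:
            stack.append(OPENERS[ch])
        elif ch in CLOSERS:
            # tolerate a closing character with no matching opener
            if stack and stack[-1] == ch:
                stack.pop()
        elif ch in SENT_ENDS and not stack:
            sents.append(text[start:i + 1])
            start = i + 1
    if start < len(text):
        sents.append(text[start:])
    return sents
```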

* Replace fugashi with SudachiPy

* Modify tag format to match GSD annotations

Some of the tests still need to be updated, but I want to get this up
for testing training.

* Deal with the case of closing punctuation without an opening counterpart

* refactor resolve_pos()

* change tag field separator from "," to "-"

* add TAG_ORTH_MAP

* add TAG_BIGRAM_MAP
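Roughly, these are two override layers applied after the unigram tag map: one keyed on the surface form, one on (previous tag, current tag) pairs. A hypothetical sketch with illustrative entries, not the real tables:

```python
# Hypothetical sketch (entries illustrative): TAG_ORTH_MAP overrides
# the UD tag for specific surface forms, TAG_BIGRAM_MAP for specific
# (previous tag, current tag) pairs. Tag fields are "-"-separated.
TAG_ORTH_MAP = {("連体詞", "この"): "DET"}
TAG_BIGRAM_MAP = {("名詞-普通名詞-サ変可能", "動詞-非自立可能"): "AUX"}

def resolve(prev_tag, tag, surface, default):
    # orth-specific overrides win over bigram overrides
    if (tag, surface) in TAG_ORTH_MAP:
        return TAG_ORTH_MAP[(tag, surface)]
    return TAG_BIGRAM_MAP.get((prev_tag, tag), default)
```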

* revise rules for 連体詞

* improve POS accuracy by about 2%

* add syntax_iterator.py (not mature yet)

* improve syntax_iterators.py

* add phrases that include nouns and drop NPs that consist of STOP_WORDS

* First take at noun chunks

This works in many situations but still has issues in others.

If the start of a subtree has no noun, then nested phrases can be
generated.

    また行きたい、そんな気持ちにさせてくれるお店です。
    [そんな気持ち, また行きたい、そんな気持ちにさせてくれるお店]

For some reason て gets included sometimes. Not sure why.

    ゲンに連れ添って円盤生物を調査するパートナーとなる。
    [て円盤生物, ...]

Some phrases that look like they should be split are grouped together;
not entirely sure that's wrong. This whole thing becomes one chunk:

    道の駅遠山郷北側からかぐら大橋南詰現道交点までの1.060kmのみ開通済み
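The overall shape of such an iterator can be sketched with plain tuples (illustrative pseudologic, not the actual syntax_iterators.py): each noun yields a span covering its contiguous left-side dependents, without absorbing function words into the span.

```python
# Illustrative sketch: each token is (index, coarse POS, head index);
# a chunk is a noun plus its contiguous left-side dependents,
# with function words kept out of the span.
NOUNY = {"NOUN", "PROPN", "PRON"}
FUNCTION_POS = {"ADP", "AUX", "PUNCT", "SCONJ", "PART"}

def noun_chunks(tokens):
    for i, pos, head in tokens:
        if pos not in NOUNY:
            continue
        start = i
        # extend left over contiguous dependents of this noun
        while (start > 0 and tokens[start - 1][2] == i
               and tokens[start - 1][1] not in FUNCTION_POS):
            start -= 1
        yield (start, i + 1)
```

For a toy parse of そんな/ADJ 気持ち/NOUN に/ADP させる/VERB, this yields a single chunk covering そんな気持ち.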

* Use new generic get_words_and_spaces

The new get_words_and_spaces function is simpler than what was used in
Japanese, so it's good to be able to switch to it. However, there was an
issue: the new function works on the raw text only, so POS info could
get out of sync. Fixing this required a small change to the way dtokens
(tokens with POS and lemma info) were generated.

Specifically, multiple extraneous spaces now become a single token, so
when generating dtokens, multiple space tokens may need to be created
in a row.
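The contract being relied on can be illustrated with a toy re-implementation (hypothetical code, not the real helper): a single space after a token is stored as a boolean flag, while any longer run of whitespace becomes one extra whitespace token.

```python
# Hypothetical re-implementation of the aligner's contract: returns
# (words, spaces) where spaces[i] is True if words[i] is followed by
# a single space; longer whitespace runs become extra tokens.
def words_and_spaces(words, text):
    out_words, out_spaces, i = [], [], 0
    for word in words:
        j = text.index(word, i)  # raises ValueError on a mismatch
        if j > i:
            # whitespace beyond the single-space flag: one extra token
            out_words.append(text[i:j])
            out_spaces.append(False)
        out_words.append(word)
        i = j + len(word)
        if i < len(text) and text[i] == " ":
            out_spaces.append(True)
            i += 1
        else:
            out_spaces.append(False)
    return out_words, out_spaces
```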

* Fix noun_chunks, should be working now

* Fix some tests, add naughty strings tests

Some of the existing tests changed because the tokenization mode of
Sudachi changed to the more fine-grained A mode.

Sudachi also has issues with some strings, so this adds a test against
the naughty strings.

* Remove empty Sudachi tokens

Without this, zero-length tokens are created, which cause errors in
spaCy's internal processing.
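The fix amounts to a filter over the token stream before the Doc is built; a minimal sketch with a hypothetical dtoken shape:

```python
# Minimal sketch (dtoken shape hypothetical): drop zero-length tokens
# before constructing the Doc, since zero-width tokens are invalid.
def drop_empty(dtokens):
    # each dtoken is (surface, tag, lemma); keep non-empty surfaces
    return [t for t in dtokens if t[0]]
```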

* Add yield_bunsetu back in as a separate piece of code

Co-authored-by: Hiroshi Matsuda <40782025+hiroshi-matsuda-rit@users.noreply.github.com>
Co-authored-by: hiroshi <hiroshi_matsuda@megagon.ai>
2020-06-04 19:15:43 +02:00
af 💫 Add base Language classes for more languages (#3276) 2019-02-15 01:31:19 +11:00
ar Add writing_system to ArabicDefaults (experimental) 2019-03-11 14:22:23 +01:00
bg Update examples and languages.json [ci skip] 2019-09-15 17:56:40 +02:00
bn Move lookup tables out of the core library (#4346) 2019-10-01 00:01:27 +02:00
ca Move lookup tables out of the core library (#4346) 2019-10-01 00:01:27 +02:00
cs 💫 Add base Language classes for more languages (#3276) 2019-02-15 01:31:19 +11:00
da Tidy up and auto-format 2020-05-21 14:14:01 +02:00
de Rename argument: doc_or_span/obj -> doclike (#5463) 2020-05-21 15:17:39 +02:00
el span / noun chunk has +1 because end is exclusive 2020-05-21 19:56:56 +02:00
en span / noun chunk has +1 because end is exclusive 2020-05-21 19:56:56 +02:00
es Spanish tokenizer exception and examples improvement (#5531) 2020-06-01 18:18:34 +02:00
et 💫 Add base Language classes for more languages (#3276) 2019-02-15 01:31:19 +11:00
eu Tidy up and auto-format 2020-03-25 12:28:12 +01:00
fa span / noun chunk has +1 because end is exclusive 2020-05-21 19:56:56 +02:00
fi add two abbreviations and some additional unit tests (#5040) 2020-02-22 14:12:32 +01:00
fr corrected issue #5524 changed <U+009C> 'STRING TERMINATOR' for <U+0153> 'LATIN SMALL LIGATURE OE' (#5526) 2020-05-31 22:08:12 +02:00
ga 💫 Tidy up and auto-format .py files (#2983) 2018-11-30 17:03:03 +01:00
gu Tidy up and auto-format 2020-05-21 14:14:01 +02:00
he Auto-format [ci skip] 2019-03-11 17:10:50 +01:00
hi Fix example sentences in Hindi for grammatical errors (#4343) 2019-09-30 23:32:49 +02:00
hr Move lookup tables out of the core library (#4346) 2019-10-01 00:01:27 +02:00
hu Add tokenizer option for token match with affixes 2020-05-05 10:35:33 +02:00
hy Tidy up and auto-format 2020-05-21 14:14:01 +02:00
id span / noun chunk has +1 because end is exclusive 2020-05-21 19:56:56 +02:00
is 💫 Add base Language classes for more languages (#3276) 2019-02-15 01:31:19 +11:00
it Tidy up and auto-format 2020-03-25 12:28:12 +01:00
ja Add Japanese Model (#5544) 2020-06-04 19:15:43 +02:00
kn Add kannada examples (#5162) 2020-03-29 13:54:42 +02:00
ko Very minor issues in Korean example sentences (#5446) 2020-05-17 13:43:34 +02:00
lb Reduce stored lexemes data, move feats to lookups (#5238) 2020-05-19 15:59:14 +02:00
lij Tidy up and auto-format 2020-03-25 12:28:12 +01:00
lt Tidy up and auto-format 2020-03-25 12:28:12 +01:00
lv 💫 Add base Language classes for more languages (#3276) 2019-02-15 01:31:19 +11:00
ml Tidy up and auto-format 2020-05-21 14:14:01 +02:00
mr Tidy up [ci skip] 2019-06-12 13:38:23 +02:00
nb span / noun chunk has +1 because end is exclusive 2020-05-21 19:56:56 +02:00
nl Improve tokenization for UD Dutch corpora (#5259) 2020-04-06 13:18:07 +02:00
pl Fix Polish lemmatizer for deserialized models 2020-05-26 09:56:12 +02:00
pt Reduce stored lexemes data, move feats to lookups (#5238) 2020-05-19 15:59:14 +02:00
ro Improved Romanian tokenization for UD RRT (#5036) 2020-02-19 16:15:59 +01:00
ru Reduce stored lexemes data, move feats to lookups (#5238) 2020-05-19 15:59:14 +02:00
si 💫 Tidy up and auto-format .py files (#2983) 2018-11-30 17:03:03 +01:00
sk Tidy up and auto-format 2020-03-25 12:28:12 +01:00
sl 💫 Add base Language classes for more languages (#3276) 2019-02-15 01:31:19 +11:00
sq Update languages and examples (see #1107) 2019-06-26 16:19:17 +02:00
sr Reduce stored lexemes data, move feats to lookups (#5238) 2020-05-19 15:59:14 +02:00
sv span / noun chunk has +1 because end is exclusive 2020-05-21 19:56:56 +02:00
ta Reduce stored lexemes data, move feats to lookups (#5238) 2020-05-19 15:59:14 +02:00
te 💫 Tidy up and auto-format .py files (#2983) 2018-11-30 17:03:03 +01:00
th Reduce stored lexemes data, move feats to lookups (#5238) 2020-05-19 15:59:14 +02:00
tl Move lookup tables out of the core library (#4346) 2019-10-01 00:01:27 +02:00
tr Move lookup tables out of the core library (#4346) 2019-10-01 00:01:27 +02:00
tt Tidy up and auto-format 2019-08-20 17:36:34 +02:00
uk Update Ukrainian lemmatizer with new lookups (#4359) 2019-10-02 12:04:06 +02:00
ur Tidy up and auto-format 2020-05-21 14:14:01 +02:00
vi 💫 Tidy up and auto-format .py files (#2983) 2018-11-30 17:03:03 +01:00
xx Minor updates to language example sentences (#4608) 2019-11-07 22:34:58 +01:00
yo Adding support for Yoruba Language (#4614) 2019-12-21 14:11:50 +01:00
zh Map NR to PROPN (#5512) 2020-05-26 22:30:53 +02:00
__init__.py Remove imports in /lang/__init__.py 2017-05-08 23:58:07 +02:00
char_classes.py Add CJK to character classes (#4884) 2020-01-08 16:50:19 +01:00
lex_attrs.py Reduce stored lexemes data, move feats to lookups (#5238) 2020-05-19 15:59:14 +02:00
norm_exceptions.py Update norm_exceptions.py (#3778) 2019-05-27 11:52:52 +02:00
punctuation.py Allow period as suffix following punctuation (#4248) 2019-09-09 19:19:22 +02:00
tag_map.py 💫 Tidy up and auto-format .py files (#2983) 2018-11-30 17:03:03 +01:00
tokenizer_exceptions.py Rename to url_match 2020-05-22 12:41:03 +02:00