* Added Examples for Tamil Sentences
#### Description
This PR adds example sentences for the Tamil language, which were missing, as described in issue #1107
#### Type of Change
This is an enhancement.
* Accepting spaCy Contributor Agreement
* Signed on my behalf as an individual
* Use `config` dict for tokenizer settings
* Add serialization of split mode setting
* Add tests for tokenizer split modes and serialization of split mode
setting
Based on #5561
* Add more rules to deal with Japanese UD mappings
Japanese UD rules sometimes give different UD tags to tokens with the
same underlying POS tag. The UD spec indicates these cases should be
disambiguated using the output of a tool called "comainu", but rules are
enough to get the right result.
These rules are taken from Ginza at time of writing, see #3756.
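As a rough, hypothetical sketch of what such rules look like (the tag strings and the disambiguation condition below are simplified examples, not the actual mapping from spaCy or Ginza):

```python
# Hypothetical, simplified sketch of rule-based Unidic -> UD mapping.
# The real rules in spaCy's Japanese language data are far more extensive.
TAG_MAP = {
    "名詞-普通名詞-一般": "NOUN",   # common noun
    "動詞-一般": "VERB",            # general verb
}

def resolve_pos(unidic_tag, surface=""):
    """Map a Unidic tag to a UD POS tag with a toy disambiguation rule."""
    # Example rule: a bound-capable verb like いる (as in 〜ている) is AUX in UD.
    if unidic_tag == "動詞-非自立可能" and surface == "いる":
        return "AUX"
    return TAG_MAP.get(unidic_tag, "X")
```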
* Add new tags from GSD
These are a few rare tags that aren't in Unidic but do appear in the GSD data.
* Add basic Japanese sentencization
This code is taken from Ginza again.
* Add sentencizer quote handling
Could probably add more paired characters but this will do for now. Also
includes some tests.
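A minimal sketch of the kind of paired-character handling meant here (illustrative only, not the merged code): only treat sentence-final punctuation as a boundary outside quotes, and let a closing quote that follows such punctuation end the sentence.

```python
# Illustrative sketch: find sentence-final positions while respecting quotes.
SENT_END = {"。", "！", "？"}
QUOTE_PAIRS = {"「": "」", "『": "』", "（": "）"}  # more pairs could be added

def sentence_ends(text):
    """Yield indices of characters that close a sentence."""
    stack = []
    for i, ch in enumerate(text):
        if ch in QUOTE_PAIRS:
            stack.append(QUOTE_PAIRS[ch])       # remember the expected closer
        elif stack and ch == stack[-1]:
            stack.pop()
            # a closing quote right after sentence-final punctuation ends it
            if not stack and i > 0 and text[i - 1] in SENT_END:
                yield i
        elif ch in SENT_END and not stack:
            yield i

print(list(sentence_ends("「行くの？」と聞いた。明日行く。")))  # [5, 10, 15]
```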
* Replace fugashi with SudachiPy
* Modify tag format to match GSD annotations
Some of the tests still need to be updated, but I want to get this up
for testing training.
* Deal with the case of closing punct without an opening one
* refactor resolve_pos()
* change tag field separator from "," to "-"
* add TAG_ORTH_MAP
* add TAG_BIGRAM_MAP
* revise rules for 連体詞
* revise rules for 連体詞
* improve POS accuracy by about 2%
* add syntax_iterator.py (not mature yet)
* improve syntax_iterators.py
* improve syntax_iterators.py
* add phrases that include nouns and drop NPs consisting of STOP_WORDS
* First take at noun chunks
This works in many situations but still has issues in others.
If the start of a subtree has no noun, then nested phrases can be
generated.
また行きたい、そんな気持ちにさせてくれるお店です。
[そんな気持ち, また行きたい、そんな気持ちにさせてくれるお店]
For some reason て gets included sometimes. Not sure why.
ゲンに連れ添って円盤生物を調査するパートナーとなる。
[て円盤生物, ...]
Some phrases that look like they should be split are grouped together;
not entirely sure that's wrong. This whole thing becomes one chunk:
道の駅遠山郷北側からかぐら大橋南詰現道交点までの1.060kmのみ開通済み
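The overall shape of the iterator is roughly the following (a sketch over a parsed `Doc`, with simplified filtering assumptions; it is not the code in `syntax_iterators.py`):

```python
# Illustrative sketch of a noun-chunk iterator over a parsed Doc.
# The POS set and the filtering here are simplified assumptions, not the
# rules that ended up in spacy/lang/ja/syntax_iterators.py.
NOUN_POS = {"NOUN", "PROPN", "PRON"}

def noun_chunks(doc):
    seen = set()
    for word in doc:
        if word.pos_ not in NOUN_POS or word.i in seen:
            continue
        # extend over the word's left subtree so modifiers are included
        start = word.left_edge.i
        end = word.i + 1
        if all(tok.is_stop for tok in doc[start:end]):
            continue  # drop chunks consisting only of stop words
        seen.update(range(start, end))
        yield doc[start:end]
```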
* Use new generic get_words_and_spaces
The new get_words_and_spaces function is simpler than what was used in
Japanese, so it's good to be able to switch to it. However, there was an
issue. The new function works just on text, so POS info could get out of
sync. Fixing this required a small change to the way dtokens (tokens
with POS and lemma info) were generated.
Specifically, multiple extraneous spaces now become a single token, so
when generating dtokens multiple space tokens should be created in a
row.
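For reference, `get_words_and_spaces` aligns a list of words against the raw text and returns the words together with a parallel list of booleans saying whether each word is followed by a space, inserting whitespace-only tokens for any gaps. In current spaCy it can be imported from `spacy.training` (the import location has moved between versions, so treat the path below as an assumption):

```python
from spacy.training import get_words_and_spaces  # path may differ in spaCy 2.x

text = "日本語  です"            # note the double space
words = ["日本語", "です"]       # tokenizer output, with no whitespace info
out_words, spaces = get_words_and_spaces(words, text)
print(out_words)  # expected: ['日本語', ' ', 'です'] -- extra space becomes a token
print(spaces)     # expected: [True, False, False]
```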
* Fix noun_chunks, should be working now
* Fix some tests, add naughty strings tests
Some of the existing tests changed because the tokenization mode of
Sudachi changed to the more fine-grained A mode.
Sudachi also has issues with some strings, so this adds a test against
the naughty strings.
* Remove empty Sudachi tokens
Not doing this creates zero-length tokens and causes errors in the
internal spaCy processing.
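The fix boils down to filtering out zero-length surface forms before the tokens are turned into a `Doc` — a simplified sketch, assuming SudachiPy morphemes that expose a `surface()` string:

```python
# Illustrative sketch: drop the zero-length morphemes SudachiPy can emit for
# some inputs, since zero-length tokens break spaCy's internal processing.
def get_filtered_dtokens(sudachi_tokenizer, text, mode):
    morphemes = sudachi_tokenizer.tokenize(text, mode)
    return [m for m in morphemes if len(m.surface()) > 0]
```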
* Add yield_bunsetu back in as a separate piece of code
Co-authored-by: Hiroshi Matsuda <40782025+hiroshi-matsuda-rit@users.noreply.github.com>
Co-authored-by: hiroshi <hiroshi_matsuda@megagon.ai>
Restructure Polish lemmatizer not to depend on lookups data in
`__init__` since the lemmatizer is initialized before the lookups data
is loaded from a saved model. The lookups tables are instead accessed in
`__call__`, once the data is available.
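Schematically, the restructured lemmatizer looks like this (a sketch of the lazy-access pattern using the v2.x lookups API; the table name is only an example):

```python
# Sketch: don't touch the lookups tables in __init__, because the lemmatizer
# is constructed before the saved model's lookups data has been loaded.
class LazyLemmatizerSketch:
    def __init__(self, lookups):
        self.lookups = lookups                      # tables may still be empty here

    def __call__(self, string, univ_pos):
        # Access the table only now, once deserialization has filled it in.
        table = self.lookups.get_table("lemma_lookup_" + univ_pos.lower(), {})
        return [table.get(string, string)]
```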
Update Polish tokenizer for UD_Polish-PDB, which is a relatively major
change from the existing tokenizer. Unused exception files and conflicting
test cases have been removed.
Co-authored-by: Matthew Honnibal <honnibal+gh@gmail.com>
* Reduce stored lexemes data, move feats to lookups
* Move non-derivable lexemes features (`norm / cluster / prob`) to
`spacy-lookups-data` as lookups
* Get/set `norm` in both lookups and `LexemeC`, serialize in lookups
* Remove `cluster` and `prob` from `LexemeC`, get/set/serialize in
lookups only
* Remove serialization of lexemes data as `vocab/lexemes.bin`
* Remove `SerializedLexemeC`
* Remove `Lexeme.to_bytes/from_bytes`
* Modify normalization exception loading:
* Always create `Vocab.lookups` table `lexeme_norm` for
normalization exceptions
* Load base exceptions from `lang.norm_exceptions`, but load
language-specific exceptions from lookups
* Set `lex_attr_getter[NORM]` including new lookups table in
`BaseDefaults.create_vocab()` and when deserializing `Vocab`
* Remove all cached lexemes when deserializing vocab to override
existing normalizations with the new normalizations (as a replacement
for the previous step that replaced all lexemes data with the
deserialized data)
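With the norm exceptions living in a lookups table, language data can populate and query them through the `Lookups` API; a small illustrative snippet (the real `lexeme_norm` tables ship in `spacy-lookups-data` and are attached to the `Vocab`, not built by hand like this):

```python
from spacy.lookups import Lookups

lookups = Lookups()
table = lookups.add_table("lexeme_norm", {"gonna": "going to", "Ive": "I've"})
print(table.get("gonna"))                 # -> "going to"
print(lookups.has_table("lexeme_norm"))   # -> True
```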
* Skip English normalization test
Skip English normalization test because the data is now in
`spacy-lookups-data`.
* Remove norm exceptions
Moved to spacy-lookups-data.
* Move norm exceptions test to spacy-lookups-data
* Load extra lookups from spacy-lookups-data lazily
Load extra lookups (currently for cluster and prob) lazily from the
entry point `lg_extra` as `Vocab.lookups_extra`.
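The mechanism is the usual lazy-load pattern: nothing is read from `spacy-lookups-data` until the extra tables are actually requested. A generic sketch with hypothetical names (not spaCy's implementation):

```python
# Generic lazy-loading sketch (hypothetical names, not spaCy's actual code).
class VocabLike:
    def __init__(self, load_extra_lookups):
        # load_extra_lookups would wrap the lg_extra entry point
        self._load_extra_lookups = load_extra_lookups
        self._lookups_extra = None

    @property
    def lookups_extra(self):
        if self._lookups_extra is None:             # first access triggers the load
            self._lookups_extra = self._load_extra_lookups()
        return self._lookups_extra
```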
* Skip creating lexeme cache on load
To improve model loading times, do not create the full lexeme cache when
loading. The lexemes will be created on demand when processing.
* Identify numeric values in Lexeme.set_attrs()
With the removal of a special case for `PROB`, also identify `float` values to
avoid trying to convert them with the `StringStore`.
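In other words, only string values go through the `StringStore`; numeric values are kept as-is. A schematic sketch of that check (not the Cython in `Lexeme.set_attrs`):

```python
# Schematic sketch: numeric attribute values (ints, and now floats such as
# PROB) are stored directly, while strings are interned via the StringStore.
def coerce_attr_value(strings, value):
    if isinstance(value, (int, float)):
        return value
    return strings.add(value)   # StringStore.add returns the hash ID
```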
* Skip lexeme cache init in from_bytes
* Unskip and update lookups tests for python3.6+
* Update vocab pickle to include lookups_extra
* Update vocab serialization tests
Check strings rather than lexemes, since lexemes aren't initialized
automatically, and account for the addition of "_SP".
* Re-skip lookups test because of python3.5
* Skip PROB/float values in Lexeme.set_attrs
* Convert is_oov from lexeme flag to lex in vectors
Instead of storing `is_oov` as a lexeme flag, `is_oov` is now derived from the
vectors table: a lexeme is out of vocabulary when it has no vector.
Co-authored-by: Matthew Honnibal <honnibal+gh@gmail.com>
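In practice, for a pipeline that has word vectors, `is_oov` now mirrors the vectors table rather than a stored flag — a small usage sketch (the model name is just an example of a package with vectors):

```python
import spacy

nlp = spacy.load("en_core_web_md")      # example pipeline that includes vectors
doc = nlp("apple blorptastic")
for token in doc:
    # with this change, is_oov should be True exactly when the lexeme
    # has no entry in the vocab's vectors
    print(token.text, token.is_oov, token.has_vector)
```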
* Limiting noun_chunks for specific languages
* Limiting noun_chunks for specific languages
Contributor Agreement
* Addressing review comments
* Removed unused fixtures and imports
* Add fa_tokenizer in test suite
* Use fa_tokenizer in test
* Undo extraneous reformatting
Co-authored-by: adrianeboyd <adrianeboyd@gmail.com>
Remove `TAG` value from Danish and Swedish tokenizer exceptions because
it may not be included in a tag map (and these settings are problematic
as tokenizer exceptions anyway).
Instead of treating `'d` in contractions like `I'd` as `would` in all
cases in the tokenizer exceptions, leave the tagging and lemmatization
up to later components.
To fix slow tokenization of URLs (#4374) and allow `token_match` to take
priority over prefixes and suffixes by default, introduce a new
tokenizer option for a token match pattern that's applied after prefixes
and suffixes but before infixes.
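In released spaCy this second pattern is exposed on the tokenizer as `url_match`, alongside `token_match`. A small sketch of overriding it on an existing pipeline (the regex and model name are just examples):

```python
import re
import spacy

nlp = spacy.load("en_core_web_sm")   # any pipeline; the model name is an example

# Applied after prefixes/suffixes but before infixes, so a URL wrapped in
# punctuation such as "(https://example.com)" still comes out as one token.
nlp.tokenizer.url_match = re.compile(r"https?://\S+").match

print([t.text for t in nlp("(see https://example.com)")])
```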
Modify jieba install message to instruct the user to use
`ChineseDefaults.use_jieba = False` so that it's possible to load
pkuseg-only models without jieba installed.
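The workaround the message points to looks like this, assuming the spaCy v2.x API this changelog describes (v3 configures the Chinese segmenter differently):

```python
# Disable jieba before constructing the Chinese language class so that a
# pkuseg-only model can be loaded without the jieba package installed.
from spacy.lang.zh import Chinese, ChineseDefaults

ChineseDefaults.use_jieba = False
nlp = Chinese()   # created without requiring jieba
```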