Commit Graph

1605 Commits

Author SHA1 Message Date
Shashank
450720aca2
Added support for the Sanskrit language (#5956)
* Added support for the Sanskrit language

* Added tests for the lexical attribute like_num
2020-08-25 10:56:29 +02:00
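
A quick smoke test of the new language support; treat "एक" ("one") as an assumed entry in the Sanskrit number-word table, while Devanagari digits are covered by like_num generically:

    import spacy

    nlp = spacy.blank("sa")  # Sanskrit, added in #5956
    # "एक" ("one") is an assumed number word; "१५" is plain Devanagari digits.
    doc = nlp("एक १५")
    print([(t.text, t.like_num) for t in doc])
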
idoshr
b10c7bc56e
Hebrew like_num (#5952)
* Update stop_words.py

Hebrew STOP WORDS

* Update stop_words.py

* contributor

* contributor

* add some common domain extensions
support human-style numbers (1K/1M, …)

* support human-style numbers (1K/1M, …)

* Hebrew number tokenization
implement 1K/1M in EN

* fix human-number tokenization test

* test

* Hebrew like_num
revert human-number change

* Hebrew like_num
2020-08-24 14:30:05 +02:00
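
A minimal check of the new Hebrew like_num entries; the sample word עשר ("ten") is an assumption about the number-word table:

    import spacy

    nlp = spacy.blank("he")
    # עשר ("ten") is assumed to be in the Hebrew number-word table.
    print(nlp("עשר")[0].like_num)
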
Sofie Van Landeghem
56eabcb2f2
Adding like_num test for Czech (#5946)
* Create lex_attrs.py

Hello,

Czech language support is missing in spaCy, so I would like to help push it along a little. This file is based on the other lex_attrs.py files, just with translations to Czech.

* Update __init__.py

Updated for use with the new Czech lex_attrs file

* Update stop_words.py

* Create test_text.py

* add like_num testing for czech

Co-authored-by: holubvl3 <47881982+holubvl3@users.noreply.github.com>
Co-authored-by: holubvl3 <vilemrousi@gmail.com>
Co-authored-by: Vladimír Holubec <vholubec@arcdata.cz>
2020-08-21 17:06:33 +02:00
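
A sketch of a like_num test in the style of spaCy's parametrized tokenizer tests; the cs_tokenizer fixture and the sample words "deset" ("ten") and "pes" ("dog") are assumptions:

    import pytest

    # cs_tokenizer is assumed to be a per-language fixture as in spaCy's conftest.
    @pytest.mark.parametrize("text,match", [("deset", True), ("pes", False)])
    def test_cs_like_num(cs_tokenizer, text, match):
        tokens = cs_tokenizer(text)
        assert tokens[0].like_num == match
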
Sofie Van Landeghem
071c09ff35
add coding (#5942) 2020-08-20 11:08:38 +02:00
Adriane Boyd
4193402c47
Add warning when Matcher subpattern is discarded (#5873)
* Add a warning when a subpattern is not processed and discarded

* Normalize subpattern attribute/operator keys to upper case like
top-level attributes
2020-08-05 14:56:14 +02:00
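
A sketch of the key normalization described above; the lower-case "in" operator key inside the nested dict is assumed to be normalized to "IN" rather than discarded:

    import spacy
    from spacy.matcher import Matcher

    nlp = spacy.blank("en")
    matcher = Matcher(nlp.vocab)
    # Lower-case keys in the subpattern dict are normalized like
    # top-level attribute keys; previously they could be discarded.
    matcher.add("FRUIT", [[{"LOWER": {"in": ["apple", "banana"]}}]])
    doc = nlp("I ate an Apple")
    print(matcher(doc))  # one match over the last token
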
Adriane Boyd
c62fd878a3
Allow Doc.char_span to snap to token boundaries (#5849)
* Allow Doc.char_span to snap to token boundaries

Add a `mode` option to allow `Doc.char_span` to snap to token
boundaries. The `mode` options:

* `strict`: character offsets must match token boundaries (default, same as
before)
* `inside`: all tokens completely within the character span
* `outside`: all tokens at least partially covered by the character span

Add a new helper function `token_by_char` that returns the token
corresponding to a character position in the text. Update
`token_by_start` and `token_by_end` to use `token_by_char` for more
efficient searching.

* Remove unused import

* Rename mode to alignment_mode

Rename `mode` to `alignment_mode` with the options
`strict`/`contract`/`expand`. Any unrecognized modes are silently
converted to `strict`.
2020-08-04 13:36:32 +02:00
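
A short illustration of the final alignment_mode options:

    import spacy

    nlp = spacy.blank("en")
    doc = nlp("I like New York")
    # Characters 8-13 ("ew Yo") do not line up with token boundaries:
    print(doc.char_span(8, 13))                             # None (default "strict")
    print(doc.char_span(8, 13, alignment_mode="expand"))    # New York
    print(doc.char_span(8, 13, alignment_mode="contract"))  # None (no token fully inside)
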
Adriane Boyd
b841248589
Add Span index boundary checks (#5861)
* Add Span index boundary checks

* Return Span-specific IndexError in all cases

* Simplify and fix if/else
2020-08-04 13:35:25 +02:00
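
A sketch of the boundary checks, assuming out-of-range indexing now raises a Span-specific IndexError:

    import spacy

    nlp = spacy.blank("en")
    span = nlp("one two three")[0:2]
    print(span[0].text, span[-1].text)  # one two
    try:
        span[2]
    except IndexError as err:
        print(err)  # Span-specific IndexError instead of out-of-bounds access
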
Adriane Boyd
ac14ce7c30
Prefer earlier spans in EntityRuler (#5843)
Similar to #4414, update the sorting in EntityRuler to prefer the first
span in overlapping spans.
2020-07-31 16:09:32 +02:00
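
A sketch of the overlap resolution, assuming two equal-length patterns competing for the middle token:

    import spacy
    from spacy.pipeline import EntityRuler

    nlp = spacy.blank("en")
    ruler = EntityRuler(nlp)
    ruler.add_patterns([
        {"label": "ORG", "pattern": "Apple Inc"},
        {"label": "PRODUCT", "pattern": "Inc News"},  # overlaps the first pattern
    ])
    nlp.add_pipe(ruler)
    doc = nlp("Apple Inc News")
    # The earlier span now wins the overlap:
    print([(ent.text, ent.label_) for ent in doc.ents])
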
Rahul Gupta
f76fae0e8d
English: adds ordinal numbers (#5830) 2020-07-29 20:22:47 +02:00
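
If this wires ordinal words into like_num, a spot check might look like the following (the exact word list is an assumption):

    import spacy

    nlp = spacy.blank("en")
    # "third" is assumed to be covered by the new ordinal-word check.
    print([(t.text, t.like_num) for t in nlp("third three")])
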
Gustavo Zadrozny Leyendecker
90b958fd01
Fix EntityRenderer to support line breaks (after last entity) (closes #5838) 2020-07-29 18:48:39 +02:00
graue70
b97dbab998
Fix typo in unit tests (#5823) 2020-07-27 20:18:48 +02:00
Adriane Boyd
0a62098c5f
Fix lemmatizer is_base_form for python2.7 (#5734)
* Fix lemmatizer init args for python2.7

* Move English is_base_form to a class method

* Skip test pickling PhraseMatcher for python2
2020-07-09 22:11:24 +02:00
graue70
9860b8399e
Fix typo in test function docstring (#5696) 2020-07-05 15:49:06 +02:00
Adriane Boyd
a77c4c3465
Add strings and ENT_KB_ID to Doc serialization (#5691)
* Add strings for all writeable Token attributes to `Doc.to/from_bytes()`.
* Add ENT_KB_ID to default attributes.
2020-07-02 17:11:57 +02:00
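
A sketch of the round trip this enables; passing kb_id to Span as a string is assumed to work the same way as string labels:

    import spacy
    from spacy.tokens import Doc, Span

    nlp = spacy.blank("en")
    doc = nlp("Paris is nice")
    doc.ents = [Span(doc, 0, 1, label="GPE", kb_id="Q90")]
    # ENT_KB_ID and its string now survive serialization:
    doc2 = Doc(nlp.vocab).from_bytes(doc.to_bytes())
    print(doc2.ents[0].kb_id_)  # Q90
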
Matthew Honnibal
2d715451a2
Revert "Convert custom user_data to token extension format for Japanese tokenizer (#5652)" (#5665)
This reverts commit 1dd38191ec.
2020-06-29 14:34:15 +02:00
Adriane Boyd
1dd38191ec
Convert custom user_data to token extension format for Japanese tokenizer (#5652)
* Convert custom user_data to token extension format

Convert the user_data values so that they can be loaded as custom token
extensions for `inflection`, `reading_form`, `sub_tokens`, and `lemma`.

* Reset Underscore state in ja tokenizer tests
2020-06-29 14:20:26 +02:00
Adriane Boyd
167df42cb6
Move lemmatizer is_base_form to language settings (#5663)
Move `Lemmatizer.is_base_form` to the language settings so that each
language can provide a language-specific method as
`LanguageDefaults.is_base_form`.

The existing English-specific `Lemmatizer.is_base_form` is moved to
`EnglishDefaults`.
2020-06-29 14:16:57 +02:00
Adriane Boyd
b7107ac89f
Disregard special tag _SP in check for new tag map (#5641)
* Skip special tag  _SP in check for new tag map

In `Tagger.begin_training()` check for new tags aside from `_SP` in the
new tag map initialized from the provided gold tuples when determining
whether to reinitialize the morphology with the new tag map.

* Simplify _SP check
2020-06-26 09:23:21 +02:00
Adriane Boyd
d94e961f14
Fix polarity of Token.is_oov and Lexeme.is_oov (#5634)
Fix `Token.is_oov` and `Lexeme.is_oov` so they return `True` when the
lexeme does **not** have a vector.
2020-06-23 13:29:51 +02:00
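
After the fix, is_oov should agree with the lexeme having no vector; a blank pipeline ships with no vectors at all:

    import spacy

    nlp = spacy.blank("en")
    token = nlp("hello")[0]
    print(token.has_vector, token.is_oov)  # False True
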
Hiroshi Matsuda
150a39ccca
Japanese model: add user_dict entries and small refactor (#5573)
* user_dict fields: add inflections, reading_forms, sub_tokens;
delete unidic_tags;
improve code readability around the token alignment procedure

* add test cases, replace fugashi with sudachipy in conftest

* move bunsetu.py to spaCy Universe as a pipeline component BunsetuRecognizer

* tag is space -> both surface and tag are spaces

* handle the len(text) == 0 case
2020-06-22 14:32:25 +02:00
Rameshh
c34420794a
Add Nepali Language (#5622)
* added support for the Nepali language

* added examples and test files

* added spaCy contributor agreement
2020-06-22 10:25:46 +02:00
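
A minimal smoke test for the new language; the sample sentence is only an illustration:

    import spacy

    nlp = spacy.blank("ne")  # Nepali
    # "नेपाल राम्रो छ" ("Nepal is nice") is an assumed example sentence.
    print([t.text for t in nlp("नेपाल राम्रो छ")])
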
Adriane Boyd
c482f20778
Fix and add warnings related to spacy-lookups-data (#5588)
* Fix warning message for lemmatization tables

* Add a warning when the `lexeme_norm` table is empty. (Given the
relatively lang-specific loading for `Lookups`, it seemed like too much
overhead to dynamically extract the list of languages, so for now it's
hard-coded.)
2020-06-15 14:56:04 +02:00
adrianeboyd
fe167fcf7d
Update pytest conf for sudachipy with Japanese (#5574) 2020-06-11 10:23:50 +02:00
Jones Martins
28db7dd5d9
Add missing pronouns/determiners (#5569)
* Add missing pronouns/determiners

* Add test for missing pronouns

* Add contributor file
2020-06-10 18:47:04 +02:00
adrianeboyd
b7e6e1b9a7
Disable sentence segmentation in ja tokenizer (#5566) 2020-06-09 12:00:59 +02:00
adrianeboyd
f162815f45
Handle empty and whitespace-only docs for Japanese (#5564)
Handle empty and whitespace-only docs in the custom alignment method
used by the Japanese tokenizer.
2020-06-08 21:09:23 +02:00
adrianeboyd
3bf111585d
Update Japanese tokenizer config and add serialization (#5562)
* Use `config` dict for tokenizer settings
* Add serialization of split mode setting
* Add tests for tokenizer split modes and serialization of split mode
setting

Based on #5561
2020-06-08 16:29:05 +02:00
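
A sketch of selecting the split mode through the new config dict, using the v2.3-style meta override; the exact keys follow this changelog and should be treated as assumptions:

    from spacy.lang.ja import Japanese

    # Requires SudachiPy and its dictionary; split_mode may be "A", "B", or "C".
    nlp = Japanese(meta={"tokenizer": {"config": {"split_mode": "B"}}})
    print([t.text for t in nlp("選挙管理委員会")])
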
Paul O'Leary McCann
410fb7ee43
Add Japanese Model (#5544)
* Add more rules to deal with Japanese UD mappings

Japanese UD rules sometimes give different UD tags to tokens with the
same underlying POS tag. The UD spec indicates these cases should be
disambiguated using the output of a tool called "comainu", but rules are
enough to get the right result.

These rules are taken from Ginza at time of writing, see #3756.

* Add new tags from GSD

These are a few rare tags that aren't in Unidic but are in the GSD data.

* Add basic Japanese sentencization

This code is taken from Ginza again.

* Add sentencizer quote handling

Could probably add more paired characters but this will do for now. Also
includes some tests.

* Replace fugashi with SudachiPy

* Modify tag format to match GSD annotations

Some of the tests still need to be updated, but I want to get this up
for testing training.

* Deal with the case of closing punctuation without an opening counterpart

* refactor resolve_pos()

* change tag field separator from "," to "-"

* add TAG_ORTH_MAP

* add TAG_BIGRAM_MAP

* revise rules for 連体詞 (adnominal words)

* revise rules for 連体詞

* improve POS accuracy by about 2%

* add syntax_iterators.py (not mature yet)

* improve syntax_iterators.py

* improve syntax_iterators.py

* add phrases including nouns and drop NPs consisting of STOP_WORDS

* First take at noun chunks

This works in many situations but still has issues in others.

If the start of a subtree has no noun, then nested phrases can be
generated.

    また行きたい、そんな気持ちにさせてくれるお店です。
    [そんな気持ち, また行きたい、そんな気持ちにさせてくれるお店]

For some reason て gets included sometimes. Not sure why.

    ゲンに連れ添って円盤生物を調査するパートナーとなる。
    [て円盤生物, ...]

Some phrases that look like they should be split are grouped together;
not entirely sure that's wrong. This whole thing becomes one chunk:

    道の駅遠山郷北側からかぐら大橋南詰現道交点までの1.060kmのみ開通済み

* Use new generic get_words_and_spaces

The new get_words_and_spaces function is simpler than what was used in
Japanese, so it's good to be able to switch to it. However, there was an
issue. The new function works just on text, so POS info could get out of
sync. Fixing this required a small change to the way dtokens (tokens
with POS and lemma info) were generated.

Specifically, multiple extraneous spaces now become a single token, so
when generating dtokens multiple space tokens should be created in a
row.

* Fix noun_chunks, should be working now

* Fix some tests, add naughty strings tests

Some of the existing tests changed because the tokenization mode of
Sudachi changed to the more fine-grained A mode.

Sudachi also has issues with some strings, so this adds a test against
the naughty strings.

* Remove empty Sudachi tokens

Not doing this creates zero-length tokens and causes errors in the
internal spaCy processing.

* Add yield_bunsetu back in as a separate piece of code

Co-authored-by: Hiroshi Matsuda <40782025+hiroshi-matsuda-rit@users.noreply.github.com>
Co-authored-by: hiroshi <hiroshi_matsuda@megagon.ai>
2020-06-04 19:15:43 +02:00
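
A basic smoke test of the resulting tokenizer; it assumes SudachiPy and its dictionary are installed:

    import spacy

    # Tokenization uses SudachiPy in the fine-grained A mode by default.
    nlp = spacy.blank("ja")
    print([t.text for t in nlp("日本語の文章を解析する")])
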
Adriane Boyd
8c758ed1eb Fix meta path 2020-06-03 12:11:57 +02:00
Adriane Boyd
a57bdeecac Test util.get_model_meta instead of util.load_model 2020-06-03 12:10:12 +02:00
Adriane Boyd
bbc1836581 Add rudimentary version checks on model load 2020-06-02 17:33:48 +02:00
Adriane Boyd
e06ca7ea24 Switch to new add API in PhraseMatcher unpickle 2020-05-25 11:22:47 +02:00
Ines Montani
6728747f71
Merge pull request #5486 from explosion/fix/compat-py2 2020-05-22 15:47:21 +02:00
Matthew Honnibal
f6078d866a
Merge pull request #5121 from adrianeboyd/bugfix/revert-token-match
Revert token_match priority changes from #4374 and extend token match options
2020-05-22 14:42:51 +02:00
Ines Montani
c685ee734a Fix compat for v2.x branch 2020-05-22 14:22:36 +02:00
Adriane Boyd
e4a1b5dab1 Rename to url_match
Rename to `url_match` and update docs.
2020-05-22 12:41:03 +02:00
Adriane Boyd
730fa493a4 Merge remote-tracking branch 'upstream/master' into bugfix/revert-token-match 2020-05-22 12:18:00 +02:00
Adriane Boyd
71fe61fdcd Disallow merging 0-length spans 2020-05-22 10:14:34 +02:00
Matthew Honnibal
93c4d13588
Merge pull request #5264 from lfiedler/issue-5230
Fix ResourceWarnings during unittest
2020-05-22 00:31:07 +02:00
Matthew Honnibal
e1cb7e838b
Merge pull request #5481 from explosion/feature/blank-shortcut-v2
Add blank:{lang} shortcut support to util.load_model
2020-05-22 00:08:23 +02:00
Ines Montani
cb02bff0eb Add blank:{lang} shortcut to util.load_model 2020-05-21 20:24:07 +02:00
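
The shortcut in action; it returns the same kind of blank pipeline as spacy.blank:

    import spacy

    nlp = spacy.util.load_model("blank:en")
    print(nlp.lang, nlp.pipe_names)  # en []
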
svlandeg
84d5b7ad0a Merge remote-tracking branch 'upstream/master' into bugfix/noun-chunks
# Conflicts:
#	spacy/lang/el/syntax_iterators.py
#	spacy/lang/en/syntax_iterators.py
#	spacy/lang/fa/syntax_iterators.py
#	spacy/lang/fr/syntax_iterators.py
#	spacy/lang/id/syntax_iterators.py
#	spacy/lang/nb/syntax_iterators.py
#	spacy/lang/sv/syntax_iterators.py
2020-05-21 19:19:50 +02:00
Ines Montani
c6ec19c844 Add missing declaration 2020-05-21 17:30:05 +02:00
Ines Montani
bd6353715a Merge branch 'master' into fix/travis-tests 2020-05-21 14:23:04 +02:00
Ines Montani
d8f3190c0a Tidy up and auto-format 2020-05-21 14:14:01 +02:00
Ines Montani
56de520afd Try to fix tests on Travis (2.7) 2020-05-21 14:04:57 +02:00
svlandeg
36a94c409a failing test to reproduce overlapping spans problem 2020-05-20 23:06:03 +02:00
adrianeboyd
40e65d6f63
Fix most_similar for vectors with unused rows (#5348)
* Fix most_similar for vectors with unused rows

Address issues related to the unused rows in the vector table and
`most_similar`:

* Update `most_similar()` to search only through rows that are in use
according to `key2row`.

* Raise an error when `n` in `most_similar(n=n)` is larger than the number of
vectors in the table.

* Set and restore `_unset` correctly when vectors are added or
deserialized so that new vectors are added in the correct row.

* Set data and keys to the same length in `Vocab.prune_vectors()` to
avoid spurious entries in `key2row`.

* Fix regression test using `most_similar`

Co-authored-by: Matthew Honnibal <honnibal+gh@gmail.com>
2020-05-19 16:41:26 +02:00
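
A sketch of the fixed behavior with a tiny vector table; set_vector creating and sizing the table on first use is assumed:

    import numpy
    import spacy

    nlp = spacy.blank("en")
    nlp.vocab.set_vector("cat", numpy.asarray([1.0, 0.0], dtype="f"))
    nlp.vocab.set_vector("dog", numpy.asarray([0.9, 0.1], dtype="f"))
    query = numpy.asarray([nlp.vocab.get_vector("cat")])
    # Only rows in use are searched; n larger than the number of
    # vectors now raises an error instead of returning junk rows.
    keys, rows, scores = nlp.vocab.vectors.most_similar(query, n=2)
    print([nlp.vocab.strings[int(key)] for key in keys[0]])  # ['cat', 'dog']
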
adrianeboyd
70da1fd2d6
Add warning for misaligned character offset spans (#5007)
* Add warning for misaligned character offset spans

* Resolve conflict

* Filter warnings in example scripts

Filter warnings in example scripts to show warnings once, in particular
warnings about misaligned entities.

Co-authored-by: Ines Montani <ines@ines.io>
2020-05-19 16:01:18 +02:00
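
A sketch of triggering the warning with offsets that cut a token in half, using the v2-era spacy.gold helper:

    import warnings
    import spacy
    from spacy.gold import biluo_tags_from_offsets  # moved to spacy.training in v3

    nlp = spacy.blank("en")
    doc = nlp("I like New York")
    # (7, 9) covers "Ne" and splits the token "New", so alignment fails:
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        tags = biluo_tags_from_offsets(doc, [(7, 9, "GPE")])
    print(tags)         # misaligned tokens come back as "-"
    print(len(caught))  # the misalignment warning was raised
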
adrianeboyd
0061992d95
Update Polish tokenizer for UD_Polish-PDB (#5432)
Update Polish tokenizer for UD_Polish-PDB, which is a relatively major
change from the existing tokenizer. Unused exceptions files and
conflicting test cases removed.

Co-authored-by: Matthew Honnibal <honnibal+gh@gmail.com>
2020-05-19 15:59:55 +02:00