Expect an `entity_ruler.jsonl` file in the top-level model directory, i.e. the path passed to `from_disk` by default (model path plus component name), but with the suffix `.jsonl`.
* Preserve flags in EntityRuler
The EntityRuler (explosion/spaCy#3526) does not preserve
overwrite flags (or `ent_id_sep`) when serialized. This
commit adds support for serialization/deserialization preserving
overwrite and ent_id_sep flags.
* add signed contributor agreement
* flake8 cleanup
mostly blank line issues.
* mark test from the issue as needing a model
The test from the issue needs a language model for serialization,
but it wasn't originally marked as such.
* Adds `phrase_matcher_attr` to allow passing an attribute to the PhraseMatcher
This is an added argument passed through to the `PhraseMatcher`. For example,
it allows creating a case-insensitive phrase matcher when the
`EntityRuler` is created. References explosion/spaCy#3822
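For illustration, a case-insensitive ruler could be created like this (a minimal sketch using the v2-style API; the pattern and text are made up):

```python
from spacy.lang.en import English
from spacy.pipeline import EntityRuler

nlp = English()
# Match on the LOWER attribute so phrase patterns become case-insensitive.
ruler = EntityRuler(nlp, phrase_matcher_attr="LOWER")
ruler.add_patterns([{"label": "ORG", "pattern": "apple"}])
nlp.add_pipe(ruler)

doc = nlp("I work at APPLE.")
assert [(ent.text, ent.label_) for ent in doc.ents] == [("APPLE", "ORG")]
```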
* remove unneeded model loading
The model didn't need to be loaded, and I replaced it with
a change that doesn't require it (using existing fixtures)
* updated docstring for new argument
* updated docs to reflect new argument to the EntityRuler constructor
* change tempdir handling to be compatible with python 2.7
* return conflicted code to entityruler
Some code got cut out because of merge conflicts; this
restores it for the phrase_matcher_attr.
* fixed typo in the code added back after conflicts
* flake8 compliance
When I deconflicted the branch, some flake8 issues were
introduced. This resolves the spacing problems.
* test changes: attempts to fix flaky test in python3.5
These tests seem to be a little flaky in 3.5, so I changed the check to avoid
the comparisons that seem to fail sometimes.
* Adds code to handle items saved before this change.
This code changes how the save files are handled and how the bytes
are stored. It adds checks to dispatch correctly
if it encounters bytes or files saved in the old format (and tests
for those cases).
* use util function for tempdir management
Updated after PR comments: this code now uses the make_tempdir function from util
instead of doing it by hand.
* initial LT lang support
* Added more stopwords. Started setting up some basic test environment (not complete)
* Initial morph rules for LT lang
* Closes #1: Adds tokenizer exceptions for Lithuanian
* Closes #5: Punctuation rules. Closes #6: Lexical attributes
* test: add native examples to basic tests
* feat: add tag map for lt lang
* fix: remove undefined tag attribute 'Definite'
* feat: add lemmatizer for lt lang
* refactor: add new instances to lt lang morph rules; use tags from tag map
* refactor: add morph rules to lt lang defaults
* refactor: only keep nouns, verbs, adverbs and adjectives in lt lang lemmatizer lookup
* refactor: add capitalized words to lt lang lemmatizer
* refactor: add more num words to lt lang lex attrs
* refactor: update lt lang stop word set
* refactor: add new instances to lt lang tokenizer exceptions
* refactor: remove comments from lt lang init file
* refactor: use function instead of lambda in lt lex lang getter
* refactor: remove conversion to dict in lt init when dict is already provided
* chore: rename lt 'test_basic' to 'test_text'
* feat: add more lt text tests
* feat: add lemmatizer tests
* refactor: remove unused imports, add newline to end of file
* chore: add contributor agreement
* chore: change 'en' to 'lt' in lt example description
* fix: add missing encoding info
* style: add newline to end of file
* refactor: use python2 compatible syntax
* style: reformat code using black
* Adding support for entity_id in EntityRuler pipeline component
* Adding spaCy contributor agreement
* Updating EntityRuler to use string.format instead of f strings
* Update Entity Ruler to support an 'id' attribute per pattern that explicitly identifies an entity.
* Fixing tests
* Remove custom extension entity_id and use built in ent_id token attribute.
* Changing entity_id to ent_id for consistent naming
* entity_ids => ent_ids
* Removing kb, cleaning up tests, making util functions private, use rsplit instead of split
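Putting the pieces of this PR together, usage looks roughly like this (v2-style API; the patterns are illustrative):

```python
from spacy.lang.en import English
from spacy.pipeline import EntityRuler

nlp = English()
ruler = EntityRuler(nlp)
# The optional "id" ties different surface forms to the same entity.
ruler.add_patterns([
    {"label": "GPE", "pattern": "San Francisco", "id": "san-francisco"},
    {"label": "GPE", "pattern": "San Fran", "id": "san-francisco"},
])
nlp.add_pipe(ruler)

doc = nlp("I moved to San Fran.")
assert [(ent.text, ent.ent_id_) for ent in doc.ents] == [("San Fran", "san-francisco")]
```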
* Add check for empty input file to CLI pretrain
* Raise error if JSONL is not a dict or contains neither `tokens` nor `text` key
* Skip empty values for correct pretrain keys and log a counter as warning
* Add tests for CLI pretrain core function make_docs.
* Add a short hint for the `tokens` key to the CLI pretrain docs
* Add success message to CLI pretrain
* Update model loading to fix the tests
* Skip empty values and do not create docs out of them (see the sketch below)
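A rough sketch of the validation behaviour described above (not the actual CLI code; `records` stands in for the parsed JSONL stream):

```python
from spacy.tokens import Doc

def make_docs_sketch(nlp, records):
    # Each JSONL record must be a dict with a "tokens" or "text" key;
    # empty values are skipped and counted so they can be logged.
    docs, skipped = [], 0
    for record in records:
        if not isinstance(record, dict):
            raise ValueError("JSONL record is not a dict: %r" % record)
        if "tokens" in record:
            if not record["tokens"]:
                skipped += 1
                continue
            docs.append(Doc(nlp.vocab, words=record["tokens"]))
        elif "text" in record:
            if not record["text"]:
                skipped += 1
                continue
            docs.append(nlp.make_doc(record["text"]))
        else:
            raise ValueError("Record has neither 'tokens' nor 'text' key")
    return docs, skipped
```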
* Add custom __dir__ to Underscore (see #3707)
* Make sure custom extension methods keep their docstrings (see #3707)
* Improve tests
* Prepend note on partial to docstring (see #3707)
* Remove print statement
* Handle cases where docstring is None
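A minimal sketch of the docstring handling (the helper name is made up; the real logic lives in the `Underscore` machinery):

```python
import functools

def bind_extension_method(obj, func):
    # Extension methods are exposed as partials with the owning object
    # bound as the first argument; copy the docstring so help() stays useful.
    bound = functools.partial(func, obj)
    if func.__doc__ is None:
        # Handle functions without a docstring.
        bound.__doc__ = "Partial of an extension method."
    else:
        # Prepend a note that this is a partial of the original method.
        bound.__doc__ = "Partial of: " + func.__doc__
    return bound
```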
## Description
Fix a bug in the JapaneseTokenizer test.
This PR may require @polm's review.
### Types of change
Bug fix
## Checklist
- [x] I have submitted the spaCy Contributor Agreement.
- [x] I ran the tests, and all new and existing tests passed.
- [x] My changes don't require a change to the documentation, or if they do, I've added all required information.
* fix(util): fix decaying function output
* fix(util): better test and adhere to code standards
* fix(util): correct variable name, pytestify test, update website text
v2.1 introduced a regression when deserializing the parser after
parser.add_label() had been called. The code around the class mapping is
pretty confusing currently, as it was written to accommodate backwards
model compatibility. It needs to be revised when the models are next
retrained.
Closes #3433
spaCy v2.1 switched to the built-in re module, where v2.0 had been using
the third-party regex library. When the tokenizer was deserialized on
Python 2.7, the `re.compile()` function was called with expressions that
featured escaped unicode codepoints that were not in Python 2.7's unicode
database.
Problems occurred when we had a range between two of these unknown
codepoints, like this:
```
'[\\uAA77-\\uAA79]'
```
On Python 2.7, the unknown codepoints are not unescaped correctly,
resulting in arbitrary out-of-range characters being matched by the
expression.
This problem does not occur if we instead have a range between two
unicode literals, rather than the escape sequences. To fix the bug, we
therefore add a new compat function that unescapes unicode sequences
using the `ast.literal_eval()` function. Care is taken to ensure we
do not also escape non-unicode sequences.
Closes #3356.
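A sketch of the compat helper (the shipped function may differ in details):

```python
import ast

def unescape_unicode(string):
    if string is None:
        return string
    # Protect every backslash, then re-expose only the unicode escapes,
    # so literal_eval evaluates nothing but \uXXXX / \UXXXXXXXX sequences.
    protected = string.replace("\\", "\\\\")
    protected = protected.replace("\\\\u", "\\u").replace("\\\\U", "\\U")
    return ast.literal_eval('"%s"' % protected.replace('"', '\\"'))
```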
- [x] I have submitted the spaCy Contributor Agreement.
- [x] I ran the tests, and all new and existing tests passed.
- [x] My changes don't require a change to the documentation, or if they do, I've added all required information.
* merging conllu/conll and conllubio scripts
* tabs to spaces
* removing conllubio2json from converters/__init__.py
* Move not-really-CLI tests to misc
* Add converter test using no-ud data
* Fix test I broke
* removing include_biluo parameter
* fixing read_conllx
* remove include_biluo from convert.py
* label in span not writable anymore
* more explicit unit test and error message for readonly label
* bit more explanation (view)
* error msg tailored to specific case
* fix None case
Closes #2091.
## Description
With the new `vocab.writing_system` property introduced in #3390 (exposed via the language defaults), I was able to finally fix this (I think!). Based on the `Doc`, displaCy now detects whether it's an RTL or LTR language and adjusts the visualization accordingly. Wherever possible, I've also added `direction` and `lang` attributes.
Entity visualization now looks like this:
<img width="318" alt="Screenshot 2019-03-11 at 16 06 51" src="https://user-images.githubusercontent.com/13643239/54136866-d97afd80-441c-11e9-8c27-3d46994cc833.png">
And dependencies like this (ignore the most likely incorrect tags and dependencies):
<img width="621" alt="Screenshot 2019-03-11 at 16 51 59" src="https://user-images.githubusercontent.com/13643239/54137771-8b66f980-441e-11e9-8460-0682b95eef2a.png">
### Types of change
enhancement, bug fix
## Checklist
- [x] I have submitted the spaCy Contributor Agreement.
- [x] I ran the tests, and all new and existing tests passed.
- [x] My changes don't require a change to the documentation, or if they do, I've added all required information.
* Add xfail test for vocab.writing_system
* Add vocab.writing_system property
* Set Language.Defaults.writing_system
* Set default writing system
* Remove xfail on test_vocab_writing_system
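Usage looks roughly like this (the field values are shown for illustration):

```python
from spacy.lang.he import Hebrew

nlp = Hebrew()
# The language defaults expose direction, casing and letter information.
ws = nlp.vocab.writing_system
assert ws["direction"] == "rtl"
```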
Closes #2203. Closes #3268.
Lemmas set from outside the `Morphology` class were being overwritten. The result was especially confusing when deserialising, as it meant some lemmas could change when storing and retrieving a `Doc` object.
This PR applies two fixes:
1) When we go to set the lemma in the `Morphology` class, first check whether a lemma is already set. If so, don't overwrite.
2) When we load with `doc.from_array()`, take care to apply the `TAG` field first. This allows other fields to overwrite the `TAG` implied properties, if they're provided explicitly (e.g. the `LEMMA`).
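A sketch of fix 2 (an illustrative helper, not the exact implementation):

```python
from spacy.attrs import TAG

def order_attrs(attrs):
    # Apply TAG first so explicitly provided fields (e.g. LEMMA)
    # can overwrite the tag-implied properties afterwards.
    attrs = list(attrs)
    if TAG in attrs:
        attrs.remove(TAG)
        attrs.insert(0, TAG)
    return attrs
```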
## Checklist
- [x] I have submitted the spaCy Contributor Agreement.
- [x] I ran the tests, and all new and existing tests passed.
- [x] My changes don't require a change to the documentation, or if they do, I've added all required information.
* Make serialization methods consistent
Use an `exclude` keyword argument instead of assorted named keyword arguments, plus deprecation handling.
* Update docs and add section on serialization fields
* Use default return instead of else
* Add Doc.is_nered to indicate if entities have been set
* Add properties in Doc.to_json if they were set, not if they're available
This way, if a processed Doc exports "pos": None, it means that the tag was explicitly unset. If it exports "ents": [], it means that entity annotations are available but that this document doesn't contain any entities. Before, this would have been unclear and problematic for training.
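For example (a sketch with the v2.1-style API):

```python
from spacy.lang.en import English

nlp = English()
doc = nlp.make_doc("San Francisco")
# Tokenization only: no entity annotation has been set yet.
assert not doc.is_nered
```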
* Improve handling of missing NER tags
GoldParse can accept missing NER tags, if entities is provided
in BILUO format (rather than as spans). Missing tags can be provided
as None values.
Fix a bug that occurred when the first tag was a None value. Closes #2603.
* Document specification of missing NER tags.
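A hedged example of what this enables (v2-style `GoldParse`; the tags are illustrative):

```python
from spacy.gold import GoldParse
from spacy.lang.en import English

nlp = English()
doc = nlp.make_doc("I flew to San Francisco")
# None marks tokens whose NER annotation is missing/unknown.
entities = [None, "O", "O", "B-LOC", "L-LOC"]
gold = GoldParse(doc, entities=entities)
```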
* Classes for Ukrainian; small fix in Russian.
* Contributor agreement
* pymorphy2 initialization split for ru and uk (#3327)
* stop-words fixed
* Unit-tests updated
## Description
This PR adds the ability to override custom extension attributes during merging. This will only work for attributes that are writable, i.e. attributes registered with a default value like `default=False` or attributes that have both a getter *and* a setter implemented.
```python
Token.set_extension('is_musician', default=False)
doc = nlp("I like David Bowie.")
with doc.retokenize() as retokenizer:
    attrs = {"LEMMA": "David Bowie", "_": {"is_musician": True}}
    retokenizer.merge(doc[2:4], attrs=attrs)
assert doc[2].text == "David Bowie"
assert doc[2].lemma_ == "David Bowie"
assert doc[2]._.is_musician
```
### Types of change
enhancement
## Checklist
- [x] I have submitted the spaCy Contributor Agreement.
- [x] I ran the tests, and all new and existing tests passed.
- [x] My changes don't require a change to the documentation, or if they do, I've added all required information.
* splitting up latin unicode interval
* removing hyphen as infix for French
* adding failing test for issue 1235
* test for issue #3002 which now works
* partial fix for issue #2070
* keep the hyphen as infix for French (as it was)
* restore french expressions with hyphen as infix (as it was)
* added succeeding unit test for Issue #2656
* Fix issue #2822 with custom Italian exception
* Fix issue #2926 by allowing numbers right before infix /
* remove duplicate
* remove xfail for Issue #2179 fixed by Matt
* adjust documentation and remove reference to regex lib
* Fix matching on extension attrs and predicates
* Fix detection of match_id when using extension attributes. The match
ID is stored as the last entry in the pattern. We were checking for this
with nr_attr == 0, which didn't account for extension attributes.
* Fix handling of predicates. The wrong count was being passed through,
so even patterns that didn't have a predicate were being checked.
* Fix regex pattern
* Fix matcher set value test
* Change retokenize.split() API for heads
* Pass lists as values for attrs in split
* Fix test_doc_split filename
* Add error for mismatched tokens after split
* Raise error if new tokens don't match text
* Fix doc test
* Fix error
* Move deps under attrs
* Fix split tests
* Fix retokenize.split
* Add base classes for more languages
* Add test for language class initialization
Make sure the language can be initialized – otherwise, it's difficult to catch serious errors in the test suite, because languages are lazy-loaded
* Add splitting one token into several (resolves #2838)
* Improve error message for token splitting
* Make retokenizer.split() tests use a Token object
Change retokenizer.split() to use a Token object, instead of an index.
* Pass Token into retokenize.split()
Tweak retokenize.split() API so that we pass the `Token` object, not the index.
* Fix token.idx in retokenize.split()
* Test that token.idx is correct after split
* Fix token.idx for split tokens
* Fix retokenize.split()
* Fix retokenize.split
* Fix retokenize.split() test
* Add custom MatchPatternError
* Improve validators and add validation option to Matcher
* Adjust formatting
* Never validate in Matcher within PhraseMatcher
If we do decide to make validate default to True, the PhraseMatcher's Matcher shouldn't ever validate. Here, we create the patterns automatically anyways (and it's currently unclear whether the validation has performance impacts at a very large scale).
In most cases, the PhraseMatcher will match on the verbatim token text or as of v2.1, sometimes the lowercase text. This means that we only need a tokenized Doc, without any other attributes.
If phrase patterns are created by processing large terminology lists with the full `nlp` object, this easily can make things a lot slower, because all components will be applied, even if we don't actually need the attributes they set (like part-of-speech tags, dependency labels).
The warning message also includes a suggestion to use nlp.make_doc or nlp.tokenizer.pipe for even faster processing. For now, the validation has to be enabled explicitly by setting validate=True.
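A sketch of the faster pattern creation the warning suggests:

```python
from spacy.lang.en import English
from spacy.matcher import PhraseMatcher

nlp = English()
matcher = PhraseMatcher(nlp.vocab)
terms = ["Barack Obama", "Angela Merkel"]
# nlp.make_doc only runs the tokenizer, so no pipeline components
# are applied to the pattern texts.
patterns = [nlp.make_doc(term) for term in terms]
matcher.add("LEADERS", None, *patterns)

matches = matcher(nlp.make_doc("I saw Angela Merkel today"))
```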
* Improved stop words list
* Removed some wrong stop words from list
* Improved Polish Tokenizer (#38)
* Add tests for polish tokenizer
* Add polish tokenizer exceptions
* Don't split any words containing hyphens
* Fix test case with wrong model answer
* Remove commented out line of code until better solution is found
* Add source srx's license
* Rename exception_list.py to match spaCy conventionality
* Add a brief explanation of where the exception list comes from
* Add newline after each exception
* Rename COPYING.txt to LICENSE
* Delete old files
* Add header to the license
* Agreements signed
* Stanisław Giziński agreement
* Krzysztof Kowalczyk - signed agreement
* Mateusz Olko agreement
* Add DoomCoder's contributor agreement
* Improve like number checking in polish lang
* like num tests added
* all from SI system added
* Final licence and removed splitting exceptions
* Added Polish stop words to LEX_ATTRS
* Add encoding info to pl tokenizer exceptions
## Description
1. Added the same infix rule as in French (`d'une`, `j'ai`) for Italian (`c'è`, `l'ha`), bringing F-score on `it_isdt-ud-train.txt` from 96% to 99%. Added unit test to check this behaviour.
2. Added specific Urdu punctuation character as suffix, improving F-score on `ur_udtb-ud-train.txt` from 94% to 100%. Added unit test to check this behaviour.
### Types of change
Enhancement of Italian & Urdu tokenization
## Checklist
- [x] I have submitted the spaCy Contributor Agreement.
- [x] I ran the tests, and all new and existing tests passed.
- [x] My changes don't require a change to the documentation, or if they do, I've added all required information.
* replace unicode categories with raw list of code points
* simplifying ranges
* fixing variable length quotes
* removing redundant regular expression
* small cleanup of regexp notations
* quotes and alpha as ranges instead of alternations
* removed most regexp dependencies and features
* exponential backtracking - unit tests
* rewrote expression with pathological backtracking
* disabling double hyphen tests for now
* test additional variants of repeating punctuation
* remove regex and redundant backslashes from load_reddit script
* small typo fixes
* disable double punctuation test for russian
* clean up old comments
* format block code
* final cleanup
* naming consistency
* french strings as unicode for python 2 support
* french regular expression case insensitive
* Update matcher engine for regex and extensions
Add support for matching over arbitrary Python predicate functions, and
arbitrary Python attribute getters. This will allow matching over regex
patterns, and allow supporting extension attributes.
The results of the Python predicate functions are cached, so that we don't
call the same predicate function twice for the same token. The extension
attributes are fetched into an array for each token in the doc. This
should minimise the performance impact of the new features.
We still need to wire up these features to the patterns, and test it
all.
* Work on wiring up extra attributes in matcher
* Work on tests for extra matcher attrs
* Add support for extension attrs to matcher
* Test extension attribute matching
* Work on implementing predicate-based match patterns
* Get predicates working for set membership
* Add test for set membership
* Make extensions+predicates work
* Test matcher extensions
* Cache predicate results better in Matcher
* Remove print statement in matcher test
* Use srsly to get key for predicates
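Examples of the pattern syntax these changes enable (v2.1-style; the labels and patterns are made up):

```python
from spacy.lang.en import English
from spacy.matcher import Matcher
from spacy.tokens import Token

nlp = English()
matcher = Matcher(nlp.vocab)

# Set membership predicate on a built-in attribute.
matcher.add("FRUIT", None, [{"LOWER": {"IN": ["apple", "pear"]}}])

# Regex predicate over the token text.
matcher.add("US", None, [{"TEXT": {"REGEX": r"^[Uu](\.?|nited)$"}},
                         {"LOWER": "states"}])

# Matching on a custom extension attribute.
Token.set_extension("is_fruit", default=False)
matcher.add("CUSTOM", None, [{"_": {"is_fruit": True}}])
```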
* Added the same punctuation rules as the Danish language.
* Added abbreviations, including the possibility of capitalized variants for some. Added a few specific cases too
* Added test for long texts in swedish
* Added morph rules, infixes and suffixes to __init__.py for swedish
* Added some tests for prefixes, infixes and suffixes
* Added tests for lemma
* Renamed files to follow convention
* [sv] Removed ambiguous abbreviations
* Added more tests for tokenizer exceptions
* Added test for problem with punctuation in issue #2578
* Contributor agreement
* Removed faulty lemmatization of 'jag' ('I') as it was lemmatized to 'jaga' ('hunt')
This PR adds a test for an untested case of `Span.get_lca_matrix`, and fixes a bug for that scenario, which I introduced in [this PR](https://github.com/explosion/spaCy/pull/3089) (sorry!).
## Description
The previous implementation of get_lca_matrix was failing for the case `doc[j:k].get_lca_matrix()` where `j > 0`. A test has been added for this case and the bug has been fixed.
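The previously failing case looks like this (a sketch; it assumes the `en_core_web_sm` model is installed):

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The quick brown fox jumped over the lazy dog")
# Spans starting at j > 0 used to trigger the bug.
matrix = doc[1:5].get_lca_matrix()  # (4, 4) array of ancestor indices
```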
### Types of change
Bug fix
## Checklist
- [x] I have submitted the spaCy Contributor Agreement.
- [x] I ran the tests, and all new and existing tests passed.
- [x] My changes don't require a change to the documentation, or if they do, I've added all required information.
Initially span.as_doc() was designed to return a view of the span's contents, as a Doc object. This was a nice idea, but it fails due to the token.idx property, which refers to the character offset within the string. In a span, the idx of the first token might not be 0. Because this data is different, we can't have a view --- it'll be inconsistent.
This patch changes span.as_doc() to instead return a copy. The docs are updated accordingly. Closes #1537
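An illustration of the copy semantics (a sketch; the offsets are the key point):

```python
from spacy.lang.en import English

nlp = English()
doc = nlp("Hello world. This is a test.")
span = doc[3:6]
span_doc = span.as_doc()  # a copy, not a view
# In the original doc, span[0].idx is an offset into the full text;
# in the copied doc, offsets start again from 0.
assert span[0].idx != 0
assert span_doc[0].idx == 0
```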
* Update test for span.as_doc()
* Make span.as_doc() return a copy. Closes #1537
* Document change to Span.as_doc()
The doc.retokenize() context manager wasn't resizing doc.tensor, leading to a mismatch between the number of tokens in the doc and the number of rows in the tensor. We fix this by deleting rows from the tensor. Merged spans are represented by the vector of their last token.
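A sketch of the idea (an illustrative helper, not the actual retokenizer internals):

```python
import numpy

def resize_tensor_after_merge(tensor, start, end):
    # Drop the rows of all merged tokens except the last one, so the
    # merged span is represented by the vector of its last token and
    # tensor rows stay aligned with the shorter token sequence.
    rows_to_delete = list(range(start, end - 1))
    return numpy.delete(tensor, rows_to_delete, axis=0)
```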
* Add test for resizing doc.tensor when merging
* Add test for resizing doc.tensor when merging. Closes #1963
* Update get_lca_matrix test for develop
* Fix retokenize if tensor unset