Expected an `entity_ruler.jsonl` file in the top-level model directory, i.e. the path passed to `from_disk` by default (the model path plus the component name), but with the suffix `.jsonl`.
* Preserve flags in EntityRuler
The EntityRuler (explosion/spaCy#3526) does not preserve the
overwrite flag (or `ent_id_sep`) when serialized. This
commit adds support for serialization/deserialization that preserves
the `overwrite` and `ent_id_sep` flags.
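A minimal sketch of the intended round-trip (the attribute names `overwrite` and `ent_id_sep` follow this description; the values are illustrative):
```python
import spacy
from spacy.pipeline import EntityRuler

nlp = spacy.blank("en")
ruler = EntityRuler(nlp, overwrite_ents=True, ent_id_sep="||")
ruler.add_patterns([{"label": "ORG", "pattern": "Apple"}])

# Round-trip through bytes: both flags should survive deserialization
restored = EntityRuler(nlp).from_bytes(ruler.to_bytes())
assert restored.overwrite          # previously dropped
assert restored.ent_id_sep == "||"
```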
* add signed contributor agreement
* flake8 cleanup
Mostly blank-line issues.
* mark test from the issue as needing a model
The test from the issue needs a language model for serialization,
but it wasn't originally marked accordingly.
* Add `phrase_matcher_attr` to allow passing args to the PhraseMatcher
This adds an argument that is passed through to the `PhraseMatcher`. For example,
it allows creating a case-insensitive phrase matcher when the
`EntityRuler` is created. References explosion/spaCy#3822
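For illustration, a sketch of the new argument in use (assuming a blank English pipeline):
```python
import spacy
from spacy.pipeline import EntityRuler

nlp = spacy.blank("en")
# Match on the LOWER attribute so phrase patterns become case-insensitive
ruler = EntityRuler(nlp, phrase_matcher_attr="LOWER")
ruler.add_patterns([{"label": "ORG", "pattern": "apple"}])
nlp.add_pipe(ruler)

doc = nlp("I work at APPLE")
assert [(ent.text, ent.label_) for ent in doc.ents] == [("APPLE", "ORG")]
```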
* remove unneeded model loading
The model didn't need to be loaded; I replaced it with
a change that doesn't require it (using existing fixtures).
* updated docstring for new argument
* updated docs to reflect new argument to the EntityRuler constructor
* change tempdir handling to be compatible with Python 2.7
* restore conflicted code in EntityRuler
Some code was dropped during merge-conflict resolution; this
restores it for `phrase_matcher_attr`.
* fixed typo in the code added back after conflicts
* flake8 compliance
Resolving the merge conflicts introduced some flake8 issues;
this fixes the spacing problems.
* test changes: attempt to fix flaky test on Python 3.5
These tests seem to be a little flaky on 3.5, so I changed the check to avoid
the comparisons that seem to fail sometimes.
* Add code to handle items saved before this change
This changes how the save files are handled and how the bytes
are stored. It adds checks to dispatch correctly
when it encounters bytes or files saved in the old format (plus tests
for those cases).
* use util function for tempdir management
Updated after PR comments: this code now uses the `make_tempdir` function from util
instead of doing it by hand.
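For reference, a minimal sketch of what such a tempdir helper typically looks like (modeled on the test utility, not necessarily the exact spaCy implementation):
```python
from contextlib import contextmanager
from pathlib import Path
import shutil
import tempfile

@contextmanager
def make_tempdir():
    """Create a temporary directory and remove it when the block exits."""
    path = Path(tempfile.mkdtemp())
    try:
        yield path
    finally:
        shutil.rmtree(str(path))
```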
* fix(util): fix decaying function output
* fix(util): better test and adhere to code standards
* fix(util): correct variable name, pytestify test, update website text
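For context, `decaying` yields an infinite series of linearly decaying values; a generic sketch of the intended behaviour (an assumption based on the function's name, not a copy of the spaCy source):
```python
def decaying(start, stop, decay):
    """Yield an infinite series of linearly decaying values,
    never dropping below the `stop` floor."""
    curr = float(start)
    while True:
        yield max(curr, stop)
        curr -= decay

# e.g. the first values of decaying(0.005, 0.001, 0.001):
# 0.005, 0.004, 0.003, 0.002, 0.001, 0.001, ...
```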
spaCy v2.1 switched to the built-in `re` module, where v2.0 had been using
the third-party `regex` library. When the tokenizer was deserialized on
Python 2.7, the `re.compile()` function was called with expressions that
featured escaped unicode codepoints that were not in Python 2.7's unicode
database.
Problems occurred when we had a range between two of these unknown
codepoints, like this:
```
'[\\uAA77-\\uAA79]'
```
On Python 2.7, the unknown codepoints are not unescaped correctly,
resulting in arbitrary out-of-range characters being matched by the
expression.
This problem does not occur if we instead have a range between two
unicode literals, rather than the escape sequences. To fix the bug, we
therefore add a new compat function that unescapes unicode sequences
using the `ast.literal_eval()` function. Care is taken to ensure we
do not also unescape non-unicode sequences.
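A sketch of the compat helper described above (the name `unescape_unicode` and the exact replacement dance are assumptions based on this description):
```python
import ast

def unescape_unicode(string):
    """Unescape \\uXXXX and \\UXXXXXXXX sequences without touching
    any other backslash escapes in the string."""
    if string is None:
        return string
    # Protect every backslash sequence first...
    string = string.replace("\\", "\\\\")
    # ...then re-expose only the unicode escapes.
    string = string.replace("\\\\u", "\\u").replace("\\\\U", "\\U")
    # literal_eval only parses the literal; it cannot execute code.
    return ast.literal_eval("'%s'" % string)
```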
Closes #3356.
- [x] I have submitted the spaCy Contributor Agreement.
- [x] I ran the tests, and all new and existing tests passed.
- [x] My changes don't require a change to the documentation, or if they do, I've added all required information.
Closes #2203. Closes #3268.
Lemmas set from outside the `Morphology` class were being overwritten. The result was especially confusing when deserialising, as it meant some lemmas could change when storing and retrieving a `Doc` object.
This PR applies two fixes:
1) When we go to set the lemma in the `Morphology` class, first check whether a lemma is already set. If so, don't overwrite.
2) When we load with `doc.from_array()`, take care to apply the `TAG` field first. This allows other fields to overwrite the `TAG` implied properties, if they're provided explicitly (e.g. the `LEMMA`).
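As a quick illustration of fix 2 (a sketch assuming an installed `en_core_web_sm` model):
```python
import spacy
from spacy.attrs import TAG, LEMMA
from spacy.tokens import Doc

nlp = spacy.load("en_core_web_sm")
doc = nlp("I saw cats")

# Serialize TAG and LEMMA, then restore them into a fresh Doc
arr = doc.to_array([TAG, LEMMA])
new_doc = Doc(doc.vocab, words=[t.text for t in doc])
new_doc.from_array([TAG, LEMMA], arr)

# TAG is applied first, so the explicit LEMMA values win
assert [t.lemma_ for t in new_doc] == [t.lemma_ for t in doc]
```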
## Checklist
- [x] I have submitted the spaCy Contributor Agreement.
- [x] I ran the tests, and all new and existing tests passed.
- [x] My changes don't require a change to the documentation, or if they do, I've added all required information.
## Description
This PR adds the ability to override custom extension attributes during merging. This will only work for attributes that are writable, i.e. attributes registered with a default value like `default=False`, or attributes that have both a getter *and* a setter implemented.
```python
Token.set_extension('is_musician', default=False)

doc = nlp("I like David Bowie.")
with doc.retokenize() as retokenizer:
    attrs = {"LEMMA": "David Bowie", "_": {"is_musician": True}}
    retokenizer.merge(doc[2:4], attrs=attrs)

assert doc[2].text == "David Bowie"
assert doc[2].lemma_ == "David Bowie"
assert doc[2]._.is_musician
```
### Types of change
enhancement
## Checklist
- [x] I have submitted the spaCy Contributor Agreement.
- [x] I ran the tests, and all new and existing tests passed.
- [x] My changes don't require a change to the documentation, or if they do, I've added all required information.
* splitting up Latin unicode interval
* removing hyphen as infix for French
* adding failing test for issue #1235
* test for issue #3002, which now works
* partial fix for issue #2070
* keep the hyphen as infix for French (as it was)
* restore French expressions with hyphen as infix (as it was)
* added succeeding unit test for issue #2656
* Fix issue #2822 with custom Italian exception
* Fix issue #2926 by allowing numbers right before infix /
* remove duplicate
* remove xfail for Issue #2179 fixed by Matt
* adjust documentation and remove reference to regex lib
* Fix matching on extension attrs and predicates
* Fix detection of match_id when using extension attributes. The match
ID is stored as the last entry in the pattern. We were checking for this
with nr_attr == 0, which didn't account for extension attributes.
* Fix handling of predicates. The wrong count was being passed through,
so even patterns that didn't have a predicate were being checked.
* Fix regex pattern
* Fix matcher set value test
* replace unicode categories with raw list of code points
* simplifying ranges
* fixing variable length quotes
* removing redundant regular expression
* small cleanup of regexp notations
* quotes and alpha as ranges instead of alternations
* removed most regexp dependencies and features
* exponential backtracking - unit tests
* rewrote expression with pathological backtracking (see the sketch after this list)
* disabling double hyphen tests for now
* test additional variants of repeating punctuation
* remove regex and redundant backslashes from load_reddit script
* small typo fixes
* disable double punctuation test for russian
* clean up old comments
* format block code
* final cleanup
* naming consistency
* French strings as unicode for Python 2 support
* French regular expression made case-insensitive
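A generic illustration of the pathological backtracking referenced above (toy patterns, not spaCy's actual expressions):
```python
import re
import timeit

text = "a" * 24 + "b"  # almost matches, forcing exhaustive backtracking

# Nested quantifiers like (a+)+ explore exponentially many ways to split
# the run of "a"s before failing on the trailing "b".
slow = timeit.timeit(lambda: re.match(r"(a+)+$", text), number=1)

# The equivalent flat pattern fails in linear time.
fast = timeit.timeit(lambda: re.match(r"a+$", text), number=1)
assert fast < slow
```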
This PR adds a test for an untested case of `Span.get_lca_matrix`, and fixes a bug for that scenario, which I introduced in [this PR](https://github.com/explosion/spaCy/pull/3089) (sorry!).
## Description
The previous implementation of get_lca_matrix was failing for the case `doc[j:k].get_lca_matrix()` where `j > 0`. A test has been added for this case and the bug has been fixed.
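A minimal repro of that case (a sketch; any pipeline with a parser works, e.g. an installed `en_core_web_sm`):
```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The dog chased the cat")

# A span that does NOT start at token 0 previously triggered the bug
lca = doc[1:4].get_lca_matrix()
# lca[i, j] is the span-internal index of the lowest common ancestor of
# tokens i and j, or -1 if that ancestor lies outside the span
print(lca)
```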
### Types of change
Bug fix
## Checklist
- [x] I have submitted the spaCy Contributor Agreement.
- [x] I ran the tests, and all new and existing tests passed.
- [x] My changes don't require a change to the documentation, or if they do, I've added all required information.
The doc.retokenize() context manager wasn't resizing doc.tensor, leading to a mismatch between the number of tokens in the doc and the number of rows in the tensor. We fix this by deleting rows from the tensor. Merged spans are represented by the vector of their last token.
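The resizing idea in isolation (an illustrative numpy sketch, not the actual retokenizer code):
```python
import numpy as np

tensor = np.random.uniform(size=(5, 4))  # 5 tokens, one 4-dim row each

# Merging tokens 2..3 into one: drop all merged rows except the last
# token's, so the row count matches the new doc length
start, end = 2, 4
tensor = np.delete(tensor, range(start, end - 1), axis=0)
assert tensor.shape == (4, 4)
```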
* Add test for resizing doc.tensor when merging
* Add test for resizing doc.tensor when merging. Closes #1963
* Update get_lca_matrix test for develop
* Fix retokenize if tensor unset
* Test on #2396: bug in Doc.get_lca_matrix()
* reimplementation of Doc.get_lca_matrix() (closes #2396)
* reimplement Span.get_lca_matrix(), and call it from Doc.get_lca_matrix()
* tests Span.get_lca_matrix() as well as Doc.get_lca_matrix()
* implement _get_lca_matrix as a helper function in doc.pyx; call it from Doc.get_lca_matrix and Span.get_lca_matrix
* use memory view instead of np.ndarray in _get_lca_matrix (faster)
* fix bug when calling Span.get_lca_matrix; return lca matrix as np.array instead of memoryview
* cleaner conditional, add comment
* Add failing test for matcher bug #3009
* Deduplicate matches from Matcher
* Update matcher ? quantifier test
* Fix bug with ? quantifier in Matcher
The ? quantifier indicates a token may occur zero or one times. If the
token pattern fit, the matcher would fail to consider valid matches
where the token pattern did not fit. Consider a simple regex like:
`.?b`
If we have the string 'b', the .? part will fit --- but then the 'b' in
the pattern will not fit, leaving us with no match. The same bug left us
with too few matches in some cases. For instance, consider:
`.?.?`
If we have a string of length two, like 'ab', we actually have three
possible matches here: [a, b, ab]. We were only recovering 'ab'. This
should now be fixed. Note that the fix also uncovered another bug, where
we weren't deduplicating the matches. There are actually two ways we
might match 'a' and two ways we might match 'b': as the second token of the pattern,
or as the first token of the pattern. This ambiguity is spurious, so we
need to deduplicate.
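A quick illustration of the intended behaviour (a sketch using the v2-style `Matcher.add` signature):
```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab)
# "a" may occur zero or one times before "b"
matcher.add("AB", None, [{"LOWER": "a", "OP": "?"}, {"LOWER": "b"}])

# Previously no match was found on "b" alone, because the optional
# token was effectively required to consume a token
doc = nlp("b")
assert len(matcher(doc)) == 1
```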
Closes #2464 and #3009
* Fix Python2
* issue #3012: add test
* add contributor agreement
* Make test work without models and fix typos
Typos fixed: `ten.pos_` instead of `ten.orth_`, and comparison against `"10"` instead of the integer 10.
See #3028. The solution in this patch is pretty debatable.
What we do is give the `TokenC` struct a `.norm` field, by repurposing the previously idle `.sense` attribute. It's nice to repurpose an existing field because it means `TokenC` doesn't change size, so even if someone's using the internals very deeply, nothing will break.
The weird thing here is that the `TokenC` and the `LexemeC` both have an attribute named NORM. This arguably assists in backwards compatibility. On the other hand, maybe it's really bad! We're changing the semantics of the attribute subtly, so maybe it's better if someone calling `lex.norm` gets a breakage, and is instead told to write `lex.default_norm`?
Overall I believe this patch makes the NORM feature work the way we sort of expected it to work. Certainly it's much more like how the docs describe it, and more in line with how we've been directing people to use the norm attribute. We'll also be able to use `token.norm` to do things like spelling correction, which is pretty cool.
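A small sketch of the behaviour this enables (exact norms depend on the English tokenizer-exception data, so treat the comment values as examples):
```python
import spacy

nlp = spacy.blank("en")
doc = nlp("I'm gonna go")

# Tokenizer exceptions can set a token-level NORM that differs from the
# lexeme's norm, e.g. "'m" -> "am" and "gon" -> "going"
print([t.norm_ for t in doc])
```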
## Description
- [x] Use [`black`](https://github.com/ambv/black) to auto-format all `.py` files.
- [x] Update flake8 config to exclude very large files (lemmatization tables etc.)
- [x] Update code to be compatible with flake8 rules
- [x] Fix various small bugs, inconsistencies and messy stuff in the language data
- [x] Update docs to explain new code style (`black`, `flake8`, when to use `# fmt: off` and `# fmt: on` and what `# noqa` means; see the short sketch below)
Once #2932 is merged, which auto-formats and tidies up the CLI, we'll be able to run `flake8 spacy` and actually get meaningful results.
At the moment, the code style and linting aren't applied automatically, but I'm hoping that the new [GitHub Actions](https://github.com/features/actions) will let us auto-format pull requests and post comments with relevant linting information.
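For anyone unfamiliar with the markers, a short sketch of how they are used (generic examples, not lines from the spaCy codebase):
```python
from typing import List  # noqa: F401  (flake8 would otherwise flag the unused import)

# fmt: off  (black leaves the hand-aligned block below untouched)
EXAMPLES = [
    ("short",      1),
    ("longer key", 2),
]
# fmt: on
```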
### Types of change
enhancement, code style
## Checklist
- [x] I have submitted the spaCy Contributor Agreement.
- [x] I ran the tests, and all new and existing tests passed.
- [x] My changes don't require a change to the documentation, or if they do, I've added all required information.