* Add missing int value option to top-level pattern validation in Matcher
* Adjust existing tests accordingly
* Add new test for valid pattern `{"LENGTH": int}`
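For illustration, a minimal sketch of a pattern this makes valid (assuming the v2-style `Matcher.add(key, on_match, *patterns)` signature):

```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab, validate=True)
# A plain int value at the top level is now accepted for attributes like LENGTH
matcher.add("FIVE_CHARS", None, [{"LENGTH": 5}])
doc = nlp("apple melon kiwi")
matches = matcher(doc)  # matches "apple" and "melon"
```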
* Replace PhraseMatcher with Aho-Corasick
Replace PhraseMatcher with the Aho-Corasick algorithm over numpy arrays
of the hash values for the relevant attribute. The implementation is
based on FlashText.
Speed should be similar to the previous PhraseMatcher. Match IDs can now
be removed easily, and matches no longer go missing with large keyword
lists / vocabularies.
Fixes #4308.
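A hypothetical pure-Python sketch of the FlashText-style search (the real implementation is Cython over hash values of the chosen attribute; all names here are illustrative):

```python
TERMINAL = object()  # reserved key marking the end of a stored keyword

def add_keyword(trie, hashes, match_id):
    # Walk/extend the trie along the keyword's attribute hashes
    node = trie
    for h in hashes:
        node = node.setdefault(h, {})
    node.setdefault(TERMINAL, set()).add(match_id)

def find_matches(trie, doc_hashes):
    matches = []
    for start in range(len(doc_hashes)):
        node = trie
        for end, h in enumerate(doc_hashes[start:], start):
            node = node.get(h)
            if node is None:
                break
            # Every terminal passed on the way down is a complete match
            for match_id in node.get(TERMINAL, ()):
                matches.append((match_id, start, end + 1))
    return matches
```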
* Restore support for pickling
* Fix internal keyword add/remove for numpy arrays
* Add missing loop for match ID set in search loop
* Remove cruft in matching loop for partial matches
There was a bit of unnecessary code left over from FlashText in the
matching loop to handle partial token matches, which we don't have with
PhraseMatcher.
* Replace dict trie with MapStruct trie
* Fix how match ID hash is stored/added
* Update fix for match ID vocab
* Switch from map_get_unless_missing to map_get
* Switch from numpy array to Token.get_struct_attr
Access token attributes directly in Doc instead of making a copy of the
relevant values in a numpy array.
Add an (unsatisfactory) warning for a hash collision with the reserved
terminal hash key. (Ideally it would change the reserved terminal hash and
redo the whole trie, but for now, I'm hoping there won't be collisions.)
* Restructure imports to export find_matches
* Implement full remove()
Remove unnecessary trie paths and free unused maps.
Parallel to Matcher, raise KeyError when attempting to remove a match ID
that has not been added.
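A usage sketch of the new removal behavior (v2-style `add(key, on_match, *docs)` signature assumed):

```python
from spacy.lang.en import English
from spacy.matcher import PhraseMatcher

nlp = English()
matcher = PhraseMatcher(nlp.vocab)
matcher.add("OBAMA", None, nlp.make_doc("Barack Obama"))
matcher.remove("OBAMA")  # prunes the now-unused trie paths and frees maps
matcher.remove("OBAMA")  # raises KeyError, matching Matcher's behavior
```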
* Store docs internally only as attr lists
* Reduces size for pickle
* Remove duplicate keywords store
Now that docs are stored as lists of attr hashes, there's no need to
have the duplicate _keywords store.
Check for relevant components in the pipeline when Matcher is called,
similar to the checks for PhraseMatcher in #4105.
* Keep track of attributes seen in patterns
* When Matcher is called on a Doc, check is_tagged for LEMMA, TAG, and POS patterns, and is_parsed for DEP patterns
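For example (sketch; the exact warning text is omitted):

```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")  # no tagger in the pipeline
matcher = Matcher(nlp.vocab)
matcher.add("VERBS", None, [{"POS": "VERB"}])
doc = nlp("nothing here has been tagged")
matches = matcher(doc)  # doc.is_tagged is False, so a warning is raised
```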
* Fix typo in rule-based matching docs
* Improve token pattern checking without validation
Add more detailed token pattern checks without full JSON pattern validation,
and provide more specific error messages.
Addresses #4070 (also related: #4063, #4100).
* Check whether top-level attributes in patterns and attr for PhraseMatcher are
in token pattern schema
* Check whether attribute value types are supported in general (as opposed to
per attribute with full validation)
* Report various internal error types (OverflowError, AttributeError, KeyError)
as ValueError with standard error messages
* Check for tagger/parser in PhraseMatcher pipeline for attributes TAG, POS,
LEMMA, and DEP
* Add error messages with relevant details on how to use validate=True or nlp()
instead of nlp.make_doc()
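For example, the kind of mistake the new checks and messages target (sketch, assuming a model with a tagger such as `en_core_web_sm`):

```python
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.load("en_core_web_sm")
matcher = PhraseMatcher(nlp.vocab, attr="POS")
# nlp.make_doc() only tokenizes, so the pattern would carry no POS tags;
# the error message points to nlp() (as used here) or validate=True
matcher.add("PATTERN", None, nlp("a phrase with tags"))
```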
* Support attr=TEXT for PhraseMatcher
* Add NORM to schema
* Expand tests for pattern validation, Matcher, PhraseMatcher, and EntityRuler
* Remove unnecessary .keys()
* Rephrase error messages
* Add another type check to Matcher
Add another type check to Matcher for more understandable error messages
in some rare cases.
* Support phrase_matcher_attr=TEXT for EntityRuler
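Usage sketch (v2-style API):

```python
from spacy.lang.en import English
from spacy.pipeline import EntityRuler

nlp = English()
# TEXT matches on the verbatim token text (equivalent to the default ORTH)
ruler = EntityRuler(nlp, phrase_matcher_attr="TEXT")
ruler.add_patterns([{"label": "ORG", "pattern": "Apple"}])
nlp.add_pipe(ruler)
doc = nlp("Apple is opening a new store")  # "Apple" -> ORG
```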
* Don't use spacy.errors in examples and bin scripts
* Fix error code
* Auto-format
Also try to get Azure Pipelines to finally start a build :(
* Update errors.py
Co-authored-by: Ines Montani <ines@ines.io>
Co-authored-by: Matthew Honnibal <honnibal+gh@gmail.com>
* Fix matching on extension attrs and predicates
* Fix detection of match_id when using extension attributes. The match
ID is stored as the last entry in the pattern. We were checking for this
with nr_attr == 0, which didn't account for extension attributes.
* Fix handling of predicates. The wrong count was being passed through,
so even patterns that didn't have a predicate were being checked.
* Fix regex pattern
* Fix matcher set value test
* Add custom MatchPatternError
* Improve validators and add validation option to Matcher
* Adjust formatting
* Never validate in Matcher within PhraseMatcher
If we do decide to make validate default to True, the PhraseMatcher's Matcher shouldn't ever validate. Here, we create the patterns automatically anyway (and it's currently unclear whether validation has performance impacts at very large scale).
In most cases, the PhraseMatcher will match on the verbatim token text or as of v2.1, sometimes the lowercase text. This means that we only need a tokenized Doc, without any other attributes.
If phrase patterns are created by processing large terminology lists with the full `nlp` object, this can easily make things a lot slower, because all components will be applied, even if we don't actually need the attributes they set (like part-of-speech tags or dependency labels).
The warning message also includes a suggestion to use nlp.make_doc or nlp.tokenizer.pipe for even faster processing. For now, the validation has to be enabled explicitly by setting validate=True.
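A sketch of the faster pattern creation the message suggests:

```python
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.load("en_core_web_sm")
terms = ["machine learning", "natural language processing"]
# Tokenize only, skipping the tagger/parser/NER the patterns don't need
patterns = list(nlp.tokenizer.pipe(terms))
matcher = PhraseMatcher(nlp.vocab)
matcher.add("TERMS", None, *patterns)
```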
* Update matcher engine for regex and extensions
Add support for matching over arbitrary Python predicate functions, and
arbitrary Python attribute getters. This will allow matching over regex
patterns, and allow supporting extension attributes.
The results of the Python predicate functions are cached, so that we don't
call the same predicate function twice for the same token. The extension
attributes are fetched into an array for each token in the doc. This
should minimise the performance impact of the new features.
We still need to wire up these features to the patterns, and test it
all.
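For example, the kinds of patterns this should enable (sketch):

```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab)
# Predicate values: set membership ("IN") and regex ("REGEX") per attribute
pattern = [{"LOWER": {"IN": ["like", "love"]}},
           {"TEXT": {"REGEX": "^[Cc]ats?$"}}]
matcher.add("LIKE_CATS", None, pattern)
doc = nlp("I like cats")
matches = matcher(doc)  # predicate results are cached per token
```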
* Work on wiring up extra attributes in matcher
* Work on tests for extra matcher attrs
* Add support for extension attrs to matcher
* Test extension attribute matching
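A sketch of matching on a custom extension attribute:

```python
import spacy
from spacy.matcher import Matcher
from spacy.tokens import Token

nlp = spacy.blank("en")
# The getter is a plain Python function, fetched per token during matching
Token.set_extension("is_fruit", getter=lambda t: t.text in ("apple", "pear"))
matcher = Matcher(nlp.vocab)
matcher.add("FRUIT", None, [{"_": {"is_fruit": True}}])
doc = nlp("an apple a day")
matches = matcher(doc)  # matches "apple"
```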
* Work on implementing predicate-based match patterns
* Get predicates working for set membership
* Add test for set membership
* Make extensions+predicates work
* Test matcher extensions
* Cache predicate results better in Matcher
* Remove print statement in matcher test
* Use srsly to get key for predicates
* Support nowrap setting in util.prints
* Tidy up and fix whitespace
* Simplify script and use read_jsonl helper
* Add JSON schemas (see #2928)
* Deprecate Doc.print_tree
Will be replaced with Doc.to_json, which will produce a unified format
* Add Doc.to_json() method (see #2928)
Converts Doc objects to JSON using the same unified format as the training data. The method also supports serializing selected custom attributes in the `doc._` space.
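A usage sketch (`my_attr` is a hypothetical extension):

```python
import spacy
from spacy.tokens import Doc

nlp = spacy.blank("en")
nlp.add_pipe(nlp.create_pipe("sentencizer"))  # sentence boundaries for "sents"
Doc.set_extension("my_attr", default=None)
doc = nlp("Hello world")
doc._.my_attr = "some value"
data = doc.to_json()                        # text, tokens, ents, sents, ...
data = doc.to_json(underscore=["my_attr"])  # also include doc._.my_attr
```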
* Remove outdated test
* Add write_json and write_jsonl helpers
* WIP: Update spacy train
* Tidy up spacy train
* WIP: Use wasabi for formatting
* Add GoldParse helpers for JSON format
* WIP: add debug-data command
* Fix typo
* Add missing import
* Update wasabi pin
* Add missing import
* 💫 Refactor CLI (#2943)
To be merged into #2932.
## Description
- [x] refactor CLI to use [`wasabi`](https://github.com/ines/wasabi)
- [x] use [`black`](https://github.com/ambv/black) for auto-formatting
- [x] add `flake8` config
- [x] move all messy UD-related scripts to `cli.ud`
- [x] make converters functions that take the opened file and return the converted data (instead of having them handle the IO)
### Types of change
enhancement
## Checklist
- [x] I have submitted the spaCy Contributor Agreement.
- [x] I ran the tests, and all new and existing tests passed.
- [x] My changes don't require a change to the documentation, or if they do, I've added all required information.
* Update wasabi pin
* Delete old test
* Update errors
* Fix typo
* Tidy up and format remaining code
* Fix formatting
* Improve formatting of messages
* Auto-format remaining code
* Add tok2vec stuff to spacy.train
* Fix typo
* Update wasabi pin
* Fix path checks for when train() is called as a function
* Reformat and tidy up pretrain script
* Update argument annotations
* Raise error if model language doesn't match lang
* Document new train command
* Auto-format tests with black
* Add flake8 config
* Tidy up and remove unused imports
* Fix redefinitions of test functions
* Replace orths_and_spaces with words and spaces
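I.e., test Docs are now constructed as (sketch):

```python
from spacy.lang.en import English
from spacy.tokens import Doc

nlp = English()
# Words plus per-token trailing-space flags instead of orths_and_spaces
doc = Doc(nlp.vocab, words=["Hello", "world", "!"], spaces=[True, False, False])
assert doc.text == "Hello world!"
```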
* Fix compatibility with pytest 4.0
* xfail test for now
Test was previously overwritten by the following test due to a naming conflict, so its failure wasn't reported
* Unfail passing test
* Only use fixture via arguments
Fixes pytest 4.0 compatibility
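I.e., the pytest 4.0-compatible style, where fixtures are injected as arguments rather than called directly (sketch):

```python
import pytest
from spacy.lang.en import English

@pytest.fixture
def en_tokenizer():
    return English().tokenizer

def test_tokenization(en_tokenizer):  # fixture requested via the argument
    doc = en_tokenizer("hello world")
    assert len(doc) == 2
```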
* Allow matching non-orth attributes in PhraseMatcher (see #1971)
Usage: `PhraseMatcher(nlp.vocab, attr='POS')`
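An expanded sketch (assumes a model with a tagger, e.g. `en_core_web_sm`):

```python
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.load("en_core_web_sm")
matcher = PhraseMatcher(nlp.vocab, attr="POS")
# The pattern's coarse POS sequence (DET NOUN) is what gets matched
matcher.add("DET_NOUN", None, nlp("the dog"))
doc = nlp("a cat sat on the mat")
matches = matcher(doc)  # e.g. "a cat" and "the mat"
```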
* Allow attr argument to be int
* Fix formatting
* Fix typo
## Description
Related issues: #2379 (should be fixed by separating model tests)
* **total execution time down from > 300 seconds to under 60 seconds** 🎉
* removed all model-specific tests that could only really be run manually anyway – those will now live in a separate test suite in the [`spacy-models`](https://github.com/explosion/spacy-models) repository and are already integrated into our new model training infrastructure
* changed all relative imports to absolute imports to prepare for moving the test suite from `/spacy/tests` to `/tests` (it'll now always test against the installed version)
* merged old regression tests into collections, e.g. `test_issue1001-1500.py` (about 90% of the regression tests are very short anyway)
* tidied up and rewrote existing tests wherever possible
### Todo
- [ ] move tests to `/tests` and adjust CI commands accordingly
- [x] move model test suite from internal repo to `spacy-models`
- [x] ~~investigate why `pipeline/test_textcat.py` is flaky~~
- [x] review old regression tests (leftover files) and see if they can be merged, simplified or deleted
- [ ] update documentation on how to run tests
### Types of change
enhancement, tests
## Checklist
- [x] I have submitted the spaCy Contributor Agreement.
- [x] I ran the tests, and all new and existing tests passed.
- [ ] My changes don't require a change to the documentation, or if they do, I've added all required information.