* implement textcat resizing for TextCatCNN
* resizing textcat in-place
* simplify code
* ensure predictions for old textcat labels remain the same after resizing (WIP)
* fix for softmax
* store softmax as attr
* fix ensemble weight copy and cleanup
* restructure slightly
* adjust documentation, update tests and quickstart templates to use latest versions
* extend unit test slightly
* revert unnecessary edits
* fix typo
* ensemble architecture won't be resizable for now
* use resizable layer (WIP)
* revert using resizable layer
* resizable container while avoiding shape inference trouble
* cleanup
* ensure model continues training after resizing
* use fill_b parameter
* use fill_defaults
* resize_layer callback
* format
* bump thinc to 8.0.4
* bump spacy-legacy to 3.0.6
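For illustration, a rough sketch of the resulting behavior, assuming one of the resizable architectures such as `TextCatBOW` (as noted above, the ensemble architecture is not resizable for now):

```python
import spacy

config = {
    "model": {
        "@architectures": "spacy.TextCatBOW.v1",
        "exclusive_classes": True,
        "ngram_size": 1,
        "no_output_layer": False,
    }
}
nlp = spacy.blank("en")
textcat = nlp.add_pipe("textcat", config=config)
textcat.add_label("POSITIVE")
textcat.add_label("NEGATIVE")
nlp.initialize()
# labels can now be added after initialization; predictions for the
# existing labels remain the same after resizing
textcat.add_label("NEUTRAL")
```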
* Change span lemmas to use original whitespace (fix #8368)
This is a redo of #8371 based off master.
The test for this required some changes to existing tests. I don't think
the changes were significant, but I'd like someone to check them.
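For example (assuming a pipeline that sets lemmas, such as `en_core_web_sm`):

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("I was reading the  papers")  # note the double space
span = doc[1:5]
# span.lemma_ now joins lemmas with each token's original whitespace
# instead of a single space, so the result aligns with the original text
print(span.lemma_)
```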
* Remove mystery docstring
This sentence was left incomplete for years, and now we will never know
how it ends.
* Fill in deps if not provided with heads
Before this change, if heads were passed without deps they would be
silently ignored, which could be confusing. See #8334.
* Use "dep" instead of a blank string
This is the customary placeholder dep. It might be better to show an
error here instead though.
* Throw error on heads without deps
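A minimal sketch of the final behavior, assuming the v3 `Doc` constructor:

```python
import spacy
from spacy.tokens import Doc

nlp = spacy.blank("en")
words = ["She", "slept"]

# heads together with deps work as expected
doc = Doc(nlp.vocab, words=words, heads=[1, 1], deps=["nsubj", "ROOT"])

# heads without deps now raise an error instead of being silently ignored
Doc(nlp.vocab, words=words, heads=[1, 1])  # raises ValueError
```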
* Add a test
* Fix tests
* Formatting
* Fix all tests
* Fix a test I missed
* Revise error message
* Clean up whitespace
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Update Catalan language data
Update Catalan language data based on contributions from the Text Mining
Unit at the Barcelona Supercomputing Center:
https://github.com/TeMU-BSC/spacy4release/tree/main/lang_data
* Update tokenizer settings for UD Catalan AnCora
Update for UD Catalan AnCora v2.7 with merged multi-word tokens.
* Update test
* Move prefix pattern to more generic infix pattern
* Clean up
For the Russian and Ukrainian lemmatizers, restrict the `pymorphy2`
requirement to the mode `pymorphy2` so that lookup or other lemmatizer
modes can be loaded without installing `pymorphy2`.
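A hedged sketch of what this enables (lookup tables are provided by `spacy-lookups-data`):

```python
import spacy

nlp = spacy.blank("ru")
# "pymorphy2" mode still requires the pymorphy2 package:
#   nlp.add_pipe("lemmatizer", config={"mode": "pymorphy2"})
# but other modes, e.g. lookup, can now be loaded without it
nlp.add_pipe("lemmatizer", config={"mode": "lookup"})
```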
* Show warning if entity_ruler runs without patterns
* Show warning if matcher runs without patterns
* fix wording
* unit test for warning once (WIP)
* warn W036 only once
* cleanup
* create filter_warning helper
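Roughly, the new warning behaves like this:

```python
import spacy

nlp = spacy.blank("en")
nlp.add_pipe("entity_ruler")  # no patterns added
# processing text now emits W036 to flag the pattern-less component,
# and the filter_warning helper ensures it is only shown once
nlp("Some text")
nlp("More text")
```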
* Don't add duplicate patterns (fix #8216)
* Refactor EntityRuler init
This simplifies the EntityRuler init code. This is helpful as prep for
allowing the EntityRuler to reset itself.
* Make EntityRuler.clear reset matchers
Includes a new test for this.
* Tidy PhraseMatcher instantiation
Since the attr can safely be None now, the `if` guard is no longer
required here.
Also renamed the `_validate` attr. Maybe it's not needed?
* Fix NER test
* Add test to make sure patterns aren't increasing
* Move test to regression tests
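A short sketch of the deduplication fix:

```python
import spacy

nlp = spacy.blank("en")
ruler = nlp.add_pipe("entity_ruler")
pattern = {"label": "ORG", "pattern": "Apple"}
ruler.add_patterns([pattern])
ruler.add_patterns([pattern])  # the duplicate is skipped
assert len(ruler.patterns) == 1
ruler.clear()  # also resets the internal matchers now
```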
The attributes `PROB`, `CLUSTER` and `SENT_END` are not supported by
`Lexeme.get_struct_attr` so should not be included through `attrs.IDS`
as supported attributes in `Doc.to_array` and other methods.
* Add all symbols in Unicode Currency Symbols block
In #8102 it came up that the rupee symbol was treated differently from
the dollar / euro / yen symbols. This adds many symbols that were not
already included.
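A sketch of the expected tokenization after the change:

```python
import spacy

nlp = spacy.blank("en")
# the rupee sign is now split off like the dollar / euro / yen symbols
assert [t.text for t in nlp("₹500")] == ["₹", "500"]
```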
* Fix test
* Fix training test
* unit test for pickling KB
* add pickling test for NEL
* KB to_bytes and from_bytes
* NEL to_bytes and from_bytes
* xfail pickle tests for now
* fix docs
* cleanup
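A minimal sketch of the new serialization round trip, assuming the v3 `KnowledgeBase` API:

```python
from spacy.kb import KnowledgeBase
from spacy.vocab import Vocab

vocab = Vocab()
kb = KnowledgeBase(vocab, entity_vector_length=3)
kb.add_entity("Q42", freq=12, entity_vector=[1.0, 2.0, 3.0])
kb.add_alias("Douglas", ["Q42"], [0.9])

# round-trip through bytes
kb2 = KnowledgeBase(vocab, entity_vector_length=3)
kb2.from_bytes(kb.to_bytes())
assert kb2.contains_entity("Q42")
```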
* Fix range in Span.get_lca_matrix
Fix the adjusted token index / lca matrix index ranges for
`_get_lca_matrix` for spans.
* The range for `k` should correspond to the adjusted indices in
`lca_matrix` with the `start` indexed at `0`
* Update test for v3.x
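For example (assuming a pipeline with a parser, such as `en_core_web_sm`):

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The quick brown fox jumps over the lazy dog")
span = doc[3:6]  # "fox jumps over"
lca = span.get_lca_matrix()
# indices are now relative to the span, with span.start mapped to 0
assert lca.shape == (3, 3)
```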
* Handle errors while multiprocessing
Handle errors while multiprocessing without hanging.
* Return the traceback for errors raised while processing a batch, which
can be handled by the top-level error handler
* Allow for shortened batches due to custom error handlers that ignore
errors and skip documents
* Define custom components at a higher level
* Also move up custom error handler
* Use simpler component for test
* Switch error type
* Adjust test
* Only call top-level error handler for exceptions
* Register custom test components within tests
Use global functions (so they can be pickled) but register the
components only within the individual tests.
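A hedged sketch of how a custom error handler interacts with multiprocessing (the handler itself is hypothetical):

```python
import spacy

nlp = spacy.blank("en")
nlp.add_pipe("sentencizer")

def skip_errors(proc_name, proc, docs, e):
    # log and skip the failing batch instead of raising
    print(f"{proc_name} failed on a batch: {e}")

nlp.set_error_handler(skip_errors)
# errors in worker processes now propagate (or are handled) instead of
# hanging the parent; handled batches may come back shortened
for doc in nlp.pipe(["One text.", "Another text."], n_process=2):
    print(doc.text)
```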
* Adapt tokenization methods from `pyvi` to preserve text encoding and
whitespace
* Add serialization support similar to Chinese and Japanese
Note: as for Chinese and Japanese, some settings are duplicated in
`config.cfg` and `tokenizer/cfg`.
* Handle partial entities in Span.as_doc
In `Span.as_doc` replace partial entities at the beginning or end of the
span with missing entity annotation.
Fixes a bug where invalid entity annotation (no initial `B`) was
returned for an initial partial entity.
* Check for empty span in ents conversion
Note: `Span.as_doc()` will still fail on an empty span due to failures
in `Span.vector`.
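A rough sketch of the fixed behavior:

```python
import spacy
from spacy.tokens import Span

nlp = spacy.blank("en")
doc = nlp("The New York Times wrote about it")
doc.ents = [Span(doc, 1, 4, label="ORG")]  # "New York Times"
span = doc[2:6]  # starts inside the entity
span_doc = span.as_doc()
# the leading partial entity is replaced by missing annotation instead
# of producing an invalid sequence with no initial B
```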
* Preserve existing ENT_KB_ID annotation in NER
Preserve `ent_kb_id` annotation on existing entity spans, which is not
preserved by the transition system.
* Simplify kb_id assignment
* Simplify further
* Add training option to set annotations on update
Add a `[training]` option called `set_annotations_on_update` to specify
a list of components for which the predicted annotations should be set
on `example.predicted` immediately after that component has been
updated. The predicted annotations can be accessed by later components
in the pipeline during the processing of the batch in the same `update`
call.
* Rename to annotates / annotating_components
* Add test for `annotating_components` when training from config
* Add documentation
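A hedged sketch using the final names (`annotating_components` in the config, `annotates` on `Language.update`); the `sentencizer` is just a stand-in component here:

```python
import spacy
from spacy.training import Example

nlp = spacy.blank("en")
nlp.add_pipe("sentencizer")
examples = [Example.from_dict(nlp.make_doc("One. Two."), {})]

# components listed in `annotates` set their predictions on
# example.predicted during update(), so later components in the same
# batch can use them; in config.cfg this corresponds to:
#   [training]
#   annotating_components = ["sentencizer"]
losses = nlp.update(examples, annotates=["sentencizer"])
```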
* Set up CI for tests with GPU agent
* Update tests for enabled GPU
* Fix steps filename
* Add parallel build jobs as a setting
* Fix test requirements
* Fix install test requirements condition
* Fix pipeline models test
* Reset current ops in prefer/require testing
* Fix more tests
* Remove separate test_models test
* Fix regression 5551
* fix StaticVectors for GPU use
* fix vocab tests
* Fix regression test 5082
* Move azure steps to .github and reenable default pool jobs
* Consolidate/rename azure steps
Co-authored-by: svlandeg <sofie.vanlandeghem@gmail.com>
* Update sent_starts in Example.from_dict
Update `sent_starts` for `Example.from_dict` so that `Optional[bool]`
values have the same meaning as for `Token.is_sent_start`.
Use `Optional[bool]` as the type for sent start values in the docs.
* Use helper function for conversion to ternary ints
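For example:

```python
from spacy.tokens import Doc
from spacy.training import Example
from spacy.vocab import Vocab

words = ["One", ".", "Two", "."]
predicted = Doc(Vocab(), words=words)
# values now mirror Token.is_sent_start: True = starts a sentence,
# False = does not, None = unspecified
example = Example.from_dict(
    predicted, {"words": words, "sent_starts": [True, False, None, None]}
)
```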
* Fix tokenizer cache flushing
Fix/simplify tokenizer init detection in order to fix cache flushing
when properties are modified.
* Remove init reloading logic
* Remove logic disabling `_reload_special_cases` on init
* Setting `rules` last in `__init__` (as before) means that setting
other properties doesn't reload any special cases
* Reset `rules` first in `from_bytes` so that setting other properties
during deserialization doesn't reload any special cases
unnecessarily
* Reset all properties in `Tokenizer.from_bytes` to allow any settings
to be `None`
* Also reset special matcher when special cache is flushed
* Remove duplicate special case validation
* Add test for special cases flushing
* Extend test for tokenizer deserialization of None values
* Update Tokenizer.explain with special matches
Update `Tokenizer.explain` and the pseudo-code in the docs to include
the processing of special cases that contain affixes or whitespace.
* Handle optional settings in explain
* Add test for special matches in explain
Add test for `Tokenizer.explain` for special cases containing affixes.
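A short sketch of `Tokenizer.explain` on a special case wrapped in affixes:

```python
import spacy

nlp = spacy.blank("en")
tokenizer = nlp.tokenizer
tokenizer.add_special_case("don't", [{"ORTH": "do"}, {"ORTH": "n't"}])
# explain() now also applies special cases that interact with affixes,
# so its output matches the tokenizer's actual tokens
assert [t[1] for t in tokenizer.explain("(don't)")] == ["(", "do", "n't", ")"]
```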
* Make vocab update in get_docs deterministic
The attribute `DocBin.strings` is a set. In `DocBin.get_docs` a given
vocab is updated by iterating over this set. Iteration over a Python set
produces an arbitrary ordering, so the vocab is updated
non-deterministically. When training (fine-tuning) a spaCy model, the
base model's vocabulary is updated with the new vocabulary from the
training data in exactly the way described above. After serialization,
the file `model/vocab/strings.json` is therefore sorted in an arbitrary
way, which prevents reproducible model training.
* Revert "Make vocab update in get_docs deterministic"
This reverts commit d6b87a2f55.
* Sort strings in StringStore serialization
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* extend span scorer with consider_label and allow_overlap
* unit test for spans y2x overlap
* add score_spans unit test
* docs for new fields in scorer.score_spans
* rename to include_label
* spell out if-else for clarity
* rename to 'labeled'
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
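A hedged sketch of the new options (the `cities` spans key is hypothetical):

```python
import spacy
from spacy.scorer import Scorer
from spacy.tokens import Span
from spacy.training import Example

nlp = spacy.blank("en")
pred, ref = nlp("London and Paris"), nlp("London and Paris")
for doc in (pred, ref):
    doc.spans["cities"] = [Span(doc, 0, 1, label="GPE"), Span(doc, 2, 3, label="GPE")]

scores = Scorer.score_spans(
    [Example(pred, ref)],
    attr="cities",
    getter=lambda doc, attr: doc.spans[attr],
    labeled=False,       # compare span boundaries only, ignore labels
    allow_overlap=True,  # don't reject overlapping spans
)
```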
* Support match alignments
* change naming from `match_alignments` to `with_alignments`; add conditional flow if `with_alignments` is given; validate `with_alignments`; add related test case
* remove added errors, use `bint` type, clean up whitespace
* fix missing newline at end of file
* Minor formatting
* Skip alignments processing if as_spans is set
* Add with_alignments to Matcher API docs
* Update website/docs/api/matcher.md
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
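A minimal sketch of `with_alignments`:

```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab)
matcher.add("HELLO", [[{"LOWER": "hello"}, {"OP": "?"}, {"LOWER": "world"}]])
doc = nlp("hello beautiful world")
# each match also carries the token-to-pattern-position alignment
for match_id, start, end, alignments in matcher(doc, with_alignments=True):
    print(doc[start:end].text, alignments)
```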
* Add failing test for PRFScore
* Fix erroneous implementation of __add__
* Simplify constructor
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
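For the `PRFScore` fix, a rough sanity check:

```python
from spacy.scorer import PRFScore

a, b = PRFScore(), PRFScore()
a.tp, a.fp, a.fn = 5, 1, 2
b.tp, b.fp, b.fn = 3, 0, 1

c = a + b  # __add__ now sums the underlying counts correctly
assert (c.tp, c.fp, c.fn) == (8, 1, 3)
```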
* Adjust custom extension data when copying user data in `Span.as_doc()`
* Restrict `Doc.from_docs()` to adjusting offsets for custom extension
data
* Update test to use extension
* (Duplicate bug fix for character offset from #7497)
Merge data from `doc.spans` in `Doc.from_docs()`.
* Fix internal character offset set when merging empty docs (only
affects tokens and spans in `user_data` if an empty doc is in the list
of docs)
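A short sketch of merging `doc.spans` with `Doc.from_docs()`:

```python
import spacy
from spacy.tokens import Doc

nlp = spacy.blank("en")
doc1, doc2 = nlp("First part"), nlp("Second part")
doc1.spans["noted"] = [doc1[0:1]]

merged = Doc.from_docs([doc1, doc2])
# doc.spans (and offsets stored in custom extension user_data) are
# carried over with offsets adjusted for the concatenation
assert [span.text for span in merged.spans["noted"]] == ["First"]
```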
In the retokenizer, only reset sent starts (with
`set_children_from_head`) if the doc is parsed. If there is no parse,
merged tokens keep the default unset value `token.is_sent_start == None`
after retokenization.
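A minimal sketch of the retokenizer behavior without a parse:

```python
import spacy

nlp = spacy.blank("en")
doc = nlp("I like New York")
with doc.retokenize() as retokenizer:
    retokenizer.merge(doc[2:4])  # "New York"
# with no parse, the merged token keeps the default unset value
assert doc[2].is_sent_start is None
```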