Commit Graph

2200 Commits

Author SHA1 Message Date
Adriane Boyd
95c0833656
Add training option to set annotations on update (#7767)
* Add training option to set annotations on update

Add a `[training]` option called `set_annotations_on_update` to specify
a list of components for which the predicted annotations should be set
on `example.predicted` immediately after that component has been
updated. The predicted annotations can be accessed by later components
in the pipeline during the processing of the batch in the same `update`
call. (A usage sketch follows after the list below.)

* Rename to annotates / annotating_components

* Add test for `annotating_components` when training from config

* Add documentation
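
A minimal sketch of the option in use, assuming the final names from the rename above (`annotating_components` in the config and a matching `annotates` keyword on `nlp.update`); the pipeline and example data are hypothetical:

```python
import spacy
from spacy.training import Example

nlp = spacy.blank("en")
nlp.add_pipe("sentencizer")
example = Example.from_dict(nlp.make_doc("This is great. Really great."), {})
# components listed in `annotates` have their predictions written to
# example.predicted during the update, so later components in the pipeline
# can read them within the same update() call
nlp.update([example], annotates=["sentencizer"])
print([sent.text for sent in example.predicted.sents])
```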
2021-04-26 16:53:53 +02:00
Sofie Van Landeghem
e0b29f8ef7
Fix scoring normalization (#7629)
* fix scoring normalization

* score weights by total sum instead of per component (illustrated in the sketch below)

* cleanup

* more cleanup
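
A small illustration (not spaCy's implementation) of the normalization described above: score weights are summed across components and divided by the grand total, rather than renormalized within each component.

```python
def combine_score_weights(per_component_weights):
    # merge all components' weights, then normalize by the grand total
    combined = {}
    for weights in per_component_weights:
        for key, value in weights.items():
            combined[key] = combined.get(key, 0.0) + value
    total = sum(combined.values())
    return {key: (value / total if total else 0.0) for key, value in combined.items()}

print(combine_score_weights([{"dep_uas": 0.5, "dep_las": 0.5}, {"ents_f": 1.0}]))
# {'dep_uas': 0.25, 'dep_las': 0.25, 'ents_f': 0.5} -- sums to 1.0 over all components
```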
2021-04-26 16:53:38 +02:00
Adriane Boyd
36ecba224e
Set up GPU CI testing (#7293)
* Set up CI for tests with GPU agent

* Update tests for enabled GPU

* Fix steps filename

* Add parallel build jobs as a setting

* Fix test requirements

* Fix install test requirements condition

* Fix pipeline models test

* Reset current ops in prefer/require testing

* Fix more tests

* Remove separate test_models test

* Fix regression 5551

* fix StaticVectors for GPU use

* fix vocab tests

* Fix regression test 5082

* Move azure steps to .github and reenable default pool jobs

* Consolidate/rename azure steps

Co-authored-by: svlandeg <sofie.vanlandeghem@gmail.com>
2021-04-22 14:58:29 +02:00
Adriane Boyd
f68fc29130
Update sent_starts in Example.from_dict (#7847)
* Update sent_starts in Example.from_dict

Update `sent_starts` for `Example.from_dict` so that `Optional[bool]`
values have the same meaning as for `Token.is_sent_start`.

Use `Optional[bool]` as the type for sent start values in the docs.

* Use helper function for conversion to ternary ints
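
A short sketch of the described semantics (example data is hypothetical): `True`, `False` and `None` in the dict behave like `Token.is_sent_start` and are converted to the ternary ints 1, -1 and 0.

```python
import spacy
from spacy.training import Example

nlp = spacy.blank("en")
words = ["Hello", "world", ".", "Goodbye", "world", "."]
doc = nlp.make_doc("Hello world . Goodbye world .")
# True = sentence start, False = not a start, None = unspecified
sent_starts = [True, False, False, True, None, None]
example = Example.from_dict(doc, {"words": words, "sent_starts": sent_starts})
print(example.get_aligned("SENT_START"))  # ternary ints, e.g. [1, -1, -1, 1, 0, 0]
```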
2021-04-22 11:32:45 +02:00
Adriane Boyd
f4339f9bff
Fix tokenizer cache flushing (#7836)
* Fix tokenizer cache flushing

Fix/simplify tokenizer init detection in order to fix cache flushing
when properties are modified (see the sketch at the end of this list).

* Remove init reloading logic

* Remove logic disabling `_reload_special_cases` on init
  * Setting `rules` last in `__init__` (as before) means that setting
    other properties doesn't reload any special cases
  * Reset `rules` first in `from_bytes` so that setting other properties
    during deserialization doesn't reload any special cases
    unnecessarily
* Reset all properties in `Tokenizer.from_bytes` to allow any settings
  to be `None`

* Also reset special matcher when special cache is flushed

* Remove duplicate special case validation

* Add test for special cases flushing

* Extend test for tokenizer deserialization of None values
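
A minimal sketch of the behavior this fixes (the special case is made up): modifying a tokenizer property after tokenizing should flush the cache, so the same string is retokenized with the new settings.

```python
import spacy

nlp = spacy.blank("en")
print([t.text for t in nlp("snackbar time")])   # ['snackbar', 'time'] (now cached)
rules = dict(nlp.tokenizer.rules or {})
rules["snackbar"] = [{"ORTH": "snack"}, {"ORTH": "bar"}]
nlp.tokenizer.rules = rules                      # property change flushes the cache
print([t.text for t in nlp("snackbar time")])   # ['snack', 'bar', 'time']
```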
2021-04-22 18:14:57 +10:00
Sofie Van Landeghem
cfad7e21d5
fix config parsing of ints/strings (#7755)
* add few failing tests for parsing integers and strings

* bump thinc to 8.0.3
2021-04-22 18:09:13 +10:00
Adriane Boyd
0e7f94b247
Update Tokenizer.explain with special matches (#7749)
* Update Tokenizer.explain with special matches

Update `Tokenizer.explain` and the pseudo-code in the docs to include
the processing of special cases that contain affixes or whitespace
(usage sketched below).

* Handle optional settings in explain

* Add test for special matches in explain

Add test for `Tokenizer.explain` for special cases containing affixes.
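
A short usage sketch of `Tokenizer.explain` on a string with affixes around a special case (the rule labels in the comment are illustrative):

```python
import spacy

nlp = spacy.blank("en")
# explain() reports which rule produced each token; after this change it also
# walks through special cases that carry affixes
for rule, token_text in nlp.tokenizer.explain("(Don't!)"):
    print(rule, repr(token_text))
# e.g. PREFIX '(', SPECIAL-1 'Do', SPECIAL-2 "n't", SUFFIX '!', SUFFIX ')'
```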
2021-04-19 19:08:20 +10:00
Adriane Boyd
1ad646cbcf
Improve checks for sourced components (#7490)
* Improve checks for sourced components

* Remove language class checks

* Convert python warning to logger warning

* Remove unused warning

* Fix formatting
2021-04-19 18:36:32 +10:00
Sofie Van Landeghem
8d7af5b2b1
Ensure hyphen in config file works as string value (#7642)
* add test for serializing '-' in a config file

* bump srsly to 2.4.1
2021-04-12 14:35:57 +02:00
Sofie Van Landeghem
27dbbb9903
Bugfix/nel crossing sentence (#7630)
* ensure each entity gets a KB ID, even when it's not within a sentence

* cleanup
2021-04-12 18:08:01 +10:00
Stanislav Schmidt
2516896849
Make vocab update in get_docs deterministic (#7603)
* Make vocab update in get_docs deterministic

The attribute `DocBin.strings` is a set. In `DocBin.get_docs`
a given vocab is updated by iterating over this set.
Iteration over a Python set produces an arbitrary ordering,
so the vocab is updated non-deterministically.

When training (fine-tuning) a spacy model, the base model's
vocabulary will be updated with the new vocabulary in the
training data in exactly the way described above. After
serialization, the file `model/vocab/strings.json` will
be sorted in an arbitrary way. This prevents reproducible
model training.

* Revert "Make vocab update in get_docs deterministic"

This reverts commit d6b87a2f55.

* Sort strings in StringStore serialization (sketched below)

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
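
An illustrative sketch (not the StringStore's internals) of why sorting fixes reproducibility: set iteration order is arbitrary, so serializing the sorted strings makes the saved output stable across runs.

```python
import srsly

strings = {"banana", "apple", "cherry"}        # set iteration order is arbitrary
payload = srsly.json_dumps(sorted(strings))    # stable, reproducible output
print(payload)                                 # ["apple","banana","cherry"]
```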
2021-04-09 11:53:13 +02:00
Adriane Boyd
8008e2f75b
Use morph hash in lemmatizer cache key (#7690)
Use the morph hash rather than the `MorphAnalysis` object in the cache
key so that the `Lemmatizer` can be pickled.
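
An illustrative sketch (not the Lemmatizer's code) of the idea: keying the cache on a plain morph hash instead of a rich analysis object keeps everything picklable.

```python
import pickle

cache = {}

def cached_lemma(orth: int, morph_hash: int, compute):
    # the key is a tuple of plain ints, not a MorphAnalysis-like object
    key = (orth, morph_hash)
    if key not in cache:
        cache[key] = compute(orth, morph_hash)
    return cache[key]

cached_lemma(1, 42, lambda o, m: "lemma")
pickle.dumps(cache)  # succeeds: keys and values are picklable primitives
```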
2021-04-08 13:22:38 +02:00
Sofie Van Landeghem
204c2f116b
Extend score_spans for overlapping & non-labeled spans (#7209)
* extend span scorer with consider_label and allow_overlap (usage sketched after this list)

* unit test for spans y2x overlap

* add score_spans unit test

* docs for new fields in scorer.score_spans

* rename to include_label

* spell out if-else for clarity

* rename to 'labeled'

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
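
A usage sketch, assuming the keyword names from the bullets above (`labeled`, `allow_overlap`): with labels ignored, a span predicted with the wrong label still counts as correct.

```python
import spacy
from spacy.scorer import Scorer
from spacy.tokens import Span
from spacy.training import Example

nlp = spacy.blank("en")
ref = nlp.make_doc("Berlin is in Germany")
pred = nlp.make_doc("Berlin is in Germany")
ref.ents = [Span(ref, 0, 1, label="GPE"), Span(ref, 3, 4, label="GPE")]
pred.ents = [Span(pred, 0, 1, label="LOC"), Span(pred, 3, 4, label="GPE")]
example = Example(pred, ref)
scores = Scorer.score_spans([example], attr="ents", labeled=False, allow_overlap=True)
print(scores["ents_f"])  # 1.0 once labels are ignored
```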
2021-04-08 12:19:17 +02:00
broaddeep
ee159b8543
Support match alignments (#7321)
* Support match alignments

* rename `match_alignments` to `with_alignments`, add a conditional path when `with_alignments` is given, validate `with_alignments`, and add a related test case

* remove added errors, utilize bint type, cleanup whitespace

* fix missing newline at end of file

* Minor formatting

* Skip alignments processing if as_spans is set

* Add with_alignments to Matcher API docs (usage sketched below)

* Update website/docs/api/matcher.md

Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
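
A usage sketch, assuming the final keyword name from the bullets above (`with_alignments`): each match then also carries the pattern-token index that matched each document token.

```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab)
matcher.add("CITY", [[{"LOWER": "new"}, {"LOWER": "york"}]])
doc = nlp("I love New York")
for match_id, start, end, alignments in matcher(doc, with_alignments=True):
    print(doc[start:end].text, alignments)  # e.g. "New York" [0, 1]
```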
2021-04-08 18:10:14 +10:00
graue70
81fd595223
Fix __add__ method of PRFScore (#7557)
* Add failing test for PRFScore

* Fix erroneous implementation of __add__ (a corrected sketch follows below)

* Simplify constructor

Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>

Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
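
A minimal corrected sketch (not spaCy's `PRFScore` itself): `__add__` sums the raw counts, and precision/recall/F are derived from the totals.

```python
class PRF:
    def __init__(self, tp: int = 0, fp: int = 0, fn: int = 0) -> None:
        self.tp, self.fp, self.fn = tp, fp, fn

    def __add__(self, other: "PRF") -> "PRF":
        # add the raw counts; the derived metrics are recomputed on demand
        return PRF(self.tp + other.tp, self.fp + other.fp, self.fn + other.fn)

    @property
    def precision(self) -> float:
        return self.tp / (self.tp + self.fp + 1e-100)

    @property
    def recall(self) -> float:
        return self.tp / (self.tp + self.fn + 1e-100)

    @property
    def fscore(self) -> float:
        p, r = self.precision, self.recall
        return 2 * p * r / (p + r + 1e-100)

print((PRF(1, 1, 0) + PRF(1, 0, 0)).fscore)  # ~0.8
```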
2021-04-08 17:34:14 +10:00
Adriane Boyd
348d1829c7
Preserve user data for DependencyMatcher on spans (#7528)
* Preserve user data for DependencyMatcher on spans

* Clean underscore in test

* Modify test to use extensions stored in user data
2021-03-30 12:26:22 +02:00
Adriane Boyd
27a48f2802
Fix/update extension copying in Span.as_doc and Doc.from_docs (#7574)
* Adjust custom extension data when copying user data in `Span.as_doc()`
* Restrict `Doc.from_docs()` to adjusting offsets for custom extension
data
  * Update test to use extension
  * (Duplicate bug fix for character offset from #7497)
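
A sketch of the targeted behavior (the extension name and offsets are illustrative): span extension data stored in `user_data` is carried over to the merged doc with adjusted character offsets.

```python
import spacy
from spacy.tokens import Doc, Span

Span.set_extension("topic", default=None)
nlp = spacy.blank("en")
doc1 = nlp("First document here")
doc2 = nlp("Second document here")
doc2[0:2]._.topic = "ordinal"
merged = Doc.from_docs([doc1, doc2])
print(merged[3:5]._.topic)  # expected "ordinal", now stored at the shifted offsets
```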
2021-03-30 09:49:12 +02:00
Adriane Boyd
139f655f34
Merge doc.spans in Doc.from_docs() (#7497)
Merge data from `doc.spans` in `Doc.from_docs()`.

* Fix internal character offset set when merging empty docs (only
affects tokens and spans in `user_data` if an empty doc is in the list
of docs)
2021-03-29 22:34:01 +11:00
Adriane Boyd
d59f968d08
Keep sent starts without parse in retokenization (#7424)
In the retokenizer, only reset sent starts (with
`set_children_from_head`) if the doc is parsed. If there is no parse,
merged tokens keep the unset default `token.is_sent_start == None` after
retokenization.
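
A short sketch of that behavior with a blank (unparsed) pipeline:

```python
import spacy

nlp = spacy.blank("en")  # no parser, so the doc has no dependency parse
doc = nlp("I like New York")
with doc.retokenize() as retokenizer:
    retokenizer.merge(doc[2:4])
print(doc[2].is_sent_start)  # None: the merged token keeps the unset default
```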
2021-03-29 22:32:00 +11:00
Adriane Boyd
deffc3a532
Update package requirements tests (#7409)
* Add hypothesis to packages skipped in version check

* Add numpy back to tests following 2df1ab8a
2021-03-11 16:24:31 +01:00
Sofie Van Landeghem
932887b950
textcat scoring fix and multi_label docs (#6974)
* add multi-label textcat to menu

* add infobox on textcat API

* add info to v3 migration guide

* small edits

* further fixes in doc strings

* add infobox to textcat architectures

* add textcat_multilabel to overview of built-in components

* spelling

* fix unrelated warn msg

* Add textcat_multilabel to quickstart [ci skip]

* remove separate documentation page for multilabel_textcategorizer

* small edits

* positive label clarification

* avoid duplicating information in self.cfg and fix textcat.score

* fix multilabel textcat too

* revert threshold to storage in cfg

* revert threshold stuff for multi-textcat

Co-authored-by: Ines Montani <ines@ines.io>
2021-03-09 23:04:22 +11:00
Adriane Boyd
3f3e8110dc
Fix lowercase augmentation (#7336)
* Fix aborted/skipped augmentation for `spacy.orth_variants.v1` if
lowercasing was enabled for an example
* Simplify `spacy.orth_variants.v1` for `Example` vs. `GoldParse`
* Preserve reference tokenization in `spacy.lower_case.v1`
2021-03-09 14:02:32 +11:00
Sofie Van Landeghem
cd70c3cb79
Fixing pretrain (#7342)
* initialize NLP with train corpus

* add more pretraining tests

* more tests

* function to fetch tok2vec layer for pretraining

* clarify parameter name

* test different objectives

* formatting

* fix check for static vectors when using vectors objective

* clarify docs

* logger statement

* fix init_tok2vec and proc.initialize order

* test training after pretraining

* add init_config tests for pretraining

* pop pretraining block to avoid config validation errors

* custom errors
2021-03-09 14:01:13 +11:00
svlandeg
d900c55061 consistently use registry as callable 2021-03-02 17:56:28 +01:00
Sofie Van Landeghem
212f0e779e
Support doc.spans in Example.from_dict (#7197)
* add support for spans in Example.from_dict

* add unit tests

* update error to E879
2021-03-03 01:12:54 +11:00
Ines Montani
635ae55b74
Merge pull request #7237 from adrianeboyd/bugfix/is-cython-func-7224 2021-03-03 00:05:16 +11:00
Adriane Boyd
0efb7413f9 Use make_tempdir instead 2021-03-01 17:54:14 +01:00
Adriane Boyd
e9f7f9a4bc Fix is_cython_func for additional imported code
* Fix `is_cython_func` for imported code loaded under the `python_code`
module name
* Add `make_named_tempfile` context manager to test utils to test
loading of imported code
* Add test for validation of `initialize` params in custom module
2021-03-01 16:37:39 +01:00
Sofie Van Landeghem
dd99872bb0
Fix spans weak ref in doc copy (#7225)
* failing unit test

* ensure that doc.spans refers to the copied doc, not the old

* add type info
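
A sketch of the fixed behavior (the span key and label are illustrative):

```python
import spacy
from spacy.tokens import Span

nlp = spacy.blank("en")
doc = nlp("Paris is nice")
doc.spans["cities"] = [Span(doc, 0, 1, label="GPE")]
copied = doc.copy()
# the spans in the copy should refer to the copied doc, not the original
assert copied.spans["cities"][0].doc is copied
```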
2021-02-28 12:32:48 +11:00
Sofie Van Landeghem
b92f81d5da
fix NEL config and IO, and n_sents functionality (#7100)
* fix NEL config and IO, and n_sents functionality

* add docs

* fix test
2021-02-22 14:49:52 +11:00
Sofie Van Landeghem
113e8d082b
only evaluate named entities for NEL if there is a corresponding gold span (#7074) 2021-02-22 11:06:50 +11:00
Boian Tzonev
cca8651fc8
Bulgarian tokenizer exceptions (#7114)
* [Bulgarian] Add tokenizer exceptions and like_num for Bulgarian

* [Bulgarian] Add tokenizer exceptions and like_num for Bulgarian
2021-02-19 19:19:19 +01:00
Sofie Van Landeghem
709c9e75af
span.ent only returns first sentence (#7084)
* return the first sentence when the span contains a sentence boundary

* docs fix

* small fixes

* cleanup
2021-02-19 23:02:38 +11:00
Ines Montani
f4f46b617f
Preserve sourced components in fill-config (fixes #7055) (#7058) 2021-02-14 14:02:14 +11:00
Matthew Honnibal
0fb8d437c0
Fix sentence fragments bug (#7056, #7035) (#7057)
* Add test for #7035

* Update test for issue 7056

* Fix test

* Fix transitions method used in testing

* Fix state EOL detection when rebuffering

* Clean up redundant fix
2021-02-14 13:38:13 +11:00
Ines Montani
9ba715ed16 Tidy up and auto-format 2021-02-13 12:55:56 +11:00
Ines Montani
34ee0fbd70
Merge pull request #7011 from Shumie82/master 2021-02-13 12:30:42 +11:00
Ines Montani
e583050547
Merge pull request #7039 from svlandeg/debug 2021-02-13 11:53:41 +11:00
Ines Montani
6c450decfc Fix punctuation settings and add to initialize tests 2021-02-13 11:51:21 +11:00
svlandeg
03b4ec7d7f fix typo 2021-02-12 14:30:16 +01:00
Adriane Boyd
5e47a54d29 Include noun chunks method when pickling Vocab 2021-02-12 13:27:46 +01:00
svlandeg
278e9eaa14 remove ner 2021-02-11 21:08:04 +01:00
svlandeg
ebeedfc70b regression test for 7029 2021-02-11 20:56:48 +01:00
Ines Montani
26bf642afd
Fix issue #7019: Handle None scores in evaluate printer (#7026) 2021-02-11 16:45:23 +11:00
Ines Montani
6b9026a219
Merge pull request #7000 from explosion/feature/project-yml-overrides
Support env vars and CLI overrides for project.yml
2021-02-11 12:31:45 +11:00
Ines Montani
ad9ce3c8f6 Fix issue #6950: allow pickling Tok2Vec with listeners 2021-02-11 11:37:39 +11:00
Peter Baumann
61b04a70d5
Run PhraseMatcher on Spans (#6918)
* Add regression test

* Run PhraseMatcher on Spans (usage sketched below)

* Add test for PhraseMatcher on Spans and Docs

* Add SCA

* Add test with 3 matches in Doc, 1 match in Span

* Update docs

* Use doc.length for find_matches in tokenizer

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
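
A usage sketch (pattern and text are made up): the same PhraseMatcher can now be called on a Span as well as on a Doc.

```python
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.blank("en")
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
matcher.add("CITY", [nlp.make_doc("new york")])
doc = nlp("I moved to New York last year")
print(len(matcher(doc)), len(matcher(doc[2:6])))  # 1 match in the Doc, 1 in the Span
```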
2021-02-10 23:43:32 +11:00
Ines Montani
21176c69b0 Update and add test 2021-02-10 14:12:00 +11:00
Ines Montani
c08b3f294c Support env vars and CLI overrides for project.yml 2021-02-10 13:45:27 +11:00
melonwater211
a7977b5143
The test spacy/tests/vocab_vectors/test_lexeme.py::test_vocab_lexeme_add_flag_auto_id seems to fail occasionally when the test suite is run in a random order. (#6956)
```python
    def test_vocab_lexeme_add_flag_auto_id(en_vocab):
        is_len4 = en_vocab.add_flag(lambda string: len(string) == 4)
        assert en_vocab["1999"].check_flag(is_len4) is True
        assert en_vocab["1999"].check_flag(IS_DIGIT) is True
        assert en_vocab["199"].check_flag(is_len4) is False
>       assert en_vocab["199"].check_flag(IS_DIGIT) is True
E       assert False is True
E        +  where False = <built-in method check_flag of spacy.lexeme.Lexeme object at 0x7fa155c36840>(3)
E        +    where <built-in method check_flag of spacy.lexeme.Lexeme object at 0x7fa155c36840> = <spacy.lexeme.Lexeme object at 0x7fa155c36840>.check_flag

spacy/tests/vocab_vectors/test_lexeme.py:49: AssertionError
```

>  `pytest==6.1.1`
>
>  `numpy==1.19.2`
>
> `Python version: 3.8.3`

To reproduce the error, run `pytest --random-order-bucket=global --random-order-seed=170158 -v spacy/tests`

If `test_vocab_lexeme_add_flag_auto_id` is run after `test_vocab_lexeme_add_flag_provided_id`, it fails.
It seems like `test_vocab_lexeme_add_flag_provided_id` uses the `IS_DIGIT` bit for testing purposes but does not reset the bit.

This solution seems to work, but if anyone has a better fix, please let me know and I will integrate it.
2021-02-07 07:51:34 +08:00