* Clean up Vocab constructor
* Change effective type of `strings` from `Iterable[str]` to `Optional[StringStore]`
* Don't automatically add strings to vocab
* Change default values to `None`
* Remove `**deprecated_kwargs`
* Format
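A minimal sketch of what the cleaned-up constructor usage looks like, assuming `strings` now accepts a pre-built `StringStore` (illustrative only, not taken from the actual code):

```python
from spacy.strings import StringStore
from spacy.vocab import Vocab

# Build the string store up front; the constructor no longer adds strings
# to the vocab automatically.
strings = StringStore(["hello", "world"])
vocab = Vocab(strings=strings)  # strings: Optional[StringStore], defaults to None
vocab_default = Vocab()         # no StringStore supplied
```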
* Handle deprecation of pkg_resources
* Replace `pkg_resources` with `importlib_metadata` for `spacy info --url`
* Remove requirements check from `spacy project` given the lack of
alternatives
* Fix installed model URL method and CI test
* Fix types/handling, simplify catch-all return
* Move imports instead of disabling requirements check
* Format
* Reenable test with ignored deprecation warning
* Fix except
* Fix return
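A hedged sketch of the kind of lookup this replacement enables using only the standard library; the helper name and logic below are assumptions, not the actual `spacy info --url` implementation:

```python
import importlib.metadata
import json

def installed_model_url(name: str) -> str:
    """Best-effort lookup of the URL a package was installed from."""
    dist = importlib.metadata.distribution(name)
    # pip records URL-based installs in direct_url.json inside the dist-info
    direct_url = dist.read_text("direct_url.json")
    if direct_url:
        return json.loads(direct_url).get("url", "")
    return ""
```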
* Make empty_kb() configurable.
* Format.
* Update docs.
* Be more specific in KB serialization test.
* Update KB serialization tests. Update docs.
* Remove doc update for batched candidate generation.
* Fix serialization of subclassed KB in tests.
* Format.
* Update docstring.
* Update docstring.
* Switch from pickle to json for custom field serialization.
* Add immediate left/right child/parent dependency relations
* Add tests for new REL_OPs: `>+`, `>-`, `<+`, and `<-`.
---------
Co-authored-by: Tan Long <tanloong@foxmail.com>
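A hedged example of how one of the new operators could be used with the `DependencyMatcher`; the pattern is made up and the operator semantics are assumed from the commit message above:

```python
import spacy
from spacy.matcher import DependencyMatcher

nlp = spacy.blank("en")  # a pipeline with a parser is needed for real matches
matcher = DependencyMatcher(nlp.vocab)
pattern = [
    {"RIGHT_ID": "verb", "RIGHT_ATTRS": {"POS": "VERB"}},
    # ">+" is assumed to match an immediate right child of the anchor token
    {
        "LEFT_ID": "verb",
        "REL_OP": ">+",
        "RIGHT_ID": "object",
        "RIGHT_ATTRS": {"DEP": "dobj"},
    },
]
matcher.add("VERB_OBJECT", [pattern])
```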
* Add unit test for explosion#12311
* Create punctuation.py for Swedish
* Remove `:` from infixes in Swedish punctuation.py
* Allow `:` as an infix if the succeeding character is uppercase
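Illustrative only (not the actual Swedish rules): an infix pattern along these lines splits on `:` only when the next character is uppercase.

```python
import spacy
from spacy.util import compile_infix_regex

nlp = spacy.blank("sv")
# Split on ":" only when the following character is an uppercase letter.
infixes = list(nlp.Defaults.infixes) + [r"(?<=\w):(?=[A-ZÅÄÖ])"]
nlp.tokenizer.infix_finditer = compile_infix_regex(infixes).finditer
print([t.text for t in nlp("Antal:Tre")])  # e.g. ["Antal", ":", "Tre"]
```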
* standardize predicate key format
* single key function
* Make optional args in key function keyword-only
---------
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Improve the correctness of _parse_patch
* If there are no more actions, do not attempt to make further
transitions, even if not all states are final.
* Assert that the number of actions for a step is the same as
the number of states.
* Reimplement distillation with oracle cut size
The code for distillation with an oracle cut size was not reimplemented
after the parser refactor. We did not notice, because we did not have
tests for this functionality. This change brings back the functionality
and adds this to the parser tests.
* Rename states2actions to _states_to_actions for consistency
* Test distillation max cuts in NER
* Mark parser/NER tests as slow
* Typo
* Fix invariant in _states_diff_to_actions
* Rename _init_batch -> _init_batch_from_teacher
* Ninja edit the ninja edit
* Check that we raise an exception when we pass the incorrect number of actions
* Remove unnecessary get
Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
* Write out condition more explicitly
---------
Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
* `Language.update`: ensure that tok2vec gets updated
The components in a pipeline can be updated independently. However,
tok2vec implementations are an exception to this, since they depend on
listeners for their gradients. The update method of a tok2vec
implementation computes the tok2vec forward and passes this along with a
backprop function to the listeners. This backprop function accumulates
gradients for all the listeners. There are two ways in which the
accumulated gradients can be used to update the tok2vec weights:
1. Call the `finish_update` method of tok2vec *after* the `update`
method is called on all of the pipes that use a tok2vec listener.
2. Pass an optimizer to the `update` method of tok2vec. In this
case, tok2vec will give the last listener a special backprop
function that calls `finish_update` on the tok2vec.
Unfortunately, `Language.update` did neither of these. Instead, it
immediately called `finish_update` on every pipe after `update`. As a
result, the tok2vec weights were updated before any gradients had been
accumulated from the listeners, and the listeners' gradients were only
used in the next call to `Language.update` (when `finish_update` was
called on tok2vec again).
This change fixes this issue by passing the optimizer to the `update`
method of trainable pipes, leading to use of the second strategy
outlined above.
The main updating loop in `Language.update` is also simplified by using
the `TrainableComponent` protocol consistently.
* Train loop: `sgd` is `Optional[Optimizer]`, do not pass false
* Language.update: call pipe finish_update after all pipe updates
This ensures correct and fast updates when multiple components update the
same parameters.
* Add comment why we moved `finish_update` to a separate loop
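A minimal sketch of the final update flow described above (simplified; not the actual `Language.update` implementation, and the loaded pipeline is just an example):

```python
import spacy
from spacy.training import Example

nlp = spacy.load("en_core_web_sm")  # any pipeline with a shared tok2vec/listeners
optimizer = nlp.create_optimizer()
examples = [
    Example.from_dict(
        nlp.make_doc("Apple is a company."), {"entities": [(0, 5, "ORG")]}
    )
]
losses = {}

# First pass: every trainable pipe accumulates gradients; tok2vec listeners
# collect their gradients during the downstream pipes' updates.
for name, proc in nlp.pipeline:
    if hasattr(proc, "update"):
        proc.update(examples, drop=0.0, losses=losses)

# Second pass: only after all pipes (and thus all listeners) have contributed
# gradients are the shared weights actually updated.
for name, proc in nlp.pipeline:
    if hasattr(proc, "finish_update"):
        proc.finish_update(optimizer)
```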
* Remove backwards-compatible overwrite from Entity Linker
This also adds a docstring about overwrite, since it wasn't present.
* Fix docstring
* Remove backward compat settings in Morphologizer
This also needed a docstring added.
For this component it's less clear what the right overwrite settings
are.
* Remove backward compat from sentencizer
This was simple
* Remove backward compat from senter
Another simple one
* Remove backward compat setting from tagger
* Add docstrings
* Update spacy/pipeline/morphologizer.pyx
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Update docs
---------
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
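If code relied on the old backwards-compatible defaults, the behavior can now be set explicitly in the component config; a hedged example (assuming the setting is exposed as `overwrite`, as the docstrings above suggest):

```python
import spacy

nlp = spacy.blank("en")
# Explicitly choose whether existing annotations may be overwritten instead
# of relying on the removed backwards-compatible default.
nlp.add_pipe("senter", config={"overwrite": False})
```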
* change logging call for spacy.LookupsDataLoader.v1
* substitutions in language and _util
* various more substitutions
* add string formatting guidelines to contribution guidelines
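A hedged illustration of the logging style the guidelines point toward: deferred `%s` formatting so the message is only built when the log level is enabled (message text and variable are placeholders):

```python
import logging

logger = logging.getLogger(__name__)
lookups_path = "lookups/lemma_lookup.json"  # placeholder value

# Preferred: the logging framework interpolates lazily.
logger.debug("Loading lookups from: %s", lookups_path)
# Avoided: an eager f-string is formatted even when DEBUG is disabled.
# logger.debug(f"Loading lookups from: {lookups_path}")
```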
* Move Entity Linker v1 component to spacy-legacy
This is a follow-up to #11889 that moves the component instead of
removing it.
In general, we never import from spacy-legacy in spaCy proper. However,
to use this component, that kind of import will be necessary. I was able
to test this without issues, but is this current import strategy
acceptable? Or should we put the component in a registry?
* Use spacy-legacy pr for CI
This will need to be reverted before merging.
* Add temporary step to log installed spacy-legacy version
* Modify requirements.txt to trigger tests
* Add comment to Python to trigger tests
* TODO REVERT This is a commit with logic changes to trigger tests
* Remove pipe from YAML
Works locally, but possibly this is causing a quoting error or
something.
* Revert "TODO REVERT This is a commit with logic changes to trigger tests"
This reverts commit 689fae71f3.
* Revert "Add comment to Python to trigger tests"
This reverts commit 11840fc598.
* Add more logging
* Try installing directly in workflow
* Try explicitly uninstalling spacy-legacy first
* Cat requirements.txt to confirm contents
In the branch, the thinc version spec is `thinc>=8.1.0,<8.2.0`. But in
the logs, it's clear that a development release of 9.0 is being
installed. It's not clear why that would happen.
* Log requirements at start of build
* TODO REVERT Change thinc spec
Want to see what happens to the installed thinc spec with this change.
* Update thinc requirements
This makes it the same as it was before the merge, >=8.1.0,<8.2.0.
* Use same thinc version as v4 branch
* TODO REVERT Mark dependency check as xfail
spacy-legacy is specified as a git checkout in requirements.txt while
this PR is in progress, which makes the consistency check here fail.
* Remove debugging output / install step
* Revert "Remove debugging output / install step"
This reverts commit 923ea7448b.
* Clean up debugging output
The manual install step with the URL fragment seems to have caused
issues on Windows due to the = in the URL being misinterpreted. On the
other hand, removing it seems to mean the git version of spacy-legacy
isn't actually installed.
This PR removes the URL fragment but keeps the direct command-line
install. Additionally, since it looks like this job is configured to use
the default shell (and not bash), it removes a comment that upsets the
Windows cmd shell.
* Revert "TODO REVERT Mark dependency check as xfail"
This reverts commit d4863ec156.
* Fix requirements.txt, increasing spacy-legacy version
* Raise spacy legacy version in setup.cfg
* Remove azure build workarounds
* make spacy-legacy version explicit in error message
* Remove debugging line
* Suggestions from code review
* Init
* fix tests
* Update spacy/errors.py
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Fix test_blank_languages
* Rename xx to mul in docs
* Format _util with black
* prettier formatting
---------
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
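A hedged example of what the rename means in user code:

```python
import spacy

# The multi-language pipeline is now created with "mul"
# instead of the previous "xx" code.
nlp = spacy.blank("mul")
```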
* Language.distill: copy both reference and predicted
In distillation we also modify the teacher docs (e.g. in tok2vec
components), so we need to copy both the reference and predicted doc.
Problem caught by @shadeMe
* Make new `_copy_examples` args kwonly
* Add the configuration schema for distillation
This also adds the default configuration and some tests. The schema will
be used by the training loop and `distill` subcommand.
* Format
* Change distillation shortopt to -d
* Fix description of max_epochs
* Rename distillation flag to -dt
* Rename `pipe_map` to `student_to_teacher`
* Don't re-download installed models
When downloading a model, this checks if the same version of the same
model is already installed. If it is then the download is skipped.
This is necessary because pip uses the final download URL for its
caching feature, but because of the way models are hosted on GitHub,
their URLs change every few minutes.
* Use importlib instead of meta.json
* Use get_package_version
* Add untested, disabled test
---------
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
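A hedged sketch of the skip-if-installed check, built from the helpers named in the commits above; the wrapper function itself is hypothetical:

```python
from spacy.util import get_package_version, is_package

def already_installed(model_name: str, required_version: str) -> bool:
    """True if the same version of the same model is already installed."""
    return is_package(model_name) and get_package_version(model_name) == required_version
```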
* Add `Language.distill`
This method is the distillation counterpart of `Language.update`. It
takes a teacher `Language` instance and distills the student pipes on
the teacher pipes.
* Apply suggestions from code review
Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
* Clarify how Example is used in distillation
* Update transition parser distill docstring for examples argument
* Pass optimizer to `TrainablePipe.distill`
* Annotate pipe before update
As discussed internally, we want to let a pipe annotate before doing an
update with gold/silver data. Otherwise, the output may be (too)
informed by the gold/silver data.
* Rename `component_map` to `student_to_teacher`
* Better synopsis in `Language.distill` docstring
* `name` -> `student_name`
* Fix labels type in docstring
* Mark distill test as slow
* Fix `student_to_teacher` type in docs
---------
Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
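A hedged sketch of the new API described above; the pipelines and keyword arguments are assumptions and not verified against the final signature:

```python
import spacy
from spacy.training import Example

teacher = spacy.load("en_core_web_lg")  # assumed teacher pipeline
student = spacy.blank("en")             # assumed student with matching pipe names
# The teacher's predictions stand in for gold annotations, so raw text is
# enough to construct the examples.
examples = [Example.from_dict(student.make_doc("Distill me."), {})]
optimizer = student.initialize()
losses = student.distill(teacher, examples, sgd=optimizer, losses={})
```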
* Normalize whitespace in evaluate CLI output test
Depending on terminal settings, lines may be padded to the screen width,
so the comparison is too strict with only the command string replacement.
* Move to test util method
* Change to normalization method
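A hedged sketch of the kind of normalization helper this refers to (the actual test util may differ):

```python
import re

def normalize_whitespace(text: str) -> str:
    """Collapse whitespace runs so terminal padding doesn't break comparisons."""
    return re.sub(r"\s+", " ", text).strip()

assert normalize_whitespace("TOK   99.2  \n") == "TOK 99.2"
```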
* Add span_id to Span.char_span, update Doc/Span.char_span docs
`Span.char_span(id=)` should be removed in the future.
* Also use Union[int, str] in Doc docstring
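A hedged example of the added parameter; the text and the `span_id` values are placeholders:

```python
import spacy

nlp = spacy.blank("en")
doc = nlp("San Francisco is foggy")
# Doc.char_span already accepts span_id; this change adds it to Span.char_span.
span = doc.char_span(0, 13, label="GPE", span_id="sf")
inner = span.char_span(0, 3, span_id="sf-prefix")
```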
* WIP
* rm ipython embeds
* rm total
* WIP
* cleanup
* cleanup + reword
* rm component function
* remove migration support form
* fix reference dataset for dev data
* additional fixes
- set approach to identifying unique trees
- adjust line length on messages
- add logic for detecting docs without annotations
* use 0 instead of None for no annotation
* partial annotation support
* initial tests for _compile_gold lemma attributes
Using the example data from the edit tree lemmatizer tests for:
- lemmatizer_trees
- partial_lemma_annotations
- n_low_cardinality_lemmas
- no_lemma_annotations
* adds output test for cli app
* switch msg level
* rm unclear uniqueness check
* Revert "rm unclear uniqueness check"
This reverts commit 6ea2b3524b.
* remove good message on uniqueness
* formatting
* use en_vocab fixture
* clarify data set source in messages
* remove unnecessary import
Co-authored-by: svlandeg <svlandeg@github.com>
* Add `spacy.PlainTextCorpusReader.v1`
This is a corpus reader that reads plain text corpora with the following
format:
- UTF-8 encoding.
- One line per document.
- Blank lines are ignored.
It is useful for applications that deal with very large corpora, such as
distillation, and don't want the space overhead of
serialized formats. Additionally, many large corpora already use such
a text format, keeping the necessary preprocessing to a minimum.
* Update spacy/training/corpus.py
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* docs: add version to `PlainTextCorpus`
* Add docstring to registry function
* Add plain text corpus tests
* Only strip newline/carriage return
* Add return type _string_to_tmp_file helper
* Use a temporary directory in place of file name
Different OS auto delete/sharing semantics are just wonky.
* This will be new in 3.5.1 (rather than 4)
* Test improvements from code review
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
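A hedged usage sketch; the `PlainTextCorpus` class name appears in the commits above, while the import path and signature are assumptions:

```python
import spacy
from spacy.training import PlainTextCorpus

nlp = spacy.blank("en")
corpus = PlainTextCorpus("corpus.txt")  # UTF-8, one document per line
for example in corpus(nlp):             # blank lines are skipped
    print(example.reference.text)
```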
* Don't pass mem pool to new lexeme function
* Remove unused mem from function args
Two methods calling _new_lexeme, get and get_by_orth, took mem arguments
just to call the internal method. That's no longer necessary, so this
cleans it up.
* prettier formatting
* Remove more unused mem args
* Refactor _scores2guesses
* Handle arrays on GPU
* Convert argmax result to raw integer
Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
* Use NumpyOps() to copy data to CPU
Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
* Changes based on review comments
* Use different _scores2guesses depending on tree_k
* Add tests for corner cases
* Add empty line for consistency
* Improve naming
Co-authored-by: Daniël de Kok <me@github.danieldk.eu>
* Improve naming
Co-authored-by: Daniël de Kok <me@github.danieldk.eu>
Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
Co-authored-by: Daniël de Kok <me@github.danieldk.eu>
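A hedged illustration of the two details mentioned above: copying a (possibly GPU) array to the CPU with `NumpyOps` and converting the argmax result to a plain integer. This is not the actual lemmatizer code:

```python
import numpy
from thinc.api import NumpyOps

ops = NumpyOps()
scores = numpy.asarray([[0.1, 0.7, 0.2]], dtype="float32")  # stand-in for model output
scores_cpu = ops.asarray(scores)     # ensure the array lives on the CPU
guess = int(scores_cpu[0].argmax())  # raw Python int rather than a 0-d array
```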
* API docs: Rename kb_in_memory to inmemorylookupkb, add to sidebar
* adjust to mdx
* linkout to InMemoryLookupKB at first occurrence in kb.mdx
* fix links to docs
* revert Azure trigger setting (I'll make a separate PR)
Co-authored-by: svlandeg <svlandeg@github.com>