* Address upcoming numpy v1.25 deprecations in test suite
* Temporarily test most recent numpy prerelease in CI
* Revert "Temporarily test most recent numpy prerelease in CI"
This reverts commit d75a66e55e.
* Add scorer option to return per-component scores
Add `per_component` option to `Language.evaluate` and `Scorer.score` to
return scores keyed by `tokenizer` (hard-coded) or by component name.
Add option to `evaluate` CLI to score by component. Per-component scores
can only be saved to JSON.
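A usage sketch of the new option (the pipeline and example data here are illustrative, not from this change):

```python
import spacy
from spacy.training import Example

nlp = spacy.load("en_core_web_sm")  # illustrative pipeline
examples = [
    Example.from_dict(nlp.make_doc("Berlin is a city."), {"entities": [(0, 6, "GPE")]})
]

# scores keyed by component name, plus the hard-coded "tokenizer" key
scores = nlp.evaluate(examples, per_component=True)
```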
* Update help text and messages
This reverts commit 6f314f99c4.
We are reverting this until we can support this normalization more
consistently across vectors, training corpora, and lemmatizer data.
* Use Latin normalization for Serbian attrs
Use Latin normalization for Serbian `NORM`, `PREFIX`, and `SUFFIX`.
* Update NORMs in tokenizer exceptions and related tests
* Add tests for all custom lex attrs
* Remove unused imports
* Add spans in spacy benchmark
The current implementation of `spacy benchmark accuracy` / `spacy evaluate`
doesn't include the "spans" type, so calling the command doesn't render
the displaCy HTML file that is needed.
This PR fixes that by adding a new parameter for "spans"
and calling the appropriate displaCy style.
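For reference, a minimal sketch of the displaCy call this enables (per the later "spans -> span" fix, the style name is `span`; the doc and span group are illustrative):

```python
import spacy
from spacy import displacy
from spacy.tokens import Span

nlp = spacy.blank("en")
doc = nlp("Welcome to the Bank of China.")
doc.spans["sc"] = [Span(doc, 3, 6, "ORG")]  # illustrative span group

# render span annotations as an HTML page
html = displacy.render(doc, style="span", page=True)
```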
* Reformat file with black
* Add tests for evaluate
* Fix spans -> span for displacy style
* Update test to check render instead
* Update source so mypy passes
* Add parser information to avoid warnings
* Add noun chunking to la syntax iterators
* Expand list of numeral, ordinal words
* Expand abbreviations in la tokenizer_exceptions
* Add example sents
* Update spacy/lang/la/syntax_iterators.py
Reorganize la syntax iterators
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
* Minor updates based on review
* fix call
---------
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
* Add default to MorphAnalysis.get
Similar to `dict.get`, allow a `default` option for `MorphAnalysis.get` so
the user can provide a default return value if the field is not found.
The default return value remains `[]`, which is not the same as
`dict.get`, but is already established as this method's default return
value, with the return type `List[str]`. However, the new `default` option
does not enforce that the user-provided default is actually `List[str]`.
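A sketch of the behavior described above (the feature names are illustrative):

```python
import spacy

nlp = spacy.blank("en")
doc = nlp("cats")
morph = doc[0].morph  # empty MorphAnalysis in a blank pipeline

morph.get("Number")                    # [] (the established default)
morph.get("Number", default=["Sing"])  # ["Sing"] (new dict-like default)
```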
* Restore test case
* [wip] Update
* [wip] Update
* Add initial port
* [wip] Update
* Fix all imports
* Add spancat_exclusive to pipeline
* [WIP] Update
* [ci skip] Add breakpoint for debugging
* Use spacy.SpanCategorizer.v1 as default architecture
* Update spacy/pipeline/spancat_exclusive.py
Co-authored-by: kadarakos <kadar.akos@gmail.com>
* [ci skip] Small updates
* Use Softmax v2 directly from thinc
* Cache the label map
* Fix mypy errors
However, I ignored line 370 because it opened up a bunch of type errors
that might be trickier to solve and might lead to a more complicated
codebase.
* avoid multiplication with 1.0
Co-authored-by: kadarakos <kadar.akos@gmail.com>
* Update spacy/pipeline/spancat_exclusive.py
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
* Update component versions to v2
* Add scorer to docstring
* Add _n_labels property to SpanCategorizer
Instead of using len(self.labels) in initialize() I am using a private
property self._n_labels. This achieves implementation parity and allows
me to delete the whole initialize() method for spancat_exclusive (since
it's now the same as spancat's).
* Inherit from SpanCat instead of TrainablePipe
This commit changes the inheritance structure of Exclusive_Spancat:
it now inherits from SpanCategorizer rather than TrainablePipe. This
allows me to remove duplicate methods that are already present in
the parent class.
* Revert documentation link to spancat
* Fix init call for exclusive spancat
* Update spacy/pipeline/spancat_exclusive.py
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Import Suggester from spancat
* Include zero_init.v1 for spancat
* Implement _allow_extra_label to use _n_labels
To ensure that spancat / spancat_exclusive cannot be resized after
initialization, I overrode the _allow_extra_label() method inherited from
spacy/pipeline/trainable_pipe.pyx so that it uses self._n_labels instead
of len(self.labels) for the check.
I think that changing it locally is a better solution than forcing each
class that inherits TrainablePipe to use the self._n_labels attribute.
Also note that I turned off black formatting in this block of code
because it reads better without the overhang.
* Extend existing tests to spancat_exclusive
In this commit, I extended the existing tests for spancat to include
spancat_exclusive. I parametrized the test functions with 'name'
(the same variable name used for textcat and textcat_multilabel) for each
applicable test.
TODO: Add overfitting tests for spancat_exclusive
* Update documentation for spancat
* Turn on formatting for allow_extra_label
* Remove initializers in default config
* Use DEFAULT_EXCL_SPANCAT_MODEL
I also renamed spancat_exclusive_default_config to
spancat_excl_default_config because black formats the longer name in an
unappealing way.
* Update documentation
Update grammar and usage
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Clarify docstring for Exclusive_SpanCategorizer
* Remove mypy ignore and typecast labels to list
* Fix documentation API
* Use a single variable for tests
* Update defaults for number of rows
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Put back initializers in spancat config
Whenever I remove model.scorer.init_w and model.scorer.init_b,
I encounter an error in the test:
SystemError: <method '__getitem__' of 'dict' objects> returned a result
with an error set.
My Thinc version is 8.1.5, but I haven't been able to track down what's
causing the error.
* Update spancat_exclusive docstring
* Remove init_W and init_B parameters
This commit is expected to fail until the new Thinc release.
* Require thinc>=8.1.6 for serializable Softmax defaults
* Handle zero suggestions to make tests pass
I'm not sure if this is the most elegant solution, but the
_make_span_group function MUST return an empty SpanGroup if there are no
suggestions.
The error happens when the 'scores' variable is empty, because we then
cannot compute 'predicted' and the other downstream variables.
* Better approach for handling zero suggestions
* Update website/docs/api/spancategorizer.md
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Update spancategorizer headers
* Apply suggestions from code review
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
* Add default value in negative_weight in docs
* Add default value in allow_overlap in docs
* Update how spancat_exclusive is constructed
In this commit, I added the following:
- Put the default values of negative_weight and allow_overlap
in the default_config dictionary.
- Rename make_spancat -> make_exclusive_spancat
* Run prettier on spancategorizer.mdx
* Change exactly one -> at most one
* Add suggester documentation in Exclusive_SpanCategorizer
* Add suggester to spancat docstrings
* merge multilabel and singlelabel spancat
* rename spancat_exclusive to singlelabel
* wire up different make_spangroups for single and multilabel
* black
* black
* add docstrings
* more docstring and fix negative_label
* don't rely on default arguments
* black
* remove spancat exclusive
* replace single_label with add_negative_label and adjust inference
* mypy
* logical bug in configuration check
* add spans.attrs[scores]
* single label make_spangroup test
* bugfix
* black
* tests for make_span_group with negative labels
* refactor make_span_group
* black
* Update spacy/tests/pipeline/test_spancat.py
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* remove duplicate declaration
* Update spacy/pipeline/spancat.py
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* raise error instead of just print
* make label mapper private
* update docs
* run prettier
* Update website/docs/api/spancategorizer.mdx
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Update website/docs/api/spancategorizer.mdx
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Update spacy/pipeline/spancat.py
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Update spacy/pipeline/spancat.py
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Update spacy/pipeline/spancat.py
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Update spacy/pipeline/spancat.py
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* don't keep recomputing self._label_map for each span
* typo in docs
* Make intervals private and document 'name' param
* Update spacy/pipeline/spancat.py
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Update spacy/pipeline/spancat.py
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* add Tag to new features
* replace tags
* revert
* revert
* revert
* revert
* Update website/docs/api/spancategorizer.mdx
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Update website/docs/api/spancategorizer.mdx
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* prettier
* Fix merge
* Update website/docs/api/spancategorizer.mdx
* remove references to 'single_label'
* remove old paragraph
* Add spancat_singlelabel to config template
* Format
* Extend init config tests
---------
Co-authored-by: kadarakos <kadar.akos@gmail.com>
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Handle deprecation of pkg_resources
* Replace `pkg_resources` with `importlib_metadata` for `spacy info
--url`
* Remove requirements check from `spacy project` given the lack of
alternatives
* Fix installed model URL method and CI test
* Fix types/handling, simplify catch-all return
* Move imports instead of disabling requirements check
* Format
* Reenable test with ignored deprecation warning
* Fix except
* Fix return
* Make empty_kb() configurable.
* Format.
* Update docs.
* Be more specific in KB serialization test.
* Update KB serialization tests. Update docs.
* Remove doc update for batched candidate generation.
* Fix serialization of subclassed KB in tests.
* Format.
* Update docstring.
* Update docstring.
* Switch from pickle to json for custom field serialization.
* Add immediate left/right child/parent dependency relations
* Add tests for new REL_OPs: `>+`, `>-`, `<+`, and `<-`.
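A sketch of a DependencyMatcher pattern using one of the new operators (pipeline and attribute values are illustrative; the doc needs dependency parses):

```python
import spacy
from spacy.matcher import DependencyMatcher

nlp = spacy.load("en_core_web_sm")  # illustrative parsed pipeline
matcher = DependencyMatcher(nlp.vocab)

pattern = [
    {"RIGHT_ID": "verb", "RIGHT_ATTRS": {"POS": "VERB"}},
    # ">+" is one of the new immediate-relation operators added here
    {"LEFT_ID": "verb", "REL_OP": ">+", "RIGHT_ID": "child",
     "RIGHT_ATTRS": {"POS": {"NOT_IN": ["VERB"]}}},
]
matcher.add("IMMEDIATE_CHILD", [pattern])
matches = matcher(nlp("She quickly left."))
```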
---------
Co-authored-by: Tan Long <tanloong@foxmail.com>
* add unittest for explosion#12311
* create punctuation.py for swedish
* removed : from infixes in swedish punctuation.py
* allow : as infix if succeeding char is uppercase
* change logging call for spacy.LookupsDataLoader.v1
* substitutions in language and _util
* various more substitutions
* add string formatting guidelines to contribution guidelines
* Normalize whitespace in evaluate CLI output test
Depending on terminal settings, lines may be padded to the screen width,
so the comparison is too strict with only the command string replaced.
* Move to test util method
* Change to normalization method
* Add span_id to Span.char_span, update Doc/Span.char_span docs
`Span.char_span(id=)` should be removed in the future.
* Also use Union[int, str] in Doc docstring
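A sketch of the added argument (text and offsets are illustrative):

```python
import spacy

nlp = spacy.blank("en")
doc = nlp("The Bank of China opened.")
span = doc[0:5]

# span_id (int or str) is now accepted by Span.char_span as well, mirroring Doc.char_span
bank = span.char_span(4, 17, label="ORG", span_id="bank-1")
```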
* WIP
* rm ipython embeds
* rm total
* WIP
* cleanup
* cleanup + reword
* rm component function
* remove migration support form
* fix reference dataset for dev data
* additional fixes
- set approach to identifying unique trees
- adjust line length on messages
- add logic for detecting docs without annotations
* use 0 instead of none for no annotation
* partial annotation support
* initial tests for _compile_gold lemma attributes
Using the example data from the edit tree lemmatizer tests for:
- lemmatizer_trees
- partial_lemma_annotations
- n_low_cardinality_lemmas
- no_lemma_annotations
* adds output test for cli app
* switch msg level
* rm unclear uniqueness check
* Revert "rm unclear uniqueness check"
This reverts commit 6ea2b3524b.
* remove good message on uniqueness
* formatting
* use en_vocab fixture
* clarify data set source in messages
* remove unnecessary import
Co-authored-by: svlandeg <svlandeg@github.com>
* Add `spacy.PlainTextCorpusReader.v1`
This is a corpus reader that reads plain text corpora with the following
format:
- UTF-8 encoding
- One line per document.
- Blank lines are ignored.
It is useful for applications where we deal with very large corpora,
such as distillation, and don't want to deal with the space overhead of
serialized formats. Additionally, many large corpora already use such
a text format, keeping the necessary preprocessing to a minimum.
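A sketch of how the reader can be plugged into a training config (section and path are illustrative):

```ini
[corpora.train]
@readers = "spacy.PlainTextCorpusReader.v1"
path = "corpus/train.txt"
```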
* Update spacy/training/corpus.py
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* docs: add version to `PlainTextCorpus`
* Add docstring to registry function
* Add plain text corpus tests
* Only strip newline/carriage return
* Add return type _string_to_tmp_file helper
* Use a temporary directory in place of file name
Different OS auto delete/sharing semantics are just wonky.
* This will be new in 3.5.1 (rather than 4)
* Test improvements from code review
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Refactor _scores2guesses
* Handle arrays on GPU
* Convert argmax result to raw integer
Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
* Use NumpyOps() to copy data to CPU
Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
* Changes based on review comments
* Use different _scores2guesses depending on tree_k
* Add tests for corner cases
* Add empty line for consistency
* Improve naming
Co-authored-by: Daniël de Kok <me@github.danieldk.eu>
* Improve naming
Co-authored-by: Daniël de Kok <me@github.danieldk.eu>
Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
Co-authored-by: Daniël de Kok <me@github.danieldk.eu>
* Add a `spacy evaluate speed` subcommand
This subcommand reports the mean batch performance of a model on a data set with
a 95% confidence interval. For reliability, it first performs some warmup
rounds. Then it will measure performance on batches with randomly shuffled
documents.
To avoid having too many spaCy commands, `speed` is a subcommand of `evaluate`
and accuracy evaluation is moved to its own `evaluate accuracy` subcommand.
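A usage sketch (per the follow-up commits below, the final CLI form is `spacy benchmark speed`; the model and data paths are illustrative):

```
python -m spacy benchmark speed ./my_model ./dev.spacy
```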
* Fix import cycle
* Restore `spacy evaluate`, make `spacy benchmark speed` an alias
* Add documentation for `spacy benchmark`
* CREATES -> PRINTS
* WPS -> words/s
* Disable formatting of benchmark speed arguments
* Fail with an error message when trying to speed bench empty corpus
* Make it clearer that `benchmark accuracy` is a replacement for `evaluate`
* Fix docstring webpage reference
* tests: check `evaluate` output against `benchmark accuracy`
In the v3 scorer refactoring, `token_acc` was implemented incorrectly.
It should use `precision` instead of `fscore` for the measure of
correctly aligned tokens / number of predicted tokens.
Fix the docs to reflect that the measure uses the number of predicted
tokens rather than the number of gold tokens.
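A worked sketch of the corrected measure (the counts are illustrative):

```python
# token_acc is precision over predicted tokens, not an F-score
correctly_aligned = 95  # predicted tokens aligned with gold tokens
n_predicted = 100       # number of predicted tokens
token_acc = correctly_aligned / n_predicted  # 0.95
```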
* enable fuzzy matching
* add fuzzy param to EntityMatcher
* include rapidfuzz_capi
not yet used
* fix type
* add FUZZY predicate
* add fuzzy attribute list
* fix type properly
* tidying
* remove unnecessary dependency
* handle fuzzy sets
* simplify fuzzy sets
* case fix
* switch to FUZZYn predicates
use Levenshtein distance.
remove fuzzy param.
remove rapidfuzz_capi.
* revert changes added for fuzzy param
* switch to polyleven
(Python package)
* fuzzy match only on oov tokens
* remove polyleven
* exclude whitespace tokens
* don't allow more edits than characters
* fix min distance
* reinstate FUZZY operator
with length-based distance function
* handle sets inside regex operator
* remove is_oov check
* attempt build fix
no mypy failure locally
* re-attempt build fix
* don't overwrite fuzzy param value
* move fuzzy_match
to its own Python module to allow patching
* move fuzzy_match back inside Matcher
simplify logic and add tests
* Format tests
* Parametrize fuzzyn tests
* Parametrize and merge fuzzy+set tests
* Format
* Move fuzzy_match to a standalone method
* Change regex kwarg type to bool
* Add types for fuzzy_match
- Refactor variable names
- Add test for symmetrical behavior
* Parametrize fuzzyn+set tests
* Minor refactoring for fuzz/fuzzy
* Make fuzzy_match a Matcher kwarg
* Update type for _default_fuzzy_match
* don't overwrite function param
* Rename to fuzzy_compare
* Update fuzzy_compare default argument declarations
* allow fuzzy_compare override from EntityRuler
* define new Matcher keyword arg
* fix type definition
* Implement fuzzy_compare config option for EntityRuler and SpanRuler
* Rename _default_fuzzy_compare to fuzzy_compare, remove from reexported objects
* Use simpler fuzzy_compare algorithm
* Update types
* Increase minimum to 2 in fuzzy_compare to allow one transposition
* Fix predicate keys and matching for SetPredicate with FUZZY and REGEX
* Add FUZZY6..9
* Add initial docs
* Increase default fuzzy to rounded 30% of pattern length
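A sketch of the resulting pattern syntax, using the FUZZY/FUZZYn predicates added above (token texts are illustrative):

```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab)

patterns = [
    # default fuzzy matching: allowed edits scale with the pattern length
    [{"LOWER": {"FUZZY": "definitely"}}],
    # FUZZYn: allow at most n edits (here 2)
    [{"LOWER": {"FUZZY2": "definitely"}}],
]
matcher.add("FUZZY_EXAMPLE", patterns)
matches = matcher(nlp("He said definately."))
```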
* Update docs for fuzzy_compare in components
* Update EntityRuler and SpanRuler API docs
* Rename EntityRuler and SpanRuler setting to matcher_fuzzy_compare
For naming similar to `phrase_matcher_attr`, rename the
`fuzzy_compare` setting for `EntityRuler` and `SpanRuler` to
`matcher_fuzzy_compare`. Organize next to `phrase_matcher_attr` in the docs.
* Fix schema aliases
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
* Fix typo
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
* Add FUZZY6-9 operators and update tests
* Parameterize test over greedy
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
* Fix type for fuzzy_compare to remove Optional
* Rename to spacy.levenshtein_compare.v1, move to spacy.matcher.levenshtein
* Update docs following levenshtein_compare renaming
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
* check port in use and add itself
* check port in use and add itself
* Auto switch to nearest available port.
* Use bind to check port instead of connect_ex.
* Reformat.
* Add auto_select_port argument.
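A usage sketch of the new argument (pipeline and port are illustrative):

```python
import spacy
from spacy import displacy

nlp = spacy.load("en_core_web_sm")  # illustrative pipeline
doc = nlp("Auto-selecting a port for displaCy.")

# if the requested port is busy, serve() switches to the nearest available one
displacy.serve(doc, style="dep", port=5000, auto_select_port=True)
```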
* update docs for displacy.serve
* Update spacy/errors.py
Co-authored-by: Paul O'Leary McCann <polm@dampfkraft.com>
* Update website/docs/api/top-level.md
Co-authored-by: Paul O'Leary McCann <polm@dampfkraft.com>
* Update spacy/errors.py
Co-authored-by: Paul O'Leary McCann <polm@dampfkraft.com>
* Add test using multiprocessing
* fix argument name
* Increase sleep times
Want to rule this out as a cause of test failure
* Don't terminate a process that isn't alive
* Refactor port finding logic
This moves all the port logic into its own util function, which can be
tested without having to background a server directly.
* Use with for the server
This ensures the server is closed correctly.
* Pass in the host when checking port availability
* Shorten argument name
* Update error codes following merge
* Add types for arguments, specify docstrings.
* Add typing for arguments with default value.
* Update docstring to match spaCy format.
* Update docstring to match spaCy format.
* Fix docs
Arg name changed from `auto_select_port` to just `auto_select`.
* Revert "Fix docs"
This reverts commit 356966fe84.
Co-authored-by: zhiiw <1302593554@qq.com>
Co-authored-by: Paul O'Leary McCann <polm@dampfkraft.com>
Co-authored-by: Raphael Mitsch <r.mitsch@outlook.com>
* add test for running evaluate on an nlp pipeline with two distinct textcat components
* cleanup
* merge dicts instead of overwrite
* don't add more labels to the given set
* Revert "merge dicts instead of overwrite"
This reverts commit 89bee0ed77.
* Switch tests to separate scorer keys rather than merged dicts
* Revert unrelated edits
* Switch textcat scorers to v2
* formatting
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* fix processing of "auto" in walk_directory
* add check for None
* move AUTO check to convert and fix verification of args
* add specific CLI test with CliRunner
* cleanup
* more cleanup
* update docstring
* Convert all individual values explicitly to uint64 for array-based doc representations
* Temporarily test with latest numpy v1.24.0rc
* Remove unnecessary conversion from attr_t
* Reduce number of individual casts
* Convert specifically from int32 to uint64
* Revert "Temporarily test with latest numpy v1.24.0rc"
This reverts commit eb0e3c5006.
* Also use int32 in tests
Strings in replacement nodes were not added to the `StringStore`
when `EditTreeLemmatizer` was initialized from a set of labels. The
corresponding test did not capture this because it added the strings
through the examples that were passed to the initialization.
This change fixes both this bug in the initialization and the 'shadowing'
of the bug in the test.
* Support local filesystem remotes for projects
* Fix support for local filesystem remotes for projects
* Use `FluidPath` instead of `Pathy` to support both filesystem and
remote paths
* Create missing parent directories if required for local filesystem
* Add a more general `_file_exists` method to support both `Pathy`,
`Path`, and `smart_open`-compatible URLs
* Add explicit `smart_open` dependency starting with support for
`compression` flag
* Update `pathy` dependency to exclude older versions that aren't
compatible with required `smart_open` version
* Update docs to refer to `Pathy` instead of `smart_open` for project
remotes (technically you can still push to any `smart_open`-compatible
path but you can't pull from them)
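A sketch of a `project.yml` remote pointing at a local directory (the path is illustrative):

```yaml
remotes:
  default: /mnt/data/spacy-project-assets
```

The remote is then used as before, e.g. with `spacy project push default` / `spacy project pull default`.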
* Add tests for local filesystem remotes
* Update pathy for general BlobStat sorting
* Add import
* Remove _file_exists since only Pathy remotes are supported
* Format CLI docs
* Clean up merge
* pymorphy2 issues #11620, #11626, #11625:
- #11620: pymorphy2_lookup
- #11626: handle multiple forms pointing to the same normal form + handle empty POS tags
- #11625: match DET tokens that are labelled as PRON by pymorphy2
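A sketch of selecting the affected lookup mode (requires the pymorphy2 package; the text is illustrative):

```python
import spacy

nlp = spacy.blank("ru")
# "pymorphy2_lookup" is the lookup-only mode targeted by these fixes
nlp.add_pipe("lemmatizer", config={"mode": "pymorphy2_lookup"})
doc = nlp("она читала книгу")
print([token.lemma_ for token in doc])
```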
* Move lemmatizer algorithm changes back into RussianLemmatizer
* Fix uk pymorphy3_lookup mode init
* Move and update tests for ru/uk lookup lemmatizer modes
* Fix typo
* Remove traces of previous behavior for uninflected POS
* Refactor to private generic-looking pymorphy methods
* Remove xfailed uk lemmatizer cases
* Update spacy/lang/ru/lemmatizer.py
Co-authored-by: Richard Hudson <richard@explosion.ai>
Co-authored-by: Dmytro S Lituiev <d.lituiev@gmail.com>
Co-authored-by: Richard Hudson <richard@explosion.ai>
* Add `training.before_update` callback
This callback can be used to implement training paradigms like gradual (un)freezing of components (e.g. the Transformer) after a certain number of training steps, to mitigate catastrophic forgetting during fine-tuning.
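A sketch of such a callback, to be referenced from a `[training.before_update]` config block (the registry name and freezing logic are illustrative; per the commits below, the callback receives the `nlp` object and a dict with `step` and `epoch`):

```python
from spacy import registry

@registry.callbacks("customize_before_update.v1")  # hypothetical name
def make_before_update():
    def before_update(nlp, info):
        # e.g. unfreeze the transformer after 500 steps (illustrative)
        if info["step"] == 500 and "transformer" in nlp.pipe_names:
            ...
    return before_update
```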
* Fix type annotation, default config value
* Generalize arguments passed to the callback
* Update schema
* Pass `epoch` to callback, rename `current_step` to `step`
* Add test
* Simplify test
* Replace config string with `spacy.blank`
* Apply suggestions from code review
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Cleanup imports
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Check textcat values for validity
* Fix error numbers
* Clean up vals reference
* Check category value validity through training
_validate_categories is called in update(), which for the multilabel
component is inherited from the single-label component.
* Formatting
* Add equality definition for vectors
This re-uses the check from sourcing components.
* Use the equality check
* Format
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Update warning, add tests for project requirements check
* Make warning more general for differences between PEP 508 and pip
* Add tests for _check_requirements
* Parameterize test
* Update textcat scorer threshold behavior
For `textcat` (with exclusive classes) the scorer should always use a
threshold of 0.0 because there should be one predicted label per doc and
the numeric score for that particular label should not matter.
* Rename to test_textcat_multilabel_threshold
* Remove all uses of threshold for multi_label=False
* Update Scorer.score_cats API docs
* Add tests for score_cats with thresholds
* Update textcat API docs
* Fix types
* Convert threshold back to float
* Fix threshold type in docstring
* Improve formatting in Scorer API docs
* Handle docs with no entities
If a whole batch contains no entities it won't make it to the model, but
it's possible for individual Docs to have no entities. Before this
commit, those Docs would cause an error when attempting to concatenate
arrays because the dimensions didn't match.
It turns out the process of preparing the Ragged at the end of the span
maker forward was a little different from list2ragged, which just uses
the flatten function directly. Letting list2ragged do the conversion
avoids the dimension issue.
This did not come up before because in NEL demo projects it's typical
for data with no entities to be discarded before it reaches the NEL
component.
This includes a simple direct test that shows the issue and checks it's
resolved. It doesn't check if there are any downstream changes, so a
more complete test could be added. A full run was tested by adding an
example with no entities to the Emerson sample project.
* Add a blank instance to default training data in tests
Rather than adding a specific test, since not failing on instances with
no entities is basic functionality, it makes sense to add it to the
default set.
* Fix without modifying architecture
If the architecture is modified this would have to be a new version, but
this change isn't big enough to merit that.
* Fix multiple extensions and character offset
* Rename token_start/end to start/end
* Refactor Doc.from_json based on review
* Iterate over user_data items
* Only add non-empty underscore entries
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Change enable/disable behavior so that arguments take precedence over config options. Extend error message on conflict. Add warning message in case of overwriting config option with arguments.
* Fix tests in test_serialize_pipeline.py to reflect changes to handling of enable/disable.
* Fix type issue.
* Move comment.
* Move comment.
* Issue UserWarning instead of printing wasabi message. Adjust test.
* Added pytest.warns(UserWarning) for expected warning to fix tests.
* Update warning message.
* Move type handling out of fetch_pipes_status().
* Add global variable for default value. Use id() to determine whether used values are default value.
* Fix default value for disable.
* Rename DEFAULT_PIPE_STATUS to _DEFAULT_EMPTY_PIPES.
* add punctuation to grc
Add support for special editorial punctuation that is common in ancient Greek texts. Ancient Greek texts, as found in digital and print form, have been largely edited by scholars. Restorations and improvements are normally marked with special characters that need to be handled properly by the tokenizer.
* add unit tests
* simplify regex
* move generic quotes to char classes
* rename unit test
* fix regex
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
Co-authored-by: svlandeg <svlandeg@github.com>
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
Preserve both `-` and `O` annotation in augmenters rather than relying
on `Example.to_dict`'s default support for one option outside of labeled
entity spans.
This is intended as a temporary workaround for augmenters for v3.4.x.
The behavior of `Example` and related IOB utils could be improved in the
general case for v3.5.
* Remove side effects from Doc.__init__()
* Changes based on review comment
* Readd test
* Change interface of Doc.__init__()
* Simplify test
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Update doc.md
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* replicate bug with tok2vec in annotating components
* add overfitting test with a frozen tok2vec
* remove broadcast from predict and check doc.tensor instead
* remove broadcast
* proper error
* slight rephrase of documentation
* Add a dry run flag to download
* Remove --dry-run, add --url option to `spacy info` instead
* Make mypy happy
* Print only the URL, so it's easier to use in scripts
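A usage sketch (the package name is illustrative):

```
python -m spacy info en_core_web_sm --url
```

This prints only the download URL, so the output can be used directly in scripts.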
* Don't add the egg hash unless downloading an sdist
* Update spacy/cli/info.py
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
* Add two implementations of requirements
* Clean up requirements sample slightly
This should make mypy happy
* Update URL help string
* Remove requirements option
* Add url option to docs
* Add URL to spacy info model output, when available
* Add types-setuptools to testing reqs
* Add types-setuptools to requirements
* Add "compatible", expand docstring
* Update spacy/cli/info.py
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Run prettier on CLI docs
* Update docs
Add a sidebar about finding download URLs, with some examples of the new
command.
* Add download URLs to table on model page
* Apply suggestions from code review
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Updates from review
* download url -> download link
* Update docs
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* `Matcher`: Better type checking of values in `SetPredicate`
`SetPredicate`: Emit warning and return `False` on unexpected value types
* Rename `value_type_mismatch` variable
* Inline warning
* Remove unexpected type warning from `_SetPredicate`
* Ensure that `str` values are not interpreted as sequences
Check elements of sequence values for convertibility to `str` or `int`
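A sketch of the set predicates concerned (values are illustrative):

```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab)

# IN/INTERSECT take sequences; a plain str value must not be treated as a sequence of characters
matcher.add("SET_EXAMPLE", [
    [{"LOWER": {"IN": ["foo", "bar"]}}],
    [{"MORPH": {"INTERSECT": ["Number=Sing", "Gender=Fem"]}}],
])
matches = matcher(nlp("foo and bar"))
```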
* Add more `INTERSECT` and `IN` test cases
* Test for inputs with multiple characters
* Return `False` early instead of using a boolean flag
* Remove superfluous `int` check, parentheses
* Apply suggestions from code review
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Apply suggestions from code review
* Clarify test comment
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* adding unit test for spacy.load with disable/exclude string arg
* allow pure strings in from_config
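A sketch of the now-accepted plain-string arguments (the pipeline name is illustrative):

```python
import spacy

# previously these expected collections of names; a single component name as a string now works too
nlp = spacy.load("en_core_web_sm", disable="parser")
nlp2 = spacy.load("en_core_web_sm", exclude="ner")
```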
* update docs
* upstream type adjustments
* docs update
* make docstring more consistent
* Update spacy/language.py
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* two more cleanups
* fix type in internal method
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Fix `test_{prefer,require}_gpu`
These tests assumed that GPUs are only supported with CuPy, but since Thinc 8.1
we also support Metal Performance Shaders.
* test_misc: arrange thinc imports to be together
* Add lang folder for la (Latin)
* Add Latin lang classes
* Add minimal tokenizer exceptions
* Add minimal stopwords
* Add minimal lex_attrs
* Update stopwords, tokenizer exceptions
* Add la tests; register la_tokenizer in conftest.py
* Update spacy/lang/la/lex_attrs.py
Remove duplicate form in Latin lex_attrs
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
* Update natto-py version spec (#11222)
* Update natto-py version spec
* Update setup.cfg
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Add scorer to textcat API docs config settings (#11263)
* Update docs for pipeline initialize() methods (#11221)
* Update documentation for dependency parser
* Update documentation for trainable_lemmatizer
* Update documentation for entity_linker
* Update documentation for ner
* Update documentation for morphologizer
* Update documentation for senter
* Update documentation for spancat
* Update documentation for tagger
* Update documentation for textcat
* Update documentation for tok2vec
* Run prettier on edited files
* Apply similar changes in transformer docs
* Remove need to say annotated example explicitly
I removed the need to say "Must contain at least one annotated Example"
because it's often a given that Examples will contain some gold-standard
annotation.
* Run prettier on transformer docs
* chore: add 'concepCy' to spacy universe (#11255)
* chore: add 'concepCy' to spacy universe
* docs: add 'slogan' to concepCy
* Support full prerelease versions in the compat table (#11228)
* Support full prerelease versions in the compat table
* Fix types
* adding spans to doc_annotation in Example.to_dict (#11261)
* adding spans to doc_annotation in Example.to_dict
* to_dict compatible with from_dict: tuples instead of spans
* use strings for label and kb_id
* Simplify test
* Update data formats docs
Co-authored-by: Stefanie Wolf <stefanie.wolf@vitecsoftware.com>
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
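A sketch of the round-trippable format this adds (offsets, label, and kb_id are illustrative; the tuple layout follows the "tuples instead of spans" and "strings for label and kb_id" commits above):

```python
import spacy
from spacy.training import Example

nlp = spacy.blank("en")
doc = nlp("Berlin is nice.")

# span annotations keyed by span group, as (start_char, end_char, label, kb_id) tuples
example = Example.from_dict(doc, {"spans": {"sc": [(0, 6, "CITY", "")]}})
span_annotation = example.to_dict()["doc_annotation"]["spans"]  # present after this change
```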
* Fix regex invalid escape sequences (#11276)
* Add W605 to the errors raised by flake8 in the CI (#11283)
* Clean up automated label-based issue handling (#11284)
* Clean up automated label-based issue handling
1. upgrade tiangolo/issue-manager to latest
2. move needs-more-info to tiangolo
3. change needs-more-info close time to 7 days
4. delete old needs-more-info config
* Use old, longer message
* Fix label name
* Fix Dutch noun chunks to skip overlapping spans (#11275)
* Add test for overlapping noun chunks
* Skip overlapping noun chunks
* Update spacy/tests/lang/nl/test_noun_chunks.py
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
* Docs: displaCy documentation - data types, `parse_{deps,ents,spans}`, spans example (#10950)
* add in spans example and parse references
* rm autoformatter
* rm extra ents copy
* TypedDict draft
* type fixes
* restore non-documentation files
* docs update
* fix spans example
* fix hyperlinks
* add parse example
* example fix + argument fix
* fix api arg in docs
* fix bad variable replacement
* fix spacing in style
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
* fix spacing on table
* fix spacing on table
* rm temp files
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
* include span_ruler for default warning filter (#11333)
* Add uk pipelines to website (#11332)
* Check for . in factory names (#11336)
* Make fixes for PR #11349
* Fix roman numeral coverage in #11349
Co-authored-by: Patrick J. Burns <patricks@diyclassics.org>
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
Co-authored-by: Paul O'Leary McCann <polm@dampfkraft.com>
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
Co-authored-by: Lj Miranda <12949683+ljvmiranda921@users.noreply.github.com>
Co-authored-by: Jules Belveze <32683010+JulesBelveze@users.noreply.github.com>
Co-authored-by: stefawolf <wlf.ste@gmail.com>
Co-authored-by: Stefanie Wolf <stefanie.wolf@vitecsoftware.com>
Co-authored-by: Peter Baumgartner <5107405+pmbaumgartner@users.noreply.github.com>
* Add token and span custom attributes to to_json()
* Change logic for to_json
* Add functionality to from_json
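A sketch of the intended round trip (the extension name is hypothetical):

```python
import spacy
from spacy.tokens import Doc, Token

Token.set_extension("is_keyword", default=False)  # hypothetical custom attribute

nlp = spacy.blank("en")
doc = nlp("custom attribute round trip")
doc[1]._.is_keyword = True

data = doc.to_json(underscore=["is_keyword"])  # now also covers token/span attributes
doc2 = Doc(nlp.vocab).from_json(data)          # restored via from_json
assert doc2[1]._.is_keyword is True
```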
* Small adjustments
* Move token/span attributes to new dict key
* Fix test
* Fix the same test but much better
* Add backwards compatibility tests and adjust logic
* Add test to check if attributes not set in underscore are not saved in the json
* Add tests for json compatibility
* Adjust test names
* Fix tests and clean up code
* Fix assert json tests
* small adjustment
* adjust naming and code readability
* Adjust naming, added more tests and changed logic
* Fix typo
* Adjust errors, naming, and small test optimization
* Fix byte tests
* Fix bytes tests
* Change naming and json structure
* update schema
* Update spacy/schemas.py
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Update spacy/tokens/doc.pyx
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Update spacy/tokens/doc.pyx
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Update spacy/schemas.py
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Update schema for underscore attributes
* Adjust underscore schema
* adjust schema tests
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>