* Fix `spacy.util.minibatch` when the size iterator is finished (#6745)
* Skip 0-length matches (#6759)
Add hack to prevent matcher from returning 0-length matches.
* support IS_SENT_START in PhraseMatcher (#6771)
* support IS_SENT_START in PhraseMatcher
* add unit test and friendlier error
* use IDS.get instead
* ensure span.text works for an empty span (#6772)
* Remove unicode_literals
Co-authored-by: Santiago Castro <bryant@montevideo.com.uy>
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
* Allow output_path to be None during training
* Fix cat scoring (?)
* Improve error message for weighted None score
* Improve messages
So we can call this in other places as well.
* Fix output path check
* Use latest wasabi
* Revert "Improve error message for weighted None score"
This reverts commit 7059926763.
* Exclude None scores from final score by default
It's otherwise very difficult to keep track of the score weights if we modify a config programmatically, source components etc.
* Update warnings and use logger.warning
* spaCy CLI info method causing backward compatibility issues (#6791)
Fix backward compatibility by setting a default value for `exclude` in the
`info` method.
* Setting an empty list as a default argument is dangerous
So set the default to None and replace it with an empty list if it is None.
Reference: https://nikos7am.com/posts/mutable-default-arguments/
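A minimal sketch of the pattern described above (the function and parameter names here are hypothetical, for illustration only):

```python
# Hypothetical illustration of the None-default pattern; the real
# spacy.cli.info signature may differ.
def info(model=None, exclude=None):
    # A mutable default such as `exclude=[]` is created once and shared
    # across all calls, so later mutations would leak between calls.
    if exclude is None:
        exclude = []
    return {"model": model, "exclude": exclude}

print(info())                     # {'model': None, 'exclude': []}
print(info(exclude=["labels"]))   # {'model': None, 'exclude': ['labels']}
```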
* Adding contributor agreement for user werew
* [DependencyMatcher] Comment and clean code
* [DependencyMatcher] Use defaultdicts
* [DependencyMatcher] Simplify _retrieve_tree method
* [DependencyMatcher] Remove prepended underscores
* [DependencyMatcher] Address TODO and move grouping of token positions out of the loop
* [DependencyMatcher] Remove _nodes attribute
* [DependencyMatcher] Use enumerate in _retrieve_tree method
* [DependencyMatcher] Clean unused vars and use camel_case naming
* [DependencyMatcher] Memoize node+operator map
* Add root property to Token
* [DependencyMatcher] Group matches by root
* [DependencyMatcher] Remove unused _keys_to_token attribute
* [DependencyMatcher] Use a list to map tokens to matcher's keys
* [DependencyMatcher] Remove recursion
* [DependencyMatcher] Use a generator to retrieve matches
* [DependencyMatcher] Remove unused memory pool
* [DependencyMatcher] Hide private methods and attributes
* [DependencyMatcher] Improvements to the matches validation
* Apply suggestions from code review
Co-authored-by: Matthew Honnibal <honnibal+gh@gmail.com>
* [DependencyMatcher] Fix keys_to_position_maps
* Remove Token.root property
* [DependencyMatcher] Remove functools' lru_cache
Co-authored-by: Matthew Honnibal <honnibal+gh@gmail.com>
* warn when frozen components break listener pattern
* few notes in the documentation
* update arg name
* formatting
* cleanup
* specify listeners return type
* raise NotImplementedError when noun_chunks iterator is not implemented
* bring back, fix and document span.noun_chunks
* formatting
Co-authored-by: Matthew Honnibal <honnibal+gh@gmail.com>
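A small sketch of the behavior described above, assuming a language without a registered `noun_chunks` syntax iterator (the multi-language `xx` class is used here as an example):

```python
import spacy

nlp = spacy.blank("xx")  # no noun_chunks syntax iterator registered
doc = nlp("This is a short example sentence.")
span = doc[0:4]

try:
    chunks = list(span.noun_chunks)
except NotImplementedError:
    # Raised instead of silently yielding nothing when the language
    # does not implement a noun_chunks iterator.
    print("noun_chunks is not implemented for this language")
```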
* Add long_token_splitter component
Add a `long_token_splitter` component for use with transformer
pipelines. This component splits up long tokens like URLs into smaller
tokens. This is particularly relevant for pretrained pipelines with
`strided_spans`, since the user can't change the length of the span
`window` and may not wish to preprocess the input texts.
The `long_token_splitter` splits tokens that are at least
`long_token_length` characters long into smaller tokens of `split_length`
size.
Notes:
* Since this is intended for use as the first component in a pipeline,
the token splitter does not try to preserve any token annotation.
* API docs to come when the API is stable.
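A minimal sketch of adding the component as described above; the factory and setting names are taken from this commit message and may have been changed by the follow-up "Adjust API" commit:

```python
import spacy

nlp = spacy.blank("en")
# Assumed config: split tokens of at least `long_token_length` characters
# into pieces of `split_length` characters each.
nlp.add_pipe(
    "long_token_splitter",
    config={"long_token_length": 20, "split_length": 5},
)
doc = nlp("Check https://example.com/some/very/long/path for details")
print([token.text for token in doc])
```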
* Adjust API, add test
* Fix name in factory
Add all strings from the source model when adding a pipe from a source
model.
Minor:
* Skip `disable=["vocab", "tokenizer"]` when loading a source model from
the config, since this doesn't do anything and is misleading.
* Handle unset token.morph in Morphologizer
Handle unset `token.morph` in `Morphologizer.initialize` and
`Morphologizer.get_loss`. If both `token.morph` and `token.pos` are
unset, treat the annotation as missing rather than empty.
* Add token.has_morph()
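A short illustration of the missing-annotation behavior together with the new `Token.has_morph()`, using a blank pipeline where no morphology has been assigned:

```python
import spacy

nlp = spacy.blank("en")
doc = nlp("They walked home")

# With no morphologizer or tagger run, token.morph is unset, so the
# annotation is treated as missing rather than as an empty analysis.
assert not doc[1].has_morph()
assert str(doc[1].morph) == ""
```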
* Override language defaults for null token and URL match
When the serialized `token_match` or `url_match` is `None`, override the
language defaults to preserve `None` on deserialization.
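A rough sketch of the round-trip this preserves, assuming the settable `Tokenizer.url_match` property and a plain bytes round-trip:

```python
import spacy

nlp = spacy.blank("en")
nlp.tokenizer.url_match = None  # explicitly disable URL matching

nlp2 = spacy.blank("en")
nlp2.tokenizer.from_bytes(nlp.tokenizer.to_bytes())

# The serialized None now overrides the English defaults instead of
# being dropped on deserialization.
assert nlp2.tokenizer.url_match is None
```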
* Fix fixtures in tests
* Draft out initial Spans data structure
* Initial span group commit
* Basic span group support on Doc
* Basic test for span group
* Compile span_group.pyx
* Draft addition of SpanGroup to DocBin
* Add deserialization for SpanGroup
* Add tests for serializing SpanGroup
* Fix serialization of SpanGroup
* Add EdgeC and GraphC structs
* Add draft Graph data structure
* Compile graph
* More work on Graph
* Update GraphC
* Update graph
* Fix walk functions
* Let Graph take nodes and edges on construction
* Fix walking and getting
* Add graph tests
* Fix import
* Add module with the SpanGroups dict container
* Update test
* Rename 'span_groups' attribute
* Try to fix C++11 compilation
* Fix test
* Update DocBin
* Try to fix compilation
* Try to fix graph
* Improve SpanGroup docstrings
* Add doc.spans to documentation
* Fix serialization
* Tidy up and add docs
* Update docs [ci skip]
* Add SpanGroup.has_overlap
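A brief sketch of the `doc.spans` / `SpanGroup` usage these commits build up; the exact API may differ slightly from the final documentation:

```python
import spacy

nlp = spacy.blank("en")
doc = nlp("Their goi ng home")

# doc.spans is a dict-like container of named span groups; assigning a
# list of Span objects creates a SpanGroup.
doc.spans["errors"] = [doc[0:1], doc[1:3]]

assert len(doc.spans["errors"]) == 2
assert not doc.spans["errors"].has_overlap
```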
* WIP updated Graph API
* Start testing new Graph API
* Update Graph tests
* Update Graph
* Add docstring
Co-authored-by: Ines Montani <ines@ines.io>