* Adding support for Yoruba
* test text
* Updated test string.
* Fixing encoding declaration.
* Adding encoding to stop_words.py
* Added contributor agreement and removed iranlowo.
* Re-added removed test files and removed iranlowo to keep the project bare.
* Returned CONTRIBUTING.md to default state.
* Re-added deleted conftest entries
* Tidy up and auto-format
* Revert CONTRIBUTING.md
Co-authored-by: Ines Montani <ines@ines.io>
* Include Doc.cats in to_bytes()
* Include Doc.cats in DocBin serialization
* Add tests for serialization of cats
Test serialization of cats for Doc and DocBin.
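A minimal sketch of the round-trip being tested (blank English pipeline assumed):
```python
import spacy
from spacy.tokens import Doc, DocBin

nlp = spacy.blank("en")
doc = nlp("This is a test.")
doc.cats = {"POSITIVE": 0.9, "NEGATIVE": 0.1}

# Doc round-trip: cats should now survive to_bytes()/from_bytes()
doc2 = Doc(nlp.vocab).from_bytes(doc.to_bytes())
assert doc2.cats == doc.cats

# DocBin round-trip: cats are now included in the serialized data
doc_bin = DocBin()
doc_bin.add(doc)
docs = list(DocBin().from_bytes(doc_bin.to_bytes()).get_docs(nlp.vocab))
assert docs[0].cats == doc.cats
```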
* Enable lex_attrs on Finnish
* Copy the Danish tokenizer rules to Finnish
Specifically, don't break hyphenated compound words
* Contributor agreement
* A new file for Finnish tokenizer rules instead of including the Danish ones
- added some tests for tokenization issues
- fixed some issues with tokenization of words with hyphen infix
- rewrote the "tokenizer_exceptions.py" file (based on the German version)
* Restructure Sentencizer to follow Pipe API
Restructure Sentencizer to follow Pipe API so that it can be scored with
`nlp.evaluate()`.
* Add Sentencizer pipe() test
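Once it follows the Pipe API, the Sentencizer can be added like any other component and scored with `nlp.evaluate()`; a minimal v2-style usage sketch:
```python
import spacy

nlp = spacy.blank("en")
nlp.add_pipe(nlp.create_pipe("sentencizer"))  # v2.x pipeline API

doc = nlp("This is a sentence. This is another one.")
print([sent.text for sent in doc.sents])
# ['This is a sentence.', 'This is another one.']
```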
Iterate over lr_edges until all heads are within the current sentence.
Instead of iterating over them for a fixed number of iterations, check
whether the sentence boundaries are correct for the heads and stop when
all are correct. Stop after a maximum of 10 iterations, providing a
warning in this case since the sentence boundaries may not be correct.
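Illustrative Python-level pseudocode for the convergence loop described above (the real implementation is in Cython; `heads_within_sents()` and `adjust_boundaries()` are hypothetical helpers):
```python
import warnings

MAX_ITER = 10

for _ in range(MAX_ITER):
    # stop as soon as every head falls inside its own sentence
    if heads_within_sents(heads, sent_starts):
        break
    adjust_boundaries(heads, sent_starts)
else:
    # no break after 10 iterations: boundaries may still be wrong
    warnings.warn("sentence boundaries may not be correct")
```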
* Switch from mecab-python3 to fugashi
mecab-python3 has been the best MeCab binding for a long time but it's
not very actively maintained, and since it's based on old SWIG code
distributed with MeCab there's a limit to how effectively it can be
maintained.
Fugashi is a new Cython-based MeCab wrapper I wrote. Since it's not
based on the old SWIG code it's easier to keep it current and make small
deviations from the MeCab C/C++ API where that makes sense.
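For reference, a minimal fugashi usage sketch (assuming a UniDic dictionary such as unidic-lite is installed):
```python
import fugashi

tagger = fugashi.Tagger()
for word in tagger("国境の長いトンネルを抜けると雪国であった。"):
    # word.surface is the token text; word.feature holds the UniDic fields
    print(word.surface, word.feature.pos1)
```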
* Change mecab-python3 to fugashi in setup.cfg
* Change "mecab tags" to "unidic tags"
The tags come from MeCab, but the tag schema is specified by UniDic, so
it's more accurate to refer to them that way.
* Update conftest
* Add fugashi link to external deps list for Japanese
* Detect more empty matches in tokenizer.explain()
* Include a few languages in explain non-slow tests
Mark a few languages in tokenizer.explain() tests as not slow so they're
run by default.
* Expose tokenizer rules as a property
Expose the tokenizer rules property in the same way as the other core
properties. (The cache resetting is overkill, but consistent with
`from_bytes` for now.)
Add tests and update Tokenizer API docs.
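A quick sketch of reading and writing the new property (the presence of the "don't" special case in the English defaults is assumed):
```python
from spacy.lang.en import English

nlp = English()
rules = nlp.tokenizer.rules   # dict mapping strings to lists of token attrs
assert "don't" in rules       # e.g. the contraction special cases
nlp.tokenizer.rules = rules   # the setter replaces the special cases
```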
* Update Hungarian punctuation to remove empty string
Update Hungarian punctuation definitions so that `_units` does not match
an empty string.
* Use _load_special_tokenization consistently
Use `_load_special_tokenization()` and have it handle the `None` checks.
* Fix precedence of `token_match` vs. special cases
Remove `token_match` check from `_split_affixes()` so that special cases
have precedence over `token_match`. `token_match` is checked only before
infixes are split.
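A sketch of the new precedence, using a custom `token_match` that would otherwise capture a defined special case (the expected output follows from the described fix):
```python
import re
from spacy.lang.en import English

nlp = English()
# a token_match that would otherwise capture "don't" as one token
nlp.tokenizer.token_match = re.compile(r"^don't$").match
# special cases now take precedence, so the contraction is still split
print([t.text for t in nlp("don't")])  # expected: ['do', "n't"]
```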
* Add `make_debug_doc()` to the Tokenizer
Add `make_debug_doc()` to the Tokenizer as a working implementation of
the pseudo-code in the docs.
Add a test (marked as slow) that checks that `nlp.tokenizer()` and
`nlp.tokenizer.make_debug_doc()` return the same non-whitespace tokens
for all languages that have `examples.sentences` that can be imported.
* Update tokenization usage docs
Update pseudo-code and algorithm description to correspond to
`nlp.tokenizer.make_debug_doc()` with example debugging usage.
Add more examples for customizing tokenizers while preserving the
existing defaults.
Minor edits / clarifications.
* Revert "Update Hungarian punctuation to remove empty string"
This reverts commit f0a577f7a5.
* Rework `make_debug_doc()` as `explain()`
Rework `make_debug_doc()` as `explain()`, which returns a list of
`(pattern_string, token_string)` tuples rather than a non-standard
`Doc`. Update docs and tests accordingly, leaving the visualization for
future work.
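A minimal usage sketch of the reworked API:
```python
from spacy.lang.en import English

nlp = English()
for pattern, string in nlp.tokenizer.explain("(don't)"):
    print(pattern, "\t", string)
# PREFIX      (
# SPECIAL-1   do
# SPECIAL-2   n't
# SUFFIX      )
```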
* Handle cases with bad tokenizer patterns
Detect when tokenizer patterns match empty prefixes and suffixes so that
`explain()` does not hang on bad patterns.
* Remove unused displacy image
* Add tokenizer.explain() to usage docs
* Rework Chinese language initialization
* Create a `ChineseTokenizer` class
* Modify jieba post-processing to handle whitespace correctly
* Modify non-jieba character tokenization to handle whitespace correctly
* Add a `create_tokenizer()` method to `ChineseDefaults`
* Load lexical attributes
* Update Chinese tag_map for UD v2
* Add very basic Chinese tests
* Test tokenization with and without jieba
* Test `like_num` attribute
* Fix try_jieba_import()
* Fix zh code formatting
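A minimal sketch of the reworked Chinese entry point (the `use_jieba` toggle on the defaults is an assumption about the flag name):
```python
from spacy.lang.zh import Chinese

nlp = Chinese()  # segments with jieba by default
doc = nlp("我爱北京天安门")
print([t.text for t in doc])

# fall back to per-character tokenization instead of jieba
Chinese.Defaults.use_jieba = False
```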
The model registry refactor of the Tok2Vec function broke loading models
trained with the previous function, because the model tree was slightly
different. Specifically, the new function wrote:
concatenate(norm, prefix, suffix, shape)
to build the embedding layer. In the previous implementation, I had used
the operator overloading shortcut:
( norm | prefix | suffix | shape )
This actually gets mapped to a binary association, giving something
like:
concatenate(norm, concatenate(prefix, concatenate(suffix, shape)))
This is a different tree, so the layers iterate differently and we
loaded the weights wrongly.
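For illustration, a toy sketch (not Thinc's actual classes) of how chained `|` operators build a nested binary tree:
```python
class Layer:
    def __init__(self, name):
        self.name = name

    def __or__(self, other):
        # each `|` wraps its two operands in a binary node
        return Concat(self, other)

class Concat(Layer):
    def __init__(self, left, right):
        self.left, self.right = left, right

norm, prefix, suffix, shape = (Layer(n) for n in "norm prefix suffix shape".split())
tree = norm | prefix | suffix | shape
# Python applies `|` pairwise, so this builds a nested binary tree rather
# than the single flat node concatenate(norm, prefix, suffix, shape);
# the layers (and their weights) therefore iterate in a different order.
```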
* Xfail new tokenization test
* Put new alignment behind feature flag
* Move USE_ALIGN to top of the file [ci skip]
Co-authored-by: Ines Montani <ines@ines.io>
The `Matcher` in `merge_subtokens()` returns all possible subsequences
of `subtok`, so for sequences of two or more subtoks it's necessary to
filter the matches so that the retokenizer is only merging the longest
matches with no overlapping spans.
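A sketch of the fix, mirroring spaCy's `merge_subtokens()` with the filtering applied:
```python
from spacy.matcher import Matcher
from spacy.util import filter_spans

def merge_subtokens_sketch(doc, label="subtok"):
    merger = Matcher(doc.vocab)
    merger.add("SUBTOK", None, [{"DEP": label, "OP": "+"}])
    matches = merger(doc)
    # filter_spans keeps the longest non-overlapping matches, so a run of
    # two or more subtoks is merged once rather than per subsequence
    spans = filter_spans([doc[start : end + 1] for _, start, end in matches])
    with doc.retokenize() as retokenizer:
        for span in spans:
            retokenizer.merge(span)
    return doc
```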
* Add arch for MishWindowEncoder
* Support mish in tok2vec and conv window >=2
* Pass new tok2vec settings from parser
* Syntax error
* Fix tok2vec setting
* Fix registration of MishWindowEncoder
* Fix receptive field setting
* Fix mish arch
* Pass more options from parser
* Support more tok2vec options in pretrain
* Require thinc 7.3
* Add docs [ci skip]
* Require thinc 7.3.0.dev0 to run CI
* Run black
* Fix typo
* Update Thinc version
Co-authored-by: Ines Montani <ines@ines.io>
* Flag to ignore examples with mismatched raw/gold text
After #4525, we're seeing some alignment failures on our OntoNotes data. I think we actually have fixes for most of these cases.
In general it's better to fix the data, but it seems good to allow the GoldCorpus class to just skip cases where the raw text doesn't
match up to the gold words. I think previously we were silently ignoring these cases.
* Try to fix test on Python 2.7
The previous version worked with previous thinc, but only
because some thinc ops happened to have gpu/cpu compatible
implementations. It's better to call the right Ops instance.
* Fix get labels for textcat
* Fix char_embed for gpu
* Revert "Fix char_embed for gpu"
This reverts commit 055b9a9e85.
* Fix passing of cats in gold.pyx
* Revert "Match pop with append for training format (#4516)"
This reverts commit 8e7414dace.
* Fix popping gold parses
* Fix handling of cats in gold tuples
* Fix name
* Fix ner_multitask_objective script
* Add test for 4402
* trying to fix script - not successful yet
* match pop() with extend() to avoid changing the data
* few more pop-extend fixes
* reinsert deleted print statement
* fix print statement
* add last tested version
* append instead of extend
* add in few comments
* quick fix for 4402 + unit test
* fixing number of docs (not counting cats)
* more fixes
* fix len
* print tmp file instead of using data from examples dir
* print tmp file instead of using data from examples dir (2)
* Add work in progress
* Update analysis helpers and component decorator
* Fix porting of docstrings for Python 2
* Fix docstring stuff on Python 2
* Support meta factories when loading model
* Put auto pipeline analysis behind flag for now
* Analyse pipes on remove_pipe and replace_pipe
* Move analysis to root for now
We should find a better place for it, but for now it needs to live at the root to avoid circular imports
* Simplify decorator
Don't return a wrapped class and instead just write to the object
* Update existing components and factories
* Add condition in factory for classes vs. functions
* Add missing from_nlp classmethods
* Add "retokenizes" to printed overview
* Update assigns/requires declarations of builtins
* Only return data if no_print is enabled
* Use multiline table for overview
* Don't support Span
* Rewrite errors/warnings and move them to spacy.errors
* Implement new API for {Phrase}Matcher.add (backwards-compatible)
* Update docs
* Also update DependencyMatcher.add
* Update internals
* Rewrite tests to use new API
* Add basic check for common mistake
Raise error with suggestion if user likely passed in a pattern instead of a list of patterns
* Fix typo [ci skip]
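A minimal sketch of the new, backwards-compatible `Matcher.add` signature:
```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab)

patterns = [[{"LOWER": "hello"}, {"LOWER": "world"}]]
matcher.add("GREETING", patterns)             # new: a list of patterns
# matcher.add("GREETING", None, patterns[0])  # old form, still accepted
# passing a single pattern instead of a list now raises a helpful error
```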
* Update English tag_map
Update English tag_map based on this conversion table:
https://universaldependencies.org/tagset-conversion/en-penn-uposf.html
* Update German tag_map
Update German tag_map based on this conversion table:
https://universaldependencies.org/tagset-conversion/de-stts-uposf.html
* Add missing Tiger dependencies to glossary
* Add quotes to definition of TO
* Update POS/TAG tables in docs
Update POS/TAG tables for English and German docs using current
information generated from the tag_maps and GLOSSARY.
* Update warning that -PRON- is specific to English
* Revert docs to default JSON output with convert
* Revert "Revert docs to default JSON output with convert"
This reverts commit 6b78c048f1.
* Support train dict format as JSONL
* Add (overly simple) check for dict vs. tuple to read JSONL lines as
either train dicts or train tuples
* Extend JSON/JSONL roundtrip conversion tests using `docs_to_json()`
and `GoldCorpus.train_tuples`
* Revert docs to default JSON output with convert
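Roughly the (overly simple) dispatch described above, as a hedged sketch (the file path is illustrative):
```python
import srsly

examples = []
for line in srsly.read_jsonl("train.jsonl"):
    if isinstance(line, dict):
        examples.append(line)         # train dict format
    else:
        examples.append(tuple(line))  # train tuples format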
* raise specific error when removing a matcher rule that doesn't exist
* rephrasing
* goldparse init: allocate fields only if doc is not empty
* avoid zero length alloc in saving tokenizer cache
* avoid allocating zero length mem in matcher
* asserts to avoid allocating zero length mem
* fix zero-length allocation in matcher
* bump cymem version
* revert cymem version bump
* Free pointers in ActivationsC
* Restructure alloc/free for parser activations
* Rewrite/restructure to have allocation and free in parallel functions
in `_parser_model` rather than partially in `_parseC()` in `Parser`.
* Remove `resize_activations` from `_parser_model.pxd`.
* Create syntax_iterators.py
Replica of spacy/lang/fr/syntax_iterators.py
* Added import statements for SYNTAX_ITERATORS
* Create gustavengstrom.md
* Added "dobj" to list of labels in noun_chunks method and a test_noun_chunks method to the Swedish language model.
* Delete README-checkpoint.md
Co-authored-by: Gustav <gustav@davcon.se>
Co-authored-by: Ines Montani <ines@ines.io>
* Error for ill-formed input to iob_to_biluo()
Check for empty label in iob_to_biluo(), which can result from
ill-formed input.
* Check for empty NER label in debug-data
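A quick sketch of the conversion and the new error behavior:
```python
from spacy.gold import iob_to_biluo

print(iob_to_biluo(["O", "I-PER", "I-PER", "O"]))
# ['O', 'B-PER', 'L-PER', 'O']
# an empty label such as "I-" now raises a specific error
# instead of producing bogus tags
```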
* Replace MatchStruct with Entity
Replace MatchStruct with Entity since the existing Entity struct is
nearly identical.
* Replace Entity with more general SpanC
* Add missing int value option to top-level pattern validation in Matcher
* Adjust existing tests accordingly
* Add new test for valid pattern `{"LENGTH": int}`
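A minimal sketch of the newly validated pattern form:
```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab)
# a plain int value is now valid at the top level of a token pattern
matcher.add("THREE_CHARS", [[{"LENGTH": 3}]])
doc = nlp("One two three four")
print([doc[s:e].text for _, s, e in matcher(doc)])  # ['One', 'two']
```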
* fix overflow error on windows
* more documentation & logging fixes
* md fix
* 3 different limit parameters to play with execution time
* bug fixes for directory locations
* small fixes
* exclude dev test articles from prior probabilities stats
* small fixes
* filtering wikidata entities, removing numeric and meta items
* adding aliases from wikidata also to the KB
* fix adding WD aliases
* adding also new aliases to previously added entities
* fixing commas
* small doc fixes
* adding subclassof filtering
* append alias functionality in KB
* prevent appending the same entity-alias pair
* fix for appending WD aliases
* remove date filter
* remove unnecessary import
* small corrections and reformatting
* remove WD aliases for now (too slow)
* removing numeric entities from training and evaluation
* small fixes
* shortcut during prediction if there is only one candidate
* add counts and fscore logging, remove FP NER from evaluation
* fix entity_linker.predict to take docs instead of single sentences
* remove enumeration sentences from the WP dataset
* entity_linker.update to process full doc instead of single sentence
* spelling corrections and dump locations in readme
* NLP IO fix
* reading KB is unnecessary at the end of the pipeline
* small logging fix
* remove empty files
* Update util.filter_spans() to prefer earlier spans
* Add filter_spans test for first same-length span
* Update entity relation example to refer to util.filter_spans()
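A small sketch of the new tie-breaking behavior:
```python
import spacy
from spacy.util import filter_spans

nlp = spacy.blank("en")
doc = nlp("The quick brown fox jumps")
spans = [doc[1:4], doc[0:3]]  # overlapping spans of equal length
print([s.text for s in filter_spans(spans)])
# ['The quick brown']: on a length tie, the earlier span now wins
```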
* raise specific error when removing a matcher rule that doesn't exist
* rephrasing
* ensure attrs is NULL when nr_attr == 0 + several fixes to prevent OOB
This is basically stabbing blindly at the ghost match problem, but it at
least seems like there was a bug previously here, so this should
hopefully be an improvement, even if it doesn't fix the ghost match
problem.
* Only import pkg_resources where it's needed
Apparently it's really slow
* Use importlib_metadata for entry points
* Revert "Only import pkg_resources where it's needed"
This reverts commit 5ed8c03afa.
* Revert "Revert "Only import pkg_resources where it's needed""
This reverts commit 8b30b57957.
* Revert "Use importlib_metadata for entry points"
This reverts commit 9f071f5c40.
* Revert "Revert "Use importlib_metadata for entry points""
This reverts commit 02e12a17ec.
* Skip test that weirdly hangs
* Fix hanging test by using global
* Move prefix and suffix detection for URL_PATTERN
Move prefix and suffix detection for `URL_PATTERN` into the tokenizer.
Remove associated lookahead and lookbehind from `URL_PATTERN`.
Fix tokenization for Hungarian given new modified handling of prefixes
and suffixes.
* Match a wider range of URI schemes
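A hedged sketch of checking the pattern directly; that non-http schemes now match is an assumption based on the commit title:
```python
import re
from spacy.lang.tokenizer_exceptions import URL_PATTERN

url_re = re.compile(URL_PATTERN)
# with the wider scheme handling, non-http schemes should also match
for text in ["http://example.com", "ssh://login@server.com", "svn://example.com"]:
    print(text, bool(url_re.match(text)))
```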