Mirror of https://github.com/explosion/spaCy.git, synced 2024-12-25 09:26:27 +03:00
Commit 565e0eef73
To fix the slow tokenizer URL pattern (#4374) and let `token_match` take priority over prefixes and suffixes by default, this commit introduces a new tokenizer option: a token match pattern that is applied after prefixes and suffixes have been stripped, but before infixes are split.
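The ordering described above can be illustrated with a simplified sketch. This is not spaCy's actual implementation; the regexes and helper below are hypothetical stand-ins that only demonstrate where a URL-style pattern would slot in: the general `token_match` is checked first, prefixes and suffixes are peeled off, then the new post-affix pattern is checked before any infix splitting.

```python
import re

# Illustrative sketch only -- all patterns here are assumptions for
# demonstration, not spaCy's real prefix/suffix/infix definitions.
PREFIXES = re.compile(r"^[(\"']")              # leading punctuation
SUFFIXES = re.compile(r"[)\"'.,]$")            # trailing punctuation
INFIXES = re.compile(r"(-)")                   # split on hyphens
TOKEN_MATCH = re.compile(r"^\d+(\.\d+)?$")     # checked before affixes
URL_MATCH = re.compile(r"^https?://\S+$")      # checked after affixes

def tokenize_word(text):
    """Tokenize a single whitespace-delimited chunk (hypothetical helper)."""
    # 1. token_match takes priority over prefixes and suffixes.
    if TOKEN_MATCH.match(text):
        return [text]
    # 2. Strip prefixes and suffixes.
    prefixes, suffixes = [], []
    while text:
        m = PREFIXES.match(text)
        if m:
            prefixes.append(m.group())
            text = text[m.end():]
            continue
        m = SUFFIXES.search(text)
        if m:
            suffixes.insert(0, m.group())
            text = text[:m.start()]
            continue
        break
    # 3. The new option: a match applied after affixes, before infixes.
    if URL_MATCH.match(text) or TOKEN_MATCH.match(text):
        core = [text] if text else []
    else:
        # 4. Otherwise, split on infixes as usual.
        core = [t for t in INFIXES.split(text) if t]
    return prefixes + core + suffixes

print(tokenize_word('(https://example.com/a-b)'))
```

Without the post-affix check in step 3, the hyphen in the URL would be treated as an infix and the URL split apart; with it, the URL survives intact once surrounding punctuation is stripped.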
__init__.py
_tokenizer_exceptions_list.py
examples.py
lemmatizer.py
lex_attrs.py
punctuation.py
stop_words.py
syntax_iterators.py
tag_map.py
tokenizer_exceptions.py