spaCy/spacy directory listing (latest commit: 2016-09-25 14:49:53 +02:00)
Name | Last commit message | Last commit date
data/
de/ | Refactor so that the tokenizer data is read from Python data, rather than from disk | 2016-09-25 14:49:53 +02:00
en/ | Refactor so that the tokenizer data is read from Python data, rather than from disk | 2016-09-25 14:49:53 +02:00
fi/
it/
munge/
serialize/
syntax/ | Finish refactoring data loading | 2016-09-24 20:26:17 +02:00
tests/ | Refactor so that the tokenizer data is read from Python data, rather than from disk | 2016-09-25 14:49:53 +02:00
tokens/ | Allow entities to be set by Span, or by 4-tuple (with entity ID) | 2016-09-24 01:17:43 +02:00
zh/ | * Work on Chinese support | 2016-05-05 11:39:12 +02:00
__init__.pxd
__init__.py | Whitespace | 2016-09-24 22:17:01 +02:00
about.py | * Increment version | 2016-05-09 13:20:00 +02:00
attrs.pxd
attrs.pyx
cfile.pxd
cfile.pyx | Handle pathlib.Path objects in CFile | 2016-09-24 22:01:46 +02:00
deprecated.py | Finish refactoring data loading | 2016-09-24 20:26:17 +02:00
download.py | Add parameter to download() for application to not exit if a Model exists. The default behavior is unchanged. | 2016-09-14 10:04:09 -04:00
gold.pxd
gold.pyx | don't require read_json_file to expect particular annotations | 2016-05-02 15:29:30 +02:00
language.py | Refactor so that the tokenizer data is read from Python data, rather than from disk | 2016-09-25 14:49:53 +02:00
lemmatizer.py | Finish refactoring data loading | 2016-09-24 20:26:17 +02:00
lexeme.pxd
lexeme.pyx | * Fix issue #372: mistake in Lexeme rich comparison | 2016-05-12 12:58:57 +02:00
matcher.pyx | Finish refactoring data loading | 2016-09-24 20:26:17 +02:00
morphology.pxd
morphology.pyx
multi_words.py
orth.pxd
orth.pyx
parts_of_speech.pxd
parts_of_speech.pyx
scorer.py
strings.pxd
strings.pyx | remove ujson as default non-dev dependency (still works as fallback if installed), because ujson doesn't ship wheels | 2016-04-12 11:28:07 +02:00
structs.pxd | Initial, limited support for quantified patterns in Matcher, and tracking of ent_id attribute in Token and Span. The quantifiers need a lot more testing, and there are some known problems. The main known problem is that the zero-plus and one-plus quantifiers won't work if a token can match both the quantified pattern expression AND the tail of the match. | 2016-09-21 14:54:55 +02:00
symbols.pxd | German noun chunk iterator now doesn't return tokens more than once | 2016-05-03 16:58:59 +02:00
symbols.pyx | German noun chunk iterator now doesn't return tokens more than once | 2016-05-03 16:58:59 +02:00
tagger.pxd
tagger.pyx | Finish refactoring data loading | 2016-09-24 20:26:17 +02:00
tokenizer.pxd | Finish refactoring data loading | 2016-09-24 20:26:17 +02:00
tokenizer.pyx | Refactor so that the tokenizer data is read from Python data, rather than from disk | 2016-09-25 14:49:53 +02:00
typedefs.pxd
typedefs.pyx
util.py | Refactor so that the tokenizer data is read from Python data, rather than from disk | 2016-09-25 14:49:53 +02:00
vocab.pxd | Refactor so that the tokenizer data is read from Python data, rather than from disk | 2016-09-25 14:49:53 +02:00
vocab.pyx | Refactor so that the tokenizer data is read from Python data, rather than from disk | 2016-09-25 14:49:53 +02:00