| Name | Latest commit message | Commit date |
| --- | --- | --- |
| data | add data dir | 2015-11-18 11:48:55 +01:00 |
| de | remove unnecessary imports | 2016-05-02 17:33:22 +02:00 |
| en | remove deprecated LOCAL_DATA_DIR | 2016-04-05 11:25:54 +02:00 |
| fi | access model via sputnik | 2015-12-07 06:01:28 +01:00 |
| it | access model via sputnik | 2015-12-07 06:01:28 +01:00 |
| munge | * Fix Python3 problem in align_raw | 2015-07-28 16:06:53 +02:00 |
| serialize | * Whitespace | 2016-01-29 03:59:22 +01:00 |
| syntax | Refactor model for beam parser, to avoid conditionals on model type | 2016-07-29 19:33:01 +02:00 |
| tests | * Fix Issue #360: Tokenizer failed when the infix regex matched the start of the string while trying to tokenize multi-infix tokens. | 2016-05-09 13:23:47 +02:00 |
| tokens | * Fix Issue #375: noun phrase iteration results in index error if noun phrases are merged during the loop. Fix by accumulating the spans inside the noun_chunks property, allowing the Span index tricks to work. | 2016-05-20 10:14:06 +02:00 |
| zh | * Work on Chinese support | 2016-05-05 11:39:12 +02:00 |
| __init__.pxd | * Seems to be working after refactor. Need to wire up more POS tag features, and wire up save/load of POS tags. | 2014-10-24 02:23:42 +11:00 |
| __init__.py | * Register Chinese language in spacy/__init__.py | 2016-04-24 18:45:16 +02:00 |
| about.py | * Increment version | 2016-05-09 13:20:00 +02:00 |
| attrs.pxd | introduce lang field for LexemeC to hold language id | 2016-03-10 13:01:34 +01:00 |
| attrs.pyx | introduce lang field for LexemeC to hold language id | 2016-03-10 13:01:34 +01:00 |
| cfile.pxd | * Add cfile.pyx | 2015-07-23 01:10:36 +02:00 |
| cfile.pyx | * Fix CFile for Python2 | 2015-07-25 22:55:53 +02:00 |
| download.py | exit code 0 for when downloading a model that already was downloaded | 2016-07-13 16:22:14 -07:00 |
| gold.pxd | * Remove unused import | 2015-07-25 18:11:16 +02:00 |
| gold.pyx | Working NN, but very messy. Relies on BLIS. | 2016-07-20 16:28:02 +02:00 |
| language.py | * Change Language class to use a .pipeline attribute, instead of having the pipeline hard coded | 2016-05-17 16:55:42 +02:00 |
| lemmatizer.py | distinct load() and from_package() methods | 2016-01-16 10:00:57 +01:00 |
| lexeme.pxd | introduce lang field for LexemeC to hold language id | 2016-03-10 13:01:34 +01:00 |
| lexeme.pyx | * Fix issue #372: mistake in Lexeme rich comparison | 2016-05-12 12:58:57 +02:00 |
| matcher.pyx | * Make patterns argument to Matcher class optional | 2016-04-17 21:32:24 +02:00 |
| morphology.pxd | * Ensure Morphology can be pickled, to address Issue #125. | 2015-10-13 13:44:41 +11:00 |
| morphology.pyx | * Fix imports | 2016-01-19 03:36:51 +01:00 |
| multi_words.py | * Fix Issue #50: Python 3 compatibility of v0.80 | 2015-04-13 05:59:43 +02:00 |
| orth.pxd | remove text-unidecode dependency | 2016-02-24 08:01:59 +01:00 |
| orth.pyx | introduce lang field for LexemeC to hold language id | 2016-03-10 13:01:34 +01:00 |
| parts_of_speech.pxd | * Fix parts_of_speech now that symbols list has been reformed | 2015-10-13 13:44:40 +11:00 |
| parts_of_speech.pyx | * Fix NAMES list in spacy/parts_of_speech.pyx | 2015-10-13 14:18:45 +11:00 |
| scorer.py | * Accept punct_labels as an argument to the scorer | 2016-02-02 22:59:06 +01:00 |
| strings.pxd | remove internal redundancy and overhead from StringStore | 2016-03-24 15:25:27 +01:00 |
| strings.pyx | remove ujson as default non-dev dependency (still works as fallback if installed), because ujson doesn't ship wheels | 2016-04-12 11:28:07 +02:00 |
| structs.pxd | introduce lang field for LexemeC to hold language id | 2016-03-10 13:01:34 +01:00 |
| symbols.pxd | German noun chunk iterator now doesn't return tokens more than once | 2016-05-03 16:58:59 +02:00 |
| symbols.pyx | German noun chunk iterator now doesn't return tokens more than once | 2016-05-03 16:58:59 +02:00 |
| tagger.pxd | Working NN, but very messy. Relies on BLIS. | 2016-07-20 16:28:02 +02:00 |
| tagger.pyx | Working NN, but very messy. Relies on BLIS. | 2016-07-20 16:28:02 +02:00 |
| tokenizer.pxd | * Fix bug in tokenizer that caused new tokens to be added for affixes | 2016-02-21 23:17:47 +00:00 |
| tokenizer.pyx | * Fix Issue #360: Tokenizer failed when the infix regex matched the start of the string while trying to tokenize multi-infix tokens. | 2016-05-09 13:23:47 +02:00 |
| typedefs.pxd | * Fix type declarations for attr_t. Remove unused id_t. | 2015-07-18 22:39:57 +02:00 |
| typedefs.pyx | * Move POS tag definitions to parts_of_speech.pxd | 2015-01-25 16:31:07 +11:00 |
| util.py | Fix get_lang_class parsing (take 2) | 2016-05-16 16:40:31 -07:00 |
| vocab.pxd | * Start trying to pickle Vocab | 2015-10-13 13:44:41 +11:00 |
| vocab.pyx | Merge pull request #306 from wbwseeker/german_noun_chunks | 2016-04-08 00:54:24 +10:00 |