spaCy/tests
Latest commit: 2014-12-09 16:08:17 +11:00
| File | Last commit message | Last commit date |
| --- | --- | --- |
| _depr_group_by.py | * Refactor around Word objects, adapting tests. Tests passing, except for string views. | 2014-08-23 19:55:06 +02:00 |
| depr_test_ner.py | * Add WordNet lemmatizer | 2014-12-08 01:39:13 +11:00 |
| my_test.py | * Initial commit. Tests passing for punctuation handling. Need contractions, file transport, tokenize function, etc. | 2014-07-05 20:51:42 +02:00 |
| sun.tokens | * Working tokenization. en doesn't match PTB perfectly. Need to reorganize before adding more schemes. | 2014-07-07 01:15:59 +02:00 |
| sun.txt | * Working tokenization. en doesn't match PTB perfectly. Need to reorganize before adding more schemes. | 2014-07-07 01:15:59 +02:00 |
| test_align.py | * Commit outstanding tests | 2014-11-12 23:24:32 +11:00 |
| test_asciify.py | * Refactor to use tokens class. | 2014-09-10 18:27:44 +02:00 |
| test_canon_case.py | * Rewriting Lexeme serialization. | 2014-10-29 23:19:38 +11:00 |
| test_contractions.py | * Work on fixing special-cases, reading them in as JSON objects so that they can specify lemmas | 2014-12-09 14:48:01 +11:00 |
| test_detokenize.py | * Add detokenize method and test | 2014-10-18 18:07:29 +11:00 |
| test_emoticons.py | * Add false positive test for emoticons | 2014-12-09 16:08:17 +11:00 |
| test_flag_features.py | * Switch to new data model, tests passing | 2014-10-10 08:11:31 +11:00 |
| test_infix.py | * Comment out tests of hyphenation, while we decide what hyphenation policy should be. | 2014-11-05 02:03:22 +11:00 |
| test_intern.py | * Make `StringStore.__getitem__` accept unicode-typed keys. | 2014-12-03 01:33:20 +11:00 |
| test_is_punct.py | * Switch to new data model, tests passing | 2014-10-10 08:11:31 +11:00 |
| test_iter_lexicon.py | * Add test to make sure iterating over the lexicon isn't broken | 2014-12-08 21:12:51 +11:00 |
| test_lemmatizer.py | * Add WordNet lemmatizer | 2014-12-08 01:39:13 +11:00 |
| test_lexeme_flags.py | * Load the lexicon before we check flag values | 2014-12-09 15:18:43 +11:00 |
| test_number.py | * Add tests for like_number | 2014-11-02 13:21:39 +11:00 |
| test_only_punct.py | * Update tests | 2014-11-03 00:23:04 +11:00 |
| test_post_punct.py | * Add test for '' in punct | 2014-11-02 21:24:09 +11:00 |
| test_pre_punct.py | * Basic punct tests updated and passing | 2014-08-27 19:38:57 +02:00 |
| test_rules.py | * Refactor spacy so that chunks return arrays of lexemes, so that there is properly one lexeme per word. | 2014-08-18 19:14:00 +02:00 |
| test_shape.py | * Upd shape test | 2014-11-07 04:42:54 +11:00 |
| test_special_affix.py | * Refactor tokenization, enable cache, and ensure we look up specials correctly even when there's confusing punctuation surrounding the token. | 2014-09-16 18:01:46 +02:00 |
| test_string_loading.py | * Large refactor, particularly to Python API | 2014-10-24 00:59:17 +11:00 |
| test_surround_punct.py | * Basic punct tests updated and passing | 2014-08-27 19:38:57 +02:00 |
| test_tokenizer.py | * Work on fixing special-cases, reading them in as JSON objects so that they can specify lemmas | 2014-12-09 14:48:01 +11:00 |
| test_tokens_from_list.py | * Commit outstanding tests | 2014-11-12 23:24:32 +11:00 |
| test_urlish.py | * Update tests | 2014-11-03 00:23:04 +11:00 |
| test_vocab.py | * Upd tests for tighter interface | 2014-10-30 18:15:30 +11:00 |
| test_whitespace.py | * Have tokenizer emit tokens for whitespace other than single spaces | 2014-10-14 20:25:57 +11:00 |
| test_wiki_sun.py | * Pass tests. Need to implement more feature functions. | 2014-08-30 20:36:06 +02:00 |
| tokenizer.sed | * Working tokenization. en doesn't match PTB perfectly. Need to reorganize before adding more schemes. | 2014-07-07 01:15:59 +02:00 |