spaCy/tests
Latest commit: 2014-11-02 21:24:09 +11:00
File | Last commit message | Date
_depr_group_by.py | Refactor around Word objects, adapting tests. Tests passing, except for string views. | 2014-08-23 19:55:06 +02:00
my_test.py | Initial commit. Tests passing for punctuation handling. Need contractions, file transport, tokenize function, etc. | 2014-07-05 20:51:42 +02:00
sun.tokens | Working tokenization. en doesn't match PTB perfectly. Need to reorganize before adding more schemes. | 2014-07-07 01:15:59 +02:00
sun.txt | Working tokenization. en doesn't match PTB perfectly. Need to reorganize before adding more schemes. | 2014-07-07 01:15:59 +02:00
test_asciify.py | Refactor to use tokens class. | 2014-09-10 18:27:44 +02:00
test_canon_case.py | Rewriting Lexeme serialization. | 2014-10-29 23:19:38 +11:00
test_contractions.py | Rewriting Lexeme serialization. | 2014-10-29 23:19:38 +11:00
test_detokenize.py | Add detokenize method and test | 2014-10-18 18:07:29 +11:00
test_emoticons.py | Add tests for emoticon tokenization | 2014-11-02 13:22:14 +11:00
test_flag_features.py | Switch to new data model, tests passing | 2014-10-10 08:11:31 +11:00
test_infix.py | Test hyphenation etc | 2014-10-14 20:26:16 +11:00
test_intern.py | Add tests for string intern | 2014-10-23 20:47:06 +11:00
test_is_punct.py | Switch to new data model, tests passing | 2014-10-10 08:11:31 +11:00
test_lexeme_flags.py | Upd tests for tighter interface | 2014-10-30 18:15:30 +11:00
test_non_sparse.py | Rewriting Lexeme serialization. | 2014-10-29 23:19:38 +11:00
test_number.py | Add tests for like_number | 2014-11-02 13:21:39 +11:00
test_only_punct.py | Refactor tokenization, enable cache, and ensure we look up specials correctly even when there's confusing punctuation surrounding the token. | 2014-09-16 18:01:46 +02:00
test_post_punct.py | Add test for '' in punct | 2014-11-02 21:24:09 +11:00
test_pre_punct.py | Basic punct tests updated and passing | 2014-08-27 19:38:57 +02:00
test_read_pos.py | Add test for reading in POS tags | 2014-10-22 10:18:43 +11:00
test_rules.py | Refactor spacy so that chunks return arrays of lexemes, so that there is properly one lexeme per word. | 2014-08-18 19:14:00 +02:00
test_shape.py | Add tests for word shape features | 2014-09-01 23:26:17 +02:00
test_special_affix.py | Refactor tokenization, enable cache, and ensure we look up specials correctly even when there's confusing punctuation surrounding the token. | 2014-09-16 18:01:46 +02:00
test_string_loading.py | Large refactor, particularly to Python API | 2014-10-24 00:59:17 +11:00
test_surround_punct.py | Basic punct tests updated and passing | 2014-08-27 19:38:57 +02:00
test_tokenizer.py | Upd tests for tighter interface | 2014-10-30 18:15:30 +11:00
test_urlish.py | Add tests for like_url | 2014-11-02 13:21:57 +11:00
test_vocab.py | Upd tests for tighter interface | 2014-10-30 18:15:30 +11:00
test_whitespace.py | Have tokenizer emit tokens for whitespace other than single spaces | 2014-10-14 20:25:57 +11:00
test_wiki_sun.py | Pass tests. Need to implement more feature functions. | 2014-08-30 20:36:06 +02:00
tokenizer.sed | Working tokenization. en doesn't match PTB perfectly. Need to reorganize before adding more schemes. | 2014-07-07 01:15:59 +02:00