File | Last commit message | Last commit date
_depr_group_by.py | Refactor around Word objects, adapting tests. Tests passing, except for string views. | 2014-08-23 19:55:06 +02:00
my_test.py | Initial commit. Tests passing for punctuation handling. Need contractions, file transport, tokenize function, etc. | 2014-07-05 20:51:42 +02:00
sun.tokens | Working tokenization. en doesn't match PTB perfectly. Need to reorganize before adding more schemes. | 2014-07-07 01:15:59 +02:00
sun.txt | Working tokenization. en doesn't match PTB perfectly. Need to reorganize before adding more schemes. | 2014-07-07 01:15:59 +02:00
test_asciify.py | Refactor to use tokens class. | 2014-09-10 18:27:44 +02:00
test_canon_case.py | Add tests for canon_case | 2014-09-01 23:26:49 +02:00
test_contractions.py | Fixed contraction tests. Need to correct problem with the way case stats and tag stats are supposed to work. | 2014-08-27 20:22:33 +02:00
test_flag_features.py | Work on tests for flag features | 2014-09-01 23:41:43 +02:00
test_hashing.py | PointerHash working, efficiency is good. 6-7 mins | 2014-09-13 16:43:59 +02:00
test_non_sparse.py | Add tests for non_sparse string transform | 2014-09-01 23:27:31 +02:00
test_orth.py | Refactor to use tokens class. | 2014-09-10 18:27:44 +02:00
test_post_punct.py | Basic punct tests updated and passing | 2014-08-27 19:38:57 +02:00
test_pre_punct.py | Basic punct tests updated and passing | 2014-08-27 19:38:57 +02:00
test_rules.py | Refactor spacy so that chunks return arrays of lexemes, so that there is properly one lexeme per word. | 2014-08-18 19:14:00 +02:00
test_shape.py | Add tests for word shape features | 2014-09-01 23:26:17 +02:00
test_surround_punct.py | Basic punct tests updated and passing | 2014-08-27 19:38:57 +02:00
test_tokenizer.py | Add new tests for fancier tokenization cases | 2014-09-15 06:31:58 +02:00
test_vocab.py | Avoid testing for object identity | 2014-09-10 20:58:30 +02:00
test_wiki_sun.py | Pass tests. Need to implement more feature functions. | 2014-08-30 20:36:06 +02:00
tokenizer.sed | Working tokenization. en doesn't match PTB perfectly. Need to reorganize before adding more schemes. | 2014-07-07 01:15:59 +02:00