Commit Graph

6494 Commits

Author  SHA1  Message  Date
Matthew Honnibal  01469b0888  * Refactor spacy so that chunks return arrays of lexemes, so that there is properly one lexeme per word.  2014-08-18 19:14:00 +02:00
Matthew Honnibal  b94c9b72c9  * WordTree in use. Need to reform the way chunks are handled. Should be properly one Lexeme per word, with split points being the things that are cached.  2014-08-16 20:10:22 +02:00
Matthew Honnibal  34b68a18ab  * Progress to getting WordTree working. Tests pass, but so far it's slower.  2014-08-16 19:59:38 +02:00
Matthew Honnibal  865cacfaf7  * Remove dependence on murmurhash  2014-08-16 17:37:09 +02:00
Matthew Honnibal  515d41d325  * Restore string saving to spacy  2014-08-16 16:09:24 +02:00
Matthew Honnibal  36073b89fe  * Restore unicode, work on improving string storage.  2014-08-16 14:35:34 +02:00
Matthew Honnibal  a225ca5b0d  * Refactoring tokenizer  2014-08-16 03:22:03 +02:00
Matthew Honnibal  213a440ffc  * Add string decode and encode helpers to string_tools  2014-08-15 23:57:27 +02:00
Matthew Honnibal  f11c8e22eb  * Remove happax stuff  2014-08-02 22:11:28 +01:00
Matthew Honnibal  d6e07aa922  * Switch to 32bit hash for strings  2014-08-02 21:51:52 +01:00
Matthew Honnibal  365a2af756  * Restore happax. commit uncommited work  2014-08-02 21:27:03 +01:00
Matthew Honnibal  6319ff0f22  * Add length property  2014-08-02 21:26:44 +01:00
Matthew Honnibal  18fb76b2c4  * Removed happax. Not sure if good idea.  2014-08-02 20:53:35 +01:00
Matthew Honnibal  edd38a84b1  * Removing happax stuff. Added length  2014-08-02 20:45:12 +01:00
Matthew Honnibal  fc7c10d7f8  * Ugly but seemingly working fix to the token memory leak  2014-08-01 09:43:19 +01:00
Matthew Honnibal  c7bb6b329c  * Don't free clobbered lexemes, as they might be part of a tail  2014-08-01 08:22:38 +01:00
Matthew Honnibal  c48214460e  * Free lexemes clobbered as happaxes  2014-08-01 07:40:20 +01:00
Matthew Honnibal  5b6457e80e  * Free lexemes clobbered as happaxes  2014-08-01 07:37:50 +01:00
Matthew Honnibal  d8cb2288ce  * Roll back to using murmurhash2 for now  2014-08-01 07:28:47 +01:00
Matthew Honnibal  f39211b2b1  * Add FixedTable for hashing  2014-08-01 07:27:21 +01:00
Matthew Honnibal  a44e15f623  * Hack around lack of distribution features for now.  2014-07-31 18:24:51 +01:00
Matthew Honnibal  4cb88c940b  * Fix memory leak in tokenizer, caused by having a fixed vocab.  2014-07-31 18:19:38 +01:00
Matthew Honnibal  5b81ee716f  * Use a sparse_hash_map to store happax vocab items, with a max size.  2014-07-31 17:40:43 +01:00
Matthew Honnibal  b9016c4633  * Switch to using sparsehash and murmurhash libraries out of pip  2014-07-25 15:47:27 +01:00
Matthew Honnibal  a895fe5ddb  * Upd from spacy  2014-07-23 17:35:18 +01:00
Matthew Honnibal  87bf205b82  * Fix open apostrophe bug  2014-07-07 23:26:01 +02:00
Matthew Honnibal  571808a274  Group-by seems to be working  2014-07-07 20:27:02 +02:00
Matthew Honnibal  80b36f9f27  * 710k words per second for counts  2014-07-07 19:12:19 +02:00
Matthew Honnibal  057c21969b  * Refactor for string view features. Working on setting up flags and enums.  2014-07-07 16:58:48 +02:00
Matthew Honnibal  f1bcbd4c4e  * Reorganized code to accomodate Tokens class. Need string views before group_by and count_by can be done well.  2014-07-07 12:47:21 +02:00
Matthew Honnibal  6668e44961  * Whitespace  2014-07-07 08:15:44 +02:00
Matthew Honnibal  0074ae2fc0  * Switch to dynamically allocating array, based on the document length  2014-07-07 08:05:29 +02:00
Matthew Honnibal  ff1869ff07  * Fixed major efficiency problem, from not quite grokking pass by reference in cython c++  2014-07-07 07:36:43 +02:00
Matthew Honnibal  0c76143b72  * Give value for assert  2014-07-07 05:10:46 +02:00
Matthew Honnibal  e244739dfe  * Fix ptb tokenization  2014-07-07 05:10:09 +02:00
Matthew Honnibal  dc20500920  * Remove cpp files  2014-07-07 05:09:05 +02:00
Matthew Honnibal  25849fc926  * Generalize tokenization rules to capitals  2014-07-07 05:07:21 +02:00
Matthew Honnibal  df0458001d  * Begin work on full PTB-compatible English tokenization  2014-07-07 04:29:24 +02:00
Matthew Honnibal  d5bef02c72  * Reorganized, moving language-independent stuff to spacy. The functions in spacy ask for the dictionaries and split function on input, but the language-specific modules are curried versions that use the globals  2014-07-07 04:21:06 +02:00
Matthew Honnibal  a62c38e1ef  * Working tokenization. en doesn't match PTB perfectly. Need to reorganize before adding more schemes.  2014-07-07 01:15:59 +02:00
Matthew Honnibal  4e79446dc2  * Reading in tokenization rules correctly. Passing tests.  2014-07-07 00:02:55 +02:00
Matthew Honnibal  72159e7011  * Fixes to tokenization. Now segment sequences of the same punctuation.  2014-07-06 19:28:42 +02:00
Matthew Honnibal  e98e97d483  * Possessive test passing  2014-07-06 18:35:55 +02:00
Matthew Honnibal  556f6a18ca  * Initial commit. Tests passing for punctuation handling. Need contractions, file transport, tokenize function, etc.  2014-07-05 20:51:42 +02:00
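
Note on the hapax-cache commits (4cb88c940b, 5b81ee716f): a "hapax" (spelled "happax" in the messages) is a string the tokenizer has so far seen only once. The fix those messages describe is to keep such one-off entries in a separate hash table with a maximum size, so the cached vocab cannot grow without bound on streaming input. The C++ sketch below illustrates that idea only: std::unordered_map stands in for the google::sparse_hash_map named in the commits, and the Lexeme struct, the cap, the promote-on-second-sight rule, and the clear-when-full eviction are illustrative assumptions, not spaCy's actual code.

    #include <cstddef>
    #include <string>
    #include <unordered_map>

    // Illustrative stand-in for spaCy's lexeme data; real lexemes carry far more.
    struct Lexeme {
        std::string string;
    };

    class Vocab {
    public:
        explicit Vocab(std::size_t max_hapax) : max_hapax_(max_hapax) {}

        // Look up (or create) the lexeme for a string. Strings seen only once so
        // far live in the size-capped hapax_ table, so memory stays bounded no
        // matter how much text is tokenized.
        Lexeme* get(const std::string& s) {
            auto it = lexemes_.find(s);
            if (it != lexemes_.end())
                return &it->second;
            auto hit = hapax_.find(s);
            if (hit != hapax_.end()) {
                // Seen a second time: promote into the permanent table.
                auto promoted = lexemes_.emplace(s, hit->second);
                hapax_.erase(hit);
                return &promoted.first->second;
            }
            if (hapax_.size() >= max_hapax_) {
                // Crude eviction: drop the whole rare-word cache. Anything still
                // pointing at an evicted entry becomes dangling, which is the
                // lifetime question the "clobbered lexemes" commits wrestle with.
                hapax_.clear();
            }
            return &hapax_.emplace(s, Lexeme{s}).first->second;
        }

    private:
        std::size_t max_hapax_;
        std::unordered_map<std::string, Lexeme> lexemes_;  // permanent vocab
        std::unordered_map<std::string, Lexeme> hapax_;    // capped once-seen cache
    };

The sparsehash library's sparse_hash_map, which the commits pull in from pip, is presumably chosen for its very low per-entry memory overhead, which matters most for exactly this kind of large rare-word table.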
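
Note on commit ff1869ff07: the "pass by reference" issue it mentions is the general C++ one. When a wrapped C++ function takes a container by value, every call copies the whole container; a (const) reference shares the caller's object instead. The sketch below is a minimal illustration of that difference under those assumptions, not the original Cython code.

    #include <cstddef>
    #include <string>
    #include <vector>

    // Taking the argument by value: every call copies the whole vector and all
    // of the strings in it.
    std::size_t count_by_value(std::vector<std::string> tokens) {
        return tokens.size();
    }

    // Taking the argument by const reference: no copy is made; the function
    // reads the caller's vector directly.
    std::size_t count_by_reference(const std::vector<std::string>& tokens) {
        return tokens.size();
    }

In Cython the same distinction is made in the extern declaration: a parameter declared as vector[string] is passed by value (copied), while vector[string]& is passed by reference.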