Commit Graph

9262 Commits

Author            SHA1        Message                                                              Date
Matthew Honnibal  782806df08  * Moving to Word objects in place of the Lexeme struct.  2014-08-22 17:28:23 +02:00
Matthew Honnibal  47fbd0475a  * Replace the use of dense_hash_map with Python dict  2014-08-22 17:13:09 +02:00
Matthew Honnibal  e289896603  * Fix ptb3 module  2014-08-22 16:36:17 +02:00
Matthew Honnibal  89d6faa9c9  * Move en_ptb to ptb3  2014-08-22 04:24:05 +02:00
Matthew Honnibal  07ecf5d2f4  * Fixed group_by, removed idea of general attr_of function.  2014-08-22 00:02:37 +02:00
Matthew Honnibal  811b7a6b91  * Struggling with arbitrary attr access...  2014-08-21 23:49:14 +02:00
Matthew Honnibal  314658b31c  * Improve module docstring  2014-08-21 18:42:47 +02:00
Matthew Honnibal  d10993f41a  * More docs work  2014-08-21 16:37:13 +02:00
Matthew Honnibal  248cbb6d07  * Update doc strings  2014-08-21 03:29:15 +02:00
Matthew Honnibal  76afbd7d69  * Remove compiled orthography file  2014-08-20 17:04:07 +02:00
Matthew Honnibal  f39dcb1d89  * Add orthography  2014-08-20 17:03:44 +02:00
Matthew Honnibal  a78ad4152d  * Broken version being refactored for docs  2014-08-20 13:39:39 +02:00
Matthew Honnibal  5fddb8d165  * Working refactor, with updated data model for Lexemes  2014-08-19 04:21:20 +02:00
Matthew Honnibal  3379d7a571  * Reforming data model for lexemes  2014-08-19 02:40:37 +02:00
Matthew Honnibal  ab9b0daabf  * Whitespace  2014-08-18 23:21:49 +02:00
Matthew Honnibal  1b71cbfe28  * Roll back to using unicode, and never Py_UNICODE. No dependence on murmurhash either.  2014-08-18 20:48:48 +02:00
Matthew Honnibal  bbf9a2c944  * Working version that uses arrays for chunks, which should be more memory efficient  2014-08-18 20:23:54 +02:00
Matthew Honnibal  8d3f6082be  * Working version, adding improvements  2014-08-18 19:59:59 +02:00
Matthew Honnibal  01469b0888  * Refactor spacy so that chunks return arrays of lexemes, so that there is properly one lexeme per word.  2014-08-18 19:14:00 +02:00
Matthew Honnibal  b94c9b72c9  * WordTree in use. Need to reform the way chunks are handled. Should be properly one Lexeme per word, with split points being the things that are cached.  2014-08-16 20:10:22 +02:00
Matthew Honnibal  34b68a18ab  * Progress to getting WordTree working. Tests pass, but so far it's slower.  2014-08-16 19:59:38 +02:00
Matthew Honnibal  865cacfaf7  * Remove dependence on murmurhash  2014-08-16 17:37:09 +02:00
Matthew Honnibal  515d41d325  * Restore string saving to spacy  2014-08-16 16:09:24 +02:00
Matthew Honnibal  36073b89fe  * Restore unicode, work on improving string storage.  2014-08-16 14:35:34 +02:00
Matthew Honnibal  a225ca5b0d  * Refactoring tokenizer  2014-08-16 03:22:03 +02:00
Matthew Honnibal  213a440ffc  * Add string decode and encode helpers to string_tools  2014-08-15 23:57:27 +02:00
Matthew Honnibal  f11c8e22eb  * Remove happax stuff  2014-08-02 22:11:28 +01:00
Matthew Honnibal  d6e07aa922  * Switch to 32bit hash for strings  2014-08-02 21:51:52 +01:00
Matthew Honnibal  365a2af756  * Restore happax. commit uncommited work  2014-08-02 21:27:03 +01:00
Matthew Honnibal  6319ff0f22  * Add length property  2014-08-02 21:26:44 +01:00
Matthew Honnibal  18fb76b2c4  * Removed happax. Not sure if good idea.  2014-08-02 20:53:35 +01:00
Matthew Honnibal  edd38a84b1  * Removing happax stuff. Added length  2014-08-02 20:45:12 +01:00
Matthew Honnibal  fc7c10d7f8  * Ugly but seemingly working fix to the token memory leak  2014-08-01 09:43:19 +01:00
Matthew Honnibal  c7bb6b329c  * Don't free clobbered lexemes, as they might be part of a tail  2014-08-01 08:22:38 +01:00
Matthew Honnibal  c48214460e  * Free lexemes clobbered as happaxes  2014-08-01 07:40:20 +01:00
Matthew Honnibal  5b6457e80e  * Free lexemes clobbered as happaxes  2014-08-01 07:37:50 +01:00
Matthew Honnibal  d8cb2288ce  * Roll back to using murmurhash2 for now  2014-08-01 07:28:47 +01:00
Matthew Honnibal  f39211b2b1  * Add FixedTable for hashing  2014-08-01 07:27:21 +01:00
Matthew Honnibal  a44e15f623  * Hack around lack of distribution features for now.  2014-07-31 18:24:51 +01:00
Matthew Honnibal  4cb88c940b  * Fix memory leak in tokenizer, caused by having a fixed vocab.  2014-07-31 18:19:38 +01:00
Matthew Honnibal  5b81ee716f  * Use a sparse_hash_map to store happax vocab items, with a max size.  2014-07-31 17:40:43 +01:00
Matthew Honnibal  b9016c4633  * Switch to using sparsehash and murmurhash libraries out of pip  2014-07-25 15:47:27 +01:00
Matthew Honnibal  a895fe5ddb  * Upd from spacy  2014-07-23 17:35:18 +01:00
Matthew Honnibal  87bf205b82  * Fix open apostrophe bug  2014-07-07 23:26:01 +02:00
Matthew Honnibal  571808a274  Group-by seems to be working  2014-07-07 20:27:02 +02:00
Matthew Honnibal  80b36f9f27  * 710k words per second for counts  2014-07-07 19:12:19 +02:00
Matthew Honnibal  057c21969b  * Refactor for string view features. Working on setting up flags and enums.  2014-07-07 16:58:48 +02:00
Matthew Honnibal  f1bcbd4c4e  * Reorganized code to accomodate Tokens class. Need string views before group_by and count_by can be done well.  2014-07-07 12:47:21 +02:00
Matthew Honnibal  6668e44961  * Whitespace  2014-07-07 08:15:44 +02:00
Matthew Honnibal  0074ae2fc0  * Switch to dynamically allocating array, based on the document length  2014-07-07 08:05:29 +02:00
Matthew Honnibal  ff1869ff07  * Fixed major efficiency problem, from not quite grokking pass by reference in cython c++  2014-07-07 07:36:43 +02:00
Matthew Honnibal  0c76143b72  * Give value for assert  2014-07-07 05:10:46 +02:00
Matthew Honnibal  e244739dfe  * Fix ptb tokenization  2014-07-07 05:10:09 +02:00
Matthew Honnibal  dc20500920  * Remove cpp files  2014-07-07 05:09:05 +02:00
Matthew Honnibal  25849fc926  * Generalize tokenization rules to capitals  2014-07-07 05:07:21 +02:00
Matthew Honnibal  df0458001d  * Begin work on full PTB-compatible English tokenization  2014-07-07 04:29:24 +02:00
Matthew Honnibal  d5bef02c72  * Reorganized, moving language-independent stuff to spacy. The functions in spacy ask for the dictionaries and split function on input, but the language-specific modules are curried versions that use the globals  2014-07-07 04:21:06 +02:00
Matthew Honnibal  a62c38e1ef  * Working tokenization. en doesn't match PTB perfectly. Need to reorganize before adding more schemes.  2014-07-07 01:15:59 +02:00
Matthew Honnibal  4e79446dc2  * Reading in tokenization rules correctly. Passing tests.  2014-07-07 00:02:55 +02:00
Matthew Honnibal  72159e7011  * Fixes to tokenization. Now segment sequences of the same punctuation.  2014-07-06 19:28:42 +02:00
Matthew Honnibal  e98e97d483  * Possessive test passing  2014-07-06 18:35:55 +02:00
Matthew Honnibal  556f6a18ca  * Initial commit. Tests passing for punctuation handling. Need contractions, file transport, tokenize function, etc.  2014-07-05 20:51:42 +02:00