Commit Graph

165 Commits

Author SHA1 Message Date
Matthew Honnibal  13909a2e24  * Rewriting Lexeme serialization.  2014-10-29 23:19:38 +11:00
Matthew Honnibal  234d49bf4d  * Seems to be working after refactor. Need to wire up more POS tag features, and wire up save/load of POS tags.  2014-10-24 02:23:42 +11:00
Matthew Honnibal  08ce602243  * Large refactor, particularly to Python API  2014-10-24 00:59:17 +11:00
Matthew Honnibal  7baef5b7ff  * Fix padding on tokens  2014-10-23 04:01:17 +11:00
Matthew Honnibal  96b835a3d4  * Upd for refactored Tokens class. Now gets 95.74, 185ms training on swbd_wsj_ewtb, eval on onto_web, Google POS tags.  2014-10-23 03:20:02 +11:00
Matthew Honnibal  e5e951ae67  * Remove the feature array stuff from Tokens class, and replace vector with array-based implementation, with padding.  2014-10-23 01:57:59 +11:00
Matthew Honnibal  ea1d4a81eb  * Refactoring get_atoms, improving tokens API  2014-10-22 13:10:56 +11:00
Matthew Honnibal  ad49e2482e  * Tagger now gets 97pc on wsj, parsing 19-21 in 500ms. Gets 92.7 on web text.  2014-10-22 12:57:06 +11:00
Matthew Honnibal  0a0e41f6c8  * Add prefix and suffix features  2014-10-22 12:56:09 +11:00
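Prefix and suffix features are typically just the first and last few characters of the word form, which help a tagger generalize over unseen words (e.g. the suffix "ing" suggests a verb). A minimal sketch of the idea; the function name and feature keys are illustrative, not spaCy's internals:

```python
def prefix_suffix_features(word, n=3):
    """Return simple prefix/suffix features for a word.

    Illustrative only: a real tagger interns these strings and
    combines them with many other features (previous tag, shape, etc.).
    """
    features = {f"prefix{i}": word[:i] for i in range(1, n + 1)}
    features.update({f"suffix{i}": word[-i:] for i in range(1, n + 1)})
    return features

print(prefix_suffix_features("walking"))
```

For "walking" this yields prefixes "w", "wa", "wal" and suffixes "g", "ng", "ing".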
Matthew Honnibal  7018b53d3a  * Improve array features in tokens  2014-10-22 12:55:42 +11:00
Matthew Honnibal  43d5964e13  * Add function to read detokenization rules  2014-10-22 12:54:59 +11:00
Matthew Honnibal  224bdae996  * Add POS utilities  2014-10-22 10:17:57 +11:00
Matthew Honnibal  5ebe14f353  * Add greedy pos tagger  2014-10-22 10:17:26 +11:00
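A greedy tagger moves left to right, committing to the highest-scoring tag at each position with no search or backtracking, which is what makes it fast. A toy sketch of the control flow, assuming a `score(word, prev_tag)` callable (in the real tagger this is a learned linear model over features like the prefixes and suffixes above):

```python
def greedy_tag(words, score):
    """Tag words left to right, greedily taking the best tag at each step."""
    tags = []
    prev = "<START>"
    for word in words:
        scores = score(word, prev)      # dict: tag -> score
        best = max(scores, key=scores.get)
        tags.append(best)
        prev = best                     # the greedy commitment
    return tags

# Toy scorer backed by a tiny lexicon; purely illustrative.
LEXICON = {"the": "DET", "dog": "NOUN", "barks": "VERB"}

def toy_score(word, prev_tag):
    return {LEXICON.get(word, "NOUN"): 1.0}

print(greedy_tag(["the", "dog", "barks"], toy_score))  # ['DET', 'NOUN', 'VERB']
```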
Matthew Honnibal  12742f4f83  * Add detokenize method and test  2014-10-18 18:07:29 +11:00
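Detokenization reverses the tokenizer's splits by merging adjacent tokens back together according to rules (e.g. rejoining "ca" + "n't" into "can't"). A sketch under an assumed rule format; the `<SEP>` marker convention here is illustrative, not necessarily the format the commit's rule files use:

```python
def detokenize(rules, tokens):
    """Merge token subsequences according to detokenization rules.

    Each rule is written like "ca<SEP>n't": the pieces around <SEP>
    must appear as adjacent tokens, and are joined back together.
    """
    parsed = [rule.split("<SEP>") for rule in rules]
    out = []
    i = 0
    while i < len(tokens):
        for pieces in parsed:
            if tokens[i:i + len(pieces)] == pieces:
                out.append("".join(pieces))
                i += len(pieces)
                break
        else:
            out.append(tokens[i])
            i += 1
    return out

print(detokenize(["ca<SEP>n't"], ["I", "ca", "n't", "wait"]))
```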
Matthew Honnibal  99f5e59286  * Have tokenizer emit tokens for whitespace other than single spaces  2014-10-14 20:25:57 +11:00
Matthew Honnibal  43743a5d63  * Work on efficiency  2014-10-14 18:22:41 +11:00
Matthew Honnibal  6fb42c4919  * Add offsets to Tokens class. Some changes to interfaces, and reorganization of spacy.Lang  2014-10-14 16:17:45 +11:00
Matthew Honnibal  2805068ca8  * Have tokens track tuples that record the start offset and pos tag as well as a lexeme pointer  2014-10-14 15:21:03 +11:00
Matthew Honnibal  65d3ead4fd  * Rename LexStr_casefix to LexStr_norm and LexInt_i to LexInt_id  2014-10-14 15:19:07 +11:00
Matthew Honnibal  868e558037  * Preparations in place to handle hyphenation etc  2014-10-10 20:23:23 +11:00
Matthew Honnibal  ff79dbac2e  * More slight cleaning for lang.pyx  2014-10-10 20:11:22 +11:00
Matthew Honnibal  3d82ed1e5e  * More slight cleaning for lang.pyx  2014-10-10 19:50:07 +11:00
Matthew Honnibal  02e948e7d5  * Remove counts stuff from Language class  2014-10-10 19:25:01 +11:00
Matthew Honnibal  71ee921055  * Slight cleaning of tokenizer code  2014-10-10 19:17:22 +11:00
Matthew Honnibal  59b41a9fd3  * Switch to new data model, tests passing  2014-10-10 08:11:31 +11:00
Matthew Honnibal  1b0e01d3d8  * Revising data model of lexeme. Compiles.  2014-10-09 19:53:30 +11:00
Matthew Honnibal  e40caae51f  * Update Lexicon class to expect a list of lexeme dict descriptions  2014-10-09 14:51:35 +11:00
Matthew Honnibal  51d75b244b  * Add serialize/deserialize functions for lexeme, transport to/from python dict.  2014-10-09 14:10:46 +11:00
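Round-tripping a lexeme through a plain Python dict makes it easy to save, load, and construct entries from data files. A minimal sketch of that pattern; the `Lexeme` fields here are a toy stand-in, not the actual struct layout:

```python
from dataclasses import dataclass, asdict

@dataclass
class Lexeme:
    """Toy stand-in for a lexeme entry; field names are illustrative."""
    id: int
    string: str
    prob: float = 0.0

    def to_dict(self):
        """Serialize to a plain dict, e.g. for JSON storage."""
        return asdict(self)

    @classmethod
    def from_dict(cls, d):
        """Reconstruct a Lexeme from its dict description."""
        return cls(**d)

lex = Lexeme(id=1, string="hello", prob=-7.5)
assert Lexeme.from_dict(lex.to_dict()) == lex
```

The dict form is also what a lexicon loader can consume directly, which matches the adjacent commit updating the Lexicon class to expect a list of lexeme dict descriptions.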
Matthew Honnibal  d73d89a2de  * Add i attribute to lexeme, giving lexemes sequential IDs.  2014-10-09 13:50:05 +11:00
Matthew Honnibal  096ef2b199  * Rename external hashing lib, from trustyc to preshed  2014-09-26 18:40:03 +02:00
Matthew Honnibal  11a346fd5e  * Remove hashing modules, which are now taken over by external lib  2014-09-26 18:39:40 +02:00
Matthew Honnibal  93505276ed  * Add German tokenizer files  2014-09-25 18:29:13 +02:00
Matthew Honnibal  2e44fa7179  * Add util.py  2014-09-25 18:26:22 +02:00
Matthew Honnibal  b15619e170  * Use PointerHash instead of locally provided _hashing module  2014-09-25 18:23:35 +02:00
Matthew Honnibal  ed446c67ad  * Add typedefs file  2014-09-17 23:10:32 +02:00
Matthew Honnibal  316a57c4be  * Remove own memory classes, which have now been broken out into their own package  2014-09-17 23:10:07 +02:00
Matthew Honnibal  ac522e2553  * Switch from own memory class to cymem, in pip  2014-09-17 23:09:24 +02:00
Matthew Honnibal  6266cac593  * Switch to using a Python ref counted gateway to malloc/free, to prevent memory leaks  2014-09-17 20:02:26 +02:00
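The idea behind a ref-counted gateway to malloc/free (the approach cymem's Pool later packaged) is to make a Python object own all the C allocations, so they are freed together when the owner is garbage-collected instead of relying on manual free() calls. A pure-Python sketch of the ownership pattern; here the "allocations" are bytearrays standing in for malloc'd pointers:

```python
class Pool:
    """Sketch of pool-owned allocation: everything allocated through the
    pool is released together when the pool itself is reclaimed, so a
    forgotten free() cannot leak memory.
    """
    def __init__(self):
        self.allocations = []

    def alloc(self, size):
        """Hand out a zeroed block and remember it for later cleanup."""
        block = bytearray(size)
        self.allocations.append(block)
        return block

    def __del__(self):
        # In the real Cython code, this is where free() would be called
        # on every outstanding pointer the pool still owns.
        self.allocations.clear()

pool = Pool()
buf = pool.alloc(64)
print(len(pool.allocations))
```

Because the pool's lifetime is managed by Python's reference counting, dropping the last reference to the pool releases every block it handed out.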
Matthew Honnibal  5a20dfc03e  * Add memory management code  2014-09-17 20:02:06 +02:00
Matthew Honnibal  0152831c89  * Refactor tokenization, enable cache, and ensure we look up specials correctly even when there's confusing punctuation surrounding the token.  2014-09-16 18:01:46 +02:00
Matthew Honnibal  143e51ec73  * Refactor tokenization, splitting it into a clearer life-cycle.  2014-09-16 13:16:02 +02:00
Matthew Honnibal  c396581a0b  * Fiddle with the way strings are interned in lexeme  2014-09-15 06:34:45 +02:00
Matthew Honnibal  0bb547ab98  * Fix memory error in cache, where entry wasn't being null-terminated. Various other changes, some good for performance  2014-09-15 06:34:10 +02:00
Matthew Honnibal  7959141d36  * Add a few abbreviations, to get tests to pass  2014-09-15 06:32:18 +02:00
Matthew Honnibal  d235299260  * Few nips and tucks to hash table  2014-09-15 05:03:44 +02:00
Matthew Honnibal  e68a431e5e  * Pass only the tokens vector to _tokenize, instead of the whole python object.  2014-09-15 04:01:38 +02:00
Matthew Honnibal  08cef75ffd  * Switch to using a heap-allocated vector in tokens  2014-09-15 03:46:14 +02:00
Matthew Honnibal  f77b7098c0  * Upd Tokens to use vector, with bounds checking.  2014-09-15 03:22:40 +02:00
Matthew Honnibal  0f6bf2a2ee  * Fix niggling memory error, which was caused by bug in the way tokens resized their internal vector.  2014-09-15 02:08:39 +02:00
Matthew Honnibal  df24e3708c  * Move EnglishTokens stuff to Tokens  2014-09-15 01:31:44 +02:00