Commit Graph

13976 Commits

Author SHA1 Message Date
Matthew Honnibal  df110476d5  * Update docs  2014-10-15 21:50:34 +11:00
Matthew Honnibal  849de654e7  * Add file for infix patterns  2014-10-14 20:26:43 +11:00
Matthew Honnibal  31aad7c08a  * Test hyphenation etc  2014-10-14 20:26:16 +11:00
Matthew Honnibal  99f5e59286  * Have tokenizer emit tokens for whitespace other than single spaces  2014-10-14 20:25:57 +11:00
Matthew Honnibal  43743a5d63  * Work on efficiency  2014-10-14 18:22:41 +11:00
Matthew Honnibal  6fb42c4919  * Add offsets to Tokens class. Some changes to interfaces, and reorganization of spacy.Lang  2014-10-14 16:17:45 +11:00
Matthew Honnibal  2805068ca8  * Have tokens track tuples that record the start offset and pos tag as well as a lexeme pointer  2014-10-14 15:21:03 +11:00
Matthew Honnibal  65d3ead4fd  * Rename LexStr_casefix to LexStr_norm and LexInt_i to LexInt_id  2014-10-14 15:19:07 +11:00
Matthew Honnibal  5abb194553  * Add semi-colon to suffix punct  2014-10-14 10:43:45 +11:00
Matthew Honnibal  868e558037  * Preparations in place to handle hyphenation etc  2014-10-10 20:23:23 +11:00
Matthew Honnibal  ff79dbac2e  * More slight cleaning for lang.pyx  2014-10-10 20:11:22 +11:00
Matthew Honnibal  3d82ed1e5e  * More slight cleaning for lang.pyx  2014-10-10 19:50:07 +11:00
Matthew Honnibal  02e948e7d5  * Remove counts stuff from Language class  2014-10-10 19:25:01 +11:00
Matthew Honnibal  71ee921055  * Slight cleaning of tokenizer code  2014-10-10 19:17:22 +11:00
Matthew Honnibal  59b41a9fd3  * Switch to new data model, tests passing  2014-10-10 08:11:31 +11:00
Matthew Honnibal  1b0e01d3d8  * Revising data model of lexeme. Compiles.  2014-10-09 19:53:30 +11:00
Matthew Honnibal  e40caae51f  * Update Lexicon class to expect a list of lexeme dict descriptions  2014-10-09 14:51:35 +11:00
Matthew Honnibal  51d75b244b  * Add serialize/deserialize functions for lexeme, transport to/from python dict.  2014-10-09 14:10:46 +11:00
Matthew Honnibal  d73d89a2de  * Add i attribute to lexeme, giving lexemes sequential IDs.  2014-10-09 13:50:05 +11:00
Matthew Honnibal  0c6402ab73  * Upd docs  2014-09-26 18:40:18 +02:00
Matthew Honnibal  096ef2b199  * Rename external hashing lib, from trustyc to preshed  2014-09-26 18:40:03 +02:00
Matthew Honnibal  11a346fd5e  * Remove hashing modules, which are now taken over by external lib  2014-09-26 18:39:40 +02:00
Matthew Honnibal  bfab6403bc  * Re-add docs, sorting out mess from gh-pages  2014-09-25 18:42:20 +02:00
Matthew Honnibal  aba4a7c7ea  * Remove ptb3 file from setup  2014-09-25 18:41:25 +02:00
Matthew Honnibal  bc460de171  * Add extra tests  2014-09-25 18:29:42 +02:00
Matthew Honnibal  93505276ed  * Add German tokenizer files  2014-09-25 18:29:13 +02:00
Matthew Honnibal  2e44fa7179  * Add util.py  2014-09-25 18:26:22 +02:00
Matthew Honnibal  c4cd3bc57a  * Add prefix and suffix data files  2014-09-25 18:24:52 +02:00
Matthew Honnibal  2d4e5ceafd  * Remove old docs stuff  2014-09-25 18:24:05 +02:00
Matthew Honnibal  b15619e170  * Use PointerHash instead of locally provided _hashing module  2014-09-25 18:23:35 +02:00
Matthew Honnibal  ed446c67ad  * Add typedefs file  2014-09-17 23:10:32 +02:00
Matthew Honnibal  316a57c4be  * Remove own memory classes, which have now been broken out into their own package  2014-09-17 23:10:07 +02:00
Matthew Honnibal  ac522e2553  * Switch from own memory class to cymem, in pip  2014-09-17 23:09:24 +02:00
Matthew Honnibal  6266cac593  * Switch to using a Python ref counted gateway to malloc/free, to prevent memory leaks  2014-09-17 20:02:26 +02:00
Matthew Honnibal  5a20dfc03e  * Add memory management code  2014-09-17 20:02:06 +02:00
Matthew Honnibal  0152831c89  * Refactor tokenization, enable cache, and ensure we look up specials correctly even when there's confusing punctuation surrounding the token.  2014-09-16 18:01:46 +02:00
Matthew Honnibal  143e51ec73  * Refactor tokenization, splitting it into a clearer life-cycle.  2014-09-16 13:16:02 +02:00
Matthew Honnibal  c396581a0b  * Fiddle with the way strings are interned in lexeme  2014-09-15 06:34:45 +02:00
Matthew Honnibal  0bb547ab98  * Fix memory error in cache, where entry wasn't being null-terminated. Various other changes, some good for performance  2014-09-15 06:34:10 +02:00
Matthew Honnibal  7959141d36  * Add a few abbreviations, to get tests to pass  2014-09-15 06:32:18 +02:00
Matthew Honnibal  db191361ee  * Add new tests for fancier tokenization cases  2014-09-15 06:31:58 +02:00
Matthew Honnibal  6fc06bfe2f  * Hack a hard-cased unit in to get a test to pass  2014-09-15 06:31:35 +02:00
Matthew Honnibal  d235299260  * Few nips and tucks to hash table  2014-09-15 05:03:44 +02:00
Matthew Honnibal  e68a431e5e  * Pass only the tokens vector to _tokenize, instead of the whole python object.  2014-09-15 04:01:38 +02:00
Matthew Honnibal  08cef75ffd  * Switch to using a heap-allocated vector in tokens  2014-09-15 03:46:14 +02:00
Matthew Honnibal  f77b7098c0  * Upd Tokens to use vector, with bounds checking.  2014-09-15 03:22:40 +02:00
Matthew Honnibal  0f6bf2a2ee  * Fix niggling memory error, which was caused by bug in the way tokens resized their internal vector.  2014-09-15 02:08:39 +02:00
Matthew Honnibal  5dcc1a426a  * Update tokenization tests for new tokenizer rules  2014-09-15 01:32:51 +02:00
Matthew Honnibal  df24e3708c  * Move EnglishTokens stuff to Tokens  2014-09-15 01:31:44 +02:00
Matthew Honnibal  bd08cb09a2  * Remove short-circuiting of initial_size argument for PointerHash  2014-09-15 01:30:49 +02:00