Vimos Tan
a6d9fb5bb6
Fix issue #1292
2017-08-30 14:49:14 +08:00
ines
dcff10abe9
Add regression test for #1281
2017-08-21 16:11:47 +02:00
ines
edc596d9a7
Add missing tokenizer exceptions (resolves #1281)
2017-08-21 16:11:36 +02:00
Delirious Lettuce
d3b03f0544
Fix typos:
...
* `auxillary` -> `auxiliary`
* `consistute` -> `constitute`
* `earlist` -> `earliest`
* `prefered` -> `preferred`
* `direcory` -> `directory`
* `reuseable` -> `reusable`
* `idiosyncracies` -> `idiosyncrasies`
* `enviroment` -> `environment`
* `unecessary` -> `unnecessary`
* `yesteday` -> `yesterday`
* `resouces` -> `resources`
2017-08-06 21:31:39 -06:00
Matthew Honnibal
d51d55bba6
Increment version
2017-07-22 15:43:16 +02:00
Matthew Honnibal
796b2f4c1b
Remove print statements in tests
2017-07-22 15:42:38 +02:00
Matthew Honnibal
4b2e5e59ed
Add flush_cache method to tokenizer, to fix #1061
...
The tokenizer caches output for common chunks, for efficiency. This
cache must be invalidated when the tokenizer rules change, e.g. when a new
special-case rule is introduced. A stale cache is what was causing #1061.
When the cache is flushed, we free the intermediate token chunks.
I *think* this is safe --- but if we start getting segfaults, this patch
is to blame. The resolution would be to simply not free those bits of
memory. They'll be freed when the tokenizer exits anyway.
2017-07-22 15:06:50 +02:00
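A minimal sketch of the scenario behind #1061 (the `spacy.blank` setup and the example string are assumptions for illustration, not the original report):

```python
import spacy
from spacy.symbols import ORTH

nlp = spacy.blank("en")
nlp("gimme")  # tokenized once, so this chunk is now cached

# Adding a special case changes the rules; without a cache flush, the
# stale cached analysis would still be returned for "gimme".
nlp.tokenizer.add_special_case("gimme", [{ORTH: "gim"}, {ORTH: "me"}])
assert [t.text for t in nlp("gimme")] == ["gim", "me"]
```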
Matthew Honnibal
23a55b40ca
Default to English noun chunks iterator if no lang set
2017-07-22 14:15:25 +02:00
Matthew Honnibal
9750a0128c
Fix Span.noun_chunks. Closes #1207
2017-07-22 14:14:57 +02:00
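A usage sketch for the behavior fixed in #1207: `noun_chunks` should work on a `Span` just as it does on a `Doc`. The model name is an assumption; any English model with a parser will do.

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The quick brown fox jumps over the lazy dog.")
span = doc[:5]  # "The quick brown fox jumps"
print([chunk.text for chunk in span.noun_chunks])  # ["The quick brown fox"]
```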
Matthew Honnibal
d9b85675d7
Rename regression test
2017-07-22 14:14:35 +02:00
Matthew Honnibal
dfbc7e49de
Add test for Issue #1207
2017-07-22 14:14:01 +02:00
Matthew Honnibal
0ae3807d7d
Fix gaps in Lexeme API. Closes #1031
2017-07-22 13:53:48 +02:00
Matthew Honnibal
83e1b5f1e3
Merge branch 'master' of https://github.com/explosion/spaCy
2017-07-22 13:45:35 +02:00
Matthew Honnibal
45f6961ae0
Add __version__ symbol in __init__.py
2017-07-22 13:45:21 +02:00
Matthew Honnibal
8b9c4c5e1c
Add missing SP symbol to tag map, re #1052
2017-07-22 13:44:17 +02:00
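A hedged sketch of the fix for #1052: the whitespace tag "SP" needs an entry in the tag map so whitespace tokens receive the SPACE coarse-grained part of speech.

```python
from spacy.symbols import POS, SPACE

TAG_MAP = {
    "SP": {POS: SPACE},
    # ... the remaining tag entries are elided
}
```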
Ines Montani
9af04ea11f
Merge pull request #1161 from AlexisEidelman/patch-1
...
French NUM_WORDS and ORDINAL_WORDS
2017-07-22 13:40:46 +02:00
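A hedged sketch of the language-data shape added here for French (and for Dutch in the merge below); the container type and exact entries are assumptions. These word sets back checks such as `Token.like_num`:

```python
# Plain sets of lowercase word forms (entries shown are illustrative).
NUM_WORDS = set("zéro un deux trois quatre cinq six sept huit neuf dix".split())
ORDINAL_WORDS = set("premier deuxième troisième quatrième cinquième".split())
```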
Matthew Honnibal
44dd247e73
Merge branch 'master' of https://github.com/explosion/spaCy
2017-07-22 13:35:30 +02:00
Matthew Honnibal
94267ec50f
Fix merge conflict in printer
2017-07-22 13:35:15 +02:00
Ines Montani
c7708dc736
Merge pull request #1177 from swierh/master
...
Dutch NUM_WORDS and ORDINAL_WORDS
2017-07-22 13:35:08 +02:00
Matthew Honnibal
5916d46ba8
Avoid use of deepcopy in printer
2017-07-22 13:34:01 +02:00
Ines Montani
9eca6503c1
Merge pull request #1157 from polm/master
...
Add basic Japanese Tokenizer Test
2017-07-10 13:07:11 +02:00
Paul O'Leary McCann
bc87b815cc
Add comment clarifying what LANGUAGES does
2017-07-09 16:28:55 +09:00
Paul O'Leary McCann
04e6a65188
Remove Japanese from LANGUAGES
...
LANGUAGES is a list of languages whose tokenizers get run through a
variety of generic tests. Since the generic tests don't check the JA
fixture, it blows up when it can't find janome. -POLM
2017-07-09 16:23:26 +09:00
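The guard that handles the missing-janome case (added in the "Add importorskip for janome" commit further down), sketched for a hypothetical test; the `ja_tokenizer` fixture is assumed:

```python
import pytest

# Skip the whole module cleanly when the optional dependency is absent,
# instead of erroring out during collection.
janome = pytest.importorskip("janome")

def test_ja_tokenizer_runs(ja_tokenizer):
    tokens = ja_tokenizer("日本語だよ")
    assert len(tokens) > 0
```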
Swier
29720150f9
Fix import of stop words in language data
2017-07-05 14:08:04 +02:00
Swier
f377c9c952
Rename stop_words.py to word_sets.py
2017-07-05 14:06:28 +02:00
Swier
5357874bf7
Add Dutch numbers and ordinals
2017-07-05 14:03:30 +02:00
gispk47
669bd14213
Update __init__.py
...
Remove empty strings returned from jieba.cut; they caused an assertion error when the token list was pushed.
2017-07-01 13:12:00 +08:00
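A hedged sketch of the fix (the function name is illustrative): `jieba.cut` can yield empty strings, which must be filtered out before the tokens are pushed.

```python
import jieba

def chinese_word_segment(text):
    # Drop empty strings so every pushed token is non-empty.
    return [word for word in jieba.cut(text) if word]
```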
Paul O'Leary McCann
c336193392
Parametrize and extend Japanese tokenizer tests
2017-06-29 00:09:40 +09:00
Paul O'Leary McCann
30a34ebb6e
Add importorskip for janome
2017-06-29 00:09:20 +09:00
Alexis
1b3a5d87ba
French NUM_WORDS and ORDINAL_WORDS
2017-06-28 14:11:20 +02:00
Paul O'Leary McCann
e56fea14eb
Add basic Japanese tokenizer test
2017-06-28 01:24:25 +09:00
Paul O'Leary McCann
84041a2bb5
Make create_tokenizer work with Japanese
2017-06-28 01:18:05 +09:00
György Orosz
fa26041da6
Fixed typo in cli/package.py
2017-06-07 16:19:08 +02:00
Ines Montani
e7ef51b382
Update tokenizer_exceptions.py
2017-06-02 19:00:01 +02:00
Ines Montani
81918155ef
Merge pull request #1096 from recognai/master
...
Spanish model features
2017-06-02 11:07:27 +02:00
Francisco Aranda
70a2180199
fix(spanish sentence segmentation): remove tokenizer exceptions that break sentence segmentation, aligning with the training corpus
2017-06-02 08:19:57 +02:00
Francisco Aranda
5b385e7d78
feat(spanish model): add the Spanish noun chunker
2017-06-02 08:14:06 +02:00
Ines Montani
7f6be41f21
Fix typo in English tokenizer exceptions (resolves #1071)
2017-05-23 12:18:00 +02:00
Raphaël Bournhonesque
6381ebfb14
Use yield from syntax
2017-05-18 10:42:35 +02:00
Raphaël Bournhonesque
f37d078d6a
Fix issue #1069 with custom hook Doc.sents definition
2017-05-18 09:59:38 +02:00
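A sketch of the user-hooks pattern behind #1069 (the hook body is illustrative): `Doc.sents` delegates to a registered "sents" hook, and the fix forwards the hook's generator with `yield from`.

```python
import spacy

nlp = spacy.blank("en")
doc = nlp("One sentence. Another sentence.")

def whole_doc_sents(doc):
    yield doc[:]  # trivial segmentation: the whole Doc as one sentence

doc.user_hooks["sents"] = whole_doc_sents
print([sent.text for sent in doc.sents])  # ["One sentence. Another sentence."]
```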
ines
9003fd25e5
Fix error messages if model is required (resolves #1051)
...
Rename about.__docs__ to about.__docs_models__.
2017-05-13 13:14:02 +02:00
ines
24e973b17f
Rename about.__docs__ to about.__docs_models__
2017-05-13 13:09:00 +02:00
ines
6e1dbc608e
Fix parse_tree test
2017-05-13 12:34:20 +02:00
ines
573f0ba867
Replace deepcopy
2017-05-13 12:34:14 +02:00
ines
bd428c0a70
Set defaults for light and flat kwargs
2017-05-13 12:34:05 +02:00
ines
c5669450a0
Fix formatting
2017-05-13 12:33:57 +02:00
Matthew Honnibal
ad590feaa8
Fix test, which imported English incorrectly
2017-05-13 11:36:19 +02:00
Ines Montani
8d742ac8ff
Merge pull request #1055 from recognai/master
...
Enable pruning out rare words from clusters data
2017-05-13 03:22:56 +02:00
Matthew Honnibal
b2540d2379
Merge Kengz's tree_print patch
2017-05-13 03:18:49 +02:00
oeg
cdaefae60a
feature(populate_vocab): enable pruning out rare words from clusters data
2017-05-12 16:15:19 +02:00
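A hypothetical sketch of the pruning idea (names and threshold are assumptions): when populating the vocab from clusters data, keep only words seen often enough.

```python
def prune_clusters(clusters, freqs, min_freq=10):
    """Keep cluster entries only for sufficiently frequent words."""
    return {word: cluster for word, cluster in clusters.items()
            if freqs.get(word, 0) >= min_freq}
```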