Mirror of https://github.com/explosion/spaCy.git (synced 2026-01-07 01:01:17 +03:00)
Refactor Chinese tokenizer configuration

Refactor the `ChineseTokenizer` configuration so that it uses a single `segmenter` setting to choose between character segmentation, jieba, and pkuseg (see the sketch below):

* replace `use_jieba`, `use_pkuseg`, and `require_pkuseg` with a single `segmenter` setting, with the supported values `char`, `jieba`, and `pkuseg`
* make the default segmenter plain character segmentation (`char`), which requires no additional libraries
* fix the Chinese serialization test to use the `char` default
* warn if attempting to customize another segmenter: `Chinese.pkuseg_update_user_dict` now emits a warning if it is called while a segmenter other than `pkuseg` is selected
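A minimal sketch of how the new `segmenter` setting might be selected, based on the spaCy v3 documentation for the `Chinese` language class. The exact construction call at the time of this commit may differ, and the `jieba`/`pkuseg` options assume those packages are installed:

```python
from spacy.lang.zh import Chinese

# Default: plain character segmentation ("char"), no extra libraries required
nlp = Chinese()

# Select jieba word segmentation via the single `segmenter` setting
# (requires the `jieba` package)
cfg = {"segmenter": "jieba"}
nlp_jieba = Chinese.from_config({"nlp": {"tokenizer": cfg}})

# Select pkuseg word segmentation (requires the `pkuseg` package and a model)
cfg = {"segmenter": "pkuseg"}
nlp_pkuseg = Chinese.from_config({"nlp": {"tokenizer": cfg}})

# Customizing the pkuseg user dictionary while another segmenter is active
# now triggers a warning instead of silently doing nothing
nlp.tokenizer.pkuseg_update_user_dict(["中国"])
```

Making `char` the default keeps the base `Chinese` class dependency-free; jieba and pkuseg become opt-in segmenters chosen through one setting rather than several overlapping flags.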
| File |
|---|
| __init__.py |
| test_serialize.py |
| test_text.py |
| test_tokenizer.py |