spaCy/spacy/lang/bn/tokenizer_exceptions.py
Ines Montani db55577c45
Drop Python 2.7 and 3.5 (#4828)
* Remove unicode declarations

* Remove Python 3.5 and 2.7 from CI

* Don't require pathlib

* Replace compat helpers

* Remove OrderedDict

* Use f-strings

* Set Cython compiler language level

* Fix typo

* Re-add OrderedDict for Table

* Update setup.cfg

* Revert CONTRIBUTING.md

* Revert lookups.md

* Revert top-level.md

* Small adjustments and docs [ci skip]
2019-12-22 01:53:56 +01:00


from ...symbols import ORTH, LEMMA

_exc = {}

# Bengali abbreviations that the tokenizer should keep as single tokens,
# mapped to their expanded lemmas, e.g. "ডঃ" → "ডক্টর" (Doctor),
# "কি.মি." → "কিলোমিটার" (kilometre).
for exc_data in [
    {ORTH: "ডঃ", LEMMA: "ডক্টর"},
    {ORTH: "ডাঃ", LEMMA: "ডাক্তার"},
    {ORTH: "ড.", LEMMA: "ডক্টর"},
    {ORTH: "ডা.", LEMMA: "ডাক্তার"},
    {ORTH: "মোঃ", LEMMA: "মোহাম্মদ"},
    {ORTH: "মো.", LEMMA: "মোহাম্মদ"},
    {ORTH: "সে.", LEMMA: "সেলসিয়াস"},
    {ORTH: "কি.মি.", LEMMA: "কিলোমিটার"},
    {ORTH: "কি.মি", LEMMA: "কিলোমিটার"},
    {ORTH: "সে.মি.", LEMMA: "সেন্টিমিটার"},
    {ORTH: "সে.মি", LEMMA: "সেন্টিমিটার"},
    {ORTH: "মি.লি.", LEMMA: "মিলিলিটার"},
]:
    # Key the exception table by the surface form (ORTH); the value is a
    # list of token-attribute dicts, one per resulting token.
    _exc[exc_data[ORTH]] = [exc_data]

TOKENIZER_EXCEPTIONS = _exc
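The pattern above keys each exception dict by its surface form so the tokenizer can look up the full attribute list when it meets the abbreviation. A minimal, self-contained sketch of the same idea, using plain string keys in place of spaCy's interned `ORTH`/`LEMMA` symbol IDs (the string names here are stand-ins, not spaCy's real symbol values):

```python
# Illustrative stand-ins for spaCy's interned symbol IDs.
ORTH, LEMMA = "orth", "lemma"

_exc = {}
for exc_data in [
    {ORTH: "ডঃ", LEMMA: "ডক্টর"},        # "Doctor"
    {ORTH: "কি.মি.", LEMMA: "কিলোমিটার"},  # "kilometre"
]:
    # Surface form → list of token-attribute dicts (here, one token).
    _exc[exc_data[ORTH]] = [exc_data]

TOKENIZER_EXCEPTIONS = _exc

# A lookup by surface form returns the attributes for that token:
print(TOKENIZER_EXCEPTIONS["ডঃ"])
```

Keeping the value a *list* of dicts matters: an exception may split one string into several tokens, each with its own attributes, even though every entry in this file happens to produce a single token.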