spaCy/spacy/glossary.py
Ines Montani d33953037e
💫 Port master changes over to develop (#2979)
* Create aryaprabhudesai.md (#2681)

* Update _install.jade (#2688)

Typo fix: "models" -> "model"

* Add FAC to spacy.explain (resolves #2706)

* Remove docstrings for deprecated arguments (see #2703)

* When calling getoption() in conftest.py, pass a default option (#2709)

* When calling getoption() in conftest.py, pass a default option

This is necessary to allow testing an installed spacy by running:

  pytest --pyargs spacy

* Add contributor agreement

* update bengali token rules for hyphen and digits (#2731)

* Less norm computations in token similarity (#2730)

* Less norm computations in token similarity

* Contributor agreement

* Remove ')' for clarity (#2737)

Sorry, I don't mean to be nitpicky; I just noticed this when going through the CLI and thought it was a quick fix. That said, if this was intentional then please let me know.

* added contributor agreement for mbkupfer (#2738)

* Basic support for Telugu language (#2751)

* Lex _attrs for polish language (#2750)

* Signed spaCy contributor agreement

* Added polish version of english lex_attrs

* Introduces a bulk merge function, in order to solve issue #653 (#2696)

* Fix comment

* Introduce bulk merge to increase performance on many span merges

* Sign contributor agreement

* Implement pull request suggestions

* Describe converters more explicitly (see #2643)

* Add multi-threading note to Language.pipe (resolves #2582) [ci skip]

* Fix formatting

* Fix dependency scheme docs (closes #2705) [ci skip]

* Don't set stop word in example (closes #2657) [ci skip]

* Add words to portuguese language _num_words (#2759)

* Add words to portuguese language _num_words

* Add words to portuguese language _num_words

* Update Indonesian model (#2752)

* adding e-KTP to tokenizer exceptions list

* add exception token

* removing lines containing spaces, since it won't matter (we use the .split() method in the end); added new tokens to exceptions

* add tokenizer exceptions list

* combining base_norms with norm_exceptions

* adding norm_exception

* fix double key in lemmatizer

* remove unused import on punctuation.py

* reformat stop_words to reduce number of lines, improve readability

* updating tokenizer exception

* implement is_currency for lang/id

* adding orth_first_upper in tokenizer_exceptions

* update the norm_exception list

* remove a bunch of abbreviations

* adding contributors file

* Fixed spaCy+Keras example (#2763)

* bug fixes in keras example

* created contributor agreement

* Adding French hyphenated first name (#2786)

* Fix typo (closes #2784)

* Fix typo (#2795) [ci skip]

Fixed typo on line 6: "regcognizer" --> "recognizer"

* Adding basic support for Sinhala language. (#2788)

* adding Sinhala language package, stop words, examples and lex_attrs.

* Adding contributor agreement

* Updating contributor agreement

* Also include lowercase norm exceptions

* Fix error (#2802)

* Fix error
ValueError: cannot resize an array that references or is referenced
by another array in this way.  Use the resize function

* added spaCy Contributor Agreement

* Add charlax's contributor agreement (#2805)

* agreement of contributor, may I introduce a tiny pl language contribution (#2799)

* Contributors agreement

* Contributors agreement

* Contributors agreement

* Add jupyter=True to displacy.render in documentation (#2806)

* Revert "Also include lowercase norm exceptions"

This reverts commit 70f4e8adf3.

* Remove deprecated encoding argument to msgpack

* Set up dependency tree pattern matching skeleton (#2732)

* Fix bug when too many entity types. Fixes #2800

* Fix Python 2 test failure

* Require older msgpack-numpy

* Restore encoding arg on msgpack-numpy

* Try to fix version pin for msgpack-numpy

* Update Portuguese Language (#2790)

* Add words to portuguese language _num_words

* Add words to portuguese language _num_words

* Portuguese - Add/remove stopwords, fix tokenizer, add currency symbols

* Extended punctuation and norm_exceptions in the Portuguese language

* Correct error in spacy universe docs concerning spacy-lookup (#2814)

* Update Keras example for (Parikh et al., 2016) implementation (#2803)

* bug fixes in keras example

* created contributor agreement

* baseline for Parikh model

* initial version of parikh 2016 implemented

* tested asymmetric models

* fixed grievous error in normalization

* use standard SNLI test file

* begin to rework parikh example

* initial version of running example

* start to document the new version

* start to document the new version

* Update Decompositional Attention.ipynb

* fixed calls to similarity

* updated the README

* import sys package duh

* simplified indexing on mapping word to IDs

* stupid python indent error

* added code from https://github.com/tensorflow/tensorflow/issues/3388 for tf bug workaround

* Fix typo (closes #2815) [ci skip]

* Update regex version dependency

* Set version to 2.0.13.dev3

* Skip seemingly problematic test

* Remove problematic test

* Try previous version of regex

* Revert "Remove problematic test"

This reverts commit bdebbef455.

* Unskip test

* Try older version of regex

* 💫 Update training examples and use minibatching (#2830)


## Description
Update the training examples in `/examples/training` to show usage of spaCy's `minibatch` and `compounding` helpers ([see here](https://spacy.io/usage/training#tips-batch-size) for details). The lack of batching in the examples has caused some confusion in the past, especially for beginners who would copy-paste the examples, update them with large training sets and then experience slow and unsatisfying results.
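
For illustration, here is a minimal sketch of the batching pattern the updated examples demonstrate (the `TRAIN_DATA` sample is hypothetical; `minibatch` and `compounding` come from `spacy.util`):

```python
import random

import spacy
from spacy.util import minibatch, compounding

# Hypothetical toy training set: (text, annotations) pairs
TRAIN_DATA = [
    ("Uber blew through $1 million a week", {"entities": [(0, 4, "ORG")]}),
    ("Google rebrands its business apps", {"entities": [(0, 6, "ORG")]}),
]

nlp = spacy.blank("en")  # empty English pipeline
ner = nlp.create_pipe("ner")
nlp.add_pipe(ner)
ner.add_label("ORG")

optimizer = nlp.begin_training()
for epoch in range(10):
    random.shuffle(TRAIN_DATA)
    losses = {}
    # Batch size compounds from 4 up to 32, multiplying by 1.001 each step
    for batch in minibatch(TRAIN_DATA, size=compounding(4.0, 32.0, 1.001)):
        texts, annotations = zip(*batch)
        nlp.update(texts, annotations, sgd=optimizer, losses=losses)
    print("Losses", losses)
```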

### Types of change
enhancements

## Checklist
- [x] I have submitted the spaCy Contributor Agreement.
- [x] I ran the tests, and all new and existing tests passed.
- [x] My changes don't require a change to the documentation, or if they do, I've added all required information.

* Visual C++ link updated (#2842) (closes #2841) [ci skip]

* New landing page

* Add contribution agreement

* Correcting lang/ru/examples.py (#2845)

* Correct some grammatical inaccuracies in lang/ru/examples.py; filled Contributor Agreement

* Correct some grammatical inaccuracies in lang/ru/examples.py

* Move contributor agreement to separate file

* Set version to 2.0.13.dev4

* Add Persian (Farsi) language support (#2797)

* Also include lowercase norm exceptions

* Remove in favour of https://github.com/explosion/spaCy/graphs/contributors

* Rule-based French Lemmatizer (#2818)


## Description

Add a rule-based French Lemmatizer following the English one and the excellent PR for [Greek language optimizations](https://github.com/explosion/spaCy/pull/2558) to adapt the Lemmatizer class.

### Types of change

- The lemma dictionary used can be found [here](http://infolingu.univ-mlv.fr/DonneesLinguistiques/Dictionnaires/telechargement.html); I used the XML version.
- Add several files containing exhaustive lists of words for each part of speech
- Add some lemma rules
- Add POS that are not checked in the standard Lemmatizer, i.e. PRON, DET, ADV and AUX
- Modify the Lemmatizer class to check the lookup table as a last resort if the POS is not mentioned (a rough sketch of this fallback follows the list)
- Modify the lemmatize function to check the lookup table as a last resort
- Init files are updated so the model can support all the functionalities mentioned above
- Add words to tokenizer_exceptions_list.py with respect to the regex used in tokenizer_exceptions.py
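
A rough, self-contained sketch of that last-resort lookup behaviour (all names here are hypothetical, not the PR's actual code):

```python
def lemmatize(string, index, exceptions, rules, lookup_table):
    """Hypothetical rule-based lemmatizer with a lookup-table fallback."""
    if string in exceptions:
        return list(exceptions[string])
    forms = []
    for old, new in rules:
        if string.endswith(old):
            form = string[: len(string) - len(old)] + new
            # Keep only candidates attested in the word lists for this POS
            if form and form in index:
                forms.append(form)
    if not forms:
        # Last resort: fall back to the lookup table, else the string itself
        forms.append(lookup_table.get(string, string))
    return forms
```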

## Checklist
- [X] I have submitted the spaCy Contributor Agreement.
- [X] I ran the tests, and all new and existing tests passed.
- [X] My changes don't require a change to the documentation, or if they do, I've added all required information.

* Set version to 2.0.13

* Fix formatting and consistency

* Update docs for new version [ci skip]

* Increment version [ci skip]

* Add info on wheels [ci skip]

* Adding "This is a sentence" example to Sinhala (#2846)

* Add wheels badge

* Update badge [ci skip]

* Update README.rst [ci skip]

* Update murmurhash pin

* Increment version to 2.0.14.dev0

* Update GPU docs for v2.0.14

* Add wheel to setup_requires

* Import prefer_gpu and require_gpu functions from Thinc

* Add tests for prefer_gpu() and require_gpu()

* Update requirements and setup.py

* Workaround bug in thinc require_gpu

* Set version to v2.0.14

* Update push-tag script

* Unhack prefer_gpu

* Require thinc 6.10.6

* Update prefer_gpu and require_gpu docs [ci skip]

* Fix specifiers for GPU

* Set version to 2.0.14.dev1

* Set version to 2.0.14

* Update Thinc version pin

* Increment version

* Fix msgpack-numpy version pin

* Increment version

* Update version to 2.0.16

* Update version [ci skip]

* Redundant ')' in the Stop words' example (#2856)


* Documentation improvement regarding joblib and SO (#2867)

Some documentation improvements

## Description
1. Fixed the dead URL to joblib
2. Fixed Stack Overflow brand name (with space)

### Types of change
Documentation

## Checklist
- [x] I have submitted the spaCy Contributor Agreement.
- [x] I ran the tests, and all new and existing tests passed.
- [x] My changes don't require a change to the documentation, or if they do, I've added all required information.

* raise error when setting overlapping entities as doc.ents (#2880)

* Fix out-of-bounds access in NER training

The helper method state.B(1) gets the index of the first token of the
buffer, or -1 if no such token exists. Normally this is safe because we
pass this to functions like state.safe_get(), which returns an empty
token. Here we used it directly as an array index, which is not okay!

This error may have been the cause of out-of-bounds access errors during
training. Similar errors may still be around, so they must be hunted down.
Hunting this one down took a long time... I printed out values across
training runs and diffed them, looking for points of divergence between
runs when no randomness should be allowed.
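
A small Python sketch of the bug class described above (illustrative only; the real code is Cython parser-state internals):

```python
EMPTY_TOKEN = {"text": "", "tag": ""}  # hypothetical sentinel token

def unsafe_get(tokens, i):
    # BUG pattern: i == -1 means "no such token", but using it directly as
    # an index wraps around in Python and reads out of bounds in C arrays.
    return tokens[i]

def safe_get(tokens, i):
    # FIX pattern: guard the index and return an empty token instead.
    if i < 0 or i >= len(tokens):
        return EMPTY_TOKEN
    return tokens[i]
```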

* Change PyThaiNLP URL (#2876)

* Fix missing comma

* Add example showing a fix-up rule for space entities

* Set version to 2.0.17.dev0

* Update regex version

* Revert "Update regex version"

This reverts commit 62358dd867.

* Try setting older regex version, to align with conda

* Set version to 2.0.17

* Add spacy-js to universe [ci-skip]

* Add spacy-raspberry to universe (closes #2889)

* Add script to validate universe json [ci skip]

* Removed space in docs + added contributor info (#2909)

* removed unneeded space in documentation

* added contributor info

* Allow input text of length up to max_length, inclusive (#2922)

* Include universe spec for spacy-wordnet component (#2919)

* feat: include universe spec for spacy-wordnet component

* chore: include spaCy contributor agreement

* Minor formatting changes [ci skip]

* Fix image [ci skip]

Twitter URL doesn't work on live site

* Check if the word is in one of the regular lists specific to each POS (#2886)

* 💫 Create random IDs for SVGs to prevent ID clashes (#2927)

Resolves #2924.

## Description
Fixes a problem where multiple visualizations in Jupyter notebooks would have clashing arc IDs, resulting in weirdly positioned arc labels. A random ID prefix is generated so that even identical parses won't receive the same IDs, for consistency (even if the effect of an ID clash isn't noticeable here).
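
A sketch of the idea (hypothetical helper; displaCy's actual renderer differs in detail):

```python
import random

def make_arc_ids(n_arcs):
    # A fresh random prefix per render means two visualizations of the very
    # same parse in one notebook still get distinct SVG element IDs.
    id_prefix = random.randint(0, 10 ** 6)
    return ["arrow-{}-{}".format(id_prefix, i) for i in range(n_arcs)]
```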

### Types of change
bug fix

## Checklist
- [x] I have submitted the spaCy Contributor Agreement.
- [x] I ran the tests, and all new and existing tests passed.
- [x] My changes don't require a change to the documentation, or if they do, I've added all required information.

* Fix typo [ci skip]

* fixes symbolic link on py3 and windows (#2949)

* fixes symbolic link on py3 and windows
during setup of spaCy using the command
`python -m spacy link en_core_web_sm en`
closes #2948

* Update spacy/compat.py

Co-Authored-By: cicorias <cicorias@users.noreply.github.com>

* Fix formatting

* Update universe [ci skip]

* Catalan Language Support (#2940)

* Catalan language support

* Adding Catalan to documentation

* Sort languages alphabetically [ci skip]

* Update tests for pytest 4.x (#2965)


## Description
- [x] Replace marks in params for pytest 4.0 compat ([see here](https://docs.pytest.org/en/latest/deprecations.html#marks-in-pytest-mark-parametrize)); an example of the new style follows this list
- [x] Un-xfail passing tests (some fixes in a recent update resolved a bunch of issues, but tests were apparently never updated here)
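
For example, the pytest 4.x-compatible way to attach a mark to a single parameter:

```python
import pytest

# Pre-4.0 code put marks directly in the parameter list; pytest 4.x requires
# wrapping the marked value in pytest.param(...).
@pytest.mark.parametrize(
    "text",
    [
        "a b c",
        pytest.param("a b c d", marks=pytest.mark.xfail(reason="known issue")),
    ],
)
def test_tokenize(text):
    assert text.split()
```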

## Checklist
- [x] I have submitted the spaCy Contributor Agreement.
- [x] I ran the tests, and all new and existing tests passed.
- [x] My changes don't require a change to the documentation, or if they do, I've added all required information.

* Fix regex pin to harmonize with conda (#2964)

* Update README.rst

* Fix bug where Vocab.prune_vector did not use 'batch_size' (#2977)

Fixes #2976

* Fix typo

* Fix typo

* Remove duplicate file

* Require thinc 7.0.0.dev2

Fixes bug in gpu_ops that would use cupy instead of numpy on CPU

* Add missing import

* Fix error IDs

* Fix tests
2018-11-29 16:30:29 +01:00


# coding: utf8
from __future__ import unicode_literals


def explain(term):
    """Get a description for a given POS tag, dependency label or entity type.

    term (unicode): The term to explain.
    RETURNS (unicode): The explanation, or `None` if not found in the glossary.

    EXAMPLE:
        >>> spacy.explain(u'NORP')
        >>> doc = nlp(u'Hello world')
        >>> print([(w.text, w.tag_, spacy.explain(w.tag_)) for w in doc])
    """
    if term in GLOSSARY:
        return GLOSSARY[term]
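

# `explain` is re-exported at the package level (in spacy/__init__.py), so it
# is typically called as, e.g.:
#
#     >>> spacy.explain(u'GPE')
#     u'Countries, cities, states'
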
GLOSSARY = {
# POS tags
# Universal POS Tags
# http://universaldependencies.org/u/pos/
'ADJ': 'adjective',
'ADP': 'adposition',
'ADV': 'adverb',
'AUX': 'auxiliary',
'CONJ': 'conjunction',
'CCONJ': 'coordinating conjunction',
'DET': 'determiner',
'INTJ': 'interjection',
'NOUN': 'noun',
'NUM': 'numeral',
'PART': 'particle',
'PRON': 'pronoun',
'PROPN': 'proper noun',
'PUNCT': 'punctuation',
'SCONJ': 'subordinating conjunction',
'SYM': 'symbol',
'VERB': 'verb',
'X': 'other',
'EOL': 'end of line',
'SPACE': 'space',
# POS tags (English)
# OntoNotes 5 / Penn Treebank
# https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html
'.': 'punctuation mark, sentence closer',
',': 'punctuation mark, comma',
'-LRB-': 'left round bracket',
'-RRB-': 'right round bracket',
'``': 'opening quotation mark',
'""': 'closing quotation mark',
"''": 'closing quotation mark',
':': 'punctuation mark, colon or ellipsis',
'$': 'symbol, currency',
'#': 'symbol, number sign',
'AFX': 'affix',
'CC': 'conjunction, coordinating',
'CD': 'cardinal number',
'DT': 'determiner',
'EX': 'existential there',
'FW': 'foreign word',
'HYPH': 'punctuation mark, hyphen',
'IN': 'conjunction, subordinating or preposition',
'JJ': 'adjective',
'JJR': 'adjective, comparative',
'JJS': 'adjective, superlative',
'LS': 'list item marker',
'MD': 'verb, modal auxiliary',
'NIL': 'missing tag',
'NN': 'noun, singular or mass',
'NNP': 'noun, proper singular',
'NNPS': 'noun, proper plural',
'NNS': 'noun, plural',
'PDT': 'predeterminer',
'POS': 'possessive ending',
'PRP': 'pronoun, personal',
'PRP$': 'pronoun, possessive',
'RB': 'adverb',
'RBR': 'adverb, comparative',
'RBS': 'adverb, superlative',
'RP': 'adverb, particle',
'TO': 'infinitival to',
'UH': 'interjection',
'VB': 'verb, base form',
'VBD': 'verb, past tense',
'VBG': 'verb, gerund or present participle',
'VBN': 'verb, past participle',
'VBP': 'verb, non-3rd person singular present',
'VBZ': 'verb, 3rd person singular present',
'WDT': 'wh-determiner',
'WP': 'wh-pronoun, personal',
'WP$': 'wh-pronoun, possessive',
'WRB': 'wh-adverb',
'SP': 'space',
'ADD': 'email',
'NFP': 'superfluous punctuation',
'GW': 'additional word in multi-word expression',
'XX': 'unknown',
'BES': 'auxiliary "be"',
'HVS': 'forms of "have"',
# POS Tags (German)
# TIGER Treebank
# http://www.ims.uni-stuttgart.de/forschung/ressourcen/korpora/TIGERCorpus/annotation/tiger_introduction.pdf
'$(': 'other sentence-internal punctuation mark',
'$,': 'comma',
'$.': 'sentence-final punctuation mark',
'ADJA': 'adjective, attributive',
'ADJD': 'adjective, adverbial or predicative',
'APPO': 'postposition',
'APPR': 'preposition; circumposition left',
'APPRART': 'preposition with article',
'APZR': 'circumposition right',
'ART': 'definite or indefinite article',
'CARD': 'cardinal number',
'FM': 'foreign language material',
'ITJ': 'interjection',
'KOKOM': 'comparative conjunction',
'KON': 'coordinate conjunction',
'KOUI': 'subordinate conjunction with "zu" and infinitive',
'KOUS': 'subordinate conjunction with sentence',
'NE': 'proper noun',
'NNE': 'proper noun',
'PAV': 'pronominal adverb',
'PROAV': 'pronominal adverb',
'PDAT': 'attributive demonstrative pronoun',
'PDS': 'substituting demonstrative pronoun',
'PIAT': 'attributive indefinite pronoun without determiner',
'PIDAT': 'attributive indefinite pronoun with determiner',
'PIS': 'substituting indefinite pronoun',
'PPER': 'non-reflexive personal pronoun',
'PPOSAT': 'attributive possessive pronoun',
'PPOSS': 'substituting possessive pronoun',
'PRELAT': 'attributive relative pronoun',
'PRELS': 'substituting relative pronoun',
'PRF': 'reflexive personal pronoun',
'PTKA': 'particle with adjective or adverb',
'PTKANT': 'answer particle',
'PTKNEG': 'negative particle',
'PTKVZ': 'separable verbal particle',
'PTKZU': '"zu" before infinitive',
'PWAT': 'attributive interrogative pronoun',
'PWAV': 'adverbial interrogative or relative pronoun',
'PWS': 'substituting interrogative pronoun',
'TRUNC': 'word remnant',
'VAFIN': 'finite verb, auxiliary',
'VAIMP': 'imperative, auxiliary',
'VAINF': 'infinitive, auxiliary',
'VAPP': 'perfect participle, auxiliary',
'VMFIN': 'finite verb, modal',
'VMINF': 'infinitive, modal',
'VMPP': 'perfect participle, modal',
'VVFIN': 'finite verb, full',
'VVIMP': 'imperative, full',
'VVINF': 'infinitive, full',
'VVIZU': 'infinitive with "zu", full',
'VVPP': 'perfect participle, full',
'XY': 'non-word containing non-letter',
# Noun chunks
'NP': 'noun phrase',
'PP': 'prepositional phrase',
'VP': 'verb phrase',
'ADVP': 'adverb phrase',
'ADJP': 'adjective phrase',
'SBAR': 'subordinating conjunction',
'PRT': 'particle',
'PNP': 'prepositional noun phrase',
# Dependency Labels (English)
# ClearNLP / Universal Dependencies
# https://github.com/clir/clearnlp-guidelines/blob/master/md/specifications/dependency_labels.md
'acomp': 'adjectival complement',
'advcl': 'adverbial clause modifier',
'advmod': 'adverbial modifier',
'agent': 'agent',
'amod': 'adjectival modifier',
'appos': 'appositional modifier',
'attr': 'attribute',
'aux': 'auxiliary',
'auxpass': 'auxiliary (passive)',
'cc': 'coordinating conjunction',
'ccomp': 'clausal complement',
'complm': 'complementizer',
'conj': 'conjunct',
'cop': 'copula',
'csubj': 'clausal subject',
'csubjpass': 'clausal subject (passive)',
'dep': 'unclassified dependent',
'det': 'determiner',
'dobj': 'direct object',
'expl': 'expletive',
'hmod': 'modifier in hyphenation',
'hyph': 'hyphen',
'infmod': 'infinitival modifier',
'intj': 'interjection',
'iobj': 'indirect object',
'mark': 'marker',
'meta': 'meta modifier',
'neg': 'negation modifier',
'nmod': 'modifier of nominal',
'nn': 'noun compound modifier',
'npadvmod': 'noun phrase as adverbial modifier',
'nsubj': 'nominal subject',
'nsubjpass': 'nominal subject (passive)',
'num': 'number modifier',
'number': 'number compound modifier',
'oprd': 'object predicate',
'obj': 'object',
'obl': 'oblique nominal',
'parataxis': 'parataxis',
'partmod': 'participial modifier',
'pcomp': 'complement of preposition',
'pobj': 'object of preposition',
'poss': 'possession modifier',
'possessive': 'possessive modifier',
'preconj': 'pre-correlative conjunction',
'prep': 'prepositional modifier',
'prt': 'particle',
'punct': 'punctuation',
'quantmod': 'modifier of quantifier',
'rcmod': 'relative clause modifier',
'root': 'root',
'xcomp': 'open clausal complement',
# Dependency labels (German)
# TIGER Treebank
# http://www.ims.uni-stuttgart.de/forschung/ressourcen/korpora/TIGERCorpus/annotation/tiger_introduction.pdf
# currently missing: 'cc' (comparative complement) because of conflict
# with English labels
'ac': 'adpositional case marker',
'adc': 'adjective component',
'ag': 'genitive attribute',
'ams': 'measure argument of adjective',
'app': 'apposition',
'avc': 'adverbial phrase component',
'cd': 'coordinating conjunction',
'cj': 'conjunct',
'cm': 'comparative conjunction',
'cp': 'complementizer',
'cvc': 'collocational verb construction',
'da': 'dative',
'dh': 'discourse-level head',
'dm': 'discourse marker',
'ep': 'expletive es',
'hd': 'head',
'ju': 'junctor',
'mnr': 'postnominal modifier',
'mo': 'modifier',
'ng': 'negation',
'nk': 'noun kernel element',
'nmc': 'numerical component',
'oa': 'accusative object',
'oc': 'clausal object',
'og': 'genitive object',
'op': 'prepositional object',
'par': 'parenthetical element',
'pd': 'predicate',
'pg': 'phrasal genitive',
'ph': 'placeholder',
'pm': 'morphological particle',
'pnc': 'proper noun component',
'rc': 'relative clause',
're': 'repeated element',
'rs': 'reported speech',
'sb': 'subject',
# Named Entity Recognition
# OntoNotes 5
# https://catalog.ldc.upenn.edu/docs/LDC2013T19/OntoNotes-Release-5.0.pdf
'PERSON': 'People, including fictional',
'NORP': 'Nationalities or religious or political groups',
'FACILITY': 'Buildings, airports, highways, bridges, etc.',
'FAC': 'Buildings, airports, highways, bridges, etc.',
'ORG': 'Companies, agencies, institutions, etc.',
'GPE': 'Countries, cities, states',
'LOC': 'Non-GPE locations, mountain ranges, bodies of water',
'PRODUCT': 'Objects, vehicles, foods, etc. (not services)',
'EVENT': 'Named hurricanes, battles, wars, sports events, etc.',
'WORK_OF_ART': 'Titles of books, songs, etc.',
'LAW': 'Named documents made into laws.',
'LANGUAGE': 'Any named language',
'DATE': 'Absolute or relative dates or periods',
'TIME': 'Times smaller than a day',
'PERCENT': 'Percentage, including "%"',
'MONEY': 'Monetary values, including unit',
'QUANTITY': 'Measurements, as of weight or distance',
'ORDINAL': '"first", "second", etc.',
'CARDINAL': 'Numerals that do not fall under another type',
# Named Entity Recognition
# Wikipedia
# http://www.sciencedirect.com/science/article/pii/S0004370212000276
# https://pdfs.semanticscholar.org/5744/578cc243d92287f47448870bb426c66cc941.pdf
'PER': 'Named person or family.',
'MISC': ('Miscellaneous entities, e.g. events, nationalities, '
'products or works of art'),
}