spaCy/spacy/lemmatizer.py

# coding: utf8
from __future__ import unicode_literals
from collections import OrderedDict
from .symbols import NOUN, VERB, ADJ, PUNCT, PROPN


class Lemmatizer(object):
    """
    The Lemmatizer supports simple part-of-speech-sensitive suffix rules and
    lookup tables.

    DOCS: https://spacy.io/api/lemmatizer
    """

    @classmethod
    def load(cls, path, index=None, exc=None, rules=None, lookup=None):
        return cls(index, exc, rules, lookup)

    def __init__(self, index=None, exceptions=None, rules=None, lookup=None):
        self.index = index
        self.exc = exceptions
        self.rules = rules
        self.lookup_table = lookup if lookup is not None else {}

    def __call__(self, string, univ_pos, morphology=None):
        """Lemmatize a string given its universal part-of-speech tag and an
        optional dict of morphological features.
        """
        if not self.rules:
            return [self.lookup_table.get(string, string)]
        if univ_pos in (NOUN, "NOUN", "noun"):
            univ_pos = "noun"
        elif univ_pos in (VERB, "VERB", "verb"):
            univ_pos = "verb"
        elif univ_pos in (ADJ, "ADJ", "adj"):
            univ_pos = "adj"
        elif univ_pos in (PUNCT, "PUNCT", "punct"):
            univ_pos = "punct"
        elif univ_pos in (PROPN, "PROPN"):
            return [string]
        else:
            return [string.lower()]
        # See Issue #435 for an example of where this logic is required.
        if self.is_base_form(univ_pos, morphology):
            return [string.lower()]
        lemmas = lemmatize(
            string,
            self.index.get(univ_pos, {}),
            self.exc.get(univ_pos, {}),
            self.rules.get(univ_pos, []),
        )
        return lemmas
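
    # A minimal usage sketch with toy tables (these are illustrative, not the
    # tables shipped with spaCy):
    #
    #     lemmatizer = Lemmatizer(
    #         index={"noun": {"dog"}},
    #         exceptions={"noun": {"mice": ["mouse"]}},
    #         rules={"noun": [["s", ""]]},
    #     )
    #     lemmatizer("dogs", "noun")  # -> ["dog"]   (suffix rule applied)
    #     lemmatizer("mice", "noun")  # -> ["mouse"] (exception wins)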

    def is_base_form(self, univ_pos, morphology=None):
        """
        Check whether we're dealing with an uninflected paradigm, so we can
        avoid lemmatization entirely.
        """
        if morphology is None:
            morphology = {}
        if univ_pos == "noun" and morphology.get("Number") == "sing":
            return True
        elif univ_pos == "verb" and morphology.get("VerbForm") == "inf":
            return True
        # This maps 'VBP' to base form -- probably just need 'IS_BASE'
        # morphology
        elif univ_pos == "verb" and (
            morphology.get("VerbForm") == "fin"
            and morphology.get("Tense") == "pres"
            and morphology.get("Number") is None
        ):
            return True
        elif univ_pos == "adj" and morphology.get("Degree") == "pos":
            return True
        elif morphology.get("VerbForm") == "inf":
            return True
        elif morphology.get("VerbForm") == "none":
            return True
        elif morphology.get("Degree") == "pos":
            return True
        else:
            return False
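
    # Illustrative checks (morphology dicts use Universal Dependencies-style
    # feature names, as in the conditions above):
    #
    #     lemmatizer.is_base_form("verb", {"VerbForm": "inf"})  # -> True
    #     lemmatizer.is_base_form("noun", {"Number": "plur"})   # -> False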

    def noun(self, string, morphology=None):
        return self(string, "noun", morphology)

    def verb(self, string, morphology=None):
        return self(string, "verb", morphology)

    def adj(self, string, morphology=None):
        return self(string, "adj", morphology)

    def punct(self, string, morphology=None):
        return self(string, "punct", morphology)

    def lookup(self, orth, string):
        """Look up the string's lemma in the lookup table by its orth ID,
        returning the original string if no entry is found.
        """
        if orth in self.lookup_table:
            return self.lookup_table[orth]
        return string
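
    # Sketch of the lookup fallback (the orth ID below is hypothetical; real
    # tables are keyed by spaCy's hash of the string):
    #
    #     lemmatizer.lookup_table = {12345: "be"}
    #     lemmatizer.lookup(12345, "was")  # -> "be"
    #     lemmatizer.lookup(99999, "was")  # -> "was" (miss falls back)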


def lemmatize(string, index, exceptions, rules):
    orig = string
    string = string.lower()
    forms = []
    oov_forms = []
    for old, new in rules:
        if string.endswith(old):
            form = string[: len(string) - len(old)] + new
            if not form:
                pass
            elif form in index or not form.isalpha():
                forms.append(form)
            else:
                oov_forms.append(form)
    # Remove duplicates but preserve the ordering of applied "rules"
    forms = list(OrderedDict.fromkeys(forms))
    # Put exceptions at the front of the list, so they get priority.
    # This is a dodgy heuristic -- but it's the best we can do until we get
    # frequencies on this. We can at least prune out problematic exceptions,
    # if they shadow more frequent analyses.
    for form in exceptions.get(string, []):
        if form not in forms:
            forms.insert(0, form)
    if not forms:
        forms.extend(oov_forms)
    if not forms:
        forms.append(orig)
    return forms
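

# A worked trace of lemmatize() with toy arguments (illustrative only):
#
#     lemmatize("Studies", index={"study"}, exceptions={}, rules=[["ies", "y"], ["s", ""]])
#
# "studies" matches both suffix rules: "study" is in the index and is kept,
# while "studie" is alphabetic but out-of-vocabulary, so it is only held as a
# fallback. The call returns ["study"].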