---
title: Lemmatizer
tag: class
source: spacy/pipeline/lemmatizer.py
new: 3
teaser: 'Pipeline component for lemmatization'
api_base_class: /api/pipe
api_string_name: lemmatizer
api_trainable: false
---

## Config and implementation

The default config is defined by the pipeline component factory and describes how the component should be configured. You can override its settings via the `config` argument on `nlp.add_pipe` or in your `config.cfg` for training.
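For instance, a sketch of what such an override could look like as a block in `config.cfg` (the setting names mirror the defaults listed below):

```ini
[components.lemmatizer]
factory = "lemmatizer"
mode = "rule"
overwrite = true
```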

For examples of the lookups data formats used by the lookup and rule-based lemmatizers, see the [`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data) repo.
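As a minimal illustration of the idea (the toy entries below are made up, and real tables in `spacy-lookups-data` are far larger), a lookup table simply maps strings to lemmas and can be assembled through the `Lookups` API:

```python
from spacy.lookups import Lookups

lookups = Lookups()
# Toy "lemma_lookup" table mapping a token's text to its lemma
lookups.add_table("lemma_lookup", {"going": "go", "went": "go"})
```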

#### Example

```python
config = {"mode": "rule"}
nlp.add_pipe("lemmatizer", config=config)
```

| Setting     | Type      | Description | Default |
| ----------- | --------- | ----------- | ------- |
| `mode`      | `str`     | The lemmatizer mode, e.g. `"lookup"` or `"rule"`. | `"lookup"` |
| `lookups`   | `Lookups` | The lookups object containing the tables such as `"lemma_rules"`, `"lemma_index"`, `"lemma_exc"` and `"lemma_lookup"`. If `None`, default tables are loaded from [`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data). | `None` |
| `overwrite` | `bool`    | Whether to overwrite existing lemmas. | `False` |
| `model`     | `Model`   | **Not yet implemented:** the model to use. | `None` |

Source: [`spacy/pipeline/lemmatizer.py`](https://github.com/explosion/spaCy/blob/develop/spacy/pipeline/lemmatizer.py)

## `Lemmatizer.__init__`

#### Example

```python
# Construction via add_pipe with default settings
lemmatizer = nlp.add_pipe("lemmatizer")

# Construction via add_pipe with custom settings
config = {"mode": "rule", "overwrite": True}
lemmatizer = nlp.add_pipe("lemmatizer", config=config)
```

Create a new pipeline instance. In your application, you would normally use a shortcut for this and instantiate the component using its string name and `nlp.add_pipe`.

| Name           | Type      | Description |
| -------------- | --------- | ----------- |
| `vocab`        | `Vocab`   | The shared vocabulary. |
| `model`        | `Model`   | A model (not yet implemented). |
| `name`         | `str`     | String name of the component instance. Used to add entries to the `losses` during training. |
| _keyword-only_ |           | |
| `mode`         | `str`     | The lemmatizer mode, e.g. `"lookup"` or `"rule"`. Defaults to `"lookup"`. |
| `lookups`      | `Lookups` | A lookups object containing the tables such as `"lemma_rules"`, `"lemma_index"`, `"lemma_exc"` and `"lemma_lookup"`. Defaults to `None`. |
| `overwrite`    | `bool`    | Whether to overwrite existing lemmas. Defaults to `False`. |
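If you need the component outside of `nlp.add_pipe`, a minimal construction sketch (assuming `Lemmatizer` is importable from `spacy.pipeline`; `model` is passed as `None` since no model is implemented yet):

```python
from spacy.pipeline import Lemmatizer

# Direct construction; mode and the other settings are keyword-only
lemmatizer = Lemmatizer(nlp.vocab, model=None, name="lemmatizer", mode="rule")
```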

## `Lemmatizer.__call__`

Apply the pipe to one document. The document is modified in place and returned. This usually happens under the hood when the `nlp` object is called on a text and all pipeline components are applied to the `Doc` in order.

#### Example

```python
doc = nlp("This is a sentence.")
lemmatizer = nlp.add_pipe("lemmatizer")
# This usually happens under the hood
processed = lemmatizer(doc)
```

| Name        | Type  | Description |
| ----------- | ----- | ----------- |
| `doc`       | `Doc` | The document to process. |
| **RETURNS** | `Doc` | The processed document. |

## `Lemmatizer.pipe`

Apply the pipe to a stream of documents. This usually happens under the hood when the `nlp` object is called on a text and all pipeline components are applied to the `Doc` in order.

#### Example

```python
lemmatizer = nlp.add_pipe("lemmatizer")
for doc in lemmatizer.pipe(docs, batch_size=50):
    pass
```

| Name           | Type            | Description |
| -------------- | --------------- | ----------- |
| `stream`       | `Iterable[Doc]` | A stream of documents. |
| _keyword-only_ |                 | |
| `batch_size`   | `int`           | The number of texts to buffer. Defaults to `128`. |
| **YIELDS**     | `Doc`           | Processed documents in the order of the original text. |

## `Lemmatizer.lookup_lemmatize`

Lemmatize a token using a lookup-based approach. If no lemma is found, the original string is returned. Languages can provide a lookup table via the `Lookups` object.

| Name        | Type        | Description |
| ----------- | ----------- | ----------- |
| `token`     | `Token`     | The token to lemmatize. |
| **RETURNS** | `List[str]` | A list containing one or more lemmas. |
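A short usage sketch (assumes the pipeline's lookups include a `"lemma_lookup"` table for the language):

```python
lemmatizer = nlp.get_pipe("lemmatizer")
doc = nlp("The dogs were running")
# e.g. ["dog"] if the table has an entry for "dogs", otherwise ["dogs"]
lemmas = lemmatizer.lookup_lemmatize(doc[1])
```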

## `Lemmatizer.rule_lemmatize`

Lemmatize a token using a rule-based approach. Typically relies on POS tags.

| Name        | Type        | Description |
| ----------- | ----------- | ----------- |
| `token`     | `Token`     | The token to lemmatize. |
| **RETURNS** | `List[str]` | A list containing one or more lemmas. |
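A similar sketch for rule mode; since the rules typically depend on the coarse-grained POS, the token should already be tagged (e.g. by a tagger earlier in the pipeline):

```python
lemmatizer = nlp.get_pipe("lemmatizer")
doc = nlp("She was reading")
# doc[2].pos_ should already be set, e.g. to "VERB"
lemmas = lemmatizer.rule_lemmatize(doc[2])
```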

## `Lemmatizer.is_base_form`

Check whether we're dealing with an uninflected paradigm, so we can avoid lemmatization entirely.

| Name        | Type    | Description |
| ----------- | ------- | ----------- |
| `token`     | `Token` | The token to analyze. |
| **RETURNS** | `bool`  | Whether the token's attributes (e.g. part-of-speech tag, morphological features) describe a base form. |
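Because this is an ordinary method, one way to customize the check is to subclass. A sketch under stated assumptions: the subclass name and the simplistic heuristic are illustrative only, and `token.morph.to_dict()` is assumed to be available for reading morphological features:

```python
from spacy.pipeline import Lemmatizer

class CustomLemmatizer(Lemmatizer):
    def is_base_form(self, token):
        # Illustrative heuristic: treat singular nouns and verb
        # infinitives as uninflected base forms
        morph = token.morph.to_dict()
        if token.pos_ == "NOUN" and morph.get("Number") == "Sing":
            return True
        if token.pos_ == "VERB" and morph.get("VerbForm") == "Inf":
            return True
        return False
```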

## `Lemmatizer.get_lookups_config`

Returns the lookups configuration settings for a given mode for use in `Lemmatizer.load_lookups`.

| Name        | Type   | Description |
| ----------- | ------ | ----------- |
| `mode`      | `str`  | The lemmatizer mode. |
| **RETURNS** | `dict` | The lookups configuration settings for this mode. |

## `Lemmatizer.load_lookups`

Load and validate lookups tables. If the provided lookups is `None`, load the default lookups tables according to the language and mode settings. Confirm that all required tables for the language and mode are present.

| Name        | Type      | Description |
| ----------- | --------- | ----------- |
| `lang`      | `str`     | The language. |
| `mode`      | `str`     | The lemmatizer mode. |
| `lookups`   | `Lookups` | The provided lookups, may be `None` if the default lookups should be loaded. |
| **RETURNS** | `Lookups` | The lookups object. |
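A hedged sketch of how the two class methods fit together; loading the default tables assumes the `spacy-lookups-data` package is installed for the language:

```python
from spacy.pipeline import Lemmatizer

# Settings describing which lookups tables "rule" mode uses
config = Lemmatizer.get_lookups_config("rule")
# Load and validate the default English tables for rule mode
lookups = Lemmatizer.load_lookups("en", "rule", None)
assert lookups.has_table("lemma_rules")
```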

## `Lemmatizer.to_disk`

Serialize the pipe to disk.

#### Example

```python
lemmatizer = nlp.add_pipe("lemmatizer")
lemmatizer.to_disk("/path/to/lemmatizer")
```

| Name           | Type            | Description |
| -------------- | --------------- | ----------- |
| `path`         | `str` / `Path`  | A path to a directory, which will be created if it doesn't exist. Paths may be either strings or `Path`-like objects. |
| _keyword-only_ |                 | |
| `exclude`      | `Iterable[str]` | String names of serialization fields to exclude. |

## `Lemmatizer.from_disk`

Load the pipe from disk. Modifies the object in place and returns it.

#### Example

```python
lemmatizer = nlp.add_pipe("lemmatizer")
lemmatizer.from_disk("/path/to/lemmatizer")
```

| Name           | Type            | Description |
| -------------- | --------------- | ----------- |
| `path`         | `str` / `Path`  | A path to a directory. Paths may be either strings or `Path`-like objects. |
| _keyword-only_ |                 | |
| `exclude`      | `Iterable[str]` | String names of serialization fields to exclude. |
| **RETURNS**    | `Lemmatizer`    | The modified `Lemmatizer` object. |

## `Lemmatizer.to_bytes`

Serialize the pipe to a bytestring.

#### Example

```python
lemmatizer = nlp.add_pipe("lemmatizer")
lemmatizer_bytes = lemmatizer.to_bytes()
```

| Name           | Type            | Description |
| -------------- | --------------- | ----------- |
| _keyword-only_ |                 | |
| `exclude`      | `Iterable[str]` | String names of serialization fields to exclude. |
| **RETURNS**    | `bytes`         | The serialized form of the `Lemmatizer` object. |

## `Lemmatizer.from_bytes`

Load the pipe from a bytestring. Modifies the object in place and returns it.

#### Example

```python
lemmatizer_bytes = lemmatizer.to_bytes()
lemmatizer = nlp.add_pipe("lemmatizer")
lemmatizer.from_bytes(lemmatizer_bytes)
```

| Name           | Type            | Description |
| -------------- | --------------- | ----------- |
| `bytes_data`   | `bytes`         | The data to load from. |
| _keyword-only_ |                 | |
| `exclude`      | `Iterable[str]` | String names of serialization fields to exclude. |
| **RETURNS**    | `Lemmatizer`    | The `Lemmatizer` object. |

## `Lemmatizer.mode`

The lemmatizer mode.

| Name        | Type  | Description |
| ----------- | ----- | ----------- |
| **RETURNS** | `str` | The lemmatizer mode. |
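The mode is an immutable property, so it can be inspected but not reassigned after construction:

```python
lemmatizer = nlp.add_pipe("lemmatizer", config={"mode": "rule"})
assert lemmatizer.mode == "rule"
```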

## Attributes

| Name      | Type      | Description |
| --------- | --------- | ----------- |
| `vocab`   | `Vocab`   | The shared vocabulary. |
| `lookups` | `Lookups` | The lookups object. |

## Serialization fields

During serialization, spaCy will export several data fields used to restore different aspects of the object. If needed, you can exclude them from serialization by passing in the string names via the `exclude` argument.

#### Example

```python
lemmatizer.to_disk("/path", exclude=["vocab"])
```

| Name      | Description |
| --------- | ----------- |
| `vocab`   | The shared vocabulary. |
| `lookups` | The lookups. You usually don't want to exclude this. |