
---
title: Lemmatizer
tag: class
source: spacy/pipeline/lemmatizer.py
new: 3
teaser: 'Pipeline component for lemmatization'
api_base_class: /api/pipe
api_string_name: lemmatizer
api_trainable: false
---

Component for assigning base forms to tokens using rules based on part-of-speech tags, or lookup tables. Functionality to train the component is coming soon. Different [`Language`](/api/language) subclasses can implement their own lemmatizer components via language-specific factories. The default data used is provided by the [`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data) extension package.

As of v3.0, the `Lemmatizer` is a standalone pipeline component that can be added to your pipeline, and not a hidden part of the vocab that runs behind the scenes. This makes it easier to customize how lemmas should be assigned in your pipeline.

If the lemmatization mode is set to `"rule"`, which requires coarse-grained POS (`Token.pos`) to be assigned, make sure a [`Tagger`](/api/tagger), [`Morphologizer`](/api/morphologizer) or another component assigning POS is available in the pipeline and runs _before_ the lemmatizer.
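For instance, a trained pipeline already assigns `Token.pos` before the lemmatizer runs, so the ordering requirement is satisfied out of the box. A minimal sketch, assuming the `en_core_web_sm` pipeline is installed:

```python
import spacy

# The tagger and attribute ruler assign Token.pos, so the rule-based
# lemmatizer can run after them in the pipeline.
nlp = spacy.load("en_core_web_sm")
print(nlp.pipe_names)
# e.g. ['tok2vec', 'tagger', 'parser', 'attribute_ruler', 'lemmatizer', 'ner']

doc = nlp("She was reading the papers")
print([(token.text, token.pos_, token.lemma_) for token in doc])
```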

## Config and implementation

The default config is defined by the pipeline component factory and describes how the component should be configured. You can override its settings via the `config` argument on [`nlp.add_pipe`](/api/language#add_pipe) or in your `config.cfg` for training. For examples of the lookups data format used by the lookup and rule-based lemmatizers, see [`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data).

> #### Example
>
> ```python
> config = {"mode": "rule"}
> nlp.add_pipe("lemmatizer", config=config)
> ```

| Setting     | Description                                                                                                                                                 |
| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `mode`      | The lemmatizer mode, e.g. `"lookup"` or `"rule"`. Defaults to `"lookup"` if no language-specific lemmatizer is available (see the following table). ~~str~~   |
| `overwrite` | Whether to overwrite existing lemmas. Defaults to `False`. ~~bool~~                                                                                           |
| `model`     | **Not yet implemented:** the model to use. ~~Model~~                                                                                                          |

Many languages specify a default lemmatizer mode other than `"lookup"` if a better lemmatizer is available. The lemmatizer modes `"rule"` and `"pos_lookup"` require `Token.pos` from a previous pipeline component (see the example pipeline configurations in the pretrained pipeline design details) or rely on third-party libraries ([`pymorphy2`](https://github.com/kmike/pymorphy2)).

| Language | Default Mode |
| -------- | ------------ |
| `bn`     | `rule`       |
| `el`     | `rule`       |
| `en`     | `rule`       |
| `es`     | `rule`       |
| `fa`     | `rule`       |
| `fr`     | `rule`       |
| `mk`     | `rule`       |
| `nb`     | `rule`       |
| `nl`     | `rule`       |
| `pl`     | `pos_lookup` |
| `ru`     | `pymorphy2`  |
| `sv`     | `rule`       |
| `uk`     | `pymorphy2`  |
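To use a mode other than a language's default, override it in the config when adding the component. A minimal sketch, assuming `spacy-lookups-data` is installed so the default tables can be loaded:

```python
import spacy

nlp = spacy.blank("en")
# Override the English default ("rule") with the simpler lookup mode
lemmatizer = nlp.add_pipe("lemmatizer", config={"mode": "lookup"})
# With no lookups passed in, initialize() loads the default tables
# from spacy-lookups-data
lemmatizer.initialize()
doc = nlp("dogs running")
print([token.lemma_ for token in doc])
```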
```python
%%GITHUB_SPACY/spacy/pipeline/lemmatizer.py
```

## Lemmatizer.\_\_init\_\_

> #### Example
>
> ```python
> # Construction via add_pipe with default model
> lemmatizer = nlp.add_pipe("lemmatizer")
>
> # Construction via add_pipe with custom settings
> config = {"mode": "rule", "overwrite": True}
> lemmatizer = nlp.add_pipe("lemmatizer", config=config)
> ```

Create a new pipeline instance. In your application, you would normally use a shortcut for this and instantiate the component using its string name and [`nlp.add_pipe`](/api/language#add_pipe).

| Name           | Description                                                                                          |
| -------------- | ----------------------------------------------------------------------------------------------------- |
| `vocab`        | The shared vocabulary. ~~Vocab~~                                                                       |
| `model`        | **Not yet implemented:** the model to use. ~~Model~~                                                   |
| `name`         | String name of the component instance. Used to add entries to the `losses` during training. ~~str~~    |
| _keyword-only_ |                                                                                                        |
| `mode`         | The lemmatizer mode, e.g. `"lookup"` or `"rule"`. Defaults to `"lookup"`. ~~str~~                      |
| `overwrite`    | Whether to overwrite existing lemmas. ~~bool~~                                                         |

## Lemmatizer.\_\_call\_\_

Apply the pipe to one document. The document is modified in place, and returned. This usually happens under the hood when the `nlp` object is called on a text and all pipeline components are applied to the `Doc` in order.

> #### Example
>
> ```python
> doc = nlp("This is a sentence.")
> lemmatizer = nlp.add_pipe("lemmatizer")
> # This usually happens under the hood
> processed = lemmatizer(doc)
> ```

| Name        | Description                      |
| ----------- | -------------------------------- |
| `doc`       | The document to process. ~~Doc~~ |
| **RETURNS** | The processed document. ~~Doc~~  |

## Lemmatizer.pipe

Apply the pipe to a stream of documents. This usually happens under the hood when the `nlp` object is called on a text and all pipeline components are applied to the `Doc` in order.

> #### Example
>
> ```python
> lemmatizer = nlp.add_pipe("lemmatizer")
> for doc in lemmatizer.pipe(docs, batch_size=50):
>     pass
> ```

| Name           | Description                                                   |
| -------------- | ------------------------------------------------------------- |
| `stream`       | A stream of documents. ~~Iterable[Doc]~~                      |
| _keyword-only_ |                                                               |
| `batch_size`   | The number of documents to buffer. Defaults to `128`. ~~int~~ |
| **YIELDS**     | The processed documents in order. ~~Doc~~                     |

## Lemmatizer.initialize

Initialize the lemmatizer and load any data resources. This method is typically called by [`Language.initialize`](/api/language#initialize) and lets you customize arguments it receives via the `[initialize.components]` block in the config. The loading only happens during initialization, typically before training. At runtime, all data is loaded from disk.

> #### Example
>
> ```python
> lemmatizer = nlp.add_pipe("lemmatizer")
> lemmatizer.initialize(lookups=lookups)
> ```
>
> ```ini
> ### config.cfg
> [initialize.components.lemmatizer]
>
> [initialize.components.lemmatizer.lookups]
> @misc = "load_my_lookups.v1"
> ```

| Name           | Description                                                                                                                                                                                                                                                                 |
| -------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `get_examples` | Function that returns gold-standard annotations in the form of [`Example`](/api/example) objects. Defaults to `None`. ~~Optional[Callable[[], Iterable[Example]]]~~                                                                                                           |
| _keyword-only_ |                                                                                                                                                                                                                                                                              |
| `nlp`          | The current `nlp` object. Defaults to `None`. ~~Optional[Language]~~                                                                                                                                                                                                          |
| `lookups`      | The lookups object containing the tables such as `"lemma_rules"`, `"lemma_index"`, `"lemma_exc"` and `"lemma_lookup"`. If `None`, default tables are loaded from [`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data). Defaults to `None`. ~~Optional[Lookups]~~ |
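The `load_my_lookups.v1` function referenced in the config example above is a custom registered function, and the name is only illustrative. A minimal sketch of what such a function might look like, with hypothetical inline data standing in for tables you would normally load from disk:

```python
from spacy.lookups import Lookups
from spacy.util import registry

@registry.misc("load_my_lookups.v1")
def load_my_lookups() -> Lookups:
    lookups = Lookups()
    # Hypothetical data: in practice, load your tables from your own files
    lookups.add_table("lemma_lookup", {"going": "go", "mice": "mouse"})
    return lookups
```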

## Lemmatizer.lookup_lemmatize

Lemmatize a token using a lookup-based approach. If no lemma is found, the original string is returned.

| Name        | Description                                          |
| ----------- | ---------------------------------------------------- |
| `token`     | The token to lemmatize. ~~Token~~                    |
| **RETURNS** | A list containing one or more lemmas. ~~List[str]~~  |
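For example, you can call the method directly on a single token. A minimal sketch, assuming `spacy-lookups-data` is installed:

```python
import spacy

nlp = spacy.blank("en")
lemmatizer = nlp.add_pipe("lemmatizer", config={"mode": "lookup"})
lemmatizer.initialize()  # loads the default "lemma_lookup" table
doc = nlp("dogs")
print(lemmatizer.lookup_lemmatize(doc[0]))  # e.g. ['dog']
```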

## Lemmatizer.rule_lemmatize

Lemmatize a token using a rule-based approach. Typically relies on POS tags.

| Name        | Description                                          |
| ----------- | ---------------------------------------------------- |
| `token`     | The token to lemmatize. ~~Token~~                    |
| **RETURNS** | A list containing one or more lemmas. ~~List[str]~~  |
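Because the rules read `Token.pos`, calling the method directly only makes sense once POS tags have been assigned. A minimal sketch, assuming the `en_core_web_sm` pipeline is installed:

```python
import spacy

nlp = spacy.load("en_core_web_sm")
lemmatizer = nlp.get_pipe("lemmatizer")
doc = nlp("She was running")  # the tagger assigns Token.pos first
print(lemmatizer.rule_lemmatize(doc[2]))  # e.g. ['run']
```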

## Lemmatizer.is_base_form

Check whether we're dealing with an uninflected paradigm, so we can avoid lemmatization entirely.

| Name        | Description                                                                                                      |
| ----------- | ----------------------------------------------------------------------------------------------------------------- |
| `token`     | The token to analyze. ~~Token~~                                                                                    |
| **RETURNS** | Whether the token's attributes (e.g., part-of-speech tag, morphological features) describe a base form. ~~bool~~   |
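A minimal sketch of checking tokens directly, assuming a trained English pipeline (language-specific lemmatizer subclasses may override this method with their own morphology-based checks):

```python
import spacy

nlp = spacy.load("en_core_web_sm")
lemmatizer = nlp.get_pipe("lemmatizer")
doc = nlp("the cats sat")
for token in doc:
    print(token.text, lemmatizer.is_base_form(token))
```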

## Lemmatizer.get_lookups_config

Returns the lookups configuration settings for a given mode for use in `Lemmatizer.load_lookups`.

| Name        | Description                                                                              |
| ----------- | ----------------------------------------------------------------------------------------- |
| `mode`      | The lemmatizer mode. ~~str~~                                                               |
| **RETURNS** | The required table names and the optional table names. ~~Tuple[List[str], List[str]]~~     |
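Since this is a classmethod, you can inspect a mode's table requirements without building a pipeline. A minimal sketch (the exact table names shown in the comments are illustrative):

```python
from spacy.pipeline import Lemmatizer

# Which lookup tables does the rule-based mode expect?
required, optional = Lemmatizer.get_lookups_config("rule")
print(required)  # e.g. ['lemma_rules']
print(optional)  # e.g. ['lemma_exc', 'lemma_index']
```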

## Lemmatizer.to_disk

Serialize the pipe to disk.

> #### Example
>
> ```python
> lemmatizer = nlp.add_pipe("lemmatizer")
> lemmatizer.to_disk("/path/to/lemmatizer")
> ```

| Name           | Description                                                                                                                     |
| -------------- | -------------------------------------------------------------------------------------------------------------------------------- |
| `path`         | A path to a directory, which will be created if it doesn't exist. Paths may be either strings or `Path`-like objects. ~~Union[str, Path]~~ |
| _keyword-only_ |                                                                                                                                    |
| `exclude`      | String names of serialization fields to exclude. ~~Iterable[str]~~                                                                 |

## Lemmatizer.from_disk

Load the pipe from disk. Modifies the object in place and returns it.

> #### Example
>
> ```python
> lemmatizer = nlp.add_pipe("lemmatizer")
> lemmatizer.from_disk("/path/to/lemmatizer")
> ```

| Name           | Description                                                                                     |
| -------------- | ------------------------------------------------------------------------------------------------ |
| `path`         | A path to a directory. Paths may be either strings or `Path`-like objects. ~~Union[str, Path]~~   |
| _keyword-only_ |                                                                                                   |
| `exclude`      | String names of serialization fields to exclude. ~~Iterable[str]~~                                |
| **RETURNS**    | The modified `Lemmatizer` object. ~~Lemmatizer~~                                                  |

## Lemmatizer.to_bytes

> #### Example
>
> ```python
> lemmatizer = nlp.add_pipe("lemmatizer")
> lemmatizer_bytes = lemmatizer.to_bytes()
> ```

Serialize the pipe to a bytestring.

| Name           | Description                                                        |
| -------------- | ------------------------------------------------------------------- |
| _keyword-only_ |                                                                     |
| `exclude`      | String names of serialization fields to exclude. ~~Iterable[str]~~  |
| **RETURNS**    | The serialized form of the `Lemmatizer` object. ~~bytes~~           |

## Lemmatizer.from_bytes

Load the pipe from a bytestring. Modifies the object in place and returns it.

> #### Example
>
> ```python
> lemmatizer_bytes = lemmatizer.to_bytes()
> lemmatizer = nlp.add_pipe("lemmatizer")
> lemmatizer.from_bytes(lemmatizer_bytes)
> ```

| Name           | Description                                                        |
| -------------- | ------------------------------------------------------------------- |
| `bytes_data`   | The data to load from. ~~bytes~~                                     |
| _keyword-only_ |                                                                     |
| `exclude`      | String names of serialization fields to exclude. ~~Iterable[str]~~  |
| **RETURNS**    | The `Lemmatizer` object. ~~Lemmatizer~~                              |

## Attributes

| Name      | Description                                  |
| --------- | --------------------------------------------- |
| `vocab`   | The shared [`Vocab`](/api/vocab). ~~Vocab~~    |
| `lookups` | The lookups object. ~~Lookups~~                |
| `mode`    | The lemmatizer mode. ~~str~~                   |

## Serialization fields

During serialization, spaCy will export several data fields used to restore different aspects of the object. If needed, you can exclude them from serialization by passing in the string names via the `exclude` argument.

> #### Example
>
> ```python
> data = lemmatizer.to_disk("/path", exclude=["vocab"])
> ```

| Name      | Description                                          |
| --------- | ----------------------------------------------------- |
| `vocab`   | The shared [`Vocab`](/api/vocab).                      |
| `lookups` | The lookups. You usually don't want to exclude this.   |