| title | tag | source | new | teaser | api_base_class | api_string_name | api_trainable |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Lemmatizer | class | spacy/pipeline/lemmatizer.py | 3 | Pipeline component for lemmatization | /api/pipe | lemmatizer | false |
Component for assigning base forms to tokens using rules based on part-of-speech tags, or lookup tables. Functionality to train the component is coming soon. Different `Language` subclasses can implement their own lemmatizer components via language-specific factories. The default data used is provided by the `spacy-lookups-data` extension package.
As of v3.0, the `Lemmatizer` is a standalone pipeline component that can be added to your pipeline, and not a hidden part of the vocab that runs behind the scenes. This makes it easier to customize how lemmas should be assigned in your pipeline.
If the lemmatization mode is set to `"rule"`, which requires coarse-grained POS (`Token.pos`) to be assigned, make sure a `Tagger`, `Morphologizer` or another component assigning POS is available in the pipeline and runs before the lemmatizer, as in the sketch below.
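For illustration, here is a minimal sketch of arranging a POS-assigning component before the rule-based lemmatizer. It assumes the `en_core_web_sm` trained pipeline is installed; the blank-pipeline variant is only a skeleton, since its components would still need to be trained or initialized before use:

```python
import spacy

# A trained pipeline already assigns Token.pos before the lemmatizer runs
nlp = spacy.load("en_core_web_sm")
print(nlp.pipe_names)  # POS-assigning components precede "lemmatizer"

# When assembling a pipeline yourself, add a POS-assigning component
# such as the morphologizer before the rule-based lemmatizer
nlp_blank = spacy.blank("en")
nlp_blank.add_pipe("morphologizer")  # assigns Token.pos
nlp_blank.add_pipe("lemmatizer", config={"mode": "rule"})
```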
## Config and implementation

The default config is defined by the pipeline component factory and describes how the component should be configured. You can override its settings via the `config` argument on `nlp.add_pipe` or in your `config.cfg` for training. For examples of the lookups data formats used by the lookup and rule-based lemmatizers, see `spacy-lookups-data`.
Example

```python
config = {"mode": "rule"}
nlp.add_pipe("lemmatizer", config=config)
```
| Setting | Description |
| --- | --- |
| `mode` | The lemmatizer mode, e.g. `"lookup"` or `"rule"`. Defaults to `"lookup"`. |
| `lookups` | The lookups object containing the tables such as `"lemma_rules"`, `"lemma_index"`, `"lemma_exc"` and `"lemma_lookup"`. If `None`, default tables are loaded from `spacy-lookups-data`. Defaults to `None`. |
| `overwrite` | Whether to overwrite existing lemmas. Defaults to `False`. |
| `model` | Not yet implemented: the model to use. |
Source: `%%GITHUB_SPACY/spacy/pipeline/lemmatizer.py`
## `Lemmatizer.__init__`

Example

```python
# Construction via add_pipe with default settings
lemmatizer = nlp.add_pipe("lemmatizer")

# Construction via add_pipe with custom settings
config = {"mode": "rule", "overwrite": True}
lemmatizer = nlp.add_pipe("lemmatizer", config=config)
```
Create a new pipeline instance. In your application, you would normally use a shortcut for this and instantiate the component using its string name and `nlp.add_pipe`.
| Name | Description |
| --- | --- |
| `vocab` | The shared vocabulary. |
| `model` | Not yet implemented: the model to use. |
| `name` | String name of the component instance. Used to add entries to the losses during training. |
| _keyword-only_ | |
| `mode` | The lemmatizer mode, e.g. `"lookup"` or `"rule"`. Defaults to `"lookup"`. |
| `lookups` | A lookups object containing the tables such as `"lemma_rules"`, `"lemma_index"`, `"lemma_exc"` and `"lemma_lookup"`. Defaults to `None`. |
| `overwrite` | Whether to overwrite existing lemmas. Defaults to `False`. |
## `Lemmatizer.__call__`

Apply the pipe to one document. The document is modified in place and returned. This usually happens under the hood when the `nlp` object is called on a text and all pipeline components are applied to the `Doc` in order.
Example

```python
doc = nlp("This is a sentence.")
lemmatizer = nlp.add_pipe("lemmatizer")
# This usually happens under the hood
processed = lemmatizer(doc)
```
| Name | Description |
| --- | --- |
| `doc` | The document to process. |
| **RETURNS** | The processed document. |
## `Lemmatizer.pipe`

Apply the pipe to a stream of documents. This usually happens under the hood when the `nlp` object is called on a text and all pipeline components are applied to the `Doc` in order.
Example

```python
lemmatizer = nlp.add_pipe("lemmatizer")
for doc in lemmatizer.pipe(docs, batch_size=50):
    pass
```
| Name | Description |
| --- | --- |
| `stream` | A stream of documents. |
| _keyword-only_ | |
| `batch_size` | The number of documents to buffer. Defaults to `128`. |
| **YIELDS** | The processed documents in order. |
## `Lemmatizer.lookup_lemmatize`

Lemmatize a token using a lookup-based approach. If no lemma is found, the original string is returned. Languages can provide a lookup table via the `Lookups` object.
| Name | Description |
| --- | --- |
| `token` | The token to lemmatize. |
| **RETURNS** | A list containing one or more lemmas. |
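A minimal sketch of calling the method directly on a token. The example sentence and lemma are illustrative, and the default `"lookup"` mode with tables from `spacy-lookups-data` is assumed to be available:

```python
import spacy

nlp = spacy.blank("en")
lemmatizer = nlp.add_pipe("lemmatizer")  # defaults to "lookup" mode
nlp.initialize()  # ensures the default tables are loaded

doc = nlp("She was reading")
print(lemmatizer.lookup_lemmatize(doc[1]))  # e.g. ["be"] for "was"
```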
## `Lemmatizer.rule_lemmatize`

Lemmatize a token using a rule-based approach. Typically relies on POS tags.
| Name | Description |
| --- | --- |
| `token` | The token to lemmatize. |
| **RETURNS** | A list containing one or more lemmas. |
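A sketch of calling `rule_lemmatize` directly, assuming a trained pipeline such as `en_core_web_sm` is installed so that `Token.pos` is already assigned when the lemmatizer runs:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assigns Token.pos upstream
lemmatizer = nlp.get_pipe("lemmatizer")

doc = nlp("She was running quickly.")
print(lemmatizer.rule_lemmatize(doc[2]))  # e.g. ["run"] for "running"
```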
## `Lemmatizer.is_base_form`

Check whether we're dealing with an uninflected paradigm, so we can avoid lemmatization entirely.
| Name | Description |
| --- | --- |
| `token` | The token to analyze. |
| **RETURNS** | Whether the token's attributes (e.g., part-of-speech tag, morphological features) describe a base form. |
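Language subclasses such as English override this method. A quick, illustrative way to inspect its behavior (again assuming `en_core_web_sm` is installed):

```python
import spacy

nlp = spacy.load("en_core_web_sm")
lemmatizer = nlp.get_pipe("lemmatizer")

doc = nlp("I am running")
for token in doc:
    # True means the token can be taken as its own lemma,
    # so rule lemmatization can be skipped for it
    print(token.text, lemmatizer.is_base_form(token))
```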
## `Lemmatizer.get_lookups_config`

Returns the lookups configuration settings for a given mode, for use in `Lemmatizer.load_lookups`.
| Name | Description |
| --- | --- |
| `mode` | The lemmatizer mode. |
| **RETURNS** | The lookups configuration settings for this mode. Includes the keys `"required_tables"` and `"optional_tables"`, mapped to a list of table string names. |
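A small sketch of inspecting the configuration for each mode. It assumes the method can be called on the class directly, and the exact return structure may vary between spaCy versions, so it is simply printed here:

```python
from spacy.pipeline import Lemmatizer

# Inspect which lookup tables each mode expects
for mode in ("lookup", "rule"):
    print(mode, Lemmatizer.get_lookups_config(mode))
```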
## `Lemmatizer.load_lookups`

Load and validate lookups tables. If the provided lookups is `None`, load the default lookups tables according to the language and mode settings. Confirm that all required tables for the language and mode are present.
| Name | Description |
| --- | --- |
| `lang` | The language. |
| `mode` | The lemmatizer mode. |
| `lookups` | The provided lookups, may be `None` if the default lookups should be loaded. |
| **RETURNS** | The lookups. |
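A minimal sketch based on the signature documented above. The location of this helper has shifted between spaCy versions, and loading the default tables requires the `spacy-lookups-data` package to be installed, so treat this as illustrative:

```python
from spacy.pipeline import Lemmatizer

# Load and validate the default tables for English in "rule" mode
lookups = Lemmatizer.load_lookups("en", "rule", None)
print(lookups.tables)  # names of the loaded tables
```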
## `Lemmatizer.to_disk`

Serialize the pipe to disk.

Example

```python
lemmatizer = nlp.add_pipe("lemmatizer")
lemmatizer.to_disk("/path/to/lemmatizer")
```
| Name | Description |
| --- | --- |
| `path` | A path to a directory, which will be created if it doesn't exist. Paths may be either strings or `Path`-like objects. |
| _keyword-only_ | |
| `exclude` | String names of serialization fields to exclude. |
## `Lemmatizer.from_disk`

Load the pipe from disk. Modifies the object in place and returns it.

Example

```python
lemmatizer = nlp.add_pipe("lemmatizer")
lemmatizer.from_disk("/path/to/lemmatizer")
```
| Name | Description |
| --- | --- |
| `path` | A path to a directory. Paths may be either strings or `Path`-like objects. |
| _keyword-only_ | |
| `exclude` | String names of serialization fields to exclude. |
| **RETURNS** | The modified `Lemmatizer` object. |
## `Lemmatizer.to_bytes`

Serialize the pipe to a bytestring.

Example

```python
lemmatizer = nlp.add_pipe("lemmatizer")
lemmatizer_bytes = lemmatizer.to_bytes()
```
| Name | Description |
| --- | --- |
| _keyword-only_ | |
| `exclude` | String names of serialization fields to exclude. |
| **RETURNS** | The serialized form of the `Lemmatizer` object. |
## `Lemmatizer.from_bytes`

Load the pipe from a bytestring. Modifies the object in place and returns it.

Example

```python
lemmatizer_bytes = lemmatizer.to_bytes()
lemmatizer = nlp.add_pipe("lemmatizer")
lemmatizer.from_bytes(lemmatizer_bytes)
```
| Name | Description |
| --- | --- |
| `bytes_data` | The data to load from. |
| _keyword-only_ | |
| `exclude` | String names of serialization fields to exclude. |
| **RETURNS** | The `Lemmatizer` object. |
## Attributes

| Name | Description |
| --- | --- |
| `vocab` | The shared `Vocab`. |
| `lookups` | The lookups object. |
| `mode` | The lemmatizer mode. |
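These attributes can be inspected directly once the component has been created. The default mode shown below follows the settings table above:

```python
lemmatizer = nlp.add_pipe("lemmatizer")
assert lemmatizer.mode == "lookup"  # the default mode
print(lemmatizer.lookups.tables)    # names of any loaded tables
```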
## Serialization fields

During serialization, spaCy will export several data fields used to restore different aspects of the object. If needed, you can exclude them from serialization by passing in the string names via the `exclude` argument.
Example

```python
lemmatizer.to_disk("/path", exclude=["vocab"])
```
| Name | Description |
| --- | --- |
| `vocab` | The shared `Vocab`. |
| `lookups` | The lookups. You usually don't want to exclude this. |