---
title: EntityLinker
tag: class
source: spacy/pipeline/entity_linker.py
new: 2.2
teaser: 'Pipeline component for named entity linking and disambiguation'
api_base_class: /api/pipe
api_string_name: entity_linker
api_trainable: true
---

An `EntityLinker` component disambiguates textual mentions (tagged as named
entities) to unique identifiers, grounding the named entities into the "real
world". It requires a `KnowledgeBase`, as well as a function to generate
plausible candidates from that `KnowledgeBase` given a certain textual mention,
and a machine learning model to pick the right candidate, given the local
context of the mention.
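
How these three pieces interact can be sketched in plain Python. The alias
table, prior probabilities and context scorer below are invented stand-ins for
illustration, not spaCy objects:

```python
# Toy sketch of the entity-linking decision: candidate generation from an
# alias table, then scoring each candidate by its prior probability plus a
# (here trivial) context score. All data below is invented for illustration.
ALIASES = {
    # mention text -> candidate KB IDs with prior probabilities
    "Douglas Adams": [("Q42", 0.9), ("Q21447", 0.1)],
}

def get_candidates(mention):
    """Return plausible (kb_id, prior) candidates for a mention."""
    return ALIASES.get(mention, [])

def context_score(kb_id, context):
    """Stand-in for the ML model's context similarity score."""
    return 1.0 if kb_id == "Q42" and "author" in context else 0.0

def link(mention, context):
    candidates = get_candidates(mention)
    if not candidates:
        return "NIL"  # no prediction
    return max(candidates, key=lambda c: c[1] + context_score(c[0], context))[0]

print(link("Douglas Adams", "the author of the Hitchhiker's Guide"))  # Q42
```

In the real component, the alias table and priors live in the `KnowledgeBase`,
and the context score comes from the trained model.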

## Assigned Attributes {#assigned-attributes}

Predictions, in the form of knowledge base IDs, will be assigned to
`Token.ent_kb_id_`.

| Location | Value |
| ------------------ | --------------------------------- |
| `Token.ent_kb_id`  | Knowledge base ID (hash). ~~int~~ |
| `Token.ent_kb_id_` | Knowledge base ID. ~~str~~ |
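
The two attributes are views of the same value: the integer is a hash that
resolves to the string through the shared `StringStore`. A toy two-way store
(an illustrative stand-in, not spaCy's implementation) shows the relationship:

```python
# Toy illustration of the hash/string duality behind `ent_kb_id` (int) and
# `ent_kb_id_` (str). spaCy uses a 64-bit murmurhash plus a shared
# StringStore; this stand-in uses Python's built-in hash() for simplicity.
class ToyStringStore:
    def __init__(self):
        self._by_hash = {}

    def add(self, string):
        key = hash(string)  # spaCy uses murmurhash, not hash()
        self._by_hash[key] = string
        return key

    def __getitem__(self, key):
        return self._by_hash[key]

store = ToyStringStore()
kb_id = store.add("Q42")      # what Token.ent_kb_id would hold (an int)
print(store[kb_id])           # what Token.ent_kb_id_ resolves to: Q42
```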

## Config and implementation {#config}

The default config is defined by the pipeline component factory and describes
how the component should be configured. You can override its settings via the
`config` argument on [`nlp.add_pipe`](/api/language#add_pipe) or in your
[`config.cfg` for training](/usage/training#config). See the
[model architectures](/api/architectures) documentation for details on the
architectures and their arguments and hyperparameters.

> #### Example
>
> ```python
> from spacy.pipeline.entity_linker import DEFAULT_NEL_MODEL
> config = {
>     "labels_discard": [],
>     "n_sents": 0,
>     "incl_prior": True,
>     "incl_context": True,
>     "model": DEFAULT_NEL_MODEL,
>     "entity_vector_length": 64,
>     "get_candidates": {'@misc': 'spacy.CandidateGenerator.v1'},
> }
> nlp.add_pipe("entity_linker", config=config)
> ```

| Setting | Description |
| ---------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `labels_discard`                         | NER labels that will automatically get a "NIL" prediction. Defaults to `[]`. ~~Iterable[str]~~ |
| `n_sents`                                | The number of neighbouring sentences to take into account. Defaults to 0. ~~int~~ |
| `incl_prior`                             | Whether or not to include prior probabilities from the KB in the model. Defaults to `True`. ~~bool~~ |
| `incl_context`                           | Whether or not to include the local context in the model. Defaults to `True`. ~~bool~~ |
| `model`                                  | The [`Model`](https://thinc.ai/docs/api-model) powering the pipeline component. Defaults to [EntityLinker](/api/architectures#EntityLinker). ~~Model~~ |
| `entity_vector_length`                   | Size of encoding vectors in the KB. Defaults to `64`. ~~int~~ |
| `use_gold_ents`                          | Whether to copy entities from the gold docs or not. Defaults to `True`. If `False`, entities must be set in the training data or by an annotating component in the pipeline. ~~bool~~ |
| `get_candidates`                         | Function that generates plausible candidates for a given `Span` object. Defaults to [CandidateGenerator](/api/architectures#CandidateGenerator), a function looking up exact, case-dependent aliases in the KB. ~~Callable[[KnowledgeBase, Span], Iterable[Candidate]]~~ |
| `overwrite` <Tag variant="new">3.2</Tag> | Whether existing annotation is overwritten. Defaults to `True`. ~~bool~~ |
| `scorer` <Tag variant="new">3.2</Tag>    | The scoring method. Defaults to [`Scorer.score_links`](/api/scorer#score_links). ~~Optional[Callable]~~ |
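
A custom `get_candidates` function only needs to match the
`Callable[[KnowledgeBase, Span], Iterable[Candidate]]` shape. The sketch below
uses stub classes in place of spaCy's `KnowledgeBase`, `Span` and `Candidate`
to show that shape; a real implementation would be registered under `@misc`
and query the actual knowledge base:

```python
# Stub types standing in for spaCy's KnowledgeBase, Span and Candidate,
# purely to illustrate the expected get_candidates signature.
class StubCandidate:
    def __init__(self, entity_id):
        self.entity_ = entity_id

class StubKB:
    def __init__(self, aliases):
        self._aliases = aliases  # alias text -> list of entity IDs

    def get_alias_candidates(self, alias):
        return [StubCandidate(e) for e in self._aliases.get(alias, [])]

class StubSpan:
    def __init__(self, text):
        self.text = text

def get_candidates(kb, span):
    """Look up candidates for the exact mention text, falling back to
    a lowercased lookup when the cased alias is unknown."""
    candidates = kb.get_alias_candidates(span.text)
    if not candidates:
        candidates = kb.get_alias_candidates(span.text.lower())
    return candidates

kb = StubKB({"douglas adams": ["Q42"]})
print([c.entity_ for c in get_candidates(kb, StubSpan("Douglas Adams"))])  # ['Q42']
```

The lowercase fallback is one possible design choice; the default
`CandidateGenerator` performs an exact, case-dependent lookup only.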

```python
%%GITHUB_SPACY/spacy/pipeline/entity_linker.py
```

## EntityLinker.\_\_init\_\_ {#init tag="method"}

> #### Example
>
> ```python
> # Construction via add_pipe with default model
> entity_linker = nlp.add_pipe("entity_linker")
>
> # Construction via add_pipe with custom model
> config = {"model": {"@architectures": "my_el.v1"}}
> entity_linker = nlp.add_pipe("entity_linker", config=config)
>
> # Construction from class
> from spacy.pipeline import EntityLinker
> entity_linker = EntityLinker(nlp.vocab, model)
> ```

Create a new pipeline instance. In your application, you would normally use a
shortcut for this and instantiate the component using its string name and
[`nlp.add_pipe`](/api/language#add_pipe).

Upon construction of the entity linker component, an empty knowledge base is
constructed with the provided `entity_vector_length`. If you want to use a
custom knowledge base, you should either call
[`set_kb`](/api/entitylinker#set_kb) or provide a `kb_loader` in the
[`initialize`](/api/entitylinker#initialize) call.

| Name | Description |
| ---------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- |
| `vocab`                                  | The shared vocabulary. ~~Vocab~~ |
| `model`                                  | The [`Model`](https://thinc.ai/docs/api-model) powering the pipeline component. ~~Model~~ |
| `name`                                   | String name of the component instance. Used to add entries to the `losses` during training. ~~str~~ |
| _keyword-only_                           | |
| `entity_vector_length`                   | Size of encoding vectors in the KB. ~~int~~ |
| `get_candidates`                         | Function that generates plausible candidates for a given `Span` object. ~~Callable[[KnowledgeBase, Span], Iterable[Candidate]]~~ |
| `labels_discard`                         | NER labels that will automatically get a `"NIL"` prediction. ~~Iterable[str]~~ |
| `n_sents`                                | The number of neighbouring sentences to take into account. ~~int~~ |
| `incl_prior`                             | Whether or not to include prior probabilities from the KB in the model. ~~bool~~ |
| `incl_context`                           | Whether or not to include the local context in the model. ~~bool~~ |
| `overwrite` <Tag variant="new">3.2</Tag> | Whether existing annotation is overwritten. Defaults to `True`. ~~bool~~ |
| `scorer` <Tag variant="new">3.2</Tag>    | The scoring method. Defaults to [`Scorer.score_links`](/api/scorer#score_links). ~~Optional[Callable]~~ |

## EntityLinker.\_\_call\_\_ {#call tag="method"}

Apply the pipe to one document. The document is modified in place and returned.
This usually happens under the hood when the `nlp` object is called on a text
and all pipeline components are applied to the `Doc` in order. Both
[`__call__`](/api/entitylinker#call) and [`pipe`](/api/entitylinker#pipe)
delegate to the [`predict`](/api/entitylinker#predict) and
[`set_annotations`](/api/entitylinker#set_annotations) methods.

> #### Example
>
> ```python
> doc = nlp("This is a sentence.")
> entity_linker = nlp.add_pipe("entity_linker")
> # This usually happens under the hood
> processed = entity_linker(doc)
> ```

| Name | Description |
| ----------- | -------------------------------- |
| `doc`       | The document to process. ~~Doc~~ |
| **RETURNS** | The processed document. ~~Doc~~ |

## EntityLinker.pipe {#pipe tag="method"}

Apply the pipe to a stream of documents. This usually happens under the hood
when the `nlp` object is called on a text and all pipeline components are
applied to the `Doc` in order. Both [`__call__`](/api/entitylinker#call) and
[`pipe`](/api/entitylinker#pipe) delegate to the
[`predict`](/api/entitylinker#predict) and
[`set_annotations`](/api/entitylinker#set_annotations) methods.

> #### Example
>
> ```python
> entity_linker = nlp.add_pipe("entity_linker")
> for doc in entity_linker.pipe(docs, batch_size=50):
> pass
> ```

| Name | Description |
| -------------- | ------------------------------------------------------------- |
| `stream`       | A stream of documents. ~~Iterable[Doc]~~ |
| _keyword-only_ | |
| `batch_size`   | The number of documents to buffer. Defaults to `128`. ~~int~~ |
| **YIELDS**     | The processed documents in order. ~~Doc~~ |

## EntityLinker.set_kb {#set_kb tag="method" new="3"}

The `kb_loader` should be a function that takes a `Vocab` instance and creates
the `KnowledgeBase`, ensuring that the strings of the knowledge base are synced
with the current vocab.

> #### Example
>
> ```python
> def create_kb(vocab):
>     kb = KnowledgeBase(vocab, entity_vector_length=128)
>     kb.add_entity(...)
>     kb.add_alias(...)
>     return kb
>
> entity_linker = nlp.add_pipe("entity_linker")
> entity_linker.set_kb(create_kb)
> ```

| Name | Description |
| ----------- | ---------------------------------------------------------------------------------------------------------------- |
| `kb_loader` | Function that creates a [`KnowledgeBase`](/api/kb) from a `Vocab` instance. ~~Callable[[Vocab], KnowledgeBase]~~ |

## EntityLinker.initialize {#initialize tag="method" new="3"}

Initialize the component for training. `get_examples` should be a function that
returns an iterable of [`Example`](/api/example) objects. The data examples are
used to **initialize the model** of the component and can either be the full
training data or a representative sample. Initialization includes validating the
network,
[inferring missing shapes](https://thinc.ai/docs/usage-models#validation) and
setting up the label scheme based on the data. This method is typically called
by [`Language.initialize`](/api/language#initialize).

Optionally, a `kb_loader` argument may be specified to change the internal
knowledge base. This argument should be a function that takes a `Vocab` instance
and creates the `KnowledgeBase`, ensuring that the strings of the knowledge base
are synced with the current vocab.

<Infobox variant="warning" title="Changed in v3.0" id="begin_training">

This method was previously called `begin_training`.

</Infobox>

> #### Example
>
> ```python
> entity_linker = nlp.add_pipe("entity_linker")
> entity_linker.initialize(lambda: [], nlp=nlp, kb_loader=my_kb)
> ```

| Name | Description |
| -------------- | ------------------------------------------------------------------------------------------------------------------------------------- |
| `get_examples` | Function that returns gold-standard annotations in the form of [`Example`](/api/example) objects. ~~Callable[[], Iterable[Example]]~~ |
| _keyword-only_ | |
| `nlp`          | The current `nlp` object. Defaults to `None`. ~~Optional[Language]~~ |
| `kb_loader`    | Function that creates a [`KnowledgeBase`](/api/kb) from a `Vocab` instance. ~~Callable[[Vocab], KnowledgeBase]~~ |

## EntityLinker.predict {#predict tag="method"}

Apply the component's model to a batch of [`Doc`](/api/doc) objects, without
modifying them. Returns the KB IDs for each entity in each doc, including `NIL`
if there is no prediction.

> #### Example
>
> ```python
> entity_linker = nlp.add_pipe("entity_linker")
> kb_ids = entity_linker.predict([doc1, doc2])
> ```

| Name | Description |
| ----------- | -------------------------------------------------------------------------- |
| `docs` | The documents to predict. ~~Iterable[Doc]~~ |
| **RETURNS** | The predicted KB identifiers for the entities in the `docs`. ~~List[str]~~ |

## EntityLinker.set_annotations {#set_annotations tag="method"}

Modify a batch of documents, using pre-computed entity IDs for a list of named
entities.

> #### Example
>
> ```python
> entity_linker = nlp.add_pipe("entity_linker")
> kb_ids = entity_linker.predict([doc1, doc2])
> entity_linker.set_annotations([doc1, doc2], kb_ids)
> ```

| Name | Description |
| -------- | --------------------------------------------------------------------------------------------------------------- |
| `docs` | The documents to modify. ~~Iterable[Doc]~~ |
| `kb_ids` | The knowledge base identifiers for the entities in the docs, predicted by `EntityLinker.predict`. ~~List[str]~~ |

## EntityLinker.update {#update tag="method"}

Learn from a batch of [`Example`](/api/example) objects, updating both the
pipe's entity linking model and context encoder. Delegates to
[`predict`](/api/entitylinker#predict).

> #### Example
>
> ```python
> entity_linker = nlp.add_pipe("entity_linker")
> optimizer = nlp.initialize()
> losses = entity_linker.update(examples, sgd=optimizer)
> ```

| Name | Description |
| -------------- | ------------------------------------------------------------------------------------------------------------------------ |
| `examples`     | A batch of [`Example`](/api/example) objects to learn from. ~~Iterable[Example]~~ |
| _keyword-only_ | |
| `drop`         | The dropout rate. ~~float~~ |
| `sgd`          | An optimizer. Will be created via [`create_optimizer`](#create_optimizer) if not set. ~~Optional[Optimizer]~~ |
| `losses`       | Optional record of the loss during training. Updated using the component name as the key. ~~Optional[Dict[str, float]]~~ |
| **RETURNS**    | The updated `losses` dictionary. ~~Dict[str, float]~~ |
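
The shape of a typical training loop around `update` can be sketched with a
stand-in component; `ToyLinker` and its fake loss are invented for
illustration, but the `losses` bookkeeping mirrors how spaCy accumulates
per-component losses under the component's name:

```python
import random

# Stand-in for a trainable pipe: update() adds this component's loss to the
# shared `losses` dict under its name, as spaCy components do.
class ToyLinker:
    name = "entity_linker"

    def update(self, examples, *, drop=0.0, sgd=None, losses=None):
        if losses is None:
            losses = {}
        losses.setdefault(self.name, 0.0)
        losses[self.name] += 1.0 * len(examples)  # fake loss for illustration
        return losses

def minibatch(items, size):
    for i in range(0, len(items), size):
        yield items[i:i + size]

linker = ToyLinker()
examples = list(range(10))  # stand-ins for Example objects
losses = {}
for epoch in range(2):
    random.shuffle(examples)
    for batch in minibatch(examples, size=4):
        losses = linker.update(batch, losses=losses)
print(losses)  # {'entity_linker': 20.0}
```

In practice you would pass real `Example` objects and an optimizer via `sgd`,
as in the example above.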

## EntityLinker.create_optimizer {#create_optimizer tag="method"}

Create an optimizer for the pipeline component.

> #### Example
>
> ```python
> entity_linker = nlp.add_pipe("entity_linker")
> optimizer = entity_linker.create_optimizer()
> ```

| Name | Description |
| ----------- | ---------------------------- |
| **RETURNS** | The optimizer. ~~Optimizer~~ |

## EntityLinker.use_params {#use_params tag="method, contextmanager"}

Modify the pipe's model, to use the given parameter values. At the end of the
context, the original parameters are restored.

> #### Example
>
> ```python
> entity_linker = nlp.add_pipe("entity_linker")
> with entity_linker.use_params(optimizer.averages):
>     entity_linker.to_disk("/best_model")
> ```

| Name | Description |
| -------- | -------------------------------------------------- |
| `params` | The parameter values to use in the model. ~~dict~~ |

## EntityLinker.to_disk {#to_disk tag="method"}

Serialize the pipe to disk.

> #### Example
>
> ```python
> entity_linker = nlp.add_pipe("entity_linker")
> entity_linker.to_disk("/path/to/entity_linker")
> ```

| Name | Description |
| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
| `path`         | A path to a directory, which will be created if it doesn't exist. Paths may be either strings or `Path`-like objects. ~~Union[str, Path]~~ |
| _keyword-only_ | |
| `exclude`      | String names of [serialization fields](#serialization-fields) to exclude. ~~Iterable[str]~~ |

## EntityLinker.from_disk {#from_disk tag="method"}

Load the pipe from disk. Modifies the object in place and returns it.

> #### Example
>
> ```python
> entity_linker = nlp.add_pipe("entity_linker")
> entity_linker.from_disk("/path/to/entity_linker")
> ```

| Name | Description |
| -------------- | ----------------------------------------------------------------------------------------------- |
| `path` | A path to a directory. Paths may be either strings or `Path` -like objects. ~~Union[str, Path]~~ |
| _keyword-only_ | |
| `exclude` | String names of [serialization fields ](#serialization-fields ) to exclude. ~~Iterable[str]~~ |
| **RETURNS** | The modified `EntityLinker` object. ~~EntityLinker~~ |
2019-09-12 12:38:34 +03:00
2021-05-20 11:11:30 +03:00
## EntityLinker.to_bytes {#to_bytes tag="method"}

> #### Example
>
> ```python
> entity_linker = nlp.add_pipe("entity_linker")
> entity_linker_bytes = entity_linker.to_bytes()
> ```

Serialize the pipe to a bytestring, including the `KnowledgeBase`.

| Name | Description |
| -------------- | ------------------------------------------------------------------------------------------- |
| _keyword-only_ | |
| `exclude`      | String names of [serialization fields](#serialization-fields) to exclude. ~~Iterable[str]~~ |
| **RETURNS**    | The serialized form of the `EntityLinker` object. ~~bytes~~ |

## EntityLinker.from_bytes {#from_bytes tag="method"}

Load the pipe from a bytestring. Modifies the object in place and returns it.

> #### Example
>
> ```python
> entity_linker_bytes = entity_linker.to_bytes()
> entity_linker = nlp.add_pipe("entity_linker")
> entity_linker.from_bytes(entity_linker_bytes)
> ```

| Name | Description |
| -------------- | ------------------------------------------------------------------------------------------- |
| `bytes_data`   | The data to load from. ~~bytes~~ |
| _keyword-only_ | |
| `exclude`      | String names of [serialization fields](#serialization-fields) to exclude. ~~Iterable[str]~~ |
| **RETURNS**    | The `EntityLinker` object. ~~EntityLinker~~ |

## Serialization fields {#serialization-fields}

During serialization, spaCy will export several data fields used to restore
different aspects of the object. If needed, you can exclude them from
serialization by passing in the string names via the `exclude` argument.

> #### Example
>
> ```python
> data = entity_linker.to_disk("/path", exclude=["vocab"])
> ```

| Name | Description |
| ------- | -------------------------------------------------------------- |
| `vocab` | The shared [`Vocab`](/api/vocab). |
| `cfg`   | The config file. You usually don't want to exclude this. |
| `model` | The binary model data. You usually don't want to exclude this. |
| `kb`    | The knowledge base. You usually don't want to exclude this. |