## Entity Linking with Wikipedia and Wikidata

### Step 1: Create a Knowledge Base (KB) and training data

Run `wikidata_pretrain_kb.py` (see the example invocation sketched after the parameter lists below)
- This takes as input the locations of a Wikipedia and a Wikidata dump, and produces a KB directory + training file
  - Wikidata: get `latest-all.json.bz2` from https://dumps.wikimedia.org/wikidatawiki/entities/
  - Wikipedia: get `enwiki-latest-pages-articles-multistream.xml.bz2` from https://dumps.wikimedia.org/enwiki/latest/ (or for any other language)
- You can set the filtering parameters for KB construction:
  - `max_per_alias` (`-a`): maximum number of candidate entities in the KB per alias/synonym
  - `min_freq` (`-f`): minimum number of times an entity must occur in the corpus to be included in the KB
  - `min_pair` (`-c`): minimum number of times an entity + alias combination must occur in the corpus to be included in the KB
- Further parameters to set:
  - `descriptions_from_wikipedia` (`-wp`): whether to parse descriptions from Wikipedia (`True`) or Wikidata (`False`)
  - `entity_vector_length` (`-v`): length of the pre-trained entity description vectors
  - `lang` (`-la`): language for which to fetch Wikidata information (as the dump contains all languages)
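
For orientation, here is a minimal sketch of Step 1 as a shell session. Only the option flags documented above come from this README; the positional arguments (dump locations, output directory, and a spaCy model with pretrained vectors such as `en_core_web_lg`) and the example option values are assumptions, so check the script's own argument help before running.

```bash
# Sketch only: the positional arguments and their order are assumptions;
# the option flags (-a, -f, -c, -v, -la) are the ones documented above.

# Download the dumps referenced above (these are very large files).
wget https://dumps.wikimedia.org/wikidatawiki/entities/latest-all.json.bz2
wget https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles-multistream.xml.bz2

# Build the KB directory + training file. Assumed positional arguments:
# <Wikidata dump> <Wikipedia dump> <output dir> <spaCy model with pretrained vectors>
python wikidata_pretrain_kb.py \
    ./latest-all.json.bz2 \
    ./enwiki-latest-pages-articles-multistream.xml.bz2 \
    ./kb_output/ \
    en_core_web_lg \
    -a 10 -f 20 -c 5 -v 64 -la en
```
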
Quick testing and rerunning:
- When trying out the pipeline for a quick test, set `limit_prior` (`-lp`), `limit_train` (`-lt`) and/or `limit_wd` (`-lw`) to read only parts of the dumps instead of everything.
  - e.g. set `-lt 20000 -lp 2000 -lw 3000 -f 1` (a full command is sketched after this list)
- If you only want to (re)run certain parts of the pipeline, just remove the corresponding files and they will be recalculated or reparsed.
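
The quick-test settings above would then look like this in full; the same caveats about positional arguments apply as in the Step 1 sketch, and the limit values are the ones suggested above.

```bash
# Quick test: read only parts of each dump and lower the entity frequency cut-off to 1.
python wikidata_pretrain_kb.py \
    ./latest-all.json.bz2 \
    ./enwiki-latest-pages-articles-multistream.xml.bz2 \
    ./kb_output/ \
    en_core_web_lg \
    -lt 20000 -lp 2000 -lw 3000 -f 1
```
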

### Step 2: Train an Entity Linking model

Run `wikidata_train_entity_linker.py` (see the example invocation sketched after the parameter lists below)
- This takes the KB directory produced by Step 1, and trains an Entity Linking model
- Specify the output directory (`-o`) in which the final, trained model will be saved
- You can set the learning parameters for the EL training:
  - `epochs` (`-e`): number of training iterations
  - `dropout` (`-p`): dropout rate
  - `lr` (`-n`): learning rate
  - `l2` (`-r`): L2 regularization
- Specify the number of training and dev testing articles with `train_articles` (`-t`) and `dev_articles` (`-d`) respectively
  - If not specified, the full dataset will be processed - this may take a LONG time!
- Further parameters to set:
  - `labels_discard` (`-l`): NER label types to discard during training
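
A comparable sketch for Step 2, assuming the KB directory from Step 1 is passed as the single positional argument; the flag letters come from the lists above, but the example values and the comma-separated format for `-l` are assumptions.

```bash
# Sketch only: the positional argument, example values and the -l value format are assumptions.
python wikidata_train_entity_linker.py \
    ./kb_output/ \
    -o ./el_model_output/ \
    -e 10 -p 0.5 -n 0.005 -r 1e-6 \
    -t 1000 -d 100 \
    -l DATE,CARDINAL
```
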