Entity Linking with Wikipedia and Wikidata
Step 1: Create a Knowledge Base (KB) and training data
Run wikidata_pretrain_kb.py (an example invocation is sketched at the end of this step)
- This takes as input the locations of a Wikipedia and a Wikidata dump, and produces a KB directory + training file
  - Wikidata: get latest-all.json.bz2 from https://dumps.wikimedia.org/wikidatawiki/entities/
  - Wikipedia: get enwiki-latest-pages-articles-multistream.xml.bz2 from https://dumps.wikimedia.org/enwiki/latest/ (or for any other language)
 
- You can set the filtering parameters for KB construction:
  - max_per_alias (-a): maximum number of candidate entities in the KB per alias/synonym
  - min_freq (-f): minimum number of times an entity should occur in the corpus to be included in the KB
  - min_pair (-c): minimum number of times an entity+alias combination should occur in the corpus to be included in the KB
 
- Further parameters to set:
  - descriptions_from_wikipedia (-wp): whether to parse descriptions from Wikipedia (True) or Wikidata (False)
  - entity_vector_length (-v): length of the pre-trained entity description vectors
  - lang (-la): language for which to fetch Wikidata information (as the dump contains all languages)
 
Quick testing and rerunning:
- When trying out the pipeline for a quick test, set limit_prior (-lp), limit_train (-lt) and/or limit_wd (-lw) to read only parts of the dumps instead of everything.
  - e.g. set -lt 20000 -lp 2000 -lw 3000 -f 1
- If you only want to (re)run certain parts of the pipeline, just remove the corresponding files and they will be recalculated or reparsed.
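
For reference, a quick-test run of Step 1 could look like the command below. This is only a sketch: the positional arguments (Wikidata dump, Wikipedia dump, output directory, and a base model providing pretrained vectors) and their order are assumptions, and all paths are placeholders - check the script's own argument help for the exact signature. The option flags are the ones documented above.

    python wikidata_pretrain_kb.py ./latest-all.json.bz2 ./enwiki-latest-pages-articles-multistream.xml.bz2 ./kb_output en_core_web_lg -lt 20000 -lp 2000 -lw 3000 -f 1 -la en
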
Step 2: Train an Entity Linking model
Run wikidata_train_entity_linker.py (an example invocation is sketched after the parameter list below)
- This takes the KB directory produced by Step 1, and trains an Entity Linking model
- Specify the output directory (-o) in which the final, trained model will be saved
- You can set the learning parameters for the EL training:
  - epochs (-e): number of training iterations
  - dropout (-p): dropout rate
  - lr (-n): learning rate
  - l2 (-r): L2 regularization
 
- Specify the number of training and dev testing articles with train_articles (-t) and dev_articles (-d) respectively
  - If not specified, the full dataset will be processed - this may take a LONG time!
 
- Further parameters to set:
  - labels_discard (-l): NER label types to discard during training
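
For reference, a training run could look like the command below. Again, this is only a sketch: the KB directory from Step 1 is assumed to be the first positional argument, and the paths and hyperparameter values are placeholders - check the script's own argument help for the exact signature. The option flags are the ones documented above.

    python wikidata_train_entity_linker.py ./kb_output -o ./el_model -e 10 -p 0.5 -t 5000 -d 500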