Data formats

Details on spaCy's input and output data formats
This section documents input and output formats of data used by spaCy, including the training config, training data and lexical vocabulary data. For an overview of label schemes used by the models, see the models directory. Each trained pipeline documents the label schemes used in its components, depending on the data it was trained on.
Training config
Config files define the training process and pipeline and can be passed to `spacy train`. They use Thinc's configuration system under the hood. For details on how to use training configs, see the usage documentation. To get started with the recommended settings for your use case, check out the quickstart widget or run the `init config` command.
What does the @ mean?
The `@` syntax lets you refer to function names registered in the function registry. For example, `@architectures = "spacy.HashEmbedCNN.v1"` refers to a registered function of the name `spacy.HashEmbedCNN.v1`, and all other values defined in its block will be passed into that function as arguments. Those arguments depend on the registered function. See the usage guide on registered functions for details.
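For example, a model block referring to that architecture might look like the following sketch. The argument names match the registered function's signature; the values shown here are illustrative rather than spaCy's documented defaults:

[components.tok2vec.model]
@architectures = "spacy.HashEmbedCNN.v1"
pretrained_vectors = null
width = 96
depth = 4
embed_size = 2000
window_size = 1
maxout_pieces = 3
subword_features = true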
%%GITHUB_SPACY/spacy/default_config.cfg
Under the hood, spaCy's configs are powered by our machine learning library Thinc's config system, which uses pydantic for data validation based on type hints. See `spacy/schemas.py` for the schemas used to validate the default config. Arguments of registered functions are validated against their type annotations, if available. To debug your config and check that it's valid, you can run the `spacy debug config` command.
nlp
Example
[nlp]
lang = "en"
pipeline = ["tagger", "parser", "ner"]
before_creation = null
after_creation = null
after_pipeline_creation = null

[nlp.tokenizer]
@tokenizers = "spacy.Tokenizer.v1"
Defines the `nlp` object, its tokenizer and processing pipeline component names.
Name | Description |
---|---|
`lang` | Pipeline language ISO code. Defaults to `null`. |
`pipeline` | Names of pipeline components in order. Should correspond to sections in the `[components]` block, e.g. `[components.ner]`. See docs on defining components. Defaults to `[]`. |
`disabled` | Names of pipeline components that are loaded but disabled by default and not run as part of the pipeline. Should correspond to components listed in `pipeline`. After a pipeline is loaded, disabled components can be enabled using `Language.enable_pipe`. |
`before_creation` | Optional callback to modify `Language` subclass before it's initialized. Defaults to `null`. |
`after_creation` | Optional callback to modify `nlp` object right after it's initialized. Defaults to `null`. |
`after_pipeline_creation` | Optional callback to modify `nlp` object after the pipeline components have been added. Defaults to `null`. |
`tokenizer` | The tokenizer to use. Defaults to `Tokenizer`. |
components
Example
[components.textcat]
factory = "textcat"
labels = ["POSITIVE", "NEGATIVE"]

[components.textcat.model]
@architectures = "spacy.TextCatBOW.v1"
exclusive_classes = false
ngram_size = 1
no_output_layer = false
This section includes definitions of the pipeline components and their models, if available. Components in this section can be referenced in the `pipeline` of the `[nlp]` block. Component blocks need to specify either a `factory` (named function to use to create the component) or a `source` (name or path of a trained pipeline to copy components from). See the docs on defining pipeline components for details.
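For instance, one component block can create a new component from a factory while another copies a trained component from an existing pipeline. This is a minimal sketch; the package name is a placeholder:

[components.textcat]
factory = "textcat"

[components.ner]
source = "en_core_web_sm"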
paths, system
These sections define variables that can be referenced across the other sections of the config. For example, `${paths.train}` uses the value of `train` defined in the `[paths]` block. If your config includes custom registered functions that need paths, you can define them here. All config values can also be overwritten on the CLI when you run `spacy train`, which is especially relevant for data paths that you don't want to hard-code in your config file.
$ python -m spacy train config.cfg --paths.train ./corpus/train.spacy
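A minimal `[paths]` and `[system]` block might look like this sketch; the values are placeholders:

[paths]
train = "corpus/train.spacy"
dev = "corpus/dev.spacy"

[system]
seed = 0
gpu_allocator = null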
corpora
Example
[corpora]

[corpora.train]
@readers = "spacy.Corpus.v1"
path = ${paths:train}

[corpora.dev]
@readers = "spacy.Corpus.v1"
path = ${paths:dev}

[corpora.pretrain]
@readers = "spacy.JsonlReader.v1"
path = ${paths.raw}

[corpora.my_custom_data]
@readers = "my_custom_reader.v1"
This section defines a dictionary mapping of string keys to functions. Each function takes an `nlp` object and yields `Example` objects. By default, the two keys `train` and `dev` are specified and each refers to a `Corpus`. When pretraining, an additional `pretrain` section is added that defaults to a `JsonlReader`. You can also register custom functions that return a callable.
Name | Description |
---|---|
`train` | Training data corpus, typically used in the `[training]` block. |
`dev` | Development data corpus, typically used in the `[training]` block. |
`pretrain` | Raw text for pretraining, typically used in the `[pretraining]` block (if available). |
... | Any custom or alternative corpora. |
Alternatively, the `[corpora]` block can refer to one function that returns a dictionary keyed by the corpus names. This can be useful if you want to load a single corpus once and then divide it up into `train` and `dev` partitions.
Example
[corpora]
@readers = "my_custom_reader.v1"
train_path = ${paths:train}
dev_path = ${paths:dev}
shuffle = true
Name | Description |
---|---|
`corpora` | A dictionary keyed by string names, mapped to corpus functions that receive the current `nlp` object and return an iterator of `Example` objects. |
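The following Python sketch shows what a custom reader like the hypothetical `"my_custom_reader.v1"` above could look like. The hard-coded data is illustrative only; a real reader would load and parse the files at `train_path` and `dev_path`:

from typing import Callable, Dict, Iterable
import spacy
from spacy.language import Language
from spacy.training import Example

# Sketch of a custom reader returning one corpus function per partition.
@spacy.registry.readers("my_custom_reader.v1")
def create_reader(train_path: str, dev_path: str, shuffle: bool = True) -> Dict[str, Callable]:
    # Placeholder data; a real reader would read train_path / dev_path here.
    data = {
        "train": [("I like stuff", {"tags": ["NOUN", "VERB", "NOUN"]})],
        "dev": [("I like things", {"tags": ["NOUN", "VERB", "NOUN"]})],
    }

    def make_corpus(name: str) -> Callable[[Language], Iterable[Example]]:
        def corpus(nlp: Language) -> Iterable[Example]:
            for text, annotations in data[name]:
                doc = nlp.make_doc(text)
                yield Example.from_dict(doc, annotations)
        return corpus

    return {"train": make_corpus("train"), "dev": make_corpus("dev")}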
training
This section defines settings and controls for the training and evaluation process that are used when you run `spacy train`.
Name | Description |
---|---|
`accumulate_gradient` | Whether to divide the batch up into substeps. Defaults to `1`. |
`batcher` | Callable that takes an iterator of `Doc` objects and yields batches of `Doc`s. Defaults to `batch_by_words`. |
`dev_corpus` | Dot notation of the config location defining the dev corpus. Defaults to `corpora.dev`. |
`dropout` | The dropout rate. Defaults to `0.1`. |
`eval_frequency` | How often to evaluate during training (steps). Defaults to `200`. |
`frozen_components` | Pipeline component names that are "frozen" and shouldn't be updated during training. See here for details. Defaults to `[]`. |
`gpu_allocator` | Library for cupy to route GPU memory allocation to. Can be `"pytorch"` or `"tensorflow"`. Defaults to variable `${system.gpu_allocator}`. |
`init_tok2vec` | Optional path to pretrained tok2vec weights created with `spacy pretrain`. Defaults to variable `${paths.init_tok2vec}`. |
`lookups` | Additional lexeme and vocab data from `spacy-lookups-data`. Defaults to `null`. |
`max_epochs` | Maximum number of epochs to train for. Defaults to `0`. |
`max_steps` | Maximum number of update steps to train for. Defaults to `20000`. |
`optimizer` | The optimizer. The learning rate schedule and other settings can be configured as part of the optimizer. Defaults to `Adam`. |
`patience` | How many steps to continue without improvement in evaluation score. Defaults to `1600`. |
`raw_text` | Optional path to a JSONL file with unlabelled text documents for a rehearsal step. Defaults to variable `${paths.raw}`. |
`score_weights` | Score names shown in metrics mapped to their weight towards the final weighted score. See here for details. Defaults to `{}`. |
`seed` | The random seed. Defaults to variable `${system.seed}`. |
`train_corpus` | Dot notation of the config location defining the train corpus. Defaults to `corpora.train`. |
`vectors` | Name or path of pipeline containing pretrained word vectors to use, e.g. created with `init vocab`. Defaults to `null`. |
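Put together, a `[training]` block using the defaults listed above might look like the following abridged sketch; the `optimizer`, `batcher` and `score_weights` sub-blocks are omitted:

[training]
train_corpus = "corpora.train"
dev_corpus = "corpora.dev"
seed = ${system.seed}
gpu_allocator = ${system.gpu_allocator}
dropout = 0.1
accumulate_gradient = 1
patience = 1600
max_epochs = 0
max_steps = 20000
eval_frequency = 200
frozen_components = []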
pretraining
This section is optional and defines settings and controls for language model pretraining. It's used when you run `spacy pretrain`.
Name | Description |
---|---|
`max_epochs` | Maximum number of epochs. Defaults to `1000`. |
`dropout` | The dropout rate. Defaults to `0.2`. |
`n_save_every` | Saving frequency. Defaults to `null`. |
`objective` | The pretraining objective. Defaults to `{"type": "characters", "n_characters": 4}`. |
`optimizer` | The optimizer. Defaults to `Adam`. |
`corpus` | Dot notation of the config location defining the train corpus. Defaults to `corpora.pretrain`. |
`batcher` | Batcher for the training data. |
`component` | Component to find the layer to pretrain. Defaults to `"tok2vec"`. |
`layer` | The layer to pretrain. If empty, the whole component model will be used. |
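As a rough sketch, a `[pretraining]` block built from the defaults in the table above could look like this; the exact structure of the objective block may differ depending on your spaCy version:

[pretraining]
max_epochs = 1000
dropout = 0.2
n_save_every = null
component = "tok2vec"
layer = ""
corpus = "corpora.pretrain"

[pretraining.objective]
type = "characters"
n_characters = 4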
Training data
Binary training format
Example
from spacy.tokens import DocBin
from spacy.training import Corpus

doc_bin = DocBin(docs=docs)
doc_bin.to_disk("./data.spacy")
reader = Corpus("./data.spacy")
The main data format used in spaCy v3.0 is a binary format created by serializing a `DocBin`, which represents a collection of `Doc` objects. This means that you can train spaCy pipelines using the same format it outputs: annotated `Doc` objects. The binary format is extremely efficient in storage, especially when packing multiple documents together.

Typically, the extension for these binary files is `.spacy`, and they are used as the input format for specifying a training corpus and for spaCy's CLI `train` command. The built-in `convert` command helps you convert spaCy's previous JSON format to the new binary format. It also supports conversion of the `.conllu` format used by the Universal Dependencies corpora.
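For example, a Universal Dependencies treebank could be converted like this (file paths are placeholders):

$ python -m spacy convert ./train.conllu ./corpus --converter conllu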
JSON training format
As of v3.0, the JSON input format is deprecated and is replaced by the binary format. Instead of converting `Doc` objects to JSON, you can now serialize them directly using the `DocBin` container and then use them as input data. `spacy convert` lets you convert your JSON data to the new `.spacy` format:
$ python -m spacy convert ./data.json ./output.spacy
Annotating entities
Named entities are provided in the BILUO notation. Tokens outside an entity are set to `"O"` and tokens that are part of an entity are set to the entity label, prefixed by the BILUO marker. For example, `"B-ORG"` describes the first token of a multi-token `ORG` entity and `"U-PERSON"` a single token representing a `PERSON` entity. The `offsets_to_biluo_tags` function can help you convert entity offsets to the right format.
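For instance, here is a minimal sketch of converting character offsets to BILUO tags, assuming a blank English pipeline for tokenization:

import spacy
from spacy.training import offsets_to_biluo_tags

nlp = spacy.blank("en")
doc = nlp("Laura flew to Silicon Valley.")
# Character offsets (start, end, label) are aligned to the tokens of the Doc
tags = offsets_to_biluo_tags(doc, [(0, 5, "PERSON"), (14, 28, "LOC")])
# ['U-PERSON', 'O', 'O', 'B-LOC', 'L-LOC', 'O']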
### Example structure
[{
"id": int, # ID of the document within the corpus
"paragraphs": [{ # list of paragraphs in the corpus
"raw": string, # raw text of the paragraph
"sentences": [{ # list of sentences in the paragraph
"tokens": [{ # list of tokens in the sentence
"id": int, # index of the token in the document
"dep": string, # dependency label
"head": int, # offset of token head relative to token index
"tag": string, # part-of-speech tag
"orth": string, # verbatim text of the token
"ner": string # BILUO label, e.g. "O" or "B-ORG"
}],
"brackets": [{ # phrase structure (NOT USED by current models)
"first": int, # index of first token
"last": int, # index of last token
"label": string # phrase label
}]
}],
"cats": [{ # new in v2.2: categories for text classifier
"label": string, # text category label
"value": float / bool # label applies (1.0/true) or not (0.0/false)
}]
}]
}]
Here's an example of dependencies, part-of-speech tags and named entities, taken from the English Wall Street Journal portion of the Penn Treebank:
https://github.com/explosion/spaCy/blob/v2.3.x/examples/training/training-data.json
Annotation format for creating training examples
An `Example` object holds the information for one training instance. It stores two `Doc` objects: one for holding the gold-standard reference data, and one for holding the predictions of the pipeline. Examples can be created using the `Example.from_dict` method with a reference `Doc` and a dictionary of gold-standard annotations.
Example
example = Example.from_dict(doc, gold_dict)
`Example` objects are used as part of the internal training API and they're expected when you call `nlp.update`. However, for most use cases, you shouldn't have to write your own training scripts. It's recommended to train your pipelines via the `spacy train` command with a config file to keep track of your settings and hyperparameters and your own registered functions to customize the setup.
Example
{ "text": str, "words": List[str], "lemmas": List[str], "spaces": List[bool], "tags": List[str], "pos": List[str], "morphs": List[str], "sent_starts": List[bool], "deps": List[string], "heads": List[int], "entities": List[str], "entities": List[(int, int, str)], "cats": Dict[str, float], "links": Dict[(int, int), dict], }
Name | Description |
---|---|
`text` | Raw text. |
`words` | List of gold-standard tokens. |
`lemmas` | List of lemmas. |
`spaces` | List of boolean values indicating whether the corresponding token is followed by a space or not. |
`tags` | List of fine-grained POS tags. |
`pos` | List of coarse-grained POS tags. |
`morphs` | List of morphological features. |
`sent_starts` | List of boolean values indicating whether each token is the first of a sentence or not. |
`deps` | List of string values indicating the dependency relation of a token to its head. |
`heads` | List of integer values indicating the dependency head of each token, referring to the absolute index of each token in the text. |
`entities` | Option 1: List of BILUO tags per token of the format `"{action}-{label}"`, or `None` for unannotated tokens. |
`entities` | Option 2: List of `"(start, end, label)"` tuples defining all entities in the text. |
`cats` | Dictionary of `label`/`value` pairs indicating how relevant a certain text category is for the text. |
`links` | Dictionary of `offset`/`dict` pairs defining named entity links. The character offsets are linked to a dictionary of relevant knowledge base IDs. |
- Multiple formats are possible for the "entities" entry, but you have to pick one.
- Any values for sentence starts will be ignored if there are annotations for dependency relations.
- If the dictionary contains values for `"text"` and `"words"`, but not `"spaces"`, the latter are inferred automatically. If `"words"` is not provided either, the values are inferred from the `Doc` argument.
### Examples
# Training data for a part-of-speech tagger
doc = Doc(vocab, words=["I", "like", "stuff"])
gold_dict = {"tags": ["NOUN", "VERB", "NOUN"]}
example = Example.from_dict(doc, gold_dict)
# Training data for an entity recognizer (option 1)
doc = nlp("Laura flew to Silicon Valley.")
gold_dict = {"entities": ["U-PERS", "O", "O", "B-LOC", "L-LOC"]}
example = Example.from_dict(doc, gold_dict)
# Training data for an entity recognizer (option 2)
doc = nlp("Laura flew to Silicon Valley.")
gold_dict = {"entities": [(0, 5, "PERSON"), (14, 28, "LOC")]}
example = Example.from_dict(doc, gold_dict)
# Training data for text categorization
doc = nlp("I'm pretty happy about that!")
gold_dict = {"cats": {"POSITIVE": 1.0, "NEGATIVE": 0.0}}
example = Example.from_dict(doc, gold_dict)
# Training data for an Entity Linking component
doc = nlp("Russ Cochran his reprints include EC Comics.")
gold_dict = {"links": {(0, 12): {"Q7381115": 1.0, "Q2146908": 0.0}}}
example = Example.from_dict(doc, gold_dict)
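If you do use the internal training API directly, a minimal sketch of updating a pipeline with such examples might look like this; the component, labels and data are illustrative:

import spacy
from spacy.training import Example

nlp = spacy.blank("en")
nlp.add_pipe("textcat")

train_data = [
    ("I'm pretty happy about that!", {"cats": {"POSITIVE": 1.0, "NEGATIVE": 0.0}}),
    ("This is really annoying.", {"cats": {"POSITIVE": 0.0, "NEGATIVE": 1.0}}),
]
examples = [Example.from_dict(nlp.make_doc(text), annots) for text, annots in train_data]

# Initialize the pipeline (labels are inferred from the examples), then update it
optimizer = nlp.initialize(lambda: examples)
losses = nlp.update(examples, sgd=optimizer)
print(losses)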
Lexical data for vocabulary
To populate a pipeline's vocabulary, you can use the `spacy init vocab` command and load in a newline-delimited JSON (JSONL) file containing one lexical entry per line via the `--jsonl-loc` option. The first line defines the language and vocabulary settings. All other lines are expected to be JSON objects describing an individual lexeme. The lexical attributes will then be set as attributes on spaCy's `Lexeme` object. The `vocab` command outputs a ready-to-use spaCy pipeline with a `Vocab` containing the lexical data.
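For example, assuming an English pipeline and placeholder paths, the invocation might look like this:

$ python -m spacy init vocab en ./output --jsonl-loc ./vocab.jsonl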
### First line
{"lang": "en", "settings": {"oov_prob": -20.502029418945312}}
### Entry structure
{
"orth": string, # the word text
"id": int, # can correspond to row in vectors table
"lower": string,
"norm": string,
"shape": string
"prefix": string,
"suffix": string,
"length": int,
"cluster": string,
"prob": float,
"is_alpha": bool,
"is_ascii": bool,
"is_digit": bool,
"is_lower": bool,
"is_punct": bool,
"is_space": bool,
"is_title": bool,
"is_upper": bool,
"like_url": bool,
"like_num": bool,
"like_email": bool,
"is_stop": bool,
"is_oov": bool,
"is_quote": bool,
"is_left_punct": bool,
"is_right_punct": bool
}
Here's an example of the 20 most frequent lexemes in the English training data:
%%GITHUB_SPACY/extra/example_data/vocab-data.jsonl
Pipeline meta
The pipeline meta is available as the file `meta.json` and exported automatically when you save an `nlp` object to disk. Its contents are available as `nlp.meta`.
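For instance, a quick way to inspect the meta at runtime, assuming the `en_core_web_sm` package is installed:

import spacy

nlp = spacy.load("en_core_web_sm")
print(nlp.meta["lang"], nlp.meta["name"], nlp.meta["version"])
print(nlp.meta["labels"].get("ner", []))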
As of spaCy v3.0, the `meta.json` isn't used to construct the language class and pipeline anymore and only contains meta information for reference and for creating a Python package with `spacy package`. How to set up the `nlp` object is now defined in the `config.cfg`, which includes detailed information about the pipeline components and their model architectures, and all other settings and hyperparameters used to train the pipeline. It's the single source of truth used for loading a pipeline.
Example
{ "name": "example_pipeline", "lang": "en", "version": "1.0.0", "spacy_version": ">=3.0.0,<3.1.0", "parent_package": "spacy", "description": "Example pipeline for spaCy", "author": "You", "email": "you@example.com", "url": "https://example.com", "license": "CC BY-SA 3.0", "sources": [{ "name": "My Corpus", "license": "MIT" }], "vectors": { "width": 0, "vectors": 0, "keys": 0, "name": null }, "pipeline": ["tok2vec", "ner", "textcat"], "labels": { "ner": ["PERSON", "ORG", "PRODUCT"], "textcat": ["POSITIVE", "NEGATIVE"] }, "accuracy": { "ents_f": 82.7300930714, "ents_p": 82.135523614, "ents_r": 83.3333333333, "textcat_score": 88.364323811 }, "speed": { "cpu": 7667.8, "gpu": null, "nwords": 10329 }, "spacy_git_version": "61dfdd9fb" }
Name | Description |
---|---|
`lang` | Pipeline language ISO code. Defaults to `"en"`. |
`name` | Pipeline name, e.g. `"core_web_sm"`. The final package name will be `{lang}_{name}`. Defaults to `"pipeline"`. |
`version` | Pipeline version. Will be used to version a Python package created with `spacy package`. Defaults to `"0.0.0"`. |
`spacy_version` | spaCy version range the package is compatible with. Defaults to the spaCy version used to create the pipeline, up to the next minor version, which is the default compatibility for the available trained pipelines. For instance, a pipeline trained with v3.0.0 will have the version range `">=3.0.0,<3.1.0"`. |
`parent_package` | Name of the spaCy package. Typically `"spacy"` or `"spacy_nightly"`. Defaults to `"spacy"`. |
`description` | Pipeline description. Also used for Python package. Defaults to `""`. |
`author` | Pipeline author name. Also used for Python package. Defaults to `""`. |
`email` | Pipeline author email. Also used for Python package. Defaults to `""`. |
`url` | Pipeline author URL. Also used for Python package. Defaults to `""`. |
`license` | Pipeline license. Also used for Python package. Defaults to `""`. |
`sources` | Data sources used to train the pipeline. Typically a list of dicts with the keys `"name"`, `"url"`, `"author"` and `"license"`. See here for examples. Defaults to `None`. |
`vectors` | Information about the word vectors included with the pipeline. Typically a dict with the keys `"width"`, `"vectors"` (number of vectors), `"keys"` and `"name"`. |
`pipeline` | Names of pipeline components, in order. Corresponds to `nlp.pipe_names`. Only exists for reference and is not used to create the components. This information is defined in the `config.cfg`. Defaults to `[]`. |
`labels` | Label schemes of the trained pipeline components, keyed by component name. Corresponds to `nlp.pipe_labels`. See here for examples. Defaults to `{}`. |
`accuracy` | Training accuracy, added automatically by `spacy train`. Dictionary of score names mapped to scores. Defaults to `{}`. |
`speed` | Inference speed, added automatically by `spacy train`. Typically a dictionary with the keys `"cpu"`, `"gpu"` and `"nwords"` (words per second). Defaults to `{}`. |
`spacy_git_version` | Git commit of `spacy` used to create pipeline. |
other | Any other custom meta information you want to add. The data is preserved in `nlp.meta`. |