//- 💫 DOCS > API > ANNOTATION > TRAINING
+h(3, "json-input") JSON input format for training
p
    |  spaCy takes training data in JSON format. The built-in
    |  #[+api("cli#convert") #[code convert]] command helps you convert the
    |  #[code .conllu] format used by the
    |  #[+a("https://github.com/UniversalDependencies") Universal Dependencies corpora]
    |  to spaCy's training format.
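
p
    |  For example, a #[code .conllu] file can be converted from the command
    |  line roughly like this (the file paths are placeholders):

+code("Convert example").
    python -m spacy convert /path/to/train.conllu /output/directory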
+aside("Annotating entities")
    |  Named entities are provided in the #[+a("/api/annotation#biluo") BILUO]
    |  notation. Tokens outside an entity are set to #[code "O"] and tokens
    |  that are part of an entity are set to the entity label, prefixed by the
    |  BILUO marker. For example, #[code "B-ORG"] describes the first token of
    |  a multi-token #[code ORG] entity and #[code "U-PERSON"] a single
    |  token representing a #[code PERSON] entity. The
    |  #[+api("goldparse#biluo_tags_from_offsets") #[code biluo_tags_from_offsets]]
    |  function can help you convert entity offsets to the right format.
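
p
    |  As a small illustration of the #[code biluo_tags_from_offsets] helper,
    |  the sketch below tags a single entity in a short text. The example
    |  sentence and character offsets are made up for illustration.

+code("Converting entity offsets (sketch)").
    from spacy.lang.en import English
    from spacy.gold import biluo_tags_from_offsets

    nlp = English()  # a tokenizer-only pipeline is enough for this example
    doc = nlp("I like London.")
    # (start_char, end_char, label) for "London"
    entities = [(7, 13, "GPE")]
    tags = biluo_tags_from_offsets(doc, entities)
    print(tags)  # ['O', 'O', 'U-GPE', 'O']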
+code("Example structure").
    [{
        "id": int,                      # ID of the document within the corpus
        "paragraphs": [{                # list of paragraphs in the corpus
            "raw": string,              # raw text of the paragraph
            "sentences": [{             # list of sentences in the paragraph
                "tokens": [{            # list of tokens in the sentence
                    "id": int,          # index of the token in the document
                    "dep": string,      # dependency label
                    "head": int,        # offset of token head relative to token index
                    "tag": string,      # part-of-speech tag
                    "orth": string,     # verbatim text of the token
                    "ner": string       # BILUO label, e.g. "O" or "B-ORG"
                }],
                "brackets": [{          # phrase structure (NOT USED by current models)
                    "first": int,       # index of first token
                    "last": int,        # index of last token
                    "label": string     # phrase label
                }]
            }]
        }]
    }]
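
p
    |  To make the structure more concrete, the following is a small
    |  hand-constructed document in this format. The sentence and its
    |  annotations are purely illustrative:

+code("Minimal example").
    [{
        "id": 0,
        "paragraphs": [{
            "raw": "I like London.",
            "sentences": [{
                "tokens": [
                    {"id": 0, "orth": "I", "tag": "PRP", "head": 1, "dep": "nsubj", "ner": "O"},
                    {"id": 1, "orth": "like", "tag": "VBP", "head": 0, "dep": "ROOT", "ner": "O"},
                    {"id": 2, "orth": "London", "tag": "NNP", "head": -1, "dep": "dobj", "ner": "U-GPE"},
                    {"id": 3, "orth": ".", "tag": ".", "head": -2, "dep": "punct", "ner": "O"}
                ],
                "brackets": []
            }]
        }]
    }]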
p
    |  Here's an example of dependencies, part-of-speech tags and named
    |  entities, taken from the English Wall Street Journal portion of the Penn
    |  Treebank:
+github("spacy", "examples/training/training-data.json", false, false, "json")
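
p
    |  Data in this format is typically passed to the
    |  #[+api("cli#train") #[code spacy train]] command. As a rough sketch,
    |  with placeholder paths for the output directory and the converted
    |  training and development sets:

+code("Training example").
    python -m spacy train en /output /path/to/train.json /path/to/dev.json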
+h(3, "vocab-jsonl") Lexical data for vocabulary
+tag-new(2)
p
    |  To populate a model's vocabulary, you can use the
    |  #[+api("cli#vocab") #[code spacy vocab]] command and load in a
    |  #[+a("https://jsonlines.readthedocs.io/en/latest/") newline-delimited JSON]
    |  (JSONL) file containing one lexical entry per line. The first line
    |  defines the language and vocabulary settings. All other lines are
    |  expected to be JSON objects describing an individual lexeme. The lexical
    |  attributes will then be set as attributes on spaCy's
    |  #[+api("lexeme#attributes") #[code Lexeme]] object. The #[code vocab]
    |  command outputs a ready-to-use spaCy model with a #[code Vocab]
    |  containing the lexical data.
+code("First line").
    {"lang": "en", "settings": {"oov_prob": -20.502029418945312}}
+code("Entry structure").
    {
        "orth": string,
        "id": int,
        "lower": string,
        "norm": string,
        "shape": string,
        "prefix": string,
        "suffix": string,
        "length": int,
        "cluster": string,
        "prob": float,
        "is_alpha": bool,
        "is_ascii": bool,
        "is_digit": bool,
        "is_lower": bool,
        "is_punct": bool,
        "is_space": bool,
        "is_title": bool,
        "is_upper": bool,
        "like_url": bool,
        "like_num": bool,
        "like_email": bool,
        "is_stop": bool,
        "is_oov": bool,
        "is_quote": bool,
        "is_left_punct": bool,
        "is_right_punct": bool
    }
p
    |  Here's an example of the 20 most frequent lexemes in the English
    |  training data:
+github("spacy", "examples/training/vocab-data.jsonl", false, false, "json")
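
p
    |  As a rough sketch of how such a file could be put together, the snippet
    |  below writes the settings line followed by a few entries using Python's
    |  #[code json] module. It only fills in a handful of the attributes listed
    |  above, and the word list, path and probability values are placeholders.

+code("Creating lexical data (sketch)").
    import json

    words = ["the", "of", "and"]  # placeholder word list

    with open("/path/to/lexemes.jsonl", "w", encoding="utf8") as file_:
        # first line: language and vocabulary settings
        file_.write(json.dumps({"lang": "en", "settings": {"oov_prob": -20.0}}) + "\n")
        for i, word in enumerate(words):
            entry = {
                "orth": word,               # verbatim text of the lexeme
                "id": i,                    # illustrative numeric ID
                "lower": word.lower(),
                "length": len(word),
                "prob": -10.0,              # placeholder log probability
                "is_alpha": word.isalpha(),
                "is_digit": word.isdigit(),
                "is_lower": word.islower(),
                "is_stop": False
            }
            file_.write(json.dumps(entry) + "\n")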