Update tagger and parser examples and add to docs

This commit is contained in:
ines 2017-10-26 16:27:42 +02:00
parent f1529463a8
commit b90e958975
3 changed files with 55 additions and 3 deletions


@@ -50,7 +50,7 @@ def main(lang='en', output_dir=None, n_iter=25):
lang_cls.Defaults.tag_map.update(TAG_MAP) # add tag map to defaults
nlp = lang_cls() # initialise Language class
-    # add the parser to the pipeline
+    # add the tagger to the pipeline
# nlp.create_pipe works for built-ins that are registered with spaCy
tagger = nlp.create_pipe('tagger')
nlp.add_pipe(tagger)
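The `Defaults.tag_map.update(TAG_MAP)` call above is an ordinary dict update: the custom tags are merged into the built-in tag map before the `Language` class is initialised. A minimal sketch of the merge semantics — the tag names and `pos` values below are illustrative stand-ins, not spaCy's real defaults:

```python
# Stand-in for a language's built-in tag map (illustrative entries only).
default_tag_map = {
    'NN': {'pos': 'NOUN'},
    'VB': {'pos': 'VERB'},
}

# Custom fine-grained tags, each mapped to a coarse-grained
# Universal Dependencies part-of-speech via the 'pos' key.
TAG_MAP = {
    'N': {'pos': 'NOUN'},
    'V': {'pos': 'VERB'},
    'J': {'pos': 'ADJ'},
}

# dict.update merges the custom entries, overwriting any clashing keys.
default_tag_map.update(TAG_MAP)
print(sorted(default_tag_map))  # → ['J', 'N', 'NN', 'V', 'VB']
```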


@@ -1,6 +1,6 @@
//- 💫 DOCS > USAGE > TRAINING > TAGGER & PARSER
-+h(3, "example-train-parser") Updating the parser
++h(3, "example-train-parser") Updating the Dependency Parser
p
| This example shows how to train spaCy's dependency parser, starting off
@@ -51,6 +51,49 @@ p
+item
| #[strong Test] the model to make sure the parser works as expected.
+h(3, "example-train-tagger") Updating the Part-of-speech Tagger
p
| In this example, we're training spaCy's part-of-speech tagger with a
| custom tag map. We start off with a blank #[code Language] class, update
| its defaults with our custom tags and then train the tagger. You'll need
| a set of #[strong training examples] and the respective
| #[strong custom tags], as well as a dictionary mapping those tags to the
| #[+a("http://universaldependencies.github.io/docs/u/pos/index.html") Universal Dependencies scheme].
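Before looking at the full script, it can help to see the shape of the inputs the paragraph describes. A minimal sketch — the sentences, the tags `N`/`V`/`J` and the tag map entries are invented for illustration, not taken from spaCy's repo:

```python
# Illustrative training examples: each pair holds a text and one custom
# tag per whitespace-separated token.
TRAIN_DATA = [
    ('I like green eggs', ['N', 'V', 'J', 'N']),
    ('Eat blue ham', ['V', 'J', 'N']),
]

# The tag map ties each custom tag to a coarse-grained tag from the
# Universal Dependencies scheme via the 'pos' key.
TAG_MAP = {
    'N': {'pos': 'NOUN'},
    'V': {'pos': 'VERB'},
    'J': {'pos': 'ADJ'},
}

# Sanity check: every example needs one tag per token, and every tag
# used in the data must be covered by the tag map.
for text, tags in TRAIN_DATA:
    assert len(text.split()) == len(tags)
    assert all(tag in TAG_MAP for tag in tags)
```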
+github("spacy", "examples/training/train_tagger.py")
+h(4) Step by step guide
+list("numbers")
+item
| #[strong Create] a new #[code Language] class and before initialising
| it, update the #[code tag_map] in its #[code Defaults] with your
| custom tags.
+item
| #[strong Create a new tagger] component and add it to the pipeline.
+item
| #[strong Shuffle and loop over] the examples and create a
| #[code Doc] and #[code GoldParse] object for each example. Make sure
| to pass in the #[code tags] when you create the #[code GoldParse].
+item
| For each example, #[strong update the model]
| by calling #[+api("language#update") #[code nlp.update]], which steps
| through the words of the input. At each word, it makes a
| #[strong prediction]. It then consults the annotations provided on the
| #[code GoldParse] instance, to see whether it was
| right. If it was wrong, it adjusts its weights so that the correct
| action will score higher next time.
+item
| #[strong Save] the trained model using
| #[+api("language#to_disk") #[code nlp.to_disk]].
+item
| #[strong Test] the model to make sure the tagger works as expected.
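The shuffle-and-update loop in the steps above can be sketched schematically. This is a stub, not spaCy's API: `update` below stands in for `nlp.update`, and the data and tag names are invented for illustration — see the linked `train_tagger.py` for the real script.

```python
import random

# Invented (text, per-token tags) examples standing in for real data.
TRAIN_DATA = [
    ('I like green eggs', ['N', 'V', 'J', 'N']),
    ('Eat blue ham', ['V', 'J', 'N']),
]

def update(text, tags, losses):
    """Stub standing in for nlp.update: a real tagger predicts a tag for
    each word, consults the gold tags and adjusts its weights. Here we
    only accumulate a placeholder loss to show the bookkeeping."""
    losses['tagger'] = losses.get('tagger', 0.0) + len(tags)

n_iter = 3
for i in range(n_iter):
    # Shuffle each epoch so the model never sees a fixed example order.
    random.shuffle(TRAIN_DATA)
    losses = {}
    for text, tags in TRAIN_DATA:
        update(text, tags, losses)
    print(i, losses)
```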
+h(3, "training-json") JSON format for training


@@ -80,7 +80,7 @@ include ../_includes/_mixins
+github("spacy", "examples/training/train_new_entity_type.py")
-+h(3, "parser") Training spaCy's parser
++h(3, "parser") Training spaCy's Dependency Parser
p
| This example shows how to update spaCy's dependency parser,
@@ -89,6 +89,15 @@ include ../_includes/_mixins
+github("spacy", "examples/training/train_parser.py")
+h(3, "tagger") Training spaCy's Part-of-speech Tagger
p
| In this example, we're training spaCy's part-of-speech tagger with a
| custom tag map, mapping our own tags to the
| #[+a("http://universaldependencies.github.io/docs/u/pos/index.html") Universal Dependencies scheme].
+github("spacy", "examples/training/train_tagger.py")
+h(3, "textcat") Training spaCy's text classifier
+tag-new(2)