//- 💫 DOCS > USAGE > TRAINING > TEXT CLASSIFICATION

+h(3, "example-textcat") Adding a text classifier to a spaCy model
    +tag-new(2)

p
    | This example shows how to train a convolutional neural
    | network text classifier on IMDB movie reviews, using spaCy's new
    | #[+api("textcategorizer") #[code TextCategorizer]] component. The
    | dataset will be loaded automatically via Thinc's built-in dataset
    | loader. Predictions are available via
    | #[+api("doc#attributes") #[code Doc.cats]].

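p
    | For example, once the component has been trained, each processed
    | #[code Doc] exposes the predicted scores as a dictionary keyed by
    | label. A rough sketch, assuming a pipeline trained with a single
    | #[code POSITIVE] label:

+code.
    doc = nlp(u"This movie sucked")
    # scores between 0 and 1 for each label the classifier was trained on
    print(doc.cats)
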
+github("spacy", "examples/training/train_textcat.py", 500)

+h(4) Step by step guide

+list("numbers")
    +item
        | #[strong Load the model] you want to start with, or create an
        | #[strong empty model] using
        | #[+api("spacy#blank") #[code spacy.blank]] with the ID of your
        | language. If you're using an existing model, make sure to disable
        | all other pipeline components during training using
        | #[+api("language#disable_pipes") #[code nlp.disable_pipes]]. This
        | way, you'll only be training the text classifier. A minimal code
        | sketch for this and each of the following steps is included below
        | the list.

    +item
        | #[strong Add the text classifier] to the pipeline, and add the
        | labels you want to train – for example, #[code POSITIVE].

    +item
        | #[strong Load and pre-process the dataset], shuffle the data and
        | split off a part of it to hold back for evaluation. This way,
        | you'll be able to see results on each training iteration.

    +item
        | #[strong Loop over] the training examples and partition them into
        | batches using spaCy's
        | #[+api("top-level#util.minibatch") #[code minibatch]] and
        | #[+api("top-level#util.compounding") #[code compounding]] helpers.

    +item
        | #[strong Update the model] by calling
        | #[+api("language#update") #[code nlp.update]], which steps
        | through the examples and makes a #[strong prediction]. It then
        | consults the annotations to see whether it was right. If it was
        | wrong, it adjusts its weights so that the correct prediction will
        | score higher next time.

    +item
        | Optionally, you can also #[strong evaluate the text classifier]
        | on each iteration by checking how it performs on the development
        | data held back from the dataset. This lets you print the
        | #[strong precision], #[strong recall] and #[strong F-score].

    +item
        | #[strong Save] the trained model using
        | #[+api("language#to_disk") #[code nlp.to_disk]].

    +item
        | #[strong Test] the model to make sure the text classifier works
        | as expected.
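
p
    | The sketches below are stripped-down, approximate versions of the
    | steps above. They illustrate the flow, but they're not a replacement
    | for the full #[code train_textcat.py] example embedded earlier. First,
    | load an existing model or create a blank one, and collect the names of
    | the other pipeline components so they can be disabled during training:

+code.
    import spacy

    # either start from an existing model...
    # nlp = spacy.load('en_core_web_sm')
    # ...or create a blank English model
    nlp = spacy.blank('en')

    # all pipes except the text classifier, to be disabled during training
    # so that only the textcat weights are updated
    other_pipes = [pipe for pipe in nlp.pipe_names if pipe != 'textcat']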
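
p
    | Next, create the #[code TextCategorizer] with #[code nlp.create_pipe],
    | add it to the pipeline and register the label you want to predict:

+code.
    # add the text classifier to the pipeline and add the label to train
    textcat = nlp.create_pipe('textcat')
    nlp.add_pipe(textcat, last=True)
    textcat.add_label('POSITIVE')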
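
p
    | Loading and splitting the data could look roughly like this. The
    | snippet assumes Thinc's built-in IMDB loader and converts the binary
    | labels into the dictionary format the text classifier expects; the
    | subset size and 80/20 split are illustrative:

+code.
    import random
    import thinc.extra.datasets

    # load the IMDB movie reviews via Thinc's dataset loader
    train_data, _ = thinc.extra.datasets.imdb()
    random.shuffle(train_data)
    train_data = train_data[:2000]  # small subset to keep the example fast
    texts, labels = zip(*train_data)
    # each example gets a dict of category annotations
    cats = [{'POSITIVE': bool(label)} for label in labels]
    # hold back 20% of the data for evaluation
    split = int(len(texts) * 0.8)
    train_texts, train_cats = texts[:split], cats[:split]
    dev_texts, dev_cats = texts[split:], cats[split:]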
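
p
    | The training loop batches up the examples with #[code minibatch] and
    | #[code compounding], disables the other pipes and calls
    | #[code nlp.update] on each batch. The number of iterations and the
    | dropout rate are illustrative values:

+code.
    from spacy.util import minibatch, compounding

    train_examples = list(zip(train_texts,
                              [{'cats': c} for c in train_cats]))
    n_iter = 5  # the full example uses more iterations
    with nlp.disable_pipes(*other_pipes):  # only train the text classifier
        optimizer = nlp.begin_training()
        for i in range(n_iter):
            losses = {}
            random.shuffle(train_examples)
            # batch sizes grow from 4 up to 32 over the course of training
            batches = minibatch(train_examples,
                                size=compounding(4., 32., 1.001))
            for batch in batches:
                batch_texts, annotations = zip(*batch)
                nlp.update(batch_texts, annotations, sgd=optimizer,
                           drop=0.2, losses=losses)
            print(i, losses['textcat'])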
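
p
    | To track progress, you can score the held-back development data after
    | each iteration. This simplified scorer thresholds the #[code POSITIVE]
    | score at 0.5 and computes precision, recall and F-score by hand; the
    | helper name and threshold are just for illustration:

+code.
    def evaluate(tokenizer, textcat, texts, cats):
        docs = (tokenizer(text) for text in texts)
        tp = fp = fn = 1e-8  # small constant to avoid division by zero
        for doc, gold in zip(textcat.pipe(docs), cats):
            predicted = doc.cats['POSITIVE'] >= 0.5
            if predicted and gold['POSITIVE']:
                tp += 1
            elif predicted and not gold['POSITIVE']:
                fp += 1
            elif not predicted and gold['POSITIVE']:
                fn += 1
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        f_score = 2 * precision * recall / (precision + recall)
        return {'textcat_p': precision, 'textcat_r': recall,
                'textcat_f': f_score}

    print(evaluate(nlp.tokenizer, textcat, dev_texts, dev_cats))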
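
p
    | Finally, save the trained pipeline with #[code nlp.to_disk], load it
    | back in and check that the predictions look sensible on new text. The
    | output path below is just a placeholder:

+code.
    output_dir = '/path/to/textcat_model'  # placeholder path
    nlp.to_disk(output_dir)

    # load the saved model back in and test it
    nlp2 = spacy.load(output_dir)
    doc = nlp2(u"This movie sucked")
    print(doc.cats)  # score between 0 and 1 for each trained label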