Fix typos and wording [ci skip]

Ines Montani 2020-09-03 16:37:45 +02:00
parent b5a0657fd6
commit 25a595dc10
2 changed files with 5 additions and 3 deletions

@@ -828,8 +828,10 @@ subclass of the built-in `dict`. It supports the additional methods `to_disk`

 ## Language.to_disk {#to_disk tag="method" new="2"}

-Save the current state to a directory. If a trained pipeline is loaded, this
-will **include all model data**.
+Save the current state to a directory. Under the hood, this method delegates to
+the `to_disk` methods of the individual pipeline components, if available. This
+means that if a trained pipeline is loaded, all components and their weights
+will be saved to disk.

 > #### Example
 >

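For reference, the behavior described in the reworded paragraph can be reproduced with the public `Language.to_disk` and `spacy.load` API. A minimal sketch, assuming the `en_core_web_sm` package is installed and using an arbitrary output path:

```python
import spacy

# Load a trained pipeline: a Language object with components and their weights.
nlp = spacy.load("en_core_web_sm")  # assumes this trained package is installed

# to_disk delegates to each component's own to_disk method where available,
# so the saved directory contains the components along with their trained weights.
nlp.to_disk("/tmp/my_pipeline")  # arbitrary example path

# The saved directory can be loaded back as a complete pipeline.
nlp_reloaded = spacy.load("/tmp/my_pipeline")
print(nlp_reloaded.pipe_names)
```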

@@ -1222,7 +1222,7 @@ print(doc.text, [token.text for token in doc])

 Keep in mind that your models' results may be less accurate if the tokenization
 during training differs from the tokenization at runtime. So if you modify a
-trained pipeline' tokenization afterwards, it may produce very different
+trained pipeline's tokenization afterwards, it may produce very different
 predictions. You should therefore train your pipeline with the **same
 tokenizer** it will be using at runtime. See the docs on
 [training with custom tokenization](#custom-tokenizer-training) for details.
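
To illustrate the warning in the corrected sentence, here is a minimal sketch using spaCy's `Tokenizer.add_special_case`: changing a trained pipeline's tokenization at runtime means the downstream components see tokens they were never trained on, so their predictions may shift. It assumes `en_core_web_sm` is installed; the example string is arbitrary.

```python
import spacy
from spacy.symbols import ORTH

nlp = spacy.load("en_core_web_sm")  # assumes this trained package is installed

doc = nlp("gimme that")
print([token.text for token in doc])  # default tokenization, e.g. ['gimme', 'that']

# Modify the trained pipeline's tokenization after the fact: add a special-case
# rule that splits "gimme" into two tokens. The statistical components (tagger,
# parser, ner) were trained on the original tokenization, so their predictions
# for the resulting tokens may differ from what they produced before.
nlp.tokenizer.add_special_case("gimme", [{ORTH: "gim"}, {ORTH: "me"}])

doc = nlp("gimme that")
print([token.text for token in doc])  # ['gim', 'me', 'that']
```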