Mirror of https://github.com/explosion/spaCy.git (synced 2024-11-13 05:07:03 +03:00)

Remove section about parallel training with Ray (#12770)
The Ray integration is currently broken; having these docs around suggests that this functionality is currently available.

Parent: fb0da3e097
Commit: 57a230c6e4

@@ -11,7 +11,6 @@ menu:
   - ['Custom Functions', 'custom-functions']
   - ['Initialization', 'initialization']
   - ['Data Utilities', 'data']
-  - ['Parallel Training', 'parallel-training']
   - ['Internal API', 'api']
 ---

@@ -1565,77 +1564,6 @@ token-based annotations like the dependency parse or entity labels, you'll need
to take care to adjust the `Example` object so its annotations match and remain
valid.

## Parallel & distributed training with Ray {id="parallel-training"}

> #### Installation
>
> ```bash
> $ pip install -U %%SPACY_PKG_NAME[ray]%%SPACY_PKG_FLAGS
> # Check that the CLI is registered
> $ python -m spacy ray --help
> ```

[Ray](https://ray.io/) is a fast and simple framework for building and running
**distributed applications**. You can use Ray to train spaCy on one or more
remote machines, potentially speeding up your training process. Parallel
training won't always be faster though – it depends on your batch size, models,
and hardware.

<Infobox variant="warning">

To use Ray with spaCy, you need the
[`spacy-ray`](https://github.com/explosion/spacy-ray) package installed.
Installing the package will automatically add the `ray` command to the spaCy
CLI.

</Infobox>

The [`spacy ray train`](/api/cli#ray-train) command follows the same API as
[`spacy train`](/api/cli#train), with a few extra options to configure the Ray
setup. You can optionally set the `--address` option to point to your Ray
cluster. If it's not set, Ray will run locally.

```bash
python -m spacy ray train config.cfg --n-workers 2
```

<Project id="integrations/ray">

Get started with parallel training using our project template. It trains a
simple model on a Universal Dependencies Treebank and lets you parallelize the
training with Ray.

</Project>

### How parallel training works {id="parallel-training-details"}

Each worker receives a shard of the **data** and builds a copy of the **model
and optimizer** from the [`config.cfg`](#config). It also has a communication
channel to **pass gradients and parameters** to the other workers. Additionally,
each worker is given ownership of a subset of the parameter arrays. Every
parameter array is owned by exactly one worker, and the workers are given a
mapping so they know which worker owns which parameter.

![Illustration of setup](/images/spacy-ray.svg)
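
To make the ownership idea concrete, here is a minimal sketch of how parameter
arrays might be assigned to workers with a stable hash, so that every worker can
derive the same mapping independently. The key names and the hashing scheme are
illustrative assumptions, not spacy-ray's actual implementation.

```python
import hashlib
from typing import Dict, List

def assign_ownership(param_keys: List[str], n_workers: int) -> Dict[str, int]:
    """Map each parameter key (e.g. "tok2vec/embed/W") to exactly one worker.
    A stable hash means every worker computes the same mapping without any
    extra coordination."""
    mapping = {}
    for key in param_keys:
        digest = hashlib.md5(key.encode("utf8")).hexdigest()
        mapping[key] = int(digest, 16) % n_workers
    return mapping

# Hypothetical parameter keys, distributed across four workers:
keys = ["tok2vec/embed/W", "tok2vec/encode/W", "ner/lower/W", "ner/upper/W"]
print(assign_ownership(keys, n_workers=4))  # each key maps to one owner ID
```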

As training proceeds, every worker will be computing gradients for **all** of
the model parameters. When they compute gradients for parameters they don't own,
they'll **send them to the worker** that does own that parameter, along with a
version identifier so that the owner can decide whether to discard the gradient.
Workers use the gradients they receive and the ones they compute locally to
update the parameters they own, and then broadcast the updated array and a new
version ID to the other workers.

This training procedure is **asynchronous** and **non-blocking**. Workers always
push their gradient increments and parameter updates; they do not have to pull
them and block on the result, so the transfers can happen in the background,
overlapped with the actual training work. The workers also do not have to stop
and wait for each other ("synchronize") at the start of each batch. This is very
useful for spaCy, because spaCy is often trained on long documents, which means
**batches can vary in size** significantly. Uneven workloads make synchronous
gradient descent inefficient, because if one batch is slow, all of the other
workers are stuck waiting for it to complete before they can continue.
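
One way to picture the non-blocking transfers is a background queue: the
training loop only enqueues outgoing gradient messages and keeps computing,
while a separate thread drains the queue and ships each message to the owning
worker. This is a generic sketch of that pattern, not spacy-ray's code; the
`send_to_owner` callable stands in for whatever transport is actually used.

```python
import queue
import threading

def start_push_thread(send_to_owner):
    """Run gradient pushes in the background so the training loop never blocks
    on network transfers. `send_to_owner` is a placeholder for the real
    transport (e.g. a remote call to the worker that owns the parameter)."""
    outbox = queue.Queue()

    def _drain():
        while True:
            msg = outbox.get()
            if msg is None:  # sentinel used to shut the thread down
                break
            key, grad, version = msg
            send_to_owner(key, grad, version)

    thread = threading.Thread(target=_drain, daemon=True)
    thread.start()
    return outbox, thread

# Inside the training loop, pushing is just a non-blocking enqueue:
#     outbox.put((key, gradient, current_version))
```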

## Internal training API {id="api"}

<Infobox variant="danger">