Update GPU docs for v2.0.14

This commit is contained in:
Ines Montani 2018-10-14 16:38:12 +02:00
parent 295da0f11b
commit 5a4c5b78a8
2 changed files with 47 additions and 14 deletions

View File

@@ -157,3 +157,29 @@ p
+cell returns
+cell unicode
+cell The explanation, or #[code None] if not found in the glossary.
+h(3, "spacy.prefer_gpu") spacy.prefer_gpu
+tag function
+tag-new("2.0.14")
p
| Allocate data and perform operations on #[+a("/usage/#gpu") GPU], if
| available. If data has already been allocated on CPU, it will not be
| moved. Ideally, this function should be called right after
| importing spaCy and #[em before] loading any models.
+aside-code("Example").
import spacy
spacy.prefer_gpu() # or: spacy.require_gpu()
nlp = spacy.load('en_core_web_sm')
+h(3, "spacy.require_gpu") spacy.require_gpu
+tag function
+tag-new("2.0.14")
p
| Allocate data and perform operations on #[+a("/usage/#gpu") GPU]. Will
| raise an error if no GPU is available. If data has already been allocated
| on CPU, it will not be moved. Ideally, this function should be called
| right after importing spaCy and #[em before] loading any models.
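The contrast between the two calls can be sketched as a simplified reimplementation (illustrative only, not spaCy's actual internals — the real functions also route Thinc's memory allocation and operations to the GPU):

```python
def prefer_gpu():
    """Try to activate the GPU; return True on success, False otherwise."""
    try:
        import cupy  # GPU array library; only usable on a CUDA setup
        cupy.cuda.runtime.getDeviceCount()  # probe for a usable device
        return True
    except Exception:
        return False

def require_gpu():
    """Activate the GPU, or raise an error if none is available."""
    if not prefer_gpu():
        raise ValueError("GPU is not available")
    return True
```

Either way, the call belongs before `spacy.load()`, so that the model's data is allocated on the chosen device from the start.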

View File

@@ -80,13 +80,7 @@ p
python -m spacy validate
+h(3, "gpu") Run spaCy with GPU
+tag-new("2.0.14")
p
| As of v2.0, spaCy comes with neural network models that are implemented
@@ -96,16 +90,29 @@ p
| a NumPy-compatible interface for GPU arrays.
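Because CuPy mirrors NumPy's interface, the same array code can run on either device; a minimal sketch (falls back to NumPy when CuPy is not installed):

```python
try:
    import cupy as xp  # GPU arrays with a NumPy-compatible API
except ImportError:
    import numpy as xp  # CPU fallback, same interface

a = xp.arange(6).reshape(2, 3)
b = (a * 2).sum(axis=0)  # identical call on CPU or GPU
print(b.tolist())  # [6, 10, 14]
```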
p
| spaCy can be installed on GPU by specifying #[code spacy[cuda]],
| #[code spacy[cuda90]], #[code spacy[cuda91]], #[code spacy[cuda92]] or
| #[code spacy[cuda10]]. If you know your CUDA version, using the more
| explicit specifier allows cupy to be installed via wheel, saving some
| compilation time. The specifiers should install two libraries:
| #[+a("https://cupy.chainer.org") #[code cupy]] and
| #[+a(gh("thinc_gpu_ops")) #[code thinc_gpu_ops]].
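Picking the right extra can be automated; a small helper sketch (the mapping follows the extras listed above, the function name is hypothetical):

```python
def cuda_extra(cuda_version):
    """Map a CUDA version string to the matching spaCy pip extra."""
    known = {
        "9.0": "cuda90",
        "9.1": "cuda91",
        "9.2": "cuda92",
        "10.0": "cuda10",
    }
    # Fall back to the generic extra, which builds cupy instead of
    # installing a prebuilt wheel.
    return "spacy[%s]" % known.get(cuda_version, "cuda")

print(cuda_extra("9.2"))  # spacy[cuda92]
```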
+code(false, "bash").
pip install -U spacy[cuda92]

p
| Once you have a GPU-enabled installation, the best way to activate it is
| to call #[+api("top-level#spacy.prefer_gpu") #[code spacy.prefer_gpu()]]
| or #[+api("top-level#spacy.require_gpu") #[code spacy.require_gpu()]]
| somewhere in your script before any models have been loaded.
| #[code require_gpu] will raise an error if no GPU is available.
+code.
import spacy
spacy.prefer_gpu()
nlp = spacy.load('en_core_web_sm')
+h(3, "source") Compile from source