Update GPU docs for v2.0.14
This commit is contained in:
parent 295da0f11b
commit 5a4c5b78a8
@@ -157,3 +157,29 @@ p
    +cell returns
    +cell unicode
    +cell The explanation, or #[code None] if not found in the glossary.
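
p
    | For instance, a minimal sketch (assuming the table above describes the
    | return value of #[code spacy.explain]; the labels below are just
    | illustrations):

+aside-code("Example").
    import spacy

    print(spacy.explain('NORP'))   # prints the explanation string
    print(spacy.explain('xyz'))    # prints None, since 'xyz' isn't in the glossary
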
+h(3, "spacy.prefer_gpu") spacy.prefer_gpu
|
||||
+tag function
|
||||
+tag-new("2.0.14")
|
||||
|
||||
p
|
||||
| Allocate data and perform operations on #[+a("/usage/#gpu") GPU], if
|
||||
| available. If data has already been allocated on CPU, it will not be
|
||||
| moved. Ideally, this function should be called right after
|
||||
| importing spaCy and #[em before] loading any models.
|
||||
|
||||
+aside-code("Example").
|
||||
import spacy
|
||||
|
||||
spacy.prefer_gpu() # or: spacy.require_gpu()
|
||||
nlp = spacy.load('en_core_web_sm')
|
||||
|
||||
+h(3, "spacy.require_gpu") spacy.require_gpu
|
||||
+tag function
|
||||
+tag-new("2.0.14")
|
||||
|
||||
p
|
||||
| Allocate data and perform operations on #[+a("/usage/#gpu") GPU]. Will
|
||||
| raise an error if no GPU is available. If data has already been allocated
|
||||
| on CPU, it will not be moved. Ideally, this function should be called
|
||||
| right after importing spaCy and #[em before] loading any models.
|
||||
|
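
p
    | For example, a minimal sketch (the model name is illustrative; any
    | installed model behaves the same way):

+aside-code("Example").
    import spacy

    spacy.require_gpu()   # raises an error if no GPU is available
    nlp = spacy.load('en_core_web_sm')
    doc = nlp(u'This text is processed on the GPU.')
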
@@ -80,13 +80,7 @@ p
    python -m spacy validate

+h(3, "gpu") Run spaCy with GPU
    +tag experimental

+infobox("Important note", "⚠️")
    | The instructions below refer to installation with CUDA 8.0. In order to
    | install with CUDA 9.0, set the environment variable #[code CUDA9=1]
    | before installing Thinc. You'll also need to adjust the path to the
    | CUDA runtime.
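
p
    | For example, a minimal sketch (the CUDA path below is illustrative and
    | depends on where CUDA 9.0 lives on your system):

+code(false, "bash").
    export CUDA9=1
    export CUDA_HOME=/usr/local/cuda-9.0   # adjust to your CUDA 9.0 location
    export PATH=$PATH:$CUDA_HOME/bin
    pip install thinc
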
+tag-new("2.0.14")
|
||||
|
||||
p
|
||||
| As of v2.0, spaCy's comes with neural network models that are implemented
|
||||
|
@@ -96,16 +90,29 @@ p
    | a NumPy-compatible interface for GPU arrays.
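
p
    | As an illustration of that NumPy-compatible interface (a minimal sketch,
    | assuming #[code cupy] is already installed for your CUDA version):

+code.
    import cupy as cp

    gpu_array = cp.arange(6).reshape(2, 3)   # allocated on the GPU
    doubled = gpu_array * 2                  # computed on the GPU
    print(cp.asnumpy(doubled))               # copied back to the host as a NumPy array
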
p
    | First, installation follows the normal CUDA installation procedure. Next, set
    | your environment variables so that the installation will be able to find
    | CUDA. Finally, install spaCy.
    | spaCy can be installed on GPU by specifying #[code spacy[cuda]],
    | #[code spacy[cuda90]], #[code spacy[cuda91]], #[code spacy[cuda92]] or
    | #[code spacy[cuda10]]. If you know your CUDA version, using the more
    | explicit specifier allows cupy to be installed via wheel, saving some
    | compilation time. The specifiers should install two libraries:
    | #[+a("https://cupy.chainer.org") #[code cupy]] and
    | #[+a(gh("thinc_gpu_ops")) #[code thinc_gpu_ops]].

+code(false, "bash").
    export CUDA_HOME=/usr/local/cuda-8.0 # or wherever your CUDA is
    export PATH=$PATH:$CUDA_HOME/bin
    pip install -U spacy[cuda92]

    pip install spacy
    python -c "import thinc.neural.gpu_ops" # check the GPU ops were built

p
    | Once you have a GPU-enabled installation, the best way to activate it is
    | to call #[+api("top-level#spacy.prefer_gpu") #[code spacy.prefer_gpu()]]
    | or #[+api("top-level#spacy.require_gpu") #[code spacy.require_gpu()]]
    | somewhere in your script before any models have been loaded.
    | #[code require_gpu] will raise an error if no GPU is available.

+code.
    import spacy

    spacy.prefer_gpu()
    nlp = spacy.load('en_core_web_sm')

+h(3, "source") Compile from source