---
title: Install spaCy
next: /usage/models
---
spaCy is compatible with 64-bit CPython 2.7 / 3.5+ and runs on Unix/Linux, macOS/OS X and Windows. The latest spaCy releases are available over pip and conda.
📖 Looking for the old docs?
To help you make the transition from v1.x to v2.0, we've uploaded the old website to legacy.spacy.io. Wherever possible, the new docs also include notes on features that have changed in v2.0, and features that were introduced in the new version.
We can't yet ship pre-compiled binary wheels for spaCy that work on Python 3.8, as we're still waiting for our CI providers and other tooling to support it. This means that in order to run spaCy on Python 3.8, you'll need a compiler installed and compile the library and its Cython dependencies locally. If this is causing problems for you, the easiest solution is to use Python 3.7 in the meantime.
Quickstart
import QuickstartInstall from 'widgets/quickstart-install.js'
Installation instructions
pip
Using pip, spaCy releases are available as source packages and binary wheels (as of v2.0.13).
$ pip install -U spacy
Download models
After installation you need to download a language model. For more info and available models, see the docs on models.
$ python -m spacy download en_core_web_sm
>>> import spacy
>>> nlp = spacy.load("en_core_web_sm")
To install additional data tables for lemmatization in spaCy v2.2+ you can run `pip install spacy[lookups]` or install `spacy-lookups-data` separately. The lookups package is needed to create blank models with lemmatization data, and to lemmatize in languages that don't yet come with pretrained models and aren't powered by third-party libraries.
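For example, here's a minimal sketch (assuming `spacy-lookups-data` is installed) of creating a blank model that can still produce lemmas from the lookup tables:

```python
import spacy

# With the lookups package installed, blank models load the lemmatization
# tables from spacy-lookups-data automatically.
nlp = spacy.blank("de")
doc = nlp("Häuser")
print(doc[0].lemma_)  # should come from the German lookup table
```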
When using pip it is generally recommended to install packages in a virtual environment to avoid modifying system state:
python -m venv .env
source .env/bin/activate
pip install spacy
conda
Thanks to our great community, we've been able to re-add conda support. You can also install spaCy via `conda-forge`:
$ conda install -c conda-forge spacy
For the feedstock including the build recipe and configuration, check out this repository. Improvements and pull requests to the recipe and setup are always appreciated.
Upgrading spaCy
Upgrading from v1 to v2
Although we've tried to keep breaking changes to a minimum, upgrading from spaCy v1.x to v2.x may still require some changes to your code base. For details see the sections on backwards incompatibilities and migrating. Also remember to download the new models, and retrain your own models.
When updating to a newer version of spaCy, it's generally recommended to start with a clean virtual environment. If you're upgrading to a new major version, make sure you have the latest compatible models installed, and that there are no old shortcut links or incompatible model packages left over in your environment, as this can often lead to unexpected results and errors. If you've trained your own models, keep in mind that your train and runtime inputs must match. This means you'll have to retrain your models with the new version.
As of v2.0, spaCy also provides a `validate` command, which lets you verify that all installed models are compatible with your spaCy version. If incompatible models are found, tips and installation instructions are printed. The command is also useful to detect out-of-sync model links resulting from links created in different virtual environments. It's recommended to run the command with `python -m` to make sure you're executing the correct version of spaCy.
pip install -U spacy
python -m spacy validate
Run spaCy with GPU
As of v2.0, spaCy comes with neural network models that are implemented in our machine learning library, Thinc. For GPU support, we've been grateful to use the work of Chainer's CuPy module, which provides a numpy-compatible interface for GPU arrays.
spaCy can be installed on GPU by specifying `spacy[cuda]`, `spacy[cuda90]`, `spacy[cuda91]`, `spacy[cuda92]` or `spacy[cuda100]`. If you know your CUDA version, using the more explicit specifier allows `cupy` to be installed via wheel, saving some compilation time. The specifiers should install two libraries: `cupy` and `thinc_gpu_ops`.
$ pip install -U spacy[cuda92]
Once you have a GPU-enabled installation, the best way to activate it is to call `spacy.prefer_gpu()` or `spacy.require_gpu()` somewhere in your script before any models have been loaded. `require_gpu` will raise an error if no GPU is available.
import spacy
spacy.prefer_gpu()
nlp = spacy.load("en_core_web_sm")
Compile from source
The other way to install spaCy is to clone its GitHub repository and build it from source. That is the common way if you want to make changes to the code base. You'll need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, virtualenv and git installed. The compiler part is the trickiest. How to do that depends on your system. See notes on Ubuntu, macOS / OS X and Windows for details.
python -m pip install -U pip # update pip
git clone https://github.com/explosion/spaCy # clone spaCy
cd spaCy # navigate into directory
python -m venv .env # create environment in .env
source .env/bin/activate # activate virtual environment
export PYTHONPATH=`pwd` # set Python path to spaCy directory
pip install -r requirements.txt # install all requirements
python setup.py build_ext --inplace # compile spaCy
Compared to a regular install via pip, the `requirements.txt` additionally installs developer dependencies such as Cython. See the quickstart widget to get the right commands for your platform and Python version.
Ubuntu
Install system-level dependencies via `apt-get`:
$ sudo apt-get install build-essential python-dev git
macOS / OS X
Install a recent version of XCode, including the so-called "Command Line Tools". macOS and OS X ship with Python and git preinstalled.
Windows
Install a version of the Visual C++ Build Tools or Visual Studio Express that matches the version that was used to compile your Python interpreter. For official distributions these are:
| Distribution | Version            |
| ------------ | ------------------ |
| Python 2.7   | Visual Studio 2008 |
| Python 3.4   | Visual Studio 2010 |
| Python 3.5+  | Visual Studio 2015 |
Run tests
spaCy comes with an extensive test suite. In order to run the tests, you'll usually want to clone the repository and build spaCy from source. This will also install the required development dependencies and test utilities defined in the `requirements.txt`.
Alternatively, you can find out where spaCy is installed and run `pytest` on that directory. Don't forget to also install the test utilities via spaCy's `requirements.txt`:
python -c "import os; import spacy; print(os.path.dirname(spacy.__file__))"
pip install -r path/to/requirements.txt
python -m pytest [spacy directory]
Calling `pytest` on the spaCy directory will run only the basic tests. The flag `--slow` is optional and enables additional tests that take longer.
# make sure you are using recent pytest version
python -m pip install -U pytest
python -m pytest [spacy directory] # basic tests
python -m pytest [spacy directory] --slow # basic and slow tests
Troubleshooting guide
This section collects some of the most common errors you may come across when installing, loading and using spaCy, as well as their solutions.
Help us improve this guide
Did you come across a problem like the ones listed here and want to share the solution? You can find the "Suggest edits" button at the bottom of this page that points you to the source. We always appreciate pull requests!
No compatible model found for [lang] (spaCy vX.X.X).
This usually means that the model you're trying to download does not exist, or isn't available for your version of spaCy. Check the compatibility table to see which models are available for your spaCy version. If you're using an old version, consider upgrading to the latest release. Note that while spaCy supports tokenization for a variety of languages, not all of them come with statistical models. To only use the tokenizer, import the language's `Language` class instead, for example `from spacy.lang.fr import French`.
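For example, a minimal sketch that tokenizes French text without any statistical model:

```python
# The language class provides tokenization rules, but no tagger/parser/NER.
from spacy.lang.fr import French

nlp = French()
doc = nlp("C'est une phrase.")
print([token.text for token in doc])
```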
OSError: symbolic link privilege not held
To create shortcut links that let you load models by name, spaCy creates a symbolic link in the `spacy/data` directory. This means your user needs permission to do this. The above error mostly occurs when doing a system-wide installation, which will create the symlinks in a system directory. Run the `download` or `link` command as administrator (on Windows, you can either right-click on your terminal or shell and select "Run as Administrator"), set the `--user` flag when installing a model, or use a virtual environment to install spaCy in a user directory, instead of doing a system-wide installation.
no such option: --no-cache-dir
The `download` command uses pip to install the models and sets the `--no-cache-dir` flag to prevent it from requiring too much memory. This setting requires pip v6.0 or newer. Run `pip install -U pip` to upgrade to the latest version of pip. To see which version you have installed, run `pip --version`.
sre_constants.error: bad character range
In v2.1, spaCy changed its implementation of regular expressions for tokenization to make it up to 2-3 times faster. But this also means that it's very important now that you run spaCy with a wide unicode build of Python. This means that the build has 1114111 unicode characters available, instead of only 65535 in a narrow unicode build. You can check this by running the following command:
python -c "import sys; print(sys.maxunicode)"
If you're running a narrow unicode build, reinstall Python and use a wide unicode build instead. You can also rebuild Python and set the `--enable-unicode=ucs4` flag.
ValueError: unknown locale: UTF-8
This error can sometimes occur on OSX and is likely related to a still unresolved Python bug. However, it's easy to fix: just add the following to your `~/.bash_profile` or `~/.zshrc` and then run `source ~/.bash_profile` or `source ~/.zshrc`. Make sure to add both lines for `LC_ALL` and `LANG`.
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
Import Error: No module named spacy
This error means that the spaCy module can't be located on your system, or in your environment. Make sure you have spaCy installed. If you're using a virtual environment, make sure it's activated and check that spaCy is installed in that environment – otherwise, you're trying to load a system installation. You can also run `which python` to find out where your Python executable is located.
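As a quick check, a sketch like the following shows which installation (if any) the active interpreter picks up:

```python
# If this raises ImportError, the active environment has no spaCy installed.
import spacy

print(spacy.__file__)  # path of the installation being used
```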
ImportError: No module named 'en_core_web_sm'
As of spaCy v1.7, all models can be installed as Python packages. This means that they'll become importable modules of your application. When creating shortcut links, spaCy will also try to import the model to load its meta data. If this fails, it's usually a sign that the package is not installed in the current environment. Run `pip list` or `pip freeze` to check which model packages you have installed, and install the correct models if necessary. If you're importing a model manually at the top of a file, make sure to use the name of the package, not the shortcut link you've created.
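For example, a minimal sketch (assuming the `en_core_web_sm` package is installed) that imports the package directly:

```python
# Import the model by its package name, not by a shortcut link name like "en".
import en_core_web_sm

nlp = en_core_web_sm.load()
```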
command not found: spacy
This error may occur when running the `spacy` command from the command line. spaCy does not currently add an entry to your `PATH` environment variable, as this can lead to unexpected results, especially when using a virtual environment. Instead, spaCy adds an auto-alias that maps `spacy` to `python -m spacy`. If this is not working as expected, run the command with `python -m` yourself – for example `python -m spacy download en_core_web_sm`. For more info on this, see the `download` command.
AttributeError: 'module' object has no attribute 'load'
While this could technically have many causes, including spaCy being broken, the most likely one is that your script's file or directory name is "shadowing" the module – e.g. your file is called `spacy.py`, or a directory you're importing from is called `spacy`. So, when using spaCy, never call anything else `spacy`.
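One way to confirm this is to check which file Python actually imports:

```python
import spacy

# If the printed path points into your project instead of site-packages,
# a local spacy.py file or spacy/ directory is shadowing the real package.
print(spacy.__file__)
```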
doc = nlp("They are")
print(doc[0].lemma_)
# -PRON-
This is in fact expected behavior and not a bug. Unlike verbs and common nouns, there's no clear base form of a personal pronoun. Should the lemma of "me" be "I", or should we normalize person as well, giving "it" — or maybe "he"? spaCy's solution is to introduce a novel symbol, `-PRON-`, which is used as the lemma for all personal pronouns. For more info on this, see the lemmatization specs.
NER model doesn't recognize other entities anymore after training
If your training data only contained new entities and you didn't mix in any examples the model previously recognized, it can cause the model to "forget" what it had previously learned. This is also referred to as the "catastrophic forgetting" problem. A solution is to pre-label some text, and mix it with the new text in your updates. You can also do this by running spaCy over some text, extracting a bunch of entities the model previously recognized correctly, and adding them to your training examples.
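As a rough sketch of that idea (the texts here are placeholders), you can collect the model's current predictions and mix them into your new training examples:

```python
import spacy

nlp = spacy.load("en_core_web_sm")

# Placeholder texts the current model already annotates correctly.
revision_texts = ["Apple is looking at buying U.K. startup for $1 billion."]

revision_data = []
for doc in nlp.pipe(revision_texts):
    entities = [(ent.start_char, ent.end_char, ent.label_) for ent in doc.ents]
    revision_data.append((doc.text, {"entities": entities}))

# Mix revision_data in with the examples for your new entities before updating.
```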
TypeError: unhashable type: 'list'
If you're training models, writing them to disk, and versioning them with git, you might encounter this error when trying to load them in a Windows environment. This happens because a default install of Git for Windows is configured to automatically convert Unix-style end-of-line characters (LF) to Windows-style ones (CRLF) during file checkout (and the reverse when committing). While that's mostly fine for text files, a trained model written to disk has some binary files that should not go through this conversion. When they do, you get the error above. You can fix it by either changing your `core.autocrlf` setting to `"false"`, or by committing a `.gitattributes` file to your repository to tell git on which files or folders it shouldn't do LF-to-CRLF conversion, with an entry like `path/to/spacy/model/** -text`. After you've done either of these, clone your repository again.
Changelog
import Changelog from 'widgets/changelog.js'