Merge branch 'master' into fix/windows-quoting

Paul O'Leary McCann 2022-09-28 19:47:55 +09:00
commit 6dfa8bde2f
74 changed files with 2671 additions and 322 deletions

@@ -27,6 +27,7 @@ steps:
    - script: python -m mypy spacy
      displayName: 'Run mypy'
+     condition: ne(variables['python_version'], '3.10')
    - task: DeleteFiles@1
      inputs:

.gitignore
@@ -24,6 +24,7 @@ quickstart-training-generator.js
cythonize.json
spacy/*.html
*.cpp
+*.c
*.so
# Vim / VSCode / editors

@@ -0,0 +1,82 @@
# spaCy Satellite Packages
This is a list of all the active repos relevant to spaCy besides the main one, with short descriptions, history, and current status. Archived repos will not be covered.
## Always Included in spaCy
These packages are always pulled in when you install spaCy. Most of them are direct dependencies, but some are transitive dependencies through other packages.
- [spacy-legacy](https://github.com/explosion/spacy-legacy): When an architecture in spaCy changes enough to get a new version, the old version is frozen and moved to spacy-legacy. This allows us to keep the core library slim while also preserving backwards compatibility.
- [thinc](https://github.com/explosion/thinc): Thinc is the machine learning library that powers trainable components in spaCy. It wraps backends like NumPy, PyTorch, and TensorFlow to provide a functional interface for specifying architectures.
- [catalogue](https://github.com/explosion/catalogue): Small library for adding function registries, like those used for model architectures in spaCy.
- [confection](https://github.com/explosion/confection): This library contains the functionality for config parsing that was formerly contained directly in Thinc.
- [spacy-loggers](https://github.com/explosion/spacy-loggers): Contains loggers beyond the default logger available in spaCy's core code base. This includes loggers integrated with third-party services, which may differ in release cadence from spaCy itself.
- [wasabi](https://github.com/explosion/wasabi): A command line formatting library, used for terminal output in spaCy.
- [srsly](https://github.com/explosion/srsly): A wrapper that vendors several serialization libraries for spaCy. Includes parsers for JSON, JSONL, MessagePack, (extended) Pickle, and YAML.
- [preshed](https://github.com/explosion/preshed): A Cython library for low-level data structures like hash maps, used for memory efficient data storage.
- [cython-blis](https://github.com/explosion/cython-blis): Fast matrix multiplication using BLIS without depending on system libraries. Required by Thinc, rather than spaCy directly.
- [murmurhash](https://github.com/explosion/murmurhash): A wrapper library for a C++ murmurhash implementation, used for string IDs in spaCy and preshed.
- [cymem](https://github.com/explosion/cymem): A small library for RAII-style memory management in Cython.
## Optional Extensions for spaCy
These are repos that can be used by spaCy but aren't part of a default installation. Many of these are wrappers to integrate various kinds of third-party libraries.
- [spacy-transformers](https://github.com/explosion/spacy-transformers): A wrapper for the [HuggingFace Transformers](https://huggingface.co/docs/transformers/index) library. It handles the extensive conversion necessary to coordinate spaCy's powerful `Doc` representation, training pipeline, and the Transformer embeddings. When released, this was known as `spacy-pytorch-transformers`, but it changed to the current name when Hugging Face updated the name of their library as well.
- [spacy-huggingface-hub](https://github.com/explosion/spacy-huggingface-hub): This package has a CLI script for uploading a packaged spaCy pipeline (created with `spacy package`) to the [Hugging Face Hub](https://huggingface.co/models).
- [spacy-alignments](https://github.com/explosion/spacy-alignments): A wrapper for the tokenizations library (mentioned below) with a modified build system to simplify cross-platform wheel creation. Used in spacy-transformers for aligning spaCy and HuggingFace tokenizations.
- [spacy-experimental](https://github.com/explosion/spacy-experimental): Experimental components that are not quite ready for inclusion in the main spaCy library. Usually there are unresolved questions around their APIs, so the experimental library allows us to expose them to the community for feedback before fully integrating them.
- [spacy-lookups-data](https://github.com/explosion/spacy-lookups-data): A repository of linguistic data, such as lemmas, that takes up a lot of disk space. Originally created to reduce the size of the spaCy core library. This is mainly useful if you want the data included but aren't using a pretrained pipeline; for the affected languages, the relevant data is included in pretrained pipelines directly.
- [coreferee](https://github.com/explosion/coreferee): Coreference resolution for English, French, German and Polish, optimised for limited training data and easily extensible for further languages. Used as a spaCy pipeline component.
- [spacy-stanza](https://github.com/explosion/spacy-stanza): This is a wrapper that allows the use of Stanford's Stanza library in spaCy.
- [spacy-streamlit](https://github.com/explosion/spacy-streamlit): A wrapper for the Streamlit dashboard building library to help with integrating [displaCy](https://spacy.io/api/top-level/#displacy).
- [spacymoji](https://github.com/explosion/spacymoji): A library to add extra support for emoji to spaCy, such as including character names.
- [thinc-apple-ops](https://github.com/explosion/thinc-apple-ops): A special backend for OSX that uses Apple's native libraries for improved performance.
- [os-signpost](https://github.com/explosion/os-signpost): A Python package that allows you to use the `OSSignposter` API in OSX for performance analysis.
- [spacy-ray](https://github.com/explosion/spacy-ray): A wrapper to integrate spaCy with Ray, a distributed training framework. Currently a work in progress.
## Prodigy
[Prodigy](https://prodi.gy) is Explosion's easy to use and highly customizable tool for annotating data. Prodigy itself requires a license, but the repos below contain documentation, examples, and editor or notebook integrations.
- [prodigy-recipes](https://github.com/explosion/prodigy-recipes): Sample recipes for Prodigy, along with notebooks and other examples of usage.
- [vscode-prodigy](https://github.com/explosion/vscode-prodigy): A VS Code extension that lets you run Prodigy inside VS Code.
- [jupyterlab-prodigy](https://github.com/explosion/jupyterlab-prodigy): An extension for JupyterLab that lets you run Prodigy inside JupyterLab.
## Independent Tools or Projects
These are tools that may be related to or use spaCy, but are fully functional, independent projects in their own right as well.
- [floret](https://github.com/explosion/floret): A modification of fastText to use Bloom Embeddings. Can be used to add vectors with subword features to spaCy, and also works independently in the same manner as fastText.
- [sense2vec](https://github.com/explosion/sense2vec): A library to make embeddings of noun phrases or words coupled with their part of speech. This library uses spaCy.
- [spacy-vectors-builder](https://github.com/explosion/spacy-vectors-builder): This is a spaCy project that builds vectors using floret and a lot of input text. It handles downloading the input data as well as the actual building of vectors.
- [holmes-extractor](https://github.com/explosion/holmes-extractor): Information extraction from English and German texts based on predicate logic. Uses spaCy.
- [healthsea](https://github.com/explosion/healthsea): Healthsea is a project to extract information from comments about health supplements. Structurally, it's a self-contained, large spaCy project.
- [spacy-pkuseg](https://github.com/explosion/spacy-pkuseg): A fork of the pkuseg Chinese tokenizer. Used for Chinese support in spaCy, but also works independently.
- [ml-datasets](https://github.com/explosion/ml-datasets): This repo includes loaders for several standard machine learning datasets, like MNIST or WikiNER, and has historically been used in spaCy example code and documentation.
## Documentation and Informational Repos
These repos are used to support the spaCy docs or otherwise present information about spaCy or other Explosion projects.
- [projects](https://github.com/explosion/projects): The projects repo is used to show detailed examples of spaCy usage. Individual projects can be checked out using the spaCy command line tool, rather than checking out the projects repo directly.
- [spacy-course](https://github.com/explosion/spacy-course): Home to the interactive spaCy course for learning about how to use the library and some basic NLP principles.
- [spacy-io-binder](https://github.com/explosion/spacy-io-binder): Home to the notebooks used for interactive examples in the documentation.
## Organizational / Meta
These repos are used for organizing data around spaCy, but are not something an end user would need to install as part of using the library.
- [spacy-models](https://github.com/explosion/spacy-models): This repo contains metadata (but not training data) for all the spaCy models. This includes information about where their training data came from, version compatibility, and performance information. It also includes tests for the model packages, and the built models are hosted as releases of this repo.
- [wheelwright](https://github.com/explosion/wheelwright): A tool for automating our PyPI builds and releases.
- [ec2buildwheel](https://github.com/explosion/ec2buildwheel): A small project that allows you to build Python packages in the manner of cibuildwheel, but on any EC2 image. Used by wheelwright.
## Other
Repos that don't fit in any of the above categories.
- [blis](https://github.com/explosion/blis): A fork of the official BLIS library. The main branch is not updated, but work continues in various branches. This is used for cython-blis.
- [tokenizations](https://github.com/explosion/tokenizations): A library originally by Yohei Tamura to align strings with tolerance to some variations in features like case and diacritics, used for aligning tokens and wordpieces. Adopted and maintained by Explosion, but usually spacy-alignments is used instead.
- [conll-2012](https://github.com/explosion/conll-2012): A repo to hold some slightly cleaned up versions of the official scripts for the CoNLL 2012 shared task involving coreference resolution. Used in the coref project.
- [fastapi-explosion-extras](https://github.com/explosion/fastapi-explosion-extras): Some small tweaks to FastAPI used at Explosion.

@@ -127,3 +127,34 @@ distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
polyleven
---------
* Files: spacy/matcher/polyleven.c
MIT License
Copyright (c) 2021 Fujimoto Seiji <fujimoto@ceptord.net>
Copyright (c) 2021 Max Bachmann <kontakt@maxbachmann.de>
Copyright (c) 2022 Nick Mazuk
Copyright (c) 2022 Michael Weiss <code@mweiss.ch>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

@@ -6,7 +6,6 @@ requires = [
    "preshed>=3.0.2,<3.1.0",
    "murmurhash>=0.28.0,<1.1.0",
    "thinc>=8.1.0,<8.2.0",
-   "pathy",
    "numpy>=1.15.0",
]
build-backend = "setuptools.build_meta"

@@ -1,5 +1,5 @@
# Our libraries
-spacy-legacy>=3.0.9,<3.1.0
+spacy-legacy>=3.0.10,<3.1.0
spacy-loggers>=1.0.0,<2.0.0
cymem>=2.0.2,<2.1.0
preshed>=3.0.2,<3.1.0
@@ -33,5 +33,7 @@ hypothesis>=3.27.0,<7.0.0
mypy>=0.910,<0.970; platform_machine!='aarch64'
types-dataclasses>=0.1.3; python_version < "3.7"
types-mock>=0.1.1
types-setuptools>=57.0.0
types-requests
types-setuptools>=57.0.0
black>=22.0,<23.0

@@ -41,7 +41,7 @@ setup_requires =
    thinc>=8.1.0,<8.2.0
install_requires =
    # Our libraries
-   spacy-legacy>=3.0.9,<3.1.0
+   spacy-legacy>=3.0.10,<3.1.0
    spacy-loggers>=1.0.0,<2.0.0
    murmurhash>=0.28.0,<1.1.0
    cymem>=2.0.2,<2.1.0
@@ -50,9 +50,9 @@ install_requires =
    wasabi>=0.9.1,<1.1.0
    srsly>=2.4.3,<3.0.0
    catalogue>=2.0.6,<2.1.0
+   # Third-party dependencies
    typer>=0.3.0,<0.5.0
    pathy>=0.3.5
-   # Third-party dependencies
    tqdm>=4.38.0,<5.0.0
    numpy>=1.15.0
    requests>=2.13.0,<3.0.0
@@ -76,37 +76,41 @@ transformers =
ray =
    spacy_ray>=0.1.0,<1.0.0
cuda =
-   cupy>=5.0.0b4,<11.0.0
+   cupy>=5.0.0b4,<12.0.0
cuda80 =
-   cupy-cuda80>=5.0.0b4,<11.0.0
+   cupy-cuda80>=5.0.0b4,<12.0.0
cuda90 =
-   cupy-cuda90>=5.0.0b4,<11.0.0
+   cupy-cuda90>=5.0.0b4,<12.0.0
cuda91 =
-   cupy-cuda91>=5.0.0b4,<11.0.0
+   cupy-cuda91>=5.0.0b4,<12.0.0
cuda92 =
-   cupy-cuda92>=5.0.0b4,<11.0.0
+   cupy-cuda92>=5.0.0b4,<12.0.0
cuda100 =
-   cupy-cuda100>=5.0.0b4,<11.0.0
+   cupy-cuda100>=5.0.0b4,<12.0.0
cuda101 =
-   cupy-cuda101>=5.0.0b4,<11.0.0
+   cupy-cuda101>=5.0.0b4,<12.0.0
cuda102 =
-   cupy-cuda102>=5.0.0b4,<11.0.0
+   cupy-cuda102>=5.0.0b4,<12.0.0
cuda110 =
-   cupy-cuda110>=5.0.0b4,<11.0.0
+   cupy-cuda110>=5.0.0b4,<12.0.0
cuda111 =
-   cupy-cuda111>=5.0.0b4,<11.0.0
+   cupy-cuda111>=5.0.0b4,<12.0.0
cuda112 =
-   cupy-cuda112>=5.0.0b4,<11.0.0
+   cupy-cuda112>=5.0.0b4,<12.0.0
cuda113 =
-   cupy-cuda113>=5.0.0b4,<11.0.0
+   cupy-cuda113>=5.0.0b4,<12.0.0
cuda114 =
-   cupy-cuda114>=5.0.0b4,<11.0.0
+   cupy-cuda114>=5.0.0b4,<12.0.0
cuda115 =
-   cupy-cuda115>=5.0.0b4,<11.0.0
+   cupy-cuda115>=5.0.0b4,<12.0.0
cuda116 =
-   cupy-cuda116>=5.0.0b4,<11.0.0
+   cupy-cuda116>=5.0.0b4,<12.0.0
cuda117 =
-   cupy-cuda117>=5.0.0b4,<11.0.0
+   cupy-cuda117>=5.0.0b4,<12.0.0
+cuda11x =
+   cupy-cuda11x>=11.0.0,<12.0.0
+cuda-autodetect =
+   cupy-wheel>=11.0.0,<12.0.0
apple =
    thinc-apple-ops>=0.1.0.dev0,<1.0.0
# Language tokenizers with external dependencies

@@ -205,6 +205,17 @@ def setup_package():
        get_python_inc(plat_specific=True),
    ]
    ext_modules = []
+   ext_modules.append(
+       Extension(
+           "spacy.matcher.levenshtein",
+           [
+               "spacy/matcher/levenshtein.pyx",
+               "spacy/matcher/polyleven.c",
+           ],
+           language="c",
+           include_dirs=include_dirs,
+       )
+   )
    for name in MOD_NAMES:
        mod_path = name.replace(".", "/") + ".pyx"
        ext = Extension(

@@ -31,21 +31,21 @@ def load(
    name: Union[str, Path],
    *,
    vocab: Union[Vocab, bool] = True,
-   disable: Iterable[str] = util.SimpleFrozenList(),
-   enable: Iterable[str] = util.SimpleFrozenList(),
-   exclude: Iterable[str] = util.SimpleFrozenList(),
+   disable: Union[str, Iterable[str]] = util._DEFAULT_EMPTY_PIPES,
+   enable: Union[str, Iterable[str]] = util._DEFAULT_EMPTY_PIPES,
+   exclude: Union[str, Iterable[str]] = util._DEFAULT_EMPTY_PIPES,
    config: Union[Dict[str, Any], Config] = util.SimpleFrozenDict(),
) -> Language:
    """Load a spaCy model from an installed package or a local path.

    name (str): Package name or model path.
    vocab (Vocab): A Vocab object. If True, a vocab is created.
-   disable (Iterable[str]): Names of pipeline components to disable. Disabled
+   disable (Union[str, Iterable[str]]): Name(s) of pipeline component(s) to disable. Disabled
        pipes will be loaded but they won't be run unless you explicitly
        enable them by calling nlp.enable_pipe.
-   enable (Iterable[str]): Names of pipeline components to enable. All other
+   enable (Union[str, Iterable[str]]): Name(s) of pipeline component(s) to enable. All other
        pipes will be disabled (but can be enabled later using nlp.enable_pipe).
-   exclude (Iterable[str]): Names of pipeline components to exclude. Excluded
+   exclude (Union[str, Iterable[str]]): Name(s) of pipeline component(s) to exclude. Excluded
        components won't be loaded.
    config (Dict[str, Any] / Config): Config overrides as nested dict or dict
        keyed by section values in dot notation.
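For reference, the widened signature means a bare component name is now accepted wherever a list was previously required. A minimal sketch (the pipeline name en_core_web_sm is an assumption, not part of this diff):

import spacy

# A single string is treated like a one-element list.
nlp = spacy.load("en_core_web_sm", disable="parser")
# Equivalent to the older list form:
nlp = spacy.load("en_core_web_sm", disable=["parser"])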

@@ -20,7 +20,7 @@ def download_cli(
    ctx: typer.Context,
    model: str = Arg(..., help="Name of pipeline package to download"),
    direct: bool = Opt(False, "--direct", "-d", "-D", help="Force direct download of name + version"),
-   sdist: bool = Opt(False, "--sdist", "-S", help="Download sdist (.tar.gz) archive instead of pre-built binary wheel")
+   sdist: bool = Opt(False, "--sdist", "-S", help="Download sdist (.tar.gz) archive instead of pre-built binary wheel"),
    # fmt: on
):
    """
@@ -36,7 +36,12 @@ def download_cli(
    download(model, direct, sdist, *ctx.args)

-def download(model: str, direct: bool = False, sdist: bool = False, *pip_args) -> None:
+def download(
+   model: str,
+   direct: bool = False,
+   sdist: bool = False,
+   *pip_args,
+) -> None:
    if (
        not (is_package("spacy") or is_package("spacy-nightly"))
        and "--no-deps" not in pip_args
@@ -50,13 +55,10 @@ def download(model: str, direct: bool = False, sdist: bool = False, *pip_args) -
            "dependencies, you'll have to install them manually."
        )
        pip_args = pip_args + ("--no-deps",)
-   suffix = SDIST_SUFFIX if sdist else WHEEL_SUFFIX
-   dl_tpl = "{m}-{v}/{m}-{v}{s}#egg={m}=={v}"
    if direct:
        components = model.split("-")
        model_name = "".join(components[:-1])
        version = components[-1]
-       download_model(dl_tpl.format(m=model_name, v=version, s=suffix), pip_args)
    else:
        model_name = model
        if model in OLD_MODEL_SHORTCUTS:
@@ -67,13 +69,26 @@ def download(model: str, direct: bool = False, sdist: bool = False, *pip_args) -
            model_name = OLD_MODEL_SHORTCUTS[model]
        compatibility = get_compatibility()
        version = get_version(model_name, compatibility)
-       download_model(dl_tpl.format(m=model_name, v=version, s=suffix), pip_args)
+   filename = get_model_filename(model_name, version, sdist)
+   download_model(filename, pip_args)
    msg.good(
        "Download and installation successful",
        f"You can now load the package via spacy.load('{model_name}')",
    )

+def get_model_filename(model_name: str, version: str, sdist: bool = False) -> str:
+   dl_tpl = "{m}-{v}/{m}-{v}{s}"
+   egg_tpl = "#egg={m}=={v}"
+   suffix = SDIST_SUFFIX if sdist else WHEEL_SUFFIX
+   filename = dl_tpl.format(m=model_name, v=version, s=suffix)
+   if sdist:
+       filename += egg_tpl.format(m=model_name, v=version)
+   return filename

def get_compatibility() -> dict:
    if is_prerelease_version(about.__version__):
        version: Optional[str] = about.__version__
@@ -105,6 +120,11 @@ def get_version(model: str, comp: dict) -> str:
    return comp[model][0]

+def get_latest_version(model: str) -> str:
+   comp = get_compatibility()
+   return get_version(model, comp)

def download_model(
    filename: str, user_pip_args: Optional[Sequence[str]] = None
) -> None:
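As a rough illustration of what the refactored helper builds (the pipeline name and version are made up for the example; the exact suffixes come from SDIST_SUFFIX and WHEEL_SUFFIX):

from spacy.cli.download import get_model_filename

# Wheel-style filename, e.g. "en_core_web_sm-3.4.0/en_core_web_sm-3.4.0" plus the wheel suffix.
print(get_model_filename("en_core_web_sm", "3.4.0"))
# The sdist variant additionally appends the "#egg=<name>==<version>" fragment.
print(get_model_filename("en_core_web_sm", "3.4.0", sdist=True))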

@@ -1,10 +1,13 @@
from typing import Optional, Dict, Any, Union, List
import platform
+import pkg_resources
+import json
from pathlib import Path
from wasabi import Printer, MarkdownRenderer
import srsly

from ._util import app, Arg, Opt, string_to_list
+from .download import get_model_filename, get_latest_version
from .. import util
from .. import about
@@ -16,6 +19,7 @@ def info_cli(
    markdown: bool = Opt(False, "--markdown", "-md", help="Generate Markdown for GitHub issues"),
    silent: bool = Opt(False, "--silent", "-s", "-S", help="Don't print anything (just return)"),
    exclude: str = Opt("labels", "--exclude", "-e", help="Comma-separated keys to exclude from the print-out"),
+   url: bool = Opt(False, "--url", "-u", help="Print the URL to download the most recent compatible version of the pipeline"),
    # fmt: on
):
    """
@@ -23,10 +27,19 @@ def info_cli(
    print its meta information. Flag --markdown prints details in Markdown for easy
    copy-pasting to GitHub issues.

+   Flag --url prints only the download URL of the most recent compatible
+   version of the pipeline.

    DOCS: https://spacy.io/api/cli#info
    """
    exclude = string_to_list(exclude)
-   info(model, markdown=markdown, silent=silent, exclude=exclude)
+   info(
+       model,
+       markdown=markdown,
+       silent=silent,
+       exclude=exclude,
+       url=url,
+   )

def info(
@@ -35,11 +48,20 @@ def info(
    markdown: bool = False,
    silent: bool = True,
    exclude: Optional[List[str]] = None,
+   url: bool = False,
) -> Union[str, dict]:
    msg = Printer(no_print=silent, pretty=not silent)
    if not exclude:
        exclude = []
-   if model:
+   if url:
+       if model is not None:
+           title = f"Download info for pipeline '{model}'"
+           data = info_model_url(model)
+           print(data["download_url"])
+           return data
+       else:
+           msg.fail("--url option requires a pipeline name", exits=1)
+   elif model:
        title = f"Info about pipeline '{model}'"
        data = info_model(model, silent=silent)
    else:
@@ -99,11 +121,44 @@ def info_model(model: str, *, silent: bool = True) -> Dict[str, Any]:
        meta["source"] = str(model_path.resolve())
    else:
        meta["source"] = str(model_path)
+   download_url = info_installed_model_url(model)
+   if download_url:
+       meta["download_url"] = download_url
    return {
        k: v for k, v in meta.items() if k not in ("accuracy", "performance", "speed")
    }

+def info_installed_model_url(model: str) -> Optional[str]:
+   """Given a pipeline name, get the download URL if available, otherwise
+   return None.
+
+   This is only available for pipelines installed as modules that have
+   dist-info available.
+   """
+   try:
+       dist = pkg_resources.get_distribution(model)
+       data = json.loads(dist.get_metadata("direct_url.json"))
+       return data["url"]
+   except pkg_resources.DistributionNotFound:
+       # no such package
+       return None
+   except Exception:
+       # something else, like no file or invalid JSON
+       return None

+def info_model_url(model: str) -> Dict[str, Any]:
+   """Return the download URL for the latest version of a pipeline."""
+   version = get_latest_version(model)
+   filename = get_model_filename(model, version)
+   download_url = about.__download_url__ + "/" + filename
+   release_tpl = "https://github.com/explosion/spacy-models/releases/tag/{m}-{v}"
+   release_url = release_tpl.format(m=model, v=version)
+   return {"download_url": download_url, "release_url": release_url}

def get_markdown(
    data: Dict[str, Any],
    title: Optional[str] = None,
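A hedged sketch of the new --url path called from Python (the pipeline name is illustrative, and the lookup needs network access to fetch the compatibility table):

from spacy.cli import info

# Prints the download URL of the latest compatible version and returns both URLs.
data = info("en_core_web_sm", url=True)
print(data["download_url"])
print(data["release_url"])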

@@ -1,5 +1,8 @@
-from typing import Optional, List, Dict, Sequence, Any, Iterable, Union
+from typing import Optional, List, Dict, Sequence, Any, Iterable, Union, Tuple
+import os.path
from pathlib import Path
+import pkg_resources
from wasabi import msg
from wasabi.util import locale_escape
import sys
@@ -72,6 +75,12 @@ def project_run(
    commands = {cmd["name"]: cmd for cmd in config.get("commands", [])}
    workflows = config.get("workflows", {})
    validate_subcommand(list(commands.keys()), list(workflows.keys()), subcommand)
+
+   req_path = project_dir / "requirements.txt"
+   if config.get("check_requirements", True) and os.path.exists(req_path):
+       with req_path.open() as requirements_file:
+           _check_requirements([req.replace("\n", "") for req in requirements_file])
+
    if subcommand in workflows:
        msg.info(f"Running workflow '{subcommand}'")
        for cmd in workflows[subcommand]:
@@ -221,6 +230,8 @@ def validate_subcommand(
        msg.fail(f"No commands or workflows defined in {PROJECT_FILE}", exits=1)
    if subcommand not in commands and subcommand not in workflows:
        help_msg = []
+       if subcommand in ["assets", "asset"]:
+           help_msg.append("Did you mean to run: python -m spacy project assets?")
        if commands:
            help_msg.append(f"Available commands: {', '.join(commands)}")
        if workflows:
@@ -334,3 +345,32 @@ def get_fileinfo(project_dir: Path, paths: List[str]) -> List[Dict[str, Optional
        md5 = get_checksum(file_path) if file_path.exists() else None
        data.append({"path": path, "md5": md5})
    return data
def _check_requirements(requirements: List[str]) -> Tuple[bool, bool]:
    """Checks whether requirements are installed and free of version conflicts.
    requirements (List[str]): List of requirements.
    RETURNS (Tuple[bool, bool]): Whether (1) any packages couldn't be imported, (2) any packages with version conflicts
        exist.
    """
    failed_pkgs_msgs: List[str] = []
    conflicting_pkgs_msgs: List[str] = []

    for req in requirements:
        try:
            pkg_resources.require(req)
        except pkg_resources.DistributionNotFound as dnf:
            failed_pkgs_msgs.append(dnf.report())
        except pkg_resources.VersionConflict as vc:
            conflicting_pkgs_msgs.append(vc.report())

    if len(failed_pkgs_msgs) or len(conflicting_pkgs_msgs):
        msg.warn(
            title="Missing requirements or requirement conflicts detected. Make sure your Python environment is set up "
            "correctly and you installed all requirements specified in your project's requirements.txt: "
        )
        for pgk_msg in failed_pkgs_msgs + conflicting_pkgs_msgs:
            msg.text(pgk_msg)

    return len(failed_pkgs_msgs) > 0, len(conflicting_pkgs_msgs) > 0
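A quick sketch of how the helper behaves (the second requirement string is deliberately bogus, so it counts as missing and triggers the wasabi warning):

from spacy.cli.project.run import _check_requirements

missing, conflicting = _check_requirements(
    ["wasabi>=0.9.1", "definitely-not-a-real-package==1.0"]
)
# missing is True because the bogus package can't be found; conflicting stays False here.
print(missing, conflicting)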

@@ -271,13 +271,8 @@ factory = "tok2vec"
[components.tok2vec.model.embed]
@architectures = "spacy.MultiHashEmbed.v2"
width = ${components.tok2vec.model.encode.width}
-{% if has_letters -%}
attrs = ["NORM", "PREFIX", "SUFFIX", "SHAPE"]
-rows = [5000, 2500, 2500, 2500]
+rows = [5000, 1000, 2500, 2500]
-{% else -%}
-attrs = ["ORTH", "SHAPE"]
-rows = [5000, 2500]
-{% endif -%}
include_static_vectors = {{ "true" if optimize == "accuracy" else "false" }}

[components.tok2vec.model.encode]

@@ -271,4 +271,3 @@ zh:
    accuracy:
        name: bert-base-chinese
        size_factor: 3
-   has_letters: false

@@ -212,6 +212,8 @@ class Warnings(metaclass=ErrorsWithCodes):
    W121 = ("Attempting to trace non-existent method '{method}' in pipe '{pipe}'")
    W122 = ("Couldn't trace method '{method}' in pipe '{pipe}'. This can happen if the pipe class "
            "is a Cython extension type.")
+   W123 = ("Argument {arg} with value {arg_value} is used instead of {config_value} as specified in the config. Be "
+           "aware that this might affect other components in your pipeline.")

class Errors(metaclass=ErrorsWithCodes):
@@ -230,8 +232,9 @@ class Errors(metaclass=ErrorsWithCodes):
            "initialized component.")
    E004 = ("Can't set up pipeline component: a factory for '{name}' already "
            "exists. Existing factory: {func}. New factory: {new_func}")
-   E005 = ("Pipeline component '{name}' returned None. If you're using a "
-           "custom component, maybe you forgot to return the processed Doc?")
+   E005 = ("Pipeline component '{name}' returned {returned_type} instead of a "
+           "Doc. If you're using a custom component, maybe you forgot to "
+           "return the processed Doc?")
    E006 = ("Invalid constraints for adding pipeline component. You can only "
            "set one of the following: before (component name or index), "
            "after (component name or index), first (True) or last (True). "
@@ -706,7 +709,7 @@ class Errors(metaclass=ErrorsWithCodes):
            "need to modify the pipeline, use the built-in methods like "
            "`nlp.add_pipe`, `nlp.remove_pipe`, `nlp.disable_pipe` or "
            "`nlp.enable_pipe` instead.")
-   E927 = ("Can't write to frozen list Maybe you're trying to modify a computed "
+   E927 = ("Can't write to frozen list. Maybe you're trying to modify a computed "
            "property or default function argument?")
    E928 = ("A KnowledgeBase can only be serialized to/from from a directory, "
            "but the provided argument {loc} points to a file.")
@@ -936,8 +939,9 @@ class Errors(metaclass=ErrorsWithCodes):
    E1040 = ("Doc.from_json requires all tokens to have the same attributes. "
             "Some tokens do not contain annotation for: {partial_attrs}")
    E1041 = ("Expected a string, Doc, or bytes as input, but got: {type}")
-   E1042 = ("Function was called with `{arg1}`={arg1_values} and "
-            "`{arg2}`={arg2_values} but these arguments are conflicting.")
+   E1042 = ("`enable={enable}` and `disable={disable}` are inconsistent with each other.\nIf you only passed "
+            "one of `enable` or `disable`, the other argument is specified in your pipeline's configuration.\nIn that "
+            "case pass an empty list for the previously not specified argument to avoid this error.")
    E1043 = ("Expected None or a value in range [{range_start}, {range_end}] for entity linker threshold, but got "
             "{value}.")

spacy/lang/la/__init__.py
@@ -0,0 +1,18 @@
from ...language import Language, BaseDefaults
from .tokenizer_exceptions import TOKENIZER_EXCEPTIONS
from .stop_words import STOP_WORDS
from .lex_attrs import LEX_ATTRS
class LatinDefaults(BaseDefaults):
    tokenizer_exceptions = TOKENIZER_EXCEPTIONS
    stop_words = STOP_WORDS
    lex_attr_getters = LEX_ATTRS


class Latin(Language):
    lang = "la"
    Defaults = LatinDefaults
__all__ = ["Latin"]

@@ -0,0 +1,34 @@
from ...attrs import LIKE_NUM
import re
# cf. Goyvaerts/Levithan 2009; case-insensitive, allow 4
roman_numerals_compile = re.compile(
r"(?i)^(?=[MDCLXVI])M*(C[MD]|D?C{0,4})(X[CL]|L?X{0,4})(I[XV]|V?I{0,4})$"
)
_num_words = set(
"""
unus una unum duo duae tres tria quattuor quinque sex septem octo novem decem
""".split()
)
_ordinal_words = set(
"""
primus prima primum secundus secunda secundum tertius tertia tertium
""".split()
)
def like_num(text):
    if text.isdigit():
        return True
    if roman_numerals_compile.match(text):
        return True
    if text.lower() in _num_words:
        return True
    if text.lower() in _ordinal_words:
        return True
    return False
LEX_ATTRS = {LIKE_NUM: like_num}
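A quick check of the new lexical attribute, assuming a spaCy build that already includes this la module:

import spacy

nlp = spacy.blank("la")
doc = nlp("duo milites XIV annos")
# "duo" is in _num_words and "XIV" matches the Roman-numeral pattern; the other tokens do not.
print([(t.text, t.like_num) for t in doc])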

@@ -0,0 +1,37 @@
# Corrected Perseus list, cf. https://wiki.digitalclassicist.org/Stopwords_for_Greek_and_Latin
STOP_WORDS = set(
"""
ab ac ad adhuc aliqui aliquis an ante apud at atque aut autem
cum cur
de deinde dum
ego enim ergo es est et etiam etsi ex
fio
haud hic
iam idem igitur ille in infra inter interim ipse is ita
magis modo mox
nam ne nec necque neque nisi non nos
o ob
per possum post pro
quae quam quare qui quia quicumque quidem quilibet quis quisnam quisquam quisque quisquis quo quoniam
sed si sic sive sub sui sum super suus
tam tamen trans tu tum
ubi uel uero
vel vero
""".split()
)

@@ -0,0 +1,76 @@
from ..tokenizer_exceptions import BASE_EXCEPTIONS
from ...symbols import ORTH
from ...util import update_exc
## TODO: Look into systematically handling u/v
_exc = {
"mecum": [{ORTH: "me"}, {ORTH: "cum"}],
"tecum": [{ORTH: "te"}, {ORTH: "cum"}],
"nobiscum": [{ORTH: "nobis"}, {ORTH: "cum"}],
"vobiscum": [{ORTH: "vobis"}, {ORTH: "cum"}],
"uobiscum": [{ORTH: "uobis"}, {ORTH: "cum"}],
}
for orth in [
"A.",
"Agr.",
"Ap.",
"C.",
"Cn.",
"D.",
"F.",
"K.",
"L.",
"M'.",
"M.",
"Mam.",
"N.",
"Oct.",
"Opet.",
"P.",
"Paul.",
"Post.",
"Pro.",
"Q.",
"S.",
"Ser.",
"Sert.",
"Sex.",
"St.",
"Sta.",
"T.",
"Ti.",
"V.",
"Vol.",
"Vop.",
"U.",
"Uol.",
"Uop.",
"Ian.",
"Febr.",
"Mart.",
"Apr.",
"Mai.",
"Iun.",
"Iul.",
"Aug.",
"Sept.",
"Oct.",
"Nov.",
"Nou.",
"Dec.",
"Non.",
"Id.",
"A.D.",
"Coll.",
"Cos.",
"Ord.",
"Pl.",
"S.C.",
"Suff.",
"Trib.",
]:
    _exc[orth] = [{ORTH: orth}]
TOKENIZER_EXCEPTIONS = update_exc(BASE_EXCEPTIONS, _exc)
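The enclitic entries above split into their parts in a blank pipeline (same assumption as the earlier sketch, i.e. a build that ships the la module):

import spacy

nlp = spacy.blank("la")
# Expected tokens: ['vobis', 'cum', 'veni']
print([t.text for t in nlp("vobiscum veni")])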

spacy/lang/lg/__init__.py
@@ -0,0 +1,18 @@
from .stop_words import STOP_WORDS
from .lex_attrs import LEX_ATTRS
from .punctuation import TOKENIZER_INFIXES
from ...language import Language, BaseDefaults
class LugandaDefaults(BaseDefaults):
    lex_attr_getters = LEX_ATTRS
    infixes = TOKENIZER_INFIXES
    stop_words = STOP_WORDS


class Luganda(Language):
    lang = "lg"
    Defaults = LugandaDefaults
__all__ = ["Luganda"]

spacy/lang/lg/examples.py
@@ -0,0 +1,17 @@
"""
Example sentences to test spaCy and its language models.
>>> from spacy.lang.lg.examples import sentences
>>> docs = nlp.pipe(sentences)
"""
sentences = [
"Mpa ebyafaayo ku byalo Nakatu ne Nkajja",
"Okuyita Ttembo kitegeeza kugwa ddalu",
"Ekifumu kino kyali kya mulimu ki?",
"Ekkovu we liyise wayitibwa mukululo",
"Akola mulimu ki oguvaamu ssente?",
"Emisumaali egikomerera embaawo giyitibwa nninga",
"Abooluganda abemmamba ababiri",
"Ekisaawe ky'ebyenjigiriza kya mugaso nnyo",
]

@@ -0,0 +1,95 @@
from ...attrs import LIKE_NUM
_num_words = [
"nnooti", # Zero
"zeero", # zero
"emu", # one
"bbiri", # two
"ssatu", # three
"nnya", # four
"ttaano", # five
"mukaaga", # six
"musanvu", # seven
"munaana", # eight
"mwenda", # nine
"kkumi", # ten
"kkumi n'emu", # eleven
"kkumi na bbiri", # twelve
"kkumi na ssatu", # thirteen
"kkumi na nnya", # forteen
"kkumi na ttaano", # fifteen
"kkumi na mukaaga", # sixteen
"kkumi na musanvu", # seventeen
"kkumi na munaana", # eighteen
"kkumi na mwenda", # nineteen
"amakumi abiri", # twenty
"amakumi asatu", # thirty
"amakumi ana", # forty
"amakumi ataano", # fifty
"nkaaga", # sixty
"nsanvu", # seventy
"kinaana", # eighty
"kyenda", # ninety
"kikumi", # hundred
"lukumi", # thousand
"kakadde", # million
"kawumbi", # billion
"kase", # trillion
"katabalika", # quadrillion
"keesedde", # gajillion
"kafukunya", # bazillion
"ekisooka", # first
"ekyokubiri", # second
"ekyokusatu", # third
"ekyokuna", # fourth
"ekyokutaano", # fifith
"ekyomukaaga", # sixth
"ekyomusanvu", # seventh
"eky'omunaana", # eighth
"ekyomwenda", # nineth
"ekyekkumi", # tenth
"ekyekkumi n'ekimu", # eleventh
"ekyekkumi n'ebibiri", # twelveth
"ekyekkumi n'ebisatu", # thirteenth
"ekyekkumi n'ebina", # fourteenth
"ekyekkumi n'ebitaano", # fifteenth
"ekyekkumi n'omukaaga", # sixteenth
"ekyekkumi n'omusanvu", # seventeenth
"ekyekkumi n'omunaana", # eigteenth
"ekyekkumi n'omwenda", # nineteenth
"ekyamakumi abiri", # twentieth
"ekyamakumi asatu", # thirtieth
"ekyamakumi ana", # fortieth
"ekyamakumi ataano", # fiftieth
"ekyenkaaga", # sixtieth
"ekyensanvu", # seventieth
"ekyekinaana", # eightieth
"ekyekyenda", # ninetieth
"ekyekikumi", # hundredth
"ekyolukumi", # thousandth
"ekyakakadde", # millionth
"ekyakawumbi", # billionth
"ekyakase", # trillionth
"ekyakatabalika", # quadrillionth
"ekyakeesedde", # gajillionth
"ekyakafukunya", # bazillionth
]
def like_num(text):
    if text.startswith(("+", "-", "±", "~")):
        text = text[1:]
    text = text.replace(",", "").replace(".", "")
    if text.isdigit():
        return True
    if text.count("/") == 1:
        num, denom = text.split("/")
        if num.isdigit() and denom.isdigit():
            return True
    text_lower = text.lower()
    if text_lower in _num_words:
        return True
    return False
LEX_ATTRS = {LIKE_NUM: like_num}

@@ -0,0 +1,19 @@
from ..char_classes import LIST_ELLIPSES, LIST_ICONS, HYPHENS
from ..char_classes import CONCAT_QUOTES, ALPHA_LOWER, ALPHA_UPPER, ALPHA
_infixes = (
LIST_ELLIPSES
+ LIST_ICONS
+ [
r"(?<=[0-9])[+\-\*^](?=[0-9-])",
r"(?<=[{al}{q}])\.(?=[{au}{q}])".format(
al=ALPHA_LOWER, au=ALPHA_UPPER, q=CONCAT_QUOTES
),
r"(?<=[{a}]),(?=[{a}])".format(a=ALPHA),
r"(?<=[{a}0-9])(?:{h})(?=[{a}])".format(a=ALPHA, h=HYPHENS),
r"(?<=[{a}0-9])[:<>=/](?=[{a}])".format(a=ALPHA),
]
)
TOKENIZER_INFIXES = _infixes

@@ -0,0 +1,19 @@
STOP_WORDS = set(
"""
abadde abalala abamu abangi abava ajja ali alina ani anti ateekeddwa atewamu
atya awamu aweebwa ayinza ba baali babadde babalina bajja
bajjanewankubade bali balina bandi bangi bano bateekeddwa baweebwa bayina bebombi beera bibye
bimu bingi bino bo bokka bonna buli bulijjo bulungi bwabwe bwaffe bwayo bwe bwonna bya byabwe
byaffe byebimu byonna ddaa ddala ddi e ebimu ebiri ebweruobulungi ebyo edda ejja ekirala ekyo
endala engeri ennyo era erimu erina ffe ffenna ga gujja gumu gunno guno gwa gwe kaseera kati
kennyini ki kiki kikino kikye kikyo kino kirungi kki ku kubangabyombi kubangaolwokuba kudda
kuva kuwa kwegamba kyaffe kye kyekimuoyo kyekyo kyonna leero liryo lwa lwaki lyabwezaabwe
lyaffe lyange mbadde mingi mpozzi mu mulinaoyina munda mwegyabwe nolwekyo nabadde nabo nandiyagadde
nandiye nanti naye ne nedda neera nga nnyingi nnyini nnyinza nnyo nti nyinza nze oba ojja okudda
okugenda okuggyako okutuusa okuva okuwa oli olina oluvannyuma olwekyobuva omuli ono osobola otya
oyina oyo seetaaga si sinakindi singa talina tayina tebaali tebaalina tebayina terina tetulina
tetuteekeddwa tewali teyalina teyayina tolina tu tuyina tulina tuyina twafuna twetaaga wa wabula
wabweru wadde waggulunnina wakati waliwobangi waliyo wandi wange wano wansi weebwa yabadde yaffe
ye yenna yennyini yina yonna ziba zijja zonna
""".split()
)

@@ -1,4 +1,4 @@
-from typing import Iterator, Optional, Any, Dict, Callable, Iterable, Collection
+from typing import Iterator, Optional, Any, Dict, Callable, Iterable
from typing import Union, Tuple, List, Set, Pattern, Sequence
from typing import NoReturn, TYPE_CHECKING, TypeVar, cast, overload
@@ -10,6 +10,7 @@ from contextlib import contextmanager
from copy import deepcopy
from pathlib import Path
import warnings

from thinc.api import get_current_ops, Config, CupyOps, Optimizer
import srsly
import multiprocessing as mp
@@ -24,7 +25,7 @@ from .pipe_analysis import validate_attrs, analyze_pipes, print_pipe_analysis
from .training import Example, validate_examples
from .training.initialize import init_vocab, init_tok2vec
from .scorer import Scorer
-from .util import registry, SimpleFrozenList, _pipe, raise_error
+from .util import registry, SimpleFrozenList, _pipe, raise_error, _DEFAULT_EMPTY_PIPES
from .util import SimpleFrozenDict, combine_score_weights, CONFIG_SECTION_ORDER
from .util import warn_if_jupyter_cupy
from .lang.tokenizer_exceptions import URL_MATCH, BASE_EXCEPTIONS
@@ -1028,8 +1029,8 @@ class Language:
                raise ValueError(Errors.E109.format(name=name)) from e
            except Exception as e:
                error_handler(name, proc, [doc], e)
-           if doc is None:
-               raise ValueError(Errors.E005.format(name=name))
+           if not isinstance(doc, Doc):
+               raise ValueError(Errors.E005.format(name=name, returned_type=type(doc)))
        return doc

    def disable_pipes(self, *names) -> "DisabledPipes":
@@ -1063,7 +1064,7 @@ class Language:
        """
        if enable is None and disable is None:
            raise ValueError(Errors.E991)
-       if disable is not None and isinstance(disable, str):
+       if isinstance(disable, str):
            disable = [disable]
        if enable is not None:
            if isinstance(enable, str):
@@ -1698,9 +1699,9 @@ class Language:
        config: Union[Dict[str, Any], Config] = {},
        *,
        vocab: Union[Vocab, bool] = True,
-       disable: Iterable[str] = SimpleFrozenList(),
-       enable: Iterable[str] = SimpleFrozenList(),
-       exclude: Iterable[str] = SimpleFrozenList(),
+       disable: Union[str, Iterable[str]] = _DEFAULT_EMPTY_PIPES,
+       enable: Union[str, Iterable[str]] = _DEFAULT_EMPTY_PIPES,
+       exclude: Union[str, Iterable[str]] = _DEFAULT_EMPTY_PIPES,
        meta: Dict[str, Any] = SimpleFrozenDict(),
        auto_fill: bool = True,
        validate: bool = True,
@@ -1711,12 +1712,12 @@ class Language:
        config (Dict[str, Any] / Config): The loaded config.
        vocab (Vocab): A Vocab object. If True, a vocab is created.
-       disable (Iterable[str]): Names of pipeline components to disable.
+       disable (Union[str, Iterable[str]]): Name(s) of pipeline component(s) to disable.
            Disabled pipes will be loaded but they won't be run unless you
            explicitly enable them by calling nlp.enable_pipe.
-       enable (Iterable[str]): Names of pipeline components to enable. All other
+       enable (Union[str, Iterable[str]]): Name(s) of pipeline component(s) to enable. All other
            pipes will be disabled (and can be enabled using `nlp.enable_pipe`).
-       exclude (Iterable[str]): Names of pipeline components to exclude.
+       exclude (Union[str, Iterable[str]]): Name(s) of pipeline component(s) to exclude.
            Excluded components won't be loaded.
        meta (Dict[str, Any]): Meta overrides for nlp.meta.
        auto_fill (bool): Automatically fill in missing values in config based
@@ -1871,9 +1872,38 @@ class Language:
            nlp.vocab.from_bytes(vocab_b)

        # Resolve disabled/enabled settings.
+       if isinstance(disable, str):
+           disable = [disable]
+       if isinstance(enable, str):
+           enable = [enable]
+       if isinstance(exclude, str):
+           exclude = [exclude]
+
+       def fetch_pipes_status(value: Iterable[str], key: str) -> Iterable[str]:
+           """Fetch value for `enable` or `disable` w.r.t. the specified config and passed arguments passed to
+           .load(). If both arguments and config specified values for this field, the passed arguments take precedence
+           and a warning is printed.
+           value (Iterable[str]): Passed value for `enable` or `disable`.
+           key (str): Key for field in config (either "enabled" or "disabled").
+           RETURN (Iterable[str]):
+           """
+           # We assume that no argument was passed if the value is the specified default value.
+           if id(value) == id(_DEFAULT_EMPTY_PIPES):
+               return config["nlp"].get(key, [])
+           else:
+               if len(config["nlp"].get(key, [])):
+                   warnings.warn(
+                       Warnings.W123.format(
+                           arg=key[:-1],
+                           arg_value=value,
+                           config_value=config["nlp"][key],
+                       )
+                   )
+               return value
+
        disabled_pipes = cls._resolve_component_status(
-           [*config["nlp"]["disabled"], *disable],
-           [*config["nlp"].get("enabled", []), *enable],
+           fetch_pipes_status(disable, "disabled"),
+           fetch_pipes_status(enable, "enabled"),
            config["nlp"]["pipeline"],
        )
        nlp._disabled = set(p for p in disabled_pipes if p not in exclude)
@@ -2031,37 +2061,34 @@ class Language:
    @staticmethod
    def _resolve_component_status(
-       disable: Iterable[str], enable: Iterable[str], pipe_names: Collection[str]
+       disable: Union[str, Iterable[str]],
+       enable: Union[str, Iterable[str]],
+       pipe_names: Iterable[str],
    ) -> Tuple[str, ...]:
        """Derives whether (1) `disable` and `enable` values are consistent and (2)
        resolves those to a single set of disabled components. Raises an error in
        case of inconsistency.
-       disable (Iterable[str]): Names of components or serialization fields to disable.
+       disable (Union[str, Iterable[str]]): Name(s) of component(s) or serialization fields to disable.
-       enable (Iterable[str]): Names of pipeline components to enable.
+       enable (Union[str, Iterable[str]]): Name(s) of pipeline component(s) to enable.
        pipe_names (Iterable[str]): Names of all pipeline components.
        RETURNS (Tuple[str, ...]): Names of components to exclude from pipeline w.r.t.
            specified includes and excludes.
        """
-       if disable is not None and isinstance(disable, str):
+       if isinstance(disable, str):
            disable = [disable]
        to_disable = disable
        if enable:
-           if isinstance(enable, str):
-               enable = [enable]
            to_disable = [
                pipe_name for pipe_name in pipe_names if pipe_name not in enable
            ]
        if disable and disable != to_disable:
-           raise ValueError(
-               Errors.E1042.format(
-                   arg1="enable",
-                   arg2="disable",
-                   arg1_values=enable,
-                   arg2_values=disable,
-               )
-           )
+           raise ValueError(Errors.E1042.format(enable=enable, disable=disable))
        return tuple(to_disable)
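The reworked check boils down to: if enable is given, everything not enabled is implicitly disabled, and an explicit disable list has to agree with that. A sketch of a call that now fails with E1042 (pipeline name is assumed; component names depend on the model):

import spacy

# enable=["tagger"] implies every other component is disabled, which contradicts
# an explicit disable list that only names "parser".
try:
    spacy.load("en_core_web_sm", enable=["tagger"], disable=["parser"])
except ValueError as err:
    print(err)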

@@ -1,5 +1,6 @@
from .matcher import Matcher
from .phrasematcher import PhraseMatcher
from .dependencymatcher import DependencyMatcher
+from .levenshtein import levenshtein

-__all__ = ["Matcher", "PhraseMatcher", "DependencyMatcher"]
+__all__ = ["Matcher", "PhraseMatcher", "DependencyMatcher", "levenshtein"]

@@ -0,0 +1,15 @@
# cython: profile=True, binding=True, infer_types=True
from cpython.object cimport PyObject
from libc.stdint cimport int64_t
from typing import Optional
cdef extern from "polyleven.c":
    int64_t polyleven(PyObject *o1, PyObject *o2, int64_t k)


cpdef int64_t levenshtein(a: str, b: str, k: Optional[int] = None):
    if k is None:
        k = -1
    return polyleven(<PyObject*>a, <PyObject*>b, k)
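The wrapper is re-exported from spacy.matcher (see the matcher/__init__.py hunk above), so it can be called directly; a small sketch:

from spacy.matcher import levenshtein

print(levenshtein("kitten", "sitting"))       # 3
# With a bound k, distances above the bound come back as k + 1.
print(levenshtein("kitten", "sitting", k=1))  # 2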

@@ -1,5 +1,5 @@
# cython: infer_types=True, cython: profile=True
-from typing import List
+from typing import List, Iterable

from libcpp.vector cimport vector
from libc.stdint cimport int32_t, int8_t
@@ -867,20 +867,27 @@ class _SetPredicate:

    def __call__(self, Token token):
        if self.is_extension:
-           value = get_string_id(token._.get(self.attr))
+           value = token._.get(self.attr)
        else:
            value = get_token_attr_for_matcher(token.c, self.attr)

-       if self.predicate in ("IS_SUBSET", "IS_SUPERSET", "INTERSECTS"):
+       if self.predicate in ("IN", "NOT_IN"):
+           if isinstance(value, (str, int)):
+               value = get_string_id(value)
+           else:
+               return False
+       elif self.predicate in ("IS_SUBSET", "IS_SUPERSET", "INTERSECTS"):
+           # ensure that all values are enclosed in a set
            if self.attr == MORPH:
                # break up MORPH into individual Feat=Val values
                value = set(get_string_id(v) for v in MorphAnalysis.from_id(self.vocab, value))
+           elif isinstance(value, (str, int)):
+               value = set((get_string_id(value),))
+           elif isinstance(value, Iterable) and all(isinstance(v, (str, int)) for v in value):
+               value = set(get_string_id(v) for v in value)
            else:
-               # treat a single value as a list
-               if isinstance(value, (str, int)):
-                   value = set([get_string_id(value)])
-               else:
-                   value = set(get_string_id(v) for v in value)
+               return False
        if self.predicate == "IN":
            return value in self.value
        elif self.predicate == "NOT_IN":
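In practice the public pattern syntax is unchanged; the predicate just handles unexpected extension values more defensively. A small sketch of an IN pattern (standard Matcher usage, nothing here is specific to this diff):

import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab)
matcher.add("GREETING", [[{"LOWER": {"IN": ["hello", "hi", "hey"]}}]])
doc = nlp("Hey there!")
# Prints the matched span text, here "Hey".
print([doc[start:end].text for _, start, end in matcher(doc)])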

spacy/matcher/polyleven.c
@@ -0,0 +1,384 @@
/*
* Adapted from Polyleven (https://ceptord.net/)
*
* Source: https://github.com/fujimotos/polyleven/blob/c3f95a080626c5652f0151a2e449963288ccae84/polyleven.c
*
* Copyright (c) 2021 Fujimoto Seiji <fujimoto@ceptord.net>
* Copyright (c) 2021 Max Bachmann <kontakt@maxbachmann.de>
* Copyright (c) 2022 Nick Mazuk
* Copyright (c) 2022 Michael Weiss <code@mweiss.ch>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in all
* copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include <Python.h>
#include <stdint.h>
#define MIN(a,b) ((a) < (b) ? (a) : (b))
#define MAX(a,b) ((a) > (b) ? (a) : (b))
#define CDIV(a,b) ((a) / (b) + ((a) % (b) > 0))
#define BIT(i,n) (((i) >> (n)) & 1)
#define FLIP(i,n) ((i) ^ ((uint64_t) 1 << (n)))
#define ISASCII(kd) ((kd) == PyUnicode_1BYTE_KIND)
/*
* Bare bone of PyUnicode
*/
struct strbuf {
void *ptr;
int kind;
int64_t len;
};
static void strbuf_init(struct strbuf *s, PyObject *o)
{
s->ptr = PyUnicode_DATA(o);
s->kind = PyUnicode_KIND(o);
s->len = PyUnicode_GET_LENGTH(o);
}
#define strbuf_read(s, i) PyUnicode_READ((s)->kind, (s)->ptr, (i))
/*
* An encoded mbleven model table.
*
* Each 8-bit integer represents an edit sequence, with using two
* bits for a single operation.
*
* 01 = DELETE, 10 = INSERT, 11 = REPLACE
*
* For example, 13 is '1101' in binary notation, so it means
* DELETE + REPLACE.
*/
static const uint8_t MBLEVEN_MATRIX[] = {
3, 0, 0, 0, 0, 0, 0, 0,
1, 0, 0, 0, 0, 0, 0, 0,
15, 9, 6, 0, 0, 0, 0, 0,
13, 7, 0, 0, 0, 0, 0, 0,
5, 0, 0, 0, 0, 0, 0, 0,
63, 39, 45, 57, 54, 30, 27, 0,
61, 55, 31, 37, 25, 22, 0, 0,
53, 29, 23, 0, 0, 0, 0, 0,
21, 0, 0, 0, 0, 0, 0, 0,
};
#define MBLEVEN_MATRIX_GET(k, d) ((((k) + (k) * (k)) / 2 - 1) + (d)) * 8
static int64_t mbleven_ascii(char *s1, int64_t len1,
char *s2, int64_t len2, int k)
{
int pos;
uint8_t m;
int64_t i, j, c, r;
pos = MBLEVEN_MATRIX_GET(k, len1 - len2);
r = k + 1;
while (MBLEVEN_MATRIX[pos]) {
m = MBLEVEN_MATRIX[pos++];
i = j = c = 0;
while (i < len1 && j < len2) {
if (s1[i] != s2[j]) {
c++;
if (!m) break;
if (m & 1) i++;
if (m & 2) j++;
m >>= 2;
} else {
i++;
j++;
}
}
c += (len1 - i) + (len2 - j);
r = MIN(r, c);
if (r < 2) {
return r;
}
}
return r;
}
static int64_t mbleven(PyObject *o1, PyObject *o2, int64_t k)
{
int pos;
uint8_t m;
int64_t i, j, c, r;
struct strbuf s1, s2;
strbuf_init(&s1, o1);
strbuf_init(&s2, o2);
if (s1.len < s2.len)
return mbleven(o2, o1, k);
if (k > 3)
return -1;
if (k < s1.len - s2.len)
return k + 1;
if (ISASCII(s1.kind) && ISASCII(s2.kind))
return mbleven_ascii(s1.ptr, s1.len, s2.ptr, s2.len, k);
pos = MBLEVEN_MATRIX_GET(k, s1.len - s2.len);
r = k + 1;
while (MBLEVEN_MATRIX[pos]) {
m = MBLEVEN_MATRIX[pos++];
i = j = c = 0;
while (i < s1.len && j < s2.len) {
if (strbuf_read(&s1, i) != strbuf_read(&s2, j)) {
c++;
if (!m) break;
if (m & 1) i++;
if (m & 2) j++;
m >>= 2;
} else {
i++;
j++;
}
}
c += (s1.len - i) + (s2.len - j);
r = MIN(r, c);
if (r < 2) {
return r;
}
}
return r;
}
/*
* Data structure to store Peq (equality bit-vector).
*/
struct blockmap_entry {
uint32_t key[128];
uint64_t val[128];
};
struct blockmap {
int64_t nr;
struct blockmap_entry *list;
};
#define blockmap_key(c) ((c) | 0x80000000U)
#define blockmap_hash(c) ((c) % 128)
static int blockmap_init(struct blockmap *map, struct strbuf *s)
{
int64_t i;
struct blockmap_entry *be;
uint32_t c, k;
uint8_t h;
map->nr = CDIV(s->len, 64);
map->list = calloc(1, map->nr * sizeof(struct blockmap_entry));
if (map->list == NULL) {
PyErr_NoMemory();
return -1;
}
for (i = 0; i < s->len; i++) {
be = &(map->list[i / 64]);
c = strbuf_read(s, i);
h = blockmap_hash(c);
k = blockmap_key(c);
while (be->key[h] && be->key[h] != k)
h = blockmap_hash(h + 1);
be->key[h] = k;
be->val[h] |= (uint64_t) 1 << (i % 64);
}
return 0;
}
static void blockmap_clear(struct blockmap *map)
{
if (map->list)
free(map->list);
map->list = NULL;
map->nr = 0;
}
static uint64_t blockmap_get(struct blockmap *map, int block, uint32_t c)
{
struct blockmap_entry *be;
uint8_t h;
uint32_t k;
h = blockmap_hash(c);
k = blockmap_key(c);
be = &(map->list[block]);
while (be->key[h] && be->key[h] != k)
h = blockmap_hash(h + 1);
return be->key[h] == k ? be->val[h] : 0;
}
/*
* Myers' bit-parallel algorithm
*
* See: G. Myers. "A fast bit-vector algorithm for approximate string
* matching based on dynamic programming." Journal of the ACM, 1999.
*/
static int64_t myers1999_block(struct strbuf *s1, struct strbuf *s2,
struct blockmap *map)
{
uint64_t Eq, Xv, Xh, Ph, Mh, Pv, Mv, Last;
uint64_t *Mhc, *Phc;
int64_t i, b, hsize, vsize, Score;
uint8_t Pb, Mb;
hsize = CDIV(s1->len, 64);
vsize = CDIV(s2->len, 64);
Score = s2->len;
Phc = malloc(hsize * 2 * sizeof(uint64_t));
if (Phc == NULL) {
PyErr_NoMemory();
return -1;
}
Mhc = Phc + hsize;
memset(Phc, -1, hsize * sizeof(uint64_t));
memset(Mhc, 0, hsize * sizeof(uint64_t));
Last = (uint64_t)1 << ((s2->len - 1) % 64);
for (b = 0; b < vsize; b++) {
Mv = 0;
Pv = (uint64_t) -1;
Score = s2->len;
for (i = 0; i < s1->len; i++) {
Eq = blockmap_get(map, b, strbuf_read(s1, i));
Pb = BIT(Phc[i / 64], i % 64);
Mb = BIT(Mhc[i / 64], i % 64);
Xv = Eq | Mv;
Xh = ((((Eq | Mb) & Pv) + Pv) ^ Pv) | Eq | Mb;
Ph = Mv | ~ (Xh | Pv);
Mh = Pv & Xh;
if (Ph & Last) Score++;
if (Mh & Last) Score--;
if ((Ph >> 63) ^ Pb)
Phc[i / 64] = FLIP(Phc[i / 64], i % 64);
if ((Mh >> 63) ^ Mb)
Mhc[i / 64] = FLIP(Mhc[i / 64], i % 64);
Ph = (Ph << 1) | Pb;
Mh = (Mh << 1) | Mb;
Pv = Mh | ~ (Xv | Ph);
Mv = Ph & Xv;
}
}
free(Phc);
return Score;
}
static int64_t myers1999_simple(uint8_t *s1, int64_t len1, uint8_t *s2, int64_t len2)
{
uint64_t Peq[256];
uint64_t Eq, Xv, Xh, Ph, Mh, Pv, Mv, Last;
int64_t i;
int64_t Score = len2;
memset(Peq, 0, sizeof(Peq));
for (i = 0; i < len2; i++)
Peq[s2[i]] |= (uint64_t) 1 << i;
Mv = 0;
Pv = (uint64_t) -1;
Last = (uint64_t) 1 << (len2 - 1);
for (i = 0; i < len1; i++) {
Eq = Peq[s1[i]];
Xv = Eq | Mv;
Xh = (((Eq & Pv) + Pv) ^ Pv) | Eq;
Ph = Mv | ~ (Xh | Pv);
Mh = Pv & Xh;
if (Ph & Last) Score++;
if (Mh & Last) Score--;
Ph = (Ph << 1) | 1;
Mh = (Mh << 1);
Pv = Mh | ~ (Xv | Ph);
Mv = Ph & Xv;
}
return Score;
}
static int64_t myers1999(PyObject *o1, PyObject *o2)
{
struct strbuf s1, s2;
struct blockmap map;
int64_t ret;
strbuf_init(&s1, o1);
strbuf_init(&s2, o2);
if (s1.len < s2.len)
return myers1999(o2, o1);
if (ISASCII(s1.kind) && ISASCII(s2.kind) && s2.len < 65)
return myers1999_simple(s1.ptr, s1.len, s2.ptr, s2.len);
if (blockmap_init(&map, &s2))
return -1;
ret = myers1999_block(&s1, &s2, &map);
blockmap_clear(&map);
return ret;
}
/*
* Interface functions
*/
static int64_t polyleven(PyObject *o1, PyObject *o2, int64_t k)
{
int64_t len1, len2;
len1 = PyUnicode_GET_LENGTH(o1);
len2 = PyUnicode_GET_LENGTH(o2);
if (len1 < len2)
return polyleven(o2, o1, k);
if (k == 0)
return PyUnicode_Compare(o1, o2) ? 1 : 0;
if (0 < k && k < len1 - len2)
return k + 1;
if (len2 == 0)
return len1;
if (0 < k && k < 4)
return mbleven(o1, o2, k);
return myers1999(o1, o2);
}
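The vendored C extension above is exposed through a small Python helper, `spacy.matcher.levenshtein`, which the new tests below exercise. A minimal sketch of how it is called (distances taken from those test cases; this assumes the extension has been compiled as part of the build):

```python
from spacy.matcher import levenshtein

# Counts single-character insertions, deletions and substitutions.
assert levenshtein("", "") == 0
assert levenshtein("bbcb", "caba") == 4
assert levenshtein("うあい", "いいうい") == 3
```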


@ -89,11 +89,14 @@ def pipes_with_nvtx_range(
             types.MethodType(nvtx_range_wrapper_for_pipe_method, pipe), func
         )
-        # Try to preserve the original function signature.
+        # We need to preserve the original function signature so that
+        # the original parameters are passed to pydantic for validation downstream.
         try:
             wrapped_func.__signature__ = inspect.signature(func)  # type: ignore
         except:
-            pass
+            # Can fail for Cython methods that do not have bindings.
+            warnings.warn(Warnings.W122.format(method=name, pipe=pipe.name))
+            continue
         try:
             setattr(
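The pattern used here, copying `inspect.signature` onto a wrapper so downstream validation still sees the real parameters, can be illustrated in isolation. This is only a sketch; the helper name below is hypothetical and not part of spaCy:

```python
import inspect
import warnings

def wrap_preserving_signature(func, wrapper):
    """Attach func's signature to wrapper, warning if that isn't possible."""
    try:
        # Downstream tools (e.g. pydantic validation) inspect the signature,
        # so copy it over from the original callable.
        wrapper.__signature__ = inspect.signature(func)
        return wrapper
    except (ValueError, TypeError):
        # Cython methods without bindings may not expose a signature.
        warnings.warn(f"could not determine signature for {func!r}")
        return func
```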


@ -1,4 +1,4 @@
-# cython: infer_types=True, profile=True
+# cython: infer_types=True, profile=True, binding=True
from typing import Optional, Tuple, Iterable, Iterator, Callable, Union, Dict from typing import Optional, Tuple, Iterable, Iterator, Callable, Union, Dict
import srsly import srsly
import warnings import warnings


@ -1,4 +1,4 @@
-# cython: infer_types=True, profile=True
+# cython: infer_types=True, profile=True, binding=True
from typing import Iterable, Iterator, Optional, Dict, Tuple, Callable from typing import Iterable, Iterator, Optional, Dict, Tuple, Callable
import srsly import srsly
from thinc.api import set_dropout_rate, Model, Optimizer from thinc.api import set_dropout_rate, Model, Optimizer


@ -256,11 +256,21 @@ def ko_tokenizer_tokenizer():
     return nlp.tokenizer


+@pytest.fixture(scope="module")
+def la_tokenizer():
+    return get_lang_class("la")().tokenizer


 @pytest.fixture(scope="session")
 def lb_tokenizer():
     return get_lang_class("lb")().tokenizer


+@pytest.fixture(scope="session")
+def lg_tokenizer():
+    return get_lang_class("lg")().tokenizer


 @pytest.fixture(scope="session")
 def lt_tokenizer():
     return get_lang_class("lt")().tokenizer


@ -82,6 +82,21 @@ def test_issue2396(en_vocab):
    assert (span.get_lca_matrix() == matrix).all()
@pytest.mark.issue(11499)
def test_init_args_unmodified(en_vocab):
words = ["A", "sentence"]
ents = ["B-TYPE1", ""]
sent_starts = [True, False]
Doc(
vocab=en_vocab,
words=words,
ents=ents,
sent_starts=sent_starts,
)
assert ents == ["B-TYPE1", ""]
assert sent_starts == [True, False]
@pytest.mark.parametrize("text", ["-0.23", "+123,456", "±1"])
@pytest.mark.parametrize("lang_cls", [English, MultiLanguage])
@pytest.mark.issue(2782)



@ -0,0 +1,8 @@
import pytest
def test_la_tokenizer_handles_exc_in_text(la_tokenizer):
text = "scio te omnia facturum, ut nobiscum quam primum sis"
tokens = la_tokenizer(text)
assert len(tokens) == 11
assert tokens[6].text == "nobis"


@ -0,0 +1,35 @@
import pytest
from spacy.lang.la.lex_attrs import like_num
@pytest.mark.parametrize(
"text,match",
[
("IIII", True),
("VI", True),
("vi", True),
("IV", True),
("iv", True),
("IX", True),
("ix", True),
("MMXXII", True),
("0", True),
("1", True),
("quattuor", True),
("decem", True),
("tertius", True),
("canis", False),
("MMXX11", False),
(",", False),
],
)
def test_lex_attrs_like_number(la_tokenizer, text, match):
tokens = la_tokenizer(text)
assert len(tokens) == 1
assert tokens[0].like_num == match
@pytest.mark.parametrize("word", ["quinque"])
def test_la_lex_attrs_capitals(word):
assert like_num(word)
assert like_num(word.upper())



@ -0,0 +1,15 @@
import pytest
LG_BASIC_TOKENIZATION_TESTS = [
(
"Abooluganda abemmamba ababiri",
["Abooluganda", "abemmamba", "ababiri"],
),
]
@pytest.mark.parametrize("text,expected_tokens", LG_BASIC_TOKENIZATION_TESTS)
def test_lg_tokenizer_basic(lg_tokenizer, text, expected_tokens):
tokens = lg_tokenizer(text)
token_list = [token.text for token in tokens if not token.is_space]
assert expected_tokens == token_list


@ -0,0 +1,44 @@
import pytest
from spacy.matcher import levenshtein
# empty string plus 10 random ASCII, 10 random unicode, and 2 random long tests
# from polyleven
@pytest.mark.parametrize(
"dist,a,b",
[
(0, "", ""),
(4, "bbcb", "caba"),
(3, "abcb", "cacc"),
(3, "aa", "ccc"),
(1, "cca", "ccac"),
(1, "aba", "aa"),
(4, "bcbb", "abac"),
(3, "acbc", "bba"),
(3, "cbba", "a"),
(2, "bcc", "ba"),
(4, "aaa", "ccbb"),
(3, "うあい", "いいうい"),
(2, "あううい", "うあい"),
(3, "いういい", "うううあ"),
(2, "うい", "あいあ"),
(2, "いあい", "いう"),
(1, "いい", "あいい"),
(3, "あうあ", "いいああ"),
(4, "いあうう", "ううああ"),
(3, "いあいい", "ういああ"),
(3, "いいああ", "ううあう"),
(
166,
"TCTGGGCACGGATTCGTCAGATTCCATGTCCATATTTGAGGCTCTTGCAGGCAAAATTTGGGCATGTGAACTCCTTATAGTCCCCGTGC",
"ATATGGATTGGGGGCATTCAAAGATACGGTTTCCCTTTCTTCAGTTTCGCGCGGCGCACGTCCGGGTGCGAGCCAGTTCGTCTTACTCACATTGTCGACTTCACGAATCGCGCATGATGTGCTTAGCCTGTACTTACGAACGAACTTTCGGTCCAAATACATTCTATCAACACCGAGGTATCCGTGCCACACGCCGAAGCTCGACCGTGTTCGTTGAGAGGTGGAAATGGTAAAAGATGAACATAGTC",
),
(
111,
"GGTTCGGCCGAATTCATAGAGCGTGGTAGTCGACGGTATCCCGCCTGGTAGGGGCCCCTTCTACCTAGCGGAAGTTTGTCAGTACTCTATAACACGAGGGCCTCTCACACCCTAGATCGTCCAGCCACTCGAAGATCGCAGCACCCTTACAGAAAGGCATTAATGTTTCTCCTAGCACTTGTGCAATGGTGAAGGAGTGATG",
"CGTAACACTTCGCGCTACTGGGCTGCAACGTCTTGGGCATACATGCAAGATTATCTAATGCAAGCTTGAGCCCCGCTTGCGGAATTTCCCTAATCGGGGTCCCTTCCTGTTACGATAAGGACGCGTGCACT",
),
],
)
def test_levenshtein(dist, a, b):
assert levenshtein(a, b) == dist


@ -368,6 +368,16 @@ def test_matcher_intersect_value_operator(en_vocab):
    doc[0]._.ext = ["A", "B"]
    assert len(matcher(doc)) == 1
# INTERSECTS matches nothing for iterables that aren't all str or int
matcher = Matcher(en_vocab)
pattern = [{"_": {"ext": {"INTERSECTS": ["Abx", "C"]}}}]
matcher.add("M", [pattern])
doc = Doc(en_vocab, words=["a", "b", "c"])
doc[0]._.ext = [["Abx"], "B"]
assert len(matcher(doc)) == 0
doc[0]._.ext = ["Abx", "B"]
assert len(matcher(doc)) == 1
# INTERSECTS with an empty pattern list matches nothing # INTERSECTS with an empty pattern list matches nothing
matcher = Matcher(en_vocab) matcher = Matcher(en_vocab)
pattern = [{"_": {"ext": {"INTERSECTS": []}}}] pattern = [{"_": {"ext": {"INTERSECTS": []}}}]
@ -476,14 +486,22 @@ def test_matcher_extension_set_membership(en_vocab):
     assert len(matches) == 0


-@pytest.mark.xfail(reason="IN predicate must handle sequence values in extensions")
 def test_matcher_extension_in_set_predicate(en_vocab):
     matcher = Matcher(en_vocab)
     Token.set_extension("ext", default=[])
     pattern = [{"_": {"ext": {"IN": ["A", "C"]}}}]
     matcher.add("M", [pattern])
     doc = Doc(en_vocab, words=["a", "b", "c"])
+    # The IN predicate expects an exact match between the
+    # extension value and one of the pattern's values.
     doc[0]._.ext = ["A", "B"]
+    assert len(matcher(doc)) == 0
+    doc[0]._.ext = ["A"]
+    assert len(matcher(doc)) == 0
+    doc[0]._.ext = "A"
     assert len(matcher(doc)) == 1


@ -17,6 +17,7 @@ def test_build_dependencies():
         "types-dataclasses",
         "types-mock",
         "types-requests",
+        "types-setuptools",
     ]
     # ignore language-specific packages that shouldn't be installed by all
     libs_ignore_setup = [


@ -605,10 +605,35 @@ def test_update_with_annotates():
     assert results[component] == ""


-def test_load_disable_enable() -> None:
-    """
-    Tests spacy.load() with dis-/enabling components.
-    """
+@pytest.mark.issue(11443)
+def test_enable_disable_conflict_with_config():
+    """Test conflict between enable/disable w.r.t. `nlp.disabled` set in the config."""
+    nlp = English()
+    nlp.add_pipe("tagger")
+    nlp.add_pipe("senter")
+    nlp.add_pipe("sentencizer")
+    with make_tempdir() as tmp_dir:
+        nlp.to_disk(tmp_dir)
+        # Expected to fail, as config and arguments conflict.
+        with pytest.raises(ValueError):
+            spacy.load(
+                tmp_dir, enable=["tagger"], config={"nlp": {"disabled": ["senter"]}}
+            )
+        # Expected to succeed without warning due to the lack of a conflicting config option.
+        spacy.load(tmp_dir, enable=["tagger"])
+        # Expected to succeed with a warning, as disable=[] should override the config setting.
+        with pytest.warns(UserWarning):
+            spacy.load(
+                tmp_dir,
+                enable=["tagger"],
+                disable=[],
+                config={"nlp": {"disabled": ["senter"]}},
+            )
+
+
+def test_load_disable_enable():
+    """Tests spacy.load() with dis-/enabling components."""
     base_nlp = English()
     for pipe in ("sentencizer", "tagger", "parser"):
@ -618,6 +643,7 @@ def test_load_disable_enable() -> None:
         base_nlp.to_disk(tmp_dir)
         to_disable = ["parser", "tagger"]
         to_enable = ["tagger", "parser"]
+        single_str = "tagger"

         # Setting only `disable`.
         nlp = spacy.load(tmp_dir, disable=to_disable)
@ -632,6 +658,16 @@ def test_load_disable_enable() -> None:
             ]
         )

+        # Loading with a string representing one component
+        nlp = spacy.load(tmp_dir, exclude=single_str)
+        assert single_str not in nlp.component_names
+        nlp = spacy.load(tmp_dir, disable=single_str)
+        assert single_str in nlp.component_names
+        assert single_str not in nlp.pipe_names
+        assert nlp._disabled == {single_str}
+        assert nlp.disabled == [single_str]

         # Testing consistent enable/disable combination.
         nlp = spacy.load(
             tmp_dir,


@ -404,10 +404,11 @@ def test_serialize_pipeline_disable_enable():
     assert nlp3.component_names == ["ner", "tagger"]
     with make_tempdir() as d:
         nlp3.to_disk(d)
-        nlp4 = spacy.load(d, disable=["ner"])
-        assert nlp4.pipe_names == []
+        with pytest.warns(UserWarning):
+            nlp4 = spacy.load(d, disable=["ner"])
+        assert nlp4.pipe_names == ["tagger"]
         assert nlp4.component_names == ["ner", "tagger"]
-        assert nlp4.disabled == ["ner", "tagger"]
+        assert nlp4.disabled == ["ner"]
     with make_tempdir() as d:
         nlp.to_disk(d)
         nlp5 = spacy.load(d, exclude=["tagger"])


@ -670,3 +670,25 @@ def test_dot_in_factory_names(nlp):
    with pytest.raises(ValueError, match="not permitted"):
        Language.factory("my.evil.component.v1", func=evil_component)
def test_component_return():
"""Test that an error is raised if components return a type other than a
doc."""
nlp = English()
@Language.component("test_component_good_pipe")
def good_pipe(doc):
return doc
nlp.add_pipe("test_component_good_pipe")
nlp("text")
nlp.remove_pipe("test_component_good_pipe")
@Language.component("test_component_bad_pipe")
def bad_pipe(doc):
return doc.text
nlp.add_pipe("test_component_bad_pipe")
with pytest.raises(ValueError, match="instead of a Doc"):
nlp("text")


@ -10,7 +10,8 @@ from spacy.ml._precomputable_affine import _backprop_precomputable_affine_paddin
 from spacy.util import dot_to_object, SimpleFrozenList, import_file
 from spacy.util import to_ternary_int
 from thinc.api import Config, Optimizer, ConfigValidationError
-from thinc.api import set_current_ops
+from thinc.api import get_current_ops, set_current_ops, NumpyOps, CupyOps, MPSOps
+from thinc.compat import has_cupy_gpu, has_torch_mps_gpu
 from spacy.training.batchers import minibatch_by_words
 from spacy.lang.en import English
 from spacy.lang.nl import Dutch
@ -18,7 +19,6 @@ from spacy.language import DEFAULT_CONFIG_PATH
 from spacy.schemas import ConfigSchemaTraining, TokenPattern, TokenPatternSchema
 from pydantic import ValidationError
-from thinc.api import get_current_ops, NumpyOps, CupyOps
from .util import get_random_doc, make_tempdir from .util import get_random_doc, make_tempdir
@ -111,26 +111,25 @@ def test_PrecomputableAffine(nO=4, nI=5, nF=3, nP=2):
 def test_prefer_gpu():
     current_ops = get_current_ops()
-    try:
-        import cupy  # noqa: F401
-        prefer_gpu()
+    if has_cupy_gpu:
+        assert prefer_gpu()
         assert isinstance(get_current_ops(), CupyOps)
-    except ImportError:
+    elif has_torch_mps_gpu:
+        assert prefer_gpu()
+        assert isinstance(get_current_ops(), MPSOps)
+    else:
         assert not prefer_gpu()
     set_current_ops(current_ops)


 def test_require_gpu():
     current_ops = get_current_ops()
-    try:
-        import cupy  # noqa: F401
+    if has_cupy_gpu:
         require_gpu()
         assert isinstance(get_current_ops(), CupyOps)
-    except ImportError:
-        with pytest.raises(ValueError):
-            require_gpu()
+    elif has_torch_mps_gpu:
+        require_gpu()
+        assert isinstance(get_current_ops(), MPSOps)
     set_current_ops(current_ops)


@ -31,7 +31,7 @@ def doc(nlp):
words = ["Sarah", "'s", "sister", "flew", "to", "Silicon", "Valley", "via", "London", "."] words = ["Sarah", "'s", "sister", "flew", "to", "Silicon", "Valley", "via", "London", "."]
tags = ["NNP", "POS", "NN", "VBD", "IN", "NNP", "NNP", "IN", "NNP", "."] tags = ["NNP", "POS", "NN", "VBD", "IN", "NNP", "NNP", "IN", "NNP", "."]
pos = ["PROPN", "PART", "NOUN", "VERB", "ADP", "PROPN", "PROPN", "ADP", "PROPN", "PUNCT"] pos = ["PROPN", "PART", "NOUN", "VERB", "ADP", "PROPN", "PROPN", "ADP", "PROPN", "PUNCT"]
-    ents = ["B-PERSON", "I-PERSON", "O", "O", "O", "B-LOC", "I-LOC", "O", "B-GPE", "O"]
+    ents = ["B-PERSON", "I-PERSON", "O", "", "O", "B-LOC", "I-LOC", "O", "B-GPE", "O"]
cats = {"TRAVEL": 1.0, "BAKING": 0.0} cats = {"TRAVEL": 1.0, "BAKING": 0.0}
# fmt: on # fmt: on
doc = Doc(nlp.vocab, words=words, tags=tags, pos=pos, ents=ents) doc = Doc(nlp.vocab, words=words, tags=tags, pos=pos, ents=ents)
@ -106,6 +106,7 @@ def test_lowercase_augmenter(nlp, doc):
assert [(e.start, e.end, e.label) for e in eg.reference.ents] == ents assert [(e.start, e.end, e.label) for e in eg.reference.ents] == ents
for ref_ent, orig_ent in zip(eg.reference.ents, doc.ents): for ref_ent, orig_ent in zip(eg.reference.ents, doc.ents):
assert ref_ent.text == orig_ent.text.lower() assert ref_ent.text == orig_ent.text.lower()
assert [t.ent_iob for t in doc] == [t.ent_iob for t in eg.reference]
assert [t.pos_ for t in eg.reference] == [t.pos_ for t in doc] assert [t.pos_ for t in eg.reference] == [t.pos_ for t in doc]
# check that augmentation works when lowercasing leads to different # check that augmentation works when lowercasing leads to different
@ -166,7 +167,7 @@ def test_make_whitespace_variant(nlp):
lemmas = ["they", "fly", "to", "New", "York", "City", ".", "\n", "then", "they", "drive", "to", "Washington", ",", "D.C."] lemmas = ["they", "fly", "to", "New", "York", "City", ".", "\n", "then", "they", "drive", "to", "Washington", ",", "D.C."]
heads = [1, 1, 1, 4, 5, 2, 1, 10, 10, 10, 10, 10, 11, 12, 12] heads = [1, 1, 1, 4, 5, 2, 1, 10, 10, 10, 10, 10, 11, 12, 12]
deps = ["nsubj", "ROOT", "prep", "compound", "compound", "pobj", "punct", "dep", "advmod", "nsubj", "ROOT", "prep", "pobj", "punct", "appos"] deps = ["nsubj", "ROOT", "prep", "compound", "compound", "pobj", "punct", "dep", "advmod", "nsubj", "ROOT", "prep", "pobj", "punct", "appos"]
-    ents = ["O", "O", "O", "B-GPE", "I-GPE", "I-GPE", "O", "O", "O", "O", "O", "O", "B-GPE", "O", "B-GPE"]
+    ents = ["O", "", "O", "B-GPE", "I-GPE", "I-GPE", "O", "O", "O", "O", "O", "O", "B-GPE", "O", "B-GPE"]
# fmt: on # fmt: on
doc = Doc( doc = Doc(
nlp.vocab, nlp.vocab,
@ -215,6 +216,8 @@ def test_make_whitespace_variant(nlp):
assert mod_ex2.reference[j].head.i == j - 1 assert mod_ex2.reference[j].head.i == j - 1
# entities are well-formed # entities are well-formed
assert len(doc.ents) == len(mod_ex.reference.ents) assert len(doc.ents) == len(mod_ex.reference.ents)
# there is one token with missing entity information
assert any(t.ent_iob == 0 for t in mod_ex.reference)
for ent in mod_ex.reference.ents: for ent in mod_ex.reference.ents:
assert not ent[0].is_space assert not ent[0].is_space
assert not ent[-1].is_space assert not ent[-1].is_space


@ -0,0 +1,30 @@
import pytest
import spacy
from spacy.training import loggers
@pytest.fixture()
def nlp():
nlp = spacy.blank("en")
nlp.add_pipe("ner")
return nlp
@pytest.fixture()
def info():
return {
"losses": {"ner": 100},
"other_scores": {"ENTS_F": 0.85, "ENTS_P": 0.90, "ENTS_R": 0.80},
"epoch": 100,
"step": 125,
"score": 85,
}
def test_console_logger(nlp, info):
console_logger = loggers.console_logger(
progress_bar=True, console_output=True, output_file=None
)
log_step, finalize = console_logger(nlp)
log_step(info)


@ -72,7 +72,7 @@ class Doc:
lemmas: Optional[List[str]] = ..., lemmas: Optional[List[str]] = ...,
heads: Optional[List[int]] = ..., heads: Optional[List[int]] = ...,
deps: Optional[List[str]] = ..., deps: Optional[List[str]] = ...,
-        sent_starts: Optional[List[Union[bool, None]]] = ...,
+        sent_starts: Optional[List[Union[bool, int, None]]] = ...,
ents: Optional[List[str]] = ..., ents: Optional[List[str]] = ...,
) -> None: ... ) -> None: ...
@property @property


@ -217,9 +217,9 @@ cdef class Doc:
head in the doc. Defaults to None. head in the doc. Defaults to None.
deps (Optional[List[str]]): A list of unicode strings, of the same deps (Optional[List[str]]): A list of unicode strings, of the same
length as words, to assign as token.dep. Defaults to None. length as words, to assign as token.dep. Defaults to None.
-        sent_starts (Optional[List[Union[bool, None]]]): A list of values, of
-            the same length as words, to assign as token.is_sent_start. Will be
-            overridden by heads if heads is provided. Defaults to None.
+        sent_starts (Optional[List[Union[bool, int, None]]]): A list of values,
+            of the same length as words, to assign as token.is_sent_start. Will
+            be overridden by heads if heads is provided. Defaults to None.
ents (Optional[List[str]]): A list of unicode strings, of the same ents (Optional[List[str]]): A list of unicode strings, of the same
length as words, as IOB tags to assign as token.ent_iob and length as words, as IOB tags to assign as token.ent_iob and
token.ent_type. Defaults to None. token.ent_type. Defaults to None.
@ -285,6 +285,7 @@ cdef class Doc:
heads = [0] * len(deps) heads = [0] * len(deps)
if heads and not deps: if heads and not deps:
raise ValueError(Errors.E1017) raise ValueError(Errors.E1017)
sent_starts = list(sent_starts) if sent_starts is not None else None
if sent_starts is not None: if sent_starts is not None:
for i in range(len(sent_starts)): for i in range(len(sent_starts)):
if sent_starts[i] is True: if sent_starts[i] is True:
@ -300,12 +301,11 @@ cdef class Doc:
ent_iobs = None ent_iobs = None
ent_types = None ent_types = None
         if ents is not None:
+            ents = [ent if ent != "" else None for ent in ents]
             iob_strings = Token.iob_strings()
             # make valid IOB2 out of IOB1 or IOB2
             for i, ent in enumerate(ents):
-                if ent is "":
-                    ents[i] = None
-                elif ent is not None and not isinstance(ent, str):
+                if ent is not None and not isinstance(ent, str):
                     raise ValueError(Errors.E177.format(tag=ent))
                 if i < len(ents) - 1:
                     # OI -> OB
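Taken together with the `Doc.__init__` test above, the effect of these changes can be sketched as follows; the words and labels are illustrative, and `""` now marks a token whose entity annotation is missing rather than explicitly outside:

```python
from spacy.tokens import Doc
from spacy.vocab import Vocab

vocab = Vocab()
words = ["Apple", "is", "huge", "."]
# "" = missing entity annotation, "O" = explicitly outside an entity;
# sent_starts also accepts the ints 1 / -1 / 0 alongside True / False / None.
doc = Doc(
    vocab,
    words=words,
    ents=["B-ORG", "O", "", "O"],
    sent_starts=[1, -1, -1, -1],
)
assert [t.ent_iob_ for t in doc] == ["B", "O", "", "O"]
```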


@ -6,7 +6,7 @@ from functools import partial
from ..util import registry from ..util import registry
from .example import Example from .example import Example
-from .iob_utils import split_bilu_label
+from .iob_utils import split_bilu_label, _doc_to_biluo_tags_with_partial
if TYPE_CHECKING: if TYPE_CHECKING:
from ..language import Language # noqa: F401 from ..language import Language # noqa: F401
@ -62,6 +62,9 @@ def combined_augmenter(
if orth_variants and random.random() < orth_level: if orth_variants and random.random() < orth_level:
raw_text = example.text raw_text = example.text
orig_dict = example.to_dict() orig_dict = example.to_dict()
orig_dict["doc_annotation"]["entities"] = _doc_to_biluo_tags_with_partial(
example.reference
)
variant_text, variant_token_annot = make_orth_variants( variant_text, variant_token_annot = make_orth_variants(
nlp, nlp,
raw_text, raw_text,
@ -128,6 +131,9 @@ def lower_casing_augmenter(
def make_lowercase_variant(nlp: "Language", example: Example): def make_lowercase_variant(nlp: "Language", example: Example):
example_dict = example.to_dict() example_dict = example.to_dict()
example_dict["doc_annotation"]["entities"] = _doc_to_biluo_tags_with_partial(
example.reference
)
doc = nlp.make_doc(example.text.lower()) doc = nlp.make_doc(example.text.lower())
example_dict["token_annotation"]["ORTH"] = [t.lower_ for t in example.reference] example_dict["token_annotation"]["ORTH"] = [t.lower_ for t in example.reference]
return example.from_dict(doc, example_dict) return example.from_dict(doc, example_dict)
@ -146,6 +152,9 @@ def orth_variants_augmenter(
else: else:
raw_text = example.text raw_text = example.text
orig_dict = example.to_dict() orig_dict = example.to_dict()
orig_dict["doc_annotation"]["entities"] = _doc_to_biluo_tags_with_partial(
example.reference
)
variant_text, variant_token_annot = make_orth_variants( variant_text, variant_token_annot = make_orth_variants(
nlp, nlp,
raw_text, raw_text,
@ -248,6 +257,9 @@ def make_whitespace_variant(
RETURNS (Example): Example with one additional space token. RETURNS (Example): Example with one additional space token.
""" """
example_dict = example.to_dict() example_dict = example.to_dict()
example_dict["doc_annotation"]["entities"] = _doc_to_biluo_tags_with_partial(
example.reference
)
doc_dict = example_dict.get("doc_annotation", {}) doc_dict = example_dict.get("doc_annotation", {})
token_dict = example_dict.get("token_annotation", {}) token_dict = example_dict.get("token_annotation", {})
# returned unmodified if: # returned unmodified if:


@ -60,6 +60,14 @@ def doc_to_biluo_tags(doc: Doc, missing: str = "O"):
) )
def _doc_to_biluo_tags_with_partial(doc: Doc) -> List[str]:
ents = doc_to_biluo_tags(doc, missing="-")
for i, token in enumerate(doc):
if token.ent_iob == 2:
ents[i] = "O"
return ents
def offsets_to_biluo_tags( def offsets_to_biluo_tags(
doc: Doc, entities: Iterable[Tuple[int, int, Union[str, int]]], missing: str = "O" doc: Doc, entities: Iterable[Tuple[int, int, Union[str, int]]], missing: str = "O"
) -> List[str]: ) -> List[str]:


@ -1,10 +1,13 @@
-from typing import TYPE_CHECKING, Dict, Any, Tuple, Callable, List, Optional, IO
+from typing import TYPE_CHECKING, Dict, Any, Tuple, Callable, List, Optional, IO, Union
 from wasabi import Printer
from pathlib import Path
import tqdm import tqdm
import sys import sys
import srsly
from ..util import registry from ..util import registry
from ..errors import Errors from ..errors import Errors
from .. import util
if TYPE_CHECKING: if TYPE_CHECKING:
from ..language import Language # noqa: F401 from ..language import Language # noqa: F401
@ -23,13 +26,44 @@ def setup_table(
return final_cols, final_widths, ["r" for _ in final_widths] return final_cols, final_widths, ["r" for _ in final_widths]
-@registry.loggers("spacy.ConsoleLogger.v1")
-def console_logger(progress_bar: bool = False):
+@registry.loggers("spacy.ConsoleLogger.v2")
+def console_logger(
+    progress_bar: bool = False,
+    console_output: bool = True,
+    output_file: Optional[Union[str, Path]] = None,
+):
"""The ConsoleLogger.v2 prints out training logs in the console and/or saves them to a jsonl file.
progress_bar (bool): Whether the logger should print the progress bar.
console_output (bool): Whether the logger should print the logs on the console.
output_file (Optional[Union[str, Path]]): The file to save the training logs to.
"""
_log_exist = False
if output_file:
output_file = util.ensure_path(output_file) # type: ignore
if output_file.exists(): # type: ignore
_log_exist = True
if not output_file.parents[0].exists(): # type: ignore
output_file.parents[0].mkdir(parents=True) # type: ignore
def setup_printer( def setup_printer(
nlp: "Language", stdout: IO = sys.stdout, stderr: IO = sys.stderr nlp: "Language", stdout: IO = sys.stdout, stderr: IO = sys.stderr
) -> Tuple[Callable[[Optional[Dict[str, Any]]], None], Callable[[], None]]: ) -> Tuple[Callable[[Optional[Dict[str, Any]]], None], Callable[[], None]]:
write = lambda text: print(text, file=stdout, flush=True) write = lambda text: print(text, file=stdout, flush=True)
msg = Printer(no_print=True) msg = Printer(no_print=True)
nonlocal output_file
output_stream = None
if _log_exist:
write(
msg.warn(
f"Saving logs is disabled because {output_file} already exists."
)
)
output_file = None
elif output_file:
write(msg.info(f"Saving results to {output_file}"))
output_stream = open(output_file, "w", encoding="utf-8")
# ensure that only trainable components are logged # ensure that only trainable components are logged
logged_pipes = [ logged_pipes = [
name name
@ -40,13 +74,15 @@ def console_logger(progress_bar: bool = False):
score_weights = nlp.config["training"]["score_weights"] score_weights = nlp.config["training"]["score_weights"]
score_cols = [col for col, value in score_weights.items() if value is not None] score_cols = [col for col, value in score_weights.items() if value is not None]
loss_cols = [f"Loss {pipe}" for pipe in logged_pipes] loss_cols = [f"Loss {pipe}" for pipe in logged_pipes]
-        spacing = 2
-        table_header, table_widths, table_aligns = setup_table(
-            cols=["E", "#"] + loss_cols + score_cols + ["Score"],
-            widths=[3, 6] + [8 for _ in loss_cols] + [6 for _ in score_cols] + [6],
-        )
-        write(msg.row(table_header, widths=table_widths, spacing=spacing))
-        write(msg.row(["-" * width for width in table_widths], spacing=spacing))
+        if console_output:
+            spacing = 2
+            table_header, table_widths, table_aligns = setup_table(
+                cols=["E", "#"] + loss_cols + score_cols + ["Score"],
+                widths=[3, 6] + [8 for _ in loss_cols] + [6 for _ in score_cols] + [6],
+            )
+            write(msg.row(table_header, widths=table_widths, spacing=spacing))
+            write(msg.row(["-" * width for width in table_widths], spacing=spacing))
         progress = None
def log_step(info: Optional[Dict[str, Any]]) -> None: def log_step(info: Optional[Dict[str, Any]]) -> None:
@ -57,12 +93,15 @@ def console_logger(progress_bar: bool = False):
if progress is not None: if progress is not None:
progress.update(1) progress.update(1)
return return
-            losses = [
-                "{0:.2f}".format(float(info["losses"][pipe_name]))
-                for pipe_name in logged_pipes
-            ]
+            losses = []
+            log_losses = {}
+            for pipe_name in logged_pipes:
+                losses.append("{0:.2f}".format(float(info["losses"][pipe_name])))
+                log_losses[pipe_name] = float(info["losses"][pipe_name])

             scores = []
+            log_scores = {}
for col in score_cols: for col in score_cols:
score = info["other_scores"].get(col, 0.0) score = info["other_scores"].get(col, 0.0)
try: try:
@ -73,6 +112,7 @@ def console_logger(progress_bar: bool = False):
if col != "speed": if col != "speed":
score *= 100 score *= 100
scores.append("{0:.2f}".format(score)) scores.append("{0:.2f}".format(score))
log_scores[str(col)] = score
data = ( data = (
[info["epoch"], info["step"]] [info["epoch"], info["step"]]
@ -80,20 +120,36 @@ def console_logger(progress_bar: bool = False):
+ scores + scores
+ ["{0:.2f}".format(float(info["score"]))] + ["{0:.2f}".format(float(info["score"]))]
) )
if output_stream:
# Write to log file per log_step
log_data = {
"epoch": info["epoch"],
"step": info["step"],
"losses": log_losses,
"scores": log_scores,
"score": float(info["score"]),
}
output_stream.write(srsly.json_dumps(log_data) + "\n")
if progress is not None: if progress is not None:
progress.close() progress.close()
-            write(
-                msg.row(data, widths=table_widths, aligns=table_aligns, spacing=spacing)
-            )
-            if progress_bar:
-                # Set disable=None, so that it disables on non-TTY
-                progress = tqdm.tqdm(
-                    total=eval_frequency, disable=None, leave=False, file=stderr
-                )
-                progress.set_description(f"Epoch {info['epoch']+1}")
+            if console_output:
+                write(
+                    msg.row(
+                        data, widths=table_widths, aligns=table_aligns, spacing=spacing
+                    )
+                )
+                if progress_bar:
+                    # Set disable=None, so that it disables on non-TTY
+                    progress = tqdm.tqdm(
+                        total=eval_frequency, disable=None, leave=False, file=stderr
+                    )
+                    progress.set_description(f"Epoch {info['epoch']+1}")

     def finalize() -> None:
-        pass
+        if output_stream:
+            output_stream.close()

     return log_step, finalize
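Each step therefore appends one JSON object per line to the output file. A small sketch of reading such a file back (the file name is just an example value for `output_file`):

```python
import srsly

# Each line holds: epoch, step, per-pipe losses, per-metric scores and the combined score.
for entry in srsly.read_jsonl("training_log.jsonl"):
    print(entry["epoch"], entry["step"], entry["score"], entry["losses"])
```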


@ -67,7 +67,6 @@ LEXEME_NORM_LANGS = ["cs", "da", "de", "el", "en", "id", "lb", "mk", "pt", "ru",
CONFIG_SECTION_ORDER = ["paths", "variables", "system", "nlp", "components", "corpora", "training", "pretraining", "initialize"] CONFIG_SECTION_ORDER = ["paths", "variables", "system", "nlp", "components", "corpora", "training", "pretraining", "initialize"]
# fmt: on # fmt: on
logger = logging.getLogger("spacy") logger = logging.getLogger("spacy")
logger_stream_handler = logging.StreamHandler() logger_stream_handler = logging.StreamHandler()
logger_stream_handler.setFormatter( logger_stream_handler.setFormatter(
@ -394,13 +393,17 @@ def get_module_path(module: ModuleType) -> Path:
return file_path.parent return file_path.parent
# Default value for passed enable/disable values.
_DEFAULT_EMPTY_PIPES = SimpleFrozenList()
def load_model( def load_model(
name: Union[str, Path], name: Union[str, Path],
*, *,
vocab: Union["Vocab", bool] = True, vocab: Union["Vocab", bool] = True,
disable: Iterable[str] = SimpleFrozenList(), disable: Union[str, Iterable[str]] = _DEFAULT_EMPTY_PIPES,
enable: Iterable[str] = SimpleFrozenList(), enable: Union[str, Iterable[str]] = _DEFAULT_EMPTY_PIPES,
exclude: Iterable[str] = SimpleFrozenList(), exclude: Union[str, Iterable[str]] = _DEFAULT_EMPTY_PIPES,
config: Union[Dict[str, Any], Config] = SimpleFrozenDict(), config: Union[Dict[str, Any], Config] = SimpleFrozenDict(),
) -> "Language": ) -> "Language":
"""Load a model from a package or data path. """Load a model from a package or data path.
@ -408,9 +411,9 @@ def load_model(
name (str): Package name or model path. name (str): Package name or model path.
vocab (Vocab / True): Optional vocab to pass in on initialization. If True, vocab (Vocab / True): Optional vocab to pass in on initialization. If True,
a new Vocab object will be created. a new Vocab object will be created.
disable (Iterable[str]): Names of pipeline components to disable. disable (Union[str, Iterable[str]]): Name(s) of pipeline component(s) to disable.
enable (Iterable[str]): Names of pipeline components to enable. All others will be disabled. enable (Union[str, Iterable[str]]): Name(s) of pipeline component(s) to enable. All others will be disabled.
exclude (Iterable[str]): Names of pipeline components to exclude. exclude (Union[str, Iterable[str]]): Name(s) of pipeline component(s) to exclude.
config (Dict[str, Any] / Config): Config overrides as nested dict or dict config (Dict[str, Any] / Config): Config overrides as nested dict or dict
keyed by section values in dot notation. keyed by section values in dot notation.
RETURNS (Language): The loaded nlp object. RETURNS (Language): The loaded nlp object.
@ -440,9 +443,9 @@ def load_model_from_package(
name: str, name: str,
*, *,
vocab: Union["Vocab", bool] = True, vocab: Union["Vocab", bool] = True,
disable: Iterable[str] = SimpleFrozenList(), disable: Union[str, Iterable[str]] = SimpleFrozenList(),
enable: Iterable[str] = SimpleFrozenList(), enable: Union[str, Iterable[str]] = SimpleFrozenList(),
exclude: Iterable[str] = SimpleFrozenList(), exclude: Union[str, Iterable[str]] = SimpleFrozenList(),
config: Union[Dict[str, Any], Config] = SimpleFrozenDict(), config: Union[Dict[str, Any], Config] = SimpleFrozenDict(),
) -> "Language": ) -> "Language":
"""Load a model from an installed package. """Load a model from an installed package.
@ -450,12 +453,12 @@ def load_model_from_package(
name (str): The package name. name (str): The package name.
vocab (Vocab / True): Optional vocab to pass in on initialization. If True, vocab (Vocab / True): Optional vocab to pass in on initialization. If True,
a new Vocab object will be created. a new Vocab object will be created.
disable (Iterable[str]): Names of pipeline components to disable. Disabled disable (Union[str, Iterable[str]]): Name(s) of pipeline component(s) to disable. Disabled
pipes will be loaded but they won't be run unless you explicitly pipes will be loaded but they won't be run unless you explicitly
enable them by calling nlp.enable_pipe. enable them by calling nlp.enable_pipe.
enable (Iterable[str]): Names of pipeline components to enable. All other enable (Union[str, Iterable[str]]): Name(s) of pipeline component(s) to enable. All other
pipes will be disabled (and can be enabled using `nlp.enable_pipe`). pipes will be disabled (and can be enabled using `nlp.enable_pipe`).
exclude (Iterable[str]): Names of pipeline components to exclude. Excluded exclude (Union[str, Iterable[str]]): Name(s) of pipeline component(s) to exclude. Excluded
components won't be loaded. components won't be loaded.
config (Dict[str, Any] / Config): Config overrides as nested dict or dict config (Dict[str, Any] / Config): Config overrides as nested dict or dict
keyed by section values in dot notation. keyed by section values in dot notation.
@ -470,9 +473,9 @@ def load_model_from_path(
*, *,
meta: Optional[Dict[str, Any]] = None, meta: Optional[Dict[str, Any]] = None,
vocab: Union["Vocab", bool] = True, vocab: Union["Vocab", bool] = True,
disable: Iterable[str] = SimpleFrozenList(), disable: Union[str, Iterable[str]] = _DEFAULT_EMPTY_PIPES,
enable: Iterable[str] = SimpleFrozenList(), enable: Union[str, Iterable[str]] = _DEFAULT_EMPTY_PIPES,
exclude: Iterable[str] = SimpleFrozenList(), exclude: Union[str, Iterable[str]] = _DEFAULT_EMPTY_PIPES,
config: Union[Dict[str, Any], Config] = SimpleFrozenDict(), config: Union[Dict[str, Any], Config] = SimpleFrozenDict(),
) -> "Language": ) -> "Language":
"""Load a model from a data directory path. Creates Language class with """Load a model from a data directory path. Creates Language class with
@ -482,12 +485,12 @@ def load_model_from_path(
meta (Dict[str, Any]): Optional model meta. meta (Dict[str, Any]): Optional model meta.
vocab (Vocab / True): Optional vocab to pass in on initialization. If True, vocab (Vocab / True): Optional vocab to pass in on initialization. If True,
a new Vocab object will be created. a new Vocab object will be created.
disable (Iterable[str]): Names of pipeline components to disable. Disabled disable (Union[str, Iterable[str]]): Name(s) of pipeline component(s) to disable. Disabled
pipes will be loaded but they won't be run unless you explicitly pipes will be loaded but they won't be run unless you explicitly
enable them by calling nlp.enable_pipe. enable them by calling nlp.enable_pipe.
enable (Iterable[str]): Names of pipeline components to enable. All other enable (Union[str, Iterable[str]]): Name(s) of pipeline component(s) to enable. All other
pipes will be disabled (and can be enabled using `nlp.enable_pipe`). pipes will be disabled (and can be enabled using `nlp.enable_pipe`).
exclude (Iterable[str]): Names of pipeline components to exclude. Excluded exclude (Union[str, Iterable[str]]): Name(s) of pipeline component(s) to exclude. Excluded
components won't be loaded. components won't be loaded.
config (Dict[str, Any] / Config): Config overrides as nested dict or dict config (Dict[str, Any] / Config): Config overrides as nested dict or dict
keyed by section values in dot notation. keyed by section values in dot notation.
@ -516,9 +519,9 @@ def load_model_from_config(
*, *,
meta: Dict[str, Any] = SimpleFrozenDict(), meta: Dict[str, Any] = SimpleFrozenDict(),
vocab: Union["Vocab", bool] = True, vocab: Union["Vocab", bool] = True,
disable: Iterable[str] = SimpleFrozenList(), disable: Union[str, Iterable[str]] = _DEFAULT_EMPTY_PIPES,
enable: Iterable[str] = SimpleFrozenList(), enable: Union[str, Iterable[str]] = _DEFAULT_EMPTY_PIPES,
exclude: Iterable[str] = SimpleFrozenList(), exclude: Union[str, Iterable[str]] = _DEFAULT_EMPTY_PIPES,
auto_fill: bool = False, auto_fill: bool = False,
validate: bool = True, validate: bool = True,
) -> "Language": ) -> "Language":
@ -529,12 +532,12 @@ def load_model_from_config(
meta (Dict[str, Any]): Optional model meta. meta (Dict[str, Any]): Optional model meta.
vocab (Vocab / True): Optional vocab to pass in on initialization. If True, vocab (Vocab / True): Optional vocab to pass in on initialization. If True,
a new Vocab object will be created. a new Vocab object will be created.
disable (Iterable[str]): Names of pipeline components to disable. Disabled disable (Union[str, Iterable[str]]): Name(s) of pipeline component(s) to disable. Disabled
pipes will be loaded but they won't be run unless you explicitly pipes will be loaded but they won't be run unless you explicitly
enable them by calling nlp.enable_pipe. enable them by calling nlp.enable_pipe.
enable (Iterable[str]): Names of pipeline components to enable. All other enable (Union[str, Iterable[str]]): Name(s) of pipeline component(s) to enable. All other
pipes will be disabled (and can be enabled using `nlp.enable_pipe`). pipes will be disabled (and can be enabled using `nlp.enable_pipe`).
exclude (Iterable[str]): Names of pipeline components to exclude. Excluded exclude (Union[str, Iterable[str]]): Name(s) of pipeline component(s) to exclude. Excluded
components won't be loaded. components won't be loaded.
auto_fill (bool): Whether to auto-fill config with missing defaults. auto_fill (bool): Whether to auto-fill config with missing defaults.
validate (bool): Whether to show config validation errors. validate (bool): Whether to show config validation errors.
@ -616,9 +619,9 @@ def load_model_from_init_py(
init_file: Union[Path, str], init_file: Union[Path, str],
*, *,
vocab: Union["Vocab", bool] = True, vocab: Union["Vocab", bool] = True,
disable: Iterable[str] = SimpleFrozenList(), disable: Union[str, Iterable[str]] = SimpleFrozenList(),
enable: Iterable[str] = SimpleFrozenList(), enable: Union[str, Iterable[str]] = SimpleFrozenList(),
exclude: Iterable[str] = SimpleFrozenList(), exclude: Union[str, Iterable[str]] = SimpleFrozenList(),
config: Union[Dict[str, Any], Config] = SimpleFrozenDict(), config: Union[Dict[str, Any], Config] = SimpleFrozenDict(),
) -> "Language": ) -> "Language":
"""Helper function to use in the `load()` method of a model package's """Helper function to use in the `load()` method of a model package's
@ -626,12 +629,12 @@ def load_model_from_init_py(
vocab (Vocab / True): Optional vocab to pass in on initialization. If True, vocab (Vocab / True): Optional vocab to pass in on initialization. If True,
a new Vocab object will be created. a new Vocab object will be created.
disable (Iterable[str]): Names of pipeline components to disable. Disabled disable (Union[str, Iterable[str]]): Name(s) of pipeline component(s) to disable. Disabled
pipes will be loaded but they won't be run unless you explicitly pipes will be loaded but they won't be run unless you explicitly
enable them by calling nlp.enable_pipe. enable them by calling nlp.enable_pipe.
enable (Iterable[str]): Names of pipeline components to enable. All other enable (Union[str, Iterable[str]]): Name(s) of pipeline component(s) to enable. All other
pipes will be disabled (and can be enabled using `nlp.enable_pipe`). pipes will be disabled (and can be enabled using `nlp.enable_pipe`).
exclude (Iterable[str]): Names of pipeline components to exclude. Excluded exclude (Union[str, Iterable[str]]): Name(s) of pipeline component(s) to exclude. Excluded
components won't be loaded. components won't be loaded.
config (Dict[str, Any] / Config): Config overrides as nested dict or dict config (Dict[str, Any] / Config): Config overrides as nested dict or dict
keyed by section values in dot notation. keyed by section values in dot notation.
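With these signature changes, a single component name can be passed where previously only a list was accepted. A minimal sketch (the pipeline path is hypothetical):

```python
import spacy

# Passing a plain string...
nlp = spacy.load("./my_pipeline", disable="ner")
# ...is now equivalent to wrapping it in a list:
nlp = spacy.load("./my_pipeline", disable=["ner"])
```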


@ -11,6 +11,7 @@ menu:
- ['Text Classification', 'textcat'] - ['Text Classification', 'textcat']
- ['Span Classification', 'spancat'] - ['Span Classification', 'spancat']
- ['Entity Linking', 'entitylinker'] - ['Entity Linking', 'entitylinker']
- ['Coreference', 'coref-architectures']
--- ---
A **model architecture** is a function that wires up a A **model architecture** is a function that wires up a
@ -587,8 +588,8 @@ consists of either two or three subnetworks:
  run once for each batch.
- **lower**: Construct a feature-specific vector for each `(token, feature)`
  pair. This is also run once for each batch. Constructing the state
  representation is then a matter of summing the component features and applying
  the non-linearity.
- **upper** (optional): A feed-forward network that predicts scores from the - **upper** (optional): A feed-forward network that predicts scores from the
state representation. If not present, the output from the lower model is used state representation. If not present, the output from the lower model is used
as action scores directly. as action scores directly.
@ -628,8 +629,8 @@ same signature, but the `use_upper` argument was `True` by default.
> ```

Build a tagger model, using a provided token-to-vector component. The tagger
model adds a linear layer with softmax activation to predict scores given the
token vectors.
| Name | Description | | Name | Description |
| ----------- | ------------------------------------------------------------------------------------------ | | ----------- | ------------------------------------------------------------------------------------------ |
@ -920,5 +921,84 @@ A function that reads an existing `KnowledgeBase` from file.
A function that takes as input a [`KnowledgeBase`](/api/kb) and a
[`Span`](/api/span) object denoting a named entity, and returns a list of
plausible [`Candidate`](/api/kb/#candidate) objects. The default
`CandidateGenerator` uses the text of a mention to find its potential aliases in
the `KnowledgeBase`. Note that this function is case-dependent.
## Coreference {#coref-architectures tag="experimental"}
A [`CoreferenceResolver`](/api/coref) component identifies tokens that refer to
the same entity. A [`SpanResolver`](/api/span-resolver) component infers spans
from single tokens. Together these components can be used to reproduce
traditional coreference models. You can also omit the `SpanResolver` if working
with only token-level clusters is acceptable.
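As a sketch of how the two components are combined in practice (factory names as exposed by `spacy-experimental`; `experimental_span_resolver` is assumed here, and both components still need to be trained or loaded from a trained pipeline):

```python
import spacy

nlp = spacy.blank("en")
# Word-level coreference clusters.
nlp.add_pipe("experimental_coref")
# Optionally expand the single-token clusters to full spans.
nlp.add_pipe("experimental_span_resolver")
```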
### spacy-experimental.Coref.v1 {#Coref tag="experimental"}
> #### Example Config
>
> ```ini
>
> [model]
> @architectures = "spacy-experimental.Coref.v1"
> distance_embedding_size = 20
> dropout = 0.3
> hidden_size = 1024
> depth = 2
> antecedent_limit = 50
> antecedent_batch_size = 512
>
> [model.tok2vec]
> @architectures = "spacy-transformers.TransformerListener.v1"
> grad_factor = 1.0
> upstream = "transformer"
> pooling = {"@layers":"reduce_mean.v1"}
> ```
The `Coref` model architecture is a Thinc `Model`.
| Name | Description |
| ------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `tok2vec` | The [`tok2vec`](#tok2vec) layer of the model. ~~Model~~ |
| `distance_embedding_size` | A representation of the distance between candidates. ~~int~~ |
| `dropout` | The dropout to use internally. Unlike some Thinc models, this has separate dropout for the internal PyTorch layers. ~~float~~ |
| `hidden_size` | Size of the main internal layers. ~~int~~ |
| `depth` | Depth of the internal network. ~~int~~ |
| `antecedent_limit` | How many candidate antecedents to keep after rough scoring. This has a significant effect on memory usage. Typical values would be 50 to 200, or higher for very long documents. ~~int~~ |
| `antecedent_batch_size` | Internal batch size. ~~int~~ |
| **CREATES** | The model using the architecture. ~~Model[List[Doc], Floats2d]~~ |
### spacy-experimental.SpanResolver.v1 {#SpanResolver tag="experimental"}
> #### Example Config
>
> ```ini
>
> [model]
> @architectures = "spacy-experimental.SpanResolver.v1"
> hidden_size = 1024
> distance_embedding_size = 64
> conv_channels = 4
> window_size = 1
> max_distance = 128
> prefix = "coref_head_clusters"
>
> [model.tok2vec]
> @architectures = "spacy-transformers.TransformerListener.v1"
> grad_factor = 1.0
> upstream = "transformer"
> pooling = {"@layers":"reduce_mean.v1"}
> ```
The `SpanResolver` model architecture is a Thinc `Model`. Note that
`MentionClusters` is `List[List[Tuple[int, int]]]`.
| Name | Description |
| ------------------------- | -------------------------------------------------------------------------------------------------------------------- |
| `tok2vec` | The [`tok2vec`](#tok2vec) layer of the model. ~~Model~~ |
| `hidden_size` | Size of the main internal layers. ~~int~~ |
| `distance_embedding_size` | A representation of the distance between two candidates. ~~int~~ |
| `conv_channels` | The number of channels in the internal CNN. ~~int~~ |
| `window_size` | The number of neighboring tokens to consider in the internal CNN. `1` means consider one token on each side. ~~int~~ |
| `max_distance` | The longest possible length of a predicted span. ~~int~~ |
| `prefix` | The prefix that indicates spans to use for input data. ~~string~~ |
| **CREATES** | The model using the architecture. ~~Model[List[Doc], List[MentionClusters]]~~ |


@ -77,14 +77,15 @@ $ python -m spacy info [--markdown] [--silent] [--exclude]
$ python -m spacy info [model] [--markdown] [--silent] [--exclude] $ python -m spacy info [model] [--markdown] [--silent] [--exclude]
``` ```
| Name                                             | Description                                                                                                               |
| ------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------- |
| `model`                                          | A trained pipeline, i.e. package name or path (optional). ~~Optional[str] \(option)~~                                    |
| `--markdown`, `-md`                              | Print information as Markdown. ~~bool (flag)~~                                                                            |
| `--silent`, `-s` <Tag variant="new">2.0.12</Tag> | Don't print anything, just return the values. ~~bool (flag)~~                                                             |
| `--exclude`, `-e`                                | Comma-separated keys to exclude from the print-out. Defaults to `"labels"`. ~~Optional[str]~~                             |
| `--url`, `-u` <Tag variant="new">3.5.0</Tag>     | Print the URL to download the most recent compatible version of the pipeline. Requires a pipeline name. ~~bool (flag)~~  |
| `--help`, `-h`                                   | Show help message and available arguments. ~~bool (flag)~~                                                                |
| **PRINTS**                                       | Information about your spaCy installation.                                                                                |
## validate {#validate new="2" tag="command"} ## validate {#validate new="2" tag="command"}

website/docs/api/coref.md (new file, 353 lines)

@ -0,0 +1,353 @@
---
title: CoreferenceResolver
tag: class,experimental
source: spacy-experimental/coref/coref_component.py
teaser: 'Pipeline component for word-level coreference resolution'
api_base_class: /api/pipe
api_string_name: coref
api_trainable: true
---
> #### Installation
>
> ```bash
> $ pip install -U spacy-experimental
> ```
<Infobox title="Important note" variant="warning">
This component is not yet integrated into spaCy core, and is available via the
extension package
[`spacy-experimental`](https://github.com/explosion/spacy-experimental) starting
in version 0.6.0. It exposes the component via
[entry points](/usage/saving-loading/#entry-points), so if you have the package
installed, using `factory = "experimental_coref"` in your
[training config](/usage/training#config) or
`nlp.add_pipe("experimental_coref")` will work out-of-the-box.
</Infobox>
A `CoreferenceResolver` component groups tokens into clusters that refer to the
same thing. Clusters are represented as SpanGroups that start with a prefix
(`coref_clusters` by default).
A `CoreferenceResolver` component can be paired with a
[`SpanResolver`](/api/span-resolver) to expand single tokens to spans.
## Assigned Attributes {#assigned-attributes}
Predictions will be saved to `Doc.spans` as a [`SpanGroup`](/api/spangroup). The
span key will be a prefix plus a serial number referring to the coreference
cluster, starting from zero.
The span key prefix defaults to `"coref_clusters"`, but can be passed as a
parameter.
| Location | Value |
| ------------------------------------------ | ------------------------------------------------------------------------------------------------------- |
| `Doc.spans[prefix + "_" + cluster_number]` | One coreference cluster, represented as single-token spans. Cluster numbers start from 1. ~~SpanGroup~~ |
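A short sketch of reading the predicted clusters back from `Doc.spans`, assuming a pipeline with a trained `experimental_coref` component has already processed the text:

```python
doc = nlp("Philip plays the bass because he loves it.")
# Each cluster is a SpanGroup of single-token spans stored under keys
# such as "coref_clusters_1", "coref_clusters_2", ...
for key, cluster in doc.spans.items():
    if key.startswith("coref_clusters"):
        print(key, [span.text for span in cluster])
```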
## Config and implementation {#config}
The default config is defined by the pipeline component factory and describes
how the component should be configured. You can override its settings via the
`config` argument on [`nlp.add_pipe`](/api/language#add_pipe) or in your
[`config.cfg` for training](/usage/training#config). See the
[model architectures](/api/architectures#coref-architectures) documentation for
details on the architectures and their arguments and hyperparameters.
> #### Example
>
> ```python
> from spacy_experimental.coref.coref_component import DEFAULT_COREF_MODEL
> from spacy_experimental.coref.coref_util import DEFAULT_CLUSTER_PREFIX
> config={
> "model": DEFAULT_COREF_MODEL,
> "span_cluster_prefix": DEFAULT_CLUSTER_PREFIX,
> },
> nlp.add_pipe("experimental_coref", config=config)
> ```
| Setting | Description |
| --------------------- | ---------------------------------------------------------------------------------------------------------------------------------------- |
| `model` | The [`Model`](https://thinc.ai/docs/api-model) powering the pipeline component. Defaults to [Coref](/api/architectures#Coref). ~~Model~~ |
| `span_cluster_prefix` | The prefix for the keys for clusters saved to `doc.spans`. Defaults to `coref_clusters`. ~~str~~ |
## CoreferenceResolver.\_\_init\_\_ {#init tag="method"}
> #### Example
>
> ```python
> # Construction via add_pipe with default model
> coref = nlp.add_pipe("experimental_coref")
>
> # Construction via add_pipe with custom model
> config = {"model": {"@architectures": "my_coref.v1"}}
> coref = nlp.add_pipe("experimental_coref", config=config)
>
> # Construction from class
> from spacy_experimental.coref.coref_component import CoreferenceResolver
> coref = CoreferenceResolver(nlp.vocab, model)
> ```
Create a new pipeline instance. In your application, you would normally use a
shortcut for this and instantiate the component using its string name and
[`nlp.add_pipe`](/api/language#add_pipe).
| Name | Description |
| --------------------- | --------------------------------------------------------------------------------------------------- |
| `vocab` | The shared vocabulary. ~~Vocab~~ |
| `model` | The [`Model`](https://thinc.ai/docs/api-model) powering the pipeline component. ~~Model~~ |
| `name` | String name of the component instance. Used to add entries to the `losses` during training. ~~str~~ |
| _keyword-only_ | |
| `span_cluster_prefix` | The prefix to use for the keys under which clusters of spans are saved. ~~str~~ |
## CoreferenceResolver.\_\_call\_\_ {#call tag="method"}
Apply the pipe to one document. The document is modified in place and returned.
This usually happens under the hood when the `nlp` object is called on a text
and all pipeline components are applied to the `Doc` in order. Both
[`__call__`](/api/coref#call) and [`pipe`](/api/coref#pipe) delegate to the
[`predict`](/api/coref#predict) and
[`set_annotations`](/api/coref#set_annotations) methods.
> #### Example
>
> ```python
> doc = nlp("This is a sentence.")
> coref = nlp.add_pipe("experimental_coref")
> # This usually happens under the hood
> processed = coref(doc)
> ```
| Name | Description |
| ----------- | -------------------------------- |
| `doc` | The document to process. ~~Doc~~ |
| **RETURNS** | The processed document. ~~Doc~~ |
## CoreferenceResolver.pipe {#pipe tag="method"}
Apply the pipe to a stream of documents. This usually happens under the hood
when the `nlp` object is called on a text and all pipeline components are
applied to the `Doc` in order. Both [`__call__`](/api/coref#call) and
[`pipe`](/api/coref#pipe) delegate to the [`predict`](/api/coref#predict) and
[`set_annotations`](/api/coref#set_annotations) methods.
> #### Example
>
> ```python
> coref = nlp.add_pipe("experimental_coref")
> for doc in coref.pipe(docs, batch_size=50):
> pass
> ```
| Name | Description |
| -------------- | ------------------------------------------------------------- |
| `stream` | A stream of documents. ~~Iterable[Doc]~~ |
| _keyword-only_ | |
| `batch_size` | The number of documents to buffer. Defaults to `128`. ~~int~~ |
| **YIELDS** | The processed documents in order. ~~Doc~~ |
## CoreferenceResolver.initialize {#initialize tag="method"}
Initialize the component for training. `get_examples` should be a function that
returns an iterable of [`Example`](/api/example) objects. **At least one example
should be supplied.** The data examples are used to **initialize the model** of
the component and can either be the full training data or a representative
sample. Initialization includes validating the network,
[inferring missing shapes](https://thinc.ai/docs/usage-models#validation) and
setting up the label scheme based on the data. This method is typically called
by [`Language.initialize`](/api/language#initialize).
> #### Example
>
> ```python
> coref = nlp.add_pipe("experimental_coref")
> coref.initialize(lambda: examples, nlp=nlp)
> ```
| Name | Description |
| -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `get_examples` | Function that returns gold-standard annotations in the form of [`Example`](/api/example) objects. Must contain at least one `Example`. ~~Callable[[], Iterable[Example]]~~ |
| _keyword-only_ | |
| `nlp` | The current `nlp` object. Defaults to `None`. ~~Optional[Language]~~ |
## CoreferenceResolver.predict {#predict tag="method"}
Apply the component's model to a batch of [`Doc`](/api/doc) objects, without
modifying them. Clusters are returned as a list of `MentionClusters`, one for
each input `Doc`. A `MentionClusters` instance is just a list of lists of pairs
of `int`s, where each item corresponds to a cluster, and the `int`s correspond
to token indices.
> #### Example
>
> ```python
> coref = nlp.add_pipe("experimental_coref")
> clusters = coref.predict([doc1, doc2])
> ```
| Name | Description |
| ----------- | ---------------------------------------------------------------------------- |
| `docs` | The documents to predict. ~~Iterable[Doc]~~ |
| **RETURNS** | The predicted coreference clusters for the `docs`. ~~List[MentionClusters]~~ |
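Because `MentionClusters` is plain Python data, the output of `predict` can be
inspected or converted to `Span` objects directly. A minimal sketch, assuming
`coref` and `doc1` are set up as in the example above and that each pair is a
`(start, end)` token-index range:

```python
clusters = coref.predict([doc1])
# clusters[0] holds the clusters for doc1; each cluster is a list of
# (start, end) token-index pairs, one pair per mention.
for cluster in clusters[0]:
    mentions = [doc1[start:end] for start, end in cluster]
    print([mention.text for mention in mentions])
```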
## CoreferenceResolver.set_annotations {#set_annotations tag="method"}
Modify a batch of documents, saving coreference clusters in `Doc.spans`.
> #### Example
>
> ```python
> coref = nlp.add_pipe("experimental_coref")
> clusters = coref.predict([doc1, doc2])
> coref.set_annotations([doc1, doc2], clusters)
> ```
| Name | Description |
| ---------- | ---------------------------------------------------------------------------- |
| `docs` | The documents to modify. ~~Iterable[Doc]~~ |
| `clusters` | The predicted coreference clusters for the `docs`. ~~List[MentionClusters]~~ |
## CoreferenceResolver.update {#update tag="method"}
Learn from a batch of [`Example`](/api/example) objects. Delegates to
[`predict`](/api/coref#predict).
> #### Example
>
> ```python
> coref = nlp.add_pipe("experimental_coref")
> optimizer = nlp.initialize()
> losses = coref.update(examples, sgd=optimizer)
> ```
| Name | Description |
| -------------- | ------------------------------------------------------------------------------------------------------------------------ |
| `examples` | A batch of [`Example`](/api/example) objects to learn from. ~~Iterable[Example]~~ |
| _keyword-only_ | |
| `drop` | The dropout rate. ~~float~~ |
| `sgd` | An optimizer. Will be created via [`create_optimizer`](#create_optimizer) if not set. ~~Optional[Optimizer]~~ |
| `losses` | Optional record of the loss during training. Updated using the component name as the key. ~~Optional[Dict[str, float]]~~ |
| **RETURNS** | The updated `losses` dictionary. ~~Dict[str, float]~~ |
## CoreferenceResolver.create_optimizer {#create_optimizer tag="method"}
Create an optimizer for the pipeline component.
> #### Example
>
> ```python
> coref = nlp.add_pipe("experimental_coref")
> optimizer = coref.create_optimizer()
> ```
| Name | Description |
| ----------- | ---------------------------- |
| **RETURNS** | The optimizer. ~~Optimizer~~ |
## CoreferenceResolver.use_params {#use_params tag="method, contextmanager"}
Modify the pipe's model, to use the given parameter values. At the end of the
context, the original parameters are restored.
> #### Example
>
> ```python
> coref = nlp.add_pipe("experimental_coref")
> with coref.use_params(optimizer.averages):
> coref.to_disk("/best_model")
> ```
| Name | Description |
| -------- | -------------------------------------------------- |
| `params` | The parameter values to use in the model. ~~dict~~ |
## CoreferenceResolver.to_disk {#to_disk tag="method"}
Serialize the pipe to disk.
> #### Example
>
> ```python
> coref = nlp.add_pipe("experimental_coref")
> coref.to_disk("/path/to/coref")
> ```
| Name | Description |
| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
| `path` | A path to a directory, which will be created if it doesn't exist. Paths may be either strings or `Path`-like objects. ~~Union[str, Path]~~ |
| _keyword-only_ | |
| `exclude` | String names of [serialization fields](#serialization-fields) to exclude. ~~Iterable[str]~~ |
## CoreferenceResolver.from_disk {#from_disk tag="method"}
Load the pipe from disk. Modifies the object in place and returns it.
> #### Example
>
> ```python
> coref = nlp.add_pipe("experimental_coref")
> coref.from_disk("/path/to/coref")
> ```
| Name | Description |
| -------------- | ----------------------------------------------------------------------------------------------- |
| `path` | A path to a directory. Paths may be either strings or `Path`-like objects. ~~Union[str, Path]~~ |
| _keyword-only_ | |
| `exclude` | String names of [serialization fields](#serialization-fields) to exclude. ~~Iterable[str]~~ |
| **RETURNS** | The modified `CoreferenceResolver` object. ~~CoreferenceResolver~~ |
## CoreferenceResolver.to_bytes {#to_bytes tag="method"}
> #### Example
>
> ```python
> coref = nlp.add_pipe("experimental_coref")
> coref_bytes = coref.to_bytes()
> ```
Serialize the pipe to a bytestring.
| Name | Description |
| -------------- | ------------------------------------------------------------------------------------------- |
| _keyword-only_ | |
| `exclude` | String names of [serialization fields](#serialization-fields) to exclude. ~~Iterable[str]~~ |
| **RETURNS** | The serialized form of the `CoreferenceResolver` object. ~~bytes~~ |
## CoreferenceResolver.from_bytes {#from_bytes tag="method"}
Load the pipe from a bytestring. Modifies the object in place and returns it.
> #### Example
>
> ```python
> coref_bytes = coref.to_bytes()
> coref = nlp.add_pipe("experimental_coref")
> coref.from_bytes(coref_bytes)
> ```
| Name | Description |
| -------------- | ------------------------------------------------------------------------------------------- |
| `bytes_data` | The data to load from. ~~bytes~~ |
| _keyword-only_ | |
| `exclude` | String names of [serialization fields](#serialization-fields) to exclude. ~~Iterable[str]~~ |
| **RETURNS** | The `CoreferenceResolver` object. ~~CoreferenceResolver~~ |
## Serialization fields {#serialization-fields}
During serialization, spaCy will export several data fields used to restore
different aspects of the object. If needed, you can exclude them from
serialization by passing in the string names via the `exclude` argument.
> #### Example
>
> ```python
> data = coref.to_disk("/path", exclude=["vocab"])
> ```
| Name | Description |
| ------- | -------------------------------------------------------------- |
| `vocab` | The shared [`Vocab`](/api/vocab). |
| `cfg` | The config file. You usually don't want to exclude this. |
| `model` | The binary model data. You usually don't want to exclude this. |


@ -31,21 +31,21 @@ Construct a `Doc` object. The most common way to get a `Doc` object is via the
> doc = Doc(nlp.vocab, words=words, spaces=spaces) > doc = Doc(nlp.vocab, words=words, spaces=spaces)
> ``` > ```
| Name | Description | | Name | Description |
| ---------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | ---------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `vocab` | A storage container for lexical types. ~~Vocab~~ | | `vocab` | A storage container for lexical types. ~~Vocab~~ |
| `words` | A list of strings or integer hash values to add to the document as words. ~~Optional[List[Union[str,int]]]~~ | | `words` | A list of strings or integer hash values to add to the document as words. ~~Optional[List[Union[str,int]]]~~ |
| `spaces` | A list of boolean values indicating whether each word has a subsequent space. Must have the same length as `words`, if specified. Defaults to a sequence of `True`. ~~Optional[List[bool]]~~ | | `spaces` | A list of boolean values indicating whether each word has a subsequent space. Must have the same length as `words`, if specified. Defaults to a sequence of `True`. ~~Optional[List[bool]]~~ |
| _keyword-only_ | | | _keyword-only_ | |
| `user\_data` | Optional extra data to attach to the Doc. ~~Dict~~ | | `user\_data` | Optional extra data to attach to the Doc. ~~Dict~~ |
| `tags` <Tag variant="new">3</Tag> | A list of strings, of the same length as `words`, to assign as `token.tag` for each word. Defaults to `None`. ~~Optional[List[str]]~~ | | `tags` <Tag variant="new">3</Tag> | A list of strings, of the same length as `words`, to assign as `token.tag` for each word. Defaults to `None`. ~~Optional[List[str]]~~ |
| `pos` <Tag variant="new">3</Tag> | A list of strings, of the same length as `words`, to assign as `token.pos` for each word. Defaults to `None`. ~~Optional[List[str]]~~ | | `pos` <Tag variant="new">3</Tag> | A list of strings, of the same length as `words`, to assign as `token.pos` for each word. Defaults to `None`. ~~Optional[List[str]]~~ |
| `morphs` <Tag variant="new">3</Tag> | A list of strings, of the same length as `words`, to assign as `token.morph` for each word. Defaults to `None`. ~~Optional[List[str]]~~ | | `morphs` <Tag variant="new">3</Tag> | A list of strings, of the same length as `words`, to assign as `token.morph` for each word. Defaults to `None`. ~~Optional[List[str]]~~ |
| `lemmas` <Tag variant="new">3</Tag> | A list of strings, of the same length as `words`, to assign as `token.lemma` for each word. Defaults to `None`. ~~Optional[List[str]]~~ | | `lemmas` <Tag variant="new">3</Tag> | A list of strings, of the same length as `words`, to assign as `token.lemma` for each word. Defaults to `None`. ~~Optional[List[str]]~~ |
| `heads` <Tag variant="new">3</Tag> | A list of values, of the same length as `words`, to assign as the head for each word. Head indices are the absolute position of the head in the `Doc`. Defaults to `None`. ~~Optional[List[int]]~~ | | `heads` <Tag variant="new">3</Tag> | A list of values, of the same length as `words`, to assign as the head for each word. Head indices are the absolute position of the head in the `Doc`. Defaults to `None`. ~~Optional[List[int]]~~ |
| `deps` <Tag variant="new">3</Tag> | A list of strings, of the same length as `words`, to assign as `token.dep` for each word. Defaults to `None`. ~~Optional[List[str]]~~ | | `deps` <Tag variant="new">3</Tag> | A list of strings, of the same length as `words`, to assign as `token.dep` for each word. Defaults to `None`. ~~Optional[List[str]]~~ |
| `sent_starts` <Tag variant="new">3</Tag> | A list of values, of the same length as `words`, to assign as `token.is_sent_start`. Will be overridden by heads if `heads` is provided. Defaults to `None`. ~~Optional[List[Optional[bool]]]~~ | | `sent_starts` <Tag variant="new">3</Tag> | A list of values, of the same length as `words`, to assign as `token.is_sent_start`. Will be overridden by heads if `heads` is provided. Defaults to `None`. ~~Optional[List[Union[bool, int, None]]]~~ |
| `ents` <Tag variant="new">3</Tag> | A list of strings, of the same length of `words`, to assign the token-based IOB tag. Defaults to `None`. ~~Optional[List[str]]~~ | | `ents` <Tag variant="new">3</Tag> | A list of strings, of the same length of `words`, to assign the token-based IOB tag. Defaults to `None`. ~~Optional[List[str]]~~ |
## Doc.\_\_getitem\_\_ {#getitem tag="method"} ## Doc.\_\_getitem\_\_ {#getitem tag="method"}


@ -23,11 +23,13 @@ both documents.
> ```python > ```python
> from spacy.tokens import Doc > from spacy.tokens import Doc
> from spacy.training import Example > from spacy.training import Example
> > pred_words = ["Apply", "some", "sunscreen"]
> words = ["hello", "world", "!"] > pred_spaces = [True, True, False]
> spaces = [True, False, False] > gold_words = ["Apply", "some", "sun", "screen"]
> predicted = Doc(nlp.vocab, words=words, spaces=spaces) > gold_spaces = [True, True, False, False]
> reference = parse_gold_doc(my_data) > gold_tags = ["VERB", "DET", "NOUN", "NOUN"]
> predicted = Doc(nlp.vocab, words=pred_words, spaces=pred_spaces)
> reference = Doc(nlp.vocab, words=gold_words, spaces=gold_spaces, tags=gold_tags)
> example = Example(predicted, reference) > example = Example(predicted, reference)
> ``` > ```
@ -286,10 +288,14 @@ Calculate alignment tables between two tokenizations.
### Alignment attributes {#alignment-attributes"} ### Alignment attributes {#alignment-attributes"}
| Name | Description | Alignment attributes are managed using `AlignmentArray`, which is a
| ----- | --------------------------------------------------------------------- | simplified version of Thinc's [Ragged](https://thinc.ai/docs/api-types#ragged)
| `x2y` | The `Ragged` object holding the alignment from `x` to `y`. ~~Ragged~~ | type that only supports the `data` and `length` attributes.
| `y2x` | The `Ragged` object holding the alignment from `y` to `x`. ~~Ragged~~ |
| Name | Description |
| ----- | ------------------------------------------------------------------------------------- |
| `x2y` | The `AlignmentArray` object holding the alignment from `x` to `y`. ~~AlignmentArray~~ |
| `y2x` | The `AlignmentArray` object holding the alignment from `y` to `x`. ~~AlignmentArray~~ |
<Infobox title="Important note" variant="warning"> <Infobox title="Important note" variant="warning">
@ -309,10 +315,10 @@ tokenizations add up to the same string. For example, you'll be able to align
> spacy_tokens = ["obama", "'s", "podcast"] > spacy_tokens = ["obama", "'s", "podcast"]
> alignment = Alignment.from_strings(bert_tokens, spacy_tokens) > alignment = Alignment.from_strings(bert_tokens, spacy_tokens)
> a2b = alignment.x2y > a2b = alignment.x2y
> assert list(a2b.dataXd) == [0, 1, 1, 2] > assert list(a2b.data) == [0, 1, 1, 2]
> ``` > ```
> >
> If `a2b.dataXd[1] == a2b.dataXd[2] == 1`, that means that `A[1]` (`"'"`) and > If `a2b.data[1] == a2b.data[2] == 1`, that means that `A[1]` (`"'"`) and
> `A[2]` (`"s"`) both align to `B[1]` (`"'s"`). > `A[2]` (`"s"`) both align to `B[1]` (`"'s"`).
### Alignment.from_strings {#classmethod tag="function"} ### Alignment.from_strings {#classmethod tag="function"}


@ -63,17 +63,18 @@ spaCy loads a model under the hood based on its
> nlp = Language.from_config(config) > nlp = Language.from_config(config)
> ``` > ```
| Name | Description | | Name | Description |
| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | ------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `config` | The loaded config. ~~Union[Dict[str, Any], Config]~~ | | `config` | The loaded config. ~~Union[Dict[str, Any], Config]~~ |
| _keyword-only_ | | | _keyword-only_ | |
| `vocab` | A `Vocab` object. If `True`, a vocab is created using the default language data settings. ~~Vocab~~ | | `vocab` | A `Vocab` object. If `True`, a vocab is created using the default language data settings. ~~Vocab~~ |
| `disable` | Names of pipeline components to [disable](/usage/processing-pipelines#disabling). Disabled pipes will be loaded but they won't be run unless you explicitly enable them by calling [`nlp.enable_pipe`](/api/language#enable_pipe). ~~List[str]~~ | | `disable` | Name(s) of pipeline component(s) to [disable](/usage/processing-pipelines#disabling). Disabled pipes will be loaded but they won't be run unless you explicitly enable them by calling [`nlp.enable_pipe`](/api/language#enable_pipe). ~~Union[str, Iterable[str]]~~ |
| `exclude` | Names of pipeline components to [exclude](/usage/processing-pipelines#disabling). Excluded components won't be loaded. ~~List[str]~~ | | `enable` <Tag variant="new">3.4</Tag> | Name(s) of pipeline component(s) to [enable](/usage/processing-pipelines#disabling). All other pipes will be disabled, but can be enabled again using [`nlp.enable_pipe`](/api/language#enable_pipe). ~~Union[str, Iterable[str]]~~ |
| `meta` | [Meta data](/api/data-formats#meta) overrides. ~~Dict[str, Any]~~ | | `exclude` | Name(s) of pipeline component(s) to [exclude](/usage/processing-pipelines#disabling). Excluded components won't be loaded. ~~Union[str, Iterable[str]]~~ |
| `auto_fill` | Whether to automatically fill in missing values in the config, based on defaults and function argument annotations. Defaults to `True`. ~~bool~~ | | `meta` | [Meta data](/api/data-formats#meta) overrides. ~~Dict[str, Any]~~ |
| `validate` | Whether to validate the component config and arguments against the types expected by the factory. Defaults to `True`. ~~bool~~ | | `auto_fill` | Whether to automatically fill in missing values in the config, based on defaults and function argument annotations. Defaults to `True`. ~~bool~~ |
| **RETURNS** | The initialized object. ~~Language~~ | | `validate` | Whether to validate the component config and arguments against the types expected by the factory. Defaults to `True`. ~~bool~~ |
| **RETURNS** | The initialized object. ~~Language~~ |
## Language.component {#component tag="classmethod" new="3"} ## Language.component {#component tag="classmethod" new="3"}
@ -695,8 +696,8 @@ As of spaCy v3.0, the `disable_pipes` method has been renamed to `select_pipes`:
| Name | Description | | Name | Description |
| -------------- | ------------------------------------------------------------------------------------------------------ | | -------------- | ------------------------------------------------------------------------------------------------------ |
| _keyword-only_ | | | _keyword-only_ | |
| `disable` | Name(s) of pipeline components to disable. ~~Optional[Union[str, Iterable[str]]]~~ | | `disable` | Name(s) of pipeline component(s) to disable. ~~Optional[Union[str, Iterable[str]]]~~ |
| `enable` | Name(s) of pipeline components that will not be disabled. ~~Optional[Union[str, Iterable[str]]]~~ | | `enable` | Name(s) of pipeline component(s) that will not be disabled. ~~Optional[Union[str, Iterable[str]]]~~ |
| **RETURNS** | The disabled pipes that can be restored by calling the object's `.restore()` method. ~~DisabledPipes~~ | | **RETURNS** | The disabled pipes that can be restored by calling the object's `.restore()` method. ~~DisabledPipes~~ |
## Language.get_factory_meta {#get_factory_meta tag="classmethod" new="3"} ## Language.get_factory_meta {#get_factory_meta tag="classmethod" new="3"}


@ -248,6 +248,59 @@ added to an existing vectors table. See more details in
## Loggers {#loggers} ## Loggers {#loggers}
These functions are available from `@spacy.registry.loggers`.
### spacy.ConsoleLogger.v1 {#ConsoleLogger_v1}
> #### Example config
>
> ```ini
> [training.logger]
> @loggers = "spacy.ConsoleLogger.v1"
> progress_bar = true
> ```
Writes the results of a training step to the console in a tabular format.
<Accordion title="Example console output" spaced>
```cli
$ python -m spacy train config.cfg
```
```
Using CPU
Loading config and nlp from: config.cfg
Pipeline: ['tok2vec', 'tagger']
Start training
Training. Initial learn rate: 0.0
E # LOSS TOK2VEC LOSS TAGGER TAG_ACC SCORE
--- ------ ------------ ----------- ------- ------
0 0 0.00 86.20 0.22 0.00
0 200 3.08 18968.78 34.00 0.34
0 400 31.81 22539.06 33.64 0.34
0 600 92.13 22794.91 43.80 0.44
0 800 183.62 21541.39 56.05 0.56
0 1000 352.49 25461.82 65.15 0.65
0 1200 422.87 23708.82 71.84 0.72
0 1400 601.92 24994.79 76.57 0.77
0 1600 662.57 22268.02 80.20 0.80
0 1800 1101.50 28413.77 82.56 0.83
0 2000 1253.43 28736.36 85.00 0.85
0 2200 1411.02 28237.53 87.42 0.87
0 2400 1605.35 28439.95 88.70 0.89
```
Note that the cumulative loss keeps increasing within one epoch, but should
start decreasing across epochs.
</Accordion>
| Name | Description |
| -------------- | --------------------------------------------------------- |
| `progress_bar` | Whether the logger should print the progress bar. ~~bool~~ |
Logging utilities for spaCy are implemented in the Logging utilities for spaCy are implemented in the
[`spacy-loggers`](https://github.com/explosion/spacy-loggers) repo, and the [`spacy-loggers`](https://github.com/explosion/spacy-loggers) repo, and the
functions are typically available from `@spacy.registry.loggers`. functions are typically available from `@spacy.registry.loggers`.


@ -153,3 +153,36 @@ whole pipeline has run.
| `attrs` | A dict of the `Doc` attributes and the values to set them to. Defaults to `{"tensor": None, "_.trf_data": None}` to clean up after `tok2vec` and `transformer` components. ~~dict~~ | | `attrs` | A dict of the `Doc` attributes and the values to set them to. Defaults to `{"tensor": None, "_.trf_data": None}` to clean up after `tok2vec` and `transformer` components. ~~dict~~ |
| `silent` | If `False`, show warnings if attributes aren't found or can't be set. Defaults to `True`. ~~bool~~ | | `silent` | If `False`, show warnings if attributes aren't found or can't be set. Defaults to `True`. ~~bool~~ |
| **RETURNS** | The modified `Doc` with the modified attributes. ~~Doc~~ | | **RETURNS** | The modified `Doc` with the modified attributes. ~~Doc~~ |
## span_cleaner {#span_cleaner tag="function,experimental"}
Remove `SpanGroup`s from `doc.spans` based on a key prefix. This is used to
clean up after the [`CoreferenceResolver`](/api/coref) when it's paired with a
[`SpanResolver`](/api/span-resolver).
<Infobox title="Important note" variant="warning">
This pipeline function is not yet integrated into spaCy core, and is available
via the extension package
[`spacy-experimental`](https://github.com/explosion/spacy-experimental) starting
in version 0.6.0. It exposes the component via
[entry points](/usage/saving-loading/#entry-points), so if you have the package
installed, using `factory = "span_cleaner"` in your
[training config](/usage/training#config) or `nlp.add_pipe("span_cleaner")` will
work out-of-the-box.
</Infobox>
> #### Example
>
> ```python
> config = {"prefix": "coref_head_clusters"}
> nlp.add_pipe("span_cleaner", config=config)
> doc = nlp("text")
> assert "coref_head_clusters_1" not in doc.spans
> ```
| Setting | Description |
| ----------- | ------------------------------------------------------------------------------------------------------------------------- |
| `prefix` | A prefix to check `SpanGroup` keys for. Any matching groups will be removed. Defaults to `"coref_head_clusters"`. ~~str~~ |
| **RETURNS** | The modified `Doc` with any matching spans removed. ~~Doc~~ |


@ -270,3 +270,62 @@ Compute micro-PRF and per-entity PRF scores.
| Name | Description | | Name | Description |
| ---------- | ------------------------------------------------------------------------------------------------------------------- | | ---------- | ------------------------------------------------------------------------------------------------------------------- |
| `examples` | The `Example` objects holding both the predictions and the correct gold-standard annotations. ~~Iterable[Example]~~ | | `examples` | The `Example` objects holding both the predictions and the correct gold-standard annotations. ~~Iterable[Example]~~ |
## score_coref_clusters {#score_coref_clusters tag="experimental"}
Returns LEA ([Moosavi and Strube, 2016](https://aclanthology.org/P16-1060/)) PRF
scores for coreference clusters.
<Infobox title="Important note" variant="warning">
Note this scoring function is not yet included in spaCy core - for details, see
the [CoreferenceResolver](/api/coref) docs.
</Infobox>
> #### Example
>
> ```python
> scores = score_coref_clusters(
> examples,
> span_cluster_prefix="coref_clusters",
> )
> print(scores["coref_f"])
> ```
| Name | Description |
| --------------------- | ------------------------------------------------------------------------------------------------------------------- |
| `examples` | The `Example` objects holding both the predictions and the correct gold-standard annotations. ~~Iterable[Example]~~ |
| _keyword-only_ | |
| `span_cluster_prefix` | The prefix used for spans representing coreference clusters. ~~str~~ |
| **RETURNS** | A dictionary containing the scores. ~~Dict[str, Optional[float]]~~ |
## score_span_predictions {#score_span_predictions tag="experimental"}
Return accuracy for reconstructions of spans from single tokens. Only exactly
correct predictions are counted as correct; there is no partial credit for near
answers. Used by the [SpanResolver](/api/span-resolver).
<Infobox title="Important note" variant="warning">
Note this scoring function is not yet included in spaCy core - for details, see
the [SpanResolver](/api/span-resolver) docs.
</Infobox>
> #### Example
>
> ```python
> scores = score_span_predictions(
> examples,
> output_prefix="coref_clusters",
> )
> print(scores["span_coref_clusters_accuracy"])
> ```
| Name | Description |
| --------------- | ------------------------------------------------------------------------------------------------------------------- |
| `examples` | The `Example` objects holding both the predictions and the correct gold-standard annotations. ~~Iterable[Example]~~ |
| _keyword-only_ | |
| `output_prefix` | The prefix used for spans representing the final predicted spans. ~~str~~ |
| **RETURNS** | A dictionary containing the scores. ~~Dict[str, Optional[float]]~~ |


@ -0,0 +1,356 @@
---
title: SpanResolver
tag: class,experimental
source: spacy-experimental/coref/span_resolver_component.py
teaser: 'Pipeline component for resolving tokens into spans'
api_base_class: /api/pipe
api_string_name: span_resolver
api_trainable: true
---
> #### Installation
>
> ```bash
> $ pip install -U spacy-experimental
> ```
<Infobox title="Important note" variant="warning">
This component is not yet integrated into spaCy core, and is available via the
extension package
[`spacy-experimental`](https://github.com/explosion/spacy-experimental) starting
in version 0.6.0. It exposes the component via
[entry points](/usage/saving-loading/#entry-points), so if you have the package
installed, using `factory = "experimental_span_resolver"` in your
[training config](/usage/training#config) or
`nlp.add_pipe("experimental_span_resolver")` will work out-of-the-box.
</Infobox>
A `SpanResolver` component takes in tokens (represented as `Span` objects of
length 1) and resolves them into `Span` objects of arbitrary length. The initial
use case is as a post-processing step on word-level
[coreference resolution](/api/coref). The input and output keys used to store
`Span` objects are configurable.
## Assigned Attributes {#assigned-attributes}
Predictions will be saved to `Doc.spans` as [`SpanGroup`s](/api/spangroup).
Input token spans will be read in using an input prefix, by default
`"coref_head_clusters"`, and output spans will be saved using an output prefix
(default `"coref_clusters"`) plus a serial number starting from one. The
prefixes are configurable.
| Location | Value |
| ------------------------------------------------- | ------------------------------------------------------------------------- |
| `Doc.spans[output_prefix + "_" + cluster_number]` | One group of predicted spans. Cluster number starts from 1. ~~SpanGroup~~ |
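To make the relationship between the input and output keys concrete, the sketch
below assumes an `nlp` pipeline that already contains trained
`experimental_coref` and `experimental_span_resolver` components, and that no
`span_cleaner` has removed the head clusters yet (the text is illustrative):

```python
doc = nlp("John called from London, he says it's raining in the city.")

if "coref_head_clusters_1" in doc.spans:
    # Single-token head spans read in by the resolver...
    heads = [span.text for span in doc.spans["coref_head_clusters_1"]]
    # ...and the expanded spans it writes under the output prefix.
    resolved = [span.text for span in doc.spans["coref_clusters_1"]]
    print(heads, "->", resolved)
```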
## Config and implementation {#config}
The default config is defined by the pipeline component factory and describes
how the component should be configured. You can override its settings via the
`config` argument on [`nlp.add_pipe`](/api/language#add_pipe) or in your
[`config.cfg` for training](/usage/training#config). See the
[model architectures](/api/architectures#coref-architectures) documentation for
details on the architectures and their arguments and hyperparameters.
> #### Example
>
> ```python
> from spacy_experimental.coref.span_resolver_component import DEFAULT_SPAN_RESOLVER_MODEL
> from spacy_experimental.coref.coref_util import DEFAULT_CLUSTER_PREFIX, DEFAULT_CLUSTER_HEAD_PREFIX
> config = {
>     "model": DEFAULT_SPAN_RESOLVER_MODEL,
>     "input_prefix": DEFAULT_CLUSTER_HEAD_PREFIX,
>     "output_prefix": DEFAULT_CLUSTER_PREFIX,
> }
> nlp.add_pipe("experimental_span_resolver", config=config)
> ```
| Setting | Description |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `model` | The [`Model`](https://thinc.ai/docs/api-model) powering the pipeline component. Defaults to [SpanResolver](/api/architectures#SpanResolver). ~~Model~~ |
| `input_prefix` | The prefix to use for input `SpanGroup`s. Defaults to `coref_head_clusters`. ~~str~~ |
| `output_prefix` | The prefix for predicted `SpanGroup`s. Defaults to `coref_clusters`. ~~str~~ |
## SpanResolver.\_\_init\_\_ {#init tag="method"}
> #### Example
>
> ```python
> # Construction via add_pipe with default model
> span_resolver = nlp.add_pipe("experimental_span_resolver")
>
> # Construction via add_pipe with custom model
> config = {"model": {"@architectures": "my_span_resolver.v1"}}
> span_resolver = nlp.add_pipe("experimental_span_resolver", config=config)
>
> # Construction from class
> from spacy_experimental.coref.span_resolver_component import SpanResolver
> span_resolver = SpanResolver(nlp.vocab, model)
> ```
Create a new pipeline instance. In your application, you would normally use a
shortcut for this and instantiate the component using its string name and
[`nlp.add_pipe`](/api/language#add_pipe).
| Name | Description |
| --------------- | --------------------------------------------------------------------------------------------------- |
| `vocab` | The shared vocabulary. ~~Vocab~~ |
| `model` | The [`Model`](https://thinc.ai/docs/api-model) powering the pipeline component. ~~Model~~ |
| `name` | String name of the component instance. Used to add entries to the `losses` during training. ~~str~~ |
| _keyword-only_ | |
| `input_prefix` | The prefix to use for input `SpanGroup`s. Defaults to `coref_head_clusters`. ~~str~~ |
| `output_prefix` | The prefix for predicted `SpanGroup`s. Defaults to `coref_clusters`. ~~str~~ |
## SpanResolver.\_\_call\_\_ {#call tag="method"}
Apply the pipe to one document. The document is modified in place and returned.
This usually happens under the hood when the `nlp` object is called on a text
and all pipeline components are applied to the `Doc` in order. Both
[`__call__`](#call) and [`pipe`](#pipe) delegate to the [`predict`](#predict)
and [`set_annotations`](#set_annotations) methods.
> #### Example
>
> ```python
> doc = nlp("This is a sentence.")
> span_resolver = nlp.add_pipe("experimental_span_resolver")
> # This usually happens under the hood
> processed = span_resolver(doc)
> ```
| Name | Description |
| ----------- | -------------------------------- |
| `doc` | The document to process. ~~Doc~~ |
| **RETURNS** | The processed document. ~~Doc~~ |
## SpanResolver.pipe {#pipe tag="method"}
Apply the pipe to a stream of documents. This usually happens under the hood
when the `nlp` object is called on a text and all pipeline components are
applied to the `Doc` in order. Both [`__call__`](/api/span-resolver#call) and
[`pipe`](/api/span-resolver#pipe) delegate to the
[`predict`](/api/span-resolver#predict) and
[`set_annotations`](/api/span-resolver#set_annotations) methods.
> #### Example
>
> ```python
> span_resolver = nlp.add_pipe("experimental_span_resolver")
> for doc in span_resolver.pipe(docs, batch_size=50):
> pass
> ```
| Name | Description |
| -------------- | ------------------------------------------------------------- |
| `stream` | A stream of documents. ~~Iterable[Doc]~~ |
| _keyword-only_ | |
| `batch_size` | The number of documents to buffer. Defaults to `128`. ~~int~~ |
| **YIELDS** | The processed documents in order. ~~Doc~~ |
## SpanResolver.initialize {#initialize tag="method"}
Initialize the component for training. `get_examples` should be a function that
returns an iterable of [`Example`](/api/example) objects. **At least one example
should be supplied.** The data examples are used to **initialize the model** of
the component and can either be the full training data or a representative
sample. Initialization includes validating the network,
[inferring missing shapes](https://thinc.ai/docs/usage-models#validation) and
setting up the label scheme based on the data. This method is typically called
by [`Language.initialize`](/api/language#initialize).
> #### Example
>
> ```python
> span_resolver = nlp.add_pipe("experimental_span_resolver")
> span_resolver.initialize(lambda: examples, nlp=nlp)
> ```
| Name | Description |
| -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `get_examples` | Function that returns gold-standard annotations in the form of [`Example`](/api/example) objects. Must contain at least one `Example`. ~~Callable[[], Iterable[Example]]~~ |
| _keyword-only_ | |
| `nlp` | The current `nlp` object. Defaults to `None`. ~~Optional[Language]~~ |
## SpanResolver.predict {#predict tag="method"}
Apply the component's model to a batch of [`Doc`](/api/doc) objects, without
modifying them. Predictions are returned as a list of `MentionClusters`, one for
each input `Doc`. A `MentionClusters` instance is just a list of lists of pairs
of `int`s, where each item corresponds to an input `SpanGroup`, and the `int`s
correspond to token indices.
> #### Example
>
> ```python
> span_resolver = nlp.add_pipe("experimental_span_resolver")
> spans = span_resolver.predict([doc1, doc2])
> ```
| Name | Description |
| ----------- | ------------------------------------------------------------- |
| `docs` | The documents to predict. ~~Iterable[Doc]~~ |
| **RETURNS** | The predicted spans for the `Doc`s. ~~List[MentionClusters]~~ |
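As with the coreference component, the raw output is plain Python data. A
minimal sketch of interpreting it, assuming `span_resolver` and `doc1` are set
up as in the example above and that each pair is a `(start, end)` token-index
range:

```python
spans = span_resolver.predict([doc1])
# spans[0] holds the predictions for doc1; each inner list corresponds to
# one input SpanGroup and contains (start, end) token-index pairs.
for group in spans[0]:
    print([doc1[start:end].text for start, end in group])
```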
## SpanResolver.set_annotations {#set_annotations tag="method"}
Modify a batch of documents, saving predictions using the output prefix in
`Doc.spans`.
> #### Example
>
> ```python
> span_resolver = nlp.add_pipe("experimental_span_resolver")
> spans = span_resolver.predict([doc1, doc2])
> span_resolver.set_annotations([doc1, doc2], spans)
> ```
| Name | Description |
| ------- | ------------------------------------------------------------- |
| `docs` | The documents to modify. ~~Iterable[Doc]~~ |
| `spans` | The predicted spans for the `docs`. ~~List[MentionClusters]~~ |
## SpanResolver.update {#update tag="method"}
Learn from a batch of [`Example`](/api/example) objects. Delegates to
[`predict`](/api/span-resolver#predict).
> #### Example
>
> ```python
> span_resolver = nlp.add_pipe("experimental_span_resolver")
> optimizer = nlp.initialize()
> losses = span_resolver.update(examples, sgd=optimizer)
> ```
| Name | Description |
| -------------- | ------------------------------------------------------------------------------------------------------------------------ |
| `examples` | A batch of [`Example`](/api/example) objects to learn from. ~~Iterable[Example]~~ |
| _keyword-only_ | |
| `drop` | The dropout rate. ~~float~~ |
| `sgd` | An optimizer. Will be created via [`create_optimizer`](#create_optimizer) if not set. ~~Optional[Optimizer]~~ |
| `losses` | Optional record of the loss during training. Updated using the component name as the key. ~~Optional[Dict[str, float]]~~ |
| **RETURNS** | The updated `losses` dictionary. ~~Dict[str, float]~~ |
## SpanResolver.create_optimizer {#create_optimizer tag="method"}
Create an optimizer for the pipeline component.
> #### Example
>
> ```python
> span_resolver = nlp.add_pipe("experimental_span_resolver")
> optimizer = span_resolver.create_optimizer()
> ```
| Name | Description |
| ----------- | ---------------------------- |
| **RETURNS** | The optimizer. ~~Optimizer~~ |
## SpanResolver.use_params {#use_params tag="method, contextmanager"}
Modify the pipe's model, to use the given parameter values. At the end of the
context, the original parameters are restored.
> #### Example
>
> ```python
> span_resolver = nlp.add_pipe("experimental_span_resolver")
> with span_resolver.use_params(optimizer.averages):
> span_resolver.to_disk("/best_model")
> ```
| Name | Description |
| -------- | -------------------------------------------------- |
| `params` | The parameter values to use in the model. ~~dict~~ |
## SpanResolver.to_disk {#to_disk tag="method"}
Serialize the pipe to disk.
> #### Example
>
> ```python
> span_resolver = nlp.add_pipe("experimental_span_resolver")
> span_resolver.to_disk("/path/to/span_resolver")
> ```
| Name | Description |
| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
| `path` | A path to a directory, which will be created if it doesn't exist. Paths may be either strings or `Path`-like objects. ~~Union[str, Path]~~ |
| _keyword-only_ | |
| `exclude` | String names of [serialization fields](#serialization-fields) to exclude. ~~Iterable[str]~~ |
## SpanResolver.from_disk {#from_disk tag="method"}
Load the pipe from disk. Modifies the object in place and returns it.
> #### Example
>
> ```python
> span_resolver = nlp.add_pipe("experimental_span_resolver")
> span_resolver.from_disk("/path/to/span_resolver")
> ```
| Name | Description |
| -------------- | ----------------------------------------------------------------------------------------------- |
| `path` | A path to a directory. Paths may be either strings or `Path`-like objects. ~~Union[str, Path]~~ |
| _keyword-only_ | |
| `exclude` | String names of [serialization fields](#serialization-fields) to exclude. ~~Iterable[str]~~ |
| **RETURNS** | The modified `SpanResolver` object. ~~SpanResolver~~ |
## SpanResolver.to_bytes {#to_bytes tag="method"}
> #### Example
>
> ```python
> span_resolver = nlp.add_pipe("experimental_span_resolver")
> span_resolver_bytes = span_resolver.to_bytes()
> ```
Serialize the pipe to a bytestring.
| Name | Description |
| -------------- | ------------------------------------------------------------------------------------------- |
| _keyword-only_ | |
| `exclude` | String names of [serialization fields](#serialization-fields) to exclude. ~~Iterable[str]~~ |
| **RETURNS** | The serialized form of the `SpanResolver` object. ~~bytes~~ |
## SpanResolver.from_bytes {#from_bytes tag="method"}
Load the pipe from a bytestring. Modifies the object in place and returns it.
> #### Example
>
> ```python
> span_resolver_bytes = span_resolver.to_bytes()
> span_resolver = nlp.add_pipe("experimental_span_resolver")
> span_resolver.from_bytes(span_resolver_bytes)
> ```
| Name | Description |
| -------------- | ------------------------------------------------------------------------------------------- |
| `bytes_data` | The data to load from. ~~bytes~~ |
| _keyword-only_ | |
| `exclude` | String names of [serialization fields](#serialization-fields) to exclude. ~~Iterable[str]~~ |
| **RETURNS** | The `SpanResolver` object. ~~SpanResolver~~ |
## Serialization fields {#serialization-fields}
During serialization, spaCy will export several data fields used to restore
different aspects of the object. If needed, you can exclude them from
serialization by passing in the string names via the `exclude` argument.
> #### Example
>
> ```python
> data = span_resolver.to_disk("/path", exclude=["vocab"])
> ```
| Name | Description |
| ------- | -------------------------------------------------------------- |
| `vocab` | The shared [`Vocab`](/api/vocab). |
| `cfg` | The config file. You usually don't want to exclude this. |
| `model` | The binary model data. You usually don't want to exclude this. |


@ -45,16 +45,16 @@ specified separately using the new `exclude` keyword argument.
> nlp = spacy.load("en_core_web_sm", exclude=["parser", "tagger"]) > nlp = spacy.load("en_core_web_sm", exclude=["parser", "tagger"])
> ``` > ```
| Name | Description | | Name | Description |
| ------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | ------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `name` | Pipeline to load, i.e. package name or path. ~~Union[str, Path]~~ | | `name` | Pipeline to load, i.e. package name or path. ~~Union[str, Path]~~ |
| _keyword-only_ | | | _keyword-only_ | |
| `vocab` | Optional shared vocab to pass in on initialization. If `True` (default), a new `Vocab` object will be created. ~~Union[Vocab, bool]~~ | | `vocab` | Optional shared vocab to pass in on initialization. If `True` (default), a new `Vocab` object will be created. ~~Union[Vocab, bool]~~ |
| `disable` | Names of pipeline components to [disable](/usage/processing-pipelines#disabling). Disabled pipes will be loaded but they won't be run unless you explicitly enable them by calling [nlp.enable_pipe](/api/language#enable_pipe). ~~List[str]~~ | | `disable` | Name(s) of pipeline component(s) to [disable](/usage/processing-pipelines#disabling). Disabled pipes will be loaded but they won't be run unless you explicitly enable them by calling [nlp.enable_pipe](/api/language#enable_pipe). ~~Union[str, Iterable[str]]~~ |
| `enable` | Names of pipeline components to [enable](/usage/processing-pipelines#disabling). All other pipes will be disabled. ~~List[str]~~ | | `enable` <Tag variant="new">3.4</Tag> | Name(s) of pipeline component(s) to [enable](/usage/processing-pipelines#disabling). All other pipes will be disabled. ~~Union[str, Iterable[str]]~~ |
| `exclude` <Tag variant="new">3</Tag> | Names of pipeline components to [exclude](/usage/processing-pipelines#disabling). Excluded components won't be loaded. ~~List[str]~~ | | `exclude` <Tag variant="new">3</Tag> | Name(s) of pipeline component(s) to [exclude](/usage/processing-pipelines#disabling). Excluded components won't be loaded. ~~Union[str, Iterable[str]]~~ |
| `config` <Tag variant="new">3</Tag> | Optional config overrides, either as nested dict or dict keyed by section value in dot notation, e.g. `"components.name.value"`. ~~Union[Dict[str, Any], Config]~~ | | `config` <Tag variant="new">3</Tag> | Optional config overrides, either as nested dict or dict keyed by section value in dot notation, e.g. `"components.name.value"`. ~~Union[Dict[str, Any], Config]~~ |
| **RETURNS** | A `Language` object with the loaded pipeline. ~~Language~~ | | **RETURNS** | A `Language` object with the loaded pipeline. ~~Language~~ |
Essentially, `spacy.load()` is a convenience wrapper that reads the pipeline's Essentially, `spacy.load()` is a convenience wrapper that reads the pipeline's
[`config.cfg`](/api/data-formats#config), uses the language and pipeline [`config.cfg`](/api/data-formats#config), uses the language and pipeline
@ -275,8 +275,8 @@ Render a dependency parse tree or named entity visualization.
### displacy.parse_deps {#displacy.parse_deps tag="method" new="2"} ### displacy.parse_deps {#displacy.parse_deps tag="method" new="2"}
Generate dependency parse in `{'words': [], 'arcs': []}` format. Generate dependency parse in `{'words': [], 'arcs': []}` format. For use with
For use with the `manual=True` argument in `displacy.render`. the `manual=True` argument in `displacy.render`.
> #### Example > #### Example
> >
@ -297,8 +297,8 @@ For use with the `manual=True` argument in `displacy.render`.
### displacy.parse_ents {#displacy.parse_ents tag="method" new="2"} ### displacy.parse_ents {#displacy.parse_ents tag="method" new="2"}
Generate named entities in `[{start: i, end: i, label: 'label'}]` format. Generate named entities in `[{start: i, end: i, label: 'label'}]` format. For
For use with the `manual=True` argument in `displacy.render`. use with the `manual=True` argument in `displacy.render`.
> #### Example > #### Example
> >
@ -319,8 +319,8 @@ For use with the `manual=True` argument in `displacy.render`.
### displacy.parse_spans {#displacy.parse_spans tag="method" new="2"} ### displacy.parse_spans {#displacy.parse_spans tag="method" new="2"}
Generate spans in `[{start_token: i, end_token: i, label: 'label'}]` format. Generate spans in `[{start_token: i, end_token: i, label: 'label'}]` format. For
For use with the `manual=True` argument in `displacy.render`. use with the `manual=True` argument in `displacy.render`.
> #### Example > #### Example
> >
@ -451,7 +451,7 @@ factories.
| Registry name | Description | | Registry name | Description |
| ----------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | ----------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `architectures` | Registry for functions that create [model architectures](/api/architectures). Can be used to register custom model architectures and reference them in the `config.cfg`. | | `architectures` | Registry for functions that create [model architectures](/api/architectures). Can be used to register custom model architectures and reference them in the `config.cfg`. |
| `augmenters` | Registry for functions that create [data augmentation](#augmenters) callbacks for corpora and other training data iterators. | | `augmenters` | Registry for functions that create [data augmentation](#augmenters) callbacks for corpora and other training data iterators. |
| `batchers` | Registry for training and evaluation [data batchers](#batchers). | | `batchers` | Registry for training and evaluation [data batchers](#batchers). |
| `callbacks` | Registry for custom callbacks to [modify the `nlp` object](/usage/training#custom-code-nlp-callbacks) before training. | | `callbacks` | Registry for custom callbacks to [modify the `nlp` object](/usage/training#custom-code-nlp-callbacks) before training. |
| `displacy_colors` | Registry for custom color scheme for the [`displacy` NER visualizer](/usage/visualizers). Automatically reads from [entry points](/usage/saving-loading#entry-points). | | `displacy_colors` | Registry for custom color scheme for the [`displacy` NER visualizer](/usage/visualizers). Automatically reads from [entry points](/usage/saving-loading#entry-points). |
@ -505,7 +505,7 @@ finished. To log each training step, a
and the accuracy scores on the development set. and the accuracy scores on the development set.
The built-in, default logger is the ConsoleLogger, which prints results to the The built-in, default logger is the ConsoleLogger, which prints results to the
console in tabular format. The console in tabular format and saves them to a `jsonl` file. The
[spacy-loggers](https://github.com/explosion/spacy-loggers) package, included as [spacy-loggers](https://github.com/explosion/spacy-loggers) package, included as
a dependency of spaCy, enables other loggers, such as one that sends results to a dependency of spaCy, enables other loggers, such as one that sends results to
a [Weights & Biases](https://www.wandb.com/) dashboard. a [Weights & Biases](https://www.wandb.com/) dashboard.
@ -513,16 +513,20 @@ a [Weights & Biases](https://www.wandb.com/) dashboard.
Instead of using one of the built-in loggers, you can Instead of using one of the built-in loggers, you can
[implement your own](/usage/training#custom-logging). [implement your own](/usage/training#custom-logging).
#### spacy.ConsoleLogger.v1 {#ConsoleLogger tag="registered function"} #### spacy.ConsoleLogger.v2 {#ConsoleLogger tag="registered function"}
> #### Example config > #### Example config
> >
> ```ini > ```ini
> [training.logger] > [training.logger]
> @loggers = "spacy.ConsoleLogger.v1" > @loggers = "spacy.ConsoleLogger.v2"
> progress_bar = true
> console_output = true
> output_file = "training_log.jsonl"
> ``` > ```
Writes the results of a training step to the console in a tabular format. Writes the results of a training step to the console in a tabular format and
saves them to a `jsonl` file.
<Accordion title="Example console output" spaced> <Accordion title="Example console output" spaced>
@ -536,22 +540,23 @@ $ python -m spacy train config.cfg
Pipeline: ['tok2vec', 'tagger'] Pipeline: ['tok2vec', 'tagger']
Start training Start training
Training. Initial learn rate: 0.0 Training. Initial learn rate: 0.0
Saving results to training_log.jsonl
E # LOSS TOK2VEC LOSS TAGGER TAG_ACC SCORE E # LOSS TOK2VEC LOSS TAGGER TAG_ACC SCORE
--- ------ ------------ ----------- ------- ------ --- ------ ------------ ----------- ------- ------
1 0 0.00 86.20 0.22 0.00 0 0 0.00 86.20 0.22 0.00
1 200 3.08 18968.78 34.00 0.34 0 200 3.08 18968.78 34.00 0.34
1 400 31.81 22539.06 33.64 0.34 0 400 31.81 22539.06 33.64 0.34
1 600 92.13 22794.91 43.80 0.44 0 600 92.13 22794.91 43.80 0.44
1 800 183.62 21541.39 56.05 0.56 0 800 183.62 21541.39 56.05 0.56
1 1000 352.49 25461.82 65.15 0.65 0 1000 352.49 25461.82 65.15 0.65
1 1200 422.87 23708.82 71.84 0.72 0 1200 422.87 23708.82 71.84 0.72
1 1400 601.92 24994.79 76.57 0.77 0 1400 601.92 24994.79 76.57 0.77
1 1600 662.57 22268.02 80.20 0.80 0 1600 662.57 22268.02 80.20 0.80
1 1800 1101.50 28413.77 82.56 0.83 0 1800 1101.50 28413.77 82.56 0.83
1 2000 1253.43 28736.36 85.00 0.85 0 2000 1253.43 28736.36 85.00 0.85
1 2200 1411.02 28237.53 87.42 0.87 0 2200 1411.02 28237.53 87.42 0.87
1 2400 1605.35 28439.95 88.70 0.89 0 2400 1605.35 28439.95 88.70 0.89
``` ```
Note that the cumulative loss keeps increasing within one epoch, but should Note that the cumulative loss keeps increasing within one epoch, but should
@ -559,6 +564,12 @@ start decreasing across epochs.
</Accordion> </Accordion>
| Name | Description |
| ---------------- | --------------------------------------------------------------------- |
| `progress_bar` | Whether the logger should print the progress bar. ~~bool~~ |
| `console_output` | Whether the logger should print the logs on the console. ~~bool~~ |
| `output_file` | The file to save the training logs to. ~~Optional[Union[str, Path]]~~ |
## Readers {#readers} ## Readers {#readers}
### File readers {#file-readers source="github.com/explosion/srsly" new="3"} ### File readers {#file-readers source="github.com/explosion/srsly" new="3"}
@ -876,6 +887,27 @@ backprop passes.
| `backprop_color` | Color identifier for backpropagation passes. Defaults to `-1`. ~~int~~ | | `backprop_color` | Color identifier for backpropagation passes. Defaults to `-1`. ~~int~~ |
| **CREATES** | A function that takes the current `nlp` and wraps forward/backprop passes in NVTX ranges. ~~Callable[[Language], Language]~~ | | **CREATES** | A function that takes the current `nlp` and wraps forward/backprop passes in NVTX ranges. ~~Callable[[Language], Language]~~ |
### spacy.models_and_pipes_with_nvtx_range.v1 {#models_and_pipes_with_nvtx_range tag="registered function" new="3.4"}
> #### Example config
>
> ```ini
> [nlp]
> after_pipeline_creation = {"@callbacks":"spacy.models_and_pipes_with_nvtx_range.v1"}
> ```
Recursively wrap both the models and methods of each pipe using
[NVTX](https://nvidia.github.io/NVTX/) range markers. By default, the following
methods are wrapped: `pipe`, `predict`, `set_annotations`, `update`, `rehearse`,
`get_loss`, `initialize`, `begin_update`, `finish_update`, `update`.
| Name | Description |
| --------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `forward_color` | Color identifier for model forward passes. Defaults to `-1`. ~~int~~ |
| `backprop_color` | Color identifier for model backpropagation passes. Defaults to `-1`. ~~int~~ |
| `additional_pipe_functions` | Additional pipeline methods to wrap. Keys are pipeline names and values are lists of method identifiers. Defaults to `None`. ~~Optional[Dict[str, List[str]]]~~ |
| **CREATES** | A function that takes the current `nlp` and wraps pipe models and methods in NVTX ranges. ~~Callable[[Language], Language]~~ |
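As a rough sketch of how `additional_pipe_functions` could be used outside the config, the snippet below fetches the registered callback and applies it to a pipeline. It assumes an environment where NVTX ranges can be emitted (typically CUDA with CuPy); the pipe name `tagger` and the extra method `score` are illustrative only.

```python
# Hedged sketch: fetch the registered NVTX callback and apply it to a pipeline.
# Emitting NVTX ranges assumes a CUDA/CuPy environment; the pipe name "tagger"
# and the extra method "score" are illustrative only.
import spacy

nlp = spacy.blank("en")
nlp.add_pipe("tagger")

make_wrapper = spacy.registry.callbacks.get("spacy.models_and_pipes_with_nvtx_range.v1")
wrap_with_nvtx = make_wrapper(
    forward_color=0,
    backprop_color=1,
    additional_pipe_functions={"tagger": ["score"]},  # wrap extra methods per pipe
)
nlp = wrap_with_nvtx(nlp)  # same pipeline, with models and methods wrapped in NVTX ranges
```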
## Training data and alignment {#gold source="spacy/training"} ## Training data and alignment {#gold source="spacy/training"}
### training.offsets_to_biluo_tags {#offsets_to_biluo_tags tag="function"} ### training.offsets_to_biluo_tags {#offsets_to_biluo_tags tag="function"}
@ -1038,15 +1070,16 @@ and create a `Language` object. The model data will then be loaded in via
> nlp = util.load_model("/path/to/data") > nlp = util.load_model("/path/to/data")
> ``` > ```
| Name | Description | | Name | Description |
| ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | ------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `name` | Package name or path. ~~str~~ | | `name` | Package name or path. ~~str~~ |
| _keyword-only_ | | | _keyword-only_ | |
| `vocab` | Optional shared vocab to pass in on initialization. If `True` (default), a new `Vocab` object will be created. ~~Union[Vocab, bool]~~ | | `vocab` | Optional shared vocab to pass in on initialization. If `True` (default), a new `Vocab` object will be created. ~~Union[Vocab, bool]~~ |
| `disable` | Names of pipeline components to [disable](/usage/processing-pipelines#disabling). Disabled pipes will be loaded but they won't be run unless you explicitly enable them by calling [`nlp.enable_pipe`](/api/language#enable_pipe). ~~List[str]~~ | | `disable` | Name(s) of pipeline component(s) to [disable](/usage/processing-pipelines#disabling). Disabled pipes will be loaded but they won't be run unless you explicitly enable them by calling [`nlp.enable_pipe`](/api/language#enable_pipe). ~~Union[str, Iterable[str]]~~ |
| `exclude` <Tag variant="new">3</Tag> | Names of pipeline components to [exclude](/usage/processing-pipelines#disabling). Excluded components won't be loaded. ~~List[str]~~ | | `enable` <Tag variant="new">3.4</Tag> | Name(s) of pipeline component(s) to [enable](/usage/processing-pipelines#disabling). All other pipes will be disabled, but can be enabled again using [`nlp.enable_pipe`](/api/language#enable_pipe). ~~Union[str, Iterable[str]]~~ |
| `config` <Tag variant="new">3</Tag> | Config overrides as nested dict or flat dict keyed by section values in dot notation, e.g. `"nlp.pipeline"`. ~~Union[Dict[str, Any], Config]~~ | | `exclude` | Name(s) of pipeline component(s) to [exclude](/usage/processing-pipelines#disabling). Excluded components won't be loaded. ~~Union[str, Iterable[str]]~~ |
| **RETURNS** | `Language` class with the loaded pipeline. ~~Language~~ | | `config` <Tag variant="new">3</Tag> | Config overrides as nested dict or flat dict keyed by section values in dot notation, e.g. `"nlp.pipeline"`. ~~Union[Dict[str, Any], Config]~~ |
| **RETURNS** | `Language` class with the loaded pipeline. ~~Language~~ |
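For illustration, a small sketch of the new `enable` behavior, assuming the `en_core_web_sm` package is installed:

```python
# Hedged sketch: load a pipeline but enable only the NER component.
# Assumes the "en_core_web_sm" package is installed.
from spacy import util

nlp = util.load_model("en_core_web_sm", enable=["ner"])
print(nlp.component_names)  # all components are loaded
print(nlp.pipe_names)       # only the enabled components run, here ["ner"]
print(nlp.disabled)         # everything else is listed as disabled
```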
### util.load_model_from_init_py {#util.load_model_from_init_py tag="function" new="2"} ### util.load_model_from_init_py {#util.load_model_from_init_py tag="function" new="2"}
@ -1062,15 +1095,16 @@ A helper function to use in the `load()` method of a pipeline package's
> return load_model_from_init_py(__file__, **overrides) > return load_model_from_init_py(__file__, **overrides)
> ``` > ```
| Name | Description | | Name | Description |
| ------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | ------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `init_file` | Path to package's `__init__.py`, i.e. `__file__`. ~~Union[str, Path]~~ | | `init_file` | Path to package's `__init__.py`, i.e. `__file__`. ~~Union[str, Path]~~ |
| _keyword-only_ | | | _keyword-only_ | |
| `vocab` <Tag variant="new">3</Tag> | Optional shared vocab to pass in on initialization. If `True` (default), a new `Vocab` object will be created. ~~Union[Vocab, bool]~~ | | `vocab` <Tag variant="new">3</Tag> | Optional shared vocab to pass in on initialization. If `True` (default), a new `Vocab` object will be created. ~~Union[Vocab, bool]~~ |
| `disable` | Names of pipeline components to [disable](/usage/processing-pipelines#disabling). Disabled pipes will be loaded but they won't be run unless you explicitly enable them by calling [nlp.enable_pipe](/api/language#enable_pipe). ~~List[str]~~ | | `disable` | Name(s) of pipeline component(s) to [disable](/usage/processing-pipelines#disabling). Disabled pipes will be loaded but they won't be run unless you explicitly enable them by calling [`nlp.enable_pipe`](/api/language#enable_pipe). ~~Union[str, Iterable[str]]~~ |
| `exclude` <Tag variant="new">3</Tag> | Names of pipeline components to [exclude](/usage/processing-pipelines#disabling). Excluded components won't be loaded. ~~List[str]~~ | | `enable` <Tag variant="new">3.4</Tag> | Name(s) of pipeline component(s) to [enable](/usage/processing-pipelines#disabling). All other pipes will be disabled, but can be enabled again using [`nlp.enable_pipe`](/api/language#enable_pipe). ~~Union[str, Iterable[str]]~~ |
| `config` <Tag variant="new">3</Tag> | Config overrides as nested dict or flat dict keyed by section values in dot notation, e.g. `"nlp.pipeline"`. ~~Union[Dict[str, Any], Config]~~ | | `exclude` <Tag variant="new">3</Tag> | Name(s) of pipeline component(s) to [exclude](/usage/processing-pipelines#disabling). Excluded components won't be loaded. ~~Union[str, Iterable[str]]~~ |
| **RETURNS** | `Language` class with the loaded pipeline. ~~Language~~ | | `config` <Tag variant="new">3</Tag> | Config overrides as nested dict or flat dict keyed by section values in dot notation, e.g. `"nlp.pipeline"`. ~~Union[Dict[str, Any], Config]~~ |
| **RETURNS** | `Language` class with the loaded pipeline. ~~Language~~ |
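For context, a pipeline package's `__init__.py` typically looks roughly like the sketch below. Packages generated by [`spacy package`](/api/cli#package) also read the package version and other metadata; this only shows the `load` entry point.

```python
# Hedged sketch of a pipeline package's __init__.py. Packages generated by
# `spacy package` also read the package metadata; this only shows load().
from spacy.util import load_model_from_init_py

def load(**overrides):
    # __file__ points at this __init__.py; the bundled model data sits next to
    # it and any keyword overrides (vocab, disable, enable, ...) are passed on.
    return load_model_from_init_py(__file__, **overrides)
```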
### util.load_config {#util.load_config tag="function" new="3"} ### util.load_config {#util.load_config tag="function" new="3"}
@ -1422,9 +1422,9 @@ other_tokens = ["i", "listened", "to", "obama", "'", "s", "podcasts", "."]
spacy_tokens = ["i", "listened", "to", "obama", "'s", "podcasts", "."] spacy_tokens = ["i", "listened", "to", "obama", "'s", "podcasts", "."]
align = Alignment.from_strings(other_tokens, spacy_tokens) align = Alignment.from_strings(other_tokens, spacy_tokens)
print(f"a -> b, lengths: {align.x2y.lengths}") # array([1, 1, 1, 1, 1, 1, 1, 1]) print(f"a -> b, lengths: {align.x2y.lengths}") # array([1, 1, 1, 1, 1, 1, 1, 1])
print(f"a -> b, mapping: {align.x2y.dataXd}") # array([0, 1, 2, 3, 4, 4, 5, 6]) : two tokens both refer to "'s" print(f"a -> b, mapping: {align.x2y.data}") # array([0, 1, 2, 3, 4, 4, 5, 6]) : two tokens both refer to "'s"
print(f"b -> a, lengths: {align.y2x.lengths}") # array([1, 1, 1, 1, 2, 1, 1]) : the token "'s" refers to two tokens print(f"b -> a, lengths: {align.y2x.lengths}") # array([1, 1, 1, 1, 2, 1, 1]) : the token "'s" refers to two tokens
print(f"b -> a, mappings: {align.y2x.dataXd}") # array([0, 1, 2, 3, 4, 5, 6, 7]) print(f"b -> a, mappings: {align.y2x.data}") # array([0, 1, 2, 3, 4, 5, 6, 7])
``` ```
Here are some insights from the alignment information generated in the example Here are some insights from the alignment information generated in the example
@ -1433,10 +1433,10 @@ above:
- The one-to-one mappings for the first four tokens are identical, which means - The one-to-one mappings for the first four tokens are identical, which means
they map to each other. This makes sense because they're also identical in the they map to each other. This makes sense because they're also identical in the
input: `"i"`, `"listened"`, `"to"` and `"obama"`. input: `"i"`, `"listened"`, `"to"` and `"obama"`.
- The value of `x2y.dataXd[6]` is `5`, which means that `other_tokens[6]` - The value of `x2y.data[6]` is `5`, which means that `other_tokens[6]`
(`"podcasts"`) aligns to `spacy_tokens[5]` (also `"podcasts"`). (`"podcasts"`) aligns to `spacy_tokens[5]` (also `"podcasts"`).
- `x2y.dataXd[4]` and `x2y.dataXd[5]` are both `4`, which means that both tokens - `x2y.data[4]` and `x2y.data[5]` are both `4`, which means that both tokens 4
4 and 5 of `other_tokens` (`"'"` and `"s"`) align to token 4 of `spacy_tokens` and 5 of `other_tokens` (`"'"` and `"s"`) align to token 4 of `spacy_tokens`
(`"'s"`). (`"'s"`).
<Infobox title="Important note" variant="warning"> <Infobox title="Important note" variant="warning">
@ -365,15 +365,32 @@ pipeline package can be found.
To download a trained pipeline directly using To download a trained pipeline directly using
[pip](https://pypi.python.org/pypi/pip), point `pip install` to the URL or local [pip](https://pypi.python.org/pypi/pip), point `pip install` to the URL or local
path of the wheel file or archive. Installing the wheel is usually more path of the wheel file or archive. Installing the wheel is usually more
efficient. To find the direct link to a package, head over to the efficient.
[releases](https://github.com/explosion/spacy-models/releases), right click on
the archive link and copy it to your clipboard. > #### Pipeline Package URLs {#pipeline-urls}
>
> Pretrained pipeline distributions are hosted on
> [GitHub Releases](https://github.com/explosion/spacy-models/releases), and you
> can find download links there, as well as on the model page. You can also get
> URLs directly from the command line by using `spacy info` with the `--url`
> flag, which may be useful for automation.
>
> ```bash
> spacy info en_core_web_sm --url
> ```
>
> This command will print the URL for the latest version of a pipeline
> compatible with the version of spaCy you're using. Note that an internet
> connection is required to look up the compatibility information.
```bash ```bash
# With external URL # With external URL
$ pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.0.0/en_core_web_sm-3.0.0-py3-none-any.whl $ pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.0.0/en_core_web_sm-3.0.0-py3-none-any.whl
$ pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.0.0/en_core_web_sm-3.0.0.tar.gz $ pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.0.0/en_core_web_sm-3.0.0.tar.gz
# Using spacy info to get the external URL
$ pip install $(spacy info en_core_web_sm --url)
# With local file # With local file
$ pip install /Users/you/en_core_web_sm-3.0.0-py3-none-any.whl $ pip install /Users/you/en_core_web_sm-3.0.0-py3-none-any.whl
$ pip install /Users/you/en_core_web_sm-3.0.0.tar.gz $ pip install /Users/you/en_core_web_sm-3.0.0.tar.gz
@ -514,21 +531,16 @@ should be specifying them directly.
Because pipeline packages are valid Python packages, you can add them to your Because pipeline packages are valid Python packages, you can add them to your
application's `requirements.txt`. If you're running your own internal PyPi application's `requirements.txt`. If you're running your own internal PyPi
installation, you can upload the pipeline packages there. pip's installation, you can upload the pipeline packages there. pip's
[requirements file format](https://pip.pypa.io/en/latest/reference/pip_install/#requirements-file-format) [requirements file format](https://pip.pypa.io/en/latest/reference/requirements-file-format/)
supports both package names to download via a PyPi server, as well as direct supports both package names to download via a PyPi server, as well as
URLs. [direct URLs](#pipeline-urls).
```text ```text
### requirements.txt ### requirements.txt
spacy>=3.0.0,<4.0.0 spacy>=3.0.0,<4.0.0
https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.0.0/en_core_web_sm-3.0.0.tar.gz#egg=en_core_web_sm en_core_web_sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.4.0/en_core_web_sm-3.4.0-py3-none-any.whl
``` ```
Specifying `#egg=` with the package name tells pip which package to expect from
the download URL. This way, the package won't be re-downloaded and overwritten
if it's already installed - just like when you're downloading a package from
PyPi.
All pipeline packages are versioned and specify their spaCy dependency. This All pipeline packages are versioned and specify their spaCy dependency. This
ensures cross-compatibility and lets you specify exact version requirements for ensures cross-compatibility and lets you specify exact version requirements for
each pipeline. If you've [trained](/usage/training) your own pipeline, you can each pipeline. If you've [trained](/usage/training) your own pipeline, you can
@ -148,6 +148,13 @@ skipped. You can also set `--force` to force re-running a command, or `--dry` to
perform a "dry run" and see what would happen (without actually running the perform a "dry run" and see what would happen (without actually running the
script). script).
Since spaCy v3.4.2, `spacy project run` checks your installed dependencies to
verify that your environment is properly set up and aligns with the project's
`requirements.txt`, if there is one. If missing or conflicting dependencies are
detected, a corresponding warning is displayed. If you'd like to disable the
dependency check, set `check_requirements: false` in your project's
`project.yml`.
### 4. Run a workflow {#run-workfow} ### 4. Run a workflow {#run-workfow}
> #### project.yml > #### project.yml
@ -226,26 +233,28 @@ pipelines.
```yaml ```yaml
%%GITHUB_PROJECTS/pipelines/tagger_parser_ud/project.yml %%GITHUB_PROJECTS/pipelines/tagger_parser_ud/project.yml
``` ```
> #### Tip: Overriding variables on the CLI > #### Tip: Overriding variables on the CLI
> >
> If you want to override one or more variables on the CLI and are not already specifying a > If you want to override one or more variables on the CLI and are not already
> project directory, you need to add `.` as a placeholder: > specifying a project directory, you need to add `.` as a placeholder:
> >
> ``` > ```
> python -m spacy project run test . --vars.foo bar > python -m spacy project run test . --vars.foo bar
> ``` > ```
| Section | Description | | Section | Description |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | --------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `title` | An optional project title used in `--help` message and [auto-generated docs](#custom-docs). | | `title` | An optional project title used in `--help` message and [auto-generated docs](#custom-docs). |
| `description` | An optional project description used in [auto-generated docs](#custom-docs). | | `description` | An optional project description used in [auto-generated docs](#custom-docs). |
| `vars` | A dictionary of variables that can be referenced in paths, URLs and scripts and overriden on the CLI, just like [`config.cfg` variables](/usage/training#config-interpolation). For example, `${vars.name}` will use the value of the variable `name`. Variables need to be defined in the section `vars`, but can be a nested dict, so you're able to reference `${vars.model.name}`. | | `vars` | A dictionary of variables that can be referenced in paths, URLs and scripts and overriden on the CLI, just like [`config.cfg` variables](/usage/training#config-interpolation). For example, `${vars.name}` will use the value of the variable `name`. Variables need to be defined in the section `vars`, but can be a nested dict, so you're able to reference `${vars.model.name}`. |
| `env` | A dictionary of variables, mapped to the names of environment variables that will be read in when running the project. For example, `${env.name}` will use the value of the environment variable defined as `name`. | | `env` | A dictionary of variables, mapped to the names of environment variables that will be read in when running the project. For example, `${env.name}` will use the value of the environment variable defined as `name`. |
| `directories` | An optional list of [directories](#project-files) that should be created in the project for assets, training outputs, metrics etc. spaCy will make sure that these directories always exist. | | `directories` | An optional list of [directories](#project-files) that should be created in the project for assets, training outputs, metrics etc. spaCy will make sure that these directories always exist. |
| `assets` | A list of assets that can be fetched with the [`project assets`](/api/cli#project-assets) command. `url` defines a URL or local path, `dest` is the destination file relative to the project directory, and an optional `checksum` ensures that an error is raised if the file's checksum doesn't match. Instead of `url`, you can also provide a `git` block with the keys `repo`, `branch` and `path`, to download from a Git repo. | | `assets` | A list of assets that can be fetched with the [`project assets`](/api/cli#project-assets) command. `url` defines a URL or local path, `dest` is the destination file relative to the project directory, and an optional `checksum` ensures that an error is raised if the file's checksum doesn't match. Instead of `url`, you can also provide a `git` block with the keys `repo`, `branch` and `path`, to download from a Git repo. |
| `workflows` | A dictionary of workflow names, mapped to a list of command names, to execute in order. Workflows can be run with the [`project run`](/api/cli#project-run) command. | | `workflows` | A dictionary of workflow names, mapped to a list of command names, to execute in order. Workflows can be run with the [`project run`](/api/cli#project-run) command. |
| `commands` | A list of named commands. A command can define an optional help message (shown in the CLI when the user adds `--help`) and the `script`, a list of commands to run. The `deps` and `outputs` let you define the created file the command depends on and produces, respectively. This lets spaCy determine whether a command needs to be re-run because its dependencies or outputs changed. Commands can be run as part of a workflow, or separately with the [`project run`](/api/cli#project-run) command. | | `commands` | A list of named commands. A command can define an optional help message (shown in the CLI when the user adds `--help`) and the `script`, a list of commands to run. The `deps` and `outputs` let you define the created file the command depends on and produces, respectively. This lets spaCy determine whether a command needs to be re-run because its dependencies or outputs changed. Commands can be run as part of a workflow, or separately with the [`project run`](/api/cli#project-run) command. |
| `spacy_version` | Optional spaCy version range like `>=3.0.0,<3.1.0` that the project is compatible with. If it's loaded with an incompatible version, an error is raised when the project is loaded. | | `spacy_version` | Optional spaCy version range like `>=3.0.0,<3.1.0` that the project is compatible with. If it's loaded with an incompatible version, an error is raised when the project is loaded. |
| `check_requirements` <Tag variant="new">3.4.2</Tag> | A flag determining whether to verify that the installed dependencies align with the project's `requirements.txt`. Defaults to `true`. |
### Data assets {#data-assets} ### Data assets {#data-assets}
@ -758,7 +767,7 @@ and [`dvc repro`](https://dvc.org/doc/command-reference/repro) to reproduce the
workflow or individual commands. workflow or individual commands.
```cli ```cli
$ python -m spacy project dvc [workflow_name] $ python -m spacy project dvc [project_dir] [workflow_name]
``` ```
<Infobox title="Important note for multiple workflows" variant="warning"> <Infobox title="Important note for multiple workflows" variant="warning">
@ -65,10 +65,10 @@ The English CNN pipelines have new word vectors:
| Package | Model Version | TAG | Parser LAS | NER F | | Package | Model Version | TAG | Parser LAS | NER F |
| ----------------------------------------------- | ------------- | ---: | ---------: | ----: | | ----------------------------------------------- | ------------- | ---: | ---------: | ----: |
| [`en_core_news_md`](/models/en#en_core_news_md) | v3.3.0 | 97.3 | 90.1 | 84.6 | | [`en_core_web_md`](/models/en#en_core_web_md) | v3.3.0 | 97.3 | 90.1 | 84.6 |
| [`en_core_news_md`](/models/en#en_core_news_lg) | v3.4.0 | 97.2 | 90.3 | 85.5 | | [`en_core_web_md`](/models/en#en_core_web_lg) | v3.4.0 | 97.2 | 90.3 | 85.5 |
| [`en_core_news_lg`](/models/en#en_core_news_md) | v3.3.0 | 97.4 | 90.1 | 85.3 | | [`en_core_web_lg`](/models/en#en_core_web_md) | v3.3.0 | 97.4 | 90.1 | 85.3 |
| [`en_core_news_lg`](/models/en#en_core_news_lg) | v3.4.0 | 97.3 | 90.2 | 85.6 | | [`en_core_web_lg`](/models/en#en_core_web_lg) | v3.4.0 | 97.3 | 90.2 | 85.6 |
## Notes about upgrading from v3.3 {#upgrading} ## Notes about upgrading from v3.3 {#upgrading}
@ -265,6 +265,11 @@
"name": "Luxembourgish", "name": "Luxembourgish",
"has_examples": true "has_examples": true
}, },
{
"code": "lg",
"name": "Luganda",
"has_examples": true
},
{ {
"code": "lij", "code": "lij",
"name": "Ligurian", "name": "Ligurian",
@ -12,7 +12,6 @@
{ "text": "New in v3.0", "url": "/usage/v3" }, { "text": "New in v3.0", "url": "/usage/v3" },
{ "text": "New in v3.1", "url": "/usage/v3-1" }, { "text": "New in v3.1", "url": "/usage/v3-1" },
{ "text": "New in v3.2", "url": "/usage/v3-2" }, { "text": "New in v3.2", "url": "/usage/v3-2" },
{ "text": "New in v3.2", "url": "/usage/v3-2" },
{ "text": "New in v3.3", "url": "/usage/v3-3" }, { "text": "New in v3.3", "url": "/usage/v3-3" },
{ "text": "New in v3.4", "url": "/usage/v3-4" } { "text": "New in v3.4", "url": "/usage/v3-4" }
] ]
@ -95,6 +94,7 @@
"label": "Pipeline", "label": "Pipeline",
"items": [ "items": [
{ "text": "AttributeRuler", "url": "/api/attributeruler" }, { "text": "AttributeRuler", "url": "/api/attributeruler" },
{ "text": "CoreferenceResolver", "url": "/api/coref" },
{ "text": "DependencyParser", "url": "/api/dependencyparser" }, { "text": "DependencyParser", "url": "/api/dependencyparser" },
{ "text": "EditTreeLemmatizer", "url": "/api/edittreelemmatizer" }, { "text": "EditTreeLemmatizer", "url": "/api/edittreelemmatizer" },
{ "text": "EntityLinker", "url": "/api/entitylinker" }, { "text": "EntityLinker", "url": "/api/entitylinker" },
@ -105,6 +105,7 @@
{ "text": "SentenceRecognizer", "url": "/api/sentencerecognizer" }, { "text": "SentenceRecognizer", "url": "/api/sentencerecognizer" },
{ "text": "Sentencizer", "url": "/api/sentencizer" }, { "text": "Sentencizer", "url": "/api/sentencizer" },
{ "text": "SpanCategorizer", "url": "/api/spancategorizer" }, { "text": "SpanCategorizer", "url": "/api/spancategorizer" },
{ "text": "SpanResolver", "url": "/api/span-resolver" },
{ "text": "SpanRuler", "url": "/api/spanruler" }, { "text": "SpanRuler", "url": "/api/spanruler" },
{ "text": "Tagger", "url": "/api/tagger" }, { "text": "Tagger", "url": "/api/tagger" },
{ "text": "TextCategorizer", "url": "/api/textcategorizer" }, { "text": "TextCategorizer", "url": "/api/textcategorizer" },
@ -1192,7 +1192,7 @@
"slogan": "Fast, flexible and transparent sentiment analysis", "slogan": "Fast, flexible and transparent sentiment analysis",
"description": "Asent is a rule-based sentiment analysis library for Python made using spaCy. It is inspired by VADER, but uses a more modular ruleset, that allows the user to change e.g. the method for finding negations. Furthermore it includes visualisers to visualize the model predictions, making the model easily interpretable.", "description": "Asent is a rule-based sentiment analysis library for Python made using spaCy. It is inspired by VADER, but uses a more modular ruleset, that allows the user to change e.g. the method for finding negations. Furthermore it includes visualisers to visualize the model predictions, making the model easily interpretable.",
"github": "kennethenevoldsen/asent", "github": "kennethenevoldsen/asent",
"pip": "aseny", "pip": "asent",
"code_example": [ "code_example": [
"import spacy", "import spacy",
"import asent", "import asent",
@ -3984,7 +3984,21 @@
}, },
"category": ["pipeline"], "category": ["pipeline"],
"tags": ["interpretation", "ja"] "tags": ["interpretation", "ja"]
},
{
"id": "spacy-partial-tagger",
"title": "spaCy - Partial Tagger",
"slogan": "Sequence Tagger for Partially Annotated Dataset in spaCy",
"description": "This is a library to build a CRF tagger with a partially annotated dataset in spaCy. You can build your own tagger only from dictionary.",
"github": "doccano/spacy-partial-tagger",
"pip": "spacy-partial-tagger",
"category": ["pipeline", "training"],
"author": "Yasufumi Taniguchi",
"author_links": {
"github": "yasufumy"
}
} }
], ],
"categories": [ "categories": [
@ -76,6 +76,7 @@ const MODEL_META = {
benchmark_ner: 'NER accuracy', benchmark_ner: 'NER accuracy',
benchmark_speed: 'Speed', benchmark_speed: 'Speed',
compat: 'Latest compatible package version for your spaCy installation', compat: 'Latest compatible package version for your spaCy installation',
download_link: 'Download link for the pipeline',
} }
const LABEL_SCHEME_META = { const LABEL_SCHEME_META = {
@ -138,6 +139,13 @@ function formatAccuracy(data, lang) {
.filter(item => item) .filter(item => item)
} }
function formatDownloadLink(lang, name, version) {
const fullName = `${lang}_${name}-${version}`
const filename = `${fullName}-py3-none-any.whl`
const url = `https://github.com/explosion/spacy-models/releases/download/${fullName}/${filename}`
return <Link to={url} hideIcon>{filename}</Link>
}
function formatModelMeta(data) { function formatModelMeta(data) {
return { return {
fullName: `${data.lang}_${data.name}-${data.version}`, fullName: `${data.lang}_${data.name}-${data.version}`,
@ -154,6 +162,7 @@ function formatModelMeta(data) {
labels: isEmptyObj(data.labels) ? null : data.labels, labels: isEmptyObj(data.labels) ? null : data.labels,
vectors: formatVectors(data.vectors), vectors: formatVectors(data.vectors),
accuracy: formatAccuracy(data.performance, data.lang), accuracy: formatAccuracy(data.performance, data.lang),
download_link: formatDownloadLink(data.lang, data.name, data.version),
} }
} }
@ -244,6 +253,7 @@ const Model = ({
{ label: 'Components', content: components, help: MODEL_META.components }, { label: 'Components', content: components, help: MODEL_META.components },
{ label: 'Pipeline', content: pipeline, help: MODEL_META.pipeline }, { label: 'Pipeline', content: pipeline, help: MODEL_META.pipeline },
{ label: 'Vectors', content: meta.vectors, help: MODEL_META.vecs }, { label: 'Vectors', content: meta.vectors, help: MODEL_META.vecs },
{ label: 'Download Link', content: meta.download_link, help: MODEL_META.download_link },
{ label: 'Sources', content: sources, help: MODEL_META.sources }, { label: 'Sources', content: sources, help: MODEL_META.sources },
{ label: 'Author', content: author }, { label: 'Author', content: author },
{ label: 'License', content: license }, { label: 'License', content: license },
@ -9,7 +9,7 @@ const DEFAULT_PLATFORM = 'x86'
const DEFAULT_MODELS = ['en'] const DEFAULT_MODELS = ['en']
const DEFAULT_OPT = 'efficiency' const DEFAULT_OPT = 'efficiency'
const DEFAULT_HARDWARE = 'cpu' const DEFAULT_HARDWARE = 'cpu'
const DEFAULT_CUDA = 'cuda113' const DEFAULT_CUDA = 'cuda-autodetect'
const CUDA = { const CUDA = {
'8.0': 'cuda80', '8.0': 'cuda80',
'9.0': 'cuda90', '9.0': 'cuda90',
@ -17,15 +17,7 @@ const CUDA = {
'9.2': 'cuda92', '9.2': 'cuda92',
'10.0': 'cuda100', '10.0': 'cuda100',
'10.1': 'cuda101', '10.1': 'cuda101',
'10.2': 'cuda102', '10.2, 11.0+': 'cuda-autodetect',
'11.0': 'cuda110',
'11.1': 'cuda111',
'11.2': 'cuda112',
'11.3': 'cuda113',
'11.4': 'cuda114',
'11.5': 'cuda115',
'11.6': 'cuda116',
'11.7': 'cuda117',
} }
const LANG_EXTRAS = ['ja'] // only for languages with models const LANG_EXTRAS = ['ja'] // only for languages with models