diff --git a/.github/FUNDING.yml b/.github/FUNDING.yml new file mode 100644 index 000000000..c9f30d1d3 --- /dev/null +++ b/.github/FUNDING.yml @@ -0,0 +1 @@ +custom: [https://explosion.ai/merch, https://explosion.ai/tailored-solutions] diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml index 987298b7b..760a79f21 100644 --- a/.github/workflows/tests.yml +++ b/.github/workflows/tests.yml @@ -58,7 +58,7 @@ jobs: fail-fast: true matrix: os: [ubuntu-latest, windows-latest, macos-latest] - python_version: ["3.11"] + python_version: ["3.12"] include: - os: macos-latest python_version: "3.8" @@ -66,6 +66,8 @@ jobs: python_version: "3.9" - os: windows-latest python_version: "3.10" + - os: macos-latest + python_version: "3.11" runs-on: ${{ matrix.os }} diff --git a/LICENSE b/LICENSE index d76864579..979f5ade7 100644 --- a/LICENSE +++ b/LICENSE @@ -1,6 +1,6 @@ The MIT License (MIT) -Copyright (C) 2016-2022 ExplosionAI GmbH, 2016 spaCy GmbH, 2015 Matthew Honnibal +Copyright (C) 2016-2023 ExplosionAI GmbH, 2016 spaCy GmbH, 2015 Matthew Honnibal Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal diff --git a/README.md b/README.md index 8f97cbb4d..9e5c4be68 100644 --- a/README.md +++ b/README.md @@ -6,23 +6,20 @@ spaCy is a library for **advanced Natural Language Processing** in Python and Cython. It's built on the very latest research, and was designed from day one to be used in real products. -spaCy comes with -[pretrained pipelines](https://spacy.io/models) and -currently supports tokenization and training for **70+ languages**. It features -state-of-the-art speed and **neural network models** for tagging, -parsing, **named entity recognition**, **text classification** and more, -multi-task learning with pretrained **transformers** like BERT, as well as a +spaCy comes with [pretrained pipelines](https://spacy.io/models) and currently +supports tokenization and training for **70+ languages**. It features +state-of-the-art speed and **neural network models** for tagging, parsing, +**named entity recognition**, **text classification** and more, multi-task +learning with pretrained **transformers** like BERT, as well as a production-ready [**training system**](https://spacy.io/usage/training) and easy model packaging, deployment and workflow management. spaCy is commercial -open-source software, released under the [MIT license](https://github.com/explosion/spaCy/blob/master/LICENSE). +open-source software, released under the +[MIT license](https://github.com/explosion/spaCy/blob/master/LICENSE). 
-💥 **We'd love to hear more about your experience with spaCy!** -[Fill out our survey here.](https://form.typeform.com/to/aMel9q9f) - -💫 **Version 3.5 out now!** +💫 **Version 3.7 out now!** [Check out the release notes here.](https://github.com/explosion/spaCy/releases) -[![Azure Pipelines](https://img.shields.io/azure-devops/build/explosion-ai/public/8/master.svg?logo=azure-pipelines&style=flat-square&label=build)](https://dev.azure.com/explosion-ai/public/_build?definitionId=8) +[![tests](https://github.com/explosion/spaCy/actions/workflows/tests.yml/badge.svg)](https://github.com/explosion/spaCy/actions/workflows/tests.yml) [![Current Release Version](https://img.shields.io/github/release/explosion/spacy.svg?style=flat-square&logo=github)](https://github.com/explosion/spaCy/releases) [![pypi Version](https://img.shields.io/pypi/v/spacy.svg?style=flat-square&logo=pypi&logoColor=white)](https://pypi.org/project/spacy/) [![conda Version](https://img.shields.io/conda/vn/conda-forge/spacy.svg?style=flat-square&logo=conda-forge&logoColor=white)](https://anaconda.org/conda-forge/spacy) @@ -35,35 +32,42 @@ open-source software, released under the [MIT license](https://github.com/explos ## 📖 Documentation -| Documentation | | -| ----------------------------- | ---------------------------------------------------------------------- | -| ⭐️ **[spaCy 101]** | New to spaCy? Here's everything you need to know! | -| 📚 **[Usage Guides]** | How to use spaCy and its features. | -| 🚀 **[New in v3.0]** | New features, backwards incompatibilities and migration guide. | -| 🪐 **[Project Templates]** | End-to-end workflows you can clone, modify and run. | -| 🎛 **[API Reference]** | The detailed reference for spaCy's API. | -| 📦 **[Models]** | Download trained pipelines for spaCy. | -| 🌌 **[Universe]** | Plugins, extensions, demos and books from the spaCy ecosystem. | -| ⚙️ **[spaCy VS Code Extension]** | Additional tooling and features for working with spaCy's config files. | -| 👩‍🏫 **[Online Course]** | Learn spaCy in this free and interactive online course. | -| 📺 **[Videos]** | Our YouTube channel with video tutorials, talks and more. | -| 🛠 **[Changelog]** | Changes and version history. | -| 💝 **[Contribute]** | How to contribute to the spaCy project and code base. | -| spaCy Tailored Pipelines | Get a custom spaCy pipeline, tailor-made for your NLP problem by spaCy's core developers. Streamlined, production-ready, predictable and maintainable. Start by completing our 5-minute questionnaire to tell us what you need and we'll be in touch! **[Learn more →](https://explosion.ai/spacy-tailored-pipelines)** | -| spaCy Tailored Pipelines | Bespoke advice for problem solving, strategy and analysis for applied NLP projects. Services include data strategy, code reviews, pipeline design and annotation coaching. Curious? Fill in our 5-minute questionnaire to tell us what you need and we'll be in touch! 
**[Learn more →](https://explosion.ai/spacy-tailored-analysis)** | +| Documentation | | +| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| ⭐️ **[spaCy 101]** | New to spaCy? Here's everything you need to know! | +| 📚 **[Usage Guides]** | How to use spaCy and its features. | +| 🚀 **[New in v3.0]** | New features, backwards incompatibilities and migration guide. | +| 🪐 **[Project Templates]** | End-to-end workflows you can clone, modify and run. | +| 🎛 **[API Reference]** | The detailed reference for spaCy's API. | +| ⏩ **[GPU Processing]** | Use spaCy with CUDA-compatible GPU processing. | +| 📦 **[Models]** | Download trained pipelines for spaCy. | +| 🦙 **[Large Language Models]** | Integrate LLMs into spaCy pipelines. | +| 🌌 **[Universe]** | Plugins, extensions, demos and books from the spaCy ecosystem. | +| ⚙️ **[spaCy VS Code Extension]** | Additional tooling and features for working with spaCy's config files. | +| 👩‍🏫 **[Online Course]** | Learn spaCy in this free and interactive online course. | +| 📰 **[Blog]** | Read about current spaCy and Prodigy development, releases, talks and more from Explosion. | +| 📺 **[Videos]** | Our YouTube channel with video tutorials, talks and more. | +| 🛠 **[Changelog]** | Changes and version history. | +| 💝 **[Contribute]** | How to contribute to the spaCy project and code base. | +| 👕 **[Swag]** | Support us and our work with unique, custom-designed swag! | +| Tailored Solutions | Custom NLP consulting, implementation and strategic advice by spaCy’s core development team. Streamlined, production-ready, predictable and maintainable. Send us an email or take our 5-minute questionnaire, and well'be in touch! **[Learn more →](https://explosion.ai/tailored-solutions)** | [spacy 101]: https://spacy.io/usage/spacy-101 [new in v3.0]: https://spacy.io/usage/v3 [usage guides]: https://spacy.io/usage/ [api reference]: https://spacy.io/api/ +[gpu processing]: https://spacy.io/usage#gpu [models]: https://spacy.io/models +[large language models]: https://spacy.io/usage/large-language-models [universe]: https://spacy.io/universe -[spaCy VS Code Extension]: https://github.com/explosion/spacy-vscode +[spacy vs code extension]: https://github.com/explosion/spacy-vscode [videos]: https://www.youtube.com/c/ExplosionAI [online course]: https://course.spacy.io +[blog]: https://explosion.ai [project templates]: https://github.com/explosion/projects [changelog]: https://spacy.io/usage#changelog [contribute]: https://github.com/explosion/spaCy/blob/master/CONTRIBUTING.md +[swag]: https://explosion.ai/merch ## 💬 Where to ask questions @@ -92,7 +96,9 @@ more people can benefit from it. 
- State-of-the-art speed - Production-ready **training system** - Linguistically-motivated **tokenization** -- Components for named **entity recognition**, part-of-speech-tagging, dependency parsing, sentence segmentation, **text classification**, lemmatization, morphological analysis, entity linking and more +- Components for named **entity recognition**, part-of-speech-tagging, + dependency parsing, sentence segmentation, **text classification**, + lemmatization, morphological analysis, entity linking and more - Easily extensible with **custom components** and attributes - Support for custom models in **PyTorch**, **TensorFlow** and other frameworks - Built in **visualizers** for syntax and NER @@ -118,8 +124,8 @@ For detailed installation instructions, see the ### pip Using pip, spaCy releases are available as source packages and binary wheels. -Before you install spaCy and its dependencies, make sure that -your `pip`, `setuptools` and `wheel` are up to date. +Before you install spaCy and its dependencies, make sure that your `pip`, +`setuptools` and `wheel` are up to date. ```bash pip install -U pip setuptools wheel @@ -174,9 +180,9 @@ with the new version. ## 📦 Download model packages -Trained pipelines for spaCy can be installed as **Python packages**. This -means that they're a component of your application, just like any other module. -Models can be installed using spaCy's [`download`](https://spacy.io/api/cli#download) +Trained pipelines for spaCy can be installed as **Python packages**. This means +that they're a component of your application, just like any other module. Models +can be installed using spaCy's [`download`](https://spacy.io/api/cli#download) command, or manually by pointing pip to a path or URL. | Documentation | | @@ -242,8 +248,7 @@ do that depends on your system. | **Mac** | Install a recent version of [XCode](https://developer.apple.com/xcode/), including the so-called "Command Line Tools". macOS and OS X ship with Python and git preinstalled. | | **Windows** | Install a version of the [Visual C++ Build Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/) or [Visual Studio Express](https://visualstudio.microsoft.com/vs/express/) that matches the version that was used to compile your Python interpreter. | -For more details -and instructions, see the documentation on +For more details and instructions, see the documentation on [compiling spaCy from source](https://spacy.io/usage#source) and the [quickstart widget](https://spacy.io/usage#section-quickstart) to get the right commands for your platform and Python version. diff --git a/build-constraints.txt b/build-constraints.txt index 78cfed7a6..812e7c6b1 100644 --- a/build-constraints.txt +++ b/build-constraints.txt @@ -1,7 +1,4 @@ -# build version constraints for use with wheelwright + multibuild +# build version constraints for use with wheelwright numpy==1.17.3; python_version=='3.8' and platform_machine!='aarch64' numpy==1.19.2; python_version=='3.8' and platform_machine=='aarch64' -numpy==1.19.3; python_version=='3.9' -numpy==1.21.3; python_version=='3.10' -numpy==1.23.2; python_version=='3.11' -numpy; python_version>='3.12' +numpy>=1.25.0; python_version>='3.9' diff --git a/extra/DEVELOPER_DOCS/Listeners.md b/extra/DEVELOPER_DOCS/Listeners.md index 3a71082e0..72c036880 100644 --- a/extra/DEVELOPER_DOCS/Listeners.md +++ b/extra/DEVELOPER_DOCS/Listeners.md @@ -1,14 +1,17 @@ # Listeners -1. [Overview](#1-overview) -2. [Initialization](#2-initialization) - - [A. 
Linking listeners to the embedding component](#2a-linking-listeners-to-the-embedding-component) - - [B. Shape inference](#2b-shape-inference) -3. [Internal communication](#3-internal-communication) - - [A. During prediction](#3a-during-prediction) - - [B. During training](#3b-during-training) - - [C. Frozen components](#3c-frozen-components) -4. [Replacing listener with standalone](#4-replacing-listener-with-standalone) +- [1. Overview](#1-overview) +- [2. Initialization](#2-initialization) + - [2A. Linking listeners to the embedding component](#2a-linking-listeners-to-the-embedding-component) + - [2B. Shape inference](#2b-shape-inference) +- [3. Internal communication](#3-internal-communication) + - [3A. During prediction](#3a-during-prediction) + - [3B. During training](#3b-during-training) + - [Training with multiple listeners](#training-with-multiple-listeners) + - [3C. Frozen components](#3c-frozen-components) + - [The Tok2Vec or Transformer is frozen](#the-tok2vec-or-transformer-is-frozen) + - [The upstream component is frozen](#the-upstream-component-is-frozen) +- [4. Replacing listener with standalone](#4-replacing-listener-with-standalone) ## 1. Overview @@ -62,7 +65,7 @@ of this `find_listener()` method will specifically identify sublayers of a model If it's a Transformer-based pipeline, a [`transformer` component](https://github.com/explosion/spacy-transformers/blob/master/spacy_transformers/pipeline_component.py) -has a similar implementation but its `find_listener()` function will specifically look for `TransformerListener` +has a similar implementation but its `find_listener()` function will specifically look for `TransformerListener` sublayers of downstream components. ### 2B. Shape inference @@ -154,7 +157,7 @@ as a tagger or a parser. This used to be impossible before 3.1, but has become s embedding component in the [`annotating_components`](https://spacy.io/usage/training#annotating-components) list of the config. This works like any other "annotating component" because it relies on the `Doc` attributes. -However, if the `Tok2Vec` or `Transformer` is frozen, and not present in `annotating_components`, and a related +However, if the `Tok2Vec` or `Transformer` is frozen, and not present in `annotating_components`, and a related listener isn't frozen, then a `W086` warning is shown and further training of the pipeline will likely end with `E954`. #### The upstream component is frozen @@ -216,5 +219,17 @@ new_model = tok2vec_model.attrs["replace_listener"](new_model) ``` The new config and model are then properly stored on the `nlp` object. -Note that this functionality (running the replacement for a transformer listener) was broken prior to +Note that this functionality (running the replacement for a transformer listener) was broken prior to `spacy-transformers` 1.0.5. + +In spaCy 3.7, `Language.replace_listeners` was updated to pass the following additional arguments to the `replace_listener` callback: +the listener to be replaced and the `tok2vec`/`transformer` pipe from which the new model was copied. To maintain backwards-compatiblity, +the method only passes these extra arguments for callbacks that support them: + +``` +def replace_listener_pre_37(copied_tok2vec_model): + ... + +def replace_listener_post_37(copied_tok2vec_model, replaced_listener, tok2vec_pipe): + ... 
+``` diff --git a/licenses/3rd_party_licenses.txt b/licenses/3rd_party_licenses.txt index 851e09585..9b037a496 100644 --- a/licenses/3rd_party_licenses.txt +++ b/licenses/3rd_party_licenses.txt @@ -158,3 +158,45 @@ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + + +SciPy +----- + +* Files: scorer.py + +The implementation of trapezoid() is adapted from SciPy, which is distributed +under the following license: + +New BSD License + +Copyright (c) 2001-2002 Enthought, Inc. 2003-2023, SciPy Developers. +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions +are met: + +1. Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + +2. Redistributions in binary form must reproduce the above + copyright notice, this list of conditions and the following + disclaimer in the documentation and/or other materials provided + with the distribution. + +3. Neither the name of the copyright holder nor the names of its + contributors may be used to endorse or promote products derived + from this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
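Returning to the `replace_listener` change documented in the `Listeners.md` hunk above: the sketch below illustrates the two callback signatures in context. It is an illustration only, not code from this patch — the `noop()` stand-in and the callback body are assumptions; a real architecture would attach the callback to its actual tok2vec/transformer model.

```python
from thinc.api import Model, noop

def build_custom_tok2vec() -> Model:
    model = noop()  # stand-in for a real embedding architecture

    def replace_listener(copied_model, replaced_listener, tok2vec_pipe):
        # spaCy 3.7+ also passes the replaced listener and the source pipe;
        # a pre-3.7 style callback that accepts only copied_model keeps
        # working, because Language.replace_listeners only passes the extra
        # arguments to callbacks that support them.
        return copied_model

    model.attrs["replace_listener"] = replace_listener
    return model
```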
diff --git a/pyproject.toml b/pyproject.toml index 0a5bc1627..96d12cd8a 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -6,7 +6,8 @@ requires = [ "preshed>=3.0.2,<3.1.0", "murmurhash>=0.28.0,<1.1.0", "thinc>=9.0.0.dev4,<9.1.0", - "numpy>=1.15.0", + "numpy>=1.15.0; python_version < '3.9'", + "numpy>=1.25.0; python_version >= '3.9'", ] build-backend = "setuptools.build_meta" diff --git a/requirements.txt b/requirements.txt index a57278108..1a2459498 100644 --- a/requirements.txt +++ b/requirements.txt @@ -10,13 +10,14 @@ wasabi>=0.9.1,<1.2.0 srsly>=2.4.3,<3.0.0 catalogue>=2.0.6,<2.1.0 typer>=0.3.0,<0.10.0 -pathy>=0.10.0 smart-open>=5.2.1,<7.0.0 +weasel>=0.1.0,<0.4.0 # Third party dependencies -numpy>=1.15.0 +numpy>=1.15.0; python_version < "3.9" +numpy>=1.19.0; python_version >= "3.9" requests>=2.13.0,<3.0.0 tqdm>=4.38.0,<5.0.0 -pydantic>=1.7.4,!=1.8,!=1.8.1,<1.11.0 +pydantic>=1.7.4,!=1.8,!=1.8.1,<3.0.0 jinja2 langcodes>=3.2.0,<4.0.0 # Official Python utilities @@ -36,5 +37,5 @@ types-setuptools>=57.0.0 types-requests types-setuptools>=57.0.0 black==22.3.0 -cython-lint>=0.15.0; python_version >= "3.7" +cython-lint>=0.15.0 isort>=5.0,<6.0 diff --git a/setup.cfg b/setup.cfg index b1479c3b7..e61ae94a2 100644 --- a/setup.cfg +++ b/setup.cfg @@ -30,9 +30,12 @@ project_urls = zip_safe = false include_package_data = true python_requires = >=3.8 +# NOTE: This section is superseded by pyproject.toml and will be removed in +# spaCy v4 setup_requires = cython>=0.25,<3.0 - numpy>=1.15.0 + numpy>=1.15.0; python_version < "3.9" + numpy>=1.19.0; python_version >= "3.9" # We also need our Cython packages here to compile against cymem>=2.0.2,<2.1.0 preshed>=3.0.2,<3.1.0 @@ -49,14 +52,15 @@ install_requires = wasabi>=0.9.1,<1.2.0 srsly>=2.4.3,<3.0.0 catalogue>=2.0.6,<2.1.0 + weasel>=0.1.0,<0.4.0 # Third-party dependencies typer>=0.3.0,<0.10.0 - pathy>=0.10.0 smart-open>=5.2.1,<7.0.0 tqdm>=4.38.0,<5.0.0 - numpy>=1.15.0 + numpy>=1.15.0; python_version < "3.9" + numpy>=1.19.0; python_version >= "3.9" requests>=2.13.0,<3.0.0 - pydantic>=1.7.4,!=1.8,!=1.8.1,<1.11.0 + pydantic>=1.7.4,!=1.8,!=1.8.1,<3.0.0 jinja2 # Official Python utilities setuptools @@ -71,9 +75,7 @@ console_scripts = lookups = spacy_lookups_data>=1.0.3,<1.1.0 transformers = - spacy_transformers>=1.1.2,<1.3.0 -ray = - spacy_ray>=0.1.0,<1.0.0 + spacy_transformers>=1.1.2,<1.4.0 cuda = cupy>=5.0.0b4,<13.0.0 cuda80 = @@ -108,6 +110,8 @@ cuda117 = cupy-cuda117>=5.0.0b4,<13.0.0 cuda11x = cupy-cuda11x>=11.0.0,<13.0.0 +cuda12x = + cupy-cuda12x>=11.5.0,<13.0.0 cuda-autodetect = cupy-wheel>=11.0.0,<13.0.0 apple = diff --git a/setup.py b/setup.py index 77a4cf283..0eb529c20 100755 --- a/setup.py +++ b/setup.py @@ -1,10 +1,9 @@ #!/usr/bin/env python from setuptools import Extension, setup, find_packages import sys -import platform import numpy -from distutils.command.build_ext import build_ext -from distutils.sysconfig import get_python_inc +from setuptools.command.build_ext import build_ext +from sysconfig import get_path from pathlib import Path import shutil from Cython.Build import cythonize @@ -80,6 +79,7 @@ COMPILER_DIRECTIVES = { "language_level": -3, "embedsignature": True, "annotation_typing": False, + "profile": sys.version_info < (3, 12), } # Files to copy into the package that are otherwise not included COPY_FILES = { @@ -89,30 +89,6 @@ COPY_FILES = { } -def is_new_osx(): - """Check whether we're on OSX >= 10.7""" - if sys.platform != "darwin": - return False - mac_ver = platform.mac_ver()[0] - if mac_ver.startswith("10"): - minor_version = 
int(mac_ver.split(".")[1]) - if minor_version >= 7: - return True - else: - return False - return False - - -if is_new_osx(): - # On Mac, use libc++ because Apple deprecated use of - # libstdc - COMPILE_OPTIONS["other"].append("-stdlib=libc++") - LINK_OPTIONS["other"].append("-lc++") - # g++ (used by unix compiler on mac) links to libstdc++ as a default lib. - # See: https://stackoverflow.com/questions/1653047/avoid-linking-to-libstdc - LINK_OPTIONS["other"].append("-nodefaultlibs") - - # By subclassing build_extensions we have the actual compiler that will be used which is really known only after finalize_options # http://stackoverflow.com/questions/724664/python-distutils-how-to-get-a-compiler-that-is-going-to-be-used class build_ext_options: @@ -205,7 +181,7 @@ def setup_package(): include_dirs = [ numpy.get_include(), - get_python_inc(plat_specific=True), + get_path("include"), ] ext_modules = [] ext_modules.append( diff --git a/spacy/attrs.pyx b/spacy/attrs.pyx index 9b0ae3400..1688afe47 100644 --- a/spacy/attrs.pyx +++ b/spacy/attrs.pyx @@ -1,3 +1,4 @@ +# cython: profile=False from .errors import Errors IOB_STRINGS = ("", "I", "O", "B") diff --git a/spacy/cli/__init__.py b/spacy/cli/__init__.py index 549a27616..1d402ff0c 100644 --- a/spacy/cli/__init__.py +++ b/spacy/cli/__init__.py @@ -14,6 +14,7 @@ from .debug_diff import debug_diff # noqa: F401 from .debug_model import debug_model # noqa: F401 from .download import download # noqa: F401 from .evaluate import evaluate # noqa: F401 +from .find_function import find_function # noqa: F401 from .find_threshold import find_threshold # noqa: F401 from .info import info # noqa: F401 from .init_config import fill_config, init_config # noqa: F401 @@ -21,15 +22,17 @@ from .init_pipeline import init_pipeline_cli # noqa: F401 from .package import package # noqa: F401 from .pretrain import pretrain # noqa: F401 from .profile import profile # noqa: F401 -from .project.assets import project_assets # noqa: F401 -from .project.clone import project_clone # noqa: F401 -from .project.document import project_document # noqa: F401 -from .project.dvc import project_update_dvc # noqa: F401 -from .project.pull import project_pull # noqa: F401 -from .project.push import project_push # noqa: F401 -from .project.run import project_run # noqa: F401 -from .train import train_cli # noqa: F401 -from .validate import validate # noqa: F401 +from .project.assets import project_assets # type: ignore[attr-defined] # noqa: F401 +from .project.clone import project_clone # type: ignore[attr-defined] # noqa: F401 +from .project.document import ( # type: ignore[attr-defined] # noqa: F401 + project_document, +) +from .project.dvc import project_update_dvc # type: ignore[attr-defined] # noqa: F401 +from .project.pull import project_pull # type: ignore[attr-defined] # noqa: F401 +from .project.push import project_push # type: ignore[attr-defined] # noqa: F401 +from .project.run import project_run # type: ignore[attr-defined] # noqa: F401 +from .train import train_cli # type: ignore[attr-defined] # noqa: F401 +from .validate import validate # type: ignore[attr-defined] # noqa: F401 @app.command("link", no_args_is_help=True, deprecated=True, hidden=True) diff --git a/spacy/cli/_util.py b/spacy/cli/_util.py index ca92cdd23..12da175b4 100644 --- a/spacy/cli/_util.py +++ b/spacy/cli/_util.py @@ -26,10 +26,11 @@ from thinc.api import Config, ConfigValidationError, require_gpu from thinc.util import gpu_is_available from typer.main import get_command from wasabi import Printer, msg 
+from weasel import app as project_cli from .. import about from ..errors import RENAMED_LANGUAGE_CODES -from ..schemas import ProjectConfigSchema, validate +from ..schemas import validate from ..util import ( ENV_VARS, SimpleFrozenDict, @@ -41,15 +42,10 @@ from ..util import ( run_command, ) -if TYPE_CHECKING: - from pathy import FluidPath # noqa: F401 - - SDIST_SUFFIX = ".tar.gz" WHEEL_SUFFIX = "-py3-none-any.whl" PROJECT_FILE = "project.yml" -PROJECT_LOCK = "project.lock" COMMAND = "python -m spacy" NAME = "spacy" HELP = """spaCy Command-line Interface @@ -75,11 +71,10 @@ Opt = typer.Option app = typer.Typer(name=NAME, help=HELP) benchmark_cli = typer.Typer(name="benchmark", help=BENCHMARK_HELP, no_args_is_help=True) -project_cli = typer.Typer(name="project", help=PROJECT_HELP, no_args_is_help=True) debug_cli = typer.Typer(name="debug", help=DEBUG_HELP, no_args_is_help=True) init_cli = typer.Typer(name="init", help=INIT_HELP, no_args_is_help=True) -app.add_typer(project_cli) +app.add_typer(project_cli, name="project", help=PROJECT_HELP, no_args_is_help=True) app.add_typer(debug_cli) app.add_typer(benchmark_cli) app.add_typer(init_cli) @@ -164,148 +159,6 @@ def _handle_renamed_language_codes(lang: Optional[str]) -> None: ) -def load_project_config( - path: Path, interpolate: bool = True, overrides: Dict[str, Any] = SimpleFrozenDict() -) -> Dict[str, Any]: - """Load the project.yml file from a directory and validate it. Also make - sure that all directories defined in the config exist. - - path (Path): The path to the project directory. - interpolate (bool): Whether to substitute project variables. - overrides (Dict[str, Any]): Optional config overrides. - RETURNS (Dict[str, Any]): The loaded project.yml. - """ - config_path = path / PROJECT_FILE - if not config_path.exists(): - msg.fail(f"Can't find {PROJECT_FILE}", config_path, exits=1) - invalid_err = f"Invalid {PROJECT_FILE}. Double-check that the YAML is correct." - try: - config = srsly.read_yaml(config_path) - except ValueError as e: - msg.fail(invalid_err, e, exits=1) - errors = validate(ProjectConfigSchema, config) - if errors: - msg.fail(invalid_err) - print("\n".join(errors)) - sys.exit(1) - validate_project_version(config) - validate_project_commands(config) - if interpolate: - err = f"{PROJECT_FILE} validation error" - with show_validation_error(title=err, hint_fill=False): - config = substitute_project_variables(config, overrides) - # Make sure directories defined in config exist - for subdir in config.get("directories", []): - dir_path = path / subdir - if not dir_path.exists(): - dir_path.mkdir(parents=True) - return config - - -def substitute_project_variables( - config: Dict[str, Any], - overrides: Dict[str, Any] = SimpleFrozenDict(), - key: str = "vars", - env_key: str = "env", -) -> Dict[str, Any]: - """Interpolate variables in the project file using the config system. - - config (Dict[str, Any]): The project config. - overrides (Dict[str, Any]): Optional config overrides. - key (str): Key containing variables in project config. - env_key (str): Key containing environment variable mapping in project config. - RETURNS (Dict[str, Any]): The interpolated project config. 
- """ - config.setdefault(key, {}) - config.setdefault(env_key, {}) - # Substitute references to env vars with their values - for config_var, env_var in config[env_key].items(): - config[env_key][config_var] = _parse_override(os.environ.get(env_var, "")) - # Need to put variables in the top scope again so we can have a top-level - # section "project" (otherwise, a list of commands in the top scope wouldn't) - # be allowed by Thinc's config system - cfg = Config({"project": config, key: config[key], env_key: config[env_key]}) - cfg = Config().from_str(cfg.to_str(), overrides=overrides) - interpolated = cfg.interpolate() - return dict(interpolated["project"]) - - -def validate_project_version(config: Dict[str, Any]) -> None: - """If the project defines a compatible spaCy version range, chec that it's - compatible with the current version of spaCy. - - config (Dict[str, Any]): The loaded config. - """ - spacy_version = config.get("spacy_version", None) - if spacy_version and not is_compatible_version(about.__version__, spacy_version): - err = ( - f"The {PROJECT_FILE} specifies a spaCy version range ({spacy_version}) " - f"that's not compatible with the version of spaCy you're running " - f"({about.__version__}). You can edit version requirement in the " - f"{PROJECT_FILE} to load it, but the project may not run as expected." - ) - msg.fail(err, exits=1) - - -def validate_project_commands(config: Dict[str, Any]) -> None: - """Check that project commands and workflows are valid, don't contain - duplicates, don't clash and only refer to commands that exist. - - config (Dict[str, Any]): The loaded config. - """ - command_names = [cmd["name"] for cmd in config.get("commands", [])] - workflows = config.get("workflows", {}) - duplicates = set([cmd for cmd in command_names if command_names.count(cmd) > 1]) - if duplicates: - err = f"Duplicate commands defined in {PROJECT_FILE}: {', '.join(duplicates)}" - msg.fail(err, exits=1) - for workflow_name, workflow_steps in workflows.items(): - if workflow_name in command_names: - err = f"Can't use workflow name '{workflow_name}': name already exists as a command" - msg.fail(err, exits=1) - for step in workflow_steps: - if step not in command_names: - msg.fail( - f"Unknown command specified in workflow '{workflow_name}': {step}", - f"Workflows can only refer to commands defined in the 'commands' " - f"section of the {PROJECT_FILE}.", - exits=1, - ) - - -def get_hash(data, exclude: Iterable[str] = tuple()) -> str: - """Get the hash for a JSON-serializable object. - - data: The data to hash. - exclude (Iterable[str]): Top-level keys to exclude if data is a dict. - RETURNS (str): The hash. - """ - if isinstance(data, dict): - data = {k: v for k, v in data.items() if k not in exclude} - data_str = srsly.json_dumps(data, sort_keys=True).encode("utf8") - return hashlib.md5(data_str).hexdigest() - - -def get_checksum(path: Union[Path, str]) -> str: - """Get the checksum for a file or directory given its file path. If a - directory path is provided, this uses all files in that directory. - - path (Union[Path, str]): The file or directory path. - RETURNS (str): The checksum. 
- """ - path = Path(path) - if not (path.is_file() or path.is_dir()): - msg.fail(f"Can't get checksum for {path}: not a file or directory", exits=1) - if path.is_file(): - return hashlib.md5(Path(path).read_bytes()).hexdigest() - else: - # TODO: this is currently pretty slow - dir_checksum = hashlib.md5() - for sub_file in sorted(fp for fp in path.rglob("*") if fp.is_file()): - dir_checksum.update(sub_file.read_bytes()) - return dir_checksum.hexdigest() - - @contextmanager def show_validation_error( file_path: Optional[Union[str, Path]] = None, @@ -370,166 +223,10 @@ def import_code(code_path: Optional[Union[Path, str]]) -> None: msg.fail(f"Couldn't load Python code: {code_path}", e, exits=1) -def upload_file(src: Path, dest: Union[str, "FluidPath"]) -> None: - """Upload a file. - - src (Path): The source path. - url (str): The destination URL to upload to. - """ - import smart_open - - # Create parent directories for local paths - if isinstance(dest, Path): - if not dest.parent.exists(): - dest.parent.mkdir(parents=True) - - dest = str(dest) - with smart_open.open(dest, mode="wb") as output_file: - with src.open(mode="rb") as input_file: - output_file.write(input_file.read()) - - -def download_file( - src: Union[str, "FluidPath"], dest: Path, *, force: bool = False -) -> None: - """Download a file using smart_open. - - url (str): The URL of the file. - dest (Path): The destination path. - force (bool): Whether to force download even if file exists. - If False, the download will be skipped. - """ - import smart_open - - if dest.exists() and not force: - return None - src = str(src) - with smart_open.open(src, mode="rb", compression="disable") as input_file: - with dest.open(mode="wb") as output_file: - shutil.copyfileobj(input_file, output_file) - - -def ensure_pathy(path): - """Temporary helper to prevent importing Pathy globally (which can cause - slow and annoying Google Cloud warning).""" - from pathy import Pathy # noqa: F811 - - return Pathy.fluid(path) - - -def git_checkout( - repo: str, subpath: str, dest: Path, *, branch: str = "master", sparse: bool = False -): - git_version = get_git_version() - if dest.exists(): - msg.fail("Destination of checkout must not exist", exits=1) - if not dest.parent.exists(): - msg.fail("Parent of destination of checkout must exist", exits=1) - if sparse and git_version >= (2, 22): - return git_sparse_checkout(repo, subpath, dest, branch) - elif sparse: - # Only show warnings if the user explicitly wants sparse checkout but - # the Git version doesn't support it - err_old = ( - f"You're running an old version of Git (v{git_version[0]}.{git_version[1]}) " - f"that doesn't fully support sparse checkout yet." - ) - err_unk = "You're running an unknown version of Git, so sparse checkout has been disabled." - msg.warn( - f"{err_unk if git_version == (0, 0) else err_old} " - f"This means that more files than necessary may be downloaded " - f"temporarily. To only download the files needed, make sure " - f"you're using Git v2.22 or above." - ) - with make_tempdir() as tmp_dir: - cmd = f"git -C {tmp_dir} clone {repo} . -b {branch}" - run_command(cmd, capture=True) - # We need Path(name) to make sure we also support subdirectories - try: - source_path = tmp_dir / Path(subpath) - if not is_subpath_of(tmp_dir, source_path): - err = f"'{subpath}' is a path outside of the cloned repository." - msg.fail(err, repo, exits=1) - shutil.copytree(str(source_path), str(dest)) - except FileNotFoundError: - err = f"Can't clone {subpath}. 
Make sure the directory exists in the repo (branch '{branch}')" - msg.fail(err, repo, exits=1) - - -def git_sparse_checkout(repo, subpath, dest, branch): - # We're using Git, partial clone and sparse checkout to - # only clone the files we need - # This ends up being RIDICULOUS. omg. - # So, every tutorial and SO post talks about 'sparse checkout'...But they - # go and *clone* the whole repo. Worthless. And cloning part of a repo - # turns out to be completely broken. The only way to specify a "path" is.. - # a path *on the server*? The contents of which, specifies the paths. Wat. - # Obviously this is hopelessly broken and insecure, because you can query - # arbitrary paths on the server! So nobody enables this. - # What we have to do is disable *all* files. We could then just checkout - # the path, and it'd "work", but be hopelessly slow...Because it goes and - # transfers every missing object one-by-one. So the final piece is that we - # need to use some weird git internals to fetch the missings in bulk, and - # *that* we can do by path. - # We're using Git and sparse checkout to only clone the files we need - with make_tempdir() as tmp_dir: - # This is the "clone, but don't download anything" part. - cmd = ( - f"git clone {repo} {tmp_dir} --no-checkout --depth 1 " - f"-b {branch} --filter=blob:none" - ) - run_command(cmd) - # Now we need to find the missing filenames for the subpath we want. - # Looking for this 'rev-list' command in the git --help? Hah. - cmd = f"git -C {tmp_dir} rev-list --objects --all --missing=print -- {subpath}" - ret = run_command(cmd, capture=True) - git_repo = _http_to_git(repo) - # Now pass those missings into another bit of git internals - missings = " ".join([x[1:] for x in ret.stdout.split() if x.startswith("?")]) - if not missings: - err = ( - f"Could not find any relevant files for '{subpath}'. " - f"Did you specify a correct and complete path within repo '{repo}' " - f"and branch {branch}?" - ) - msg.fail(err, exits=1) - cmd = f"git -C {tmp_dir} fetch-pack {git_repo} {missings}" - run_command(cmd, capture=True) - # And finally, we can checkout our subpath - cmd = f"git -C {tmp_dir} checkout {branch} {subpath}" - run_command(cmd, capture=True) - - # Get a subdirectory of the cloned path, if appropriate - source_path = tmp_dir / Path(subpath) - if not is_subpath_of(tmp_dir, source_path): - err = f"'{subpath}' is a path outside of the cloned repository." - msg.fail(err, repo, exits=1) - - shutil.move(str(source_path), str(dest)) - - -def git_repo_branch_exists(repo: str, branch: str) -> bool: - """Uses 'git ls-remote' to check if a repository and branch exists - - repo (str): URL to get repo. - branch (str): Branch on repo to check. - RETURNS (bool): True if repo:branch exists. - """ - get_git_version() - cmd = f"git ls-remote {repo} {branch}" - # We might be tempted to use `--exit-code` with `git ls-remote`, but - # `run_command` handles the `returncode` for us, so we'll rely on - # the fact that stdout returns '' if the requested branch doesn't exist - ret = run_command(cmd, capture=True) - exists = ret.stdout != "" - return exists - - def get_git_version( error: str = "Could not run 'git'. Make sure it's installed and the executable is available.", ) -> Tuple[int, int]: """Get the version of git and raise an error if calling 'git --version' fails. - error (str): The error message to show. RETURNS (Tuple[int, int]): The version as a (major, minor) tuple. Returns (0, 0) if the version couldn't be determined. 
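For context on the CLI wiring earlier in this `_util.py` diff (where weasel's Typer app is mounted via `app.add_typer(project_cli, name="project", ...)`): the minimal, self-contained sketch below shows the general Typer pattern being relied on — an externally defined sub-app keeps serving the same `project` subcommands under the parent CLI. The app and command names here are illustrative, not spaCy's or weasel's actual definitions.

```python
import typer

external_cli = typer.Typer()  # stands in for the app imported from weasel

@external_cli.command("assets")
def assets_cmd():
    # placeholder command body for the sketch
    typer.echo("fetching assets...")

app = typer.Typer(name="spacy")
app.add_typer(external_cli, name="project", help="Project commands")

if __name__ == "__main__":
    app()  # e.g. `python sketch.py project assets`
```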
@@ -545,30 +242,6 @@ def get_git_version( return int(version[0]), int(version[1]) -def _http_to_git(repo: str) -> str: - if repo.startswith("http://"): - repo = repo.replace(r"http://", r"https://") - if repo.startswith(r"https://"): - repo = repo.replace("https://", "git@").replace("/", ":", 1) - if repo.endswith("/"): - repo = repo[:-1] - repo = f"{repo}.git" - return repo - - -def is_subpath_of(parent, child): - """ - Check whether `child` is a path contained within `parent`. - """ - # Based on https://stackoverflow.com/a/37095733 . - - # In Python 3.9, the `Path.is_relative_to()` method will supplant this, so - # we can stop using crusty old os.path functions. - parent_realpath = os.path.realpath(parent) - child_realpath = os.path.realpath(child) - return os.path.commonpath([parent_realpath, child_realpath]) == parent_realpath - - @overload def string_to_list(value: str, intify: Literal[False] = ...) -> List[str]: ... diff --git a/spacy/cli/apply.py b/spacy/cli/apply.py index 8c4b4c8bf..ffd810506 100644 --- a/spacy/cli/apply.py +++ b/spacy/cli/apply.py @@ -133,7 +133,9 @@ def apply( if len(text_files) > 0: streams.append(_stream_texts(text_files)) datagen = cast(DocOrStrStream, chain(*streams)) - for doc in tqdm.tqdm(nlp.pipe(datagen, batch_size=batch_size, n_process=n_process)): + for doc in tqdm.tqdm( + nlp.pipe(datagen, batch_size=batch_size, n_process=n_process), disable=None + ): docbin.add(doc) if output_file.suffix == "": output_file = output_file.with_suffix(".spacy") diff --git a/spacy/cli/assemble.py b/spacy/cli/assemble.py index f552c8459..7ad0f52fe 100644 --- a/spacy/cli/assemble.py +++ b/spacy/cli/assemble.py @@ -40,7 +40,8 @@ def assemble_cli( DOCS: https://spacy.io/api/cli#assemble """ - util.logger.setLevel(logging.DEBUG if verbose else logging.INFO) + if verbose: + util.logger.setLevel(logging.DEBUG) # Make sure all files and paths exists if they are needed if not config_path or (str(config_path) != "-" and not config_path.exists()): msg.fail("Config file not found", config_path, exits=1) diff --git a/spacy/cli/benchmark_speed.py b/spacy/cli/benchmark_speed.py index a683d1591..c7fd771c3 100644 --- a/spacy/cli/benchmark_speed.py +++ b/spacy/cli/benchmark_speed.py @@ -89,7 +89,7 @@ class Quartiles: def annotate( nlp: Language, docs: List[Doc], batch_size: Optional[int] ) -> numpy.ndarray: - docs = nlp.pipe(tqdm(docs, unit="doc"), batch_size=batch_size) + docs = nlp.pipe(tqdm(docs, unit="doc", disable=None), batch_size=batch_size) wps = [] while True: with time_context() as elapsed: diff --git a/spacy/cli/download.py b/spacy/cli/download.py index ca6a7babd..2ccde2613 100644 --- a/spacy/cli/download.py +++ b/spacy/cli/download.py @@ -10,6 +10,8 @@ from ..util import ( get_installed_models, get_minor_version, get_package_version, + is_in_interactive, + is_in_jupyter, is_package, is_prerelease_version, run_command, @@ -85,6 +87,27 @@ def download( "Download and installation successful", f"You can now load the package via spacy.load('{model_name}')", ) + if is_in_jupyter(): + reload_deps_msg = ( + "If you are in a Jupyter or Colab notebook, you may need to " + "restart Python in order to load all the package's dependencies. " + "You can do this by selecting the 'Restart kernel' or 'Restart " + "runtime' option." + ) + msg.warn( + "Restart to reload dependencies", + reload_deps_msg, + ) + elif is_in_interactive(): + reload_deps_msg = ( + "If you are in an interactive Python session, you may need to " + "exit and restart Python to load all the package's dependencies. 
" + "You can exit with Ctrl-D (or Ctrl-Z and Enter on Windows)." + ) + msg.warn( + "Restart to reload dependencies", + reload_deps_msg, + ) def get_model_filename(model_name: str, version: str, sdist: bool = False) -> str: diff --git a/spacy/cli/evaluate.py b/spacy/cli/evaluate.py index f035aa3ce..c3527028e 100644 --- a/spacy/cli/evaluate.py +++ b/spacy/cli/evaluate.py @@ -28,6 +28,7 @@ def evaluate_cli( displacy_path: Optional[Path] = Opt(None, "--displacy-path", "-dp", help="Directory to output rendered parses as HTML", exists=True, file_okay=False), displacy_limit: int = Opt(25, "--displacy-limit", "-dl", help="Limit of parses to render as HTML"), per_component: bool = Opt(False, "--per-component", "-P", help="Return scores per component, only applicable when an output JSON file is specified."), + spans_key: str = Opt("sc", "--spans-key", "-sk", help="Spans key to use when evaluating Doc.spans"), # fmt: on ): """ @@ -53,6 +54,7 @@ def evaluate_cli( displacy_limit=displacy_limit, per_component=per_component, silent=False, + spans_key=spans_key, ) diff --git a/spacy/cli/find_function.py b/spacy/cli/find_function.py new file mode 100644 index 000000000..f99ce2adc --- /dev/null +++ b/spacy/cli/find_function.py @@ -0,0 +1,69 @@ +from typing import Optional, Tuple + +from catalogue import RegistryError +from wasabi import msg + +from ..util import registry +from ._util import Arg, Opt, app + + +@app.command("find-function") +def find_function_cli( + # fmt: off + func_name: str = Arg(..., help="Name of the registered function."), + registry_name: Optional[str] = Opt(None, "--registry", "-r", help="Name of the catalogue registry."), + # fmt: on +): + """ + Find the module, path and line number to the file the registered + function is defined in, if available. + + func_name (str): Name of the registered function. + registry_name (Optional[str]): Name of the catalogue registry. 
+ + DOCS: https://spacy.io/api/cli#find-function + """ + if not registry_name: + registry_names = registry.get_registry_names() + for name in registry_names: + if registry.has(name, func_name): + registry_name = name + break + + if not registry_name: + msg.fail( + f"Couldn't find registered function: '{func_name}'", + exits=1, + ) + + assert registry_name is not None + find_function(func_name, registry_name) + + +def find_function(func_name: str, registry_name: str) -> Tuple[str, int]: + registry_desc = None + try: + registry_desc = registry.find(registry_name, func_name) + except RegistryError as e: + msg.fail( + f"Couldn't find registered function: '{func_name}' in registry '{registry_name}'", + ) + msg.fail(f"{e}", exits=1) + assert registry_desc is not None + + registry_path = None + line_no = None + if registry_desc["file"]: + registry_path = registry_desc["file"] + line_no = registry_desc["line_no"] + + if not registry_path or not line_no: + msg.fail( + f"Couldn't find path to registered function: '{func_name}' in registry '{registry_name}'", + exits=1, + ) + assert registry_path is not None + assert line_no is not None + + msg.good(f"Found registered function '{func_name}' at {registry_path}:{line_no}") + return str(registry_path), int(line_no) diff --git a/spacy/cli/find_threshold.py b/spacy/cli/find_threshold.py index 7aa32c0c6..48077fa51 100644 --- a/spacy/cli/find_threshold.py +++ b/spacy/cli/find_threshold.py @@ -52,8 +52,8 @@ def find_threshold_cli( DOCS: https://spacy.io/api/cli#find-threshold """ - - util.logger.setLevel(logging.DEBUG if verbose else logging.INFO) + if verbose: + util.logger.setLevel(logging.DEBUG) import_code(code_path) find_threshold( model=model, diff --git a/spacy/cli/init_pipeline.py b/spacy/cli/init_pipeline.py index 4b4fe93af..991dc1a82 100644 --- a/spacy/cli/init_pipeline.py +++ b/spacy/cli/init_pipeline.py @@ -90,7 +90,8 @@ def init_pipeline_cli( use_gpu: int = Opt(-1, "--gpu-id", "-g", help="GPU ID or -1 for CPU") # fmt: on ): - util.logger.setLevel(logging.DEBUG if verbose else logging.INFO) + if verbose: + util.logger.setLevel(logging.DEBUG) overrides = parse_config_overrides(ctx.args) import_code(code_path) setup_gpu(use_gpu) @@ -119,7 +120,8 @@ def init_labels_cli( """Generate JSON files for the labels in the data. This helps speed up the training process, since spaCy won't have to preprocess the data to extract the labels.""" - util.logger.setLevel(logging.DEBUG if verbose else logging.INFO) + if verbose: + util.logger.setLevel(logging.DEBUG) if not output_path.exists(): output_path.mkdir(parents=True) overrides = parse_config_overrides(ctx.args) diff --git a/spacy/cli/package.py b/spacy/cli/package.py index 01449f957..3ba4d6960 100644 --- a/spacy/cli/package.py +++ b/spacy/cli/package.py @@ -1,5 +1,8 @@ +import importlib.metadata +import os import re import shutil +import subprocess import sys from collections import defaultdict from pathlib import Path @@ -35,7 +38,7 @@ def package_cli( specified output directory, and the data will be copied over. If --create-meta is set and a meta.json already exists in the output directory, the existing values will be used as the defaults in the command-line prompt. - After packaging, "python setup.py sdist" is run in the package directory, + After packaging, "python -m build --sdist" is run in the package directory, which will create a .tar.gz archive that can be installed via "pip install". If additional code files are provided (e.g. 
Python files containing custom @@ -78,9 +81,17 @@ def package( input_path = util.ensure_path(input_dir) output_path = util.ensure_path(output_dir) meta_path = util.ensure_path(meta_path) - if create_wheel and not has_wheel(): - err = "Generating a binary .whl file requires wheel to be installed" - msg.fail(err, "pip install wheel", exits=1) + if create_wheel and not has_wheel() and not has_build(): + err = ( + "Generating wheels requires 'build' or 'wheel' (deprecated) to be installed" + ) + msg.fail(err, "pip install build", exits=1) + if not has_build(): + msg.warn( + "Generating packages without the 'build' package is deprecated and " + "will not be supported in the future. To install 'build': pip " + "install build" + ) if not input_path or not input_path.exists(): msg.fail("Can't locate pipeline data", input_path, exits=1) if not output_path or not output_path.exists(): @@ -184,12 +195,37 @@ def package( msg.good(f"Successfully created package directory '{model_name_v}'", main_path) if create_sdist: with util.working_dir(main_path): - util.run_command([sys.executable, "setup.py", "sdist"], capture=False) + # run directly, since util.run_command is not designed to continue + # after a command fails + ret = subprocess.run( + [sys.executable, "-m", "build", ".", "--sdist"], + env=os.environ.copy(), + ) + if ret.returncode != 0: + msg.warn( + "Creating sdist with 'python -m build' failed. Falling " + "back to deprecated use of 'python setup.py sdist'" + ) + util.run_command([sys.executable, "setup.py", "sdist"], capture=False) zip_file = main_path / "dist" / f"{model_name_v}{SDIST_SUFFIX}" msg.good(f"Successfully created zipped Python package", zip_file) if create_wheel: with util.working_dir(main_path): - util.run_command([sys.executable, "setup.py", "bdist_wheel"], capture=False) + # run directly, since util.run_command is not designed to continue + # after a command fails + ret = subprocess.run( + [sys.executable, "-m", "build", ".", "--wheel"], + env=os.environ.copy(), + ) + if ret.returncode != 0: + msg.warn( + "Creating wheel with 'python -m build' failed. Falling " + "back to deprecated use of 'wheel' with " + "'python setup.py bdist_wheel'" + ) + util.run_command( + [sys.executable, "setup.py", "bdist_wheel"], capture=False + ) wheel_name_squashed = re.sub("_+", "_", model_name_v) wheel = main_path / "dist" / f"{wheel_name_squashed}{WHEEL_SUFFIX}" msg.good(f"Successfully created binary wheel", wheel) @@ -209,6 +245,17 @@ def has_wheel() -> bool: return False +def has_build() -> bool: + # it's very likely that there is a local directory named build/ (especially + # in an editable install), so an import check is not sufficient; instead + # check that there is a package version + try: + importlib.metadata.version("build") + return True + except importlib.metadata.PackageNotFoundError: # type: ignore[attr-defined] + return False + + def get_third_party_dependencies( config: Config, exclude: List[str] = util.SimpleFrozenList() ) -> List[str]: @@ -403,7 +450,7 @@ def _format_sources(data: Any) -> str: if author: result += " ({})".format(author) sources.append(result) - return "
".join(sources) + return "
".join(sources) def _format_accuracy(data: Dict[str, Any], exclude: List[str] = ["speed"]) -> str: diff --git a/spacy/cli/profile.py b/spacy/cli/profile.py index e1f720327..e5b8f1193 100644 --- a/spacy/cli/profile.py +++ b/spacy/cli/profile.py @@ -71,7 +71,7 @@ def profile(model: str, inputs: Optional[Path] = None, n_texts: int = 10000) -> def parse_texts(nlp: Language, texts: Sequence[str]) -> None: - for doc in nlp.pipe(tqdm.tqdm(texts), batch_size=16): + for doc in nlp.pipe(tqdm.tqdm(texts, disable=None), batch_size=16): pass diff --git a/spacy/cli/project/assets.py b/spacy/cli/project/assets.py index aa2705986..591d1959e 100644 --- a/spacy/cli/project/assets.py +++ b/spacy/cli/project/assets.py @@ -1,217 +1 @@ -import os -import re -import shutil -from pathlib import Path -from typing import Any, Dict, Optional - -import requests -import typer -from wasabi import msg - -from ...util import ensure_path, working_dir -from .._util import ( - PROJECT_FILE, - Arg, - Opt, - SimpleFrozenDict, - download_file, - get_checksum, - get_git_version, - git_checkout, - load_project_config, - parse_config_overrides, - project_cli, -) - -# Whether assets are extra if `extra` is not set. -EXTRA_DEFAULT = False - - -@project_cli.command( - "assets", - context_settings={"allow_extra_args": True, "ignore_unknown_options": True}, -) -def project_assets_cli( - # fmt: off - ctx: typer.Context, # This is only used to read additional arguments - project_dir: Path = Arg(Path.cwd(), help="Path to cloned project. Defaults to current working directory.", exists=True, file_okay=False), - sparse_checkout: bool = Opt(False, "--sparse", "-S", help="Use sparse checkout for assets provided via Git, to only check out and clone the files needed. Requires Git v22.2+."), - extra: bool = Opt(False, "--extra", "-e", help="Download all assets, including those marked as 'extra'.") - # fmt: on -): - """Fetch project assets like datasets and pretrained weights. Assets are - defined in the "assets" section of the project.yml. If a checksum is - provided in the project.yml, the file is only downloaded if no local file - with the same checksum exists. - - DOCS: https://spacy.io/api/cli#project-assets - """ - overrides = parse_config_overrides(ctx.args) - project_assets( - project_dir, - overrides=overrides, - sparse_checkout=sparse_checkout, - extra=extra, - ) - - -def project_assets( - project_dir: Path, - *, - overrides: Dict[str, Any] = SimpleFrozenDict(), - sparse_checkout: bool = False, - extra: bool = False, -) -> None: - """Fetch assets for a project using DVC if possible. - - project_dir (Path): Path to project directory. - sparse_checkout (bool): Use sparse checkout for assets provided via Git, to only check out and clone the files - needed. - extra (bool): Whether to download all assets, including those marked as 'extra'. - """ - project_path = ensure_path(project_dir) - config = load_project_config(project_path, overrides=overrides) - assets = [ - asset - for asset in config.get("assets", []) - if extra or not asset.get("extra", EXTRA_DEFAULT) - ] - if not assets: - msg.warn( - f"No assets specified in {PROJECT_FILE} (if assets are marked as extra, download them with --extra)", - exits=0, - ) - msg.info(f"Fetching {len(assets)} asset(s)") - - for asset in assets: - dest = (project_dir / asset["dest"]).resolve() - checksum = asset.get("checksum") - if "git" in asset: - git_err = ( - f"Cloning spaCy project templates requires Git and the 'git' command. " - f"Make sure it's installed and that the executable is available." 
- ) - get_git_version(error=git_err) - if dest.exists(): - # If there's already a file, check for checksum - if checksum and checksum == get_checksum(dest): - msg.good( - f"Skipping download with matching checksum: {asset['dest']}" - ) - continue - else: - if dest.is_dir(): - shutil.rmtree(dest) - else: - dest.unlink() - if "repo" not in asset["git"] or asset["git"]["repo"] is None: - msg.fail( - "A git asset must include 'repo', the repository address.", exits=1 - ) - if "path" not in asset["git"] or asset["git"]["path"] is None: - msg.fail( - "A git asset must include 'path' - use \"\" to get the entire repository.", - exits=1, - ) - git_checkout( - asset["git"]["repo"], - asset["git"]["path"], - dest, - branch=asset["git"].get("branch"), - sparse=sparse_checkout, - ) - msg.good(f"Downloaded asset {dest}") - else: - url = asset.get("url") - if not url: - # project.yml defines asset without URL that the user has to place - check_private_asset(dest, checksum) - continue - fetch_asset(project_path, url, dest, checksum) - - -def check_private_asset(dest: Path, checksum: Optional[str] = None) -> None: - """Check and validate assets without a URL (private assets that the user - has to provide themselves) and give feedback about the checksum. - - dest (Path): Destination path of the asset. - checksum (Optional[str]): Optional checksum of the expected file. - """ - if not Path(dest).exists(): - err = f"No URL provided for asset. You need to add this file yourself: {dest}" - msg.warn(err) - else: - if not checksum: - msg.good(f"Asset already exists: {dest}") - elif checksum == get_checksum(dest): - msg.good(f"Asset exists with matching checksum: {dest}") - else: - msg.fail(f"Asset available but with incorrect checksum: {dest}") - - -def fetch_asset( - project_path: Path, url: str, dest: Path, checksum: Optional[str] = None -) -> None: - """Fetch an asset from a given URL or path. If a checksum is provided and a - local file exists, it's only re-downloaded if the checksum doesn't match. - - project_path (Path): Path to project directory. - url (str): URL or path to asset. - checksum (Optional[str]): Optional expected checksum of local file. - RETURNS (Optional[Path]): The path to the fetched asset or None if fetching - the asset failed. 
- """ - dest_path = (project_path / dest).resolve() - if dest_path.exists(): - # If there's already a file, check for checksum - if checksum: - if checksum == get_checksum(dest_path): - msg.good(f"Skipping download with matching checksum: {dest}") - return - else: - # If there's not a checksum, make sure the file is a possibly valid size - if os.path.getsize(dest_path) == 0: - msg.warn(f"Asset exists but with size of 0 bytes, deleting: {dest}") - os.remove(dest_path) - # We might as well support the user here and create parent directories in - # case the asset dir isn't listed as a dir to create in the project.yml - if not dest_path.parent.exists(): - dest_path.parent.mkdir(parents=True) - with working_dir(project_path): - url = convert_asset_url(url) - try: - download_file(url, dest_path) - msg.good(f"Downloaded asset {dest}") - except requests.exceptions.RequestException as e: - if Path(url).exists() and Path(url).is_file(): - # If it's a local file, copy to destination - shutil.copy(url, str(dest_path)) - msg.good(f"Copied local asset {dest}") - else: - msg.fail(f"Download failed: {dest}", e) - if checksum and checksum != get_checksum(dest_path): - msg.fail(f"Checksum doesn't match value defined in {PROJECT_FILE}: {dest}") - - -def convert_asset_url(url: str) -> str: - """Check and convert the asset URL if needed. - - url (str): The asset URL. - RETURNS (str): The converted URL. - """ - # If the asset URL is a regular GitHub URL it's likely a mistake - if ( - re.match(r"(http(s?)):\/\/github.com", url) - and "releases/download" not in url - and "/raw/" not in url - ): - converted = url.replace("github.com", "raw.githubusercontent.com") - converted = re.sub(r"/(tree|blob)/", "/", converted) - msg.warn( - "Downloading from a regular GitHub URL. This will only download " - "the source of the page, not the actual file. Converting the URL " - "to a raw URL.", - converted, - ) - return converted - return url +from weasel.cli.assets import * diff --git a/spacy/cli/project/clone.py b/spacy/cli/project/clone.py index 2ee27c92a..11d2511a3 100644 --- a/spacy/cli/project/clone.py +++ b/spacy/cli/project/clone.py @@ -1,124 +1 @@ -import re -import subprocess -from pathlib import Path -from typing import Optional - -from wasabi import msg - -from ... import about -from ...util import ensure_path -from .._util import ( - COMMAND, - PROJECT_FILE, - Arg, - Opt, - get_git_version, - git_checkout, - git_repo_branch_exists, - project_cli, -) - -DEFAULT_REPO = about.__projects__ -DEFAULT_PROJECTS_BRANCH = about.__projects_branch__ -DEFAULT_BRANCHES = ["main", "master"] - - -@project_cli.command("clone") -def project_clone_cli( - # fmt: off - name: str = Arg(..., help="The name of the template to clone"), - dest: Optional[Path] = Arg(None, help="Where to clone the project. Defaults to current working directory", exists=False), - repo: str = Opt(DEFAULT_REPO, "--repo", "-r", help="The repository to clone from"), - branch: Optional[str] = Opt(None, "--branch", "-b", help=f"The branch to clone from. If not provided, will attempt {', '.join(DEFAULT_BRANCHES)}"), - sparse_checkout: bool = Opt(False, "--sparse", "-S", help="Use sparse Git checkout to only check out and clone the files needed. Requires Git v22.2+.") - # fmt: on -): - """Clone a project template from a repository. Calls into "git" and will - only download the files from the given subdirectory. The GitHub repo - defaults to the official spaCy template repo, but can be customized - (including using a private repo). 
- - DOCS: https://spacy.io/api/cli#project-clone - """ - if dest is None: - dest = Path.cwd() / Path(name).parts[-1] - if repo == DEFAULT_REPO and branch is None: - branch = DEFAULT_PROJECTS_BRANCH - - if branch is None: - for default_branch in DEFAULT_BRANCHES: - if git_repo_branch_exists(repo, default_branch): - branch = default_branch - break - if branch is None: - default_branches_msg = ", ".join(f"'{b}'" for b in DEFAULT_BRANCHES) - msg.fail( - "No branch provided and attempted default " - f"branches {default_branches_msg} do not exist.", - exits=1, - ) - else: - if not git_repo_branch_exists(repo, branch): - msg.fail(f"repo: {repo} (branch: {branch}) does not exist.", exits=1) - assert isinstance(branch, str) - project_clone(name, dest, repo=repo, branch=branch, sparse_checkout=sparse_checkout) - - -def project_clone( - name: str, - dest: Path, - *, - repo: str = about.__projects__, - branch: str = about.__projects_branch__, - sparse_checkout: bool = False, -) -> None: - """Clone a project template from a repository. - - name (str): Name of subdirectory to clone. - dest (Path): Destination path of cloned project. - repo (str): URL of Git repo containing project templates. - branch (str): The branch to clone from - """ - dest = ensure_path(dest) - check_clone(name, dest, repo) - project_dir = dest.resolve() - repo_name = re.sub(r"(http(s?)):\/\/github.com/", "", repo) - try: - git_checkout(repo, name, dest, branch=branch, sparse=sparse_checkout) - except subprocess.CalledProcessError: - err = f"Could not clone '{name}' from repo '{repo_name}' (branch '{branch}')" - msg.fail(err, exits=1) - msg.good(f"Cloned '{name}' from '{repo_name}' (branch '{branch}')", project_dir) - if not (project_dir / PROJECT_FILE).exists(): - msg.warn(f"No {PROJECT_FILE} found in directory") - else: - msg.good(f"Your project is now ready!") - print(f"To fetch the assets, run:\n{COMMAND} project assets {dest}") - - -def check_clone(name: str, dest: Path, repo: str) -> None: - """Check and validate that the destination path can be used to clone. Will - check that Git is available and that the destination path is suitable. - - name (str): Name of the directory to clone from the repo. - dest (Path): Local destination of cloned directory. - repo (str): URL of the repo to clone from. - """ - git_err = ( - f"Cloning spaCy project templates requires Git and the 'git' command. " - f"To clone a project without Git, copy the files from the '{name}' " - f"directory in the {repo} to {dest} manually." - ) - get_git_version(error=git_err) - if not dest: - msg.fail(f"Not a valid directory to clone project: {dest}", exits=1) - if dest.exists(): - # Directory already exists (not allowed, clone needs to create it) - msg.fail(f"Can't clone project, directory already exists: {dest}", exits=1) - if not dest.parent.exists(): - # We're not creating parents, parent dir should exist - msg.fail( - f"Can't clone project, parent directory doesn't exist: {dest.parent}. 
" - f"Create the necessary folder(s) first before continuing.", - exits=1, - ) +from weasel.cli.clone import * diff --git a/spacy/cli/project/document.py b/spacy/cli/project/document.py index 80107d27a..1952524a9 100644 --- a/spacy/cli/project/document.py +++ b/spacy/cli/project/document.py @@ -1,115 +1 @@ -from pathlib import Path - -from wasabi import MarkdownRenderer, msg - -from ...util import working_dir -from .._util import PROJECT_FILE, Arg, Opt, load_project_config, project_cli - -DOCS_URL = "https://spacy.io" -INTRO_PROJECT = f"""The [`{PROJECT_FILE}`]({PROJECT_FILE}) defines the data assets required by the -project, as well as the available commands and workflows. For details, see the -[spaCy projects documentation]({DOCS_URL}/usage/projects).""" -INTRO_COMMANDS = f"""The following commands are defined by the project. They -can be executed using [`spacy project run [name]`]({DOCS_URL}/api/cli#project-run). -Commands are only re-run if their inputs have changed.""" -INTRO_WORKFLOWS = f"""The following workflows are defined by the project. They -can be executed using [`spacy project run [name]`]({DOCS_URL}/api/cli#project-run) -and will run the specified commands in order. Commands are only re-run if their -inputs have changed.""" -INTRO_ASSETS = f"""The following assets are defined by the project. They can -be fetched by running [`spacy project assets`]({DOCS_URL}/api/cli#project-assets) -in the project directory.""" -# These markers are added to the Markdown and can be used to update the file in -# place if it already exists. Only the auto-generated part will be replaced. -MARKER_START = "" -MARKER_END = "" -# If this marker is used in an existing README, it's ignored and not replaced -MARKER_IGNORE = "" - - -@project_cli.command("document") -def project_document_cli( - # fmt: off - project_dir: Path = Arg(Path.cwd(), help="Path to cloned project. Defaults to current working directory.", exists=True, file_okay=False), - output_file: Path = Opt("-", "--output", "-o", help="Path to output Markdown file for output. Defaults to - for standard output"), - no_emoji: bool = Opt(False, "--no-emoji", "-NE", help="Don't use emoji") - # fmt: on -): - """ - Auto-generate a README.md for a project. If the content is saved to a file, - hidden markers are added so you can add custom content before or after the - auto-generated section and only the auto-generated docs will be replaced - when you re-run the command. 
- - DOCS: https://spacy.io/api/cli#project-document - """ - project_document(project_dir, output_file, no_emoji=no_emoji) - - -def project_document( - project_dir: Path, output_file: Path, *, no_emoji: bool = False -) -> None: - is_stdout = str(output_file) == "-" - config = load_project_config(project_dir) - md = MarkdownRenderer(no_emoji=no_emoji) - md.add(MARKER_START) - title = config.get("title") - description = config.get("description") - md.add(md.title(1, f"spaCy Project{f': {title}' if title else ''}", "🪐")) - if description: - md.add(description) - md.add(md.title(2, PROJECT_FILE, "📋")) - md.add(INTRO_PROJECT) - # Commands - cmds = config.get("commands", []) - data = [(md.code(cmd["name"]), cmd.get("help", "")) for cmd in cmds] - if data: - md.add(md.title(3, "Commands", "⏯")) - md.add(INTRO_COMMANDS) - md.add(md.table(data, ["Command", "Description"])) - # Workflows - wfs = config.get("workflows", {}).items() - data = [(md.code(n), " → ".join(md.code(w) for w in stp)) for n, stp in wfs] - if data: - md.add(md.title(3, "Workflows", "⏭")) - md.add(INTRO_WORKFLOWS) - md.add(md.table(data, ["Workflow", "Steps"])) - # Assets - assets = config.get("assets", []) - data = [] - for a in assets: - source = "Git" if a.get("git") else "URL" if a.get("url") else "Local" - dest_path = a["dest"] - dest = md.code(dest_path) - if source == "Local": - # Only link assets if they're in the repo - with working_dir(project_dir) as p: - if (p / dest_path).exists(): - dest = md.link(dest, dest_path) - data.append((dest, source, a.get("description", ""))) - if data: - md.add(md.title(3, "Assets", "🗂")) - md.add(INTRO_ASSETS) - md.add(md.table(data, ["File", "Source", "Description"])) - md.add(MARKER_END) - # Output result - if is_stdout: - print(md.text) - else: - content = md.text - if output_file.exists(): - with output_file.open("r", encoding="utf8") as f: - existing = f.read() - if MARKER_IGNORE in existing: - msg.warn("Found ignore marker in existing file: skipping", output_file) - return - if MARKER_START in existing and MARKER_END in existing: - msg.info("Found existing file: only replacing auto-generated docs") - before = existing.split(MARKER_START)[0] - after = existing.split(MARKER_END)[1] - content = f"{before}{content}{after}" - else: - msg.warn("Replacing existing file") - with output_file.open("w", encoding="utf8") as f: - f.write(content) - msg.good("Saved project documentation", output_file) +from weasel.cli.document import * diff --git a/spacy/cli/project/dvc.py b/spacy/cli/project/dvc.py index 9ad55c433..aa1ae7dd9 100644 --- a/spacy/cli/project/dvc.py +++ b/spacy/cli/project/dvc.py @@ -1,220 +1 @@ -"""This module contains helpers and subcommands for integrating spaCy projects -with Data Version Controk (DVC). https://dvc.org""" -import subprocess -from pathlib import Path -from typing import Any, Dict, Iterable, List, Optional - -from wasabi import msg - -from ...util import ( - SimpleFrozenList, - join_command, - run_command, - split_command, - working_dir, -) -from .._util import ( - COMMAND, - NAME, - PROJECT_FILE, - Arg, - Opt, - get_hash, - load_project_config, - project_cli, -) - -DVC_CONFIG = "dvc.yaml" -DVC_DIR = ".dvc" -UPDATE_COMMAND = "dvc" -DVC_CONFIG_COMMENT = f"""# This file is auto-generated by spaCy based on your {PROJECT_FILE}. 
If you've -# edited your {PROJECT_FILE}, you can regenerate this file by running: -# {COMMAND} project {UPDATE_COMMAND}""" - - -@project_cli.command(UPDATE_COMMAND) -def project_update_dvc_cli( - # fmt: off - project_dir: Path = Arg(Path.cwd(), help="Location of project directory. Defaults to current working directory.", exists=True, file_okay=False), - workflow: Optional[str] = Arg(None, help=f"Name of workflow defined in {PROJECT_FILE}. Defaults to first workflow if not set."), - verbose: bool = Opt(False, "--verbose", "-V", help="Print more info"), - quiet: bool = Opt(False, "--quiet", "-q", help="Print less info"), - force: bool = Opt(False, "--force", "-F", help="Force update DVC config"), - # fmt: on -): - """Auto-generate Data Version Control (DVC) config. A DVC - project can only define one pipeline, so you need to specify one workflow - defined in the project.yml. If no workflow is specified, the first defined - workflow is used. The DVC config will only be updated if the project.yml - changed. - - DOCS: https://spacy.io/api/cli#project-dvc - """ - project_update_dvc(project_dir, workflow, verbose=verbose, quiet=quiet, force=force) - - -def project_update_dvc( - project_dir: Path, - workflow: Optional[str] = None, - *, - verbose: bool = False, - quiet: bool = False, - force: bool = False, -) -> None: - """Update the auto-generated Data Version Control (DVC) config file. A DVC - project can only define one pipeline, so you need to specify one workflow - defined in the project.yml. Will only update the file if the checksum changed. - - project_dir (Path): The project directory. - workflow (Optional[str]): Optional name of workflow defined in project.yml. - If not set, the first workflow will be used. - verbose (bool): Print more info. - quiet (bool): Print less info. - force (bool): Force update DVC config. - """ - config = load_project_config(project_dir) - updated = update_dvc_config( - project_dir, config, workflow, verbose=verbose, quiet=quiet, force=force - ) - help_msg = "To execute the workflow with DVC, run: dvc repro" - if updated: - msg.good(f"Updated DVC config from {PROJECT_FILE}", help_msg) - else: - msg.info(f"No changes found in {PROJECT_FILE}, no update needed", help_msg) - - -def update_dvc_config( - path: Path, - config: Dict[str, Any], - workflow: Optional[str] = None, - verbose: bool = False, - quiet: bool = False, - force: bool = False, -) -> bool: - """Re-run the DVC commands in dry mode and update dvc.yaml file in the - project directory. The file is auto-generated based on the config. The - first line of the auto-generated file specifies the hash of the config - dict, so if any of the config values change, the DVC config is regenerated. - - path (Path): The path to the project directory. - config (Dict[str, Any]): The loaded project.yml. - verbose (bool): Whether to print additional info (via DVC). - quiet (bool): Don't output anything (via DVC). - force (bool): Force update, even if hashes match. - RETURNS (bool): Whether the DVC config file was updated. 
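# --- Editor's note: illustrative sketch only, not part of this patch. ---
# The guard described above: the first line of the auto-generated dvc.yaml
# records a hash of project.yml, and the file is only regenerated when that
# hash no longer matches (or --force is passed).
from pathlib import Path


def dvc_config_is_current(dvc_config_path: Path, config_hash: str) -> bool:
    if not dvc_config_path.exists():
        return False
    with dvc_config_path.open("r", encoding="utf8") as f:
        ref_hash = f.readline().strip().replace("# ", "")
    return ref_hash == config_hash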
- """ - ensure_dvc(path) - workflows = config.get("workflows", {}) - workflow_names = list(workflows.keys()) - check_workflows(workflow_names, workflow) - if not workflow: - workflow = workflow_names[0] - config_hash = get_hash(config) - path = path.resolve() - dvc_config_path = path / DVC_CONFIG - if dvc_config_path.exists(): - # Check if the file was generated using the current config, if not, redo - with dvc_config_path.open("r", encoding="utf8") as f: - ref_hash = f.readline().strip().replace("# ", "") - if ref_hash == config_hash and not force: - return False # Nothing has changed in project.yml, don't need to update - dvc_config_path.unlink() - dvc_commands = [] - config_commands = {cmd["name"]: cmd for cmd in config.get("commands", [])} - - # some flags that apply to every command - flags = [] - if verbose: - flags.append("--verbose") - if quiet: - flags.append("--quiet") - - for name in workflows[workflow]: - command = config_commands[name] - deps = command.get("deps", []) - outputs = command.get("outputs", []) - outputs_no_cache = command.get("outputs_no_cache", []) - if not deps and not outputs and not outputs_no_cache: - continue - # Default to the working dir as the project path since dvc.yaml is auto-generated - # and we don't want arbitrary paths in there - project_cmd = ["python", "-m", NAME, "project", "run", name] - deps_cmd = [c for cl in [["-d", p] for p in deps] for c in cl] - outputs_cmd = [c for cl in [["-o", p] for p in outputs] for c in cl] - outputs_nc_cmd = [c for cl in [["-O", p] for p in outputs_no_cache] for c in cl] - - dvc_cmd = ["run", *flags, "-n", name, "-w", str(path), "--no-exec"] - if command.get("no_skip"): - dvc_cmd.append("--always-changed") - full_cmd = [*dvc_cmd, *deps_cmd, *outputs_cmd, *outputs_nc_cmd, *project_cmd] - dvc_commands.append(join_command(full_cmd)) - - if not dvc_commands: - # If we don't check for this, then there will be an error when reading the - # config, since DVC wouldn't create it. - msg.fail( - "No usable commands for DVC found. This can happen if none of your " - "commands have dependencies or outputs.", - exits=1, - ) - - with working_dir(path): - for c in dvc_commands: - dvc_command = "dvc " + c - run_command(dvc_command) - with dvc_config_path.open("r+", encoding="utf8") as f: - content = f.read() - f.seek(0, 0) - f.write(f"# {config_hash}\n{DVC_CONFIG_COMMENT}\n{content}") - return True - - -def check_workflows(workflows: List[str], workflow: Optional[str] = None) -> None: - """Validate workflows provided in project.yml and check that a given - workflow can be used to generate a DVC config. - - workflows (List[str]): Names of the available workflows. - workflow (Optional[str]): The name of the workflow to convert. - """ - if not workflows: - msg.fail( - f"No workflows defined in {PROJECT_FILE}. To generate a DVC config, " - f"define at least one list of commands.", - exits=1, - ) - if workflow is not None and workflow not in workflows: - msg.fail( - f"Workflow '{workflow}' not defined in {PROJECT_FILE}. " - f"Available workflows: {', '.join(workflows)}", - exits=1, - ) - if not workflow: - msg.warn( - f"No workflow specified for DVC pipeline. Using the first workflow " - f"defined in {PROJECT_FILE}: '{workflows[0]}'" - ) - - -def ensure_dvc(project_dir: Path) -> None: - """Ensure that the "dvc" command is available and that the current project - directory is an initialized DVC project. 
- """ - try: - subprocess.run(["dvc", "--version"], stdout=subprocess.DEVNULL) - except Exception: - msg.fail( - "To use spaCy projects with DVC (Data Version Control), DVC needs " - "to be installed and the 'dvc' command needs to be available", - "You can install the Python package from pip (pip install dvc) or " - "conda (conda install -c conda-forge dvc). For more details, see the " - "documentation: https://dvc.org/doc/install", - exits=1, - ) - if not (project_dir / ".dvc").exists(): - msg.fail( - "Project not initialized as a DVC project", - "To initialize a DVC project, you can run 'dvc init' in the project " - "directory. For more details, see the documentation: " - "https://dvc.org/doc/command-reference/init", - exits=1, - ) +from weasel.cli.dvc import * diff --git a/spacy/cli/project/pull.py b/spacy/cli/project/pull.py index e9be74df7..5e603273d 100644 --- a/spacy/cli/project/pull.py +++ b/spacy/cli/project/pull.py @@ -1,67 +1 @@ -from pathlib import Path - -from wasabi import msg - -from .._util import Arg, load_project_config, logger, project_cli -from .remote_storage import RemoteStorage, get_command_hash -from .run import update_lockfile - - -@project_cli.command("pull") -def project_pull_cli( - # fmt: off - remote: str = Arg("default", help="Name or path of remote storage"), - project_dir: Path = Arg(Path.cwd(), help="Location of project directory. Defaults to current working directory.", exists=True, file_okay=False), - # fmt: on -): - """Retrieve available precomputed outputs from a remote storage. - You can alias remotes in your project.yml by mapping them to storage paths. - A storage can be anything that the smart-open library can upload to, e.g. - AWS, Google Cloud Storage, SSH, local directories etc. - - DOCS: https://spacy.io/api/cli#project-pull - """ - for url, output_path in project_pull(project_dir, remote): - if url is not None: - msg.good(f"Pulled {output_path} from {url}") - - -def project_pull(project_dir: Path, remote: str, *, verbose: bool = False): - # TODO: We don't have tests for this :(. It would take a bit of mockery to - # set up. I guess see if it breaks first? - config = load_project_config(project_dir) - if remote in config.get("remotes", {}): - remote = config["remotes"][remote] - storage = RemoteStorage(project_dir, remote) - commands = list(config.get("commands", [])) - # We use a while loop here because we don't know how the commands - # will be ordered. A command might need dependencies from one that's later - # in the list. - while commands: - for i, cmd in enumerate(list(commands)): - logger.debug("CMD: %s.", cmd["name"]) - deps = [project_dir / dep for dep in cmd.get("deps", [])] - if all(dep.exists() for dep in deps): - cmd_hash = get_command_hash("", "", deps, cmd["script"]) - for output_path in cmd.get("outputs", []): - url = storage.pull(output_path, command_hash=cmd_hash) - logger.debug( - "URL: %s for %s with command hash %s", - url, - output_path, - cmd_hash, - ) - yield url, output_path - - out_locs = [project_dir / out for out in cmd.get("outputs", [])] - if all(loc.exists() for loc in out_locs): - update_lockfile(project_dir, cmd) - # We remove the command from the list here, and break, so that - # we iterate over the loop again. - commands.pop(i) - break - else: - logger.debug("Dependency missing. Skipping %s outputs.", cmd["name"]) - else: - # If we didn't break the for loop, break the while loop. 
- break +from weasel.cli.pull import * diff --git a/spacy/cli/project/push.py b/spacy/cli/project/push.py index a7915e547..3a8e8869d 100644 --- a/spacy/cli/project/push.py +++ b/spacy/cli/project/push.py @@ -1,69 +1 @@ -from pathlib import Path - -from wasabi import msg - -from .._util import Arg, load_project_config, logger, project_cli -from .remote_storage import RemoteStorage, get_command_hash, get_content_hash - - -@project_cli.command("push") -def project_push_cli( - # fmt: off - remote: str = Arg("default", help="Name or path of remote storage"), - project_dir: Path = Arg(Path.cwd(), help="Location of project directory. Defaults to current working directory.", exists=True, file_okay=False), - # fmt: on -): - """Persist outputs to a remote storage. You can alias remotes in your - project.yml by mapping them to storage paths. A storage can be anything that - the smart-open library can upload to, e.g. AWS, Google Cloud Storage, SSH, - local directories etc. - - DOCS: https://spacy.io/api/cli#project-push - """ - for output_path, url in project_push(project_dir, remote): - if url is None: - msg.info(f"Skipping {output_path}") - else: - msg.good(f"Pushed {output_path} to {url}") - - -def project_push(project_dir: Path, remote: str): - """Persist outputs to a remote storage. You can alias remotes in your project.yml - by mapping them to storage paths. A storage can be anything that the smart-open - library can upload to, e.g. gcs, aws, ssh, local directories etc - """ - config = load_project_config(project_dir) - if remote in config.get("remotes", {}): - remote = config["remotes"][remote] - storage = RemoteStorage(project_dir, remote) - for cmd in config.get("commands", []): - logger.debug("CMD: %s", cmd["name"]) - deps = [project_dir / dep for dep in cmd.get("deps", [])] - if any(not dep.exists() for dep in deps): - logger.debug("Dependency missing. Skipping %s outputs", cmd["name"]) - continue - cmd_hash = get_command_hash( - "", "", [project_dir / dep for dep in cmd.get("deps", [])], cmd["script"] - ) - logger.debug("CMD_HASH: %s", cmd_hash) - for output_path in cmd.get("outputs", []): - output_loc = project_dir / output_path - if output_loc.exists() and _is_not_empty_dir(output_loc): - url = storage.push( - output_path, - command_hash=cmd_hash, - content_hash=get_content_hash(output_loc), - ) - logger.debug( - "URL: %s for output %s with cmd_hash %s", url, output_path, cmd_hash - ) - yield output_path, url - - -def _is_not_empty_dir(loc: Path): - if not loc.is_dir(): - return True - elif any(_is_not_empty_dir(child) for child in loc.iterdir()): - return True - else: - return False +from weasel.cli.push import * diff --git a/spacy/cli/project/remote_storage.py b/spacy/cli/project/remote_storage.py index 84235a90d..29409150f 100644 --- a/spacy/cli/project/remote_storage.py +++ b/spacy/cli/project/remote_storage.py @@ -1,212 +1 @@ -import hashlib -import os -import site -import tarfile -import urllib.parse -from pathlib import Path -from typing import TYPE_CHECKING, Dict, List, Optional - -from wasabi import msg - -from ... import about -from ...errors import Errors -from ...git_info import GIT_VERSION -from ...util import ENV_VARS, check_bool_env_var, get_minor_version -from .._util import ( - download_file, - ensure_pathy, - get_checksum, - get_hash, - make_tempdir, - upload_file, -) - -if TYPE_CHECKING: - from pathy import FluidPath # noqa: F401 - - -class RemoteStorage: - """Push and pull outputs to and from a remote file storage. 
- - Remotes can be anything that `smart-open` can support: AWS, GCS, file system, - ssh, etc. - """ - - def __init__(self, project_root: Path, url: str, *, compression="gz"): - self.root = project_root - self.url = ensure_pathy(url) - self.compression = compression - - def push(self, path: Path, command_hash: str, content_hash: str) -> "FluidPath": - """Compress a file or directory within a project and upload it to a remote - storage. If an object exists at the full URL, nothing is done. - - Within the remote storage, files are addressed by their project path - (url encoded) and two user-supplied hashes, representing their creation - context and their file contents. If the URL already exists, the data is - not uploaded. Paths are archived and compressed prior to upload. - """ - loc = self.root / path - if not loc.exists(): - raise IOError(f"Cannot push {loc}: does not exist.") - url = self.make_url(path, command_hash, content_hash) - if url.exists(): - return url - tmp: Path - with make_tempdir() as tmp: - tar_loc = tmp / self.encode_name(str(path)) - mode_string = f"w:{self.compression}" if self.compression else "w" - with tarfile.open(tar_loc, mode=mode_string) as tar_file: - tar_file.add(str(loc), arcname=str(path)) - upload_file(tar_loc, url) - return url - - def pull( - self, - path: Path, - *, - command_hash: Optional[str] = None, - content_hash: Optional[str] = None, - ) -> Optional["FluidPath"]: - """Retrieve a file from the remote cache. If the file already exists, - nothing is done. - - If the command_hash and/or content_hash are specified, only matching - results are returned. If no results are available, an error is raised. - """ - dest = self.root / path - if dest.exists(): - return None - url = self.find(path, command_hash=command_hash, content_hash=content_hash) - if url is None: - return url - else: - # Make sure the destination exists - if not dest.parent.exists(): - dest.parent.mkdir(parents=True) - tmp: Path - with make_tempdir() as tmp: - tar_loc = tmp / url.parts[-1] - download_file(url, tar_loc) - mode_string = f"r:{self.compression}" if self.compression else "r" - with tarfile.open(tar_loc, mode=mode_string) as tar_file: - # This requires that the path is added correctly, relative - # to root. This is how we set things up in push() - - # Disallow paths outside the current directory for the tar - # file (CVE-2007-4559, directory traversal vulnerability) - def is_within_directory(directory, target): - abs_directory = os.path.abspath(directory) - abs_target = os.path.abspath(target) - prefix = os.path.commonprefix([abs_directory, abs_target]) - return prefix == abs_directory - - def safe_extract(tar, path): - for member in tar.getmembers(): - member_path = os.path.join(path, member.name) - if not is_within_directory(path, member_path): - raise ValueError(Errors.E852) - tar.extractall(path) - - safe_extract(tar_file, self.root) - return url - - def find( - self, - path: Path, - *, - command_hash: Optional[str] = None, - content_hash: Optional[str] = None, - ) -> Optional["FluidPath"]: - """Find the best matching version of a file within the storage, - or `None` if no match can be found. If both the creation and content hash - are specified, only exact matches will be returned. Otherwise, the most - recent matching file is preferred. 
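# --- Editor's note: illustrative sketch only, not part of this patch. ---
# RemoteStorage addresses each output by its URL-encoded project path plus a
# command hash and a content hash, i.e. <remote>/<encoded path>/<cmd>/<content>.
# The bucket URL and hash values below are made up for illustration.
import urllib.parse


def make_storage_key(base: str, path: str, command_hash: str, content_hash: str) -> str:
    name = urllib.parse.quote_plus(path)  # same encoding as encode_name() uses
    return "/".join([base.rstrip("/"), name, command_hash, content_hash])


# e.g. "s3://bucket/cache/training%2Fmodel-best/1a2b3c/4d5e6f"
print(make_storage_key("s3://bucket/cache", "training/model-best", "1a2b3c", "4d5e6f"))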
- """ - name = self.encode_name(str(path)) - urls = [] - if command_hash is not None and content_hash is not None: - url = self.url / name / command_hash / content_hash - urls = [url] if url.exists() else [] - elif command_hash is not None: - if (self.url / name / command_hash).exists(): - urls = list((self.url / name / command_hash).iterdir()) - else: - if (self.url / name).exists(): - for sub_dir in (self.url / name).iterdir(): - urls.extend(sub_dir.iterdir()) - if content_hash is not None: - urls = [url for url in urls if url.parts[-1] == content_hash] - if len(urls) >= 2: - try: - urls.sort(key=lambda x: x.stat().last_modified) # type: ignore - except Exception: - msg.warn( - "Unable to sort remote files by last modified. The file(s) " - "pulled from the cache may not be the most recent." - ) - return urls[-1] if urls else None - - def make_url(self, path: Path, command_hash: str, content_hash: str) -> "FluidPath": - """Construct a URL from a subpath, a creation hash and a content hash.""" - return self.url / self.encode_name(str(path)) / command_hash / content_hash - - def encode_name(self, name: str) -> str: - """Encode a subpath into a URL-safe name.""" - return urllib.parse.quote_plus(name) - - -def get_content_hash(loc: Path) -> str: - return get_checksum(loc) - - -def get_command_hash( - site_hash: str, env_hash: str, deps: List[Path], cmd: List[str] -) -> str: - """Create a hash representing the execution of a command. This includes the - currently installed packages, whatever environment variables have been marked - as relevant, and the command. - """ - if check_bool_env_var(ENV_VARS.PROJECT_USE_GIT_VERSION): - spacy_v = GIT_VERSION - else: - spacy_v = str(get_minor_version(about.__version__) or "") - dep_checksums = [get_checksum(dep) for dep in sorted(deps)] - hashes = [spacy_v, site_hash, env_hash] + dep_checksums - hashes.extend(cmd) - creation_bytes = "".join(hashes).encode("utf8") - return hashlib.md5(creation_bytes).hexdigest() - - -def get_site_hash(): - """Hash the current Python environment's site-packages contents, including - the name and version of the libraries. The list we're hashing is what - `pip freeze` would output. - """ - site_dirs = site.getsitepackages() - if site.ENABLE_USER_SITE: - site_dirs.extend(site.getusersitepackages()) - packages = set() - for site_dir in site_dirs: - site_dir = Path(site_dir) - for subpath in site_dir.iterdir(): - if subpath.parts[-1].endswith("dist-info"): - packages.add(subpath.parts[-1].replace(".dist-info", "")) - package_bytes = "".join(sorted(packages)).encode("utf8") - return hashlib.md5sum(package_bytes).hexdigest() - - -def get_env_hash(env: Dict[str, str]) -> str: - """Construct a hash of the environment variables that will be passed into - the commands. - - Values in the env dict may be references to the current os.environ, using - the syntax $ENV_VAR to mean os.environ[ENV_VAR] - """ - env_vars = {} - for key, value in env.items(): - if value.startswith("$"): - env_vars[key] = os.environ.get(value[1:], "") - else: - env_vars[key] = value - return get_hash(env_vars) +from weasel.cli.remote_storage import * diff --git a/spacy/cli/project/run.py b/spacy/cli/project/run.py index 43972a202..cc6a5ac42 100644 --- a/spacy/cli/project/run.py +++ b/spacy/cli/project/run.py @@ -1,379 +1 @@ -import os.path -import sys -from pathlib import Path -from typing import Any, Dict, Iterable, List, Optional, Sequence, Tuple - -import srsly -import typer -from wasabi import msg -from wasabi.util import locale_escape - -from ... 
import about -from ...git_info import GIT_VERSION -from ...util import ( - ENV_VARS, - SimpleFrozenDict, - SimpleFrozenList, - check_bool_env_var, - is_cwd, - is_minor_version_match, - join_command, - run_command, - split_command, - working_dir, -) -from .._util import ( - COMMAND, - PROJECT_FILE, - PROJECT_LOCK, - Arg, - Opt, - get_checksum, - get_hash, - load_project_config, - parse_config_overrides, - project_cli, -) - - -@project_cli.command( - "run", context_settings={"allow_extra_args": True, "ignore_unknown_options": True} -) -def project_run_cli( - # fmt: off - ctx: typer.Context, # This is only used to read additional arguments - subcommand: str = Arg(None, help=f"Name of command defined in the {PROJECT_FILE}"), - project_dir: Path = Arg(Path.cwd(), help="Location of project directory. Defaults to current working directory.", exists=True, file_okay=False), - force: bool = Opt(False, "--force", "-F", help="Force re-running steps, even if nothing changed"), - dry: bool = Opt(False, "--dry", "-D", help="Perform a dry run and don't execute scripts"), - show_help: bool = Opt(False, "--help", help="Show help message and available subcommands") - # fmt: on -): - """Run a named command or workflow defined in the project.yml. If a workflow - name is specified, all commands in the workflow are run, in order. If - commands define dependencies and/or outputs, they will only be re-run if - state has changed. - - DOCS: https://spacy.io/api/cli#project-run - """ - if show_help or not subcommand: - print_run_help(project_dir, subcommand) - else: - overrides = parse_config_overrides(ctx.args) - project_run(project_dir, subcommand, overrides=overrides, force=force, dry=dry) - - -def project_run( - project_dir: Path, - subcommand: str, - *, - overrides: Dict[str, Any] = SimpleFrozenDict(), - force: bool = False, - dry: bool = False, - capture: bool = False, - skip_requirements_check: bool = False, -) -> None: - """Run a named script defined in the project.yml. If the script is part - of the default pipeline (defined in the "run" section), DVC is used to - execute the command, so it can determine whether to rerun it. It then - calls into "exec" to execute it. - - project_dir (Path): Path to project directory. - subcommand (str): Name of command to run. - overrides (Dict[str, Any]): Optional config overrides. - force (bool): Force re-running, even if nothing changed. - dry (bool): Perform a dry run and don't execute commands. - capture (bool): Whether to capture the output and errors of individual commands. - If False, the stdout and stderr will not be redirected, and if there's an error, - sys.exit will be called with the return code. You should use capture=False - when you want to turn over execution to the command, and capture=True - when you want to run the command more like a function. - skip_requirements_check (bool): Whether to skip the requirements check. 
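# --- Editor's note: illustrative sketch only, not part of this patch. ---
# A condensed view of the control flow in project_run(): a workflow name expands
# to its command names and recurses, while a single command is skipped when the
# lockfile says nothing changed. `execute` and `nothing_changed` are hypothetical
# hooks standing in for run_commands() and check_rerun().
def run(name, commands, workflows, execute, nothing_changed, force=False):
    if name in workflows:
        for step in workflows[name]:
            run(step, commands, workflows, execute, nothing_changed, force=force)
        return
    cmd = commands[name]
    if nothing_changed(cmd) and not force:
        print(f"Skipping '{name}': nothing changed")
    else:
        execute(cmd)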
- """ - config = load_project_config(project_dir, overrides=overrides) - commands = {cmd["name"]: cmd for cmd in config.get("commands", [])} - workflows = config.get("workflows", {}) - validate_subcommand(list(commands.keys()), list(workflows.keys()), subcommand) - - req_path = project_dir / "requirements.txt" - if not skip_requirements_check: - if config.get("check_requirements", True) and os.path.exists(req_path): - with req_path.open() as requirements_file: - _check_requirements([req.strip() for req in requirements_file]) - - if subcommand in workflows: - msg.info(f"Running workflow '{subcommand}'") - for cmd in workflows[subcommand]: - project_run( - project_dir, - cmd, - overrides=overrides, - force=force, - dry=dry, - capture=capture, - skip_requirements_check=True, - ) - else: - cmd = commands[subcommand] - for dep in cmd.get("deps", []): - if not (project_dir / dep).exists(): - err = f"Missing dependency specified by command '{subcommand}': {dep}" - err_help = "Maybe you forgot to run the 'project assets' command or a previous step?" - err_exits = 1 if not dry else None - msg.fail(err, err_help, exits=err_exits) - check_spacy_commit = check_bool_env_var(ENV_VARS.PROJECT_USE_GIT_VERSION) - with working_dir(project_dir) as current_dir: - msg.divider(subcommand) - rerun = check_rerun(current_dir, cmd, check_spacy_commit=check_spacy_commit) - if not rerun and not force: - msg.info(f"Skipping '{cmd['name']}': nothing changed") - else: - run_commands(cmd["script"], dry=dry, capture=capture) - if not dry: - update_lockfile(current_dir, cmd) - - -def print_run_help(project_dir: Path, subcommand: Optional[str] = None) -> None: - """Simulate a CLI help prompt using the info available in the project.yml. - - project_dir (Path): The project directory. - subcommand (Optional[str]): The subcommand or None. If a subcommand is - provided, the subcommand help is shown. Otherwise, the top-level help - and a list of available commands is printed. - """ - config = load_project_config(project_dir) - config_commands = config.get("commands", []) - commands = {cmd["name"]: cmd for cmd in config_commands} - workflows = config.get("workflows", {}) - project_loc = "" if is_cwd(project_dir) else project_dir - if subcommand: - validate_subcommand(list(commands.keys()), list(workflows.keys()), subcommand) - print(f"Usage: {COMMAND} project run {subcommand} {project_loc}") - if subcommand in commands: - help_text = commands[subcommand].get("help") - if help_text: - print(f"\n{help_text}\n") - elif subcommand in workflows: - steps = workflows[subcommand] - print(f"\nWorkflow consisting of {len(steps)} commands:") - steps_data = [ - (f"{i + 1}. 
{step}", commands[step].get("help", "")) - for i, step in enumerate(steps) - ] - msg.table(steps_data) - help_cmd = f"{COMMAND} project run [COMMAND] {project_loc} --help" - print(f"For command details, run: {help_cmd}") - else: - print("") - title = config.get("title") - if title: - print(f"{locale_escape(title)}\n") - if config_commands: - print(f"Available commands in {PROJECT_FILE}") - print(f"Usage: {COMMAND} project run [COMMAND] {project_loc}") - msg.table([(cmd["name"], cmd.get("help", "")) for cmd in config_commands]) - if workflows: - print(f"Available workflows in {PROJECT_FILE}") - print(f"Usage: {COMMAND} project run [WORKFLOW] {project_loc}") - msg.table([(name, " -> ".join(steps)) for name, steps in workflows.items()]) - - -def run_commands( - commands: Iterable[str] = SimpleFrozenList(), - silent: bool = False, - dry: bool = False, - capture: bool = False, -) -> None: - """Run a sequence of commands in a subprocess, in order. - - commands (List[str]): The string commands. - silent (bool): Don't print the commands. - dry (bool): Perform a dry run and don't execut anything. - capture (bool): Whether to capture the output and errors of individual commands. - If False, the stdout and stderr will not be redirected, and if there's an error, - sys.exit will be called with the return code. You should use capture=False - when you want to turn over execution to the command, and capture=True - when you want to run the command more like a function. - """ - for c in commands: - command = split_command(c) - # Not sure if this is needed or a good idea. Motivation: users may often - # use commands in their config that reference "python" and we want to - # make sure that it's always executing the same Python that spaCy is - # executed with and the pip in the same env, not some other Python/pip. - # Also ensures cross-compatibility if user 1 writes "python3" (because - # that's how it's set up on their system), and user 2 without the - # shortcut tries to re-run the command. - if len(command) and command[0] in ("python", "python3"): - command[0] = sys.executable - elif len(command) and command[0] in ("pip", "pip3"): - command = [sys.executable, "-m", "pip", *command[1:]] - if not silent: - print(f"Running command: {join_command(command)}") - if not dry: - run_command(command, capture=capture) - - -def validate_subcommand( - commands: Sequence[str], workflows: Sequence[str], subcommand: str -) -> None: - """Check that a subcommand is valid and defined. Raises an error otherwise. - - commands (Sequence[str]): The available commands. - subcommand (str): The subcommand. - """ - if not commands and not workflows: - msg.fail(f"No commands or workflows defined in {PROJECT_FILE}", exits=1) - if subcommand not in commands and subcommand not in workflows: - help_msg = [] - if subcommand in ["assets", "asset"]: - help_msg.append("Did you mean to run: python -m spacy project assets?") - if commands: - help_msg.append(f"Available commands: {', '.join(commands)}") - if workflows: - help_msg.append(f"Available workflows: {', '.join(workflows)}") - msg.fail( - f"Can't find command or workflow '{subcommand}' in {PROJECT_FILE}", - ". ".join(help_msg), - exits=1, - ) - - -def check_rerun( - project_dir: Path, - command: Dict[str, Any], - *, - check_spacy_version: bool = True, - check_spacy_commit: bool = False, -) -> bool: - """Check if a command should be rerun because its settings or inputs/outputs - changed. - - project_dir (Path): The current project directory. 
- command (Dict[str, Any]): The command, as defined in the project.yml. - strict_version (bool): - RETURNS (bool): Whether to re-run the command. - """ - # Always rerun if no-skip is set - if command.get("no_skip", False): - return True - lock_path = project_dir / PROJECT_LOCK - if not lock_path.exists(): # We don't have a lockfile, run command - return True - data = srsly.read_yaml(lock_path) - if command["name"] not in data: # We don't have info about this command - return True - entry = data[command["name"]] - # Always run commands with no outputs (otherwise they'd always be skipped) - if not entry.get("outs", []): - return True - # Always rerun if spaCy version or commit hash changed - spacy_v = entry.get("spacy_version") - commit = entry.get("spacy_git_version") - if check_spacy_version and not is_minor_version_match(spacy_v, about.__version__): - info = f"({spacy_v} in {PROJECT_LOCK}, {about.__version__} current)" - msg.info(f"Re-running '{command['name']}': spaCy minor version changed {info}") - return True - if check_spacy_commit and commit != GIT_VERSION: - info = f"({commit} in {PROJECT_LOCK}, {GIT_VERSION} current)" - msg.info(f"Re-running '{command['name']}': spaCy commit changed {info}") - return True - # If the entry in the lockfile matches the lockfile entry that would be - # generated from the current command, we don't rerun because it means that - # all inputs/outputs, hashes and scripts are the same and nothing changed - lock_entry = get_lock_entry(project_dir, command) - exclude = ["spacy_version", "spacy_git_version"] - return get_hash(lock_entry, exclude=exclude) != get_hash(entry, exclude=exclude) - - -def update_lockfile(project_dir: Path, command: Dict[str, Any]) -> None: - """Update the lockfile after running a command. Will create a lockfile if - it doesn't yet exist and will add an entry for the current command, its - script and dependencies/outputs. - - project_dir (Path): The current project directory. - command (Dict[str, Any]): The command, as defined in the project.yml. - """ - lock_path = project_dir / PROJECT_LOCK - if not lock_path.exists(): - srsly.write_yaml(lock_path, {}) - data = {} - else: - data = srsly.read_yaml(lock_path) - data[command["name"]] = get_lock_entry(project_dir, command) - srsly.write_yaml(lock_path, data) - - -def get_lock_entry(project_dir: Path, command: Dict[str, Any]) -> Dict[str, Any]: - """Get a lockfile entry for a given command. An entry includes the command, - the script (command steps) and a list of dependencies and outputs with - their paths and file hashes, if available. The format is based on the - dvc.lock files, to keep things consistent. - - project_dir (Path): The current project directory. - command (Dict[str, Any]): The command, as defined in the project.yml. - RETURNS (Dict[str, Any]): The lockfile entry. - """ - deps = get_fileinfo(project_dir, command.get("deps", [])) - outs = get_fileinfo(project_dir, command.get("outputs", [])) - outs_nc = get_fileinfo(project_dir, command.get("outputs_no_cache", [])) - return { - "cmd": f"{COMMAND} run {command['name']}", - "script": command["script"], - "deps": deps, - "outs": [*outs, *outs_nc], - "spacy_version": about.__version__, - "spacy_git_version": GIT_VERSION, - } - - -def get_fileinfo(project_dir: Path, paths: List[str]) -> List[Dict[str, Optional[str]]]: - """Generate the file information for a list of paths (dependencies, outputs). - Includes the file path and the file's checksum. - - project_dir (Path): The current project directory. 
- paths (List[str]): The file paths. - RETURNS (List[Dict[str, str]]): The lockfile entry for a file. - """ - data = [] - for path in paths: - file_path = project_dir / path - md5 = get_checksum(file_path) if file_path.exists() else None - data.append({"path": path, "md5": md5}) - return data - - -def _check_requirements(requirements: List[str]) -> Tuple[bool, bool]: - """Checks whether requirements are installed and free of version conflicts. - requirements (List[str]): List of requirements. - RETURNS (Tuple[bool, bool]): Whether (1) any packages couldn't be imported, (2) any packages with version conflicts - exist. - """ - import pkg_resources - - failed_pkgs_msgs: List[str] = [] - conflicting_pkgs_msgs: List[str] = [] - - for req in requirements: - try: - pkg_resources.require(req) - except pkg_resources.DistributionNotFound as dnf: - failed_pkgs_msgs.append(dnf.report()) - except pkg_resources.VersionConflict as vc: - conflicting_pkgs_msgs.append(vc.report()) - except Exception: - msg.warn( - f"Unable to check requirement: {req} " - "Checks are currently limited to requirement specifiers " - "(PEP 508)" - ) - - if len(failed_pkgs_msgs) or len(conflicting_pkgs_msgs): - msg.warn( - title="Missing requirements or requirement conflicts detected. Make sure your Python environment is set up " - "correctly and you installed all requirements specified in your project's requirements.txt: " - ) - for pgk_msg in failed_pkgs_msgs + conflicting_pkgs_msgs: - msg.text(pgk_msg) - - return len(failed_pkgs_msgs) > 0, len(conflicting_pkgs_msgs) > 0 +from weasel.cli.run import * diff --git a/spacy/cli/templates/quickstart_training.jinja b/spacy/cli/templates/quickstart_training.jinja index 1937ea935..2817147f3 100644 --- a/spacy/cli/templates/quickstart_training.jinja +++ b/spacy/cli/templates/quickstart_training.jinja @@ -271,8 +271,9 @@ grad_factor = 1.0 @layers = "reduce_mean.v1" [components.textcat.model.linear_model] -@architectures = "spacy.TextCatBOW.v2" +@architectures = "spacy.TextCatBOW.v3" exclusive_classes = true +length = 262144 ngram_size = 1 no_output_layer = false @@ -308,8 +309,9 @@ grad_factor = 1.0 @layers = "reduce_mean.v1" [components.textcat_multilabel.model.linear_model] -@architectures = "spacy.TextCatBOW.v2" +@architectures = "spacy.TextCatBOW.v3" exclusive_classes = false +length = 262144 ngram_size = 1 no_output_layer = false @@ -542,14 +544,15 @@ nO = null width = ${components.tok2vec.model.encode.width} [components.textcat.model.linear_model] -@architectures = "spacy.TextCatBOW.v2" +@architectures = "spacy.TextCatBOW.v3" exclusive_classes = true +length = 262144 ngram_size = 1 no_output_layer = false {% else -%} [components.textcat.model] -@architectures = "spacy.TextCatBOW.v2" +@architectures = "spacy.TextCatBOW.v3" exclusive_classes = true ngram_size = 1 no_output_layer = false @@ -570,15 +573,17 @@ nO = null width = ${components.tok2vec.model.encode.width} [components.textcat_multilabel.model.linear_model] -@architectures = "spacy.TextCatBOW.v2" +@architectures = "spacy.TextCatBOW.v3" exclusive_classes = false +length = 262144 ngram_size = 1 no_output_layer = false {% else -%} [components.textcat_multilabel.model] -@architectures = "spacy.TextCatBOW.v2" +@architectures = "spacy.TextCatBOW.v3" exclusive_classes = false +length = 262144 ngram_size = 1 no_output_layer = false {%- endif %} diff --git a/spacy/cli/train.py b/spacy/cli/train.py index eb1a1a2c1..40934f546 100644 --- a/spacy/cli/train.py +++ b/spacy/cli/train.py @@ -47,7 +47,8 @@ def train_cli( DOCS: 
https://spacy.io/api/cli#train """ - util.logger.setLevel(logging.DEBUG if verbose else logging.INFO) + if verbose: + util.logger.setLevel(logging.DEBUG) overrides = parse_config_overrides(ctx.args) import_code_paths(code_path) train(config_path, output_path, use_gpu=use_gpu, overrides=overrides) diff --git a/spacy/default_config.cfg b/spacy/default_config.cfg index 694fb732f..b005eef40 100644 --- a/spacy/default_config.cfg +++ b/spacy/default_config.cfg @@ -26,6 +26,9 @@ batch_size = 1000 [nlp.tokenizer] @tokenizers = "spacy.Tokenizer.v1" +[nlp.vectors] +@vectors = "spacy.Vectors.v1" + # The pipeline components and their models [components] diff --git a/spacy/displacy/render.py b/spacy/displacy/render.py index 47407bcb7..40b9986e8 100644 --- a/spacy/displacy/render.py +++ b/spacy/displacy/render.py @@ -142,7 +142,25 @@ class SpanRenderer: spans (list): Individual entity spans and their start, end, label, kb_id and kb_url. title (str / None): Document title set in Doc.user_data['title']. """ - per_token_info = [] + per_token_info = self._assemble_per_token_info(tokens, spans) + markup = self._render_markup(per_token_info) + markup = TPL_SPANS.format(content=markup, dir=self.direction) + if title: + markup = TPL_TITLE.format(title=title) + markup + return markup + + @staticmethod + def _assemble_per_token_info( + tokens: List[str], spans: List[Dict[str, Any]] + ) -> List[Dict[str, List[Dict[str, Any]]]]: + """Assembles token info used to generate markup in render_spans(). + tokens (List[str]): Tokens in text. + spans (List[Dict[str, Any]]): Spans in text. + RETURNS (List[Dict[str, List[Dict, str, Any]]]): Per token info needed to render HTML markup for given tokens + and spans. + """ + per_token_info: List[Dict[str, List[Dict[str, Any]]]] = [] + # we must sort so that we can correctly describe when spans need to "stack" # which is determined by their start token, then span length (longer spans on top), # then break any remaining ties with the span label @@ -154,21 +172,22 @@ class SpanRenderer: s["label"], ), ) + for s in spans: # this is the vertical 'slot' that the span will be rendered in # vertical_position = span_label_offset + (offset_step * (slot - 1)) s["render_slot"] = 0 + for idx, token in enumerate(tokens): # Identify if a token belongs to a Span (and which) and if it's a # start token of said Span. We'll use this for the final HTML render token_markup: Dict[str, Any] = {} token_markup["text"] = token - concurrent_spans = 0 + intersecting_spans: List[Dict[str, Any]] = [] entities = [] for span in spans: ent = {} if span["start_token"] <= idx < span["end_token"]: - concurrent_spans += 1 span_start = idx == span["start_token"] ent["label"] = span["label"] ent["is_start"] = span_start @@ -176,7 +195,12 @@ class SpanRenderer: # When the span starts, we need to know how many other # spans are on the 'span stack' and will be rendered. 
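# --- Editor's note: illustrative usage sketch only, not part of this patch. ---
# The SpanRenderer changes in this hunk affect how overlapping spans are
# assigned vertical render slots. A minimal way to exercise that path, assuming
# a blank English pipeline and the default "sc" spans key:
import spacy
from spacy import displacy
from spacy.tokens import Span

nlp = spacy.blank("en")
doc = nlp("Welcome to the Bank of China")
doc.spans["sc"] = [
    Span(doc, 3, 6, "ORG"),  # "Bank of China"
    Span(doc, 5, 6, "GPE"),  # "China" overlaps, so it stacks in its own slot
]
html = displacy.render(doc, style="span", options={"spans_key": "sc"})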
# This value becomes the vertical render slot for this entire span - span["render_slot"] = concurrent_spans + span["render_slot"] = ( + intersecting_spans[-1]["render_slot"] + if len(intersecting_spans) + else 0 + ) + 1 + intersecting_spans.append(span) ent["render_slot"] = span["render_slot"] kb_id = span.get("kb_id", "") kb_url = span.get("kb_url", "#") @@ -193,11 +217,8 @@ class SpanRenderer: span["render_slot"] = 0 token_markup["entities"] = entities per_token_info.append(token_markup) - markup = self._render_markup(per_token_info) - markup = TPL_SPANS.format(content=markup, dir=self.direction) - if title: - markup = TPL_TITLE.format(title=title) + markup - return markup + + return per_token_info def _render_markup(self, per_token_info: List[Dict[str, Any]]) -> str: """Render the markup from per-token information""" @@ -313,6 +334,8 @@ class DependencyRenderer: self.lang = settings.get("lang", DEFAULT_LANG) render_id = f"{id_prefix}-{i}" svg = self.render_svg(render_id, p["words"], p["arcs"]) + if p.get("title"): + svg = TPL_TITLE.format(title=p.get("title")) + svg rendered.append(svg) if page: content = "".join([TPL_FIGURE.format(content=svg) for svg in rendered]) @@ -565,7 +588,7 @@ class EntityRenderer: for i, fragment in enumerate(fragments): markup += escape_html(fragment) if len(fragments) > 1 and i != len(fragments) - 1: - markup += "
" + markup += "
" if self.ents is None or label.upper() in self.ents: color = self.colors.get(label.upper(), self.default_color) ent_settings = { @@ -583,7 +606,7 @@ class EntityRenderer: for i, fragment in enumerate(fragments): markup += escape_html(fragment) if len(fragments) > 1 and i != len(fragments) - 1: - markup += "
" + markup += "
" markup = TPL_ENTS.format(content=markup, dir=self.direction) if title: markup = TPL_TITLE.format(title=title) + markup diff --git a/spacy/errors.py b/spacy/errors.py index 9b893aa96..ad13fd596 100644 --- a/spacy/errors.py +++ b/spacy/errors.py @@ -214,6 +214,7 @@ class Warnings(metaclass=ErrorsWithCodes): W125 = ("The StaticVectors key_attr is no longer used. To set a custom " "key attribute for vectors, configure it through Vectors(attr=) or " "'spacy init vectors --attr'") + W126 = ("These keys are unsupported: {unsupported}") # v4 warning strings W401 = ("`incl_prior is True`, but the selected knowledge base type {kb_type} doesn't support prior probability " @@ -226,7 +227,6 @@ class Errors(metaclass=ErrorsWithCodes): E002 = ("Can't find factory for '{name}' for language {lang} ({lang_code}). " "This usually happens when spaCy calls `nlp.{method}` with a custom " "component name that's not registered on the current language class. " - "If you're using a Transformer, make sure to install 'spacy-transformers'. " "If you're using a custom component, make sure you've added the " "decorator `@Language.component` (for function components) or " "`@Language.factory` (for class components).\n\nAvailable " @@ -551,12 +551,12 @@ class Errors(metaclass=ErrorsWithCodes): "during training, make sure to include it in 'annotating components'") # New errors added in v3.x + E849 = ("The vocab only supports {method} for vectors of type " + "spacy.vectors.Vectors, not {vectors_type}.") E850 = ("The PretrainVectors objective currently only supports default or " "floret vectors, not {mode} vectors.") E851 = ("The 'textcat' component labels should only have values of 0 or 1, " "but found value of '{val}'.") - E852 = ("The tar file pulled from the remote attempted an unsafe path " - "traversal.") E853 = ("Unsupported component factory name '{name}'. The character '.' is " "not permitted in factory names.") E854 = ("Unable to set doc.ents. Check that the 'ents_filter' does not " @@ -970,6 +970,12 @@ class Errors(metaclass=ErrorsWithCodes): " 'min_length': {min_length}, 'max_length': {max_length}") E1054 = ("The text, including whitespace, must match between reference and " "predicted docs when training {component}.") + E1055 = ("The 'replace_listener' callback expects {num_params} parameters, " + "but only callbacks with one or three parameters are supported") + E1056 = ("The `TextCatBOW` architecture expects a length of at least 1, was {length}.") + E1057 = ("The `TextCatReduce` architecture must be used with at least one " + "reduction. 
Please enable one of `use_reduce_first`, " + "`use_reduce_last`, `use_reduce_max` or `use_reduce_mean`.") # v4 error strings E4000 = ("Expected a Doc as input, but got: '{type}'") diff --git a/spacy/kb/__init__.py b/spacy/kb/__init__.py index 2aa084ef5..7a1ee68cf 100644 --- a/spacy/kb/__init__.py +++ b/spacy/kb/__init__.py @@ -2,4 +2,9 @@ from .candidate import Candidate, InMemoryCandidate from .kb import KnowledgeBase from .kb_in_memory import InMemoryLookupKB -__all__ = ["KnowledgeBase", "InMemoryLookupKB", "Candidate", "InMemoryCandidate"] +__all__ = [ + "Candidate", + "KnowledgeBase", + "InMemoryCandidate", + "InMemoryLookupKB", +] diff --git a/spacy/kb/candidate.pyx b/spacy/kb/candidate.pyx index 39a253380..1739cfa64 100644 --- a/spacy/kb/candidate.pyx +++ b/spacy/kb/candidate.pyx @@ -1,4 +1,4 @@ -# cython: infer_types=True, profile=True +# cython: infer_types=True from .kb_in_memory cimport InMemoryLookupKB diff --git a/spacy/kb/kb.pyx b/spacy/kb/kb.pyx index 22a67ed75..afeb0fa2a 100644 --- a/spacy/kb/kb.pyx +++ b/spacy/kb/kb.pyx @@ -1,4 +1,4 @@ -# cython: infer_types=True, profile=True +# cython: infer_types=True from pathlib import Path from typing import Iterable, Tuple, Union diff --git a/spacy/kb/kb_in_memory.pyx b/spacy/kb/kb_in_memory.pyx index 0cf1f7ec1..62cc196b8 100644 --- a/spacy/kb/kb_in_memory.pyx +++ b/spacy/kb/kb_in_memory.pyx @@ -1,4 +1,4 @@ -# cython: infer_types=True, profile=True +# cython: infer_types=True from typing import Any, Callable, Dict, Iterable import srsly diff --git a/spacy/lang/en/lex_attrs.py b/spacy/lang/en/lex_attrs.py index ab9353919..7f9dce948 100644 --- a/spacy/lang/en/lex_attrs.py +++ b/spacy/lang/en/lex_attrs.py @@ -6,7 +6,8 @@ _num_words = [ "nine", "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen", "sixteen", "seventeen", "eighteen", "nineteen", "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety", "hundred", "thousand", - "million", "billion", "trillion", "quadrillion", "gajillion", "bazillion" + "million", "billion", "trillion", "quadrillion", "quintillion", "sextillion", + "septillion", "octillion", "nonillion", "decillion", "gajillion", "bazillion" ] _ordinal_words = [ "first", "second", "third", "fourth", "fifth", "sixth", "seventh", "eighth", @@ -14,7 +15,8 @@ _ordinal_words = [ "fifteenth", "sixteenth", "seventeenth", "eighteenth", "nineteenth", "twentieth", "thirtieth", "fortieth", "fiftieth", "sixtieth", "seventieth", "eightieth", "ninetieth", "hundredth", "thousandth", "millionth", "billionth", - "trillionth", "quadrillionth", "gajillionth", "bazillionth" + "trillionth", "quadrillionth", "quintillionth", "sextillionth", "septillionth", + "octillionth", "nonillionth", "decillionth", "gajillionth", "bazillionth" ] # fmt: on diff --git a/spacy/lang/es/lemmatizer.py b/spacy/lang/es/lemmatizer.py index 44f968347..ee5d38e84 100644 --- a/spacy/lang/es/lemmatizer.py +++ b/spacy/lang/es/lemmatizer.py @@ -163,7 +163,7 @@ class SpanishLemmatizer(Lemmatizer): for old, new in self.lookups.get_table("lemma_rules").get("det", []): if word == old: return [new] - # If none of the specfic rules apply, search in the common rules for + # If none of the specific rules apply, search in the common rules for # determiners and pronouns that follow a unique pattern for # lemmatization. If the word is in the list, return the corresponding # lemma. 
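# --- Editor's note: illustrative sketch only, not part of this patch. ---
# The new E1056 error and the TextCatBOW.v3 template entries earlier in this
# diff introduce an explicit `length` setting for the bag-of-words text
# classifier. The config below is a hedged example of overriding it from
# Python; the values shown are the template defaults, not a recommendation.
import spacy

nlp = spacy.blank("en")
nlp.add_pipe(
    "textcat",
    config={
        "model": {
            "@architectures": "spacy.TextCatBOW.v3",
            "exclusive_classes": True,
            "ngram_size": 1,
            "no_output_layer": False,
            "length": 262144,  # must be >= 1, otherwise E1056 is raised
        }
    },
)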
@@ -291,7 +291,7 @@ class SpanishLemmatizer(Lemmatizer): for old, new in self.lookups.get_table("lemma_rules").get("pron", []): if word == old: return [new] - # If none of the specfic rules apply, search in the common rules for + # If none of the specific rules apply, search in the common rules for # determiners and pronouns that follow a unique pattern for # lemmatization. If the word is in the list, return the corresponding # lemma. diff --git a/spacy/lang/fo/__init__.py b/spacy/lang/fo/__init__.py new file mode 100644 index 000000000..db18f1a5d --- /dev/null +++ b/spacy/lang/fo/__init__.py @@ -0,0 +1,18 @@ +from ...language import BaseDefaults, Language +from ..punctuation import TOKENIZER_INFIXES, TOKENIZER_PREFIXES, TOKENIZER_SUFFIXES +from .tokenizer_exceptions import TOKENIZER_EXCEPTIONS + + +class FaroeseDefaults(BaseDefaults): + tokenizer_exceptions = TOKENIZER_EXCEPTIONS + infixes = TOKENIZER_INFIXES + suffixes = TOKENIZER_SUFFIXES + prefixes = TOKENIZER_PREFIXES + + +class Faroese(Language): + lang = "fo" + Defaults = FaroeseDefaults + + +__all__ = ["Faroese"] diff --git a/spacy/lang/fo/tokenizer_exceptions.py b/spacy/lang/fo/tokenizer_exceptions.py new file mode 100644 index 000000000..856b72200 --- /dev/null +++ b/spacy/lang/fo/tokenizer_exceptions.py @@ -0,0 +1,90 @@ +from ...symbols import ORTH +from ...util import update_exc +from ..tokenizer_exceptions import BASE_EXCEPTIONS + +_exc = {} + +for orth in [ + "apr.", + "aug.", + "avgr.", + "árg.", + "ávís.", + "beinl.", + "blkv.", + "blaðkv.", + "blm.", + "blaðm.", + "bls.", + "blstj.", + "blaðstj.", + "des.", + "eint.", + "febr.", + "fyrrv.", + "góðk.", + "h.m.", + "innt.", + "jan.", + "kl.", + "m.a.", + "mðr.", + "mió.", + "nr.", + "nto.", + "nov.", + "nút.", + "o.a.", + "o.a.m.", + "o.a.tíl.", + "o.fl.", + "ff.", + "o.m.a.", + "o.o.", + "o.s.fr.", + "o.tíl.", + "o.ø.", + "okt.", + "omf.", + "pst.", + "ritstj.", + "sbr.", + "sms.", + "smst.", + "smb.", + "sb.", + "sbrt.", + "sp.", + "sept.", + "spf.", + "spsk.", + "t.e.", + "t.s.", + "t.s.s.", + "tlf.", + "tel.", + "tsk.", + "t.o.v.", + "t.d.", + "uml.", + "ums.", + "uppl.", + "upprfr.", + "uppr.", + "útg.", + "útl.", + "útr.", + "vanl.", + "v.", + "v.h.", + "v.ø.o.", + "viðm.", + "viðv.", + "vm.", + "v.m.", +]: + _exc[orth] = [{ORTH: orth}] + capitalized = orth.capitalize() + _exc[capitalized] = [{ORTH: capitalized}] + +TOKENIZER_EXCEPTIONS = update_exc(BASE_EXCEPTIONS, _exc) diff --git a/spacy/lang/grc/punctuation.py b/spacy/lang/grc/punctuation.py index 8e9fc8bf2..59037617d 100644 --- a/spacy/lang/grc/punctuation.py +++ b/spacy/lang/grc/punctuation.py @@ -15,6 +15,7 @@ _prefixes = ( [ "†", "⸏", + "〈", ] + LIST_PUNCT + LIST_ELLIPSES @@ -31,6 +32,7 @@ _suffixes = ( + [ "†", "⸎", + "〉", r"(?<=[\u1F00-\u1FFF\u0370-\u03FF])[\-\.⸏]", ] ) diff --git a/spacy/lang/nn/__init__.py b/spacy/lang/nn/__init__.py new file mode 100644 index 000000000..ebbf07090 --- /dev/null +++ b/spacy/lang/nn/__init__.py @@ -0,0 +1,20 @@ +from ...language import BaseDefaults, Language +from ..nb import SYNTAX_ITERATORS +from .punctuation import TOKENIZER_INFIXES, TOKENIZER_PREFIXES, TOKENIZER_SUFFIXES +from .tokenizer_exceptions import TOKENIZER_EXCEPTIONS + + +class NorwegianNynorskDefaults(BaseDefaults): + tokenizer_exceptions = TOKENIZER_EXCEPTIONS + prefixes = TOKENIZER_PREFIXES + infixes = TOKENIZER_INFIXES + suffixes = TOKENIZER_SUFFIXES + syntax_iterators = SYNTAX_ITERATORS + + +class NorwegianNynorsk(Language): + lang = "nn" + Defaults = NorwegianNynorskDefaults + + +__all__ = 
["NorwegianNynorsk"] diff --git a/spacy/lang/nn/examples.py b/spacy/lang/nn/examples.py new file mode 100644 index 000000000..95ec0aadd --- /dev/null +++ b/spacy/lang/nn/examples.py @@ -0,0 +1,15 @@ +""" +Example sentences to test spaCy and its language models. + +>>> from spacy.lang.nn.examples import sentences +>>> docs = nlp.pipe(sentences) +""" + + +# sentences taken from Omsetjingsminne frå Nynorsk pressekontor 2022 (https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-80/) +sentences = [ + "Konseptet går ut på at alle tre omgangar tel, alle hopparar må stille i kvalifiseringa og poengsummen skal telje.", + "Det er ein meir enn i same periode i fjor.", + "Det har lava ned enorme snømengder i store delar av Europa den siste tida.", + "Akhtar Chaudhry er ikkje innstilt på Oslo-lista til SV, men utfordrar Heikki Holmås om førsteplassen.", +] diff --git a/spacy/lang/nn/punctuation.py b/spacy/lang/nn/punctuation.py new file mode 100644 index 000000000..7b50b58d3 --- /dev/null +++ b/spacy/lang/nn/punctuation.py @@ -0,0 +1,74 @@ +from ..char_classes import ( + ALPHA, + ALPHA_LOWER, + ALPHA_UPPER, + CONCAT_QUOTES, + CURRENCY, + LIST_CURRENCY, + LIST_ELLIPSES, + LIST_ICONS, + LIST_PUNCT, + LIST_QUOTES, + PUNCT, + UNITS, +) +from ..punctuation import TOKENIZER_SUFFIXES + +_quotes = CONCAT_QUOTES.replace("'", "") +_list_punct = [x for x in LIST_PUNCT if x != "#"] +_list_icons = [x for x in LIST_ICONS if x != "°"] +_list_icons = [x.replace("\\u00B0", "") for x in _list_icons] +_list_quotes = [x for x in LIST_QUOTES if x != "\\'"] + + +_prefixes = ( + ["§", "%", "=", "—", "–", r"\+(?![0-9])"] + + _list_punct + + LIST_ELLIPSES + + LIST_QUOTES + + LIST_CURRENCY + + LIST_ICONS +) + + +_infixes = ( + LIST_ELLIPSES + + _list_icons + + [ + r"(?<=[{al}])\.(?=[{au}])".format(al=ALPHA_LOWER, au=ALPHA_UPPER), + r"(?<=[{a}])[,!?](?=[{a}])".format(a=ALPHA), + r"(?<=[{a}])[:<>=/](?=[{a}])".format(a=ALPHA), + r"(?<=[{a}]),(?=[{a}])".format(a=ALPHA), + r"(?<=[{a}])([{q}\)\]\(\[])(?=[{a}])".format(a=ALPHA, q=_quotes), + r"(?<=[{a}])--(?=[{a}])".format(a=ALPHA), + ] +) + +_suffixes = ( + LIST_PUNCT + + LIST_ELLIPSES + + _list_quotes + + _list_icons + + ["—", "–"] + + [ + r"(?<=[0-9])\+", + r"(?<=°[FfCcKk])\.", + r"(?<=[0-9])(?:{c})".format(c=CURRENCY), + r"(?<=[0-9])(?:{u})".format(u=UNITS), + r"(?<=[{al}{e}{p}(?:{q})])\.".format( + al=ALPHA_LOWER, e=r"%²\-\+", q=_quotes, p=PUNCT + ), + r"(?<=[{au}][{au}])\.".format(au=ALPHA_UPPER), + ] + + [r"(?<=[^sSxXzZ])'"] +) +_suffixes += [ + suffix + for suffix in TOKENIZER_SUFFIXES + if suffix not in ["'s", "'S", "’s", "’S", r"\'"] +] + + +TOKENIZER_PREFIXES = _prefixes +TOKENIZER_INFIXES = _infixes +TOKENIZER_SUFFIXES = _suffixes diff --git a/spacy/lang/nn/tokenizer_exceptions.py b/spacy/lang/nn/tokenizer_exceptions.py new file mode 100644 index 000000000..4bfcb26d8 --- /dev/null +++ b/spacy/lang/nn/tokenizer_exceptions.py @@ -0,0 +1,228 @@ +from ...symbols import NORM, ORTH +from ...util import update_exc +from ..tokenizer_exceptions import BASE_EXCEPTIONS + +_exc = {} + + +for exc_data in [ + {ORTH: "jan.", NORM: "januar"}, + {ORTH: "feb.", NORM: "februar"}, + {ORTH: "mar.", NORM: "mars"}, + {ORTH: "apr.", NORM: "april"}, + {ORTH: "jun.", NORM: "juni"}, + # note: "jul." 
is in the simple list below without a NORM exception + {ORTH: "aug.", NORM: "august"}, + {ORTH: "sep.", NORM: "september"}, + {ORTH: "okt.", NORM: "oktober"}, + {ORTH: "nov.", NORM: "november"}, + {ORTH: "des.", NORM: "desember"}, +]: + _exc[exc_data[ORTH]] = [exc_data] + + +for orth in [ + "Ap.", + "Aq.", + "Ca.", + "Chr.", + "Co.", + "Dr.", + "F.eks.", + "Fr.p.", + "Frp.", + "Grl.", + "Kr.", + "Kr.F.", + "Kr.F.s", + "Mr.", + "Mrs.", + "Pb.", + "Pr.", + "Sp.", + "St.", + "a.m.", + "ad.", + "adm.dir.", + "adr.", + "b.c.", + "bl.a.", + "bla.", + "bm.", + "bnr.", + "bto.", + "c.c.", + "ca.", + "cand.mag.", + "co.", + "d.d.", + "d.m.", + "d.y.", + "dept.", + "dr.", + "dr.med.", + "dr.philos.", + "dr.psychol.", + "dss.", + "dvs.", + "e.Kr.", + "e.l.", + "eg.", + "eig.", + "ekskl.", + "el.", + "et.", + "etc.", + "etg.", + "ev.", + "evt.", + "f.", + "f.Kr.", + "f.eks.", + "f.o.m.", + "fhv.", + "fk.", + "foreg.", + "fork.", + "fv.", + "fvt.", + "g.", + "gl.", + "gno.", + "gnr.", + "grl.", + "gt.", + "h.r.adv.", + "hhv.", + "hoh.", + "hr.", + "ifb.", + "ifm.", + "iht.", + "inkl.", + "istf.", + "jf.", + "jr.", + "jul.", + "juris.", + "kfr.", + "kgl.", + "kgl.res.", + "kl.", + "komm.", + "kr.", + "kst.", + "lat.", + "lø.", + "m.a.", + "m.a.o.", + "m.fl.", + "m.m.", + "m.v.", + "ma.", + "mag.art.", + "md.", + "mfl.", + "mht.", + "mill.", + "min.", + "mnd.", + "moh.", + "mrd.", + "muh.", + "mv.", + "mva.", + "n.å.", + "ndf.", + "nr.", + "nto.", + "nyno.", + "o.a.", + "o.l.", + "obl.", + "off.", + "ofl.", + "on.", + "op.", + "org.", + "osv.", + "ovf.", + "p.", + "p.a.", + "p.g.a.", + "p.m.", + "p.t.", + "pga.", + "ph.d.", + "pkt.", + "pr.", + "pst.", + "pt.", + "red.anm.", + "ref.", + "res.", + "res.kap.", + "resp.", + "rv.", + "s.", + "s.d.", + "s.k.", + "s.u.", + "s.å.", + "sen.", + "sep.", + "siviling.", + "sms.", + "snr.", + "spm.", + "sr.", + "sst.", + "st.", + "st.meld.", + "st.prp.", + "stip.", + "stk.", + "stud.", + "sv.", + "såk.", + "sø.", + "t.d.", + "t.h.", + "t.o.m.", + "t.v.", + "temp.", + "ti.", + "tils.", + "tilsv.", + "tl;dr", + "tlf.", + "to.", + "ult.", + "utg.", + "v.", + "vedk.", + "vedr.", + "vg.", + "vgs.", + "vha.", + "vit.ass.", + "vn.", + "vol.", + "vs.", + "vsa.", + "§§", + "©NTB", + "årg.", + "årh.", +]: + _exc[orth] = [{ORTH: orth}] + +# Dates +for h in range(1, 31 + 1): + for period in ["."]: + _exc[f"{h}{period}"] = [{ORTH: f"{h}."}] + +_custom_base_exc = {"i.": [{ORTH: "i", NORM: "i"}, {ORTH: "."}]} +_exc.update(_custom_base_exc) + +TOKENIZER_EXCEPTIONS = update_exc(BASE_EXCEPTIONS, _exc) diff --git a/spacy/lang/tr/examples.py b/spacy/lang/tr/examples.py index dfb324a4e..c912c950d 100644 --- a/spacy/lang/tr/examples.py +++ b/spacy/lang/tr/examples.py @@ -15,4 +15,7 @@ sentences = [ "Türkiye'nin başkenti neresi?", "Bakanlar Kurulu 180 günlük eylem planını açıkladı.", "Merkez Bankası, beklentiler doğrultusunda faizlerde değişikliğe gitmedi.", + "Cemal Sureya kimdir?", + "Bunlari Biliyor muydunuz?", + "Altinoluk Turkiye haritasinin neresinde yer alir?", ] diff --git a/spacy/language.py b/spacy/language.py index 7920151e7..1b2722a2d 100644 --- a/spacy/language.py +++ b/spacy/language.py @@ -1,4 +1,5 @@ import functools +import inspect import itertools import multiprocessing as mp import random @@ -64,6 +65,7 @@ from .util import ( registry, warn_if_jupyter_cupy, ) +from .vectors import BaseVectors from .vocab import Vocab, create_vocab PipeCallable = Callable[[Doc], Doc] @@ -153,6 +155,7 @@ class Language: max_length: int = 10**6, meta: Dict[str, Any] = {}, create_tokenizer: 
Optional[Callable[["Language"], Callable[[str], Doc]]] = None, + create_vectors: Optional[Callable[["Vocab"], BaseVectors]] = None, batch_size: int = 1000, **kwargs, ) -> None: @@ -192,6 +195,10 @@ class Language: raise ValueError(Errors.E918.format(vocab=vocab, vocab_type=type(Vocab))) if vocab is True: vocab = create_vocab(self.lang, self.Defaults) + if not create_vectors: + vectors_cfg = {"vectors": self._config["nlp"]["vectors"]} + create_vectors = registry.resolve(vectors_cfg)["vectors"] + vocab.vectors = create_vectors(vocab) else: if (self.lang and vocab.lang) and (self.lang != vocab.lang): raise ValueError(Errors.E150.format(nlp=self.lang, vocab=vocab.lang)) @@ -1878,6 +1885,10 @@ class Language: ).merge(config) if "nlp" not in config: raise ValueError(Errors.E985.format(config=config)) + # fill in [nlp.vectors] if not present (as a narrower alternative to + # auto-filling [nlp] from the default config) + if "vectors" not in config["nlp"]: + config["nlp"]["vectors"] = {"@vectors": "spacy.Vectors.v1"} config_lang = config["nlp"].get("lang") if config_lang is not None and config_lang != cls.lang: raise ValueError( @@ -1913,6 +1924,7 @@ class Language: filled["nlp"], validate=validate, schema=ConfigSchemaNlp ) create_tokenizer = resolved_nlp["tokenizer"] + create_vectors = resolved_nlp["vectors"] before_creation = resolved_nlp["before_creation"] after_creation = resolved_nlp["after_creation"] after_pipeline_creation = resolved_nlp["after_pipeline_creation"] @@ -1933,7 +1945,12 @@ class Language: # inside stuff like the spacy train function. If we loaded them here, # then we would load them twice at runtime: once when we make from config, # and then again when we load from disk. - nlp = lang_cls(vocab=vocab, create_tokenizer=create_tokenizer, meta=meta) + nlp = lang_cls( + vocab=vocab, + create_tokenizer=create_tokenizer, + create_vectors=create_vectors, + meta=meta, + ) if after_creation is not None: nlp = after_creation(nlp) if not isinstance(nlp, cls): @@ -2150,8 +2167,20 @@ class Language: # Go over the listener layers and replace them for listener in pipe_listeners: new_model = tok2vec_model.copy() - if "replace_listener" in tok2vec_model.attrs: - new_model = tok2vec_model.attrs["replace_listener"](new_model) + replace_listener_func = tok2vec_model.attrs.get("replace_listener") + if replace_listener_func is not None: + # Pass the extra args to the callback without breaking compatibility with + # old library versions that only expect a single parameter. + num_params = len( + inspect.signature(replace_listener_func).parameters + ) + if num_params == 1: + new_model = replace_listener_func(new_model) + elif num_params == 3: + new_model = replace_listener_func(new_model, listener, tok2vec) + else: + raise ValueError(Errors.E1055.format(num_params=num_params)) + util.replace_model_node(pipe.model, listener, new_model) # type: ignore[attr-defined] tok2vec.remove_listener(listener, pipe_name) diff --git a/spacy/lexeme.pyx b/spacy/lexeme.pyx index b882a5479..41fc8f1d2 100644 --- a/spacy/lexeme.pyx +++ b/spacy/lexeme.pyx @@ -1,4 +1,5 @@ # cython: embedsignature=True +# cython: profile=False # Compiler crashes on memory view coercion without this. Should report bug. 
cimport numpy as np from libc.string cimport memset diff --git a/spacy/matcher/__init__.py b/spacy/matcher/__init__.py index f671f2e35..b6d6d70ab 100644 --- a/spacy/matcher/__init__.py +++ b/spacy/matcher/__init__.py @@ -3,4 +3,4 @@ from .levenshtein import levenshtein from .matcher import Matcher from .phrasematcher import PhraseMatcher -__all__ = ["Matcher", "PhraseMatcher", "DependencyMatcher", "levenshtein"] +__all__ = ["DependencyMatcher", "Matcher", "PhraseMatcher", "levenshtein"] diff --git a/spacy/matcher/dependencymatcher.pyx b/spacy/matcher/dependencymatcher.pyx index b8b7828dd..0b639ab04 100644 --- a/spacy/matcher/dependencymatcher.pyx +++ b/spacy/matcher/dependencymatcher.pyx @@ -1,4 +1,4 @@ -# cython: infer_types=True, profile=True +# cython: infer_types=True import warnings from collections import defaultdict from itertools import product @@ -129,6 +129,7 @@ cdef class DependencyMatcher: else: required_keys = {"RIGHT_ID", "RIGHT_ATTRS", "REL_OP", "LEFT_ID"} relation_keys = set(relation.keys()) + # Identify required keys that have not been specified missing = required_keys - relation_keys if missing: missing_txt = ", ".join(list(missing)) @@ -136,6 +137,13 @@ cdef class DependencyMatcher: required=required_keys, missing=missing_txt )) + # Identify additional, unsupported keys + unsupported = relation_keys - required_keys + if unsupported: + unsupported_txt = ", ".join(list(unsupported)) + warnings.warn(Warnings.W126.format( + unsupported=unsupported_txt + )) if ( relation["RIGHT_ID"] in visited_nodes or relation["LEFT_ID"] not in visited_nodes diff --git a/spacy/matcher/levenshtein.pyx b/spacy/matcher/levenshtein.pyx index e823ce99d..e394f2cf4 100644 --- a/spacy/matcher/levenshtein.pyx +++ b/spacy/matcher/levenshtein.pyx @@ -1,4 +1,4 @@ -# cython: profile=True, binding=True, infer_types=True +# cython: binding=True, infer_types=True from cpython.object cimport PyObject from libc.stdint cimport int64_t diff --git a/spacy/matcher/matcher.pyx b/spacy/matcher/matcher.pyx index f926608b8..7ca39568c 100644 --- a/spacy/matcher/matcher.pyx +++ b/spacy/matcher/matcher.pyx @@ -1,4 +1,4 @@ -# cython: binding=True, infer_types=True, profile=True +# cython: binding=True, infer_types=True from typing import Iterable, List from cymem.cymem cimport Pool diff --git a/spacy/matcher/phrasematcher.pyx b/spacy/matcher/phrasematcher.pyx index eb9ca675f..3499a1e39 100644 --- a/spacy/matcher/phrasematcher.pyx +++ b/spacy/matcher/phrasematcher.pyx @@ -1,4 +1,4 @@ -# cython: infer_types=True, profile=True +# cython: infer_types=True from collections import defaultdict from typing import List diff --git a/spacy/ml/models/textcat.py b/spacy/ml/models/textcat.py index ab14110d2..3e5471ab3 100644 --- a/spacy/ml/models/textcat.py +++ b/spacy/ml/models/textcat.py @@ -1,21 +1,27 @@ from functools import partial -from typing import List, Optional, cast +from typing import List, Optional, Tuple, cast from thinc.api import ( Dropout, + Gelu, LayerNorm, Linear, Logistic, Maxout, Model, ParametricAttention, + ParametricAttention_v2, Relu, Softmax, SparseLinear, + SparseLinear_v2, chain, clone, concatenate, list2ragged, + reduce_first, + reduce_last, + reduce_max, reduce_mean, reduce_sum, residual, @@ -25,9 +31,10 @@ from thinc.api import ( ) from thinc.layers.chain import init as init_chain from thinc.layers.resizable import resize_linear_weighted, resize_model -from thinc.types import Floats2d +from thinc.types import ArrayXd, Floats2d from ...attrs import ORTH +from ...errors import Errors from ...tokens import 
Doc from ...util import registry from ..extract_ngrams import extract_ngrams @@ -47,39 +54,15 @@ def build_simple_cnn_text_classifier( outputs sum to 1. If exclusive_classes=False, a logistic non-linearity is applied instead, so that outputs are in the range [0, 1]. """ - fill_defaults = {"b": 0, "W": 0} - with Model.define_operators({">>": chain}): - cnn = tok2vec >> list2ragged() >> reduce_mean() - nI = tok2vec.maybe_get_dim("nO") - if exclusive_classes: - output_layer = Softmax(nO=nO, nI=nI) - fill_defaults["b"] = NEG_VALUE - resizable_layer: Model = resizable( - output_layer, - resize_layer=partial( - resize_linear_weighted, fill_defaults=fill_defaults - ), - ) - model = cnn >> resizable_layer - else: - output_layer = Linear(nO=nO, nI=nI) - resizable_layer = resizable( - output_layer, - resize_layer=partial( - resize_linear_weighted, fill_defaults=fill_defaults - ), - ) - model = cnn >> resizable_layer >> Logistic() - model.set_ref("output_layer", output_layer) - model.attrs["resize_output"] = partial( - resize_and_set_ref, - resizable_layer=resizable_layer, - ) - model.set_ref("tok2vec", tok2vec) - if nO is not None: - model.set_dim("nO", cast(int, nO)) - model.attrs["multi_label"] = not exclusive_classes - return model + return build_reduce_text_classifier( + tok2vec=tok2vec, + exclusive_classes=exclusive_classes, + use_reduce_first=False, + use_reduce_last=False, + use_reduce_max=False, + use_reduce_mean=True, + nO=nO, + ) def resize_and_set_ref(model, new_nO, resizable_layer): @@ -95,10 +78,48 @@ def build_bow_text_classifier( ngram_size: int, no_output_layer: bool, nO: Optional[int] = None, +) -> Model[List[Doc], Floats2d]: + return _build_bow_text_classifier( + exclusive_classes=exclusive_classes, + ngram_size=ngram_size, + no_output_layer=no_output_layer, + nO=nO, + sparse_linear=SparseLinear(nO=nO), + ) + + +@registry.architectures("spacy.TextCatBOW.v3") +def build_bow_text_classifier_v3( + exclusive_classes: bool, + ngram_size: int, + no_output_layer: bool, + length: int = 262144, + nO: Optional[int] = None, +) -> Model[List[Doc], Floats2d]: + if length < 1: + raise ValueError(Errors.E1056.format(length=length)) + + # Find k such that 2**(k-1) < length <= 2**k. + length = 2 ** (length - 1).bit_length() + + return _build_bow_text_classifier( + exclusive_classes=exclusive_classes, + ngram_size=ngram_size, + no_output_layer=no_output_layer, + nO=nO, + sparse_linear=SparseLinear_v2(nO=nO, length=length), + ) + + +def _build_bow_text_classifier( + exclusive_classes: bool, + ngram_size: int, + no_output_layer: bool, + sparse_linear: Model[Tuple[ArrayXd, ArrayXd, ArrayXd], ArrayXd], + nO: Optional[int] = None, ) -> Model[List[Doc], Floats2d]: fill_defaults = {"b": 0, "W": 0} with Model.define_operators({">>": chain}): - sparse_linear = SparseLinear(nO=nO) output_layer = None if not no_output_layer: fill_defaults["b"] = NEG_VALUE @@ -127,6 +148,9 @@ def build_text_classifier_v2( linear_model: Model[List[Doc], Floats2d], nO: Optional[int] = None, ) -> Model[List[Doc], Floats2d]: + # TODO: build the model with _build_parametric_attention_with_residual_nonlinear + # in spaCy v4. We don't do this in spaCy v3 to preserve model + # compatibility. 
exclusive_classes = not linear_model.attrs["multi_label"] with Model.define_operators({">>": chain, "|": concatenate}): width = tok2vec.maybe_get_dim("nO") @@ -190,3 +214,145 @@ def build_text_classifier_lowdata( model = model >> Dropout(dropout) model = model >> Logistic() return model + + +@registry.architectures("spacy.TextCatParametricAttention.v1") +def build_textcat_parametric_attention_v1( + tok2vec: Model[List[Doc], List[Floats2d]], + exclusive_classes: bool, + nO: Optional[int] = None, +) -> Model[List[Doc], Floats2d]: + width = tok2vec.maybe_get_dim("nO") + parametric_attention = _build_parametric_attention_with_residual_nonlinear( + tok2vec=tok2vec, + nonlinear_layer=Maxout(nI=width, nO=width), + key_transform=Gelu(nI=width, nO=width), + ) + with Model.define_operators({">>": chain}): + if exclusive_classes: + output_layer = Softmax(nO=nO) + else: + output_layer = Linear(nO=nO) >> Logistic() + model = parametric_attention >> output_layer + if model.has_dim("nO") is not False and nO is not None: + model.set_dim("nO", cast(int, nO)) + model.set_ref("output_layer", output_layer) + model.attrs["multi_label"] = not exclusive_classes + + return model + + +def _build_parametric_attention_with_residual_nonlinear( + *, + tok2vec: Model[List[Doc], List[Floats2d]], + nonlinear_layer: Model[Floats2d, Floats2d], + key_transform: Optional[Model[Floats2d, Floats2d]] = None, +) -> Model[List[Doc], Floats2d]: + with Model.define_operators({">>": chain, "|": concatenate}): + width = tok2vec.maybe_get_dim("nO") + attention_layer = ParametricAttention_v2(nO=width, key_transform=key_transform) + norm_layer = LayerNorm(nI=width) + parametric_attention = ( + tok2vec + >> list2ragged() + >> attention_layer + >> reduce_sum() + >> residual(nonlinear_layer >> norm_layer >> Dropout(0.0)) + ) + + parametric_attention.init = _init_parametric_attention_with_residual_nonlinear + + parametric_attention.set_ref("tok2vec", tok2vec) + parametric_attention.set_ref("attention_layer", attention_layer) + parametric_attention.set_ref("nonlinear_layer", nonlinear_layer) + parametric_attention.set_ref("norm_layer", norm_layer) + + return parametric_attention + + +def _init_parametric_attention_with_residual_nonlinear(model, X, Y) -> Model: + tok2vec_width = get_tok2vec_width(model) + model.get_ref("attention_layer").set_dim("nO", tok2vec_width) + model.get_ref("nonlinear_layer").set_dim("nO", tok2vec_width) + model.get_ref("nonlinear_layer").set_dim("nI", tok2vec_width) + model.get_ref("norm_layer").set_dim("nI", tok2vec_width) + model.get_ref("norm_layer").set_dim("nO", tok2vec_width) + init_chain(model, X, Y) + return model + + +@registry.architectures("spacy.TextCatReduce.v1") +def build_reduce_text_classifier( + tok2vec: Model, + exclusive_classes: bool, + use_reduce_first: bool, + use_reduce_last: bool, + use_reduce_max: bool, + use_reduce_mean: bool, + nO: Optional[int] = None, +) -> Model[List[Doc], Floats2d]: + """Build a model that classifies pooled `Doc` representations. + + Pooling is performed using reductions. Reductions are concatenated when + multiple reductions are used. + + tok2vec (Model): the tok2vec layer to pool over. + exclusive_classes (bool): Whether or not classes are mutually exclusive. + use_reduce_first (bool): Pool by using the hidden representation of the + first token of a `Doc`. + use_reduce_last (bool): Pool by using the hidden representation of the + last token of a `Doc`. + use_reduce_max (bool): Pool by taking the maximum values of the hidden + representations of a `Doc`. 
+ use_reduce_mean (bool): Pool by taking the mean of all hidden + representations of a `Doc`. + nO (Optional[int]): Number of classes. + """ + + fill_defaults = {"b": 0, "W": 0} + reductions = [] + if use_reduce_first: + reductions.append(reduce_first()) + if use_reduce_last: + reductions.append(reduce_last()) + if use_reduce_max: + reductions.append(reduce_max()) + if use_reduce_mean: + reductions.append(reduce_mean()) + + if not len(reductions): + raise ValueError(Errors.E1057) + + with Model.define_operators({">>": chain}): + cnn = tok2vec >> list2ragged() >> concatenate(*reductions) + nO_tok2vec = tok2vec.maybe_get_dim("nO") + nI = nO_tok2vec * len(reductions) if nO_tok2vec is not None else None + if exclusive_classes: + output_layer = Softmax(nO=nO, nI=nI) + fill_defaults["b"] = NEG_VALUE + resizable_layer: Model = resizable( + output_layer, + resize_layer=partial( + resize_linear_weighted, fill_defaults=fill_defaults + ), + ) + model = cnn >> resizable_layer + else: + output_layer = Linear(nO=nO, nI=nI) + resizable_layer = resizable( + output_layer, + resize_layer=partial( + resize_linear_weighted, fill_defaults=fill_defaults + ), + ) + model = cnn >> resizable_layer >> Logistic() + model.set_ref("output_layer", output_layer) + model.attrs["resize_output"] = partial( + resize_and_set_ref, + resizable_layer=resizable_layer, + ) + model.set_ref("tok2vec", tok2vec) + if nO is not None: + model.set_dim("nO", cast(int, nO)) + model.attrs["multi_label"] = not exclusive_classes + return model diff --git a/spacy/ml/models/tok2vec.py b/spacy/ml/models/tok2vec.py index 633bd83f7..61bc7291e 100644 --- a/spacy/ml/models/tok2vec.py +++ b/spacy/ml/models/tok2vec.py @@ -67,8 +67,8 @@ def build_hash_embed_cnn_tok2vec( are between 2 and 8. window_size (int): The number of tokens on either side to concatenate during the convolutions. The receptive field of the CNN will be - depth * (window_size * 2 + 1), so a 4-layer network with window_size of - 2 will be sensitive to 20 words at a time. Recommended value is 1. + depth * window_size * 2 + 1, so a 4-layer network with window_size of + 2 will be sensitive to 17 words at a time. Recommended value is 1. embed_size (int): The number of rows in the hash embedding tables. This can be surprisingly small, due to the use of the hash embeddings. Recommended values are between 2000 and 10000. 
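The corrected `window_size` docstring above is easy to verify: each of the `depth` convolutional layers widens the receptive field by `window_size` tokens on both sides of the centre token. A quick sketch (the helper name is illustrative only):

```python
# Receptive field of the stacked CNN described in build_hash_embed_cnn_tok2vec.
def cnn_receptive_field(depth: int, window_size: int) -> int:
    # every layer adds window_size tokens of context on each side
    return depth * window_size * 2 + 1

assert cnn_receptive_field(depth=4, window_size=2) == 17
assert cnn_receptive_field(depth=4, window_size=1) == 9
```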
diff --git a/spacy/ml/parser_model.pyx b/spacy/ml/parser_model.pyx index 843275f4c..dd9166681 100644 --- a/spacy/ml/parser_model.pyx +++ b/spacy/ml/parser_model.pyx @@ -1,4 +1,5 @@ # cython: infer_types=True, cdivision=True, boundscheck=False +# cython: profile=False cimport numpy as np from libc.math cimport exp from libc.stdlib cimport calloc, free, realloc diff --git a/spacy/ml/staticvectors.py b/spacy/ml/staticvectors.py index b75240c5d..1a1b0a0ff 100644 --- a/spacy/ml/staticvectors.py +++ b/spacy/ml/staticvectors.py @@ -9,7 +9,7 @@ from thinc.util import partial from ..attrs import ORTH from ..errors import Errors, Warnings from ..tokens import Doc -from ..vectors import Mode +from ..vectors import Mode, Vectors from ..vocab import Vocab @@ -48,11 +48,14 @@ def forward( key_attr: int = getattr(vocab.vectors, "attr", ORTH) keys = model.ops.flatten([cast(Ints1d, doc.to_array(key_attr)) for doc in docs]) W = cast(Floats2d, model.ops.as_contig(model.get_param("W"))) - if vocab.vectors.mode == Mode.default: + if isinstance(vocab.vectors, Vectors) and vocab.vectors.mode == Mode.default: V = model.ops.asarray(vocab.vectors.data) rows = vocab.vectors.find(keys=keys) V = model.ops.as_contig(V[rows]) - elif vocab.vectors.mode == Mode.floret: + elif isinstance(vocab.vectors, Vectors) and vocab.vectors.mode == Mode.floret: + V = vocab.vectors.get_batch(keys) + V = model.ops.as_contig(V) + elif hasattr(vocab.vectors, "get_batch"): V = vocab.vectors.get_batch(keys) V = model.ops.as_contig(V) else: @@ -61,7 +64,7 @@ def forward( vectors_data = model.ops.gemm(V, W, trans2=True) except ValueError: raise RuntimeError(Errors.E896) - if vocab.vectors.mode == Mode.default: + if isinstance(vocab.vectors, Vectors) and vocab.vectors.mode == Mode.default: # Convert negative indices to 0-vectors # TODO: more options for UNK tokens vectors_data[rows < 0] = 0 diff --git a/spacy/morphology.pyx b/spacy/morphology.pyx index 665e964bf..d37362bed 100644 --- a/spacy/morphology.pyx +++ b/spacy/morphology.pyx @@ -1,4 +1,5 @@ # cython: infer_types +# cython: profile=False import warnings from typing import Dict, List, Optional, Tuple, Union diff --git a/spacy/parts_of_speech.pyx b/spacy/parts_of_speech.pyx index e71fb917f..98e3570ec 100644 --- a/spacy/parts_of_speech.pyx +++ b/spacy/parts_of_speech.pyx @@ -1,4 +1,4 @@ - +# cython: profile=False IDS = { "": NO_TAG, "ADJ": ADJ, diff --git a/spacy/pipeline/__init__.py b/spacy/pipeline/__init__.py index a0a15f63d..c28229078 100644 --- a/spacy/pipeline/__init__.py +++ b/spacy/pipeline/__init__.py @@ -21,6 +21,7 @@ from .trainable_pipe import TrainablePipe __all__ = [ "AttributeRuler", "DependencyParser", + "EditTreeLemmatizer", "EntityLinker", "EntityRecognizer", "Morphologizer", diff --git a/spacy/pipeline/_edit_tree_internals/edit_trees.pyx b/spacy/pipeline/_edit_tree_internals/edit_trees.pyx index 78cd25622..7abd9f2a6 100644 --- a/spacy/pipeline/_edit_tree_internals/edit_trees.pyx +++ b/spacy/pipeline/_edit_tree_internals/edit_trees.pyx @@ -1,4 +1,5 @@ # cython: infer_types=True, binding=True +# cython: profile=False from cython.operator cimport dereference as deref from libc.stdint cimport UINT32_MAX, uint32_t from libc.string cimport memset diff --git a/spacy/pipeline/_edit_tree_internals/schemas.py b/spacy/pipeline/_edit_tree_internals/schemas.py index 1e307b66c..89f2861ce 100644 --- a/spacy/pipeline/_edit_tree_internals/schemas.py +++ b/spacy/pipeline/_edit_tree_internals/schemas.py @@ -1,8 +1,12 @@ from collections import defaultdict from typing import Any, Dict, 
List, Union -from pydantic import BaseModel, Field, ValidationError -from pydantic.types import StrictBool, StrictInt, StrictStr +try: + from pydantic.v1 import BaseModel, Field, ValidationError + from pydantic.v1.types import StrictBool, StrictInt, StrictStr +except ImportError: + from pydantic import BaseModel, Field, ValidationError # type: ignore + from pydantic.types import StrictBool, StrictInt, StrictStr # type: ignore class MatchNodeSchema(BaseModel): diff --git a/spacy/pipeline/_parser_internals/_beam_utils.pyx b/spacy/pipeline/_parser_internals/_beam_utils.pyx index 5efc52a60..2752acb44 100644 --- a/spacy/pipeline/_parser_internals/_beam_utils.pyx +++ b/spacy/pipeline/_parser_internals/_beam_utils.pyx @@ -1,5 +1,4 @@ # cython: infer_types=True -# cython: profile=True import numpy from ...typedefs cimport class_t diff --git a/spacy/pipeline/_parser_internals/_state.pyx b/spacy/pipeline/_parser_internals/_state.pyx index e69de29bb..61bf62038 100644 --- a/spacy/pipeline/_parser_internals/_state.pyx +++ b/spacy/pipeline/_parser_internals/_state.pyx @@ -0,0 +1 @@ +# cython: profile=False diff --git a/spacy/pipeline/_parser_internals/arc_eager.pyx b/spacy/pipeline/_parser_internals/arc_eager.pyx index cec3a38f5..2f58ef86a 100644 --- a/spacy/pipeline/_parser_internals/arc_eager.pyx +++ b/spacy/pipeline/_parser_internals/arc_eager.pyx @@ -1,4 +1,4 @@ -# cython: profile=True, cdivision=True, infer_types=True +# cython: cdivision=True, infer_types=True from cymem.cymem cimport Address, Pool from libc.stdint cimport int32_t from libcpp.vector cimport vector diff --git a/spacy/pipeline/_parser_internals/ner.pyx b/spacy/pipeline/_parser_internals/ner.pyx index 411e53668..716ec730d 100644 --- a/spacy/pipeline/_parser_internals/ner.pyx +++ b/spacy/pipeline/_parser_internals/ner.pyx @@ -1,3 +1,4 @@ +# cython: profile=False from cymem.cymem cimport Pool from libcpp.memory cimport shared_ptr from libcpp.vector cimport vector diff --git a/spacy/pipeline/_parser_internals/nonproj.pyx b/spacy/pipeline/_parser_internals/nonproj.pyx index 93ad14feb..7de19851e 100644 --- a/spacy/pipeline/_parser_internals/nonproj.pyx +++ b/spacy/pipeline/_parser_internals/nonproj.pyx @@ -1,4 +1,4 @@ -# cython: profile=True, infer_types=True +# cython: infer_types=True """Implements the projectivize/deprojectivize mechanism in Nivre & Nilsson 2005 for doing pseudo-projective parsing implementation uses the HEAD decoration scheme. 
diff --git a/spacy/pipeline/_parser_internals/stateclass.pyx b/spacy/pipeline/_parser_internals/stateclass.pyx index fdb5004bb..e3b063b7d 100644 --- a/spacy/pipeline/_parser_internals/stateclass.pyx +++ b/spacy/pipeline/_parser_internals/stateclass.pyx @@ -1,4 +1,5 @@ # cython: infer_types=True +# cython: profile=False from libcpp.vector cimport vector from ...tokens.doc cimport Doc diff --git a/spacy/pipeline/_parser_internals/transition_system.pyx b/spacy/pipeline/_parser_internals/transition_system.pyx index e084689bc..485ce7c10 100644 --- a/spacy/pipeline/_parser_internals/transition_system.pyx +++ b/spacy/pipeline/_parser_internals/transition_system.pyx @@ -1,4 +1,5 @@ # cython: infer_types=True +# cython: profile=False from __future__ import print_function from cymem.cymem cimport Pool diff --git a/spacy/pipeline/dep_parser.pyx b/spacy/pipeline/dep_parser.pyx index e4767ed2f..f15ec9bad 100644 --- a/spacy/pipeline/dep_parser.pyx +++ b/spacy/pipeline/dep_parser.pyx @@ -1,4 +1,4 @@ -# cython: infer_types=True, profile=True, binding=True +# cython: infer_types=True, binding=True from collections import defaultdict from typing import Callable, Optional diff --git a/spacy/pipeline/morphologizer.pyx b/spacy/pipeline/morphologizer.pyx index 9f8b88502..4c8b5d0d7 100644 --- a/spacy/pipeline/morphologizer.pyx +++ b/spacy/pipeline/morphologizer.pyx @@ -1,4 +1,4 @@ -# cython: infer_types=True, profile=True, binding=True +# cython: infer_types=True, binding=True from itertools import islice from typing import Callable, Dict, Iterable, Optional, Union diff --git a/spacy/pipeline/ner.pyx b/spacy/pipeline/ner.pyx index 4ce7ec37b..4d27ca1a0 100644 --- a/spacy/pipeline/ner.pyx +++ b/spacy/pipeline/ner.pyx @@ -1,4 +1,4 @@ -# cython: infer_types=True, profile=True, binding=True +# cython: infer_types=True, binding=True from collections import defaultdict from typing import Callable, Optional diff --git a/spacy/pipeline/pipe.pyx b/spacy/pipeline/pipe.pyx index 8409e64c3..457fd1902 100644 --- a/spacy/pipeline/pipe.pyx +++ b/spacy/pipeline/pipe.pyx @@ -1,4 +1,4 @@ -# cython: infer_types=True, profile=True, binding=True +# cython: infer_types=True, binding=True from typing import Callable, Dict, Iterable, Iterator, Tuple, Union import srsly diff --git a/spacy/pipeline/sentencizer.pyx b/spacy/pipeline/sentencizer.pyx index 28cf5d6b4..d8298b1cb 100644 --- a/spacy/pipeline/sentencizer.pyx +++ b/spacy/pipeline/sentencizer.pyx @@ -1,4 +1,4 @@ -# cython: infer_types=True, profile=True, binding=True +# cython: infer_types=True, binding=True from typing import Callable, List, Optional import srsly diff --git a/spacy/pipeline/senter.pyx b/spacy/pipeline/senter.pyx index edba6e5ca..36071e219 100644 --- a/spacy/pipeline/senter.pyx +++ b/spacy/pipeline/senter.pyx @@ -1,4 +1,4 @@ -# cython: infer_types=True, profile=True, binding=True +# cython: infer_types=True, binding=True from itertools import islice from typing import Callable, Iterable, Optional diff --git a/spacy/pipeline/tagger.pyx b/spacy/pipeline/tagger.pyx index b28ea00bd..e3b8beea1 100644 --- a/spacy/pipeline/tagger.pyx +++ b/spacy/pipeline/tagger.pyx @@ -1,4 +1,4 @@ -# cython: infer_types=True, profile=True, binding=True +# cython: infer_types=True, binding=True from itertools import islice from typing import Callable, Dict, Iterable, List, Optional, Tuple, Union diff --git a/spacy/pipeline/textcat.py b/spacy/pipeline/textcat.py index 29b1a9a00..4d194b352 100644 --- a/spacy/pipeline/textcat.py +++ b/spacy/pipeline/textcat.py @@ -39,8 +39,9 @@ maxout_pieces = 
3 depth = 2 [model.linear_model] -@architectures = "spacy.TextCatBOW.v2" +@architectures = "spacy.TextCatBOW.v3" exclusive_classes = true +length = 262144 ngram_size = 1 no_output_layer = false """ @@ -48,16 +49,21 @@ DEFAULT_SINGLE_TEXTCAT_MODEL = Config().from_str(single_label_default_config)["m single_label_bow_config = """ [model] -@architectures = "spacy.TextCatBOW.v2" +@architectures = "spacy.TextCatBOW.v3" exclusive_classes = true +length = 262144 ngram_size = 1 no_output_layer = false """ single_label_cnn_config = """ [model] -@architectures = "spacy.TextCatCNN.v2" +@architectures = "spacy.TextCatReduce.v1" exclusive_classes = true +use_reduce_first = false +use_reduce_last = false +use_reduce_max = false +use_reduce_mean = true [model.tok2vec] @architectures = "spacy.HashEmbedCNN.v2" diff --git a/spacy/pipeline/textcat_multilabel.py b/spacy/pipeline/textcat_multilabel.py index 94509f4cb..52115e81d 100644 --- a/spacy/pipeline/textcat_multilabel.py +++ b/spacy/pipeline/textcat_multilabel.py @@ -35,8 +35,9 @@ maxout_pieces = 3 depth = 2 [model.linear_model] -@architectures = "spacy.TextCatBOW.v2" +@architectures = "spacy.TextCatBOW.v3" exclusive_classes = false +length = 262144 ngram_size = 1 no_output_layer = false """ @@ -44,7 +45,7 @@ DEFAULT_MULTI_TEXTCAT_MODEL = Config().from_str(multi_label_default_config)["mod multi_label_bow_config = """ [model] -@architectures = "spacy.TextCatBOW.v2" +@architectures = "spacy.TextCatBOW.v3" exclusive_classes = false ngram_size = 1 no_output_layer = false @@ -52,8 +53,12 @@ no_output_layer = false multi_label_cnn_config = """ [model] -@architectures = "spacy.TextCatCNN.v2" +@architectures = "spacy.TextCatReduce.v1" exclusive_classes = false +use_reduce_first = false +use_reduce_last = false +use_reduce_max = false +use_reduce_mean = true [model.tok2vec] @architectures = "spacy.HashEmbedCNN.v2" diff --git a/spacy/pipeline/trainable_pipe.pyx b/spacy/pipeline/trainable_pipe.pyx index 42394c907..36a72e8d8 100644 --- a/spacy/pipeline/trainable_pipe.pyx +++ b/spacy/pipeline/trainable_pipe.pyx @@ -1,4 +1,4 @@ -# cython: infer_types=True, profile=True, binding=True +# cython: infer_types=True, binding=True from typing import Callable, Dict, Iterable, Iterator, Optional, Tuple import srsly diff --git a/spacy/pipeline/transition_parser.pyx b/spacy/pipeline/transition_parser.pyx index 64de847b8..c5a9f4e8e 100644 --- a/spacy/pipeline/transition_parser.pyx +++ b/spacy/pipeline/transition_parser.pyx @@ -1,4 +1,5 @@ # cython: infer_types=True, cdivision=True, boundscheck=False, binding=True +# cython: profile=False from __future__ import print_function from typing import Dict, Iterable, List, Optional, Tuple diff --git a/spacy/schemas.py b/spacy/schemas.py index f56625b54..e6fdf51e4 100644 --- a/spacy/schemas.py +++ b/spacy/schemas.py @@ -17,19 +17,34 @@ from typing import ( Union, ) -from pydantic import ( - BaseModel, - ConstrainedStr, - Field, - StrictBool, - StrictFloat, - StrictInt, - StrictStr, - ValidationError, - create_model, - validator, -) -from pydantic.main import ModelMetaclass +try: + from pydantic.v1 import ( + BaseModel, + ConstrainedStr, + Field, + StrictBool, + StrictFloat, + StrictInt, + StrictStr, + ValidationError, + create_model, + validator, + ) + from pydantic.v1.main import ModelMetaclass +except ImportError: + from pydantic import ( # type: ignore + BaseModel, + ConstrainedStr, + Field, + StrictBool, + StrictFloat, + StrictInt, + StrictStr, + ValidationError, + create_model, + validator, + ) + from pydantic.main import 
ModelMetaclass # type: ignore from thinc.api import ConfigValidationError, Model, Optimizer from thinc.config import Promise @@ -397,6 +412,7 @@ class ConfigSchemaNlp(BaseModel): after_creation: Optional[Callable[["Language"], "Language"]] = Field(..., title="Optional callback to modify nlp object after creation and before the pipeline is constructed") after_pipeline_creation: Optional[Callable[["Language"], "Language"]] = Field(..., title="Optional callback to modify nlp object after the pipeline is constructed") batch_size: Optional[int] = Field(..., title="Default batch size") + vectors: Callable = Field(..., title="Vectors implementation") # fmt: on class Config: @@ -488,66 +504,6 @@ CONFIG_SCHEMAS = { "distillation": ConfigSchemaDistill, } - -# Project config Schema - - -class ProjectConfigAssetGitItem(BaseModel): - # fmt: off - repo: StrictStr = Field(..., title="URL of Git repo to download from") - path: StrictStr = Field(..., title="File path or sub-directory to download (used for sparse checkout)") - branch: StrictStr = Field("master", title="Branch to clone from") - # fmt: on - - -class ProjectConfigAssetURL(BaseModel): - # fmt: off - dest: StrictStr = Field(..., title="Destination of downloaded asset") - url: Optional[StrictStr] = Field(None, title="URL of asset") - checksum: Optional[str] = Field(None, title="MD5 hash of file", regex=r"([a-fA-F\d]{32})") - description: StrictStr = Field("", title="Description of asset") - # fmt: on - - -class ProjectConfigAssetGit(BaseModel): - # fmt: off - git: ProjectConfigAssetGitItem = Field(..., title="Git repo information") - checksum: Optional[str] = Field(None, title="MD5 hash of file", regex=r"([a-fA-F\d]{32})") - description: Optional[StrictStr] = Field(None, title="Description of asset") - # fmt: on - - -class ProjectConfigCommand(BaseModel): - # fmt: off - name: StrictStr = Field(..., title="Name of command") - help: Optional[StrictStr] = Field(None, title="Command description") - script: List[StrictStr] = Field([], title="List of CLI commands to run, in order") - deps: List[StrictStr] = Field([], title="File dependencies required by this command") - outputs: List[StrictStr] = Field([], title="Outputs produced by this command") - outputs_no_cache: List[StrictStr] = Field([], title="Outputs not tracked by DVC (DVC only)") - no_skip: bool = Field(False, title="Never skip this command, even if nothing changed") - # fmt: on - - class Config: - title = "A single named command specified in a project config" - extra = "forbid" - - -class ProjectConfigSchema(BaseModel): - # fmt: off - vars: Dict[StrictStr, Any] = Field({}, title="Optional variables to substitute in commands") - env: Dict[StrictStr, Any] = Field({}, title="Optional variable names to substitute in commands, mapped to environment variable names") - assets: List[Union[ProjectConfigAssetURL, ProjectConfigAssetGit]] = Field([], title="Data assets") - workflows: Dict[StrictStr, List[StrictStr]] = Field({}, title="Named workflows, mapped to list of project commands to run in order") - commands: List[ProjectConfigCommand] = Field([], title="Project command shortucts") - title: Optional[str] = Field(None, title="Project title") - spacy_version: Optional[StrictStr] = Field(None, title="spaCy version range that the project is compatible with") - # fmt: on - - class Config: - title = "Schema for project configuration file" - - # Recommendations for init config workflows diff --git a/spacy/scorer.py b/spacy/scorer.py index 9cfb186ad..b590f8633 100644 --- a/spacy/scorer.py +++ 
b/spacy/scorer.py @@ -802,6 +802,140 @@ def get_ner_prf(examples: Iterable[Example], **kwargs) -> Dict[str, Any]: } +# The following implementation of trapezoid() is adapted from SciPy, +# which is distributed under the New BSD License. +# Copyright (c) 2001-2002 Enthought, Inc. 2003-2023, SciPy Developers. +# See licenses/3rd_party_licenses.txt +def trapezoid(y, x=None, dx=1.0, axis=-1): + r""" + Integrate along the given axis using the composite trapezoidal rule. + + If `x` is provided, the integration happens in sequence along its + elements - they are not sorted. + + Integrate `y` (`x`) along each 1d slice on the given axis, compute + :math:`\int y(x) dx`. + When `x` is specified, this integrates along the parametric curve, + computing :math:`\int_t y(t) dt = + \int_t y(t) \left.\frac{dx}{dt}\right|_{x=x(t)} dt`. + + Parameters + ---------- + y : array_like + Input array to integrate. + x : array_like, optional + The sample points corresponding to the `y` values. If `x` is None, + the sample points are assumed to be evenly spaced `dx` apart. The + default is None. + dx : scalar, optional + The spacing between sample points when `x` is None. The default is 1. + axis : int, optional + The axis along which to integrate. + + Returns + ------- + trapezoid : float or ndarray + Definite integral of `y` = n-dimensional array as approximated along + a single axis by the trapezoidal rule. If `y` is a 1-dimensional array, + then the result is a float. If `n` is greater than 1, then the result + is an `n`-1 dimensional array. + + See Also + -------- + cumulative_trapezoid, simpson, romb + + Notes + ----- + Image [2]_ illustrates trapezoidal rule -- y-axis locations of points + will be taken from `y` array, by default x-axis distances between + points will be 1.0, alternatively they can be provided with `x` array + or with `dx` scalar. Return value will be equal to combined area under + the red lines. + + References + ---------- + .. [1] Wikipedia page: https://en.wikipedia.org/wiki/Trapezoidal_rule + + .. [2] Illustration image: + https://en.wikipedia.org/wiki/File:Composite_trapezoidal_rule_illustration.png + + Examples + -------- + Use the trapezoidal rule on evenly spaced points: + + >>> import numpy as np + >>> from scipy import integrate + >>> integrate.trapezoid([1, 2, 3]) + 4.0 + + The spacing between sample points can be selected by either the + ``x`` or ``dx`` arguments: + + >>> integrate.trapezoid([1, 2, 3], x=[4, 6, 8]) + 8.0 + >>> integrate.trapezoid([1, 2, 3], dx=2) + 8.0 + + Using a decreasing ``x`` corresponds to integrating in reverse: + + >>> integrate.trapezoid([1, 2, 3], x=[8, 6, 4]) + -8.0 + + More generally ``x`` is used to integrate along a parametric curve. 
We can + estimate the integral :math:`\int_0^1 x^2 = 1/3` using: + + >>> x = np.linspace(0, 1, num=50) + >>> y = x**2 + >>> integrate.trapezoid(y, x) + 0.33340274885464394 + + Or estimate the area of a circle, noting we repeat the sample which closes + the curve: + + >>> theta = np.linspace(0, 2 * np.pi, num=1000, endpoint=True) + >>> integrate.trapezoid(np.cos(theta), x=np.sin(theta)) + 3.141571941375841 + + ``trapezoid`` can be applied along a specified axis to do multiple + computations in one call: + + >>> a = np.arange(6).reshape(2, 3) + >>> a + array([[0, 1, 2], + [3, 4, 5]]) + >>> integrate.trapezoid(a, axis=0) + array([1.5, 2.5, 3.5]) + >>> integrate.trapezoid(a, axis=1) + array([2., 8.]) + """ + y = np.asanyarray(y) + if x is None: + d = dx + else: + x = np.asanyarray(x) + if x.ndim == 1: + d = np.diff(x) + # reshape to correct shape + shape = [1] * y.ndim + shape[axis] = d.shape[0] + d = d.reshape(shape) + else: + d = np.diff(x, axis=axis) + nd = y.ndim + slice1 = [slice(None)] * nd + slice2 = [slice(None)] * nd + slice1[axis] = slice(1, None) + slice2[axis] = slice(None, -1) + try: + ret = (d * (y[tuple(slice1)] + y[tuple(slice2)]) / 2.0).sum(axis) + except ValueError: + # Operations didn't work, cast to ndarray + d = np.asarray(d) + y = np.asarray(y) + ret = np.add.reduce(d * (y[tuple(slice1)] + y[tuple(slice2)]) / 2.0, axis) + return ret + + # The following implementation of roc_auc_score() is adapted from # scikit-learn, which is distributed under the New BSD License. # Copyright (c) 2007–2019 The scikit-learn developers. @@ -1024,9 +1158,9 @@ def _auc(x, y): else: raise ValueError(Errors.E164.format(x=x)) - area = direction * np.trapz(y, x) + area = direction * trapezoid(y, x) if isinstance(area, np.memmap): - # Reductions such as .sum used internally in np.trapz do not return a + # Reductions such as .sum used internally in trapezoid do not return a # scalar by default for numpy.memmap instances contrary to # regular numpy.ndarray instances. 
area = area.dtype.type(area) diff --git a/spacy/strings.pyx b/spacy/strings.pyx index 62ab9c20d..cfa6c5a9a 100644 --- a/spacy/strings.pyx +++ b/spacy/strings.pyx @@ -1,4 +1,5 @@ # cython: infer_types=True +# cython: profile=False from typing import Iterable, Iterator, List, Optional, Tuple, Union from libc.stdint cimport uint32_t diff --git a/spacy/symbols.pyx b/spacy/symbols.pyx index 98c517aad..d2a8a4289 100644 --- a/spacy/symbols.pyx +++ b/spacy/symbols.pyx @@ -1,4 +1,5 @@ # cython: optimize.unpack_method_calls=False +# cython: profile=False IDS = { "": NIL, "IS_ALPHA": IS_ALPHA, diff --git a/spacy/tests/conftest.py b/spacy/tests/conftest.py index 3ac382022..28551f9ee 100644 --- a/spacy/tests/conftest.py +++ b/spacy/tests/conftest.py @@ -194,6 +194,11 @@ def fi_tokenizer(): return get_lang_class("fi")().tokenizer +@pytest.fixture(scope="session") +def fo_tokenizer(): + return get_lang_class("fo")().tokenizer + + @pytest.fixture(scope="session") def fr_tokenizer(): return get_lang_class("fr")().tokenizer @@ -363,6 +368,11 @@ def nl_tokenizer(): return get_lang_class("nl")().tokenizer +@pytest.fixture(scope="session") +def nn_tokenizer(): + return get_lang_class("nn")().tokenizer + + @pytest.fixture(scope="session") def pl_tokenizer(): return get_lang_class("pl")().tokenizer diff --git a/spacy/tests/doc/test_span.py b/spacy/tests/doc/test_span.py index 1006d67f2..d477d38eb 100644 --- a/spacy/tests/doc/test_span.py +++ b/spacy/tests/doc/test_span.py @@ -783,3 +783,12 @@ def test_for_no_ent_sents(): sents = list(doc.ents[0].sents) assert len(sents) == 1 assert str(sents[0]) == str(doc.ents[0].sent) == "ENTITY" + + +def test_span_api_richcmp_other(en_tokenizer): + doc1 = en_tokenizer("a b") + doc2 = en_tokenizer("b c") + assert not doc1[1:2] == doc1[1] + assert not doc1[1:2] == doc2[0] + assert not doc1[1:2] == doc2[0:1] + assert not doc1[0:1] == doc2 diff --git a/spacy/tests/doc/test_token_api.py b/spacy/tests/doc/test_token_api.py index 782dfd774..c10221e65 100644 --- a/spacy/tests/doc/test_token_api.py +++ b/spacy/tests/doc/test_token_api.py @@ -294,3 +294,12 @@ def test_missing_head_dep(en_vocab): assert aligned_heads[0] == ref_heads[0] assert aligned_deps[5] == ref_deps[5] assert aligned_heads[5] == ref_heads[5] + + +def test_token_api_richcmp_other(en_tokenizer): + doc1 = en_tokenizer("a b") + doc2 = en_tokenizer("b c") + assert not doc1[1] == doc1[0:1] + assert not doc1[1] == doc2[1:2] + assert not doc1[1] == doc2[0] + assert not doc1[0] == doc2 diff --git a/spacy/tests/lang/fo/__init__.py b/spacy/tests/lang/fo/__init__.py new file mode 100644 index 000000000..e69de29bb diff --git a/spacy/tests/lang/fo/test_tokenizer.py b/spacy/tests/lang/fo/test_tokenizer.py new file mode 100644 index 000000000..e61a62be5 --- /dev/null +++ b/spacy/tests/lang/fo/test_tokenizer.py @@ -0,0 +1,26 @@ +import pytest + +# examples taken from Basic LAnguage Resource Kit 1.0 for Faroese (https://maltokni.fo/en/resources) licensed with CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/) +# fmt: off +FO_TOKEN_EXCEPTION_TESTS = [ + ( + "Eftir løgtingslóg um samsýning og eftirløn landsstýrismanna v.m., skulu løgmaður og landsstýrismenn vanliga siga frá sær størv í almennari tænastu ella privatum virkjum, samtøkum ella stovnum. 
", + [ + "Eftir", "løgtingslóg", "um", "samsýning", "og", "eftirløn", "landsstýrismanna", "v.m.", ",", "skulu", "løgmaður", "og", "landsstýrismenn", "vanliga", "siga", "frá", "sær", "størv", "í", "almennari", "tænastu", "ella", "privatum", "virkjum", ",", "samtøkum", "ella", "stovnum", ".", + ], + ), + ( + "Sambandsflokkurin gongur aftur við 2,7 prosentum í mun til valið í 1994, tá flokkurin fekk undirtøku frá 23,4 prosent av veljarunum.", + [ + "Sambandsflokkurin", "gongur", "aftur", "við", "2,7", "prosentum", "í", "mun", "til", "valið", "í", "1994", ",", "tá", "flokkurin", "fekk", "undirtøku", "frá", "23,4", "prosent", "av", "veljarunum", ".", + ], + ), +] +# fmt: on + + +@pytest.mark.parametrize("text,expected_tokens", FO_TOKEN_EXCEPTION_TESTS) +def test_fo_tokenizer_handles_exception_cases(fo_tokenizer, text, expected_tokens): + tokens = fo_tokenizer(text) + token_list = [token.text for token in tokens if not token.is_space] + assert expected_tokens == token_list diff --git a/spacy/tests/lang/nn/__init__.py b/spacy/tests/lang/nn/__init__.py new file mode 100644 index 000000000..e69de29bb diff --git a/spacy/tests/lang/nn/test_tokenizer.py b/spacy/tests/lang/nn/test_tokenizer.py new file mode 100644 index 000000000..74a6937bd --- /dev/null +++ b/spacy/tests/lang/nn/test_tokenizer.py @@ -0,0 +1,38 @@ +import pytest + +# examples taken from Omsetjingsminne frå Nynorsk pressekontor 2022 (https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-80/) +# fmt: off +NN_TOKEN_EXCEPTION_TESTS = [ + ( + "Målet til direktoratet er at alle skal bli tilbydd jobb i politiet så raskt som mogleg i 2014.", + [ + "Målet", "til", "direktoratet", "er", "at", "alle", "skal", "bli", "tilbydd", "jobb", "i", "politiet", "så", "raskt", "som", "mogleg", "i", "2014", ".", + ], + ), + ( + "Han ønskjer ikkje at staten skal vere med på å finansiere slik undervisning, men dette er rektor på skulen ueinig i.", + [ + "Han", "ønskjer", "ikkje", "at", "staten", "skal", "vere", "med", "på", "å", "finansiere", "slik", "undervisning", ",", "men", "dette", "er", "rektor", "på", "skulen", "ueinig", "i", ".", + ], + ), + ( + "Ifølgje China Daily vart det 8.848 meter høge fjellet flytta 3 centimeter sørvestover under jordskjelvet, som vart målt til 7,8.", + [ + "Ifølgje", "China", "Daily", "vart", "det", "8.848", "meter", "høge", "fjellet", "flytta", "3", "centimeter", "sørvestover", "under", "jordskjelvet", ",", "som", "vart", "målt", "til", "7,8", ".", + ], + ), + ( + "Brukssesongen er frå nov. 
til mai, med ein topp i mars.", + [ + "Brukssesongen", "er", "frå", "nov.", "til", "mai", ",", "med", "ein", "topp", "i", "mars", ".", + ], + ), +] +# fmt: on + + +@pytest.mark.parametrize("text,expected_tokens", NN_TOKEN_EXCEPTION_TESTS) +def test_nn_tokenizer_handles_exception_cases(nn_tokenizer, text, expected_tokens): + tokens = nn_tokenizer(text) + token_list = [token.text for token in tokens if not token.is_space] + assert expected_tokens == token_list diff --git a/spacy/tests/matcher/test_dependency_matcher.py b/spacy/tests/matcher/test_dependency_matcher.py index 44b3bb26b..be33f90cf 100644 --- a/spacy/tests/matcher/test_dependency_matcher.py +++ b/spacy/tests/matcher/test_dependency_matcher.py @@ -216,6 +216,11 @@ def test_dependency_matcher_pattern_validation(en_vocab): pattern2 = copy.deepcopy(pattern) pattern2[1]["RIGHT_ID"] = "fox" matcher.add("FOUNDED", [pattern2]) + # invalid key + with pytest.warns(UserWarning): + pattern2 = copy.deepcopy(pattern) + pattern2[1]["FOO"] = "BAR" + matcher.add("FOUNDED", [pattern2]) def test_dependency_matcher_callback(en_vocab, doc): diff --git a/spacy/tests/package/test_requirements.py b/spacy/tests/package/test_requirements.py index 99027ddeb..704d4b90b 100644 --- a/spacy/tests/package/test_requirements.py +++ b/spacy/tests/package/test_requirements.py @@ -5,6 +5,7 @@ from pathlib import Path def test_build_dependencies(): # Check that library requirements are pinned exactly the same across different setup files. libs_ignore_requirements = [ + "numpy", "pytest", "pytest-timeout", "mock", @@ -22,6 +23,7 @@ def test_build_dependencies(): ] # ignore language-specific packages that shouldn't be installed by all libs_ignore_setup = [ + "numpy", "fugashi", "mecab-ko", "pythainlp", diff --git a/spacy/tests/pipeline/test_initialize.py b/spacy/tests/pipeline/test_initialize.py index 6dd4114f1..9854b391e 100644 --- a/spacy/tests/pipeline/test_initialize.py +++ b/spacy/tests/pipeline/test_initialize.py @@ -1,5 +1,10 @@ import pytest -from pydantic import StrictBool + +try: + from pydantic.v1 import StrictBool +except ImportError: + from pydantic import StrictBool # type: ignore + from thinc.api import ConfigValidationError from spacy.lang.en import English diff --git a/spacy/tests/pipeline/test_pipe_factories.py b/spacy/tests/pipeline/test_pipe_factories.py index 0f1454b55..c45dccb06 100644 --- a/spacy/tests/pipeline/test_pipe_factories.py +++ b/spacy/tests/pipeline/test_pipe_factories.py @@ -1,5 +1,10 @@ import pytest -from pydantic import StrictInt, StrictStr + +try: + from pydantic.v1 import StrictInt, StrictStr +except ImportError: + from pydantic import StrictInt, StrictStr # type: ignore + from thinc.api import ConfigValidationError, Linear, Model import spacy @@ -198,7 +203,7 @@ def test_pipe_class_component_model(): "@architectures": "spacy.TextCatEnsemble.v2", "tok2vec": DEFAULT_TOK2VEC_MODEL, "linear_model": { - "@architectures": "spacy.TextCatBOW.v2", + "@architectures": "spacy.TextCatBOW.v3", "exclusive_classes": False, "ngram_size": 1, "no_output_layer": False, diff --git a/spacy/tests/pipeline/test_textcat.py b/spacy/tests/pipeline/test_textcat.py index 007789661..35cec61f1 100644 --- a/spacy/tests/pipeline/test_textcat.py +++ b/spacy/tests/pipeline/test_textcat.py @@ -419,7 +419,7 @@ def test_implicit_label(name, get_examples): @pytest.mark.parametrize( "name,textcat_config", [ - # BOW + # BOW V1 ("textcat", {"@architectures": "spacy.TextCatBOW.v1", "exclusive_classes": True, "no_output_layer": False, "ngram_size": 3}), ("textcat", 
{"@architectures": "spacy.TextCatBOW.v1", "exclusive_classes": True, "no_output_layer": True, "ngram_size": 3}), ("textcat_multilabel", {"@architectures": "spacy.TextCatBOW.v1", "exclusive_classes": False, "no_output_layer": False, "ngram_size": 3}), @@ -456,14 +456,14 @@ def test_no_resize(name, textcat_config): @pytest.mark.parametrize( "name,textcat_config", [ - # BOW - ("textcat", {"@architectures": "spacy.TextCatBOW.v2", "exclusive_classes": True, "no_output_layer": False, "ngram_size": 3}), - ("textcat", {"@architectures": "spacy.TextCatBOW.v2", "exclusive_classes": True, "no_output_layer": True, "ngram_size": 3}), - ("textcat_multilabel", {"@architectures": "spacy.TextCatBOW.v2", "exclusive_classes": False, "no_output_layer": False, "ngram_size": 3}), - ("textcat_multilabel", {"@architectures": "spacy.TextCatBOW.v2", "exclusive_classes": False, "no_output_layer": True, "ngram_size": 3}), + # BOW V3 + ("textcat", {"@architectures": "spacy.TextCatBOW.v3", "exclusive_classes": True, "no_output_layer": False, "ngram_size": 3}), + ("textcat", {"@architectures": "spacy.TextCatBOW.v3", "exclusive_classes": True, "no_output_layer": True, "ngram_size": 3}), + ("textcat_multilabel", {"@architectures": "spacy.TextCatBOW.v3", "exclusive_classes": False, "no_output_layer": False, "ngram_size": 3}), + ("textcat_multilabel", {"@architectures": "spacy.TextCatBOW.v3", "exclusive_classes": False, "no_output_layer": True, "ngram_size": 3}), # CNN - ("textcat", {"@architectures": "spacy.TextCatCNN.v2", "tok2vec": DEFAULT_TOK2VEC_MODEL, "exclusive_classes": True}), - ("textcat_multilabel", {"@architectures": "spacy.TextCatCNN.v2", "tok2vec": DEFAULT_TOK2VEC_MODEL, "exclusive_classes": False}), + ("textcat", {"@architectures": "spacy.TextCatReduce.v1", "tok2vec": DEFAULT_TOK2VEC_MODEL, "exclusive_classes": True, "use_reduce_first": True, "use_reduce_last": True, "use_reduce_max": True, "use_reduce_mean": True}), + ("textcat_multilabel", {"@architectures": "spacy.TextCatReduce.v1", "tok2vec": DEFAULT_TOK2VEC_MODEL, "exclusive_classes": False, "use_reduce_first": True, "use_reduce_last": True, "use_reduce_max": True, "use_reduce_mean": True}), ], ) # fmt: on @@ -485,14 +485,14 @@ def test_resize(name, textcat_config): @pytest.mark.parametrize( "name,textcat_config", [ - # BOW - ("textcat", {"@architectures": "spacy.TextCatBOW.v2", "exclusive_classes": True, "no_output_layer": False, "ngram_size": 3}), - ("textcat", {"@architectures": "spacy.TextCatBOW.v2", "exclusive_classes": True, "no_output_layer": True, "ngram_size": 3}), - ("textcat_multilabel", {"@architectures": "spacy.TextCatBOW.v2", "exclusive_classes": False, "no_output_layer": False, "ngram_size": 3}), - ("textcat_multilabel", {"@architectures": "spacy.TextCatBOW.v2", "exclusive_classes": False, "no_output_layer": True, "ngram_size": 3}), - # CNN - ("textcat", {"@architectures": "spacy.TextCatCNN.v2", "tok2vec": DEFAULT_TOK2VEC_MODEL, "exclusive_classes": True}), - ("textcat_multilabel", {"@architectures": "spacy.TextCatCNN.v2", "tok2vec": DEFAULT_TOK2VEC_MODEL, "exclusive_classes": False}), + # BOW v3 + ("textcat", {"@architectures": "spacy.TextCatBOW.v3", "exclusive_classes": True, "no_output_layer": False, "ngram_size": 3}), + ("textcat", {"@architectures": "spacy.TextCatBOW.v3", "exclusive_classes": True, "no_output_layer": True, "ngram_size": 3}), + ("textcat_multilabel", {"@architectures": "spacy.TextCatBOW.v3", "exclusive_classes": False, "no_output_layer": False, "ngram_size": 3}), + ("textcat_multilabel", {"@architectures": 
"spacy.TextCatBOW.v3", "exclusive_classes": False, "no_output_layer": True, "ngram_size": 3}), + # REDUCE + ("textcat", {"@architectures": "spacy.TextCatReduce.v1", "tok2vec": DEFAULT_TOK2VEC_MODEL, "exclusive_classes": True, "use_reduce_first": True, "use_reduce_last": True, "use_reduce_max": True, "use_reduce_mean": True}), + ("textcat_multilabel", {"@architectures": "spacy.TextCatReduce.v1", "tok2vec": DEFAULT_TOK2VEC_MODEL, "exclusive_classes": False, "use_reduce_first": True, "use_reduce_last": True, "use_reduce_max": True, "use_reduce_mean": True}), ], ) # fmt: on @@ -704,12 +704,23 @@ def test_overfitting_IO_multi(): ("textcat", TRAIN_DATA_SINGLE_LABEL, {"@architectures": "spacy.TextCatBOW.v2", "exclusive_classes": True, "ngram_size": 4, "no_output_layer": False}), ("textcat_multilabel", TRAIN_DATA_MULTI_LABEL, {"@architectures": "spacy.TextCatBOW.v2", "exclusive_classes": False, "ngram_size": 3, "no_output_layer": True}), ("textcat", TRAIN_DATA_SINGLE_LABEL, {"@architectures": "spacy.TextCatBOW.v2", "exclusive_classes": True, "ngram_size": 2, "no_output_layer": True}), + # BOW V3 + ("textcat_multilabel", TRAIN_DATA_MULTI_LABEL, {"@architectures": "spacy.TextCatBOW.v3", "exclusive_classes": False, "ngram_size": 1, "no_output_layer": False}), + ("textcat", TRAIN_DATA_SINGLE_LABEL, {"@architectures": "spacy.TextCatBOW.v3", "exclusive_classes": True, "ngram_size": 4, "no_output_layer": False}), + ("textcat_multilabel", TRAIN_DATA_MULTI_LABEL, {"@architectures": "spacy.TextCatBOW.v3", "exclusive_classes": False, "ngram_size": 3, "no_output_layer": True}), + ("textcat", TRAIN_DATA_SINGLE_LABEL, {"@architectures": "spacy.TextCatBOW.v3", "exclusive_classes": True, "ngram_size": 2, "no_output_layer": True}), # ENSEMBLE V2 - ("textcat_multilabel", TRAIN_DATA_MULTI_LABEL, {"@architectures": "spacy.TextCatEnsemble.v2", "tok2vec": DEFAULT_TOK2VEC_MODEL, "linear_model": {"@architectures": "spacy.TextCatBOW.v2", "exclusive_classes": False, "ngram_size": 1, "no_output_layer": False}}), - ("textcat", TRAIN_DATA_SINGLE_LABEL, {"@architectures": "spacy.TextCatEnsemble.v2", "tok2vec": DEFAULT_TOK2VEC_MODEL, "linear_model": {"@architectures": "spacy.TextCatBOW.v2", "exclusive_classes": True, "ngram_size": 5, "no_output_layer": False}}), - # CNN V2 + ("textcat_multilabel", TRAIN_DATA_MULTI_LABEL, {"@architectures": "spacy.TextCatEnsemble.v2", "tok2vec": DEFAULT_TOK2VEC_MODEL, "linear_model": {"@architectures": "spacy.TextCatBOW.v3", "exclusive_classes": False, "ngram_size": 1, "no_output_layer": False}}), + ("textcat", TRAIN_DATA_SINGLE_LABEL, {"@architectures": "spacy.TextCatEnsemble.v2", "tok2vec": DEFAULT_TOK2VEC_MODEL, "linear_model": {"@architectures": "spacy.TextCatBOW.v3", "exclusive_classes": True, "ngram_size": 5, "no_output_layer": False}}), + # CNN V2 (legacy) ("textcat", TRAIN_DATA_SINGLE_LABEL, {"@architectures": "spacy.TextCatCNN.v2", "tok2vec": DEFAULT_TOK2VEC_MODEL, "exclusive_classes": True}), ("textcat_multilabel", TRAIN_DATA_MULTI_LABEL, {"@architectures": "spacy.TextCatCNN.v2", "tok2vec": DEFAULT_TOK2VEC_MODEL, "exclusive_classes": False}), + # PARAMETRIC ATTENTION V1 + ("textcat", TRAIN_DATA_SINGLE_LABEL, {"@architectures": "spacy.TextCatParametricAttention.v1", "tok2vec": DEFAULT_TOK2VEC_MODEL, "exclusive_classes": True}), + ("textcat_multilabel", TRAIN_DATA_MULTI_LABEL, {"@architectures": "spacy.TextCatParametricAttention.v1", "tok2vec": DEFAULT_TOK2VEC_MODEL, "exclusive_classes": False}), + # REDUCE V1 + ("textcat", TRAIN_DATA_SINGLE_LABEL, {"@architectures": 
"spacy.TextCatReduce.v1", "tok2vec": DEFAULT_TOK2VEC_MODEL, "exclusive_classes": True, "use_reduce_first": True, "use_reduce_last": True, "use_reduce_max": True, "use_reduce_mean": True}), + ("textcat_multilabel", TRAIN_DATA_MULTI_LABEL, {"@architectures": "spacy.TextCatReduce.v1", "tok2vec": DEFAULT_TOK2VEC_MODEL, "exclusive_classes": False, "use_reduce_first": True, "use_reduce_last": True, "use_reduce_max": True, "use_reduce_mean": True}), ], ) # fmt: on diff --git a/spacy/tests/test_cli.py b/spacy/tests/test_cli.py index 451ddd298..ff53ed1e1 100644 --- a/spacy/tests/test_cli.py +++ b/spacy/tests/test_cli.py @@ -1,31 +1,19 @@ import math import os -import time from collections import Counter from pathlib import Path from typing import Any, Dict, List, Tuple -import numpy import pytest import srsly from click import NoSuchOption from packaging.specifiers import SpecifierSet -from thinc.api import Config, ConfigValidationError +from thinc.api import Config import spacy from spacy import about from spacy.cli import info -from spacy.cli._util import ( - download_file, - is_subpath_of, - load_project_config, - parse_config_overrides, - string_to_list, - substitute_project_variables, - upload_file, - validate_project_commands, - walk_directory, -) +from spacy.cli._util import parse_config_overrides, string_to_list, walk_directory from spacy.cli.apply import apply from spacy.cli.debug_data import ( _compile_gold, @@ -43,13 +31,11 @@ from spacy.cli.find_threshold import find_threshold from spacy.cli.init_config import RECOMMENDATIONS, fill_config, init_config from spacy.cli.init_pipeline import _init_labels from spacy.cli.package import _is_permitted_package_name, get_third_party_dependencies -from spacy.cli.project.remote_storage import RemoteStorage -from spacy.cli.project.run import _check_requirements from spacy.cli.validate import get_model_pkgs from spacy.lang.en import English from spacy.lang.nl import Dutch from spacy.language import Language -from spacy.schemas import ProjectConfigSchema, RecommendationSchema, validate +from spacy.schemas import RecommendationSchema from spacy.tokens import Doc, DocBin from spacy.tokens.span import Span from spacy.training import Example, docs_to_json, offsets_to_biluo_tags @@ -134,25 +120,6 @@ def test_issue7055(): assert "model" in filled_cfg["components"]["ner"] -@pytest.mark.issue(11235) -def test_issue11235(): - """ - Test that the cli handles interpolation in the directory names correctly when loading project config. 
- """ - lang_var = "en" - variables = {"lang": lang_var} - commands = [{"name": "x", "script": ["hello ${vars.lang}"]}] - directories = ["cfg", "${vars.lang}_model"] - project = {"commands": commands, "vars": variables, "directories": directories} - with make_tempdir() as d: - srsly.write_yaml(d / "project.yml", project) - cfg = load_project_config(d) - # Check that the directories are interpolated and created correctly - assert os.path.exists(d / "cfg") - assert os.path.exists(d / f"{lang_var}_model") - assert cfg["commands"][0]["script"][0] == f"hello {lang_var}" - - @pytest.mark.issue(12566) @pytest.mark.parametrize( "factory,output_file", @@ -443,136 +410,6 @@ def test_cli_converters_conll_ner_to_docs(): assert ent.text in ["New York City", "London"] -def test_project_config_validation_full(): - config = { - "vars": {"some_var": 20}, - "directories": ["assets", "configs", "corpus", "scripts", "training"], - "assets": [ - { - "dest": "x", - "extra": True, - "url": "https://example.com", - "checksum": "63373dd656daa1fd3043ce166a59474c", - }, - { - "dest": "y", - "git": { - "repo": "https://github.com/example/repo", - "branch": "develop", - "path": "y", - }, - }, - { - "dest": "z", - "extra": False, - "url": "https://example.com", - "checksum": "63373dd656daa1fd3043ce166a59474c", - }, - ], - "commands": [ - { - "name": "train", - "help": "Train a model", - "script": ["python -m spacy train config.cfg -o training"], - "deps": ["config.cfg", "corpus/training.spcy"], - "outputs": ["training/model-best"], - }, - {"name": "test", "script": ["pytest", "custom.py"], "no_skip": True}, - ], - "workflows": {"all": ["train", "test"], "train": ["train"]}, - } - errors = validate(ProjectConfigSchema, config) - assert not errors - - -@pytest.mark.parametrize( - "config", - [ - {"commands": [{"name": "a"}, {"name": "a"}]}, - {"commands": [{"name": "a"}], "workflows": {"a": []}}, - {"commands": [{"name": "a"}], "workflows": {"b": ["c"]}}, - ], -) -def test_project_config_validation1(config): - with pytest.raises(SystemExit): - validate_project_commands(config) - - -@pytest.mark.parametrize( - "config,n_errors", - [ - ({"commands": {"a": []}}, 1), - ({"commands": [{"help": "..."}]}, 1), - ({"commands": [{"name": "a", "extra": "b"}]}, 1), - ({"commands": [{"extra": "b"}]}, 2), - ({"commands": [{"name": "a", "deps": [123]}]}, 1), - ], -) -def test_project_config_validation2(config, n_errors): - errors = validate(ProjectConfigSchema, config) - assert len(errors) == n_errors - - -@pytest.mark.parametrize( - "int_value", - [10, pytest.param("10", marks=pytest.mark.xfail)], -) -def test_project_config_interpolation(int_value): - variables = {"a": int_value, "b": {"c": "foo", "d": True}} - commands = [ - {"name": "x", "script": ["hello ${vars.a} ${vars.b.c}"]}, - {"name": "y", "script": ["${vars.b.c} ${vars.b.d}"]}, - ] - project = {"commands": commands, "vars": variables} - with make_tempdir() as d: - srsly.write_yaml(d / "project.yml", project) - cfg = load_project_config(d) - assert type(cfg) == dict - assert type(cfg["commands"]) == list - assert cfg["commands"][0]["script"][0] == "hello 10 foo" - assert cfg["commands"][1]["script"][0] == "foo true" - commands = [{"name": "x", "script": ["hello ${vars.a} ${vars.b.e}"]}] - project = {"commands": commands, "vars": variables} - with pytest.raises(ConfigValidationError): - substitute_project_variables(project) - - -@pytest.mark.parametrize( - "greeting", - [342, "everyone", "tout le monde", pytest.param("42", marks=pytest.mark.xfail)], -) -def 
test_project_config_interpolation_override(greeting): - variables = {"a": "world"} - commands = [ - {"name": "x", "script": ["hello ${vars.a}"]}, - ] - overrides = {"vars.a": greeting} - project = {"commands": commands, "vars": variables} - with make_tempdir() as d: - srsly.write_yaml(d / "project.yml", project) - cfg = load_project_config(d, overrides=overrides) - assert type(cfg) == dict - assert type(cfg["commands"]) == list - assert cfg["commands"][0]["script"][0] == f"hello {greeting}" - - -def test_project_config_interpolation_env(): - variables = {"a": 10} - env_var = "SPACY_TEST_FOO" - env_vars = {"foo": env_var} - commands = [{"name": "x", "script": ["hello ${vars.a} ${env.foo}"]}] - project = {"commands": commands, "vars": variables, "env": env_vars} - with make_tempdir() as d: - srsly.write_yaml(d / "project.yml", project) - cfg = load_project_config(d) - assert cfg["commands"][0]["script"][0] == "hello 10 " - os.environ[env_var] = "123" - with make_tempdir() as d: - srsly.write_yaml(d / "project.yml", project) - cfg = load_project_config(d) - assert cfg["commands"][0]["script"][0] == "hello 10 123" - - @pytest.mark.parametrize( "args,expected", [ @@ -782,21 +619,6 @@ def test_get_third_party_dependencies(): get_third_party_dependencies(nlp.config) -@pytest.mark.parametrize( - "parent,child,expected", - [ - ("/tmp", "/tmp", True), - ("/tmp", "/", False), - ("/tmp", "/tmp/subdir", True), - ("/tmp", "/tmpdir", False), - ("/tmp", "/tmp/subdir/..", True), - ("/tmp", "/tmp/..", False), - ], -) -def test_is_subpath_of(parent, child, expected): - assert is_subpath_of(parent, child) == expected - - @pytest.mark.slow @pytest.mark.parametrize( "factory_name,pipe_name", @@ -1042,67 +864,6 @@ def test_applycli_user_data(): assert result[0]._.ext == val -# TODO: remove this xfail after merging master into v4. The issue -# is that for local files, pathy started returning os.stat_result, -# which doesn't have a last_modified property. So, recency-sorting -# fails and the test fails. However, once we merge master into -# v4, we'll use weasel, which in turn uses cloudpathlib, which -# should resolve this issue. 
-@pytest.mark.xfail(reason="Recency sorting is broken on some platforms") -def test_local_remote_storage(): - with make_tempdir() as d: - filename = "a.txt" - - content_hashes = ("aaaa", "cccc", "bbbb") - for i, content_hash in enumerate(content_hashes): - # make sure that each subsequent file has a later timestamp - if i > 0: - time.sleep(1) - content = f"{content_hash} content" - loc_file = d / "root" / filename - if not loc_file.parent.exists(): - loc_file.parent.mkdir(parents=True) - with loc_file.open(mode="w") as file_: - file_.write(content) - - # push first version to remote storage - remote = RemoteStorage(d / "root", str(d / "remote")) - remote.push(filename, "aaaa", content_hash) - - # retrieve with full hashes - loc_file.unlink() - remote.pull(filename, command_hash="aaaa", content_hash=content_hash) - with loc_file.open(mode="r") as file_: - assert file_.read() == content - - # retrieve with command hash - loc_file.unlink() - remote.pull(filename, command_hash="aaaa") - with loc_file.open(mode="r") as file_: - assert file_.read() == content - - # retrieve with content hash - loc_file.unlink() - remote.pull(filename, content_hash=content_hash) - with loc_file.open(mode="r") as file_: - assert file_.read() == content - - # retrieve with no hashes - loc_file.unlink() - remote.pull(filename) - with loc_file.open(mode="r") as file_: - assert file_.read() == content - - -def test_local_remote_storage_pull_missing(): - # pulling from a non-existent remote pulls nothing gracefully - with make_tempdir() as d: - filename = "a.txt" - remote = RemoteStorage(d / "root", str(d / "remote")) - assert remote.pull(filename, command_hash="aaaa") is None - assert remote.pull(filename) is None - - def test_cli_find_threshold(capsys): def make_examples(nlp: Language) -> List[Example]: docs: List[Example] = [] @@ -1213,63 +974,6 @@ def test_cli_find_threshold(capsys): ) -@pytest.mark.filterwarnings("ignore::DeprecationWarning") -@pytest.mark.parametrize( - "reqs,output", - [ - [ - """ - spacy - - # comment - - thinc""", - (False, False), - ], - [ - """# comment - --some-flag - spacy""", - (False, False), - ], - [ - """# comment - --some-flag - spacy; python_version >= '3.6'""", - (False, False), - ], - [ - """# comment - spacyunknowndoesnotexist12345""", - (True, False), - ], - ], -) -def test_project_check_requirements(reqs, output): - import pkg_resources - - # excessive guard against unlikely package name - try: - pkg_resources.require("spacyunknowndoesnotexist12345") - except pkg_resources.DistributionNotFound: - assert output == _check_requirements([req.strip() for req in reqs.split("\n")]) - - -def test_upload_download_local_file(): - with make_tempdir() as d1, make_tempdir() as d2: - filename = "f.txt" - content = "content" - local_file = d1 / filename - remote_file = d2 / filename - with local_file.open(mode="w") as file_: - file_.write(content) - upload_file(local_file, remote_file) - local_file.unlink() - download_file(remote_file, local_file) - with local_file.open(mode="r") as file_: - assert file_.read() == content - - def test_walk_directory(): with make_tempdir() as d: files = [ @@ -1357,3 +1061,8 @@ def test_debug_data_trainable_lemmatizer_not_annotated(): data = _compile_gold(train_examples, ["trainable_lemmatizer"], nlp, True) assert data["no_lemma_annotations"] == 2 + + +def test_project_api_imports(): + from spacy.cli import project_run + from spacy.cli.project.run import project_run # noqa: F401, F811 diff --git a/spacy/tests/test_cli_app.py b/spacy/tests/test_cli_app.py index 
2424138d3..9031a2b11 100644 --- a/spacy/tests/test_cli_app.py +++ b/spacy/tests/test_cli_app.py @@ -9,7 +9,7 @@ from typer.testing import CliRunner import spacy from spacy.cli._util import app, get_git_version -from spacy.tokens import Doc, DocBin +from spacy.tokens import Doc, DocBin, Span from .util import make_tempdir, normalize_whitespace @@ -440,3 +440,196 @@ def test_project_push_pull(project_dir): result = CliRunner().invoke(app, ["project", "pull", remote, str(project_dir)]) assert result.exit_code == 0 assert test_file.is_file() + + +def test_find_function_valid(): + # example of architecture in main code base + function = "spacy.TextCatBOW.v3" + result = CliRunner().invoke(app, ["find-function", function, "-r", "architectures"]) + assert f"Found registered function '{function}'" in result.stdout + assert "textcat.py" in result.stdout + + result = CliRunner().invoke(app, ["find-function", function]) + assert f"Found registered function '{function}'" in result.stdout + assert "textcat.py" in result.stdout + + # example of architecture in spacy-legacy + function = "spacy.TextCatBOW.v1" + result = CliRunner().invoke(app, ["find-function", function]) + assert f"Found registered function '{function}'" in result.stdout + assert "spacy_legacy" in result.stdout + assert "textcat.py" in result.stdout + + +def test_find_function_invalid(): + # invalid registry + function = "spacy.TextCatBOW.v3" + registry = "foobar" + result = CliRunner().invoke( + app, ["find-function", function, "--registry", registry] + ) + assert f"Unknown function registry: '{registry}'" in result.stdout + + # invalid function + function = "spacy.TextCatBOW.v666" + result = CliRunner().invoke(app, ["find-function", function]) + assert f"Couldn't find registered function: '{function}'" in result.stdout + + +example_words_1 = ["I", "like", "cats"] +example_words_2 = ["I", "like", "dogs"] +example_lemmas_1 = ["I", "like", "cat"] +example_lemmas_2 = ["I", "like", "dog"] +example_tags = ["PRP", "VBP", "NNS"] +example_morphs = [ + "Case=Nom|Number=Sing|Person=1|PronType=Prs", + "Tense=Pres|VerbForm=Fin", + "Number=Plur", +] +example_deps = ["nsubj", "ROOT", "dobj"] +example_pos = ["PRON", "VERB", "NOUN"] +example_ents = ["O", "O", "I-ANIMAL"] +example_spans = [(2, 3, "ANIMAL")] + +TRAIN_EXAMPLE_1 = dict( + words=example_words_1, + lemmas=example_lemmas_1, + tags=example_tags, + morphs=example_morphs, + deps=example_deps, + heads=[1, 1, 1], + pos=example_pos, + ents=example_ents, + spans=example_spans, + cats={"CAT": 1.0, "DOG": 0.0}, +) +TRAIN_EXAMPLE_2 = dict( + words=example_words_2, + lemmas=example_lemmas_2, + tags=example_tags, + morphs=example_morphs, + deps=example_deps, + heads=[1, 1, 1], + pos=example_pos, + ents=example_ents, + spans=example_spans, + cats={"CAT": 0.0, "DOG": 1.0}, +) + + +@pytest.mark.slow +@pytest.mark.parametrize( + "component,examples", + [ + ("tagger", [TRAIN_EXAMPLE_1, TRAIN_EXAMPLE_2]), + ("morphologizer", [TRAIN_EXAMPLE_1, TRAIN_EXAMPLE_2]), + ("trainable_lemmatizer", [TRAIN_EXAMPLE_1, TRAIN_EXAMPLE_2]), + ("parser", [TRAIN_EXAMPLE_1] * 30), + ("ner", [TRAIN_EXAMPLE_1, TRAIN_EXAMPLE_2]), + ("spancat", [TRAIN_EXAMPLE_1, TRAIN_EXAMPLE_2]), + ("textcat", [TRAIN_EXAMPLE_1, TRAIN_EXAMPLE_2]), + ], +) +def test_init_config_trainable(component, examples, en_vocab): + if component == "textcat": + train_docs = [] + for example in examples: + doc = Doc(en_vocab, words=example["words"]) + doc.cats = example["cats"] + train_docs.append(doc) + elif component == "spancat": + train_docs = [] + for example 
in examples: + doc = Doc(en_vocab, words=example["words"]) + doc.spans["sc"] = [ + Span(doc, start, end, label) for start, end, label in example["spans"] + ] + train_docs.append(doc) + else: + train_docs = [] + for example in examples: + # cats, spans are not valid kwargs for instantiating a Doc + example = {k: v for k, v in example.items() if k not in ("cats", "spans")} + doc = Doc(en_vocab, **example) + train_docs.append(doc) + + with make_tempdir() as d_in: + train_bin = DocBin(docs=train_docs) + train_bin.to_disk(d_in / "train.spacy") + dev_bin = DocBin(docs=train_docs) + dev_bin.to_disk(d_in / "dev.spacy") + init_config_result = CliRunner().invoke( + app, + [ + "init", + "config", + f"{d_in}/config.cfg", + "--lang", + "en", + "--pipeline", + component, + ], + ) + assert init_config_result.exit_code == 0 + train_result = CliRunner().invoke( + app, + [ + "train", + f"{d_in}/config.cfg", + "--paths.train", + f"{d_in}/train.spacy", + "--paths.dev", + f"{d_in}/dev.spacy", + "--output", + f"{d_in}/model", + ], + ) + assert train_result.exit_code == 0 + assert Path(d_in / "model" / "model-last").exists() + + +@pytest.mark.slow +@pytest.mark.parametrize( + "component,examples", + [("tagger,parser,morphologizer", [TRAIN_EXAMPLE_1, TRAIN_EXAMPLE_2] * 15)], +) +def test_init_config_trainable_multiple(component, examples, en_vocab): + train_docs = [] + for example in examples: + example = {k: v for k, v in example.items() if k not in ("cats", "spans")} + doc = Doc(en_vocab, **example) + train_docs.append(doc) + + with make_tempdir() as d_in: + train_bin = DocBin(docs=train_docs) + train_bin.to_disk(d_in / "train.spacy") + dev_bin = DocBin(docs=train_docs) + dev_bin.to_disk(d_in / "dev.spacy") + init_config_result = CliRunner().invoke( + app, + [ + "init", + "config", + f"{d_in}/config.cfg", + "--lang", + "en", + "--pipeline", + component, + ], + ) + assert init_config_result.exit_code == 0 + train_result = CliRunner().invoke( + app, + [ + "train", + f"{d_in}/config.cfg", + "--paths.train", + f"{d_in}/train.spacy", + "--paths.dev", + f"{d_in}/dev.spacy", + "--output", + f"{d_in}/model", + ], + ) + assert train_result.exit_code == 0 + assert Path(d_in / "model" / "model-last").exists() diff --git a/spacy/tests/test_displacy.py b/spacy/tests/test_displacy.py index 1570f8d09..b83c7db07 100644 --- a/spacy/tests/test_displacy.py +++ b/spacy/tests/test_displacy.py @@ -2,7 +2,7 @@ import numpy import pytest from spacy import displacy -from spacy.displacy.render import DependencyRenderer, EntityRenderer +from spacy.displacy.render import DependencyRenderer, EntityRenderer, SpanRenderer from spacy.lang.en import English from spacy.lang.fa import Persian from spacy.tokens import Doc, Span @@ -113,7 +113,7 @@ def test_issue5838(): doc = nlp(sample_text) doc.ents = [Span(doc, 7, 8, label="test")] html = displacy.render(doc, style="ent") - found = html.count("
") + found = html.count("
") assert found == 4 @@ -350,6 +350,78 @@ def test_displacy_render_wrapper(en_vocab): displacy.set_render_wrapper(lambda html: html) +def test_displacy_render_manual_dep(): + """Test displacy.render with manual data for dep style""" + parsed_dep = { + "words": [ + {"text": "This", "tag": "DT"}, + {"text": "is", "tag": "VBZ"}, + {"text": "a", "tag": "DT"}, + {"text": "sentence", "tag": "NN"}, + ], + "arcs": [ + {"start": 0, "end": 1, "label": "nsubj", "dir": "left"}, + {"start": 2, "end": 3, "label": "det", "dir": "left"}, + {"start": 1, "end": 3, "label": "attr", "dir": "right"}, + ], + "title": "Title", + } + html = displacy.render([parsed_dep], style="dep", manual=True) + for word in parsed_dep["words"]: + assert word["text"] in html + assert word["tag"] in html + + +def test_displacy_render_manual_ent(): + """Test displacy.render with manual data for ent style""" + parsed_ents = [ + { + "text": "But Google is starting from behind.", + "ents": [{"start": 4, "end": 10, "label": "ORG"}], + }, + { + "text": "But Google is starting from behind.", + "ents": [{"start": -100, "end": 100, "label": "COMPANY"}], + "title": "Title", + }, + ] + + html = displacy.render(parsed_ents, style="ent", manual=True) + for parsed_ent in parsed_ents: + assert parsed_ent["ents"][0]["label"] in html + if "title" in parsed_ent: + assert parsed_ent["title"] in html + + +def test_displacy_render_manual_span(): + """Test displacy.render with manual data for span style""" + parsed_spans = [ + { + "text": "Welcome to the Bank of China.", + "spans": [ + {"start_token": 3, "end_token": 6, "label": "ORG"}, + {"start_token": 5, "end_token": 6, "label": "GPE"}, + ], + "tokens": ["Welcome", "to", "the", "Bank", "of", "China", "."], + }, + { + "text": "Welcome to the Bank of China.", + "spans": [ + {"start_token": 3, "end_token": 6, "label": "ORG"}, + {"start_token": 5, "end_token": 6, "label": "GPE"}, + ], + "tokens": ["Welcome", "to", "the", "Bank", "of", "China", "."], + "title": "Title", + }, + ] + + html = displacy.render(parsed_spans, style="span", manual=True) + for parsed_span in parsed_spans: + assert parsed_span["spans"][0]["label"] in html + if "title" in parsed_span: + assert parsed_span["title"] in html + + def test_displacy_options_case(): ents = ["foo", "BAR"] colors = {"FOO": "red", "bar": "green"} @@ -396,3 +468,23 @@ def test_issue12816(en_vocab) -> None: # Verify that the HTML tag is still escaped html = displacy.render(doc, style="span") assert "<TEST>" in html + + +@pytest.mark.issue(13056) +def test_displacy_span_stacking(): + """Test whether span stacking works properly for multiple overlapping spans.""" + spans = [ + {"start_token": 2, "end_token": 5, "label": "SkillNC"}, + {"start_token": 0, "end_token": 2, "label": "Skill"}, + {"start_token": 1, "end_token": 3, "label": "Skill"}, + ] + tokens = ["Welcome", "to", "the", "Bank", "of", "China", "."] + per_token_info = SpanRenderer._assemble_per_token_info(spans=spans, tokens=tokens) + + assert len(per_token_info) == len(tokens) + assert all([len(per_token_info[i]["entities"]) == 1 for i in (0, 3, 4)]) + assert all([len(per_token_info[i]["entities"]) == 2 for i in (1, 2)]) + assert per_token_info[1]["entities"][0]["render_slot"] == 1 + assert per_token_info[1]["entities"][1]["render_slot"] == 2 + assert per_token_info[2]["entities"][0]["render_slot"] == 2 + assert per_token_info[2]["entities"][1]["render_slot"] == 3 diff --git a/spacy/tests/test_misc.py b/spacy/tests/test_misc.py index 704a40485..b1b4faa88 100644 --- a/spacy/tests/test_misc.py +++ 
b/spacy/tests/test_misc.py @@ -376,8 +376,9 @@ def test_util_dot_section(): factory = "textcat" [components.textcat.model] - @architectures = "spacy.TextCatBOW.v2" + @architectures = "spacy.TextCatBOW.v3" exclusive_classes = true + length = 262144 ngram_size = 1 no_output_layer = false """ diff --git a/spacy/tests/test_models.py b/spacy/tests/test_models.py index e6692ad92..5228b4544 100644 --- a/spacy/tests/test_models.py +++ b/spacy/tests/test_models.py @@ -26,6 +26,7 @@ from spacy.ml.models import ( build_Tok2Vec_model, ) from spacy.ml.staticvectors import StaticVectors +from spacy.util import registry def get_textcat_bow_kwargs(): @@ -284,3 +285,17 @@ def test_spancat_model_forward_backward(nO=5): Y, backprop = model((docs, spans), is_train=True) assert Y.shape == (spans.dataXd.shape[0], nO) backprop(Y) + + +def test_textcat_reduce_invalid_args(): + textcat_reduce = registry.architectures.get("spacy.TextCatReduce.v1") + tok2vec = make_test_tok2vec() + with pytest.raises(ValueError, match=r"must be used with at least one reduction"): + textcat_reduce( + tok2vec=tok2vec, + exclusive_classes=False, + use_reduce_first=False, + use_reduce_last=False, + use_reduce_max=False, + use_reduce_mean=False, + ) diff --git a/spacy/tests/tokenizer/test_explain.py b/spacy/tests/tokenizer/test_explain.py index 4268392dd..073899fa5 100644 --- a/spacy/tests/tokenizer/test_explain.py +++ b/spacy/tests/tokenizer/test_explain.py @@ -86,6 +86,18 @@ def test_tokenizer_explain_special_matcher(en_vocab): assert tokens == explain_tokens +def test_tokenizer_explain_special_matcher_whitespace(en_vocab): + rules = {":]": [{"ORTH": ":]"}]} + tokenizer = Tokenizer( + en_vocab, + rules=rules, + ) + text = ": ]" + tokens = [t.text for t in tokenizer(text)] + explain_tokens = [t[1] for t in tokenizer.explain(text)] + assert tokens == explain_tokens + + @hypothesis.strategies.composite def sentence_strategy(draw: hypothesis.strategies.DrawFn, max_n_words: int = 4) -> str: """ @@ -124,6 +136,9 @@ def test_tokenizer_explain_fuzzy(lang: str, sentence: str) -> None: """ tokenizer: Tokenizer = spacy.blank(lang).tokenizer - tokens = [t.text for t in tokenizer(sentence) if not t.is_space] + # Tokenizer.explain is not intended to handle whitespace or control + # characters in the same way as Tokenizer + sentence = re.sub(r"\s+", " ", sentence).strip() + tokens = [t.text for t in tokenizer(sentence)] debug_tokens = [t[1] for t in tokenizer.explain(sentence)] assert tokens == debug_tokens, f"{tokens}, {debug_tokens}, {sentence}" diff --git a/spacy/tests/training/test_loop.py b/spacy/tests/training/test_loop.py index 9140421b4..068032e58 100644 --- a/spacy/tests/training/test_loop.py +++ b/spacy/tests/training/test_loop.py @@ -1,7 +1,7 @@ from typing import Callable, Iterable, Iterator import pytest -from thinc.api import Config +from thinc.api import Config, fix_random_seed from spacy import Language from spacy.training import Example @@ -88,6 +88,8 @@ TRAIN_DATA = [ @pytest.mark.slow def test_distill_loop(config_str): + fix_random_seed(0) + @registry.readers("sentence_corpus") def create_sentence_corpus() -> Callable[[Language], Iterable[Example]]: return SentenceCorpus() diff --git a/spacy/tokenizer.pyx b/spacy/tokenizer.pyx index 0b9d6e22a..189d8f3d5 100644 --- a/spacy/tokenizer.pyx +++ b/spacy/tokenizer.pyx @@ -1,4 +1,4 @@ -# cython: embedsignature=True, profile=True, binding=True +# cython: embedsignature=True, binding=True cimport cython from cymem.cymem cimport Pool from cython.operator cimport dereference as deref @@ -730,9 
+730,16 @@ cdef class Tokenizer: if i in spans_by_start: span = spans_by_start[i] exc = [d[ORTH] for d in special_cases[span.label_]] - for j, orth in enumerate(exc): - final_tokens.append((f"SPECIAL-{j + 1}", self.vocab.strings[orth])) - i += len(span) + # The phrase matcher can overmatch for tokens separated by + # spaces in the text but not in the underlying rule, so skip + # cases where the texts aren't identical + if span.text != "".join([self.vocab.strings[orth] for orth in exc]): + final_tokens.append(tokens[i]) + i += 1 + else: + for j, orth in enumerate(exc): + final_tokens.append((f"SPECIAL-{j + 1}", self.vocab.strings[orth])) + i += len(span) else: final_tokens.append(tokens[i]) i += 1 diff --git a/spacy/tokens/__init__.py b/spacy/tokens/__init__.py index b40a5f119..7a8505aed 100644 --- a/spacy/tokens/__init__.py +++ b/spacy/tokens/__init__.py @@ -5,4 +5,4 @@ from .span import Span from .span_group import SpanGroup from .token import Token -__all__ = ["Doc", "Token", "Span", "SpanGroup", "DocBin", "MorphAnalysis"] +__all__ = ["Doc", "DocBin", "MorphAnalysis", "Span", "SpanGroup", "Token"] diff --git a/spacy/tokens/doc.pyi b/spacy/tokens/doc.pyi index 116533263..21577edab 100644 --- a/spacy/tokens/doc.pyi +++ b/spacy/tokens/doc.pyi @@ -8,6 +8,7 @@ from typing import ( List, Optional, Protocol, + Sequence, Tuple, Union, overload, @@ -41,7 +42,7 @@ class Doc: user_hooks: Dict[str, Callable[..., Any]] user_token_hooks: Dict[str, Callable[..., Any]] user_span_hooks: Dict[str, Callable[..., Any]] - tensor: np.ndarray[Any, np.dtype[np.float_]] + tensor: np.ndarray[Any, np.dtype[np.float64]] user_data: Dict[str, Any] has_unknown_spaces: bool _context: Any @@ -125,7 +126,7 @@ class Doc: vector: Optional[Floats1d] = ..., alignment_mode: str = ..., span_id: Union[int, str] = ..., - ) -> Span: ... + ) -> Optional[Span]: ... def similarity(self, other: Union[Doc, Span, Token, Lexeme]) -> float: ... @property def has_vector(self) -> bool: ... @@ -135,7 +136,12 @@ class Doc: def text(self) -> str: ... @property def text_with_ws(self) -> str: ... - ents: Tuple[Span] + # Ideally the getter would output Tuple[Span] + # see https://github.com/python/mypy/issues/3004 + @property + def ents(self) -> Sequence[Span]: ... + @ents.setter + def ents(self, value: Sequence[Span]) -> None: ... def set_ents( self, entities: List[Span], @@ -161,7 +167,7 @@ class Doc: ) -> Doc: ... def to_array( self, py_attr_ids: Union[int, str, List[Union[int, str]]] - ) -> np.ndarray[Any, np.dtype[np.float_]]: ... + ) -> np.ndarray[Any, np.dtype[np.float64]]: ... @staticmethod def from_docs( docs: List[Doc], @@ -174,15 +180,13 @@ class Doc: self, path: Union[str, Path], *, exclude: Iterable[str] = ... ) -> None: ... def from_disk( - self, path: Union[str, Path], *, exclude: Union[List[str], Tuple[str]] = ... + self, path: Union[str, Path], *, exclude: Iterable[str] = ... ) -> Doc: ... - def to_bytes(self, *, exclude: Union[List[str], Tuple[str]] = ...) -> bytes: ... - def from_bytes( - self, bytes_data: bytes, *, exclude: Union[List[str], Tuple[str]] = ... - ) -> Doc: ... - def to_dict(self, *, exclude: Union[List[str], Tuple[str]] = ...) -> bytes: ... + def to_bytes(self, *, exclude: Iterable[str] = ...) -> bytes: ... + def from_bytes(self, bytes_data: bytes, *, exclude: Iterable[str] = ...) -> Doc: ... + def to_dict(self, *, exclude: Iterable[str] = ...) -> Dict[str, Any]: ... def from_dict( - self, msg: bytes, *, exclude: Union[List[str], Tuple[str]] = ... + self, msg: Dict[str, Any], *, exclude: Iterable[str] = ... 
) -> Doc: ... def extend_tensor(self, tensor: Floats2d) -> None: ... def retokenize(self) -> Retokenizer: ... diff --git a/spacy/tokens/doc.pyx b/spacy/tokens/doc.pyx index df012a28a..14fbb71c3 100644 --- a/spacy/tokens/doc.pyx +++ b/spacy/tokens/doc.pyx @@ -1,4 +1,4 @@ -# cython: infer_types=True, bounds_check=False, profile=True +# cython: infer_types=True, bounds_check=False from typing import Set cimport cython @@ -1340,7 +1340,7 @@ cdef class Doc: path (str / Path): A path to a directory. Paths may be either strings or `Path`-like objects. - exclude (list): String names of serialization fields to exclude. + exclude (Iterable[str]): String names of serialization fields to exclude. RETURNS (Doc): The modified `Doc` object. DOCS: https://spacy.io/api/doc#from_disk @@ -1353,7 +1353,7 @@ cdef class Doc: def to_bytes(self, *, exclude=tuple()): """Serialize, i.e. export the document contents to a binary string. - exclude (list): String names of serialization fields to exclude. + exclude (Iterable[str]): String names of serialization fields to exclude. RETURNS (bytes): A losslessly serialized copy of the `Doc`, including all annotations. @@ -1365,7 +1365,7 @@ cdef class Doc: """Deserialize, i.e. import the document contents from a binary string. data (bytes): The string to load from. - exclude (list): String names of serialization fields to exclude. + exclude (Iterable[str]): String names of serialization fields to exclude. RETURNS (Doc): Itself. DOCS: https://spacy.io/api/doc#from_bytes @@ -1375,11 +1375,8 @@ cdef class Doc: def to_dict(self, *, exclude=tuple()): """Export the document contents to a dictionary for serialization. - exclude (list): String names of serialization fields to exclude. - RETURNS (bytes): A losslessly serialized copy of the `Doc`, including - all annotations. - - DOCS: https://spacy.io/api/doc#to_bytes + exclude (Iterable[str]): String names of serialization fields to exclude. + RETURNS (Dict[str, Any]): A dictionary representation of the `Doc` """ array_head = Doc._get_array_attrs() strings = set() @@ -1424,13 +1421,11 @@ cdef class Doc: return util.to_dict(serializers, exclude) def from_dict(self, msg, *, exclude=tuple()): - """Deserialize, i.e. import the document contents from a binary string. + """Deserialize the document contents from a dictionary representation. - data (bytes): The string to load from. - exclude (list): String names of serialization fields to exclude. + msg (Dict[str, Any]): The dictionary to load from. + exclude (Iterable[str]): String names of serialization fields to exclude. RETURNS (Doc): Itself. 
- - DOCS: https://spacy.io/api/doc#from_dict """ if self.length != 0: raise ValueError(Errors.E033.format(length=self.length)) diff --git a/spacy/tokens/graph.pyx b/spacy/tokens/graph.pyx index e789d1a37..897735eb9 100644 --- a/spacy/tokens/graph.pyx +++ b/spacy/tokens/graph.pyx @@ -1,4 +1,5 @@ # cython: infer_types=True, cdivision=True, boundscheck=False, binding=True +# cython: profile=False from typing import Generator, List, Tuple cimport cython diff --git a/spacy/tokens/morphanalysis.pyx b/spacy/tokens/morphanalysis.pyx index 7ff08c4bd..9be30ad9e 100644 --- a/spacy/tokens/morphanalysis.pyx +++ b/spacy/tokens/morphanalysis.pyx @@ -1,3 +1,4 @@ +# cython: profile=False cimport numpy as np from ..errors import Errors diff --git a/spacy/tokens/retokenizer.pyx b/spacy/tokens/retokenizer.pyx index 6c5e25fa1..c928883ab 100644 --- a/spacy/tokens/retokenizer.pyx +++ b/spacy/tokens/retokenizer.pyx @@ -1,4 +1,4 @@ -# cython: infer_types=True, bounds_check=False, profile=True +# cython: infer_types=True, bounds_check=False from cymem.cymem cimport Pool from libc.string cimport memset diff --git a/spacy/tokens/span.pyx b/spacy/tokens/span.pyx index 26e5920c0..203e5db52 100644 --- a/spacy/tokens/span.pyx +++ b/spacy/tokens/span.pyx @@ -1,3 +1,4 @@ +# cython: profile=False cimport numpy as np from libcpp.memory cimport make_shared @@ -127,13 +128,16 @@ cdef class Span: self._vector = vector self._vector_norm = vector_norm - def __richcmp__(self, Span other, int op): + def __richcmp__(self, object other, int op): if other is None: if op == 0 or op == 1 or op == 2: return False else: return True + if not isinstance(other, Span): + return False + self_tuple = self._cmp_tuple() other_tuple = other._cmp_tuple() # < diff --git a/spacy/tokens/span_group.pyx b/spacy/tokens/span_group.pyx index c2f5ce1c8..ec10a84e8 100644 --- a/spacy/tokens/span_group.pyx +++ b/spacy/tokens/span_group.pyx @@ -1,3 +1,4 @@ +# cython: profile=False import struct import weakref from copy import deepcopy diff --git a/spacy/tokens/token.pyi b/spacy/tokens/token.pyi index 267924e57..5c3d4d0ba 100644 --- a/spacy/tokens/token.pyi +++ b/spacy/tokens/token.pyi @@ -53,7 +53,12 @@ class Token: def __bytes__(self) -> bytes: ... def __str__(self) -> str: ... def __repr__(self) -> str: ... - def __richcmp__(self, other: Token, op: int) -> bool: ... + def __lt__(self, other: Any) -> bool: ... + def __le__(self, other: Any) -> bool: ... + def __eq__(self, other: Any) -> bool: ... + def __ne__(self, other: Any) -> bool: ... + def __gt__(self, other: Any) -> bool: ... + def __ge__(self, other: Any) -> bool: ... @property def _(self) -> Underscore: ... def nbor(self, i: int = ...) -> Token: ... diff --git a/spacy/tokens/token.pyx b/spacy/tokens/token.pyx index ff1120b7b..358209efb 100644 --- a/spacy/tokens/token.pyx +++ b/spacy/tokens/token.pyx @@ -1,4 +1,5 @@ # cython: infer_types=True +# cython: profile=False # Compiler crashes on memory view coercion without this. Should report bug. 
cimport numpy as np @@ -139,17 +140,20 @@ cdef class Token: def __repr__(self): return self.__str__() - def __richcmp__(self, Token other, int op): + def __richcmp__(self, object other, int op): # http://cython.readthedocs.io/en/latest/src/userguide/special_methods.html if other is None: if op in (0, 1, 2): return False else: return True + if not isinstance(other, Token): + return False + cdef Token other_token = other cdef Doc my_doc = self.doc - cdef Doc other_doc = other.doc + cdef Doc other_doc = other_token.doc my = self.idx - their = other.idx + their = other_token.idx if op == 0: return my < their elif op == 2: diff --git a/spacy/training/__init__.py b/spacy/training/__init__.py index adfc2bb66..6bc4e1690 100644 --- a/spacy/training/__init__.py +++ b/spacy/training/__init__.py @@ -17,3 +17,28 @@ from .iob_utils import ( # noqa: F401 tags_to_entities, ) from .loggers import console_logger # noqa: F401 + +__all__ = [ + "Alignment", + "Corpus", + "Example", + "JsonlCorpus", + "PlainTextCorpus", + "biluo_tags_to_offsets", + "biluo_tags_to_spans", + "biluo_to_iob", + "create_copy_from_base_model", + "docs_to_json", + "dont_augment", + "iob_to_biluo", + "minibatch_by_padded_size", + "minibatch_by_words", + "offsets_to_biluo_tags", + "orth_variants_augmenter", + "read_json_file", + "remove_bilu_prefix", + "split_bilu_label", + "tags_to_entities", + "validate_get_examples", + "validate_examples", +] diff --git a/spacy/training/align.pyx b/spacy/training/align.pyx index 79fec73c4..c68110e30 100644 --- a/spacy/training/align.pyx +++ b/spacy/training/align.pyx @@ -1,3 +1,4 @@ +# cython: profile=False import re from itertools import chain from typing import List, Tuple diff --git a/spacy/training/alignment_array.pyx b/spacy/training/alignment_array.pyx index b0be1512b..f0eb5cf39 100644 --- a/spacy/training/alignment_array.pyx +++ b/spacy/training/alignment_array.pyx @@ -1,3 +1,4 @@ +# cython: profile=False from typing import List import numpy diff --git a/spacy/training/corpus.py b/spacy/training/corpus.py index 6037c15e3..5cc2733a5 100644 --- a/spacy/training/corpus.py +++ b/spacy/training/corpus.py @@ -63,7 +63,7 @@ def create_plain_text_reader( path: Optional[Path], min_length: int = 0, max_length: int = 0, -) -> Callable[["Language"], Iterable[Doc]]: +) -> Callable[["Language"], Iterable[Example]]: """Iterate Example objects from a file or directory of plain text UTF-8 files with one line per doc. diff --git a/spacy/training/example.pyi b/spacy/training/example.pyi new file mode 100644 index 000000000..06639d70c --- /dev/null +++ b/spacy/training/example.pyi @@ -0,0 +1,66 @@ +from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple + +from ..tokens import Doc, Span +from ..vocab import Vocab +from .alignment import Alignment + +def annotations_to_doc( + vocab: Vocab, + tok_annot: Dict[str, Any], + doc_annot: Dict[str, Any], +) -> Doc: ... +def validate_examples( + examples: Iterable[Example], + method: str, +) -> None: ... +def validate_get_examples( + get_examples: Callable[[], Iterable[Example]], + method: str, +): ... + +class Example: + x: Doc + y: Doc + + def __init__( + self, + predicted: Doc, + reference: Doc, + *, + alignment: Optional[Alignment] = None, + ): ... + def __len__(self) -> int: ... + @property + def predicted(self) -> Doc: ... + @predicted.setter + def predicted(self, doc: Doc) -> None: ... + @property + def reference(self) -> Doc: ... + @reference.setter + def reference(self, doc: Doc) -> None: ... + def copy(self) -> Example: ... 
+ @classmethod + def from_dict(cls, predicted: Doc, example_dict: Dict[str, Any]) -> Example: ... + @property + def alignment(self) -> Alignment: ... + def get_aligned(self, field: str, as_string=False): ... + def get_aligned_parse(self, projectivize=True): ... + def get_aligned_sent_starts(self): ... + def get_aligned_spans_x2y( + self, x_spans: Iterable[Span], allow_overlap=False + ) -> List[Span]: ... + def get_aligned_spans_y2x( + self, y_spans: Iterable[Span], allow_overlap=False + ) -> List[Span]: ... + def get_aligned_ents_and_ner(self) -> Tuple[List[Span], List[str]]: ... + def get_aligned_ner(self) -> List[str]: ... + def get_matching_ents(self, check_label: bool = True) -> List[Span]: ... + def to_dict(self) -> Dict[str, Any]: ... + def split_sents(self) -> List[Example]: ... + @property + def text(self) -> str: ... + def __str__(self) -> str: ... + def __repr__(self) -> str: ... + +def _parse_example_dict_data(example_dict): ... +def _fix_legacy_dict_data(example_dict): ... diff --git a/spacy/training/example.pyx b/spacy/training/example.pyx index efca4bcb0..914e877f5 100644 --- a/spacy/training/example.pyx +++ b/spacy/training/example.pyx @@ -1,3 +1,4 @@ +# cython: profile=False from collections.abc import Iterable as IterableInstance import numpy diff --git a/spacy/training/gold_io.pyx b/spacy/training/gold_io.pyx index 2fc36e41f..afbdf4631 100644 --- a/spacy/training/gold_io.pyx +++ b/spacy/training/gold_io.pyx @@ -1,3 +1,4 @@ +# cython: profile=False import warnings import srsly diff --git a/spacy/training/initialize.py b/spacy/training/initialize.py index 7a883ce50..781614c34 100644 --- a/spacy/training/initialize.py +++ b/spacy/training/initialize.py @@ -386,7 +386,7 @@ def read_vectors( shape = (truncate_vectors, shape[1]) vectors_data = numpy.zeros(shape=shape, dtype="f") vectors_keys = [] - for i, line in enumerate(tqdm.tqdm(f)): + for i, line in enumerate(tqdm.tqdm(f, disable=None)): line = line.rstrip() pieces = line.rsplit(" ", vectors_data.shape[1]) word = pieces.pop(0) diff --git a/spacy/typedefs.pyx b/spacy/typedefs.pyx index e69de29bb..61bf62038 100644 --- a/spacy/typedefs.pyx +++ b/spacy/typedefs.pyx @@ -0,0 +1 @@ +# cython: profile=False diff --git a/spacy/util.py b/spacy/util.py index 30300e019..5e966343f 100644 --- a/spacy/util.py +++ b/spacy/util.py @@ -91,7 +91,6 @@ logger.addHandler(logger_stream_handler) class ENV_VARS: CONFIG_OVERRIDES = "SPACY_CONFIG_OVERRIDES" - PROJECT_USE_GIT_VERSION = "SPACY_PROJECT_USE_GIT_VERSION" class registry(thinc.registry): @@ -109,6 +108,7 @@ class registry(thinc.registry): augmenters = catalogue.create("spacy", "augmenters", entry_points=True) loggers = catalogue.create("spacy", "loggers", entry_points=True) scorers = catalogue.create("spacy", "scorers", entry_points=True) + vectors = catalogue.create("spacy", "vectors", entry_points=True) # These are factories registered via third-party packages and the # spacy_factories entry point. This registry only exists so we can easily # load them via the entry points. 
The "true" factories are added via the @@ -878,7 +878,7 @@ def load_meta(path: Union[str, Path]) -> Dict[str, Any]: if "spacy_version" in meta: if not is_compatible_version(about.__version__, meta["spacy_version"]): lower_version = get_model_lower_version(meta["spacy_version"]) - lower_version = get_minor_version(lower_version) # type: ignore[arg-type] + lower_version = get_base_version(lower_version) # type: ignore[arg-type] if lower_version is not None: lower_version = "v" + lower_version elif "spacy_git_version" in meta: @@ -958,23 +958,12 @@ def replace_model_node(model: Model, target: Model, replacement: Model) -> None: def split_command(command: str) -> List[str]: """Split a string command using shlex. Handles platform compatibility. - command (str) : The command to split RETURNS (List[str]): The split command. """ return shlex.split(command, posix=not is_windows) -def join_command(command: List[str]) -> str: - """Join a command using shlex. shlex.join is only available for Python 3.8+, - so we're using a workaround here. - - command (List[str]): The command to join. - RETURNS (str): The joined command - """ - return " ".join(shlex.quote(cmd) for cmd in command) - - def run_command( command: Union[str, List[str]], *, @@ -983,7 +972,6 @@ def run_command( ) -> subprocess.CompletedProcess: """Run a command on the command line as a subprocess. If the subprocess returns a non-zero exit code, a system exit is performed. - command (str / List[str]): The command. If provided as a string, the string will be split using shlex.split. stdin (Optional[Any]): stdin to read from or None. @@ -1034,7 +1022,6 @@ def run_command( @contextmanager def working_dir(path: Union[str, Path]) -> Iterator[Path]: """Change current working directory and returns to previous on exit. - path (str / Path): The directory to navigate to. YIELDS (Path): The absolute path to the current working directory. This should be used if the block needs to perform actions within the working @@ -1053,7 +1040,6 @@ def working_dir(path: Union[str, Path]) -> Iterator[Path]: def make_tempdir() -> Generator[Path, None, None]: """Execute a block in a temporary directory and remove the directory and its contents at the end of the with block. - YIELDS (Path): The path of the temp directory. """ d = Path(tempfile.mkdtemp()) @@ -1066,35 +1052,47 @@ def make_tempdir() -> Generator[Path, None, None]: rmfunc(path) try: - shutil.rmtree(str(d), onerror=force_remove) + if sys.version_info >= (3, 12): + shutil.rmtree(str(d), onexc=force_remove) + else: + shutil.rmtree(str(d), onerror=force_remove) except PermissionError as e: warnings.warn(Warnings.W091.format(dir=d, msg=e)) -def is_cwd(path: Union[Path, str]) -> bool: - """Check whether a path is the current working directory. - - path (Union[Path, str]): The directory path. - RETURNS (bool): Whether the path is the current working directory. - """ - return str(Path(path).resolve()).lower() == str(Path.cwd().resolve()).lower() - - def is_in_jupyter() -> bool: - """Check if user is running spaCy from a Jupyter notebook by detecting the - IPython kernel. Mainly used for the displaCy visualizer. - RETURNS (bool): True if in Jupyter, False if not. + """Check if user is running spaCy from a Jupyter or Colab notebook by + detecting the IPython kernel. Mainly used for the displaCy visualizer. + RETURNS (bool): True if in Jupyter/Colab, False if not. 
""" # https://stackoverflow.com/a/39662359/6400719 + # https://stackoverflow.com/questions/15411967 try: - shell = get_ipython().__class__.__name__ # type: ignore[name-defined] - if shell == "ZMQInteractiveShell": + if get_ipython().__class__.__name__ == "ZMQInteractiveShell": # type: ignore[name-defined] return True # Jupyter notebook or qtconsole + if get_ipython().__class__.__module__ == "google.colab._shell": # type: ignore[name-defined] + return True # Colab notebook except NameError: - return False # Probably standard Python interpreter + pass # Probably standard Python interpreter + # additional check for Colab + try: + import google.colab + + return True # Colab notebook + except ImportError: + pass return False +def is_in_interactive() -> bool: + """Check if user is running spaCy from an interactive Python + shell. Will return True in Jupyter notebooks too. + RETURNS (bool): True if in interactive mode, False if not. + """ + # https://stackoverflow.com/questions/2356399/tell-if-python-is-in-interactive-mode + return hasattr(sys, "ps1") or hasattr(sys, "ps2") + + def get_object_name(obj: Any) -> str: """Get a human-readable name of a Python object, e.g. a pipeline component. diff --git a/spacy/vectors.pyx b/spacy/vectors.pyx index 783e6d00a..280925f7e 100644 --- a/spacy/vectors.pyx +++ b/spacy/vectors.pyx @@ -1,3 +1,6 @@ +# cython: infer_types=True, binding=True +from typing import Callable + from cython.operator cimport dereference as deref from libc.stdint cimport uint32_t, uint64_t from libcpp.set cimport set as cppset @@ -5,7 +8,8 @@ from murmurhash.mrmr cimport hash128_x64 import warnings from enum import Enum -from typing import cast +from pathlib import Path +from typing import TYPE_CHECKING, Union, cast import numpy import srsly @@ -21,6 +25,9 @@ from .attrs import IDS from .errors import Errors, Warnings from .strings import get_string_id +if TYPE_CHECKING: + from .vocab import Vocab # noqa: F401 # no-cython-lint + def unpickle_vectors(bytes_data): return Vectors().from_bytes(bytes_data) @@ -35,7 +42,71 @@ class Mode(str, Enum): return list(cls.__members__.keys()) -cdef class Vectors: +cdef class BaseVectors: + def __init__(self, *, strings=None): + # Make sure abstract BaseVectors is not instantiated. 
+ if self.__class__ == BaseVectors: + raise TypeError( + Errors.E1046.format(cls_name=self.__class__.__name__) + ) + + def __getitem__(self, key): + raise NotImplementedError + + def __contains__(self, key): + raise NotImplementedError + + def is_full(self): + raise NotImplementedError + + def get_batch(self, keys): + raise NotImplementedError + + @property + def shape(self): + raise NotImplementedError + + def __len__(self): + raise NotImplementedError + + @property + def vectors_length(self): + raise NotImplementedError + + @property + def size(self): + raise NotImplementedError + + def add(self, key, *, vector=None): + raise NotImplementedError + + def to_ops(self, ops: Ops): + pass + + # add dummy methods for to_bytes, from_bytes, to_disk and from_disk to + # allow serialization + def to_bytes(self, **kwargs): + return b"" + + def from_bytes(self, data: bytes, **kwargs): + return self + + def to_disk(self, path: Union[str, Path], **kwargs): + return None + + def from_disk(self, path: Union[str, Path], **kwargs): + return self + + +@util.registry.vectors("spacy.Vectors.v1") +def create_mode_vectors() -> Callable[["Vocab"], BaseVectors]: + def vectors_factory(vocab: "Vocab") -> BaseVectors: + return Vectors(strings=vocab.strings) + + return vectors_factory + + +cdef class Vectors(BaseVectors): """Store, save and load word vectors. Vectors data is kept in the vectors.data attribute, which should be an diff --git a/spacy/vocab.pyx b/spacy/vocab.pyx index 0129862c1..6cd6da56c 100644 --- a/spacy/vocab.pyx +++ b/spacy/vocab.pyx @@ -1,4 +1,3 @@ -# cython: profile=True import functools import numpy @@ -93,8 +92,9 @@ cdef class Vocab: return self._vectors def __set__(self, vectors): - for s in vectors.strings: - self.strings.add(s) + if hasattr(vectors, "strings"): + for s in vectors.strings: + self.strings.add(s) self._vectors = vectors self._vectors.strings = self.strings @@ -181,7 +181,7 @@ cdef class Vocab: lex = self.mem.alloc(1, sizeof(LexemeC)) lex.orth = self.strings.add(string) lex.length = len(string) - if self.vectors is not None: + if self.vectors is not None and hasattr(self.vectors, "key2row"): lex.id = self.vectors.key2row.get(lex.orth, OOV_RANK) else: lex.id = OOV_RANK @@ -275,12 +275,17 @@ cdef class Vocab: @property def vectors_length(self): - return self.vectors.shape[1] + if hasattr(self.vectors, "shape"): + return self.vectors.shape[1] + else: + return -1 def reset_vectors(self, *, width=None, shape=None): """Drop the current vector table. Because all vectors must be the same width, you have to call this to change the size of the vectors. 
""" + if not isinstance(self.vectors, Vectors): + raise ValueError(Errors.E849.format(method="reset_vectors", vectors_type=type(self.vectors))) if width is not None and shape is not None: raise ValueError(Errors.E065.format(width=width, shape=shape)) elif shape is not None: @@ -290,6 +295,8 @@ cdef class Vocab: self.vectors = Vectors(strings=self.strings, shape=(self.vectors.shape[0], width)) def deduplicate_vectors(self): + if not isinstance(self.vectors, Vectors): + raise ValueError(Errors.E849.format(method="deduplicate_vectors", vectors_type=type(self.vectors))) if self.vectors.mode != VectorsMode.default: raise ValueError(Errors.E858.format( mode=self.vectors.mode, @@ -343,6 +350,8 @@ cdef class Vocab: DOCS: https://spacy.io/api/vocab#prune_vectors """ + if not isinstance(self.vectors, Vectors): + raise ValueError(Errors.E849.format(method="prune_vectors", vectors_type=type(self.vectors))) if self.vectors.mode != VectorsMode.default: raise ValueError(Errors.E858.format( mode=self.vectors.mode, diff --git a/website/docs/api/architectures.mdx b/website/docs/api/architectures.mdx index bab24f13b..956234ac0 100644 --- a/website/docs/api/architectures.mdx +++ b/website/docs/api/architectures.mdx @@ -78,16 +78,16 @@ subword features, and a [MaxoutWindowEncoder](/api/architectures#MaxoutWindowEncoder) encoding layer consisting of a CNN and a layer-normalized maxout activation function. -| Name | Description | -| -------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `width` | The width of the input and output. These are required to be the same, so that residual connections can be used. Recommended values are `96`, `128` or `300`. ~~int~~ | -| `depth` | The number of convolutional layers to use. Recommended values are between `2` and `8`. ~~int~~ | -| `embed_size` | The number of rows in the hash embedding tables. This can be surprisingly small, due to the use of the hash embeddings. Recommended values are between `2000` and `10000`. ~~int~~ | -| `window_size` | The number of tokens on either side to concatenate during the convolutions. The receptive field of the CNN will be `depth * (window_size * 2 + 1)`, so a 4-layer network with a window size of `2` will be sensitive to 20 words at a time. Recommended value is `1`. ~~int~~ | -| `maxout_pieces` | The number of pieces to use in the maxout non-linearity. If `1`, the [`Mish`](https://thinc.ai/docs/api-layers#mish) non-linearity is used instead. Recommended values are `1`-`3`. ~~int~~ | -| `subword_features` | Whether to also embed subword features, specifically the prefix, suffix and word shape. This is recommended for alphabetic languages like English, but not if single-character tokens are used for a language such as Chinese. ~~bool~~ | -| `pretrained_vectors` | Whether to also use static vectors. ~~bool~~ | -| **CREATES** | The model using the architecture. ~~Model[List[Doc], List[Floats2d]]~~ | +| Name | Description | +| -------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `width` | The width of the input and output. 
These are required to be the same, so that residual connections can be used. Recommended values are `96`, `128` or `300`. ~~int~~ | +| `depth` | The number of convolutional layers to use. Recommended values are between `2` and `8`. ~~int~~ | +| `embed_size` | The number of rows in the hash embedding tables. This can be surprisingly small, due to the use of the hash embeddings. Recommended values are between `2000` and `10000`. ~~int~~ | +| `window_size` | The number of tokens on either side to concatenate during the convolutions. The receptive field of the CNN will be `depth * window_size * 2 + 1`, so a 4-layer network with a window size of `2` will be sensitive to 17 words at a time. Recommended value is `1`. ~~int~~ | +| `maxout_pieces` | The number of pieces to use in the maxout non-linearity. If `1`, the [`Mish`](https://thinc.ai/docs/api-layers#mish) non-linearity is used instead. Recommended values are `1`-`3`. ~~int~~ | +| `subword_features` | Whether to also embed subword features, specifically the prefix, suffix and word shape. This is recommended for alphabetic languages like English, but not if single-character tokens are used for a language such as Chinese. ~~bool~~ | +| `pretrained_vectors` | Whether to also use static vectors. ~~bool~~ | +| **CREATES** | The model using the architecture. ~~Model[List[Doc], List[Floats2d]]~~ | ### spacy.Tok2VecListener.v1 {id="Tok2VecListener"} @@ -481,6 +481,286 @@ The other arguments are shared between all versions. +## Curated Transformer architectures {id="curated-trf",source="https://github.com/explosion/spacy-curated-transformers/blob/main/spacy_curated_transformers/models/architectures.py"} + +The following architectures are provided by the package +[`spacy-curated-transformers`](https://github.com/explosion/spacy-curated-transformers). +See the [usage documentation](/usage/embeddings-transformers#transformers) for +how to integrate the architectures into your training config. + +When loading the model +[from the Hugging Face Hub](/api/curatedtransformer#hf_trfencoder_loader), the +model config's parameters must be the same as the hyperparameters used by the +pre-trained model. The +[`init fill-curated-transformer`](/api/cli#init-fill-curated-transformer) CLI +command can be used to automatically fill in these values. + +### spacy-curated-transformers.AlbertTransformer.v1 + +Construct an ALBERT transformer model. + +| Name | Description | +| ------------------------------ | ---------------------------------------------------------------------------------------- | +| `vocab_size` | Vocabulary size. ~~int~~ | +| `with_spans` | Callback that constructs a span generator model. ~~Callable~~ | +| `piece_encoder` | The piece encoder to segment input tokens. ~~Model~~ | +| `attention_probs_dropout_prob` | Dropout probability of the self-attention layers. ~~float~~ | +| `embedding_width` | Width of the embedding representations. ~~int~~ | +| `hidden_act` | Activation used by the point-wise feed-forward layers. ~~str~~ | +| `hidden_dropout_prob` | Dropout probability of the point-wise feed-forward and embedding layers. ~~float~~ | +| `hidden_width` | Width of the final representations. ~~int~~ | +| `intermediate_width` | Width of the intermediate projection layer in the point-wise feed-forward layer. ~~int~~ | +| `layer_norm_eps` | Epsilon for layer normalization. ~~float~~ | +| `max_position_embeddings` | Maximum length of position embeddings. ~~int~~ | +| `model_max_length` | Maximum length of model inputs.
~~int~~ | +| `num_attention_heads` | Number of self-attention heads. ~~int~~ | +| `num_hidden_groups` | Number of layer groups whose constituents share parameters. ~~int~~ | +| `num_hidden_layers` | Number of hidden layers. ~~int~~ | +| `padding_idx` | Index of the padding meta-token. ~~int~~ | +| `type_vocab_size` | Type vocabulary size. ~~int~~ | +| `mixed_precision` | Use mixed-precision training. ~~bool~~ | +| `grad_scaler_config` | Configuration passed to the PyTorch gradient scaler. ~~dict~~ | +| **CREATES** | The model using the architecture ~~Model~~ | + +### spacy-curated-transformers.BertTransformer.v1 + +Construct a BERT transformer model. + +| Name | Description | +| ------------------------------ | ---------------------------------------------------------------------------------------- | +| `vocab_size` | Vocabulary size. ~~int~~ | +| `with_spans` | Callback that constructs a span generator model. ~~Callable~~ | +| `piece_encoder` | The piece encoder to segment input tokens. ~~Model~~ | +| `attention_probs_dropout_prob` | Dropout probability of the self-attention layers. ~~float~~ | +| `hidden_act` | Activation used by the point-wise feed-forward layers. ~~str~~ | +| `hidden_dropout_prob` | Dropout probability of the point-wise feed-forward and embedding layers. ~~float~~ | +| `hidden_width` | Width of the final representations. ~~int~~ | +| `intermediate_width` | Width of the intermediate projection layer in the point-wise feed-forward layer. ~~int~~ | +| `layer_norm_eps` | Epsilon for layer normalization. ~~float~~ | +| `max_position_embeddings` | Maximum length of position embeddings. ~~int~~ | +| `model_max_length` | Maximum length of model inputs. ~~int~~ | +| `num_attention_heads` | Number of self-attention heads. ~~int~~ | +| `num_hidden_layers` | Number of hidden layers. ~~int~~ | +| `padding_idx` | Index of the padding meta-token. ~~int~~ | +| `type_vocab_size` | Type vocabulary size. ~~int~~ | +| `mixed_precision` | Use mixed-precision training. ~~bool~~ | +| `grad_scaler_config` | Configuration passed to the PyTorch gradient scaler. ~~dict~~ | +| **CREATES** | The model using the architecture ~~Model~~ | + +### spacy-curated-transformers.CamembertTransformer.v1 + +Construct a CamemBERT transformer model. + +| Name | Description | +| ------------------------------ | ---------------------------------------------------------------------------------------- | +| `vocab_size` | Vocabulary size. ~~int~~ | +| `with_spans` | Callback that constructs a span generator model. ~~Callable~~ | +| `piece_encoder` | The piece encoder to segment input tokens. ~~Model~~ | +| `attention_probs_dropout_prob` | Dropout probability of the self-attention layers. ~~float~~ | +| `hidden_act` | Activation used by the point-wise feed-forward layers. ~~str~~ | +| `hidden_dropout_prob` | Dropout probability of the point-wise feed-forward and embedding layers. ~~float~~ | +| `hidden_width` | Width of the final representations. ~~int~~ | +| `intermediate_width` | Width of the intermediate projection layer in the point-wise feed-forward layer. ~~int~~ | +| `layer_norm_eps` | Epsilon for layer normalization. ~~float~~ | +| `max_position_embeddings` | Maximum length of position embeddings. ~~int~~ | +| `model_max_length` | Maximum length of model inputs. ~~int~~ | +| `num_attention_heads` | Number of self-attention heads. ~~int~~ | +| `num_hidden_layers` | Number of hidden layers. ~~int~~ | +| `padding_idx` | Index of the padding meta-token. ~~int~~ | +| `type_vocab_size` | Type vocabulary size. 
~~int~~ | +| `mixed_precision` | Use mixed-precision training. ~~bool~~ | +| `grad_scaler_config` | Configuration passed to the PyTorch gradient scaler. ~~dict~~ | +| **CREATES** | The model using the architecture ~~Model~~ | + +### spacy-curated-transformers.RobertaTransformer.v1 + +Construct a RoBERTa transformer model. + +| Name | Description | +| ------------------------------ | ---------------------------------------------------------------------------------------- | +| `vocab_size` | Vocabulary size. ~~int~~ | +| `with_spans` | Callback that constructs a span generator model. ~~Callable~~ | +| `piece_encoder` | The piece encoder to segment input tokens. ~~Model~~ | +| `attention_probs_dropout_prob` | Dropout probability of the self-attention layers. ~~float~~ | +| `hidden_act` | Activation used by the point-wise feed-forward layers. ~~str~~ | +| `hidden_dropout_prob` | Dropout probability of the point-wise feed-forward and embedding layers. ~~float~~ | +| `hidden_width` | Width of the final representations. ~~int~~ | +| `intermediate_width` | Width of the intermediate projection layer in the point-wise feed-forward layer. ~~int~~ | +| `layer_norm_eps` | Epsilon for layer normalization. ~~float~~ | +| `max_position_embeddings` | Maximum length of position embeddings. ~~int~~ | +| `model_max_length` | Maximum length of model inputs. ~~int~~ | +| `num_attention_heads` | Number of self-attention heads. ~~int~~ | +| `num_hidden_layers` | Number of hidden layers. ~~int~~ | +| `padding_idx` | Index of the padding meta-token. ~~int~~ | +| `type_vocab_size` | Type vocabulary size. ~~int~~ | +| `mixed_precision` | Use mixed-precision training. ~~bool~~ | +| `grad_scaler_config` | Configuration passed to the PyTorch gradient scaler. ~~dict~~ | +| **CREATES** | The model using the architecture ~~Model~~ | + +### spacy-curated-transformers.XlmrTransformer.v1 + +Construct a XLM-RoBERTa transformer model. + +| Name | Description | +| ------------------------------ | ---------------------------------------------------------------------------------------- | +| `vocab_size` | Vocabulary size. ~~int~~ | +| `with_spans` | Callback that constructs a span generator model. ~~Callable~~ | +| `piece_encoder` | The piece encoder to segment input tokens. ~~Model~~ | +| `attention_probs_dropout_prob` | Dropout probability of the self-attention layers. ~~float~~ | +| `hidden_act` | Activation used by the point-wise feed-forward layers. ~~str~~ | +| `hidden_dropout_prob` | Dropout probability of the point-wise feed-forward and embedding layers. ~~float~~ | +| `hidden_width` | Width of the final representations. ~~int~~ | +| `intermediate_width` | Width of the intermediate projection layer in the point-wise feed-forward layer. ~~int~~ | +| `layer_norm_eps` | Epsilon for layer normalization. ~~float~~ | +| `max_position_embeddings` | Maximum length of position embeddings. ~~int~~ | +| `model_max_length` | Maximum length of model inputs. ~~int~~ | +| `num_attention_heads` | Number of self-attention heads. ~~int~~ | +| `num_hidden_layers` | Number of hidden layers. ~~int~~ | +| `padding_idx` | Index of the padding meta-token. ~~int~~ | +| `type_vocab_size` | Type vocabulary size. ~~int~~ | +| `mixed_precision` | Use mixed-precision training. ~~bool~~ | +| `grad_scaler_config` | Configuration passed to the PyTorch gradient scaler. 
~~dict~~ | +| **CREATES** | The model using the architecture ~~Model~~ | + +### spacy-curated-transformers.ScalarWeight.v1 + +Construct a model that accepts a list of transformer layer outputs and returns a +weighted representation of the same. + +| Name | Description | +| -------------------- | ----------------------------------------------------------------------------- | +| `num_layers` | Number of transformer hidden layers. ~~int~~ | +| `dropout_prob` | Dropout probability. ~~float~~ | +| `mixed_precision` | Use mixed-precision training. ~~bool~~ | +| `grad_scaler_config` | Configuration passed to the PyTorch gradient scaler. ~~dict~~ | +| **CREATES** | The model using the architecture ~~Model[ScalarWeightInT, ScalarWeightOutT]~~ | + +### spacy-curated-transformers.TransformerLayersListener.v1 + +Construct a listener layer that communicates with one or more upstream +Transformer components. This layer extracts the output of the last transformer +layer and performs pooling over the individual pieces of each `Doc` token, +returning their corresponding representations. The upstream name should either +be the wildcard string '\*', or the name of the Transformer component. + +In almost all cases, the wildcard string will suffice as there'll only be one +upstream Transformer component. But in certain situations, e.g: you have +disjoint datasets for certain tasks, or you'd like to use a pre-trained pipeline +but a downstream task requires its own token representations, you could end up +with more than one Transformer component in the pipeline. + +| Name | Description | +| --------------- | ---------------------------------------------------------------------------------------------------------------------- | +| `layers` | The number of layers produced by the upstream transformer component, excluding the embedding layer. ~~int~~ | +| `width` | The width of the vectors produced by the upstream transformer component. ~~int~~ | +| `pooling` | Model that is used to perform pooling over the piece representations. ~~Model~~ | +| `upstream_name` | A string to identify the 'upstream' Transformer component to communicate with. ~~str~~ | +| `grad_factor` | Factor to multiply gradients with. ~~float~~ | +| **CREATES** | A model that returns the relevant vectors from an upstream transformer component. ~~Model[List[Doc], List[Floats2d]]~~ | + +### spacy-curated-transformers.LastTransformerLayerListener.v1 + +Construct a listener layer that communicates with one or more upstream +Transformer components. This layer extracts the output of the last transformer +layer and performs pooling over the individual pieces of each Doc token, +returning their corresponding representations. The upstream name should either +be the wildcard string '\*', or the name of the Transformer component. + +In almost all cases, the wildcard string will suffice as there'll only be one +upstream Transformer component. But in certain situations, e.g: you have +disjoint datasets for certain tasks, or you'd like to use a pre-trained pipeline +but a downstream task requires its own token representations, you could end up +with more than one Transformer component in the pipeline. + +| Name | Description | +| --------------- | ---------------------------------------------------------------------------------------------------------------------- | +| `width` | The width of the vectors produced by the upstream transformer component. ~~int~~ | +| `pooling` | Model that is used to perform pooling over the piece representations. 
~~Model~~ | +| `upstream_name` | A string to identify the 'upstream' Transformer component to communicate with. ~~str~~ | +| `grad_factor` | Factor to multiply gradients with. ~~float~~ | +| **CREATES** | A model that returns the relevant vectors from an upstream transformer component. ~~Model[List[Doc], List[Floats2d]]~~ | + +### spacy-curated-transformers.ScalarWeightingListener.v1 + +Construct a listener layer that communicates with one or more upstream +Transformer components. This layer calculates a weighted representation of all +transformer layer outputs and performs pooling over the individual pieces of +each Doc token, returning their corresponding representations. + +Requires its upstream Transformer components to return all layer outputs from +their models. The upstream name should either be the wildcard string '\*', or +the name of the Transformer component. + +In almost all cases, the wildcard string will suffice as there'll only be one +upstream Transformer component. But in certain situations, e.g: you have +disjoint datasets for certain tasks, or you'd like to use a pre-trained pipeline +but a downstream task requires its own token representations, you could end up +with more than one Transformer component in the pipeline. + +| Name | Description | +| --------------- | ---------------------------------------------------------------------------------------------------------------------- | +| `width` | The width of the vectors produced by the upstream transformer component. ~~int~~ | +| `weighting` | Model that is used to perform the weighting of the different layer outputs. ~~Model~~ | +| `pooling` | Model that is used to perform pooling over the piece representations. ~~Model~~ | +| `upstream_name` | A string to identify the 'upstream' Transformer component to communicate with. ~~str~~ | +| `grad_factor` | Factor to multiply gradients with. ~~float~~ | +| **CREATES** | A model that returns the relevant vectors from an upstream transformer component. ~~Model[List[Doc], List[Floats2d]]~~ | + +### spacy-curated-transformers.BertWordpieceEncoder.v1 + +Construct a WordPiece piece encoder model that accepts a list of token sequences +or documents and returns a corresponding list of piece identifiers. This encoder +also splits each token on punctuation characters, as expected by most BERT +models. + +This model must be separately initialized using an appropriate loader. + +### spacy-curated-transformers.ByteBpeEncoder.v1 + +Construct a Byte-BPE piece encoder model that accepts a list of token sequences +or documents and returns a corresponding list of piece identifiers. + +This model must be separately initialized using an appropriate loader. + +### spacy-curated-transformers.CamembertSentencepieceEncoder.v1 + +Construct a SentencePiece piece encoder model that accepts a list of token +sequences or documents and returns a corresponding list of piece identifiers +with CamemBERT post-processing applied. + +This model must be separately initialized using an appropriate loader. + +### spacy-curated-transformers.CharEncoder.v1 + +Construct a character piece encoder model that accepts a list of token sequences +or documents and returns a corresponding list of piece identifiers. + +This model must be separately initialized using an appropriate loader. + +### spacy-curated-transformers.SentencepieceEncoder.v1 + +Construct a SentencePiece piece encoder model that accepts a list of token +sequences or documents and returns a corresponding list of piece identifiers. 
+ +This model must be separately initialized using an appropriate loader. + +### spacy-curated-transformers.WordpieceEncoder.v1 + +Construct a WordPiece piece encoder model that accepts a list of token sequences +or documents and returns a corresponding list of piece identifiers. This encoder +also splits each token on punctuation characters, as expected by most BERT +models. + +This model must be separately initialized using an appropriate loader. + +### spacy-curated-transformers.XlmrSentencepieceEncoder.v1 + +Construct a SentencePiece piece encoder model that accepts a list of token +sequences or documents and returns a corresponding list of piece identifiers +with XLM-RoBERTa post-processing applied. + +This model must be separately initialized using an appropriate loader. + ## Pretraining architectures {id="pretrain",source="spacy/ml/models/multi_task.py"} The spacy `pretrain` command lets you initialize a `Tok2Vec` layer in your @@ -682,8 +962,9 @@ single-label use-cases where `exclusive_classes = true`, while the > nO = null > > [model.linear_model] -> @architectures = "spacy.TextCatBOW.v2" +> @architectures = "spacy.TextCatBOW.v3" > exclusive_classes = true +> length = 262144 > ngram_size = 1 > no_output_layer = false > @@ -737,54 +1018,15 @@ but used an internal `tok2vec` instead of taking it as argument: -### spacy.TextCatCNN.v2 {id="TextCatCNN"} +### spacy.TextCatBOW.v3 {id="TextCatBOW"} > #### Example Config > > ```ini > [model] -> @architectures = "spacy.TextCatCNN.v2" -> exclusive_classes = false -> nO = null -> -> [model.tok2vec] -> @architectures = "spacy.HashEmbedCNN.v2" -> pretrained_vectors = null -> width = 96 -> depth = 4 -> embed_size = 2000 -> window_size = 1 -> maxout_pieces = 3 -> subword_features = true -> ``` - -A neural network model where token vectors are calculated using a CNN. The -vectors are mean pooled and used as features in a feed-forward network. This -architecture is usually less accurate than the ensemble, but runs faster. - -| Name | Description | -| ------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `exclusive_classes` | Whether or not categories are mutually exclusive. ~~bool~~ | -| `tok2vec` | The [`tok2vec`](#tok2vec) layer of the model. ~~Model~~ | -| `nO` | Output dimension, determined by the number of different labels. If not set, the [`TextCategorizer`](/api/textcategorizer) component will set it when `initialize` is called. ~~Optional[int]~~ | -| **CREATES** | The model using the architecture. ~~Model[List[Doc], Floats2d]~~ | - - - -[TextCatCNN.v1](/api/legacy#TextCatCNN_v1) had the exact same signature, but was -not yet resizable. Since v2, new labels can be added to this component, even -after training. - - - -### spacy.TextCatBOW.v2 {id="TextCatBOW"} - -> #### Example Config -> -> ```ini -> [model] -> @architectures = "spacy.TextCatBOW.v2" +> @architectures = "spacy.TextCatBOW.v3" > exclusive_classes = false +> length = 262144 > ngram_size = 1 > no_output_layer = false > nO = null @@ -798,17 +1040,108 @@ the others, but may not be as accurate, especially if texts are short. | `exclusive_classes` | Whether or not categories are mutually exclusive. ~~bool~~ | | `ngram_size` | Determines the maximum length of the n-grams in the BOW model. For instance, `ngram_size=3` would give unigram, trigram and bigram features. 
~~int~~ | | `no_output_layer` | Whether or not to add an output layer to the model (`Softmax` activation if `exclusive_classes` is `True`, else `Logistic`). ~~bool~~ | +| `length` | The size of the weights vector. The length will be rounded up to the next power of two if it is not a power of two. Defaults to `262144`. ~~int~~ | | `nO` | Output dimension, determined by the number of different labels. If not set, the [`TextCategorizer`](/api/textcategorizer) component will set it when `initialize` is called. ~~Optional[int]~~ | | **CREATES** | The model using the architecture. ~~Model[List[Doc], Floats2d]~~ | - + -[TextCatBOW.v1](/api/legacy#TextCatBOW_v1) had the exact same signature, but was -not yet resizable. Since v2, new labels can be added to this component, even -after training. +- [TextCatBOW.v1](/api/legacy#TextCatBOW_v1) was not yet resizable. Since v2, + new labels can be added to this component, even after training. +- [TextCatBOW.v1](/api/legacy#TextCatBOW_v1) and + [TextCatBOW.v2](/api/legacy#TextCatBOW_v2) used an erroneous sparse linear + layer that only used a small number of the allocated parameters. +- [TextCatBOW.v1](/api/legacy#TextCatBOW_v1) and + [TextCatBOW.v2](/api/legacy#TextCatBOW_v2) did not have the `length` argument. +### spacy.TextCatParametricAttention.v1 {id="TextCatParametricAttention"} + +> #### Example Config +> +> ```ini +> [model] +> @architectures = "spacy.TextCatParametricAttention.v1" +> exclusive_classes = true +> nO = null +> +> [model.tok2vec] +> @architectures = "spacy.Tok2Vec.v2" +> +> [model.tok2vec.embed] +> @architectures = "spacy.MultiHashEmbed.v2" +> width = 64 +> rows = [2000, 2000, 1000, 1000, 1000, 1000] +> attrs = ["ORTH", "LOWER", "PREFIX", "SUFFIX", "SHAPE", "ID"] +> include_static_vectors = false +> +> [model.tok2vec.encode] +> @architectures = "spacy.MaxoutWindowEncoder.v2" +> width = ${model.tok2vec.embed.width} +> window_size = 1 +> maxout_pieces = 3 +> depth = 2 +> ``` + +A neural network model that is built upon Tok2Vec and uses parametric attention +to attend to tokens that are relevant to text classification. + +| Name | Description | +| ------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `tok2vec` | The `tok2vec` layer to build the neural network upon. ~~Model[List[Doc], List[Floats2d]]~~ | +| `exclusive_classes` | Whether or not categories are mutually exclusive. ~~bool~~ | +| `nO` | Output dimension, determined by the number of different labels. If not set, the [`TextCategorizer`](/api/textcategorizer) component will set it when `initialize` is called. ~~Optional[int]~~ | +| **CREATES** | The model using the architecture. ~~Model[List[Doc], Floats2d]~~ | + +### spacy.TextCatReduce.v1 {id="TextCatReduce"} + +> #### Example Config +> +> ```ini +> [model] +> @architectures = "spacy.TextCatReduce.v1" +> exclusive_classes = false +> use_reduce_first = false +> use_reduce_last = false +> use_reduce_max = false +> use_reduce_mean = true +> nO = null +> +> [model.tok2vec] +> @architectures = "spacy.HashEmbedCNN.v2" +> pretrained_vectors = null +> width = 96 +> depth = 4 +> embed_size = 2000 +> window_size = 1 +> maxout_pieces = 3 +> subword_features = true +> ``` + +A classifier that pools token hidden representations of each `Doc` using first, +max or mean reduction and then applies a classification layer. 
Reductions are +concatenated when multiple reductions are used. + + + +`TextCatReduce` is a generalization of the older +[`TextCatCNN`](/api/legacy#TextCatCNN_v2) model. `TextCatCNN` always uses a mean +reduction, whereas `TextCatReduce` also supports first/max reductions. + + + +| Name | Description | +| ------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `exclusive_classes` | Whether or not categories are mutually exclusive. ~~bool~~ | +| `tok2vec` | The [`tok2vec`](#tok2vec) layer of the model. ~~Model~~ | +| `use_reduce_first` | Pool by using the hidden representation of the first token of a `Doc`. ~~bool~~ | +| `use_reduce_last` | Pool by using the hidden representation of the last token of a `Doc`. ~~bool~~ | +| `use_reduce_max` | Pool by taking the maximum values of the hidden representations of a `Doc`. ~~bool~~ | +| `use_reduce_mean` | Pool by taking the mean of all hidden representations of a `Doc`. ~~bool~~ | +| `nO` | Output dimension, determined by the number of different labels. If not set, the [`TextCategorizer`](/api/textcategorizer) component will set it when `initialize` is called. ~~Optional[int]~~ | +| **CREATES** | The model using the architecture. ~~Model[List[Doc], Floats2d]~~ | + ## Span classification architectures {id="spancat",source="spacy/ml/models/spancat.py"} ### spacy.SpanCategorizer.v1 {id="SpanCategorizer"} diff --git a/website/docs/api/basevectors.mdx b/website/docs/api/basevectors.mdx new file mode 100644 index 000000000..993b9a33e --- /dev/null +++ b/website/docs/api/basevectors.mdx @@ -0,0 +1,143 @@ +--- +title: BaseVectors +teaser: Abstract class for word vectors +tag: class +source: spacy/vectors.pyx +version: 3.7 +--- + +`BaseVectors` is an abstract class to support the development of custom vectors +implementations. + +For use in training with [`StaticVectors`](/api/architectures#staticvectors), +`get_batch` must be implemented. For improved performance, use efficient +batching in `get_batch` and implement `to_ops` to copy the vector data to the +current device. See an example custom implementation for +[BPEmb subword embeddings](/usage/embeddings-transformers#custom-vectors). + +## BaseVectors.\_\_init\_\_ {id="init",tag="method"} + +Create a new vector store. + +| Name | Description | +| -------------- | --------------------------------------------------------------------------------------------------------------------- | +| _keyword-only_ | | +| `strings` | The string store. A new string store is created if one is not provided. Defaults to `None`. ~~Optional[StringStore]~~ | + +## BaseVectors.\_\_getitem\_\_ {id="getitem",tag="method"} + +Get a vector by key. If the key is not found in the table, a `KeyError` should +be raised. + +| Name | Description | +| ----------- | ---------------------------------------------------------------- | +| `key` | The key to get the vector for. ~~Union[int, str]~~ | +| **RETURNS** | The vector for the key. ~~numpy.ndarray[ndim=1, dtype=float32]~~ | + +## BaseVectors.\_\_len\_\_ {id="len",tag="method"} + +Return the number of vectors in the table. + +| Name | Description | +| ----------- | ------------------------------------------- | +| **RETURNS** | The number of vectors in the table. ~~int~~ | + +## BaseVectors.\_\_contains\_\_ {id="contains",tag="method"} + +Check whether there is a vector entry for the given key. 
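As a quick illustration of how these lookups fit together with the default `Vectors` implementation of this interface (a sketch only; the pipeline and the word being looked up are placeholders):

```python
import spacy

# Any pipeline that ships with a static vector table will do here.
nlp = spacy.load("en_core_web_md")
vectors = nlp.vocab.vectors

key = nlp.vocab.strings["apple"]       # string -> hash key
if key in vectors:                     # BaseVectors.__contains__
    vector = vectors[key]              # BaseVectors.__getitem__
    print(vector.shape, len(vectors))  # vector shape and BaseVectors.__len__
```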
+ +| Name | Description | +| ----------- | -------------------------------------------- | +| `key` | The key to check. ~~int~~ | +| **RETURNS** | Whether the key has a vector entry. ~~bool~~ | + +## BaseVectors.add {id="add",tag="method"} + +Add a key to the table, if possible. If no keys can be added, return `-1`. + +| Name | Description | +| ----------- | ----------------------------------------------------------------------------------- | +| `key` | The key to add. ~~Union[str, int]~~ | +| **RETURNS** | The row the vector was added to, or `-1` if the operation is not supported. ~~int~~ | + +## BaseVectors.shape {id="shape",tag="property"} + +Get `(rows, dims)` tuples of number of rows and number of dimensions in the +vector table. + +| Name | Description | +| ----------- | ------------------------------------------ | +| **RETURNS** | A `(rows, dims)` pair. ~~Tuple[int, int]~~ | + +## BaseVectors.size {id="size",tag="property"} + +The vector size, i.e. `rows * dims`. + +| Name | Description | +| ----------- | ------------------------ | +| **RETURNS** | The vector size. ~~int~~ | + +## BaseVectors.is_full {id="is_full",tag="property"} + +Whether the vectors table is full and no slots are available for new keys. + +| Name | Description | +| ----------- | ------------------------------------------- | +| **RETURNS** | Whether the vectors table is full. ~~bool~~ | + +## BaseVectors.get_batch {id="get_batch",tag="method",version="3.2"} + +Get the vectors for the provided keys efficiently as a batch. Required to use +the vectors with [`StaticVectors`](/api/architectures#StaticVectors) for +training. + +| Name | Description | +| ------ | --------------------------------------- | +| `keys` | The keys. ~~Iterable[Union[int, str]]~~ | + +## BaseVectors.to_ops {id="to_ops",tag="method"} + +Dummy method. Implement this to change the embedding matrix to use different +Thinc ops. + +| Name | Description | +| ----- | -------------------------------------------------------- | +| `ops` | The Thinc ops to switch the embedding matrix to. ~~Ops~~ | + +## BaseVectors.to_disk {id="to_disk",tag="method"} + +Dummy method to allow serialization. Implement to save vector data with the +pipeline. + +| Name | Description | +| ------ | ------------------------------------------------------------------------------------------------------------------------------------------ | +| `path` | A path to a directory, which will be created if it doesn't exist. Paths may be either strings or `Path`-like objects. ~~Union[str, Path]~~ | + +## BaseVectors.from_disk {id="from_disk",tag="method"} + +Dummy method to allow serialization. Implement to load vector data from a saved +pipeline. + +| Name | Description | +| ----------- | ----------------------------------------------------------------------------------------------- | +| `path` | A path to a directory. Paths may be either strings or `Path`-like objects. ~~Union[str, Path]~~ | +| **RETURNS** | The modified vectors object. ~~BaseVectors~~ | + +## BaseVectors.to_bytes {id="to_bytes",tag="method"} + +Dummy method to allow serialization. Implement to serialize vector data to a +binary string. + +| Name | Description | +| ----------- | ---------------------------------------------------- | +| **RETURNS** | The serialized form of the vectors object. ~~bytes~~ | + +## BaseVectors.from_bytes {id="from_bytes",tag="method"} + +Dummy method to allow serialization. Implement to load vector data from a binary +string. 
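Putting the pieces of this interface together, below is a minimal sketch of a custom implementation backed by a fixed in-memory array. It is illustrative only: the `ArrayVectors` name, the read-only behaviour and the data layout are assumptions rather than part of spaCy, and a production implementation (such as the BPEmb example in the usage documentation) would typically also implement the serialization hooks described above and register itself so it can be referenced from a training config.

```python
from typing import Iterable, Optional, Union

import numpy
from spacy.strings import StringStore, get_string_id
from spacy.vectors import BaseVectors
from thinc.api import Ops


class ArrayVectors(BaseVectors):
    """Read-only vector store backed by a fixed numpy array (sketch)."""

    def __init__(
        self,
        *,
        strings: Optional[StringStore] = None,
        data: numpy.ndarray,
        keys: Iterable[str],
    ):
        # A real implementation may also need to handle the shared string
        # store / base-class initialization, depending on the spaCy version.
        self.data = data
        self.key2row = {get_string_id(key): i for i, key in enumerate(keys)}

    def __contains__(self, key: int) -> bool:
        return key in self.key2row

    def __getitem__(self, key: Union[int, str]) -> numpy.ndarray:
        row = self.key2row.get(get_string_id(key))
        if row is None:
            raise KeyError(key)
        return self.data[row]

    def __len__(self) -> int:
        return self.data.shape[0]

    def add(self, key: Union[str, int]) -> int:
        return -1  # fixed table: new keys cannot be added

    @property
    def shape(self):
        return self.data.shape

    @property
    def size(self) -> int:
        return self.data.shape[0] * self.data.shape[1]

    @property
    def is_full(self) -> bool:
        return True

    def get_batch(self, keys: Iterable[Union[int, str]]) -> numpy.ndarray:
        # Required for training with StaticVectors; unknown keys map to zeros.
        keys = list(keys)
        batch = numpy.zeros((len(keys), self.data.shape[1]), dtype="float32")
        for i, key in enumerate(keys):
            row = self.key2row.get(get_string_id(key))
            if row is not None:
                batch[i] = self.data[row]
        return batch

    def to_ops(self, ops: Ops) -> None:
        # Copy the embedding matrix to the array type of the given ops
        # (e.g. CuPy on GPU) for faster batching during training.
        self.data = ops.asarray(self.data)
```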
+ +| Name | Description | +| ----------- | ----------------------------------- | +| `data` | The data to load from. ~~bytes~~ | +| **RETURNS** | The vectors object. ~~BaseVectors~~ | diff --git a/website/docs/api/cli.mdx b/website/docs/api/cli.mdx index af910c885..973053fce 100644 --- a/website/docs/api/cli.mdx +++ b/website/docs/api/cli.mdx @@ -7,6 +7,7 @@ menu: - ['info', 'info'] - ['validate', 'validate'] - ['init', 'init'] + - ['find-function', 'find-function'] - ['convert', 'convert'] - ['debug', 'debug'] - ['train', 'train'] @@ -185,6 +186,29 @@ $ python -m spacy init fill-config [base_path] [output_file] [--diff] | `--help`, `-h` | Show help message and available arguments. ~~bool (flag)~~ | | **CREATES** | Complete and auto-filled config file for training. | +### init fill-curated-transformer {id="init-fill-curated-transformer",version="3.7",tag="command"} + +Auto-fill the Hugging Face model hyperpameters and loader parameters of a +[Curated Transformer](/api/curatedtransformer) pipeline component in a +[.cfg file](/usage/training#config). The name and revision of the +[Hugging Face model](https://huggingface.co/models) can either be passed as +command-line arguments or read from the +`initialize.components.transformer.encoder_loader` config section. + +```bash +$ python -m spacy init fill-curated-transformer [base_path] [output_file] [--model-name] [--model-revision] [--pipe-name] [--code] +``` + +| Name | Description | +| ------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `base_path` | Path to base config to fill, e.g. generated by the [quickstart widget](/usage/training#quickstart). ~~Path (positional)~~ | +| `output_file` | Path to output `.cfg` file or "-" to write to stdout so you can pipe it to a file. Defaults to "-" (stdout). ~~Path (positional)~~ | +| `--model-name`, `-m` | Name of the Hugging Face model. Defaults to the model name from the encoder loader config. ~~Optional[str] (option)~~ | +| `--model-revision`, `-r` | Revision of the Hugging Face model. Defaults to `main`. ~~Optional[str] (option)~~ | +| `--pipe-name`, `-n` | Name of the Curated Transformer pipe whose config is to be filled. Defaults to the first transformer pipe. ~~Optional[str] (option)~~ | +| `--code`, `-c` | Path to Python file with additional code to be imported. Allows [registering custom functions](/usage/training#custom-functions) for new architectures. ~~Optional[Path] \(option)~~ | +| **CREATES** | Complete and auto-filled config file for training. | + ### init vectors {id="init-vectors",version="3",tag="command"} Convert [word vectors](/usage/linguistic-features#vectors-similarity) for use @@ -249,6 +273,27 @@ $ python -m spacy init labels [config_path] [output_path] [--code] [--verbose] [ | overrides | Config parameters to override. Should be options starting with `--` that correspond to the config section and value to override, e.g. `--paths.train ./train.spacy`. ~~Any (option/flag)~~ | | **CREATES** | The label files. | +## find-function {id="find-function",version="3.7",tag="command"} + +Find the module, path and line number to the file for a given registered +function. This functionality is helpful to understand where registered +functions, as used in the config file, are defined. 
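Conceptually, the command resolves the name in spaCy's function registries and reports where the resulting function lives. The snippet below is only a rough sketch of that idea for an `architectures` entry, not the command's actual implementation:

```python
import inspect

import spacy

# Look up a registered architecture by name and report where it is defined.
func = spacy.registry.architectures.get("spacy.TextCatBOW.v1")
print(inspect.getsourcefile(func), inspect.getsourcelines(func)[1])
```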
+ +```bash +$ python -m spacy find-function [func_name] [--registry] +``` + +> #### Example +> +> ```bash +> $ python -m spacy find-function spacy.TextCatBOW.v1 +> ``` + +| Name | Description | +| ------------------ | ----------------------------------------------------- | +| `func_name` | Name of the registered function. ~~str (positional)~~ | +| `--registry`, `-r` | Name of the catalogue registry. ~~str (option)~~ | + ## convert {id="convert",tag="command"} Convert files into spaCy's @@ -1017,6 +1062,42 @@ $ python -m spacy debug model ./config.cfg tagger -l "5,15" -DIM -PAR -P0 -P1 -P | overrides | Config parameters to override. Should be options starting with `--` that correspond to the config section and value to override, e.g. `--paths.train ./train.spacy`. ~~Any (option/flag)~~ | | **PRINTS** | Debugging information. | +### debug pieces {id="debug-pieces",version="3.7",tag="command"} + +Analyze word- or sentencepiece stats. + +```bash +$ python -m spacy debug pieces [config_path] [--code] [--name] [overrides] +``` + +| Name | Description | +| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `config_path` | Path to config file. ~~Union[Path, str] (positional)~~ | +| `--code`, `-c` | Path to Python file with additional code to be imported. Allows [registering custom functions](/usage/training#custom-functions) for new architectures. ~~Optional[Path] \(option)~~ | +| `--name`, `-n` | Name of the Curated Transformer pipe whose config is to be filled. Defaults to the first transformer pipe. ~~Optional[str] (option)~~ | +| overrides | Config parameters to override. Should be options starting with `--` that correspond to the config section and value to override, e.g. `--paths.train ./train.spacy`. ~~Any (option/flag)~~ | +| **PRINTS** | Debugging information. | + + + +```bash +$ python -m spacy debug pieces ./config.cfg +``` + +``` +========================= Training corpus statistics ========================= +Median token length: 1.0 +Mean token length: 1.54 +Token length range: [1, 13] + +======================= Development corpus statistics ======================= +Median token length: 1.0 +Mean token length: 1.44 +Token length range: [1, 8] +``` + + + ## train {id="train",tag="command"} Train a pipeline. Expects data in spaCy's @@ -1160,7 +1241,7 @@ skew. To render a sample of dependency parses in a HTML file using the `--displacy-path` argument. ```bash -$ python -m spacy benchmark accuracy [model] [data_path] [--output] [--code] [--gold-preproc] [--gpu-id] [--displacy-path] [--displacy-limit] +$ python -m spacy benchmark accuracy [model] [data_path] [--output] [--code] [--gold-preproc] [--gpu-id] [--displacy-path] [--displacy-limit] [--per-component] [--spans-key] ``` | Name | Description | @@ -1174,6 +1255,7 @@ $ python -m spacy benchmark accuracy [model] [data_path] [--output] [--code] [-- | `--displacy-path`, `-dp` | Directory to output rendered parses as HTML. If not set, no visualizations will be generated. ~~Optional[Path] \(option)~~ | | `--displacy-limit`, `-dl` | Number of parses to generate per file. Defaults to `25`. Keep in mind that a significantly higher number might cause the `.html` files to render slowly. ~~int (option)~~ | | `--per-component`, `-P` 3.6 | Whether to return the scores keyed by component name. Defaults to `False`. 
~~bool (flag)~~ | +| `--spans-key`, `-sk` 3.6.2 | Spans key to use when evaluating `Doc.spans`. Defaults to `sc`. ~~str (option)~~ | | `--help`, `-h` | Show help message and available arguments. ~~bool (flag)~~ | | **CREATES** | Training results and optional metrics and visualizations. | @@ -1461,9 +1543,9 @@ obsolete files is left up to you. Remotes can be defined in the `remotes` section of the [`project.yml`](/usage/projects#project-yml). Under the hood, spaCy uses -[`Pathy`](https://github.com/justindujardin/pathy) to communicate with the -remote storages, so you can use any protocol that `Pathy` supports, including -[S3](https://aws.amazon.com/s3/), +[`cloudpathlib`](https://cloudpathlib.drivendata.org) to communicate with the +remote storages, so you can use any protocol that `cloudpathlib` supports, +including [S3](https://aws.amazon.com/s3/), [Google Cloud Storage](https://cloud.google.com/storage), and the local filesystem, although you may need to install extra dependencies to use certain protocols. diff --git a/website/docs/api/curatedtransformer.mdx b/website/docs/api/curatedtransformer.mdx new file mode 100644 index 000000000..3e63ef7c2 --- /dev/null +++ b/website/docs/api/curatedtransformer.mdx @@ -0,0 +1,580 @@ +--- +title: CuratedTransformer +teaser: + Pipeline component for multi-task learning with Curated Transformer models +tag: class +source: github.com/explosion/spacy-curated-transformers/blob/main/spacy_curated_transformers/pipeline/transformer.py +version: 3.7 +api_base_class: /api/pipe +api_string_name: curated_transformer +--- + + + +This component is available via the extension package +[`spacy-curated-transformers`](https://github.com/explosion/spacy-curated-transformers). +It exposes the component via entry points, so if you have the package installed, +using `factory = "curated_transformer"` in your +[training config](/usage/training#config) will work out-of-the-box. + + + +This pipeline component lets you use a curated set of transformer models in your +pipeline. spaCy Curated Transformers currently supports the following model +types: + +- ALBERT +- BERT +- CamemBERT +- RoBERTa +- XLM-RoBERT + +If you want to use another type of model, use +[spacy-transformers](/api/spacy-transformers), which allows you to use all +Hugging Face transformer models with spaCy. + +You will usually connect downstream components to a shared Curated Transformer +pipe using one of the Curated Transformer listener layers. This works similarly +to spaCy's [Tok2Vec](/api/tok2vec), and the +[Tok2VecListener](/api/architectures/#Tok2VecListener) sublayer. The component +assigns the output of the transformer to the `Doc`'s extension attributes. To +access the values, you can use the custom +[`Doc._.trf_data`](#assigned-attributes) attribute. + +For more details, see the [usage documentation](/usage/embeddings-transformers). + +## Assigned Attributes {id="assigned-attributes"} + +The component sets the following +[custom extension attribute](/usage/processing-pipeline#custom-components-attributes): + +| Location | Value | +| ---------------- | -------------------------------------------------------------------------- | +| `Doc._.trf_data` | Curated Transformer outputs for the `Doc` object. ~~DocTransformerOutput~~ | + +## Config and Implementation {id="config"} + +The default config is defined by the pipeline component factory and describes +how the component should be configured. 
You can override its settings via the +`config` argument on [`nlp.add_pipe`](/api/language#add_pipe) or in your +[`config.cfg` for training](/usage/training#config). See the +[model architectures](/api/architectures#curated-trf) documentation for details +on the curated transformer architectures and their arguments and +hyperparameters. + +> #### Example +> +> ```python +> from spacy_curated_transformers.pipeline.transformer import DEFAULT_CONFIG +> +> nlp.add_pipe("curated_transformer", config=DEFAULT_CONFIG) +> ``` + +| Setting | Description | +| ------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `model` | The Thinc [`Model`](https://thinc.ai/docs/api-model) wrapping the transformer. Defaults to [`XlmrTransformer`](/api/architectures#curated-trf). ~~Model~~ | +| `frozen` | If `True`, the model's weights are frozen and no backpropagation is performed. ~~bool~~ | +| `all_layer_outputs` | If `True`, the model returns the outputs of all the layers. Otherwise, only the output of the last layer is returned. This must be set to `True` if any of the pipe's downstream listeners require the outputs of all transformer layers. ~~bool~~ | + +```python +https://github.com/explosion/spacy-curated-transformers/blob/main/spacy_curated_transformers/pipeline/transformer.py +``` + +## CuratedTransformer.\_\_init\_\_ {id="init",tag="method"} + +> #### Example +> +> ```python +> # Construction via add_pipe with default model +> trf = nlp.add_pipe("curated_transformer") +> +> # Construction via add_pipe with custom config +> config = { +> "model": { +> "@architectures": "spacy-curated-transformers.XlmrTransformer.v1", +> "vocab_size": 250002, +> "num_hidden_layers": 12, +> "hidden_width": 768, +> "piece_encoder": { +> "@architectures": "spacy-curated-transformers.XlmrSentencepieceEncoder.v1" +> } +> } +> } +> trf = nlp.add_pipe("curated_transformer", config=config) +> +> # Construction from class +> from spacy_curated_transformers import CuratedTransformer +> trf = CuratedTransformer(nlp.vocab, model) +> ``` + +Construct a `CuratedTransformer` component. One or more subsequent spaCy +components can use the transformer outputs as features in its model, with +gradients backpropagated to the single shared weights. The activations from the +transformer are saved in the [`Doc._.trf_data`](#assigned-attributes) extension +attribute. You can also provide a callback to set additional annotations. In +your application, you would normally use a shortcut for this and instantiate the +component using its string name and [`nlp.add_pipe`](/api/language#create_pipe). + +| Name | Description | +| ------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `vocab` | The shared vocabulary. ~~Vocab~~ | +| `model` | One of the supported pre-trained transformer models. ~~Model~~ | +| _keyword-only_ | | +| `name` | The component instance name. ~~str~~ | +| `frozen` | If `True`, the model's weights are frozen and no backpropagation is performed. ~~bool~~ | +| `all_layer_outputs` | If `True`, the model returns the outputs of all the layers. Otherwise, only the output of the last layer is returned. 
This must be set to `True` if any of the pipe's downstream listeners require the outputs of all transformer layers. ~~bool~~ | + +## CuratedTransformer.\_\_call\_\_ {id="call",tag="method"} + +Apply the pipe to one document. The document is modified in place, and returned. +This usually happens under the hood when the `nlp` object is called on a text +and all pipeline components are applied to the `Doc` in order. Both +[`__call__`](/api/curatedtransformer#call) and +[`pipe`](/api/curatedtransformer#pipe) delegate to the +[`predict`](/api/curatedtransformer#predict) and +[`set_annotations`](/api/curatedtransformer#set_annotations) methods. + +> #### Example +> +> ```python +> doc = nlp("This is a sentence.") +> trf = nlp.add_pipe("curated_transformer") +> # This usually happens under the hood +> processed = trf(doc) +> ``` + +| Name | Description | +| ----------- | -------------------------------- | +| `doc` | The document to process. ~~Doc~~ | +| **RETURNS** | The processed document. ~~Doc~~ | + +## CuratedTransformer.pipe {id="pipe",tag="method"} + +Apply the pipe to a stream of documents. This usually happens under the hood +when the `nlp` object is called on a text and all pipeline components are +applied to the `Doc` in order. Both [`__call__`](/api/curatedtransformer#call) +and [`pipe`](/api/curatedtransformer#pipe) delegate to the +[`predict`](/api/curatedtransformer#predict) and +[`set_annotations`](/api/curatedtransformer#set_annotations) methods. + +> #### Example +> +> ```python +> trf = nlp.add_pipe("curated_transformer") +> for doc in trf.pipe(docs, batch_size=50): +> pass +> ``` + +| Name | Description | +| -------------- | ------------------------------------------------------------- | +| `stream` | A stream of documents. ~~Iterable[Doc]~~ | +| _keyword-only_ | | +| `batch_size` | The number of documents to buffer. Defaults to `128`. ~~int~~ | +| **YIELDS** | The processed documents in order. ~~Doc~~ | + +## CuratedTransformer.initialize {id="initialize",tag="method"} + +Initialize the component for training and return an +[`Optimizer`](https://thinc.ai/docs/api-optimizers). `get_examples` should be a +function that returns an iterable of [`Example`](/api/example) objects. **At +least one example should be supplied.** The data examples are used to +**initialize the model** of the component and can either be the full training +data or a representative sample. Initialization includes validating the network, +[inferring missing shapes](https://thinc.ai/docs/usage-models#validation) and +setting up the label scheme based on the data. This method is typically called +by [`Language.initialize`](/api/language#initialize). + +> #### Example +> +> ```python +> trf = nlp.add_pipe("curated_transformer") +> trf.initialize(lambda: examples, nlp=nlp) +> ``` + +| Name | Description | +| ---------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `get_examples` | Function that returns gold-standard annotations in the form of [`Example`](/api/example) objects. Must contain at least one `Example`. ~~Callable[[], Iterable[Example]]~~ | +| _keyword-only_ | | +| `nlp` | The current `nlp` object. Defaults to `None`. ~~Optional[Language]~~ | +| `encoder_loader` | Initialization callback for the transformer model. ~~Optional[Callable]~~ | +| `piece_loader` | Initialization callback for the input piece encoder. 
~~Optional[Callable]~~ | + +## CuratedTransformer.predict {id="predict",tag="method"} + +Apply the component's model to a batch of [`Doc`](/api/doc) objects without +modifying them. + +> #### Example +> +> ```python +> trf = nlp.add_pipe("curated_transformer") +> scores = trf.predict([doc1, doc2]) +> ``` + +| Name | Description | +| ----------- | ------------------------------------------- | +| `docs` | The documents to predict. ~~Iterable[Doc]~~ | +| **RETURNS** | The model's prediction for each document. | + +## CuratedTransformer.set_annotations {id="set_annotations",tag="method"} + +Assign the extracted features to the `Doc` objects. By default, the +[`DocTransformerOutput`](/api/curatedtransformer#doctransformeroutput) object is +written to the [`Doc._.trf_data`](#assigned-attributes) attribute. Your +`set_extra_annotations` callback is then called, if provided. + +> #### Example +> +> ```python +> trf = nlp.add_pipe("curated_transformer") +> scores = trf.predict(docs) +> trf.set_annotations(docs, scores) +> ``` + +| Name | Description | +| -------- | ------------------------------------------------------------ | +| `docs` | The documents to modify. ~~Iterable[Doc]~~ | +| `scores` | The scores to set, produced by `CuratedTransformer.predict`. | + +## CuratedTransformer.update {id="update",tag="method"} + +Prepare for an update to the transformer. + +Like the [`Tok2Vec`](api/tok2vec) component, the `CuratedTransformer` component +is unusual in that it does not receive "gold standard" annotations to calculate +a weight update. The optimal output of the transformer data is unknown; it's a +hidden layer inside the network that is updated by backpropagating from output +layers. + +The `CuratedTransformer` component therefore does not perform a weight update +during its own `update` method. Instead, it runs its transformer model and +communicates the output and the backpropagation callback to any downstream +components that have been connected to it via the transformer listener sublayer. +If there are multiple listeners, the last layer will actually backprop to the +transformer and call the optimizer, while the others simply increment the +gradients. + +> #### Example +> +> ```python +> trf = nlp.add_pipe("curated_transformer") +> optimizer = nlp.initialize() +> losses = trf.update(examples, sgd=optimizer) +> ``` + +| Name | Description | +| -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `examples` | A batch of [`Example`](/api/example) objects. Only the [`Example.predicted`](/api/example#predicted) `Doc` object is used, the reference `Doc` is ignored. ~~Iterable[Example]~~ | +| _keyword-only_ | | +| `drop` | The dropout rate. ~~float~~ | +| `sgd` | An optimizer. Will be created via [`create_optimizer`](#create_optimizer) if not set. ~~Optional[Optimizer]~~ | +| `losses` | Optional record of the loss during training. Updated using the component name as the key. ~~Optional[Dict[str, float]]~~ | +| **RETURNS** | The updated `losses` dictionary. ~~Dict[str, float]~~ | + +## CuratedTransformer.create_optimizer {id="create_optimizer",tag="method"} + +Create an optimizer for the pipeline component. + +> #### Example +> +> ```python +> trf = nlp.add_pipe("curated_transformer") +> optimizer = trf.create_optimizer() +> ``` + +| Name | Description | +| ----------- | ---------------------------- | +| **RETURNS** | The optimizer. 
~~Optimizer~~ | + +## CuratedTransformer.use_params {id="use_params",tag="method, contextmanager"} + +Modify the pipe's model to use the given parameter values. At the end of the +context, the original parameters are restored. + +> #### Example +> +> ```python +> trf = nlp.add_pipe("curated_transformer") +> with trf.use_params(optimizer.averages): +> trf.to_disk("/best_model") +> ``` + +| Name | Description | +| -------- | -------------------------------------------------- | +| `params` | The parameter values to use in the model. ~~dict~~ | + +## CuratedTransformer.to_disk {id="to_disk",tag="method"} + +Serialize the pipe to disk. + +> #### Example +> +> ```python +> trf = nlp.add_pipe("curated_transformer") +> trf.to_disk("/path/to/transformer") +> ``` + +| Name | Description | +| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------ | +| `path` | A path to a directory, which will be created if it doesn't exist. Paths may be either strings or `Path`-like objects. ~~Union[str, Path]~~ | +| _keyword-only_ | | +| `exclude` | String names of [serialization fields](#serialization-fields) to exclude. ~~Iterable[str]~~ | + +## CuratedTransformer.from_disk {id="from_disk",tag="method"} + +Load the pipe from disk. Modifies the object in place and returns it. + +> #### Example +> +> ```python +> trf = nlp.add_pipe("curated_transformer") +> trf.from_disk("/path/to/transformer") +> ``` + +| Name | Description | +| -------------- | ----------------------------------------------------------------------------------------------- | +| `path` | A path to a directory. Paths may be either strings or `Path`-like objects. ~~Union[str, Path]~~ | +| _keyword-only_ | | +| `exclude` | String names of [serialization fields](#serialization-fields) to exclude. ~~Iterable[str]~~ | +| **RETURNS** | The modified `CuratedTransformer` object. ~~CuratedTransformer~~ | + +## CuratedTransformer.to_bytes {id="to_bytes",tag="method"} + +> #### Example +> +> ```python +> trf = nlp.add_pipe("curated_transformer") +> trf_bytes = trf.to_bytes() +> ``` + +Serialize the pipe to a bytestring. + +| Name | Description | +| -------------- | ------------------------------------------------------------------------------------------- | +| _keyword-only_ | | +| `exclude` | String names of [serialization fields](#serialization-fields) to exclude. ~~Iterable[str]~~ | +| **RETURNS** | The serialized form of the `CuratedTransformer` object. ~~bytes~~ | + +## CuratedTransformer.from_bytes {id="from_bytes",tag="method"} + +Load the pipe from a bytestring. Modifies the object in place and returns it. + +> #### Example +> +> ```python +> trf_bytes = trf.to_bytes() +> trf = nlp.add_pipe("curated_transformer") +> trf.from_bytes(trf_bytes) +> ``` + +| Name | Description | +| -------------- | ------------------------------------------------------------------------------------------- | +| `bytes_data` | The data to load from. ~~bytes~~ | +| _keyword-only_ | | +| `exclude` | String names of [serialization fields](#serialization-fields) to exclude. ~~Iterable[str]~~ | +| **RETURNS** | The `CuratedTransformer` object. ~~CuratedTransformer~~ | + +## Serialization Fields {id="serialization-fields"} + +During serialization, spaCy will export several data fields used to restore +different aspects of the object. If needed, you can exclude them from +serialization by passing in the string names via the `exclude` argument. 
+ +> #### Example +> +> ```python +> data = trf.to_disk("/path", exclude=["vocab"]) +> ``` + +| Name | Description | +| ------- | -------------------------------------------------------------- | +| `vocab` | The shared [`Vocab`](/api/vocab). | +| `cfg` | The config file. You usually don't want to exclude this. | +| `model` | The binary model data. You usually don't want to exclude this. | + +## DocTransformerOutput {id="doctransformeroutput",tag="dataclass"} + +Curated Transformer outputs for one `Doc` object. Stores the dense +representations generated by the transformer for each piece identifier. Piece +identifiers are grouped by token. Instances of this class are typically assigned +to the [`Doc._.trf_data`](/api/curatedtransformer#assigned-attributes) extension +attribute. + +> #### Example +> +> ```python +> # Get the last hidden layer output for "is" (token index 1) +> doc = nlp("This is a text.") +> tensors = doc._.trf_data.last_hidden_layer_state[1] +> ``` + +| Name | Description | +| ----------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `all_outputs` | List of `Ragged` tensors that correspends to outputs of the different transformer layers. Each tensor element corresponds to a piece identifier's representation. ~~List[Ragged]~~ | +| `last_layer_only` | If only the last transformer layer's outputs are preserved. ~~bool~~ | + +### DocTransformerOutput.embedding_layer {id="doctransformeroutput-embeddinglayer",tag="property"} + +Return the output of the transformer's embedding layer or `None` if +`last_layer_only` is `True`. + +| Name | Description | +| ----------- | -------------------------------------------- | +| **RETURNS** | Embedding layer output. ~~Optional[Ragged]~~ | + +### DocTransformerOutput.last_hidden_layer_state {id="doctransformeroutput-lasthiddenlayerstate",tag="property"} + +Return the output of the transformer's last hidden layer. + +| Name | Description | +| ----------- | ------------------------------------ | +| **RETURNS** | Last hidden layer output. ~~Ragged~~ | + +### DocTransformerOutput.all_hidden_layer_states {id="doctransformeroutput-allhiddenlayerstates",tag="property"} + +Return the outputs of all transformer layers (excluding the embedding layer). + +| Name | Description | +| ----------- | -------------------------------------- | +| **RETURNS** | Hidden layer outputs. ~~List[Ragged]~~ | + +### DocTransformerOutput.num_outputs {id="doctransformeroutput-numoutputs",tag="property"} + +Return the number of layer outputs stored in the `DocTransformerOutput` instance +(including the embedding layer). + +| Name | Description | +| ----------- | -------------------------- | +| **RETURNS** | Numbef of outputs. ~~int~~ | + +## Span Getters {id="span_getters",source="github.com/explosion/spacy-transformers/blob/master/spacy_curated_transformers/span_getters.py"} + +Span getters are functions that take a batch of [`Doc`](/api/doc) objects and +return a lists of [`Span`](/api/span) objects for each doc to be processed by +the transformer. This is used to manage long documents by cutting them into +smaller sequences before running the transformer. The spans are allowed to +overlap, and you can also omit sections of the `Doc` if they are not relevant. +Span getters can be referenced in the +`[components.transformer.model.with_spans]` block of the config to customize the +sequences processed by the transformer. 
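For instance, a span getter that feeds the transformer one sentence at a time could look roughly like the following. This is a sketch of the documented signature only; the function name is a placeholder, it assumes sentence boundaries are already set on the `Doc`, and the registration needed to reference it from the config is specific to `spacy-curated-transformers` and not shown here.

```python
from typing import Iterable, List

from spacy.tokens import Doc, Span


def sentence_spans(docs: Iterable[Doc]) -> List[List[Span]]:
    # One list of spans per Doc, so no sequence crosses a sentence boundary.
    return [list(doc.sents) for doc in docs]
```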
+ +| Name | Description | +| ----------- | ------------------------------------------------------------- | +| `docs` | A batch of `Doc` objects. ~~Iterable[Doc]~~ | +| **RETURNS** | The spans to process by the transformer. ~~List[List[Span]]~~ | + +### WithStridedSpans.v1 {id="strided_spans",tag="registered function"} + +> #### Example config +> +> ```ini +> [transformer.model.with_spans] +> @architectures = "spacy-curated-transformers.WithStridedSpans.v1" +> stride = 96 +> window = 128 +> ``` + +Create a span getter for strided spans. If you set the `window` and `stride` to +the same value, the spans will cover each token once. Setting `stride` lower +than `window` will allow for an overlap, so that some tokens are counted twice. +This can be desirable, because it allows all tokens to have both a left and +right context. + +| Name | Description | +| -------- | ------------------------ | +| `window` | The window size. ~~int~~ | +| `stride` | The stride size. ~~int~~ | + +## Model Loaders + +[Curated Transformer models](/api/architectures#curated-trf) are constructed +with default hyperparameters and randomized weights when the pipeline is +created. To load the weights of an existing pre-trained model into the pipeline, +one of the following loader callbacks can be used. The pre-trained model must +have the same hyperparameters as the model used by the pipeline. + +### HFTransformerEncoderLoader.v1 {id="hf_trfencoder_loader",tag="registered_function"} + +Construct a callback that initializes a supported transformer model with weights +from a corresponding HuggingFace model. + +| Name | Description | +| ---------- | ------------------------------------------ | +| `name` | Name of the HuggingFace model. ~~str~~ | +| `revision` | Name of the model revision/branch. ~~str~~ | + +### PyTorchCheckpointLoader.v1 {id="pytorch_checkpoint_loader",tag="registered_function"} + +Construct a callback that initializes a supported transformer model with weights +from a PyTorch checkpoint. + +| Name | Description | +| ------ | ---------------------------------------- | +| `path` | Path to the PyTorch checkpoint. ~~Path~~ | + +## Tokenizer Loaders + +[Curated Transformer models](/api/architectures#curated-trf) must be paired with +a matching tokenizer (piece encoder) model in a spaCy pipeline. As with the +transformer models, tokenizers are constructed with an empty vocabulary during +pipeline creation - They need to be initialized with an appropriate loader +before use in training/inference. + +### ByteBPELoader.v1 {id="bytebpe_loader",tag="registered_function"} + +Construct a callback that initializes a Byte-BPE piece encoder model. + +| Name | Description | +| ------------- | ------------------------------------- | +| `vocab_path` | Path to the vocabulary file. ~~Path~~ | +| `merges_path` | Path to the merges file. ~~Path~~ | + +### CharEncoderLoader.v1 {id="charencoder_loader",tag="registered_function"} + +Construct a callback that initializes a character piece encoder model. + +| Name | Description | +| ----------- | --------------------------------------------------------------------------- | +| `path` | Path to the serialized character model. ~~Path~~ | +| `bos_piece` | Piece used as a beginning-of-sentence token. Defaults to `"[BOS]"`. ~~str~~ | +| `eos_piece` | Piece used as a end-of-sentence token. Defaults to `"[EOS]"`. ~~str~~ | +| `unk_piece` | Piece used as a stand-in for unknown tokens. Defaults to `"[UNK]"`. ~~str~~ | +| `normalize` | Unicode normalization form to use. Defaults to `"NFKC"`. 
~~str~~ | + +### HFPieceEncoderLoader.v1 {id="hf_pieceencoder_loader",tag="registered_function"} + +Construct a callback that initializes a HuggingFace piece encoder model. Used in +conjunction with the HuggingFace model loader. + +| Name | Description | +| ---------- | ------------------------------------------ | +| `name` | Name of the HuggingFace model. ~~str~~ | +| `revision` | Name of the model revision/branch. ~~str~~ | + +### SentencepieceLoader.v1 {id="sentencepiece_loader",tag="registered_function"} + +Construct a callback that initializes a SentencePiece piece encoder model. + +| Name | Description | +| ------ | ---------------------------------------------------- | +| `path` | Path to the serialized SentencePiece model. ~~Path~~ | + +### WordpieceLoader.v1 {id="wordpiece_loader",tag="registered_function"} + +Construct a callback that initializes a WordPiece piece encoder model. + +| Name | Description | +| ------ | ------------------------------------------------ | +| `path` | Path to the serialized WordPiece model. ~~Path~~ | + +## Callbacks + +### gradual_transformer_unfreezing.v1 {id="gradual_transformer_unfreezing",tag="registered_function"} + +Construct a callback that can be used to gradually unfreeze the weights of one +or more Transformer components during training. This can be used to prevent +catastrophic forgetting during fine-tuning. + +| Name | Description | +| -------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `target_pipes` | A dictionary whose keys and values correspond to the names of Transformer components and the training step at which they should be unfrozen respectively. ~~Dict[str, int]~~ | diff --git a/website/docs/api/large-language-models.mdx b/website/docs/api/large-language-models.mdx new file mode 100644 index 000000000..5739a6c2f --- /dev/null +++ b/website/docs/api/large-language-models.mdx @@ -0,0 +1,1299 @@ +--- +title: Large Language Models +teaser: Integrating LLMs into structured NLP pipelines +menu: + - ['Config and implementation', 'config'] + - ['Tasks', 'tasks'] + - ['Models', 'models'] + - ['Cache', 'cache'] + - ['Various Functions', 'various-functions'] +--- + +[The spacy-llm package](https://github.com/explosion/spacy-llm) integrates Large +Language Models (LLMs) into spaCy, featuring a modular system for **fast +prototyping** and **prompting**, and turning unstructured responses into +**robust outputs** for various NLP tasks, **no training data** required. + +## Config and implementation {id="config"} + +> #### Example +> +> ```python +> # Construction via add_pipe with the default GPT 3.5 model and an explicitly defined task +> config = {"task": {"@llm_tasks": "spacy.NER.v3", "labels": ["PERSON", "ORGANISATION", "LOCATION"]}} +> llm = nlp.add_pipe("llm", config=config) +> +> # Construction via add_pipe with a task-specific factory and default GPT3.5 model +> llm = nlp.add_pipe("llm_ner") +> +> # Construction via add_pipe with a task-specific factory and custom model +> llm = nlp.add_pipe("llm_ner", config={"model": {"@llm_models": "spacy.Dolly.v1", "name": "dolly-v2-12b"}}) +> +> # Construction from class +> from spacy_llm.pipeline import LLMWrapper +> llm = LLMWrapper(vocab=nlp.vocab, task=task, model=model, cache=cache, save_io=True) +> ``` + +An LLM component is implemented through the `LLMWrapper` class. 
It is accessible +through a generic `llm` +[component factory](https://spacy.io/usage/processing-pipelines#custom-components-factories) +as well as through task-specific component factories: `llm_ner`, `llm_spancat`, +`llm_rel`, `llm_textcat`, `llm_sentiment` and `llm_summarization`. For these +factories, the GPT-3-5 model from OpenAI is used by default, but this can be +customized. + +### LLMWrapper.\_\_init\_\_ {id="init",tag="method"} + +Create a new pipeline instance. In your application, you would normally use a +shortcut for this and instantiate the component using its string name and +[`nlp.add_pipe`](/api/language#add_pipe). + +| Name | Description | +| -------------- | -------------------------------------------------------------------------------------------------- | +| `name` | String name of the component instance. `llm` by default. ~~str~~ | +| _keyword-only_ | | +| `vocab` | The shared vocabulary. ~~Vocab~~ | +| `task` | An [LLM Task](#tasks) can generate prompts and parse LLM responses. ~~LLMTask~~ | +| `model` | The [LLM Model](#models) queries a specific LLM API.. ~~Callable[[Iterable[Any]], Iterable[Any]]~~ | +| `cache` | [Cache](#cache) to use for caching prompts and responses per doc. ~~Cache~~ | +| `save_io` | Whether to save LLM I/O (prompts and responses) in the `Doc._.llm_io` custom attribute. ~~bool~~ | + +### LLMWrapper.\_\_call\_\_ {id="call",tag="method"} + +Apply the pipe to one document. The document is modified in place and returned. +This usually happens under the hood when the `nlp` object is called on a text +and all pipeline components are applied to the `Doc` in order. + +> #### Example +> +> ```python +> doc = nlp("Ingrid visited Paris.") +> llm_ner = nlp.add_pipe("llm_ner") +> # This usually happens under the hood +> processed = llm_ner(doc) +> ``` + +| Name | Description | +| ----------- | -------------------------------- | +| `doc` | The document to process. ~~Doc~~ | +| **RETURNS** | The processed document. ~~Doc~~ | + +### LLMWrapper.pipe {id="pipe",tag="method"} + +Apply the pipe to a stream of documents. This usually happens under the hood +when the `nlp` object is called on a text and all pipeline components are +applied to the `Doc` in order. + +> #### Example +> +> ```python +> llm_ner = nlp.add_pipe("llm_ner") +> for doc in llm_ner.pipe(docs, batch_size=50): +> pass +> ``` + +| Name | Description | +| -------------- | ------------------------------------------------------------- | +| `docs` | A stream of documents. ~~Iterable[Doc]~~ | +| _keyword-only_ | | +| `batch_size` | The number of documents to buffer. Defaults to `128`. ~~int~~ | +| **YIELDS** | The processed documents in order. ~~Doc~~ | + +### LLMWrapper.add_label {id="add_label",tag="method"} + +Add a new label to the pipe's task. Alternatively, provide the labels upon the +[task](#task) definition, or through the `[initialize]` block of the +[config](#config). + +> #### Example +> +> ```python +> llm_ner = nlp.add_pipe("llm_ner") +> llm_ner.add_label("MY_LABEL") +> ``` + +| Name | Description | +| ----------- | ----------------------------------------------------------- | +| `label` | The label to add. ~~str~~ | +| **RETURNS** | `0` if the label is already present, otherwise `1`. ~~int~~ | + +### LLMWrapper.to_disk {id="to_disk",tag="method"} + +Serialize the pipe to disk. 
+ +> #### Example +> +> ```python +> llm_ner = nlp.add_pipe("llm_ner") +> llm_ner.to_disk("/path/to/llm_ner") +> ``` + +| Name | Description | +| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------ | +| `path` | A path to a directory, which will be created if it doesn't exist. Paths may be either strings or `Path`-like objects. ~~Union[str, Path]~~ | +| _keyword-only_ | | +| `exclude` | String names of [serialization fields](#serialization-fields) to exclude. ~~Iterable[str]~~ | + +### LLMWrapper.from_disk {id="from_disk",tag="method"} + +Load the pipe from disk. Modifies the object in place and returns it. + +> #### Example +> +> ```python +> llm_ner = nlp.add_pipe("llm_ner") +> llm_ner.from_disk("/path/to/llm_ner") +> ``` + +| Name | Description | +| -------------- | ----------------------------------------------------------------------------------------------- | +| `path` | A path to a directory. Paths may be either strings or `Path`-like objects. ~~Union[str, Path]~~ | +| _keyword-only_ | | +| `exclude` | String names of [serialization fields](#serialization-fields) to exclude. ~~Iterable[str]~~ | +| **RETURNS** | The modified `LLMWrapper` object. ~~LLMWrapper~~ | + +### LLMWrapper.to_bytes {id="to_bytes",tag="method"} + +> #### Example +> +> ```python +> llm_ner = nlp.add_pipe("llm_ner") +> ner_bytes = llm_ner.to_bytes() +> ``` + +Serialize the pipe to a bytestring. + +| Name | Description | +| -------------- | ------------------------------------------------------------------------------------------- | +| _keyword-only_ | | +| `exclude` | String names of [serialization fields](#serialization-fields) to exclude. ~~Iterable[str]~~ | +| **RETURNS** | The serialized form of the `LLMWrapper` object. ~~bytes~~ | + +### LLMWrapper.from_bytes {id="from_bytes",tag="method"} + +Load the pipe from a bytestring. Modifies the object in place and returns it. + +> #### Example +> +> ```python +> ner_bytes = llm_ner.to_bytes() +> llm_ner = nlp.add_pipe("llm_ner") +> llm_ner.from_bytes(ner_bytes) +> ``` + +| Name | Description | +| -------------- | ------------------------------------------------------------------------------------------- | +| `bytes_data` | The data to load from. ~~bytes~~ | +| _keyword-only_ | | +| `exclude` | String names of [serialization fields](#serialization-fields) to exclude. ~~Iterable[str]~~ | +| **RETURNS** | The `LLMWrapper` object. ~~LLMWrapper~~ | + +### LLMWrapper.labels {id="labels",tag="property"} + +The labels currently added to the component. Empty tuple if the LLM's task does +not require labels. + +> #### Example +> +> ```python +> llm_ner.add_label("MY_LABEL") +> assert "MY_LABEL" in llm_ner.labels +> ``` + +| Name | Description | +| ----------- | ------------------------------------------------------ | +| **RETURNS** | The labels added to the component. ~~Tuple[str, ...]~~ | + +## Tasks {id="tasks"} + +### Task implementation {id="task-implementation"} + +A _task_ defines an NLP problem or question, that will be sent to the LLM via a +prompt. Further, the task defines how to parse the LLM's responses back into +structured information. All tasks are registered in the `llm_tasks` registry. + +#### task.generate_prompts {id="task-generate-prompts"} + +Takes a collection of documents, and returns a collection of "prompts", which +can be of type `Any`. Often, prompts are of type `str` - but this is not +enforced to allow for maximum flexibility in the framework. 
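+
+As a rough sketch of this interface - an illustrative skeleton, not one of the
+built-in tasks, and a complete task would also implement `parse_responses` as
+described below - `generate_prompts` might simply render one string prompt per
+document:
+
+```python
+from typing import Iterable
+
+from spacy.tokens import Doc
+
+
+class SimplePromptTask:
+    """Illustrative skeleton of the prompt-generating half of a task."""
+
+    def generate_prompts(self, docs: Iterable[Doc]) -> Iterable[str]:
+        for doc in docs:
+            # Prompts are often plain strings, but any type is allowed.
+            yield f"Summarize the following text in one sentence:\n{doc.text}"
+```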
+
+| Argument    | Description                              |
+| ----------- | ---------------------------------------- |
+| `docs`      | The input documents. ~~Iterable[Doc]~~   |
+| **RETURNS** | The generated prompts. ~~Iterable[Any]~~ |
+
+#### task.parse_responses {id="task-parse-responses"}
+
+Takes a collection of LLM responses and the original documents, parses the
+responses into structured information, and sets the annotations on the
+documents. The `parse_responses` function is free to set the annotations in any
+way, including `Doc` fields like `ents`, `spans` or `cats`, or using custom
+defined fields.
+
+The `responses` are of type `Iterable[Any]`, though they will often be `str`
+objects. This depends on the return type of the [model](#models).
+
+| Argument    | Description                                |
+| ----------- | ------------------------------------------ |
+| `docs`      | The input documents. ~~Iterable[Doc]~~     |
+| `responses` | The LLM responses. ~~Iterable[Any]~~       |
+| **RETURNS** | The annotated documents. ~~Iterable[Doc]~~ |
+
+### Summarization {id="summarization"}
+
+A summarization task takes a document as input and generates a summary that is
+stored in an extension attribute.
+
+#### spacy.Summarization.v1 {id="summarization-v1"}
+
+The `spacy.Summarization.v1` task supports both zero-shot and few-shot
+prompting.
+
+> #### Example config
+>
+> ```ini
+> [components.llm.task]
+> @llm_tasks = "spacy.Summarization.v1"
+> examples = null
+> max_n_words = null
+> ```
+
+| Argument                    | Description |
+| --------------------------- | ----------- |
+| `template`                  | Custom prompt template to send to LLM model. Defaults to [summarization.v1.jinja](https://github.com/explosion/spacy-llm/blob/main/spacy_llm/tasks/templates/summarization.v1.jinja). ~~str~~ |
+| `examples`                  | Optional function that generates examples for few-shot learning. Defaults to `None`. ~~Optional[Callable[[], Iterable[Any]]]~~ |
+| `parse_responses` (NEW)     | Callable for parsing LLM responses for this task. Defaults to the internal parsing method for this task. ~~Optional[TaskResponseParser[SummarizationTask]]~~ |
+| `prompt_example_type` (NEW) | Type to use for fewshot examples. Defaults to `SummarizationExample`. ~~Optional[Type[FewshotExample]]~~ |
+| `max_n_words`               | Maximum number of words to be used in summary. Note that this should not be expected to work exactly. Defaults to `None`. ~~Optional[int]~~ |
+| `field`                     | Name of extension attribute to store summary in (i. e. the summary will be available in `doc._.{field}`). Defaults to `summary`. ~~str~~ |
+
+The summarization task prompts the model for a concise summary of the provided
+text. It optionally allows you to limit the response to a certain number of
+tokens - note that this requirement will be included in the prompt, but the task
+doesn't perform a hard cut-off. It's hence possible that your summary exceeds
+`max_n_words`.
+
+To perform [few-shot learning](/usage/large-language-models#few-shot-prompts),
+you can write down a few examples in a separate file, and provide these to be
+injected into the prompt to the LLM. The default reader `spacy.FewShotReader.v1`
+supports `.yml`, `.yaml`, `.json` and `.jsonl`.
+ +```yaml +- text: > + The United Nations, referred to informally as the UN, is an + intergovernmental organization whose stated purposes are to maintain + international peace and security, develop friendly relations among nations, + achieve international cooperation, and serve as a centre for harmonizing the + actions of nations. It is the world's largest international organization. + The UN is headquartered on international territory in New York City, and the + organization has other offices in Geneva, Nairobi, Vienna, and The Hague, + where the International Court of Justice is headquartered.\n\n The UN was + established after World War II with the aim of preventing future world wars, + and succeeded the League of Nations, which was characterized as + ineffective. + summary: + 'The UN is an international organization that promotes global peace, + cooperation, and harmony. Established after WWII, its purpose is to prevent + future world wars.' +``` + +```ini +[components.llm.task] +@llm_tasks = "spacy.Summarization.v1" +max_n_words = 20 +[components.llm.task.examples] +@misc = "spacy.FewShotReader.v1" +path = "summarization_examples.yml" +``` + +### NER {id="ner"} + +The NER task identifies non-overlapping entities in text. + +#### spacy.NER.v3 {id="ner-v3"} + +Version 3 is fundamentally different to v1 and v2, as it implements +Chain-of-Thought prompting, based on the +[PromptNER paper](https://arxiv.org/pdf/2305.15444.pdf) by Ashok and Lipton +(2023). On an internal use-case, we have found this implementation to obtain +significant better accuracy - with an increase of F-score of up to 15 percentage +points. + +> #### Example config +> +> ```ini +> [components.llm.task] +> @llm_tasks = "spacy.NER.v3" +> labels = ["PERSON", "ORGANISATION", "LOCATION"] +> ``` + +When no examples are [specified](/usage/large-language-models#few-shot-prompts), +the v3 implementation will use a dummy example in the prompt. Technically this +means that the task will always perform few-shot prompting under the hood. + +| Argument | Description | +| --------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `template` | Custom prompt template to send to LLM model. Defaults to [ner.v3.jinja](https://github.com/explosion/spacy-llm/blob/main/spacy_llm/tasks/templates/ner.v3.jinja). ~~str~~ | +| `examples` | Optional function that generates examples for few-shot learning. Defaults to `None`. ~~Optional[Callable[[], Iterable[Any]]]~~ | +| `parse_responses` (NEW) | Callable for parsing LLM responses for this task. Defaults to the internal parsing method for this task. ~~Optional[TaskResponseParser[NERTask]]~~ | +| `prompt_example_type` (NEW) | Type to use for fewshot examples. Defaults to `NERExample`. ~~Optional[Type[FewshotExample]]~~ | +| `scorer` | Scorer function that evaluates the task performance on provided examples. Defaults to the metric used by spaCy. ~~Optional[Scorer]~~ | +| `labels` | List of labels or str of comma-separated list of labels. ~~Union[List[str], str]~~ | +| `label_definitions` | Optional dict mapping a label to a description of that label. These descriptions are added to the prompt to help instruct the LLM on what to extract. Defaults to `None`. ~~Optional[Dict[str, str]]~~ | +| `description` (NEW) | A description of what to recognize or not recognize as entities. 
~~str~~ | +| `normalizer` | Function that normalizes the labels as returned by the LLM. If `None`, defaults to `spacy.LowercaseNormalizer.v1`. Defaults to `None`. ~~Optional[Callable[[str], str]]~~ | +| `alignment_mode` | Alignment mode in case the LLM returns entities that do not align with token boundaries. Options are `"strict"`, `"contract"` or `"expand"`. Defaults to `"contract"`. ~~str~~ | +| `case_sensitive_matching` | Whether to search without case sensitivity. Defaults to `False`. ~~bool~~ | + +Note that the `single_match` parameter, used in v1 and v2, is not supported +anymore, as the CoT parsing algorithm takes care of this automatically. + +New to v3 is the fact that you can provide an explicit description of what +entities should look like. You can use this feature in addition to +`label_definitions`. + +```ini +[components.llm.task] +@llm_tasks = "spacy.NER.v3" +labels = ["DISH", "INGREDIENT", "EQUIPMENT"] +description = Entities are the names food dishes, + ingredients, and any kind of cooking equipment. + Adjectives, verbs, adverbs are not entities. + Pronouns are not entities. + +[components.llm.task.label_definitions] +DISH = "Known food dishes, e.g. Lobster Ravioli, garlic bread" +INGREDIENT = "Individual parts of a food dish, including herbs and spices." +EQUIPMENT = "Any kind of cooking equipment. e.g. oven, cooking pot, grill" +``` + +To perform [few-shot learning](/usage/large-language-models#few-shot-prompts), +you can write down a few examples in a separate file, and provide these to be +injected into the prompt to the LLM. The default reader `spacy.FewShotReader.v1` +supports `.yml`, `.yaml`, `.json` and `.jsonl`. + +While not required, this task works best when both positive and negative +examples are provided. The format is different than the files required for v1 +and v2, as additional fields such as `is_entity` and `reason` should now be +provided. + +```json +[ + { + "text": "You can't get a great chocolate flavor with carob.", + "spans": [ + { + "text": "chocolate", + "is_entity": false, + "label": "==NONE==", + "reason": "is a flavor in this context, not an ingredient" + }, + { + "text": "carob", + "is_entity": true, + "label": "INGREDIENT", + "reason": "is an ingredient to add chocolate flavor" + } + ] + }, + ... +] +``` + +```ini +[components.llm.task.examples] +@misc = "spacy.FewShotReader.v1" +path = "${paths.examples}" +``` + +For a fully working example, see this +[usage example](https://github.com/explosion/spacy-llm/tree/main/usage_examples/ner_v3_openai). + +#### spacy.NER.v2 {id="ner-v2"} + +This version supports explicitly defining the provided labels with custom +descriptions, and further supports zero-shot and few-shot prompting just like +v1. + +> #### Example config +> +> ```ini +> [components.llm.task] +> @llm_tasks = "spacy.NER.v2" +> labels = ["PERSON", "ORGANISATION", "LOCATION"] +> examples = null +> ``` + +| Argument | Description | +| --------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `template` (NEW) | Custom prompt template to send to LLM model. Defaults to [ner.v2.jinja](https://github.com/explosion/spacy-llm/blob/main/spacy_llm/tasks/templates/ner.v2.jinja). ~~str~~ | +| `examples` | Optional function that generates examples for few-shot learning. Defaults to `None`. 
~~Optional[Callable[[], Iterable[Any]]]~~ | +| `parse_responses` (NEW) | Callable for parsing LLM responses for this task. Defaults to the internal parsing method for this task. ~~Optional[TaskResponseParser[NERTask]]~~ | +| `prompt_example_type` (NEW) | Type to use for fewshot examples. Defaults to `NERExample`. ~~Optional[Type[FewshotExample]]~~ | +| `scorer` (NEW) | Scorer function that evaluates the task performance on provided examples. Defaults to the metric used by spaCy. ~~Optional[Scorer]~~ | +| `labels` | List of labels or str of comma-separated list of labels. ~~Union[List[str], str]~~ | +| `label_definitions` (NEW) | Optional dict mapping a label to a description of that label. These descriptions are added to the prompt to help instruct the LLM on what to extract. Defaults to `None`. ~~Optional[Dict[str, str]]~~ | +| `normalizer` | Function that normalizes the labels as returned by the LLM. If `None`, defaults to `spacy.LowercaseNormalizer.v1`. Defaults to `None`. ~~Optional[Callable[[str], str]]~~ | +| `alignment_mode` | Alignment mode in case the LLM returns entities that do not align with token boundaries. Options are `"strict"`, `"contract"` or `"expand"`. Defaults to `"contract"`. ~~str~~ | +| `case_sensitive_matching` | Whether to search without case sensitivity. Defaults to `False`. ~~bool~~ | +| `single_match` | Whether to match an entity in the LLM's response only once (the first hit) or multiple times. Defaults to `False`. ~~bool~~ | + +The parameters `alignment_mode`, `case_sensitive_matching` and `single_match` +are identical to the [v1](#ner-v1) implementation. The format of few-shot +examples are also the same. + +> Label descriptions can also be used with explicit examples to give as much +> info to the LLM model as possible. + +New to v2 is the fact that you can write definitions for each label and provide +them via the `label_definitions` argument. This lets you tell the LLM exactly +what you're looking for rather than relying on the LLM to interpret its task +given just the label name. Label descriptions are freeform so you can write +whatever you want here, but a brief description along with some examples and +counter examples seems to work quite well. + +```ini +[components.llm.task] +@llm_tasks = "spacy.NER.v2" +labels = PERSON,SPORTS_TEAM + +[components.llm.task.label_definitions] +PERSON = "Extract any named individual in the text." +SPORTS_TEAM = "Extract the names of any professional sports team. e.g. Golden State Warriors, LA Lakers, Man City, Real Madrid" +``` + +For a fully working example, see this +[usage example](https://github.com/explosion/spacy-llm/tree/main/usage_examples/ner_dolly). + +#### spacy.NER.v1 {id="ner-v1"} + +The original version of the built-in NER task supports both zero-shot and +few-shot prompting. + +> #### Example config +> +> ```ini +> [components.llm.task] +> @llm_tasks = "spacy.NER.v1" +> labels = PERSON,ORGANISATION,LOCATION +> examples = null +> ``` + +| Argument | Description | +| --------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `examples` | Optional function that generates examples for few-shot learning. Defaults to `None`. ~~Optional[Callable[[], Iterable[Any]]]~~ | +| `parse_responses` (NEW) | Callable for parsing LLM responses for this task. Defaults to the internal parsing method for this task. 
~~Optional[TaskResponseParser[NERTask]]~~ |
+| `prompt_example_type` (NEW) | Type to use for fewshot examples. Defaults to `NERExample`. ~~Optional[Type[FewshotExample]]~~ |
+| `scorer` (NEW)              | Scorer function that evaluates the task performance on provided examples. Defaults to the metric used by spaCy. ~~Optional[Scorer]~~ |
+| `labels`                    | Comma-separated list of labels. ~~str~~ |
+| `normalizer`                | Function that normalizes the labels as returned by the LLM. If `None`, defaults to `spacy.LowercaseNormalizer.v1`. ~~Optional[Callable[[str], str]]~~ |
+| `alignment_mode`            | Alignment mode in case the LLM returns entities that do not align with token boundaries. Options are `"strict"`, `"contract"` or `"expand"`. Defaults to `"contract"`. ~~str~~ |
+| `case_sensitive_matching`   | Whether to search without case sensitivity. Defaults to `False`. ~~bool~~ |
+| `single_match`              | Whether to match an entity in the LLM's response only once (the first hit) or multiple times. Defaults to `False`. ~~bool~~ |
+
+The NER task implementation doesn't currently ask the LLM for specific offsets,
+but simply expects a list of strings that represent the entities in the
+document. This means that a form of string matching is required. This can be
+configured by the following parameters:
+
+- The `single_match` parameter is typically set to `False` to allow for multiple
+  matches. For instance, the response from the LLM might only mention the entity
+  "Paris" once, but you'd still want to mark it every time it occurs in the
+  document.
+- The `case_sensitive_matching` parameter is typically set to `False` to be
+  robust against case variances in the LLM's output.
+- The `alignment_mode` argument is used to match entities as returned by the LLM
+  to the tokens from the original `Doc` - specifically it's used as an argument
+  in the call to [`doc.char_span()`](/api/doc#char_span). The `"strict"` mode
+  will only keep spans that strictly adhere to the given token boundaries.
+  `"contract"` will only keep those tokens that are fully within the given
+  range, e.g. reducing `"New Y"` to `"New"`. Finally, `"expand"` will expand the
+  span to the next token boundaries, e.g. expanding `"New Y"` out to
+  `"New York"`.
+
+To perform [few-shot learning](/usage/large-language-models#few-shot-prompts),
+you can write down a few examples in a separate file, and provide these to be
+injected into the prompt to the LLM. The default reader `spacy.FewShotReader.v1`
+supports `.yml`, `.yaml`, `.json` and `.jsonl`.
+
+```yaml
+- text: Jack and Jill went up the hill.
+  entities:
+    PERSON:
+      - Jack
+      - Jill
+    LOCATION:
+      - hill
+- text: Jack fell down and broke his crown.
+  entities:
+    PERSON:
+      - Jack
+```
+
+```ini
+[components.llm.task.examples]
+@misc = "spacy.FewShotReader.v1"
+path = "ner_examples.yml"
+```
+
+### SpanCat {id="spancat"}
+
+The SpanCat task identifies potentially overlapping entities in text.
+
+#### spacy.SpanCat.v3 {id="spancat-v3"}
+
+The built-in SpanCat v3 task is a simple adaptation of the NER v3 task to
+support overlapping entities and store its annotations in `doc.spans`.
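+
+For a quick impression of where the annotations end up, here is a minimal usage
+sketch. It assumes `spacy-llm` is installed, an OpenAI API key is set as an
+environment variable, the default model is used, and spans are stored under the
+default `spans_key` `"sc"`:
+
+```python
+import spacy
+
+nlp = spacy.blank("en")
+# Generic llm factory with an explicitly configured SpanCat v3 task; the
+# default OpenAI model is used since no model is specified.
+nlp.add_pipe(
+    "llm",
+    config={
+        "task": {
+            "@llm_tasks": "spacy.SpanCat.v3",
+            "labels": ["PERSON", "ORGANISATION", "LOCATION"],
+        }
+    },
+)
+doc = nlp("Jack and Jill from London visited France.")
+# Potentially overlapping spans are stored in doc.spans under the "sc" key.
+print([(span.text, span.label_) for span in doc.spans["sc"]])
+```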
+ +> #### Example config +> +> ```ini +> [components.llm.task] +> @llm_tasks = "spacy.SpanCat.v3" +> labels = ["PERSON", "ORGANISATION", "LOCATION"] +> examples = null +> ``` + +| Argument | Description | +| --------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `template` | Custom prompt template to send to LLM model. Defaults to [`spancat.v3.jinja`](https://github.com/explosion/spacy-llm/blob/main/spacy_llm/tasks/templates/spancat.v3.jinja). ~~str~~ | +| `examples` | Optional function that generates examples for few-shot learning. Defaults to `None`. ~~Optional[Callable[[], Iterable[Any]]]~~ | +| `parse_responses` (NEW) | Callable for parsing LLM responses for this task. Defaults to the internal parsing method for this task. ~~Optional[TaskResponseParser[SpanCatTask]]~~ | +| `prompt_example_type` (NEW) | Type to use for fewshot examples. Defaults to `SpanCatExample`. ~~Optional[Type[FewshotExample]]~~ | +| `scorer` (NEW) | Scorer function that evaluates the task performance on provided examples. Defaults to the metric used by spaCy. ~~Optional[Scorer]~~ | +| `labels` | List of labels or str of comma-separated list of labels. ~~Union[List[str], str]~~ | +| `label_definitions` | Optional dict mapping a label to a description of that label. These descriptions are added to the prompt to help instruct the LLM on what to extract. Defaults to `None`. ~~Optional[Dict[str, str]]~~ | +| `description` (NEW) | A description of what to recognize or not recognize as entities. ~~str~~ | +| `spans_key` | Key of the `Doc.spans` dict to save the spans under. Defaults to `"sc"`. ~~str~~ | +| `normalizer` | Function that normalizes the labels as returned by the LLM. If `None`, defaults to `spacy.LowercaseNormalizer.v1`. ~~Optional[Callable[[str], str]]~~ | +| `alignment_mode` | Alignment mode in case the LLM returns entities that do not align with token boundaries. Options are `"strict"`, `"contract"` or `"expand"`. Defaults to `"contract"`. ~~str~~ | +| `case_sensitive_matching` | Whether to search without case sensitivity. Defaults to `False`. ~~bool~~ | + +Note that the `single_match` parameter, used in v1 and v2, is not supported +anymore, as the CoT parsing algorithm takes care of this automatically. + +#### spacy.SpanCat.v2 {id="spancat-v2"} + +The built-in SpanCat v2 task is a simple adaptation of the NER v2 task to +support overlapping entities and store its annotations in `doc.spans`. + +> #### Example config +> +> ```ini +> [components.llm.task] +> @llm_tasks = "spacy.SpanCat.v2" +> labels = ["PERSON", "ORGANISATION", "LOCATION"] +> examples = null +> ``` + +| Argument | Description | +| --------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `template` (NEW) | Custom prompt template to send to LLM model. Defaults to [`spancat.v2.jinja`](https://github.com/explosion/spacy-llm/blob/main/spacy_llm/tasks/templates/spancat.v2.jinja). ~~str~~ | +| `examples` | Optional function that generates examples for few-shot learning. Defaults to `None`. ~~Optional[Callable[[], Iterable[Any]]]~~ | +| `parse_responses` (NEW) | Callable for parsing LLM responses for this task. Defaults to the internal parsing method for this task. 
~~Optional[TaskResponseParser[SpanCatTask]]~~ | +| `prompt_example_type` (NEW) | Type to use for fewshot examples. Defaults to `SpanCatExample`. ~~Optional[Type[FewshotExample]]~~ | +| `scorer` (NEW) | Scorer function that evaluates the task performance on provided examples. Defaults to the metric used by spaCy. ~~Optional[Scorer]~~ | +| `labels` | List of labels or str of comma-separated list of labels. ~~Union[List[str], str]~~ | +| `label_definitions` (NEW) | Optional dict mapping a label to a description of that label. These descriptions are added to the prompt to help instruct the LLM on what to extract. Defaults to `None`. ~~Optional[Dict[str, str]]~~ | +| `spans_key` | Key of the `Doc.spans` dict to save the spans under. Defaults to `"sc"`. ~~str~~ | +| `normalizer` | Function that normalizes the labels as returned by the LLM. If `None`, defaults to `spacy.LowercaseNormalizer.v1`. ~~Optional[Callable[[str], str]]~~ | +| `alignment_mode` | Alignment mode in case the LLM returns entities that do not align with token boundaries. Options are `"strict"`, `"contract"` or `"expand"`. Defaults to `"contract"`. ~~str~~ | +| `case_sensitive_matching` | Whether to search without case sensitivity. Defaults to `False`. ~~bool~~ | +| `single_match` | Whether to match an entity in the LLM's response only once (the first hit) or multiple times. Defaults to `False`. ~~bool~~ | + +Except for the `spans_key` parameter, the SpanCat v2 task reuses the +configuration from the NER v2 task. Refer to [its documentation](#ner-v2) for +more insight. + +#### spacy.SpanCat.v1 {id="spancat-v1"} + +The original version of the built-in SpanCat task is a simple adaptation of the +v1 NER task to support overlapping entities and store its annotations in +`doc.spans`. + +> #### Example config +> +> ```ini +> [components.llm.task] +> @llm_tasks = "spacy.SpanCat.v1" +> labels = PERSON,ORGANISATION,LOCATION +> examples = null +> ``` + +| Argument | Description | +| --------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `examples` | Optional function that generates examples for few-shot learning. Defaults to `None`. ~~Optional[Callable[[], Iterable[Any]]]~~ | +| `parse_responses` (NEW) | Callable for parsing LLM responses for this task. Defaults to the internal parsing method for this task. ~~Optional[TaskResponseParser[SpanCatTask]]~~ | +| `prompt_example_type` (NEW) | Type to use for fewshot examples. Defaults to `SpanCatExample`. ~~Optional[Type[FewshotExample]]~~ | +| `scorer` (NEW) | Scorer function that evaluates the task performance on provided examples. Defaults to the metric used by spaCy. ~~Optional[Scorer]~~ | +| `labels` | Comma-separated list of labels. ~~str~~ | +| `spans_key` | Key of the `Doc.spans` dict to save the spans under. Defaults to `"sc"`. ~~str~~ | +| `normalizer` | Function that normalizes the labels as returned by the LLM. If `None`, defaults to `spacy.LowercaseNormalizer.v1`. ~~Optional[Callable[[str], str]]~~ | +| `alignment_mode` | Alignment mode in case the LLM returns entities that do not align with token boundaries. Options are `"strict"`, `"contract"` or `"expand"`. Defaults to `"contract"`. ~~str~~ | +| `case_sensitive_matching` | Whether to search without case sensitivity. Defaults to `False`. ~~bool~~ | +| `single_match` | Whether to match an entity in the LLM's response only once (the first hit) or multiple times. 
Defaults to `False`. ~~bool~~ | + +Except for the `spans_key` parameter, the SpanCat v1 task reuses the +configuration from the NER v1 task. Refer to [its documentation](#ner-v1) for +more insight. + +### TextCat {id="textcat"} + +The TextCat task labels documents with relevant categories. + +#### spacy.TextCat.v3 {id="textcat-v3"} + +On top of the functionality from v2, version 3 of the built-in TextCat tasks +allows setting definitions of labels. Those definitions are included in the +prompt. + +> #### Example config +> +> ```ini +> [components.llm.task] +> @llm_tasks = "spacy.TextCat.v3" +> labels = ["COMPLIMENT", "INSULT"] +> +> [components.llm.task.label_definitions] +> "COMPLIMENT" = "a polite expression of praise or admiration.", +> "INSULT" = "a disrespectful or scornfully abusive remark or act." +> examples = null +> ``` + +| Argument | Description | +| --------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `template` | Custom prompt template to send to LLM model. Defaults to [`textcat.v3.jinja`](https://github.com/explosion/spacy-llm/blob/main/spacy_llm/tasks/templates/textcat.v3.jinja). ~~str~~ | +| `examples` | Optional function that generates examples for few-shot learning. Defaults to `None`. ~~Optional[Callable[[], Iterable[Any]]]~~ | +| `parse_responses` (NEW) | Callable for parsing LLM responses for this task. Defaults to the internal parsing method for this task. ~~Optional[TaskResponseParser[SpanCatTask]]~~ | +| `prompt_example_type` (NEW) | Type to use for fewshot examples. Defaults to `TextCatExample`. ~~Optional[Type[FewshotExample]]~~ | +| `scorer` (NEW) | Scorer function that evaluates the task performance on provided examples. Defaults to the metric used by spaCy. ~~Optional[Scorer]~~ | +| `labels` | List of labels or str of comma-separated list of labels. ~~Union[List[str], str]~~ | +| `label_definitions` (NEW) | Dictionary of label definitions. Included in the prompt, if set. Defaults to `None`. ~~Optional[Dict[str, str]]~~ | +| `normalizer` | Function that normalizes the labels as returned by the LLM. If `None`, falls back to `spacy.LowercaseNormalizer.v1`. Defaults to `None`. ~~Optional[Callable[[str], str]]~~ | +| `exclusive_classes` | If set to `True`, only one label per document should be valid. If set to `False`, one document can have multiple labels. Defaults to `False`. ~~bool~~ | +| `allow_none` | When set to `True`, allows the LLM to not return any of the given label. The resulting dict in `doc.cats` will have `0.0` scores for all labels. Defaults to `True`. ~~bool~~ | +| `verbose` | If set to `True`, warnings will be generated when the LLM returns invalid responses. Defaults to `False`. ~~bool~~ | + +The formatting of few-shot examples is the same as those for the +[v1](#textcat-v1) implementation. + +#### spacy.TextCat.v2 {id="textcat-v2"} + +V2 includes all v1 functionality, with an improved prompt template. + +> #### Example config +> +> ```ini +> [components.llm.task] +> @llm_tasks = "spacy.TextCat.v2" +> labels = ["COMPLIMENT", "INSULT"] +> examples = null +> ``` + +| Argument | Description | +| --------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `template` (NEW) | Custom prompt template to send to LLM model. 
Defaults to [`textcat.v2.jinja`](https://github.com/explosion/spacy-llm/blob/main/spacy_llm/tasks/templates/textcat.v2.jinja). ~~str~~ | +| `examples` | Optional function that generates examples for few-shot learning. Defaults to `None`. ~~Optional[Callable[[], Iterable[Any]]]~~ | +| `parse_responses` (NEW) | Callable for parsing LLM responses for this task. Defaults to the internal parsing method for this task. ~~Optional[TaskResponseParser[SpanCatTask]]~~ | +| `prompt_example_type` (NEW) | Type to use for fewshot examples. Defaults to `TextCatExample`. ~~Optional[Type[FewshotExample]]~~ | +| `scorer` (NEW) | Scorer function that evaluates the task performance on provided examples. Defaults to the metric used by spaCy. ~~Optional[Scorer]~~ | +| `labels` | List of labels or str of comma-separated list of labels. ~~Union[List[str], str]~~ | +| `normalizer` | Function that normalizes the labels as returned by the LLM. If `None`, falls back to `spacy.LowercaseNormalizer.v1`. ~~Optional[Callable[[str], str]]~~ | +| `exclusive_classes` | If set to `True`, only one label per document should be valid. If set to `False`, one document can have multiple labels. Defaults to `False`. ~~bool~~ | +| `allow_none` | When set to `True`, allows the LLM to not return any of the given label. The resulting dict in `doc.cats` will have `0.0` scores for all labels. Defaults to `True`. ~~bool~~ | +| `verbose` | If set to `True`, warnings will be generated when the LLM returns invalid responses. Defaults to `False`. ~~bool~~ | + +The formatting of few-shot examples is the same as those for the +[v1](#textcat-v1) implementation. + +#### spacy.TextCat.v1 {id="textcat-v1"} + +Version 1 of the built-in TextCat task supports both zero-shot and few-shot +prompting. + +> #### Example config +> +> ```ini +> [components.llm.task] +> @llm_tasks = "spacy.TextCat.v1" +> labels = COMPLIMENT,INSULT +> examples = null +> ``` + +| Argument | Description | +| --------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `examples` | Optional function that generates examples for few-shot learning. Deafults to `None`. ~~Optional[Callable[[], Iterable[Any]]]~~ | +| `parse_responses` (NEW) | Callable for parsing LLM responses for this task. Defaults to the internal parsing method for this task. ~~Optional[TaskResponseParser[SpanCatTask]]~~ | +| `prompt_example_type` (NEW) | Type to use for fewshot examples. Defaults to `TextCatExample`. ~~Optional[Type[FewshotExample]]~~ | +| `scorer` (NEW) | Scorer function that evaluates the task performance on provided examples. Defaults to the metric used by spaCy. ~~Optional[Scorer]~~ | +| `labels` | Comma-separated list of labels. ~~str~~ | +| `normalizer` | Function that normalizes the labels as returned by the LLM. If `None`, falls back to `spacy.LowercaseNormalizer.v1`. ~~Optional[Callable[[str], str]]~~ | +| `exclusive_classes` | If set to `True`, only one label per document should be valid. If set to `False`, one document can have multiple labels. Defaults to `False`. ~~bool~~ | +| `allow_none` | When set to `True`, allows the LLM to not return any of the given label. The resulting dict in `doc.cats` will have `0.0` scores for all labels. Defaults to `True`. ~~bool~~ | +| `verbose` | If set to `True`, warnings will be generated when the LLM returns invalid responses. Defaults to `False`. 
~~bool~~ |
+
+To perform [few-shot learning](/usage/large-language-models#few-shot-prompts),
+you can write down a few examples in a separate file, and provide these to be
+injected into the prompt to the LLM. The default reader `spacy.FewShotReader.v1`
+supports `.yml`, `.yaml`, `.json` and `.jsonl`.
+
+```json
+[
+  {
+    "text": "You look great!",
+    "answer": "Compliment"
+  },
+  {
+    "text": "You are not very clever at all.",
+    "answer": "Insult"
+  }
+]
+```
+
+```ini
+[components.llm.task.examples]
+@misc = "spacy.FewShotReader.v1"
+path = "textcat_examples.json"
+```
+
+If you want to perform few-shot learning with a binary classifier (i. e. a text
+either should or should not be assigned to a given class), you can provide
+positive and negative examples with answers of "POS" or "NEG". "POS" means that
+this example should be assigned the class label defined in the configuration,
+"NEG" means it shouldn't. E. g. for spam classification:
+
+```json
+[
+  {
+    "text": "You won the lottery! Wire a fee of 200$ to be able to withdraw your winnings.",
+    "answer": "POS"
+  },
+  {
+    "text": "Your order #123456789 has arrived",
+    "answer": "NEG"
+  }
+]
+```
+
+### REL {id="rel"}
+
+The REL task extracts relations between named entities.
+
+#### spacy.REL.v1 {id="rel-v1"}
+
+The built-in REL task supports both zero-shot and few-shot prompting. It relies
+on an upstream NER component for entity extraction.
+
+> #### Example config
+>
+> ```ini
+> [components.llm.task]
+> @llm_tasks = "spacy.REL.v1"
+> labels = ["LivesIn", "Visits"]
+> ```
+
+| Argument                    | Description |
+| --------------------------- | ----------- |
+| `template`                  | Custom prompt template to send to LLM model. Defaults to [`rel.v1.jinja`](https://github.com/explosion/spacy-llm/blob/main/spacy_llm/tasks/templates/rel.v1.jinja). ~~str~~ |
+| `examples`                  | Optional function that generates examples for few-shot learning. Defaults to `None`. ~~Optional[Callable[[], Iterable[Any]]]~~ |
+| `parse_responses` (NEW)     | Callable for parsing LLM responses for this task. Defaults to the internal parsing method for this task. ~~Optional[TaskResponseParser[RELTask]]~~ |
+| `prompt_example_type` (NEW) | Type to use for fewshot examples. Defaults to `RELExample`. ~~Optional[Type[FewshotExample]]~~ |
+| `scorer` (NEW)              | Scorer function that evaluates the task performance on provided examples. Defaults to the metric used by spaCy. ~~Optional[Scorer]~~ |
+| `labels`                    | List of labels or str of comma-separated list of labels. ~~Union[List[str], str]~~ |
+| `label_definitions`         | Dictionary providing a description for each relation label. Defaults to `None`. ~~Optional[Dict[str, str]]~~ |
+| `normalizer`                | Function that normalizes the labels as returned by the LLM. If `None`, falls back to `spacy.LowercaseNormalizer.v1`. Defaults to `None`. ~~Optional[Callable[[str], str]]~~ |
+| `verbose`                   | If set to `True`, warnings will be generated when the LLM returns invalid responses. Defaults to `False`. ~~bool~~ |
+
+To perform [few-shot learning](/usage/large-language-models#few-shot-prompts),
+you can write down a few examples in a separate file, and provide these to be
+injected into the prompt to the LLM. The default reader `spacy.FewShotReader.v1`
+supports `.yml`, `.yaml`, `.json` and `.jsonl`.
+ +```json +{"text": "Laura bought a house in Boston with her husband Mark.", "ents": [{"start_char": 0, "end_char": 5, "label": "PERSON"}, {"start_char": 24, "end_char": 30, "label": "GPE"}, {"start_char": 48, "end_char": 52, "label": "PERSON"}], "relations": [{"dep": 0, "dest": 1, "relation": "LivesIn"}, {"dep": 2, "dest": 1, "relation": "LivesIn"}]} +{"text": "Michael travelled through South America by bike.", "ents": [{"start_char": 0, "end_char": 7, "label": "PERSON"}, {"start_char": 26, "end_char": 39, "label": "LOC"}], "relations": [{"dep": 0, "dest": 1, "relation": "Visits"}]} +``` + +```ini +[components.llm.task] +@llm_tasks = "spacy.REL.v1" +labels = ["LivesIn", "Visits"] + +[components.llm.task.examples] +@misc = "spacy.FewShotReader.v1" +path = "rel_examples.jsonl" +``` + +Note: the REL task relies on pre-extracted entities to make its prediction. +Hence, you'll need to add a component that populates `doc.ents` with recognized +spans to your spaCy pipeline and put it _before_ the REL component. + +For a fully working example, see this +[usage example](https://github.com/explosion/spacy-llm/tree/main/usage_examples/rel_openai). + +### Lemma {id="lemma"} + +The Lemma task lemmatizes the provided text and updates the `lemma_` attribute +in the doc's tokens accordingly. + +#### spacy.Lemma.v1 {id="lemma-v1"} + +This task supports both zero-shot and few-shot prompting. + +> #### Example config +> +> ```ini +> [components.llm.task] +> @llm_tasks = "spacy.Lemma.v1" +> examples = null +> ``` + +| Argument | Description | +| --------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `template` | Custom prompt template to send to LLM model. Defaults to [lemma.v1.jinja](https://github.com/explosion/spacy-llm/blob/main/spacy_llm/tasks/templates/lemma.v1.jinja). ~~str~~ | +| `examples` | Optional function that generates examples for few-shot learning. Defaults to `None`. ~~Optional[Callable[[], Iterable[Any]]]~~ | +| `parse_responses` (NEW) | Callable for parsing LLM responses for this task. Defaults to the internal parsing method for this task. ~~Optional[TaskResponseParser[LemmaTask]]~~ | +| `prompt_example_type` (NEW) | Type to use for fewshot examples. Defaults to `LemmaExample`. ~~Optional[Type[FewshotExample]]~~ | +| `scorer` (NEW) | Scorer function that evaluates the task performance on provided examples. Defaults to the metric used by spaCy. ~~Optional[Scorer]~~ | + +The task prompts the LLM to lemmatize the passed text and return the lemmatized +version as a list of tokens and their corresponding lemma. E. g. the text +`I'm buying ice cream for my friends` should invoke the response + +``` +I: I +'m: be +buying: buy +ice: ice +cream: cream +for: for +my: my +friends: friend +.: . +``` + +If for any given text/doc instance the number of lemmas returned by the LLM +doesn't match the number of tokens from the pipeline's tokenizer, no lemmas are +stored in the corresponding doc's tokens. Otherwise the tokens `.lemma_` +property is updated with the lemma suggested by the LLM. + +To perform [few-shot learning](/usage/large-language-models#few-shot-prompts), +you can write down a few examples in a separate file, and provide these to be +injected into the prompt to the LLM. The default reader `spacy.FewShotReader.v1` +supports `.yml`, `.yaml`, `.json` and `.jsonl`. + +```yaml +- text: I'm buying ice cream. 
+  lemmas:
+    - 'I': 'I'
+    - "'m": 'be'
+    - 'buying': 'buy'
+    - 'ice': 'ice'
+    - 'cream': 'cream'
+    - '.': '.'
+
+- text: I've watered the plants.
+  lemmas:
+    - 'I': 'I'
+    - "'ve": 'have'
+    - 'watered': 'water'
+    - 'the': 'the'
+    - 'plants': 'plant'
+    - '.': '.'
+```
+
+```ini
+[components.llm.task]
+@llm_tasks = "spacy.Lemma.v1"
+[components.llm.task.examples]
+@misc = "spacy.FewShotReader.v1"
+path = "lemma_examples.yml"
+```
+
+### Sentiment {id="sentiment"}
+
+Performs sentiment analysis on provided texts. Scores between 0 and 1 are stored
+in `Doc._.sentiment` - the higher, the more positive. Note that in case of
+parsing issues (e. g. unexpected LLM responses) the value might be `None`.
+
+#### spacy.Sentiment.v1 {id="sentiment-v1"}
+
+This task supports both zero-shot and few-shot prompting.
+
+> #### Example config
+>
+> ```ini
+> [components.llm.task]
+> @llm_tasks = "spacy.Sentiment.v1"
+> examples = null
+> ```
+
+| Argument                    | Description |
+| --------------------------- | ----------- |
+| `template`                  | Custom prompt template to send to LLM model. Defaults to [sentiment.v1.jinja](https://github.com/explosion/spacy-llm/blob/main/spacy_llm/tasks/templates/sentiment.v1.jinja). ~~str~~ |
+| `examples`                  | Optional function that generates examples for few-shot learning. Defaults to `None`. ~~Optional[Callable[[], Iterable[Any]]]~~ |
+| `parse_responses` (NEW)     | Callable for parsing LLM responses for this task. Defaults to the internal parsing method for this task. ~~Optional[TaskResponseParser[SentimentTask]]~~ |
+| `prompt_example_type` (NEW) | Type to use for fewshot examples. Defaults to `SentimentExample`. ~~Optional[Type[FewshotExample]]~~ |
+| `scorer` (NEW)              | Scorer function that evaluates the task performance on provided examples. Defaults to the metric used by spaCy. ~~Optional[Scorer]~~ |
+| `field`                     | Name of the extension attribute to store the sentiment score in (i. e. the score will be available in `doc._.{field}`). Defaults to `sentiment`. ~~str~~ |
+
+To perform [few-shot learning](/usage/large-language-models#few-shot-prompts),
+you can write down a few examples in a separate file, and provide these to be
+injected into the prompt to the LLM. The default reader `spacy.FewShotReader.v1`
+supports `.yml`, `.yaml`, `.json` and `.jsonl`.
+
+```yaml
+- text: 'This is horrifying.'
+  score: 0
+- text: 'This is underwhelming.'
+  score: 0.25
+- text: 'This is ok.'
+  score: 0.5
+- text: "I'm looking forward to this!"
+  score: 1.0
+```
+
+```ini
+[components.llm.task]
+@llm_tasks = "spacy.Sentiment.v1"
+[components.llm.task.examples]
+@misc = "spacy.FewShotReader.v1"
+path = "sentiment_examples.yml"
+```
+
+### NoOp {id="noop"}
+
+This task is only useful for testing - it tells the LLM to do nothing, and does
+not set any fields on the `docs`.
+
+> #### Example config
+>
+> ```ini
+> [components.llm.task]
+> @llm_tasks = "spacy.NoOp.v1"
+> ```
+
+#### spacy.NoOp.v1 {id="noop-v1"}
+
+This task needs no further configuration.
+
+## Models {id="models"}
+
+A _model_ defines which LLM model to query, and how to query it. It can be a
+simple function taking a collection of prompts (consistent with the output type
+of `task.generate_prompts()`) and returning a collection of responses
+(consistent with the expected input of `parse_responses`).
Generally speaking, +it's a function of type `Callable[[Iterable[Any]], Iterable[Any]]`, but specific +implementations can have other signatures, like +`Callable[[Iterable[str]], Iterable[str]]`. + +### Models via REST API {id="models-rest"} + +These models all take the same parameters, but note that the `config` should +contain provider-specific keys and values, as it will be passed onwards to the +provider's API. + +| Argument | Description | +| ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------- | +| `name` | Model name, i. e. any supported variant for this particular model. Default depends on the specific model (cf. below) ~~str~~ | +| `config` | Further configuration passed on to the model. Default depends on the specific model (cf. below). ~~Dict[Any, Any]~~ | +| `strict` | If `True`, raises an error if the LLM API returns a malformed response. Otherwise, return the error responses as is. Defaults to `True`. ~~bool~~ | +| `max_tries` | Max. number of tries for API request. Defaults to `5`. ~~int~~ | +| `max_request_time` | Max. time (in seconds) to wait for request to terminate before raising an exception. Defaults to `30.0`. ~~float~~ | +| `interval` | Time interval (in seconds) for API retries in seconds. Defaults to `1.0`. ~~float~~ | + +> #### Example config: +> +> ```ini +> [components.llm.model] +> @llm_models = "spacy.GPT-4.v1" +> name = "gpt-4" +> config = {"temperature": 0.0} +> ``` + +Currently, these models are provided as part of the core library: + +| Model | Provider | Supported names | Default name | Default config | +| ----------------------------- | ----------------- | ------------------------------------------------------------------------------------------------------------------ | ---------------------- | ------------------------------------ | +| `spacy.GPT-4.v1` | OpenAI | `["gpt-4", "gpt-4-0314", "gpt-4-32k", "gpt-4-32k-0314"]` | `"gpt-4"` | `{}` | +| `spacy.GPT-4.v2` | OpenAI | `["gpt-4", "gpt-4-0314", "gpt-4-32k", "gpt-4-32k-0314"]` | `"gpt-4"` | `{temperature=0.0}` | +| `spacy.GPT-3-5.v1` | OpenAI | `["gpt-3.5-turbo", "gpt-3.5-turbo-16k", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-0613-16k", "gpt-3.5-turbo-instruct"]` | `"gpt-3.5-turbo"` | `{}` | +| `spacy.GPT-3-5.v2` | OpenAI | `["gpt-3.5-turbo", "gpt-3.5-turbo-16k", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-0613-16k", "gpt-3.5-turbo-instruct"]` | `"gpt-3.5-turbo"` | `{temperature=0.0}` | +| `spacy.Davinci.v1` | OpenAI | `["davinci"]` | `"davinci"` | `{}` | +| `spacy.Davinci.v2` | OpenAI | `["davinci"]` | `"davinci"` | `{temperature=0.0, max_tokens=500}` | +| `spacy.Text-Davinci.v1` | OpenAI | `["text-davinci-003", "text-davinci-002"]` | `"text-davinci-003"` | `{}` | +| `spacy.Text-Davinci.v2` | OpenAI | `["text-davinci-003", "text-davinci-002"]` | `"text-davinci-003"` | `{temperature=0.0, max_tokens=1000}` | +| `spacy.Code-Davinci.v1` | OpenAI | `["code-davinci-002"]` | `"code-davinci-002"` | `{}` | +| `spacy.Code-Davinci.v2` | OpenAI | `["code-davinci-002"]` | `"code-davinci-002"` | `{temperature=0.0, max_tokens=500}` | +| `spacy.Curie.v1` | OpenAI | `["curie"]` | `"curie"` | `{}` | +| `spacy.Curie.v2` | OpenAI | `["curie"]` | `"curie"` | `{temperature=0.0, max_tokens=500}` | +| `spacy.Text-Curie.v1` | OpenAI | `["text-curie-001"]` | `"text-curie-001"` | `{}` | +| `spacy.Text-Curie.v2` | OpenAI | `["text-curie-001"]` | `"text-curie-001"` | `{temperature=0.0, max_tokens=500}` | +| `spacy.Babbage.v1` | 
OpenAI | `["babbage"]` | `"babbage"` | `{}` | +| `spacy.Babbage.v2` | OpenAI | `["babbage"]` | `"babbage"` | `{temperature=0.0, max_tokens=500}` | +| `spacy.Text-Babbage.v1` | OpenAI | `["text-babbage-001"]` | `"text-babbage-001"` | `{}` | +| `spacy.Text-Babbage.v2` | OpenAI | `["text-babbage-001"]` | `"text-babbage-001"` | `{temperature=0.0, max_tokens=500}` | +| `spacy.Ada.v1` | OpenAI | `["ada"]` | `"ada"` | `{}` | +| `spacy.Ada.v2` | OpenAI | `["ada"]` | `"ada"` | `{temperature=0.0, max_tokens=500}` | +| `spacy.Text-Ada.v1` | OpenAI | `["text-ada-001"]` | `"text-ada-001"` | `{}` | +| `spacy.Text-Ada.v2` | OpenAI | `["text-ada-001"]` | `"text-ada-001"` | `{temperature=0.0, max_tokens=500}` | +| `spacy.Azure.v1` | Microsoft, OpenAI | Arbitrary values | No default | `{temperature=0.0}` | +| `spacy.Command.v1` | Cohere | `["command", "command-light", "command-light-nightly", "command-nightly"]` | `"command"` | `{}` | +| `spacy.Claude-2.v1` | Anthropic | `["claude-2", "claude-2-100k"]` | `"claude-2"` | `{}` | +| `spacy.Claude-1.v1` | Anthropic | `["claude-1", "claude-1-100k"]` | `"claude-1"` | `{}` | +| `spacy.Claude-1-0.v1` | Anthropic | `["claude-1.0"]` | `"claude-1.0"` | `{}` | +| `spacy.Claude-1-2.v1` | Anthropic | `["claude-1.2"]` | `"claude-1.2"` | `{}` | +| `spacy.Claude-1-3.v1` | Anthropic | `["claude-1.3", "claude-1.3-100k"]` | `"claude-1.3"` | `{}` | +| `spacy.Claude-instant-1.v1` | Anthropic | `["claude-instant-1", "claude-instant-1-100k"]` | `"claude-instant-1"` | `{}` | +| `spacy.Claude-instant-1-1.v1` | Anthropic | `["claude-instant-1.1", "claude-instant-1.1-100k"]` | `"claude-instant-1.1"` | `{}` | +| `spacy.PaLM.v1` | Google | `["chat-bison-001", "text-bison-001"]` | `"text-bison-001"` | `{temperature=0.0}` | + +To use these models, make sure that you've [set the relevant API](#api-keys) +keys as environment variables. + +**⚠️ A note on `spacy.Azure.v1`.** Working with Azure OpenAI is slightly +different than working with models from other providers: + +- In Azure LLMs have to be made available by creating a _deployment_ of a given + model (e. g. GPT-3.5). This deployment can have an arbitrary name. The `name` + argument, which everywhere else denotes the model name (e. g. `claude-1.0`, + `gpt-3.5`), here refers to the _deployment name_. +- Deployed Azure OpenAI models are reachable via a resource-specific base URL, + usually of the form `https://{resource}.openai.azure.com`. Hence the URL has + to be specified via the `base_url` argument. +- Azure further expects the _API version_ to be specified. The default value for + this, via the `api_version` argument, is currently `2023-05-15` but may be + updated in the future. +- Finally, since we can't infer information about the model from the deployment + name, `spacy-llm` requires the `model_type` to be set to either + `"completions"` or `"chat"`, depending on whether the deployed model is a + completion or chat model. + +#### API Keys {id="api-keys"} + +Note that when using hosted services, you have to ensure that the proper API +keys are set as environment variables as described by the corresponding +provider's documentation. + +E. g. when using OpenAI, you have to get an API key from openai.com, and ensure +that the keys are set as environmental variables: + +```shell +export OPENAI_API_KEY="sk-..." +export OPENAI_API_ORG="org-..." +``` + +For Cohere: + +```shell +export CO_API_KEY="..." +``` + +For Anthropic: + +```shell +export ANTHROPIC_API_KEY="..." +``` + +For PaLM: + +```shell +export PALM_API_KEY="..." 
+``` + +### Models via HuggingFace {id="models-hf"} + +These models all take the same parameters: + +| Argument | Description | +| ------------- | ------------------------------------------------------------------------------------------------------------------------------------- | +| `name` | Model name, i. e. any supported variant for this particular model. ~~str~~ | +| `config_init` | Further configuration passed on to the construction of the model with `transformers.pipeline()`. Defaults to `{}`. ~~Dict[str, Any]~~ | +| `config_run` | Further configuration used during model inference. Defaults to `{}`. ~~Dict[str, Any]~~ | + +> #### Example config +> +> ```ini +> [components.llm.model] +> @llm_models = "spacy.Llama2.v1" +> name = "llama2-7b-hf" +> ``` + +Currently, these models are provided as part of the core library: + +| Model | Provider | Supported names | HF directory | +| -------------------- | --------------- | ------------------------------------------------------------------------------------------------------------ | -------------------------------------- | +| `spacy.Dolly.v1` | Databricks | `["dolly-v2-3b", "dolly-v2-7b", "dolly-v2-12b"]` | https://huggingface.co/databricks | +| `spacy.Falcon.v1` | TII | `["falcon-rw-1b", "falcon-7b", "falcon-7b-instruct", "falcon-40b-instruct"]` | https://huggingface.co/tiiuae | +| `spacy.Llama2.v1` | Meta AI | `["Llama-2-7b-hf", "Llama-2-13b-hf", "Llama-2-70b-hf"]` | https://huggingface.co/meta-llama | +| `spacy.Mistral.v1` | Mistral AI | `["Mistral-7B-v0.1", "Mistral-7B-Instruct-v0.1"]` | https://huggingface.co/mistralai | +| `spacy.StableLM.v1` | Stability AI | `["stablelm-base-alpha-3b", "stablelm-base-alpha-7b", "stablelm-tuned-alpha-3b", "stablelm-tuned-alpha-7b"]` | https://huggingface.co/stabilityai | +| `spacy.OpenLLaMA.v1` | OpenLM Research | `["open_llama_3b", "open_llama_7b", "open_llama_7b_v2", "open_llama_13b"]` | https://huggingface.co/openlm-research | + + + +Some models available on Hugging Face (HF), such as Llama 2, are _gated models_. +That means that users have to fulfill certain requirements to be allowed access +to these models. In the case of Llama 2 you'll need to request agree to Meta's +Terms of Service while logged in with your HF account. After Meta grants you +permission to use Llama 2, you'll be able to download and use the model. + +This requires that you are logged in with your HF account on your local +machine - check out the HF quick start documentation. In a nutshell, you'll need +to create an access token on HF and log in to HF using your access token, e. g. +with `huggingface-cli login`. + + + +Note that Hugging Face will download the model the first time you use it - you +can +[define the cached directory](https://huggingface.co/docs/huggingface_hub/main/en/guides/manage-cache) +by setting the environmental variable `HF_HOME`. + +#### Installation with HuggingFace {id="install-hf"} + +To use models from HuggingFace, ideally you have a GPU enabled and have +installed `transformers`, `torch` and CUDA in your virtual environment. This +allows you to have the setting `device=cuda:0` in your config, which ensures +that the model is loaded entirely on the GPU (and fails otherwise). 
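+
+For illustration, a GPU setup for one of the models above might look like the
+following sketch. This assumes that the 3B Dolly variant fits into your GPU
+memory and that the `device` setting is forwarded to the underlying
+`transformers` loader via `config_init`:
+
+```ini
+[components.llm.model]
+@llm_models = "spacy.Dolly.v1"
+name = "dolly-v2-3b"
+# Assumption: `device` is passed through to `transformers` via `config_init`.
+config_init = {"device": "cuda:0"}
+```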
+
+You can install these dependencies with
+
+```shell
+python -m pip install "spacy-llm[transformers]" "transformers[sentencepiece]"
+```
+
+If you don't have access to a GPU, you can install `accelerate` and set
+`device_map=auto` instead, but be aware that this may result in some layers
+getting distributed to the CPU or even the hard drive, which may ultimately
+result in extremely slow queries.
+
+```shell
+python -m pip install "accelerate>=0.16.0,<1.0"
+```
+
+### LangChain models {id="langchain-models"}
+
+To use [LangChain](https://github.com/hwchase17/langchain) for the API retrieval
+part, make sure you have installed it first:
+
+```shell
+python -m pip install "langchain==0.0.191"
+# Or install with spacy-llm directly
+python -m pip install "spacy-llm[extras]"
+```
+
+Note that LangChain currently requires Python 3.9 or later.
+
+LangChain models in `spacy-llm` work slightly differently. `langchain`'s models
+are parsed automatically: each LLM class in `langchain` has one entry in
+`spacy-llm`'s registry. As `langchain`'s design has one class per API and not
+per model, this results in registry entries like `langchain.OpenAI.v1` - i. e.
+there is one registry entry per API rather than one per model (family), as is
+the case for the REST- and HuggingFace-based entries.
+
+The name of the model to be used has to be passed in via the `name` attribute.
+
+> #### Example config
+>
+> ```ini
+> [components.llm.model]
+> @llm_models = "langchain.OpenAI.v1"
+> name = "gpt-3.5-turbo"
+> query = {"@llm_queries": "spacy.CallLangChain.v1"}
+> config = {"temperature": 0.0}
+> ```
+
+| Argument | Description                                                                                                                                                             |
+| -------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `name`   | The name of a model supported by LangChain for this API. ~~str~~                                                                                                        |
+| `config` | Configuration passed on to the LangChain model. Defaults to `{}`. ~~Dict[Any, Any]~~                                                                                    |
+| `query`  | Function that executes the prompts. If `None`, defaults to `spacy.CallLangChain.v1`. ~~Optional[Callable[["langchain.llms.BaseLLM", Iterable[Any]], Iterable[Any]]]~~  |
+
+The default `query` (`spacy.CallLangChain.v1`) executes the prompts by running
+`model(text)` for each given textual prompt.
+
+## Cache {id="cache"}
+
+Interacting with LLMs, either through an external API or a local instance, is
+costly. Since developing an NLP pipeline generally means a lot of exploration
+and prototyping, `spacy-llm` implements a built-in cache that keeps batches of
+documents stored on disk, so the same documents don't have to be reprocessed on
+every run.
+
+> #### Example config
+>
+> ```ini
+> [components.llm.cache]
+> @llm_misc = "spacy.BatchCache.v1"
+> path = "path/to/cache"
+> batch_size = 64
+> max_batches_in_mem = 4
+> ```
+
+| Argument             | Description                                                                                                                                        |
+| -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `path`               | Cache directory. If `None`, no caching is performed, and this component will act as a NoOp. Defaults to `None`. ~~Optional[Union[str, Path]]~~    |
+| `batch_size`         | Number of docs in one batch (file). Once a batch is full, it will be persisted to disk. Defaults to 64. ~~int~~                                    |
+| `max_batches_in_mem` | Max. number of batches to hold in memory. Allows you to limit the effect on your memory if you're handling a lot of docs. Defaults to 4. 
~~int~~ | + +When retrieving a document, the `BatchCache` will first figure out what batch +the document belongs to. If the batch isn't in memory it will try to load the +batch from disk and then move it into memory. + +Note that since the cache is generated by a registered function, you can also +provide your own registered function returning your own cache implementation. If +you wish to do so, ensure that your cache object adheres to the `Protocol` +defined in `spacy_llm.ty.Cache`. + +## Various functions {id="various-functions"} + +### spacy.FewShotReader.v1 {id="fewshotreader-v1"} + +This function is registered in spaCy's `misc` registry, and reads in examples +from a `.yml`, `.yaml`, `.json` or `.jsonl` file. It uses +[`srsly`](https://github.com/explosion/srsly) to read in these files and parses +them depending on the file extension. + +> #### Example config +> +> ```ini +> [components.llm.task.examples] +> @misc = "spacy.FewShotReader.v1" +> path = "ner_examples.yml" +> ``` + +| Argument | Description | +| -------- | ----------------------------------------------------------------------------------------------- | +| `path` | Path to an examples file with suffix `.yml`, `.yaml`, `.json` or `.jsonl`. ~~Union[str, Path]~~ | + +### spacy.FileReader.v1 {id="filereader-v1"} + +This function is registered in spaCy's `misc` registry, and reads a file +provided to the `path` to return a `str` representation of its contents. This +function is typically used to read +[Jinja](https://jinja.palletsprojects.com/en/3.1.x/) files containing the prompt +template. + +> #### Example config +> +> ```ini +> [components.llm.task.template] +> @misc = "spacy.FileReader.v1" +> path = "ner_template.jinja2" +> ``` + +| Argument | Description | +| -------- | ------------------------------------------------- | +| `path` | Path to the file to be read. ~~Union[str, Path]~~ | + +### Normalizer functions {id="normalizer-functions"} + +These functions provide simple normalizations for string comparisons, e.g. +between a list of specified labels and a label given in the raw text of the LLM +response. They are registered in spaCy's `misc` registry and have the signature +`Callable[[str], str]`. + +- `spacy.StripNormalizer.v1`: only apply `text.strip()` +- `spacy.LowercaseNormalizer.v1`: applies `text.strip().lower()` to compare + strings in a case-insensitive way. diff --git a/website/docs/api/legacy.mdx b/website/docs/api/legacy.mdx index ea6d3a899..b44df5387 100644 --- a/website/docs/api/legacy.mdx +++ b/website/docs/api/legacy.mdx @@ -162,7 +162,10 @@ network has an internal CNN Tok2Vec layer and uses attention. Since `spacy.TextCatCNN.v2`, this architecture has become resizable, which means that you can add labels to a previously trained textcat. `TextCatCNN` v1 did not -yet support that. +yet support that. `TextCatCNN` has been replaced by the more general +[`TextCatReduce`](/api/architectures#TextCatReduce) layer. `TextCatCNN` is +identical to `TextCatReduce` with `use_reduce_mean=true`, +`use_reduce_first=false`, `reduce_last=false` and `use_reduce_max=false`. > #### Example Config > @@ -194,11 +197,58 @@ architecture is usually less accurate than the ensemble, but runs faster. | `nO` | Output dimension, determined by the number of different labels. If not set, the [`TextCategorizer`](/api/textcategorizer) component will set it when `initialize` is called. ~~Optional[int]~~ | | **CREATES** | The model using the architecture. 
~~Model[List[Doc], Floats2d]~~ | +### spacy.TextCatCNN.v2 {id="TextCatCNN_v2"} + +> #### Example Config +> +> ```ini +> [model] +> @architectures = "spacy.TextCatCNN.v2" +> exclusive_classes = false +> nO = null +> +> [model.tok2vec] +> @architectures = "spacy.HashEmbedCNN.v2" +> pretrained_vectors = null +> width = 96 +> depth = 4 +> embed_size = 2000 +> window_size = 1 +> maxout_pieces = 3 +> subword_features = true +> ``` + +A neural network model where token vectors are calculated using a CNN. The +vectors are mean pooled and used as features in a feed-forward network. This +architecture is usually less accurate than the ensemble, but runs faster. + +`TextCatCNN` has been replaced by the more general +[`TextCatReduce`](/api/architectures#TextCatReduce) layer. `TextCatCNN` is +identical to `TextCatReduce` with `use_reduce_mean=true`, +`use_reduce_first=false`, `reduce_last=false` and `use_reduce_max=false`. + +| Name | Description | +| ------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `exclusive_classes` | Whether or not categories are mutually exclusive. ~~bool~~ | +| `tok2vec` | The [`tok2vec`](#tok2vec) layer of the model. ~~Model~~ | +| `nO` | Output dimension, determined by the number of different labels. If not set, the [`TextCategorizer`](/api/textcategorizer) component will set it when `initialize` is called. ~~Optional[int]~~ | +| **CREATES** | The model using the architecture. ~~Model[List[Doc], Floats2d]~~ | + + + +[TextCatCNN.v1](/api/legacy#TextCatCNN_v1) had the exact same signature, but was +not yet resizable. Since v2, new labels can be added to this component, even +after training. + + + ### spacy.TextCatBOW.v1 {id="TextCatBOW_v1"} Since `spacy.TextCatBOW.v2`, this architecture has become resizable, which means that you can add labels to a previously trained textcat. `TextCatBOW` v1 did not -yet support that. +yet support that. Versions of this model before `spacy.TextCatBOW.v3` used an +erroneous sparse linear layer that only used a small number of the allocated +parameters. > #### Example Config > @@ -222,6 +272,33 @@ the others, but may not be as accurate, especially if texts are short. | `nO` | Output dimension, determined by the number of different labels. If not set, the [`TextCategorizer`](/api/textcategorizer) component will set it when `initialize` is called. ~~Optional[int]~~ | | **CREATES** | The model using the architecture. ~~Model[List[Doc], Floats2d]~~ | +### spacy.TextCatBOW.v2 {id="TextCatBOW"} + +Versions of this model before `spacy.TextCatBOW.v3` used an erroneous sparse +linear layer that only used a small number of the allocated parameters. + +> #### Example Config +> +> ```ini +> [model] +> @architectures = "spacy.TextCatBOW.v2" +> exclusive_classes = false +> ngram_size = 1 +> no_output_layer = false +> nO = null +> ``` + +An n-gram "bag-of-words" model. This architecture should run much faster than +the others, but may not be as accurate, especially if texts are short. + +| Name | Description | +| ------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `exclusive_classes` | Whether or not categories are mutually exclusive. ~~bool~~ | +| `ngram_size` | Determines the maximum length of the n-grams in the BOW model. 
For instance, `ngram_size=3` would give unigram, trigram and bigram features. ~~int~~ | +| `no_output_layer` | Whether or not to add an output layer to the model (`Softmax` activation if `exclusive_classes` is `True`, else `Logistic`). ~~bool~~ | +| `nO` | Output dimension, determined by the number of different labels. If not set, the [`TextCategorizer`](/api/textcategorizer) component will set it when `initialize` is called. ~~Optional[int]~~ | +| **CREATES** | The model using the architecture. ~~Model[List[Doc], Floats2d]~~ | + ### spacy.TransitionBasedParser.v1 {id="TransitionBasedParser_v1"} Identical to diff --git a/website/docs/api/spancategorizer.mdx b/website/docs/api/spancategorizer.mdx index 72df2e409..1f7620733 100644 --- a/website/docs/api/spancategorizer.mdx +++ b/website/docs/api/spancategorizer.mdx @@ -67,7 +67,6 @@ architectures and their arguments and hyperparameters. > ```python > from spacy.pipeline.spancat import DEFAULT_SPANCAT_SINGLELABEL_MODEL > config = { -> "threshold": 0.5, > "spans_key": "labeled_spans", > "model": DEFAULT_SPANCAT_SINGLELABEL_MODEL, > "suggester": {"@misc": "spacy.ngram_suggester.v1", "sizes": [1, 2, 3]}, @@ -91,6 +90,21 @@ architectures and their arguments and hyperparameters. | `allow_overlap` 3.5.1 | If `True`, the data is assumed to contain overlapping spans. It is only available when `max_positive` is exactly 1. Defaults to `True`. ~~bool~~ | | `save_activations` 4.0 | Save activations in `Doc` when annotating. Saved activations are `"indices"` and `"scores"`. ~~Union[bool, list[str]]~~ | + + +If you set a non-default value for `spans_key`, you'll have to update +`[training.score_weights]` as well so that weights are computed properly. E. g. +for `spans_key == "myspankey"`, include this in your config: + +```ini +[training.score_weights] +spans_myspankey_f = 1.0 +spans_myspankey_p = 0.0 +spans_myspankey_r = 0.0 +``` + + + ```python %%GITHUB_SPACY/spacy/pipeline/spancat.py ``` @@ -523,7 +537,7 @@ has two columns, indicating the start and end position. | Name | Description | | ----------- | ---------------------------------------------------------------------------- | | `min_size` | The minimal phrase lengths to suggest (inclusive). ~~[int]~~ | -| `max_size` | The maximal phrase lengths to suggest (exclusive). ~~[int]~~ | +| `max_size` | The maximal phrase lengths to suggest (inclusive). ~~[int]~~ | | **CREATES** | The suggester function. ~~Callable[[Iterable[Doc], Optional[Ops]], Ragged]~~ | ### spacy.preset_spans_suggester.v1 {id="preset_spans_suggester"} diff --git a/website/docs/api/spanruler.mdx b/website/docs/api/spanruler.mdx index c1037435b..9b8372a8b 100644 --- a/website/docs/api/spanruler.mdx +++ b/website/docs/api/spanruler.mdx @@ -128,7 +128,7 @@ config. Any existing patterns are removed on initialization. > > [initialize.components.span_ruler.patterns] > @readers = "srsly.read_jsonl.v1" -> path = "corpus/span_ruler_patterns.jsonl +> path = "corpus/span_ruler_patterns.jsonl" > ``` | Name | Description | diff --git a/website/docs/api/top-level.mdx b/website/docs/api/top-level.mdx index fe0c27ec0..a2d4bbdd3 100644 --- a/website/docs/api/top-level.mdx +++ b/website/docs/api/top-level.mdx @@ -69,7 +69,7 @@ weights, and returns it. cls = spacy.util.get_lang_class(lang) # 1. Get Language class, e.g. English nlp = cls() # 2. Initialize it for name in pipeline: - nlp.add_pipe(name) # 3. Add the component to the pipeline + nlp.add_pipe(name, config={...}) # 3. Add the component to the pipeline nlp.from_disk(data_path) # 4. 
Load in the binary data ``` @@ -344,6 +344,130 @@ use with the `manual=True` argument in `displacy.render`. | `options` | Span-specific visualisation options. ~~Dict[str, Any]~~ | | **RETURNS** | Generated entities keyed by text (original text) and ents. ~~dict~~ | +### Visualizer data structures {id="displacy_structures"} + +You can use displaCy's data format to manually render data. This can be useful +if you want to visualize output from other libraries. You can find examples of +displaCy's different data formats below. + +> #### DEP example data structure +> +> ```json +> { +> "words": [ +> { "text": "This", "tag": "DT" }, +> { "text": "is", "tag": "VBZ" }, +> { "text": "a", "tag": "DT" }, +> { "text": "sentence", "tag": "NN" } +> ], +> "arcs": [ +> { "start": 0, "end": 1, "label": "nsubj", "dir": "left" }, +> { "start": 2, "end": 3, "label": "det", "dir": "left" }, +> { "start": 1, "end": 3, "label": "attr", "dir": "right" } +> ] +> } +> ``` + +#### Dependency Visualizer data structure {id="structure-dep"} + +| Dictionary Key | Description | +| -------------- | ----------------------------------------------------------------------------------------------------------- | +| `words` | List of dictionaries describing a word token (see structure below). ~~List[Dict[str, Any]]~~ | +| `arcs` | List of dictionaries describing the relations between words (see structure below). ~~List[Dict[str, Any]]~~ | +| _Optional_ | | +| `title` | Title of the visualization. ~~Optional[str]~~ | +| `settings` | Dependency Visualizer options (see [here](/api/top-level#displacy_options)). ~~Dict[str, Any]~~ | + + + +| Dictionary Key | Description | +| -------------- | ---------------------------------------- | +| `text` | Text content of the word. ~~str~~ | +| `tag` | Fine-grained part-of-speech. ~~str~~ | +| `lemma` | Base form of the word. ~~Optional[str]~~ | + + + + + +| Dictionary Key | Description | +| -------------- | ---------------------------------------------------- | +| `start` | The index of the starting token. ~~int~~ | +| `end` | The index of the ending token. ~~int~~ | +| `label` | The type of dependency relation. ~~str~~ | +| `dir` | Direction of the relation (`left`, `right`). ~~str~~ | + + + +> #### ENT example data structure +> +> ```json +> { +> "text": "But Google is starting from behind.", +> "ents": [{ "start": 4, "end": 10, "label": "ORG" }] +> } +> ``` + +#### Named Entity Recognition data structure {id="structure-ent"} + +| Dictionary Key | Description | +| -------------- | ------------------------------------------------------------------------------------------- | +| `text` | String representation of the document text. ~~str~~ | +| `ents` | List of dictionaries describing entities (see structure below). ~~List[Dict[str, Any]]~~ | +| _Optional_ | | +| `title` | Title of the visualization. ~~Optional[str]~~ | +| `settings` | Entity Visualizer options (see [here](/api/top-level#displacy_options)). ~~Dict[str, Any]~~ | + + + +| Dictionary Key | Description | +| -------------- | ---------------------------------------------------------------------- | +| `start` | The index of the first character of the entity. ~~int~~ | +| `end` | The index of the last character of the entity. (not inclusive) ~~int~~ | +| `label` | Label attached to the entity. ~~str~~ | +| _Optional_ | | +| `kb_id` | `KnowledgeBase` ID. ~~str~~ | +| `kb_url` | `KnowledgeBase` URL. 
~~str~~ | + + + +> #### SPAN example data structure +> +> ```json +> { +> "text": "Welcome to the Bank of China.", +> "spans": [ +> { "start_token": 3, "end_token": 6, "label": "ORG" }, +> { "start_token": 5, "end_token": 6, "label": "GPE" } +> ], +> "tokens": ["Welcome", "to", "the", "Bank", "of", "China", "."] +> } +> ``` + +#### Span Classification data structure {id="structure-span"} + +| Dictionary Key | Description | +| -------------- | ----------------------------------------------------------------------------------------- | +| `text` | String representation of the document text. ~~str~~ | +| `spans` | List of dictionaries describing spans (see structure below). ~~List[Dict[str, Any]]~~ | +| `tokens` | List of word tokens. ~~List[str]~~ | +| _Optional_ | | +| `title` | Title of the visualization. ~~Optional[str]~~ | +| `settings` | Span Visualizer options (see [here](/api/top-level#displacy_options)). ~~Dict[str, Any]~~ | + + + +| Dictionary Key | Description | +| -------------- | ------------------------------------------------------------- | +| `start_token` | The index of the first token of the span in `tokens`. ~~int~~ | +| `end_token` | The index of the last token of the span in `tokens`. ~~int~~ | +| `label` | Label attached to the span. ~~str~~ | +| _Optional_ | | +| `kb_id` | `KnowledgeBase` ID. ~~str~~ | +| `kb_url` | `KnowledgeBase` URL. ~~str~~ | + + + ### Visualizer options {id="displacy_options"} The `options` argument lets you specify additional settings for each visualizer. diff --git a/website/docs/api/transformer.mdx b/website/docs/api/transformer.mdx index ad8ecce54..8f024553d 100644 --- a/website/docs/api/transformer.mdx +++ b/website/docs/api/transformer.mdx @@ -397,6 +397,17 @@ are wrapped into the by this class. Instances of this class are typically assigned to the [`Doc._.trf_data`](/api/transformer#assigned-attributes) extension attribute. +> #### Example +> +> ```python +> # Get the last hidden layer output for "is" (token index 1) +> doc = nlp("This is a text.") +> indices = doc._.trf_data.align[1].data.flatten() +> last_hidden_state = doc._.trf_data.model_output.last_hidden_state +> dim = last_hidden_state.shape[-1] +> tensors = last_hidden_state.reshape(-1, dim)[indices] +> ``` + | Name | Description | | -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | `tokens` | A slice of the tokens data produced by the tokenizer. This may have several fields, including the token IDs, the texts and the attention mask. See the [`transformers.BatchEncoding`](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.BatchEncoding) object for details. ~~dict~~ | diff --git a/website/docs/api/vectors.mdx b/website/docs/api/vectors.mdx index 787f10fa3..39b309e13 100644 --- a/website/docs/api/vectors.mdx +++ b/website/docs/api/vectors.mdx @@ -296,10 +296,9 @@ The vector size, i.e. `rows * dims`. ## Vectors.is_full {id="is_full",tag="property"} -Whether the vectors table is full and has no slots are available for new keys. -If a table is full, it can be resized using -[`Vectors.resize`](/api/vectors#resize). In `floret` mode, the table is always -full and cannot be resized. +Whether the vectors table is full and no slots are available for new keys. 
If a +table is full, it can be resized using [`Vectors.resize`](/api/vectors#resize). +In `floret` mode, the table is always full and cannot be resized. > #### Example > @@ -440,7 +439,7 @@ Load state from a binary string. > #### Example > > ```python -> fron spacy.vectors import Vectors +> from spacy.vectors import Vectors > vectors_bytes = vectors.to_bytes() > new_vectors = Vectors(StringStore()) > new_vectors.from_bytes(vectors_bytes) diff --git a/website/public/images/displacy-long2.html b/website/docs/images/displacy-long2.html similarity index 100% rename from website/public/images/displacy-long2.html rename to website/docs/images/displacy-long2.html diff --git a/website/docs/models/index.mdx b/website/docs/models/index.mdx index 366d44f0e..54f3c4906 100644 --- a/website/docs/models/index.mdx +++ b/website/docs/models/index.mdx @@ -108,12 +108,12 @@ In the `sm`/`md`/`lg` models: #### CNN/CPU pipelines with floret vectors -The Finnish, Korean and Swedish `md` and `lg` pipelines use -[floret vectors](/usage/v3-2#vectors) instead of default vectors. If you're -running a trained pipeline on texts and working with [`Doc`](/api/doc) objects, -you shouldn't notice any difference with floret vectors. With floret vectors no -tokens are out-of-vocabulary, so [`Token.is_oov`](/api/token#attributes) will -return `False` for all tokens. +The Croatian, Finnish, Korean, Slovenian, Swedish and Ukrainian `md` and `lg` +pipelines use [floret vectors](/usage/v3-2#vectors) instead of default vectors. +If you're running a trained pipeline on texts and working with [`Doc`](/api/doc) +objects, you shouldn't notice any difference with floret vectors. With floret +vectors no tokens are out-of-vocabulary, so +[`Token.is_oov`](/api/token#attributes) will return `False` for all tokens. If you access vectors directly for similarity comparisons, there are a few differences because floret vectors don't include a fixed word list like the @@ -132,10 +132,20 @@ vector keys for default vectors. ### Transformer pipeline design {id="design-trf"} -In the transformer (`trf`) models, the `tagger`, `parser` and `ner` (if present) -all listen to the `transformer` component. The `attribute_ruler` and +In the transformer (`trf`) pipelines, the `tagger`, `parser` and `ner` (if +present) all listen to the `transformer` component. The `attribute_ruler` and `lemmatizer` have the same configuration as in the CNN models. +For spaCy v3.0-v3.6, `trf` pipelines use +[`spacy-transformers`](https://github.com/explosion/spacy-transformers) and the +transformer output in `doc._.trf_data` is a +[`TransformerData`](/api/transformer#transformerdata) object. + +For spaCy v3.7+, `trf` pipelines use +[`spacy-curated-transformers`](https://github.com/explosion/spacy-curated-transformers) +and `doc._.trf_data` is a +[`DocTransformerOutput`](/api/curatedtransformer#doctransformeroutput) object. + ### Modifying the default pipeline {id="design-modify"} For faster processing, you may only want to run a subset of the components in a diff --git a/website/docs/usage/101/_named-entities.mdx b/website/docs/usage/101/_named-entities.mdx index 9ae4134d8..da43c0ddd 100644 --- a/website/docs/usage/101/_named-entities.mdx +++ b/website/docs/usage/101/_named-entities.mdx @@ -31,8 +31,6 @@ for ent in doc.ents: Using spaCy's built-in [displaCy visualizer](/usage/visualizers), here's what our example sentence and its named entities look like: -