Merge pull request #13027 from adrianeboyd/chore/update-develop-from-master-v3.7-1
Update develop from master for v3.7

Commit 36201cb6a1
README.md (72 changed lines)

@@ -6,23 +6,20 @@ spaCy is a library for **advanced Natural Language Processing** in Python and
 Cython. It's built on the very latest research, and was designed from day one to
 be used in real products.
 
-spaCy comes with
-[pretrained pipelines](https://spacy.io/models) and
-currently supports tokenization and training for **70+ languages**. It features
-state-of-the-art speed and **neural network models** for tagging,
-parsing, **named entity recognition**, **text classification** and more,
-multi-task learning with pretrained **transformers** like BERT, as well as a
+spaCy comes with [pretrained pipelines](https://spacy.io/models) and currently
+supports tokenization and training for **70+ languages**. It features
+state-of-the-art speed and **neural network models** for tagging, parsing,
+**named entity recognition**, **text classification** and more, multi-task
+learning with pretrained **transformers** like BERT, as well as a
 production-ready [**training system**](https://spacy.io/usage/training) and easy
 model packaging, deployment and workflow management. spaCy is commercial
-open-source software, released under the [MIT license](https://github.com/explosion/spaCy/blob/master/LICENSE).
+open-source software, released under the
+[MIT license](https://github.com/explosion/spaCy/blob/master/LICENSE).
 
-💥 **We'd love to hear more about your experience with spaCy!**
-[Fill out our survey here.](https://form.typeform.com/to/aMel9q9f)
-
-💫 **Version 3.5 out now!**
+💫 **Version 3.6 out now!**
 [Check out the release notes here.](https://github.com/explosion/spaCy/releases)
 
-[![Azure Pipelines](https://img.shields.io/azure-devops/build/explosion-ai/public/8/master.svg?logo=azure-pipelines&style=flat-square&label=build)](https://dev.azure.com/explosion-ai/public/_build?definitionId=8)
+[![tests](https://github.com/explosion/spaCy/actions/workflows/tests.yml/badge.svg)](https://github.com/explosion/spaCy/actions/workflows/tests.yml)
 [![Current Release Version](https://img.shields.io/github/release/explosion/spacy.svg?style=flat-square&logo=github)](https://github.com/explosion/spaCy/releases)
 [![pypi Version](https://img.shields.io/pypi/v/spacy.svg?style=flat-square&logo=pypi&logoColor=white)](https://pypi.org/project/spacy/)
 [![conda Version](https://img.shields.io/conda/vn/conda-forge/spacy.svg?style=flat-square&logo=conda-forge&logoColor=white)](https://anaconda.org/conda-forge/spacy)

@@ -35,22 +32,22 @@ open-source software, released under the [MIT license](https://github.com/explos
 
 ## 📖 Documentation
 
 | Documentation |  |
-| ----------------------------- | ---------------------------------------------------------------------- |
+| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | ⭐️ **[spaCy 101]** | New to spaCy? Here's everything you need to know! |
 | 📚 **[Usage Guides]** | How to use spaCy and its features. |
 | 🚀 **[New in v3.0]** | New features, backwards incompatibilities and migration guide. |
 | 🪐 **[Project Templates]** | End-to-end workflows you can clone, modify and run. |
 | 🎛 **[API Reference]** | The detailed reference for spaCy's API. |
 | 📦 **[Models]** | Download trained pipelines for spaCy. |
 | 🌌 **[Universe]** | Plugins, extensions, demos and books from the spaCy ecosystem. |
 | ⚙️ **[spaCy VS Code Extension]** | Additional tooling and features for working with spaCy's config files. |
 | 👩‍🏫 **[Online Course]** | Learn spaCy in this free and interactive online course. |
 | 📺 **[Videos]** | Our YouTube channel with video tutorials, talks and more. |
 | 🛠 **[Changelog]** | Changes and version history. |
 | 💝 **[Contribute]** | How to contribute to the spaCy project and code base. |
 | <a href="https://explosion.ai/spacy-tailored-pipelines"><img src="https://user-images.githubusercontent.com/13643239/152853098-1c761611-ccb0-4ec6-9066-b234552831fe.png" width="125" alt="spaCy Tailored Pipelines"/></a> | Get a custom spaCy pipeline, tailor-made for your NLP problem by spaCy's core developers. Streamlined, production-ready, predictable and maintainable. Start by completing our 5-minute questionnaire to tell us what you need and we'll be in touch! **[Learn more →](https://explosion.ai/spacy-tailored-pipelines)** |
 | <a href="https://explosion.ai/spacy-tailored-analysis"><img src="https://user-images.githubusercontent.com/1019791/206151300-b00cd189-e503-4797-aa1e-1bb6344062c5.png" width="125" alt="spaCy Tailored Pipelines"/></a> | Bespoke advice for problem solving, strategy and analysis for applied NLP projects. Services include data strategy, code reviews, pipeline design and annotation coaching. Curious? Fill in our 5-minute questionnaire to tell us what you need and we'll be in touch! **[Learn more →](https://explosion.ai/spacy-tailored-analysis)** |
 
 [spacy 101]: https://spacy.io/usage/spacy-101
 [new in v3.0]: https://spacy.io/usage/v3

@@ -58,7 +55,7 @@ open-source software, released under the [MIT license](https://github.com/explos
 [api reference]: https://spacy.io/api/
 [models]: https://spacy.io/models
 [universe]: https://spacy.io/universe
-[spaCy VS Code Extension]: https://github.com/explosion/spacy-vscode
+[spacy vs code extension]: https://github.com/explosion/spacy-vscode
 [videos]: https://www.youtube.com/c/ExplosionAI
 [online course]: https://course.spacy.io
 [project templates]: https://github.com/explosion/projects

@@ -92,7 +89,9 @@ more people can benefit from it.
 - State-of-the-art speed
 - Production-ready **training system**
 - Linguistically-motivated **tokenization**
-- Components for named **entity recognition**, part-of-speech-tagging, dependency parsing, sentence segmentation, **text classification**, lemmatization, morphological analysis, entity linking and more
+- Components for named **entity recognition**, part-of-speech-tagging,
+  dependency parsing, sentence segmentation, **text classification**,
+  lemmatization, morphological analysis, entity linking and more
 - Easily extensible with **custom components** and attributes
 - Support for custom models in **PyTorch**, **TensorFlow** and other frameworks
 - Built in **visualizers** for syntax and NER

@@ -118,8 +117,8 @@ For detailed installation instructions, see the
 ### pip
 
 Using pip, spaCy releases are available as source packages and binary wheels.
-Before you install spaCy and its dependencies, make sure that
-your `pip`, `setuptools` and `wheel` are up to date.
+Before you install spaCy and its dependencies, make sure that your `pip`,
+`setuptools` and `wheel` are up to date.
 
 ```bash
 pip install -U pip setuptools wheel

@@ -174,9 +173,9 @@ with the new version.
 
 ## 📦 Download model packages
 
-Trained pipelines for spaCy can be installed as **Python packages**. This
-means that they're a component of your application, just like any other module.
-Models can be installed using spaCy's [`download`](https://spacy.io/api/cli#download)
+Trained pipelines for spaCy can be installed as **Python packages**. This means
+that they're a component of your application, just like any other module. Models
+can be installed using spaCy's [`download`](https://spacy.io/api/cli#download)
 command, or manually by pointing pip to a path or URL.
 
 | Documentation |  |

@@ -242,8 +241,7 @@ do that depends on your system.
 | **Mac** | Install a recent version of [XCode](https://developer.apple.com/xcode/), including the so-called "Command Line Tools". macOS and OS X ship with Python and git preinstalled. |
 | **Windows** | Install a version of the [Visual C++ Build Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/) or [Visual Studio Express](https://visualstudio.microsoft.com/vs/express/) that matches the version that was used to compile your Python interpreter. |
 
-For more details
-and instructions, see the documentation on
+For more details and instructions, see the documentation on
 [compiling spaCy from source](https://spacy.io/usage#source) and the
 [quickstart widget](https://spacy.io/usage#section-quickstart) to get the right
 commands for your platform and Python version.
@@ -18,7 +18,7 @@ numpy>=1.15.0; python_version < "3.9"
 numpy>=1.19.0; python_version >= "3.9"
 requests>=2.13.0,<3.0.0
 tqdm>=4.38.0,<5.0.0
-pydantic>=1.7.4,!=1.8,!=1.8.1,<1.11.0
+pydantic>=1.7.4,!=1.8,!=1.8.1,<3.0.0
 jinja2
 langcodes>=3.2.0,<4.0.0
 # Official Python utilities
@@ -62,7 +62,7 @@ install_requires =
     numpy>=1.15.0; python_version < "3.9"
     numpy>=1.19.0; python_version >= "3.9"
     requests>=2.13.0,<3.0.0
-    pydantic>=1.7.4,!=1.8,!=1.8.1,<1.11.0
+    pydantic>=1.7.4,!=1.8,!=1.8.1,<3.0.0
     jinja2
     # Official Python utilities
     setuptools

@@ -113,6 +113,8 @@ cuda117 =
     cupy-cuda117>=5.0.0b4,<13.0.0
 cuda11x =
     cupy-cuda11x>=11.0.0,<13.0.0
+cuda12x =
+    cupy-cuda12x>=11.5.0,<13.0.0
 cuda-autodetect =
     cupy-wheel>=11.0.0,<13.0.0
 apple =
setup.py (31 changed lines)

@@ -1,10 +1,9 @@
 #!/usr/bin/env python
 from setuptools import Extension, setup, find_packages
 import sys
-import platform
 import numpy
-from distutils.command.build_ext import build_ext
-from distutils.sysconfig import get_python_inc
+from setuptools.command.build_ext import build_ext
+from sysconfig import get_path
 from pathlib import Path
 import shutil
 from Cython.Build import cythonize

@@ -88,30 +87,6 @@ COPY_FILES = {
 }
 
 
-def is_new_osx():
-    """Check whether we're on OSX >= 10.7"""
-    if sys.platform != "darwin":
-        return False
-    mac_ver = platform.mac_ver()[0]
-    if mac_ver.startswith("10"):
-        minor_version = int(mac_ver.split(".")[1])
-        if minor_version >= 7:
-            return True
-        else:
-            return False
-    return False
-
-
-if is_new_osx():
-    # On Mac, use libc++ because Apple deprecated use of
-    # libstdc
-    COMPILE_OPTIONS["other"].append("-stdlib=libc++")
-    LINK_OPTIONS["other"].append("-lc++")
-    # g++ (used by unix compiler on mac) links to libstdc++ as a default lib.
-    # See: https://stackoverflow.com/questions/1653047/avoid-linking-to-libstdc
-    LINK_OPTIONS["other"].append("-nodefaultlibs")
-
-
 # By subclassing build_extensions we have the actual compiler that will be used which is really known only after finalize_options
 # http://stackoverflow.com/questions/724664/python-distutils-how-to-get-a-compiler-that-is-going-to-be-used
 class build_ext_options:

@@ -204,7 +179,7 @@ def setup_package():
 
     include_dirs = [
         numpy.get_include(),
-        get_python_inc(plat_specific=True),
+        get_path("include"),
     ]
     ext_modules = []
     ext_modules.append(
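For context on the `get_path("include")` swap above: `sysconfig` is the standard-library replacement for the deprecated `distutils.sysconfig` helper, and the lookup it performs is equivalent. A minimal sketch:

```python
# sysconfig.get_path("include") returns the C header directory of the running
# interpreter (the exact path is platform dependent), replacing the deprecated
# distutils.sysconfig.get_python_inc().
from sysconfig import get_path

print(get_path("include"))
```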
@@ -13,7 +13,6 @@ from thinc.api import Config, prefer_gpu, require_cpu, require_gpu  # noqa: F401
 from . import pipeline  # noqa: F401
 from . import util
 from .about import __version__  # noqa: F401
-from .cli.info import info  # noqa: F401
 from .errors import Errors
 from .glossary import explain  # noqa: F401
 from .language import Language

@@ -77,3 +76,9 @@ def blank(
     # We should accept both dot notation and nested dict here for consistency
     config = util.dot_to_dict(config)
     return LangClass.from_config(config, vocab=vocab, meta=meta)
+
+
+def info(*args, **kwargs):
+    from .cli.info import info as cli_info
+
+    return cli_info(*args, **kwargs)
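The new wrapper keeps `spacy.info` available at the top level while deferring the CLI import to call time. A minimal sketch of how it is still used (behaviour assumed from the CLI `info` implementation it delegates to):

```python
import spacy

# Calling the wrapper triggers the lazy import of spacy.cli.info and returns
# the same data dict as `python -m spacy info`.
print(spacy.info())
```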
@@ -14,6 +14,7 @@ from .debug_diff import debug_diff  # noqa: F401
 from .debug_model import debug_model  # noqa: F401
 from .download import download  # noqa: F401
 from .evaluate import evaluate  # noqa: F401
+from .find_function import find_function  # noqa: F401
 from .find_threshold import find_threshold  # noqa: F401
 from .info import info  # noqa: F401
 from .init_config import fill_config, init_config  # noqa: F401
@@ -40,7 +40,8 @@ def assemble_cli(
 
     DOCS: https://spacy.io/api/cli#assemble
     """
-    util.logger.setLevel(logging.DEBUG if verbose else logging.INFO)
+    if verbose:
+        util.logger.setLevel(logging.DEBUG)
     # Make sure all files and paths exists if they are needed
     if not config_path or (str(config_path) != "-" and not config_path.exists()):
         msg.fail("Config file not found", config_path, exits=1)
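The same logging change recurs in `find-threshold`, the `init` commands and `train` below: the level is now only raised when `--verbose` is passed, presumably so a log level already configured by the caller is no longer forced back to INFO. Sketch of the before/after, using the names from the hunk above:

```python
import logging
from spacy import util

verbose = False

# Before: the level was always reset, even without --verbose.
# util.logger.setLevel(logging.DEBUG if verbose else logging.INFO)

# After: the logger is only touched when more output is explicitly requested.
if verbose:
    util.logger.setLevel(logging.DEBUG)
```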
@@ -28,6 +28,7 @@ def evaluate_cli(
     displacy_path: Optional[Path] = Opt(None, "--displacy-path", "-dp", help="Directory to output rendered parses as HTML", exists=True, file_okay=False),
     displacy_limit: int = Opt(25, "--displacy-limit", "-dl", help="Limit of parses to render as HTML"),
     per_component: bool = Opt(False, "--per-component", "-P", help="Return scores per component, only applicable when an output JSON file is specified."),
+    spans_key: str = Opt("sc", "--spans-key", "-sk", help="Spans key to use when evaluating Doc.spans"),
     # fmt: on
 ):
     """

@@ -53,6 +54,7 @@ def evaluate_cli(
         displacy_limit=displacy_limit,
         per_component=per_component,
         silent=False,
+        spans_key=spans_key,
     )
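A sketch of what the new option enables; the paths are illustrative placeholders, and the signature of the underlying `evaluate` helper is assumed from the keyword the CLI now forwards to it (the matching `--spans-key` flag for `benchmark accuracy` is documented further down):

```python
# Hypothetical evaluation of spans stored under a non-default Doc.spans key.
from spacy.cli.evaluate import evaluate

scores = evaluate("./model-best", "./dev.spacy", spans_key="labeled_spans")
print(scores)
```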
spacy/cli/find_function.py (new file, 69 lines)

@@ -0,0 +1,69 @@
+from typing import Optional, Tuple
+
+from catalogue import RegistryError
+from wasabi import msg
+
+from ..util import registry
+from ._util import Arg, Opt, app
+
+
+@app.command("find-function")
+def find_function_cli(
+    # fmt: off
+    func_name: str = Arg(..., help="Name of the registered function."),
+    registry_name: Optional[str] = Opt(None, "--registry", "-r", help="Name of the catalogue registry."),
+    # fmt: on
+):
+    """
+    Find the module, path and line number to the file the registered
+    function is defined in, if available.
+
+    func_name (str): Name of the registered function.
+    registry_name (Optional[str]): Name of the catalogue registry.
+
+    DOCS: https://spacy.io/api/cli#find-function
+    """
+    if not registry_name:
+        registry_names = registry.get_registry_names()
+        for name in registry_names:
+            if registry.has(name, func_name):
+                registry_name = name
+                break
+
+    if not registry_name:
+        msg.fail(
+            f"Couldn't find registered function: '{func_name}'",
+            exits=1,
+        )
+
+    assert registry_name is not None
+    find_function(func_name, registry_name)
+
+
+def find_function(func_name: str, registry_name: str) -> Tuple[str, int]:
+    registry_desc = None
+    try:
+        registry_desc = registry.find(registry_name, func_name)
+    except RegistryError as e:
+        msg.fail(
+            f"Couldn't find registered function: '{func_name}' in registry '{registry_name}'",
+        )
+        msg.fail(f"{e}", exits=1)
+    assert registry_desc is not None
+
+    registry_path = None
+    line_no = None
+    if registry_desc["file"]:
+        registry_path = registry_desc["file"]
+        line_no = registry_desc["line_no"]
+
+    if not registry_path or not line_no:
+        msg.fail(
+            f"Couldn't find path to registered function: '{func_name}' in registry '{registry_name}'",
+            exits=1,
+        )
+    assert registry_path is not None
+    assert line_no is not None
+
+    msg.good(f"Found registered function '{func_name}' at {registry_path}:{line_no}")
+    return str(registry_path), int(line_no)
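Besides the CLI entry point, the module-level helper can be called directly; per the code above it prints the location and returns a `(path, line_no)` tuple. The function and registry names below are just examples:

```python
# Locate where a registered architecture is defined.
from spacy.cli.find_function import find_function

path, line_no = find_function("spacy.TextCatBOW.v2", "architectures")
print(path, line_no)
```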
@@ -52,8 +52,8 @@ def find_threshold_cli(
 
     DOCS: https://spacy.io/api/cli#find-threshold
     """
-    util.logger.setLevel(logging.DEBUG if verbose else logging.INFO)
+    if verbose:
+        util.logger.setLevel(logging.DEBUG)
     import_code(code_path)
     find_threshold(
         model=model,
@@ -39,7 +39,8 @@ def init_vectors_cli(
     you can use in the [initialize] block of your config to initialize
     a model with vectors.
     """
-    util.logger.setLevel(logging.DEBUG if verbose else logging.INFO)
+    if verbose:
+        util.logger.setLevel(logging.DEBUG)
     msg.info(f"Creating blank nlp object for language '{lang}'")
     nlp = util.get_lang_class(lang)()
     if jsonl_loc is not None:

@@ -87,7 +88,8 @@ def init_pipeline_cli(
     use_gpu: int = Opt(-1, "--gpu-id", "-g", help="GPU ID or -1 for CPU")
     # fmt: on
 ):
-    util.logger.setLevel(logging.DEBUG if verbose else logging.INFO)
+    if verbose:
+        util.logger.setLevel(logging.DEBUG)
     overrides = parse_config_overrides(ctx.args)
     import_code(code_path)
     setup_gpu(use_gpu)

@@ -116,7 +118,8 @@ def init_labels_cli(
     """Generate JSON files for the labels in the data. This helps speed up the
     training process, since spaCy won't have to preprocess the data to
     extract the labels."""
-    util.logger.setLevel(logging.DEBUG if verbose else logging.INFO)
+    if verbose:
+        util.logger.setLevel(logging.DEBUG)
     if not output_path.exists():
         output_path.mkdir(parents=True)
     overrides = parse_config_overrides(ctx.args)
@@ -403,7 +403,7 @@ def _format_sources(data: Any) -> str:
         if author:
             result += " ({})".format(author)
         sources.append(result)
-    return "<br />".join(sources)
+    return "<br>".join(sources)
 
 
 def _format_accuracy(data: Dict[str, Any], exclude: List[str] = ["speed"]) -> str:
@@ -47,7 +47,8 @@ def train_cli(
 
     DOCS: https://spacy.io/api/cli#train
     """
-    util.logger.setLevel(logging.DEBUG if verbose else logging.INFO)
+    if verbose:
+        util.logger.setLevel(logging.DEBUG)
     overrides = parse_config_overrides(ctx.args)
     import_code(code_path)
     train(config_path, output_path, use_gpu=use_gpu, overrides=overrides)
@@ -313,6 +313,8 @@ class DependencyRenderer:
                 self.lang = settings.get("lang", DEFAULT_LANG)
             render_id = f"{id_prefix}-{i}"
             svg = self.render_svg(render_id, p["words"], p["arcs"])
+            if p.get("title"):
+                svg = TPL_TITLE.format(title=p.get("title")) + svg
             rendered.append(svg)
         if page:
             content = "".join([TPL_FIGURE.format(content=svg) for svg in rendered])

@@ -565,7 +567,7 @@ class EntityRenderer:
         for i, fragment in enumerate(fragments):
             markup += escape_html(fragment)
             if len(fragments) > 1 and i != len(fragments) - 1:
-                markup += "</br>"
+                markup += "<br>"
         if self.ents is None or label.upper() in self.ents:
             color = self.colors.get(label.upper(), self.default_color)
             ent_settings = {

@@ -583,7 +585,7 @@ class EntityRenderer:
         for i, fragment in enumerate(fragments):
             markup += escape_html(fragment)
             if len(fragments) > 1 and i != len(fragments) - 1:
-                markup += "</br>"
+                markup += "<br>"
         markup = TPL_ENTS.format(content=markup, dir=self.direction)
         if title:
             markup = TPL_TITLE.format(title=title) + markup
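A minimal sketch of what the `DependencyRenderer` change enables, mirroring the test added further down: a `"title"` key in manually supplied dep data is now rendered above the SVG.

```python
from spacy import displacy

parsed_dep = {
    "words": [{"text": "This", "tag": "DT"}, {"text": "works", "tag": "VBZ"}],
    "arcs": [{"start": 0, "end": 1, "label": "nsubj", "dir": "left"}],
    "title": "Manual dep example",
}
# The title now appears in the rendered markup.
html = displacy.render([parsed_dep], style="dep", manual=True)
```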
@@ -163,7 +163,7 @@ class SpanishLemmatizer(Lemmatizer):
         for old, new in self.lookups.get_table("lemma_rules").get("det", []):
             if word == old:
                 return [new]
-        # If none of the specfic rules apply, search in the common rules for
+        # If none of the specific rules apply, search in the common rules for
         # determiners and pronouns that follow a unique pattern for
         # lemmatization. If the word is in the list, return the corresponding
         # lemma.

@@ -291,7 +291,7 @@ class SpanishLemmatizer(Lemmatizer):
         for old, new in self.lookups.get_table("lemma_rules").get("pron", []):
             if word == old:
                 return [new]
-        # If none of the specfic rules apply, search in the common rules for
+        # If none of the specific rules apply, search in the common rules for
         # determiners and pronouns that follow a unique pattern for
         # lemmatization. If the word is in the list, return the corresponding
         # lemma.
@@ -15,4 +15,7 @@ sentences = [
     "Türkiye'nin başkenti neresi?",
     "Bakanlar Kurulu 180 günlük eylem planını açıkladı.",
     "Merkez Bankası, beklentiler doğrultusunda faizlerde değişikliğe gitmedi.",
+    "Cemal Sureya kimdir?",
+    "Bunlari Biliyor muydunuz?",
+    "Altinoluk Turkiye haritasinin neresinde yer alir?",
 ]
@@ -67,8 +67,8 @@ def build_hash_embed_cnn_tok2vec(
         are between 2 and 8.
     window_size (int): The number of tokens on either side to concatenate during
         the convolutions. The receptive field of the CNN will be
-        depth * (window_size * 2 + 1), so a 4-layer network with window_size of
-        2 will be sensitive to 20 words at a time. Recommended value is 1.
+        depth * window_size * 2 + 1, so a 4-layer network with window_size of
+        2 will be sensitive to 17 words at a time. Recommended value is 1.
     embed_size (int): The number of rows in the hash embedding tables. This can
         be surprisingly small, due to the use of the hash embeddings. Recommended
         values are between 2000 and 10000.
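The docstring fix is simple arithmetic: each of the stacked convolution layers widens the receptive field by `window_size` tokens on either side, so the field grows additively per layer rather than multiplying the per-layer window. A quick check of both expressions:

```python
# Receptive field of the CNN for depth=4, window_size=2.
depth, window_size = 4, 2
print(depth * window_size * 2 + 1)    # 17 - corrected formula
print(depth * (window_size * 2 + 1))  # 20 - the old, incorrect wording
```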
@@ -1,8 +1,12 @@
 from collections import defaultdict
 from typing import Any, Dict, List, Union
 
-from pydantic import BaseModel, Field, ValidationError
-from pydantic.types import StrictBool, StrictInt, StrictStr
+try:
+    from pydantic.v1 import BaseModel, Field, ValidationError
+    from pydantic.v1.types import StrictBool, StrictInt, StrictStr
+except ImportError:
+    from pydantic import BaseModel, Field, ValidationError  # type: ignore
+    from pydantic.types import StrictBool, StrictInt, StrictStr  # type: ignore
 
 
 class MatchNodeSchema(BaseModel):
@@ -16,19 +16,34 @@ from typing import (
     Union,
 )
 
-from pydantic import (
-    BaseModel,
-    ConstrainedStr,
-    Field,
-    StrictBool,
-    StrictFloat,
-    StrictInt,
-    StrictStr,
-    ValidationError,
-    create_model,
-    validator,
-)
-from pydantic.main import ModelMetaclass
+try:
+    from pydantic.v1 import (
+        BaseModel,
+        ConstrainedStr,
+        Field,
+        StrictBool,
+        StrictFloat,
+        StrictInt,
+        StrictStr,
+        ValidationError,
+        create_model,
+        validator,
+    )
+    from pydantic.v1.main import ModelMetaclass
+except ImportError:
+    from pydantic import (  # type: ignore
+        BaseModel,
+        ConstrainedStr,
+        Field,
+        StrictBool,
+        StrictFloat,
+        StrictInt,
+        StrictStr,
+        ValidationError,
+        create_model,
+        validator,
+    )
+    from pydantic.main import ModelMetaclass  # type: ignore
 from thinc.api import ConfigValidationError, Model, Optimizer
 from thinc.config import Promise
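The pattern repeated in this hunk and in the test files below is a straightforward compatibility shim, which is what allows the pydantic pin to be relaxed to `<3.0.0`: prefer the `pydantic.v1` namespace that pydantic v2 ships, and fall back to the top-level package when only v1 is installed. A minimal, self-contained sketch (the model is hypothetical, just to exercise the import):

```python
# Works on both pydantic v1 and v2: v2 exposes the legacy API under pydantic.v1.
try:
    from pydantic.v1 import BaseModel, StrictStr
except ImportError:
    from pydantic import BaseModel, StrictStr  # type: ignore


class ExampleSchema(BaseModel):
    name: StrictStr


print(ExampleSchema(name="spaCy"))
```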
@@ -1,5 +1,10 @@
 import pytest
-from pydantic import StrictBool
+
+try:
+    from pydantic.v1 import StrictBool
+except ImportError:
+    from pydantic import StrictBool  # type: ignore
+
 from thinc.api import ConfigValidationError
 
 from spacy.lang.en import English
@@ -1,5 +1,10 @@
 import pytest
-from pydantic import StrictInt, StrictStr
+
+try:
+    from pydantic.v1 import StrictInt, StrictStr
+except ImportError:
+    from pydantic import StrictInt, StrictStr  # type: ignore
+
 from thinc.api import ConfigValidationError, Linear, Model
 
 import spacy
@@ -12,6 +12,7 @@ from thinc.api import Config
 
 import spacy
 from spacy import about
+from spacy import info as spacy_info
 from spacy.cli import info
 from spacy.cli._util import parse_config_overrides, string_to_list, walk_directory
 from spacy.cli.apply import apply

@@ -192,6 +193,9 @@ def test_cli_info():
         raw_data = info(tmp_dir, exclude=[""])
         assert raw_data["lang"] == "nl"
         assert raw_data["components"] == ["textcat"]
+        raw_data = spacy_info(tmp_dir, exclude=[""])
+        assert raw_data["lang"] == "nl"
+        assert raw_data["components"] == ["textcat"]
 
 
 def test_cli_converters_conllu_to_docs():
@@ -7,7 +7,7 @@ import srsly
 from typer.testing import CliRunner
 
 from spacy.cli._util import app, get_git_version
-from spacy.tokens import Doc, DocBin
+from spacy.tokens import Doc, DocBin, Span
 
 from .util import make_tempdir, normalize_whitespace
 

@@ -237,3 +237,196 @@ def test_project_push_pull(project_dir):
     result = CliRunner().invoke(app, ["project", "pull", remote, str(project_dir)])
     assert result.exit_code == 0
     assert test_file.is_file()
+
+
+def test_find_function_valid():
+    # example of architecture in main code base
+    function = "spacy.TextCatBOW.v2"
+    result = CliRunner().invoke(app, ["find-function", function, "-r", "architectures"])
+    assert f"Found registered function '{function}'" in result.stdout
+    assert "textcat.py" in result.stdout
+
+    result = CliRunner().invoke(app, ["find-function", function])
+    assert f"Found registered function '{function}'" in result.stdout
+    assert "textcat.py" in result.stdout
+
+    # example of architecture in spacy-legacy
+    function = "spacy.TextCatBOW.v1"
+    result = CliRunner().invoke(app, ["find-function", function])
+    assert f"Found registered function '{function}'" in result.stdout
+    assert "spacy_legacy" in result.stdout
+    assert "textcat.py" in result.stdout
+
+
+def test_find_function_invalid():
+    # invalid registry
+    function = "spacy.TextCatBOW.v2"
+    registry = "foobar"
+    result = CliRunner().invoke(
+        app, ["find-function", function, "--registry", registry]
+    )
+    assert f"Unknown function registry: '{registry}'" in result.stdout
+
+    # invalid function
+    function = "spacy.TextCatBOW.v666"
+    result = CliRunner().invoke(app, ["find-function", function])
+    assert f"Couldn't find registered function: '{function}'" in result.stdout
+
+
+example_words_1 = ["I", "like", "cats"]
+example_words_2 = ["I", "like", "dogs"]
+example_lemmas_1 = ["I", "like", "cat"]
+example_lemmas_2 = ["I", "like", "dog"]
+example_tags = ["PRP", "VBP", "NNS"]
+example_morphs = [
+    "Case=Nom|Number=Sing|Person=1|PronType=Prs",
+    "Tense=Pres|VerbForm=Fin",
+    "Number=Plur",
+]
+example_deps = ["nsubj", "ROOT", "dobj"]
+example_pos = ["PRON", "VERB", "NOUN"]
+example_ents = ["O", "O", "I-ANIMAL"]
+example_spans = [(2, 3, "ANIMAL")]
+
+TRAIN_EXAMPLE_1 = dict(
+    words=example_words_1,
+    lemmas=example_lemmas_1,
+    tags=example_tags,
+    morphs=example_morphs,
+    deps=example_deps,
+    heads=[1, 1, 1],
+    pos=example_pos,
+    ents=example_ents,
+    spans=example_spans,
+    cats={"CAT": 1.0, "DOG": 0.0},
+)
+TRAIN_EXAMPLE_2 = dict(
+    words=example_words_2,
+    lemmas=example_lemmas_2,
+    tags=example_tags,
+    morphs=example_morphs,
+    deps=example_deps,
+    heads=[1, 1, 1],
+    pos=example_pos,
+    ents=example_ents,
+    spans=example_spans,
+    cats={"CAT": 0.0, "DOG": 1.0},
+)
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize(
+    "component,examples",
+    [
+        ("tagger", [TRAIN_EXAMPLE_1, TRAIN_EXAMPLE_2]),
+        ("morphologizer", [TRAIN_EXAMPLE_1, TRAIN_EXAMPLE_2]),
+        ("trainable_lemmatizer", [TRAIN_EXAMPLE_1, TRAIN_EXAMPLE_2]),
+        ("parser", [TRAIN_EXAMPLE_1] * 30),
+        ("ner", [TRAIN_EXAMPLE_1, TRAIN_EXAMPLE_2]),
+        ("spancat", [TRAIN_EXAMPLE_1, TRAIN_EXAMPLE_2]),
+        ("textcat", [TRAIN_EXAMPLE_1, TRAIN_EXAMPLE_2]),
+    ],
+)
+def test_init_config_trainable(component, examples, en_vocab):
+    if component == "textcat":
+        train_docs = []
+        for example in examples:
+            doc = Doc(en_vocab, words=example["words"])
+            doc.cats = example["cats"]
+            train_docs.append(doc)
+    elif component == "spancat":
+        train_docs = []
+        for example in examples:
+            doc = Doc(en_vocab, words=example["words"])
+            doc.spans["sc"] = [
+                Span(doc, start, end, label) for start, end, label in example["spans"]
+            ]
+            train_docs.append(doc)
+    else:
+        train_docs = []
+        for example in examples:
+            # cats, spans are not valid kwargs for instantiating a Doc
+            example = {k: v for k, v in example.items() if k not in ("cats", "spans")}
+            doc = Doc(en_vocab, **example)
+            train_docs.append(doc)
+
+    with make_tempdir() as d_in:
+        train_bin = DocBin(docs=train_docs)
+        train_bin.to_disk(d_in / "train.spacy")
+        dev_bin = DocBin(docs=train_docs)
+        dev_bin.to_disk(d_in / "dev.spacy")
+        init_config_result = CliRunner().invoke(
+            app,
+            [
+                "init",
+                "config",
+                f"{d_in}/config.cfg",
+                "--lang",
+                "en",
+                "--pipeline",
+                component,
+            ],
+        )
+        assert init_config_result.exit_code == 0
+        train_result = CliRunner().invoke(
+            app,
+            [
+                "train",
+                f"{d_in}/config.cfg",
+                "--paths.train",
+                f"{d_in}/train.spacy",
+                "--paths.dev",
+                f"{d_in}/dev.spacy",
+                "--output",
+                f"{d_in}/model",
+            ],
+        )
+        assert train_result.exit_code == 0
+        assert Path(d_in / "model" / "model-last").exists()
+
+
+@pytest.mark.slow
+@pytest.mark.parametrize(
+    "component,examples",
+    [("tagger,parser,morphologizer", [TRAIN_EXAMPLE_1, TRAIN_EXAMPLE_2] * 15)],
+)
+def test_init_config_trainable_multiple(component, examples, en_vocab):
+    train_docs = []
+    for example in examples:
+        example = {k: v for k, v in example.items() if k not in ("cats", "spans")}
+        doc = Doc(en_vocab, **example)
+        train_docs.append(doc)
+
+    with make_tempdir() as d_in:
+        train_bin = DocBin(docs=train_docs)
+        train_bin.to_disk(d_in / "train.spacy")
+        dev_bin = DocBin(docs=train_docs)
+        dev_bin.to_disk(d_in / "dev.spacy")
+        init_config_result = CliRunner().invoke(
+            app,
+            [
+                "init",
+                "config",
+                f"{d_in}/config.cfg",
+                "--lang",
+                "en",
+                "--pipeline",
+                component,
+            ],
+        )
+        assert init_config_result.exit_code == 0
+        train_result = CliRunner().invoke(
+            app,
+            [
+                "train",
+                f"{d_in}/config.cfg",
+                "--paths.train",
+                f"{d_in}/train.spacy",
+                "--paths.dev",
+                f"{d_in}/dev.spacy",
+                "--output",
+                f"{d_in}/model",
+            ],
+        )
+        assert train_result.exit_code == 0
+        assert Path(d_in / "model" / "model-last").exists()
@@ -113,7 +113,7 @@ def test_issue5838():
     doc = nlp(sample_text)
     doc.ents = [Span(doc, 7, 8, label="test")]
     html = displacy.render(doc, style="ent")
-    found = html.count("</br>")
+    found = html.count("<br>")
     assert found == 4

@@ -350,6 +350,78 @@ def test_displacy_render_wrapper(en_vocab):
     displacy.set_render_wrapper(lambda html: html)
 
 
+def test_displacy_render_manual_dep():
+    """Test displacy.render with manual data for dep style"""
+    parsed_dep = {
+        "words": [
+            {"text": "This", "tag": "DT"},
+            {"text": "is", "tag": "VBZ"},
+            {"text": "a", "tag": "DT"},
+            {"text": "sentence", "tag": "NN"},
+        ],
+        "arcs": [
+            {"start": 0, "end": 1, "label": "nsubj", "dir": "left"},
+            {"start": 2, "end": 3, "label": "det", "dir": "left"},
+            {"start": 1, "end": 3, "label": "attr", "dir": "right"},
+        ],
+        "title": "Title",
+    }
+    html = displacy.render([parsed_dep], style="dep", manual=True)
+    for word in parsed_dep["words"]:
+        assert word["text"] in html
+        assert word["tag"] in html
+
+
+def test_displacy_render_manual_ent():
+    """Test displacy.render with manual data for ent style"""
+    parsed_ents = [
+        {
+            "text": "But Google is starting from behind.",
+            "ents": [{"start": 4, "end": 10, "label": "ORG"}],
+        },
+        {
+            "text": "But Google is starting from behind.",
+            "ents": [{"start": -100, "end": 100, "label": "COMPANY"}],
+            "title": "Title",
+        },
+    ]
+
+    html = displacy.render(parsed_ents, style="ent", manual=True)
+    for parsed_ent in parsed_ents:
+        assert parsed_ent["ents"][0]["label"] in html
+        if "title" in parsed_ent:
+            assert parsed_ent["title"] in html
+
+
+def test_displacy_render_manual_span():
+    """Test displacy.render with manual data for span style"""
+    parsed_spans = [
+        {
+            "text": "Welcome to the Bank of China.",
+            "spans": [
+                {"start_token": 3, "end_token": 6, "label": "ORG"},
+                {"start_token": 5, "end_token": 6, "label": "GPE"},
+            ],
+            "tokens": ["Welcome", "to", "the", "Bank", "of", "China", "."],
+        },
+        {
+            "text": "Welcome to the Bank of China.",
+            "spans": [
+                {"start_token": 3, "end_token": 6, "label": "ORG"},
+                {"start_token": 5, "end_token": 6, "label": "GPE"},
+            ],
+            "tokens": ["Welcome", "to", "the", "Bank", "of", "China", "."],
+            "title": "Title",
+        },
+    ]
+
+    html = displacy.render(parsed_spans, style="span", manual=True)
+    for parsed_span in parsed_spans:
+        assert parsed_span["spans"][0]["label"] in html
+        if "title" in parsed_span:
+            assert parsed_span["title"] in html
+
+
 def test_displacy_options_case():
     ents = ["foo", "BAR"]
     colors = {"FOO": "red", "bar": "green"}
@@ -3,7 +3,12 @@ import os
 from pathlib import Path
 
 import pytest
-from pydantic import ValidationError
+
+try:
+    from pydantic.v1 import ValidationError
+except ImportError:
+    from pydantic import ValidationError  # type: ignore
+
 from thinc.api import (
     Config,
     ConfigValidationError,
@@ -894,7 +894,7 @@ def load_meta(path: Union[str, Path]) -> Dict[str, Any]:
         if "spacy_version" in meta:
             if not is_compatible_version(about.__version__, meta["spacy_version"]):
                 lower_version = get_model_lower_version(meta["spacy_version"])
-                lower_version = get_minor_version(lower_version)  # type: ignore[arg-type]
+                lower_version = get_base_version(lower_version)  # type: ignore[arg-type]
                 if lower_version is not None:
                     lower_version = "v" + lower_version
         elif "spacy_git_version" in meta:
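For context on the swap, a sketch of how the two `spacy.util` helpers differ (example values are illustrative): `get_minor_version` truncates to `major.minor`, while `get_base_version` keeps the full release number with any prerelease suffix stripped, which better matches the version shown in the compatibility warning.

```python
from spacy.util import get_base_version, get_minor_version

print(get_minor_version("3.7.0"))       # "3.7"
print(get_base_version("3.7.0"))        # "3.7.0"
print(get_base_version("3.7.0.dev0"))   # "3.7.0"
```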
@@ -83,7 +83,7 @@ consisting of a CNN and a layer-normalized maxout activation function.
 | `width` | The width of the input and output. These are required to be the same, so that residual connections can be used. Recommended values are `96`, `128` or `300`. ~~int~~ |
 | `depth` | The number of convolutional layers to use. Recommended values are between `2` and `8`. ~~int~~ |
 | `embed_size` | The number of rows in the hash embedding tables. This can be surprisingly small, due to the use of the hash embeddings. Recommended values are between `2000` and `10000`. ~~int~~ |
-| `window_size` | The number of tokens on either side to concatenate during the convolutions. The receptive field of the CNN will be `depth * (window_size * 2 + 1)`, so a 4-layer network with a window size of `2` will be sensitive to 20 words at a time. Recommended value is `1`. ~~int~~ |
+| `window_size` | The number of tokens on either side to concatenate during the convolutions. The receptive field of the CNN will be `depth * window_size * 2 + 1`, so a 4-layer network with a window size of `2` will be sensitive to 17 words at a time. Recommended value is `1`. ~~int~~ |
 | `maxout_pieces` | The number of pieces to use in the maxout non-linearity. If `1`, the [`Mish`](https://thinc.ai/docs/api-layers#mish) non-linearity is used instead. Recommended values are `1`-`3`. ~~int~~ |
 | `subword_features` | Whether to also embed subword features, specifically the prefix, suffix and word shape. This is recommended for alphabetic languages like English, but not if single-character tokens are used for a language such as Chinese. ~~bool~~ |
 | `pretrained_vectors` | Whether to also use static vectors. ~~bool~~ |
@@ -7,6 +7,7 @@ menu:
   - ['info', 'info']
   - ['validate', 'validate']
   - ['init', 'init']
+  - ['find-function', 'find-function']
   - ['convert', 'convert']
   - ['debug', 'debug']
   - ['train', 'train']

@@ -274,6 +275,27 @@ $ python -m spacy init labels [config_path] [output_path] [--code] [--verbose] [
 | overrides | Config parameters to override. Should be options starting with `--` that correspond to the config section and value to override, e.g. `--paths.train ./train.spacy`. ~~Any (option/flag)~~ |
 | **CREATES** | The label files. |
+
+## find-function {id="find-function",version="3.7",tag="command"}
+
+Find the module, path and line number to the file for a given registered
+function. This functionality is helpful to understand where registered
+functions, as used in the config file, are defined.
+
+```bash
+$ python -m spacy find-function [func_name] [--registry]
+```
+
+> #### Example
+>
+> ```bash
+> $ python -m spacy find-function spacy.TextCatBOW.v1
+> ```
+
+| Name               | Description                                            |
+| ------------------ | ------------------------------------------------------ |
+| `func_name`        | Name of the registered function. ~~str (positional)~~  |
+| `--registry`, `-r` | Name of the catalogue registry. ~~str (option)~~       |
+
 ## convert {id="convert",tag="command"}
 
 Convert files into spaCy's

@@ -1220,7 +1242,7 @@ skew. To render a sample of dependency parses in a HTML file using the
 `--displacy-path` argument.
 
 ```bash
-$ python -m spacy benchmark accuracy [model] [data_path] [--output] [--code] [--gold-preproc] [--gpu-id] [--displacy-path] [--displacy-limit]
+$ python -m spacy benchmark accuracy [model] [data_path] [--output] [--code] [--gold-preproc] [--gpu-id] [--displacy-path] [--displacy-limit] [--per-component] [--spans-key]
 ```
 
 | Name | Description |

@@ -1234,6 +1256,7 @@ $ python -m spacy benchmark accuracy [model] [data_path] [--output] [--code] [--
 | `--displacy-path`, `-dp` | Directory to output rendered parses as HTML. If not set, no visualizations will be generated. ~~Optional[Path] \(option)~~ |
 | `--displacy-limit`, `-dl` | Number of parses to generate per file. Defaults to `25`. Keep in mind that a significantly higher number might cause the `.html` files to render slowly. ~~int (option)~~ |
 | `--per-component`, `-P` <Tag variant="new">3.6</Tag> | Whether to return the scores keyed by component name. Defaults to `False`. ~~bool (flag)~~ |
+| `--spans-key`, `-sk` <Tag variant="new">3.6.2</Tag> | Spans key to use when evaluating `Doc.spans`. Defaults to `sc`. ~~str (option)~~ |
 | `--help`, `-h` | Show help message and available arguments. ~~bool (flag)~~ |
 | **CREATES** | Training results and optional metrics and visualizations. |
1195 website/docs/api/large-language-models.mdx (new file; diff suppressed because it is too large)
@@ -67,7 +67,6 @@ architectures and their arguments and hyperparameters.
> ```python
> from spacy.pipeline.spancat import DEFAULT_SPANCAT_SINGLELABEL_MODEL
> config = {
>     "spans_key": "labeled_spans",
>     "model": DEFAULT_SPANCAT_SINGLELABEL_MODEL,
>     "suggester": {"@misc": "spacy.ngram_suggester.v1", "sizes": [1, 2, 3]},
@@ -522,7 +521,7 @@ has two columns, indicating the start and end position.
| Name        | Description                                                                   |
| ----------- | ----------------------------------------------------------------------------- |
| `min_size`  | The minimal phrase lengths to suggest (inclusive). ~~[int]~~                   |
| `max_size`  | The maximal phrase lengths to suggest (inclusive). ~~[int]~~                   |
| **CREATES** | The suggester function. ~~Callable[[Iterable[Doc], Optional[Ops]], Ragged]~~   |

### spacy.preset_spans_suggester.v1 {id="preset_spans_suggester"}
@@ -117,7 +117,7 @@ config. Any existing patterns are removed on initialization.
>
> [initialize.components.span_ruler.patterns]
> @readers = "srsly.read_jsonl.v1"
> path = "corpus/span_ruler_patterns.jsonl"
> ```

| Name | Description |
@@ -68,7 +68,7 @@ weights, and returns it.
cls = spacy.util.get_lang_class(lang)  # 1. Get Language class, e.g. English
nlp = cls()                            # 2. Initialize it
for name in pipeline:
    nlp.add_pipe(name, config={...})   # 3. Add the component to the pipeline
nlp.from_disk(data_path)               # 4. Load in the binary data
```
@@ -343,6 +343,130 @@ use with the `manual=True` argument in `displacy.render`.
| `options`   | Span-specific visualisation options. ~~Dict[str, Any]~~             |
| **RETURNS** | Generated entities keyed by text (original text) and ents. ~~dict~~ |

### Visualizer data structures {id="displacy_structures"}

You can use displaCy's data format to manually render data. This can be useful
if you want to visualize output from other libraries. You can find examples of
displaCy's different data formats below.

> #### DEP example data structure
>
> ```json
> {
>   "words": [
>     { "text": "This", "tag": "DT" },
>     { "text": "is", "tag": "VBZ" },
>     { "text": "a", "tag": "DT" },
>     { "text": "sentence", "tag": "NN" }
>   ],
>   "arcs": [
>     { "start": 0, "end": 1, "label": "nsubj", "dir": "left" },
>     { "start": 2, "end": 3, "label": "det", "dir": "left" },
>     { "start": 1, "end": 3, "label": "attr", "dir": "right" }
>   ]
> }
> ```

#### Dependency Visualizer data structure {id="structure-dep"}

| Dictionary Key | Description |
| -------------- | ------------------------------------------------------------------------------------------------------------ |
| `words`        | List of dictionaries describing a word token (see structure below). ~~List[Dict[str, Any]]~~                  |
| `arcs`         | List of dictionaries describing the relations between words (see structure below). ~~List[Dict[str, Any]]~~   |
| _Optional_     |                                                                                                                |
| `title`        | Title of the visualization. ~~Optional[str]~~                                                                  |
| `settings`     | Dependency Visualizer options (see [here](/api/top-level#displacy_options)). ~~Dict[str, Any]~~                |

<Accordion title="Words data structure">

| Dictionary Key | Description |
| -------------- | ---------------------------------------- |
| `text`         | Text content of the word. ~~str~~        |
| `tag`          | Fine-grained part-of-speech. ~~str~~     |
| `lemma`        | Base form of the word. ~~Optional[str]~~ |

</Accordion>

<Accordion title="Arcs data structure">

| Dictionary Key | Description |
| -------------- | ---------------------------------------------------- |
| `start`        | The index of the starting token. ~~int~~              |
| `end`          | The index of the ending token. ~~int~~                |
| `label`        | The type of dependency relation. ~~str~~              |
| `dir`          | Direction of the relation (`left`, `right`). ~~str~~  |

</Accordion>

> #### ENT example data structure
>
> ```json
> {
>   "text": "But Google is starting from behind.",
>   "ents": [{ "start": 4, "end": 10, "label": "ORG" }]
> }
> ```

#### Named Entity Recognition data structure {id="structure-ent"}

| Dictionary Key | Description |
| -------------- | --------------------------------------------------------------------------------------------- |
| `text`         | String representation of the document text. ~~str~~                                            |
| `ents`         | List of dictionaries describing entities (see structure below). ~~List[Dict[str, Any]]~~       |
| _Optional_     |                                                                                                 |
| `title`        | Title of the visualization. ~~Optional[str]~~                                                   |
| `settings`     | Entity Visualizer options (see [here](/api/top-level#displacy_options)). ~~Dict[str, Any]~~     |

<Accordion title="Ents data structure">

| Dictionary Key | Description |
| -------------- | ----------------------------------------------------------------------- |
| `start`        | The index of the first character of the entity. ~~int~~                  |
| `end`          | The index of the last character of the entity. (not inclusive) ~~int~~   |
| `label`        | Label attached to the entity. ~~str~~                                    |
| _Optional_     |                                                                          |
| `kb_id`        | `KnowledgeBase` ID. ~~str~~                                              |
| `kb_url`       | `KnowledgeBase` URL. ~~str~~                                             |

</Accordion>

> #### SPAN example data structure
>
> ```json
> {
>   "text": "Welcome to the Bank of China.",
>   "spans": [
>     { "start_token": 3, "end_token": 6, "label": "ORG" },
>     { "start_token": 5, "end_token": 6, "label": "GPE" }
>   ],
>   "tokens": ["Welcome", "to", "the", "Bank", "of", "China", "."]
> }
> ```

#### Span Classification data structure {id="structure-span"}

| Dictionary Key | Description |
| -------------- | ------------------------------------------------------------------------------------------- |
| `text`         | String representation of the document text. ~~str~~                                          |
| `spans`        | List of dictionaries describing spans (see structure below). ~~List[Dict[str, Any]]~~        |
| `tokens`       | List of word tokens. ~~List[str]~~                                                            |
| _Optional_     |                                                                                               |
| `title`        | Title of the visualization. ~~Optional[str]~~                                                 |
| `settings`     | Span Visualizer options (see [here](/api/top-level#displacy_options)). ~~Dict[str, Any]~~     |

<Accordion title="Spans data structure">

| Dictionary Key | Description |
| -------------- | --------------------------------------------------------------- |
| `start_token`  | The index of the first token of the span in `tokens`. ~~int~~    |
| `end_token`    | The index of the last token of the span in `tokens`. ~~int~~     |
| `label`        | Label attached to the span. ~~str~~                              |
| _Optional_     |                                                                  |
| `kb_id`        | `KnowledgeBase` ID. ~~str~~                                      |
| `kb_url`       | `KnowledgeBase` URL. ~~str~~                                     |

</Accordion>
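
These dictionaries can be passed straight to the visualizer when rendering
manually. A minimal sketch, reusing the SPAN example data from above with the
regular `displacy.render` API:

```python
from spacy import displacy

# SPAN example data structure from above, rendered without creating a Doc
span_data = {
    "text": "Welcome to the Bank of China.",
    "spans": [
        {"start_token": 3, "end_token": 6, "label": "ORG"},
        {"start_token": 5, "end_token": 6, "label": "GPE"},
    ],
    "tokens": ["Welcome", "to", "the", "Bank", "of", "China", "."],
}
# manual=True tells displaCy to treat the dict as pre-parsed data
html = displacy.render(span_data, style="span", manual=True)
```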

### Visualizer options {id="displacy_options"}

The `options` argument lets you specify additional settings for each visualizer.
513 website/docs/usage/large-language-models.mdx (new file)

@@ -0,0 +1,513 @@
---
title: Large Language Models
teaser: Integrating LLMs into structured NLP pipelines
menu:
  - ['Motivation', 'motivation']
  - ['Install', 'install']
  - ['Usage', 'usage']
  - ['Logging', 'logging']
  - ['API', 'api']
  - ['Tasks', 'tasks']
  - ['Models', 'models']
---

[The spacy-llm package](https://github.com/explosion/spacy-llm) integrates Large
Language Models (LLMs) into spaCy pipelines, featuring a modular system for
**fast prototyping** and **prompting**, and turning unstructured responses into
**robust outputs** for various NLP tasks, **no training data** required.

- Serializable `llm` **component** to integrate prompts into your pipeline
- **Modular functions** to define the [**task**](#tasks) (prompting and parsing)
  and [**model**](#models) (model to use)
- Support for **hosted APIs** and self-hosted **open-source models**
- Integration with [`LangChain`](https://github.com/hwchase17/langchain)
- Access to
  **[OpenAI API](https://platform.openai.com/docs/api-reference/introduction)**,
  including GPT-4 and various GPT-3 models
- Built-in support for various **open-source** models hosted on
  [Hugging Face](https://huggingface.co/)
- Usage examples for standard NLP tasks such as **Named Entity Recognition** and
  **Text Classification**
- Easy implementation of **your own functions** via the
  [registry](/api/top-level#registry) for custom prompting, parsing and model
  integrations

## Motivation {id="motivation"}

Large Language Models (LLMs) feature powerful natural language understanding
capabilities. With only a few (and sometimes no) examples, an LLM can be
prompted to perform custom NLP tasks such as text categorization, named entity
recognition, coreference resolution, information extraction and more.

Supervised learning is much worse than LLM prompting for prototyping, but for
many tasks it's much better for production. A transformer model that runs
comfortably on a single GPU is extremely powerful, and it's likely to be a
better choice for any task for which you have a well-defined output. You train
the model with anything from a few hundred to a few thousand labelled examples,
and it will learn to do exactly that. Efficiency, reliability and control are
all better with supervised learning, and accuracy will generally be higher than
LLM prompting as well.

`spacy-llm` lets you have **the best of both worlds**. You can quickly
initialize a pipeline with components powered by LLM prompts, and freely mix in
components powered by other approaches. As your project progresses, you can look
at replacing some or all of the LLM-powered components as you require.

Of course, there can be components in your system for which the power of an LLM
is fully justified. If you want a system that can synthesize information from
multiple documents in subtle ways and generate a nuanced summary for you, bigger
is better. However, even if your production system needs an LLM for some of the
task, that doesn't mean you need an LLM for all of it. Maybe you want to use a
cheap text classification model to help you find the texts to summarize, or
maybe you want to add a rule-based system to sanity check the output of the
summary. These before-and-after tasks are much easier with a mature and
well-thought-out library, which is exactly what spaCy provides.

## Install {id="install"}

`spacy-llm` will be installed automatically in future spaCy versions. For now,
you can run the following in the same virtual environment where you already have
`spacy` [installed](/usage).

> ⚠️ This package is still experimental and it is possible that changes made to
> the interface will be breaking in minor version updates.

```bash
python -m pip install spacy-llm
```

## Usage {id="usage"}

The task and the model have to be supplied to the `llm` pipeline component using
the [config system](/api/data-formats#config). This package provides various
built-in functionality, as detailed in the [API](#-api) documentation.

### Example 1: Add a text classifier using a GPT-3 model from OpenAI {id="example-1"}

Create a new API key from openai.com or fetch an existing one, and ensure the
keys are set as environmental variables. For more background information, see
the [OpenAI](/api/large-language-models#gpt-3-5) section.

Create a config file `config.cfg` containing at least the following (or see the
full example
[here](https://github.com/explosion/spacy-llm/tree/main/usage_examples/textcat_openai)):

```ini
[nlp]
lang = "en"
pipeline = ["llm"]

[components]

[components.llm]
factory = "llm"

[components.llm.task]
@llm_tasks = "spacy.TextCat.v2"
labels = ["COMPLIMENT", "INSULT"]

[components.llm.model]
@llm_models = "spacy.GPT-3-5.v1"
config = {"temperature": 0.0}
```

Now run:

```python
from spacy_llm.util import assemble

nlp = assemble("config.cfg")
doc = nlp("You look gorgeous!")
print(doc.cats)
```

### Example 2: Add NER using an open-source model through Hugging Face {id="example-2"}

To run this example, ensure that you have a GPU enabled, and `transformers`,
`torch` and CUDA installed. For more background information, see the
[DollyHF](/api/large-language-models#dolly) section.

Create a config file `config.cfg` containing at least the following (or see the
full example
[here](https://github.com/explosion/spacy-llm/tree/main/usage_examples/ner_dolly)):

```ini
[nlp]
lang = "en"
pipeline = ["llm"]

[components]

[components.llm]
factory = "llm"

[components.llm.task]
@llm_tasks = "spacy.NER.v3"
labels = ["PERSON", "ORGANISATION", "LOCATION"]

[components.llm.model]
@llm_models = "spacy.Dolly.v1"
# For better performance, use dolly-v2-12b instead
name = "dolly-v2-3b"
```

Now run:

```python
from spacy_llm.util import assemble

nlp = assemble("config.cfg")
doc = nlp("Jack and Jill rode up the hill in Les Deux Alpes")
print([(ent.text, ent.label_) for ent in doc.ents])
```

Note that Hugging Face will download the `"databricks/dolly-v2-3b"` model the
first time you use it. You can
[define the cached directory](https://huggingface.co/docs/huggingface_hub/main/en/guides/manage-cache)
by setting the environmental variable `HF_HOME`. Also, you can upgrade the model
to be `"databricks/dolly-v2-12b"` for better performance.

### Example 3: Create the component directly in Python {id="example-3"}

The `llm` component behaves as any other component does, and there are
[task-specific components](/api/large-language-models#config) defined to
help you hit the ground running with a reasonable built-in task implementation.

```python
import spacy

nlp = spacy.blank("en")
llm_ner = nlp.add_pipe("llm_ner")
llm_ner.add_label("PERSON")
llm_ner.add_label("LOCATION")
nlp.initialize()
doc = nlp("Jack and Jill rode up the hill in Les Deux Alpes")
print([(ent.text, ent.label_) for ent in doc.ents])
```

Note that for efficient usage of resources, typically you would use
[`nlp.pipe(docs)`](/api/language#pipe) with a batch, instead of calling
`nlp(doc)` with a single document.
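
For example, a minimal sketch of batched processing, repeating the setup from
the snippet above:

```python
import spacy

nlp = spacy.blank("en")
llm_ner = nlp.add_pipe("llm_ner")
llm_ner.add_label("PERSON")
llm_ner.add_label("LOCATION")
nlp.initialize()

texts = [
    "Jack and Jill rode up the hill in Les Deux Alpes",
    "Jack fell down and broke his crown.",
]
# nlp.pipe processes the texts in batches instead of prompting one document at a time
for doc in nlp.pipe(texts):
    print([(ent.text, ent.label_) for ent in doc.ents])
```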

### Example 4: Implement your own custom task {id="example-4"}

To write a [`task`](#tasks), you need to implement two functions:
`generate_prompts` that takes a list of [`Doc`](/api/doc) objects and transforms
them into a list of prompts, and `parse_responses` that transforms the LLM
outputs into annotations on the [`Doc`](/api/doc), e.g. entity spans, text
categories and more.

To register your custom task, decorate a factory function using the
`spacy_llm.registry.llm_tasks` decorator with a custom name that you can refer
to in your config.

> 📖 For more details, see the
> [**usage example on writing your own task**](https://github.com/explosion/spacy-llm/tree/main/usage_examples#writing-your-own-task)

```python
from typing import Iterable, List
from spacy.tokens import Doc
from spacy_llm.registry import registry
from spacy_llm.util import split_labels


@registry.llm_tasks("my_namespace.MyTask.v1")
def make_my_task(labels: str, my_other_config_val: float) -> "MyTask":
    labels_list = split_labels(labels)
    return MyTask(labels=labels_list, my_other_config_val=my_other_config_val)


class MyTask:
    def __init__(self, labels: List[str], my_other_config_val: float):
        ...

    def generate_prompts(self, docs: Iterable[Doc]) -> Iterable[str]:
        ...

    def parse_responses(
        self, docs: Iterable[Doc], responses: Iterable[str]
    ) -> Iterable[Doc]:
        ...
```

```ini
# config.cfg (excerpt)
[components.llm.task]
@llm_tasks = "my_namespace.MyTask.v1"
labels = LABEL1,LABEL2,LABEL3
my_other_config_val = 0.3
```

## Logging {id="logging"}

spacy-llm has a built-in logger that can log the prompt sent to the LLM as well
as its raw response. This logger uses the debug level and by default has a
`logging.NullHandler()` configured.

In order to use this logger, you can set up a simple handler like this:

```python
import logging
import spacy_llm


spacy_llm.logger.addHandler(logging.StreamHandler())
spacy_llm.logger.setLevel(logging.DEBUG)
```

> NOTE: Any `logging` handler will work here so you probably want to use some
> sort of rotating `FileHandler` as the generated prompts can be quite long,
> especially for tasks with few-shot examples.

Then when using the pipeline you'll be able to view the prompt and response.

E.g. with the config and code from [Example 1](#example-1) above:

```python
from spacy_llm.util import assemble


nlp = assemble("config.cfg")
doc = nlp("You look gorgeous!")
print(doc.cats)
```

You will see `logging` output similar to:

```
Generated prompt for doc: You look gorgeous!

You are an expert Text Classification system. Your task is to accept Text as input
and provide a category for the text based on the predefined labels.

Classify the text below to any of the following labels: COMPLIMENT, INSULT
The task is non-exclusive, so you can provide more than one label as long as
they're comma-delimited. For example: Label1, Label2, Label3.
Do not put any other text in your answer, only one or more of the provided labels with nothing before or after.
If the text cannot be classified into any of the provided labels, answer `==NONE==`.

Here is the text that needs classification


Text:
'''
You look gorgeous!
'''

Model response for doc: You look gorgeous!
COMPLIMENT
```

`print(doc.cats)` to standard output should look like:

```
{'COMPLIMENT': 1.0, 'INSULT': 0.0}
```

## API {id="api"}

`spacy-llm` exposes an `llm` factory with
[configurable settings](/api/large-language-models#config).

An `llm` component is defined by two main settings:

- A [**task**](#tasks), defining the prompt to send to the LLM as well as the
  functionality to parse the resulting response back into structured fields on
  the [Doc](/api/doc) objects.
- A [**model**](#models) defining the model to use and how to connect to it.
  Note that `spacy-llm` supports both access to external APIs (such as OpenAI)
  as well as access to self-hosted open-source LLMs (such as using Dolly through
  Hugging Face).

Moreover, `spacy-llm` exposes a customizable [**caching**](#cache) functionality
to avoid running the same document through an LLM service (be it local or
through a REST API) more than once.

Finally, you can choose to save a stringified version of LLM prompts/responses
within the `Doc.user_data["llm_io"]` attribute by setting `save_io` to `True`.
`Doc.user_data["llm_io"]` is a dictionary containing one entry for every LLM
component within the `nlp` pipeline. Each entry is itself a dictionary, with two
keys: `prompt` and `response`.
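
As a rough sketch (assuming `save_io` is passed through the component config and
that the entry is keyed by the component name, here `llm_ner`), the stored
prompt and response could be inspected like this:

```python
import spacy

nlp = spacy.blank("en")
# Assumption: save_io is a setting on the llm component config - check the
# factory settings in the API docs for the exact option name.
llm_ner = nlp.add_pipe("llm_ner", config={"save_io": True})
llm_ner.add_label("PERSON")
llm_ner.add_label("LOCATION")
nlp.initialize()

doc = nlp("Jack and Jill rode up the hill in Les Deux Alpes")
# One entry per LLM component, each with "prompt" and "response" keys
io = doc.user_data["llm_io"]["llm_ner"]
print(io["prompt"])
print(io["response"])
```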

A note on `validate_types`: by default, `spacy-llm` checks whether the
signatures of the `model` and `task` callables are consistent with each other
and emits a warning if they are not. `validate_types` can be set to `False` if
you want to disable this behavior.

### Tasks {id="tasks"}

A _task_ defines an NLP problem or question that will be sent to the LLM via a
prompt. Further, the task defines how to parse the LLM's responses back into
structured information. All tasks are registered in the `llm_tasks` registry.

Practically speaking, a task should adhere to the `Protocol` `LLMTask` defined
in [`ty.py`](https://github.com/explosion/spacy-llm/blob/main/spacy_llm/ty.py).
It needs to define a `generate_prompts` function and a `parse_responses`
function.

| Task | Description |
| ---- | ----------- |
| [`task.generate_prompts`](/api/large-language-models#task-generate-prompts) | Takes a collection of documents, and returns a collection of "prompts", which can be of type `Any`. |
| [`task.parse_responses`](/api/large-language-models#task-parse-responses) | Takes a collection of LLM responses and the original documents, parses the responses into structured information, and sets the annotations on the documents. |

Moreover, the task may define an optional [`scorer` method](/api/scorer#score).
It should accept an iterable of `Example` objects as input and return a score
dictionary. If the `scorer` method is defined, `spacy-llm` will call it to
evaluate the component.

| Component | Description |
| --------- | ----------- |
| [`spacy.Summarization.v1`](/api/large-language-models#summarization-v1) | The summarization task prompts the model for a concise summary of the provided text. |
| [`spacy.NER.v3`](/api/large-language-models#ner-v3) | Implements Chain-of-Thought reasoning for NER extraction - obtains higher accuracy than v1 or v2. |
| [`spacy.NER.v2`](/api/large-language-models#ner-v2) | Builds on v1 and additionally supports defining the provided labels with explicit descriptions. |
| [`spacy.NER.v1`](/api/large-language-models#ner-v1) | The original version of the built-in NER task supports both zero-shot and few-shot prompting. |
| [`spacy.SpanCat.v3`](/api/large-language-models#spancat-v3) | Adaptation of the v3 NER task to support overlapping entities and store its annotations in `doc.spans`. |
| [`spacy.SpanCat.v2`](/api/large-language-models#spancat-v2) | Adaptation of the v2 NER task to support overlapping entities and store its annotations in `doc.spans`. |
| [`spacy.SpanCat.v1`](/api/large-language-models#spancat-v1) | Adaptation of the v1 NER task to support overlapping entities and store its annotations in `doc.spans`. |
| [`spacy.REL.v1`](/api/large-language-models#rel-v1) | Relation Extraction task supporting both zero-shot and few-shot prompting. |
| [`spacy.TextCat.v3`](/api/large-language-models#textcat-v3) | Version 3 builds on v2 and allows setting definitions of labels. |
| [`spacy.TextCat.v2`](/api/large-language-models#textcat-v2) | Version 2 builds on v1 and includes an improved prompt template. |
| [`spacy.TextCat.v1`](/api/large-language-models#textcat-v1) | Version 1 of the built-in TextCat task supports both zero-shot and few-shot prompting. |
| [`spacy.Lemma.v1`](/api/large-language-models#lemma-v1) | Lemmatizes the provided text and updates the `lemma_` attribute of the tokens accordingly. |
| [`spacy.Sentiment.v1`](/api/large-language-models#sentiment-v1) | Performs sentiment analysis on provided texts. |
| [`spacy.NoOp.v1`](/api/large-language-models#noop-v1) | This task is only useful for testing - it tells the LLM to do nothing, and does not set any fields on the `docs`. |

#### Providing examples for few-shot prompts {id="few-shot-prompts"}

All built-in tasks support few-shot prompts, i.e. including examples in a
prompt. Examples can be supplied in two ways: (1) as a separate file containing
only examples or (2) by initializing `llm` with a `get_examples()` callback
(like any other pipeline component).

##### (1) Few-shot example file

A file containing examples for few-shot prompting can be configured like this:

```ini
[components.llm.task]
@llm_tasks = "spacy.NER.v2"
labels = PERSON,ORGANISATION,LOCATION
[components.llm.task.examples]
@misc = "spacy.FewShotReader.v1"
path = "ner_examples.yml"
```

The supplied file has to conform to the format expected by the required task
(see the task documentation further down).

##### (2) Initializing the `llm` component with a `get_examples()` callback

Alternatively, you can initialize your `nlp` pipeline by providing a
`get_examples` callback for [`nlp.initialize`](/api/language#initialize) and
setting `n_prompt_examples` to a positive number to automatically fetch a few
examples for few-shot learning. Set `n_prompt_examples` to `-1` to use all
examples as part of the few-shot learning prompt.

```ini
[initialize.components.llm]
n_prompt_examples = 3
```

### Model {id="models"}

A _model_ defines which LLM model to query, and how to query it. It can be a
simple function taking a collection of prompts (consistent with the output type
of `task.generate_prompts()`) and returning a collection of responses
(consistent with the expected input of `parse_responses`). Generally speaking,
it's a function of type `Callable[[Iterable[Any]], Iterable[Any]]`, but specific
implementations can have other signatures, like
`Callable[[Iterable[str]], Iterable[str]]`.
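
For illustration, here is a minimal sketch of such a callable, registered in the
`llm_models` registry described below so it can be referenced from the config.
The namespace and the canned response are assumptions made for the sketch; a
real model would call an API or a locally loaded model instead:

```python
from typing import Callable, Iterable

from spacy_llm.registry import registry


@registry.llm_models("my_namespace.DummyModel.v1")
def make_dummy_model() -> Callable[[Iterable[str]], Iterable[str]]:
    def _dummy_model(prompts: Iterable[str]) -> Iterable[str]:
        # A real model would send each prompt to an LLM here - this one just
        # returns a fixed string so the task's parser has something to parse.
        return ["COMPLIMENT" for _ in prompts]

    return _dummy_model
```

The function can then be referenced from `[components.llm.model]` via
`@llm_models = "my_namespace.DummyModel.v1"`.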

All built-in models are registered in `llm_models`. If no model is specified,
the repo currently connects to the `OpenAI` API by default using REST, and
accesses the `"gpt-3.5-turbo"` model.

Currently three different approaches to use LLMs are supported:

1. `spacy-llm`'s native REST interface. This is the default for all hosted
   models (e.g. OpenAI, Cohere, Anthropic, ...).
2. A HuggingFace integration that allows you to run a limited set of HF models
   locally.
3. A LangChain integration that allows you to run any model supported by
   LangChain (hosted or local).

Approaches 1 and 2 are the defaults for hosted and local models, respectively.
Alternatively you can use LangChain to access hosted or local models by
specifying one of the models registered with the `langchain.` prefix.

<Infobox>
_Why LangChain if there is also a native REST and a HuggingFace interface? When should I use what?_

Third-party libraries like `langchain` focus on prompt management, integration
of many different LLM APIs, and other related features such as conversational
memory or agents. `spacy-llm` on the other hand emphasizes features we consider
useful in the context of NLP pipelines utilizing LLMs to process documents
(mostly) independent from each other. It makes sense that the feature sets of
such third-party libraries and `spacy-llm` aren't identical - and users might
want to take advantage of features not available in `spacy-llm`.

The advantage of implementing our own REST and HuggingFace integrations is that
we can ensure a larger degree of stability and robustness, as we can guarantee
backwards-compatibility and more smoothly integrated error handling.

If however there are features or APIs not natively covered by `spacy-llm`, it's
trivial to utilize LangChain to cover this - and easy to customize the prompting
mechanism, if so required.

</Infobox>

<Infobox variant="warning">
Note that when using hosted services, you have to ensure that the [proper API
keys](/api/large-language-models#api-keys) are set as environment variables as
described by the corresponding provider's documentation.

</Infobox>

| Model | Description |
| ----- | ----------- |
| [`spacy.GPT-4.v2`](/api/large-language-models#models-rest) | OpenAI’s `gpt-4` model family. |
| [`spacy.GPT-3-5.v2`](/api/large-language-models#models-rest) | OpenAI’s `gpt-3-5` model family. |
| [`spacy.Text-Davinci.v2`](/api/large-language-models#models-rest) | OpenAI’s `text-davinci` model family. |
| [`spacy.Code-Davinci.v2`](/api/large-language-models#models-rest) | OpenAI’s `code-davinci` model family. |
| [`spacy.Text-Curie.v2`](/api/large-language-models#models-rest) | OpenAI’s `text-curie` model family. |
| [`spacy.Text-Babbage.v2`](/api/large-language-models#models-rest) | OpenAI’s `text-babbage` model family. |
| [`spacy.Text-Ada.v2`](/api/large-language-models#models-rest) | OpenAI’s `text-ada` model family. |
| [`spacy.Davinci.v2`](/api/large-language-models#models-rest) | OpenAI’s `davinci` model family. |
| [`spacy.Curie.v2`](/api/large-language-models#models-rest) | OpenAI’s `curie` model family. |
| [`spacy.Babbage.v2`](/api/large-language-models#models-rest) | OpenAI’s `babbage` model family. |
| [`spacy.Ada.v2`](/api/large-language-models#models-rest) | OpenAI’s `ada` model family. |
| [`spacy.Command.v1`](/api/large-language-models#models-rest) | Cohere’s `command` model family. |
| [`spacy.Claude-2.v1`](/api/large-language-models#models-rest) | Anthropic’s `claude-2` model family. |
| [`spacy.Claude-1.v1`](/api/large-language-models#models-rest) | Anthropic’s `claude-1` model family. |
| [`spacy.Claude-instant-1.v1`](/api/large-language-models#models-rest) | Anthropic’s `claude-instant-1` model family. |
| [`spacy.Claude-instant-1-1.v1`](/api/large-language-models#models-rest) | Anthropic’s `claude-instant-1.1` model family. |
| [`spacy.Claude-1-0.v1`](/api/large-language-models#models-rest) | Anthropic’s `claude-1.0` model family. |
| [`spacy.Claude-1-2.v1`](/api/large-language-models#models-rest) | Anthropic’s `claude-1.2` model family. |
| [`spacy.Claude-1-3.v1`](/api/large-language-models#models-rest) | Anthropic’s `claude-1.3` model family. |
| [`spacy.Dolly.v1`](/api/large-language-models#models-hf) | Dolly models through HuggingFace. |
| [`spacy.Falcon.v1`](/api/large-language-models#models-hf) | Falcon models through HuggingFace. |
| [`spacy.Llama2.v1`](/api/large-language-models#models-hf) | Llama2 models through HuggingFace. |
| [`spacy.StableLM.v1`](/api/large-language-models#models-hf) | StableLM models through HuggingFace. |
| [`spacy.OpenLLaMA.v1`](/api/large-language-models#models-hf) | OpenLLaMA models through HuggingFace. |
| [LangChain models](/api/large-language-models#langchain-models) | LangChain models for API retrieval. |

Note that the chat model variants of Llama 2 are currently not supported. This
is because they need a particular prompting setup and don't add any discernible
benefits in the use case of `spacy-llm` (i.e. no interactive chat) compared to
the completion model variants.

### Cache {id="cache"}

Interacting with LLMs, either through an external API or a local instance, is
costly. Since developing an NLP pipeline generally means a lot of exploration
and prototyping, `spacy-llm` implements a built-in
[cache](/api/large-language-models#cache) that keeps batches of documents stored
on disk, so the same documents aren't reprocessed at each run.
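
As a rough sketch of how the cache could be wired up from Python - the
`spacy.BatchCache.v1` function and its parameter names are assumptions here, so
check the cache section of the API docs for the exact configuration:

```python
import spacy

nlp = spacy.blank("en")
nlp.add_pipe(
    "llm",
    config={
        "task": {"@llm_tasks": "spacy.TextCat.v2", "labels": ["COMPLIMENT", "INSULT"]},
        # Assumed cache settings: batches of processed docs are written to disk
        # and re-used on later runs instead of re-prompting the LLM.
        "cache": {
            "@llm_misc": "spacy.BatchCache.v1",
            "path": "local-cache",
            "batch_size": 64,
            "max_batches_in_mem": 4,
        },
    },
)
```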

### Various functions {id="various-functions"}

| Function | Description |
| -------- | ----------- |
| [`spacy.FewShotReader.v1`](/api/large-language-models#fewshotreader-v1) | This function is registered in spaCy's `misc` registry, and reads in examples from a `.yml`, `.yaml`, `.json` or `.jsonl` file. It uses [`srsly`](https://github.com/explosion/srsly) to read in these files and parses them depending on the file extension. |
| [`spacy.FileReader.v1`](/api/large-language-models#filereader-v1) | This function is registered in spaCy's `misc` registry, and reads a file provided to the `path` to return a `str` representation of its contents. This function is typically used to read [Jinja](https://jinja.palletsprojects.com/en/3.1.x/) files containing the prompt template. |
| [Normalizer functions](/api/large-language-models#normalizer-functions) | These functions provide simple normalizations for string comparisons, e.g. between a list of specified labels and a label given in the raw text of the LLM response. |
@@ -1299,9 +1299,9 @@ correct type.

```python {title="functions.py",highlight="1"}
@spacy.registry.tokenizers("bert_word_piece_tokenizer")
def create_bert_tokenizer(vocab_file: str, lowercase: bool):
    def create_tokenizer(nlp):
        return BertTokenizer(nlp.vocab, vocab_file, lowercase)

    return create_tokenizer
```
@@ -244,7 +244,7 @@ tagging pipeline. This is also why the pipeline state is always held by the
together and returns an instance of `Language` with a pipeline set and access to
the binary data:

```python {title="spacy.load under the hood (abstract example)"}
lang = "en"
pipeline = ["tok2vec", "tagger", "parser", "ner", "attribute_ruler", "lemmatizer"]
data_path = "path/to/en_core_web_sm/en_core_web_sm-3.0.0"

@@ -252,7 +252,7 @@ data_path = "path/to/en_core_web_sm/en_core_web_sm-3.0.0"
cls = spacy.util.get_lang_class(lang)  # 1. Get Language class, e.g. English
nlp = cls()                            # 2. Initialize it
for name in pipeline:
    nlp.add_pipe(name, config={...})   # 3. Add the component to the pipeline
nlp.from_disk(data_path)               # 4. Load in the binary data
```
@@ -311,7 +311,7 @@ import re
nlp = spacy.load("en_core_web_sm")
doc = nlp("The United States of America (USA) are commonly known as the United States (U.S. or US) or America.")

expression = r"[Uu](nited|\.?) ?[Ss](tates|\.?)"
for match in re.finditer(expression, doc.text):
    start, end = match.span()
    span = doc.char_span(start, end)
|
||||||
this way, the score will also reflect combinations of emoji, even positive _and_
|
this way, the score will also reflect combinations of emoji, even positive _and_
|
||||||
negative ones.
|
negative ones.
|
||||||
|
|
||||||
With a library like [Emojipedia](https://github.com/bcongdon/python-emojipedia),
|
With a library like [emoji](https://github.com/carpedm20/emoji), we can also
|
||||||
we can also retrieve a short description for each emoji – for example, 😍's
|
retrieve a short description for each emoji – for example, 😍's official title
|
||||||
official title is "Smiling Face With Heart-Eyes". Assigning it to a
|
is "Smiling Face With Heart-Eyes". Assigning it to a
|
||||||
[custom attribute](/usage/processing-pipelines#custom-components-attributes) on
|
[custom attribute](/usage/processing-pipelines#custom-components-attributes) on
|
||||||
the emoji span will make it available as `span._.emoji_desc`.
|
the emoji span will make it available as `span._.emoji_desc`.
|
||||||
|
|
||||||
```python
|
```python
|
||||||
from emojipedia import Emojipedia # Installation: pip install emojipedia
|
import emoji # Installation: pip install emoji
|
||||||
from spacy.tokens import Span # Get the global Span object
|
from spacy.tokens import Span # Get the global Span object
|
||||||
|
|
||||||
Span.set_extension("emoji_desc", default=None) # Register the custom attribute
|
Span.set_extension("emoji_desc", default=None) # Register the custom attribute
|
||||||
|
@ -869,9 +869,9 @@ def label_sentiment(matcher, doc, i, matches):
|
||||||
elif doc.vocab.strings[match_id] == "SAD":
|
elif doc.vocab.strings[match_id] == "SAD":
|
||||||
doc.sentiment -= 0.1 # Subtract 0.1 for negative sentiment
|
doc.sentiment -= 0.1 # Subtract 0.1 for negative sentiment
|
||||||
span = doc[start:end]
|
span = doc[start:end]
|
||||||
emoji = Emojipedia.search(span[0].text) # Get data for emoji
|
# Verify if it is an emoji and set the extension attribute correctly.
|
||||||
span._.emoji_desc = emoji.title # Assign emoji description
|
if emoji.is_emoji(span[0].text):
|
||||||
|
span._.emoji_desc = emoji.demojize(span[0].text, delimiters=("", ""), language=doc.lang_).replace("_", " ")
|
||||||
```
|
```
|
||||||
|
|
||||||
To label the hashtags, we can use a
|
To label the hashtags, we can use a
|
||||||
|
@@ -1096,28 +1096,28 @@ The following operators are supported by the `DependencyMatcher`, most of which
come directly from
[Semgrex](https://nlp.stanford.edu/nlp/javadoc/javanlp/edu/stanford/nlp/semgraph/semgrex/SemgrexPattern.html):

| Symbol | Description |
| --------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------ |
| `A < B` | `A` is the immediate dependent of `B`. |
| `A > B` | `A` is the immediate head of `B`. |
| `A << B` | `A` is the dependent in a chain to `B` following dep → head paths. |
| `A >> B` | `A` is the head in a chain to `B` following head → dep paths. |
| `A . B` | `A` immediately precedes `B`, i.e. `A.i == B.i - 1`, and both are within the same dependency tree. |
| `A .* B` | `A` precedes `B`, i.e. `A.i < B.i`, and both are within the same dependency tree _(Semgrex counterpart: `..`)_. |
| `A ; B` | `A` immediately follows `B`, i.e. `A.i == B.i + 1`, and both are within the same dependency tree _(Semgrex counterpart: `-`)_. |
| `A ;* B` | `A` follows `B`, i.e. `A.i > B.i`, and both are within the same dependency tree _(Semgrex counterpart: `--`)_. |
| `A $+ B` | `B` is a right immediate sibling of `A`, i.e. `A` and `B` have the same parent and `A.i == B.i - 1`. |
| `A $- B` | `B` is a left immediate sibling of `A`, i.e. `A` and `B` have the same parent and `A.i == B.i + 1`. |
| `A $++ B` | `B` is a right sibling of `A`, i.e. `A` and `B` have the same parent and `A.i < B.i`. |
| `A $-- B` | `B` is a left sibling of `A`, i.e. `A` and `B` have the same parent and `A.i > B.i`. |
| `A >+ B` <Tag variant="new">3.5.1</Tag> | `B` is a right immediate child of `A`, i.e. `A` is a parent of `B` and `A.i == B.i - 1` _(not in Semgrex)_. |
| `A >- B` <Tag variant="new">3.5.1</Tag> | `B` is a left immediate child of `A`, i.e. `A` is a parent of `B` and `A.i == B.i + 1` _(not in Semgrex)_. |
| `A >++ B` | `B` is a right child of `A`, i.e. `A` is a parent of `B` and `A.i < B.i`. |
| `A >-- B` | `B` is a left child of `A`, i.e. `A` is a parent of `B` and `A.i > B.i`. |
| `A <+ B` <Tag variant="new">3.5.1</Tag> | `B` is a right immediate parent of `A`, i.e. `A` is a child of `B` and `A.i == B.i - 1` _(not in Semgrex)_. |
| `A <- B` <Tag variant="new">3.5.1</Tag> | `B` is a left immediate parent of `A`, i.e. `A` is a child of `B` and `A.i == B.i + 1` _(not in Semgrex)_. |
| `A <++ B` | `B` is a right parent of `A`, i.e. `A` is a child of `B` and `A.i < B.i`. |
| `A <-- B` | `B` is a left parent of `A`, i.e. `A` is a child of `B` and `A.i > B.i`. |
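
For example, a minimal sketch using the `>+` operator from the table above; the
sentence and the token attributes are only an illustration:

```python
import spacy
from spacy.matcher import DependencyMatcher

nlp = spacy.load("en_core_web_sm")
matcher = DependencyMatcher(nlp.vocab)

# "loves" must have a child that immediately follows it (A.i == B.i - 1)
pattern = [
    {"RIGHT_ID": "anchor", "RIGHT_ATTRS": {"ORTH": "loves"}},
    {"LEFT_ID": "anchor", "REL_OP": ">+", "RIGHT_ID": "next_child", "RIGHT_ATTRS": {}},
]
matcher.add("LOVES_NEXT_CHILD", [pattern])

doc = nlp("Smith loves cats.")
for match_id, token_ids in matcher(doc):
    print([doc[i].text for i in token_ids])
```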

### Designing dependency matcher patterns {id="dependencymatcher-patterns"}
@@ -180,7 +180,7 @@ Some of the main advantages and features of spaCy's training config are:

Under the hood, the config is parsed into a dictionary. It's divided into
sections and subsections, indicated by the square brackets and dot notation. For
example, `[training]` is a section and `[training.batcher]` a subsection.
Subsections can define values, just like a dictionary, or use the `@` syntax to
refer to [registered functions](#config-functions). This allows the config to
not just define static settings, but also construct objects like architectures,
@@ -254,7 +254,7 @@ For cases like this, you can set additional command-line options starting with
block.

```bash
$ python -m spacy train config.cfg --paths.train ./corpus/train.spacy --paths.dev ./corpus/dev.spacy --training.max_epochs 3
```

Only existing sections and values in the config can be overwritten. At the end
@@ -279,7 +279,7 @@ process. Environment variables **take precedence** over CLI overrides and values
defined in the config file.

```bash
$ SPACY_CONFIG_OVERRIDES="--system.gpu_allocator pytorch --training.max_epochs 3" ./your_script.sh
```

### Reading from standard input {id="config-stdin"}
@@ -578,16 +578,17 @@ now-updated model to the predicted docs.

The training configuration defined in the config file doesn't have to only
consist of static values. Some settings can also be **functions**. For instance,
the batch size can be a number that doesn't change, or a schedule, like a
sequence of compounding values, which has shown to be an effective trick (see
[Smith et al., 2017](https://arxiv.org/abs/1711.00489)).

```ini {title="With static value"}
[training.batcher]
@batchers = "spacy.batch_by_words.v1"
size = 3000
```

To refer to a function instead, you can make `[training.batcher.size]` its own
section and use the `@` syntax to specify the function and its arguments – in
this case [`compounding.v1`](https://thinc.ai/docs/api-schedules#compounding)
defined in the [function registry](/api/top-level#registry). All other values
@@ -606,7 +607,7 @@ from your configs.
> optimizer.

```ini {title="With registered function"}
[training.batcher.size]
@schedules = "compounding.v1"
start = 100
stop = 1000
@@ -1027,14 +1028,14 @@ def my_custom_schedule(start: int = 1, factor: float = 1.001):
 ```
 
 In your config, you can now reference the schedule in the
-`[training.batch_size]` block via `@schedules`. If a block contains a key
+`[training.batcher.size]` block via `@schedules`. If a block contains a key
 starting with an `@`, it's interpreted as a reference to a function. All other
 settings in the block will be passed to the function as keyword arguments. Keep
 in mind that the config shouldn't have any hidden defaults and all arguments on
 the functions need to be represented in the config.
 
 ```ini {title="config.cfg (excerpt)"}
-[training.batch_size]
+[training.batcher.size]
 @schedules = "my_custom_schedule.v1"
 start = 2
 factor = 1.005
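For context, `my_custom_schedule.v1` refers to a schedule function registered under that name. A minimal sketch of such a registration, using the signature from the hunk header above (the generator body here is illustrative and may differ from the one in the docs):

```python
import spacy


@spacy.registry.schedules("my_custom_schedule.v1")
def my_custom_schedule(start: int = 1, factor: float = 1.001):
    # Yield an endless, compounding sequence of values,
    # e.g. for use as a batch-size schedule.
    while True:
        yield start
        start = start * factor
```

With this registered, the `[training.batcher.size]` block above resolves by calling the function with `start=2` and `factor=1.005` as keyword arguments.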
@@ -349,7 +349,8 @@ or
 [SyntaxNet](https://github.com/tensorflow/models/tree/master/research/syntaxnet).
 If you set `manual=True` on either `render()` or `serve()`, you can pass in data
 in displaCy's format as a dictionary (instead of `Doc` objects). There are
-helper functions for converting `Doc` objects to displaCy's format for use with
+helper functions for converting `Doc` objects to
+[displaCy's format](/api/top-level#displacy_structures) for use with
 `manual=True`: [`displacy.parse_deps`](/api/top-level#displacy.parse_deps),
 [`displacy.parse_ents`](/api/top-level#displacy.parse_ents), and
 [`displacy.parse_spans`](/api/top-level#displacy.parse_spans).
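A minimal sketch of that manual workflow, assuming the `en_core_web_sm` pipeline is installed and using an arbitrary example sentence:

```python
import spacy
from spacy import displacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")

# Convert the Doc into displaCy's dictionary format for entities ...
ent_data = displacy.parse_ents(doc)

# ... then pass the dictionary (not the Doc) to the renderer.
html = displacy.render(ent_data, style="ent", manual=True)
```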
@@ -26,16 +26,19 @@
       { "text": "Processing Pipelines", "url": "/usage/processing-pipelines" },
       {
         "text": "Embeddings & Transformers",
-        "url": "/usage/embeddings-transformers",
+        "url": "/usage/embeddings-transformers"
+      },
+      {
+        "text": "Large Language Models",
+        "url": "/usage/large-language-models",
         "tag": "new"
       },
-      { "text": "Training Models", "url": "/usage/training", "tag": "new" },
+      { "text": "Training Models", "url": "/usage/training" },
       {
         "text": "Layers & Model Architectures",
-        "url": "/usage/layers-architectures",
-        "tag": "new"
+        "url": "/usage/layers-architectures"
       },
-      { "text": "spaCy Projects", "url": "/usage/projects", "tag": "new" },
+      { "text": "spaCy Projects", "url": "/usage/projects" },
       { "text": "Saving & Loading", "url": "/usage/saving-loading" },
       { "text": "Visualizers", "url": "/usage/visualizers" }
     ]
@@ -103,6 +106,7 @@
       { "text": "EntityLinker", "url": "/api/entitylinker" },
       { "text": "EntityRecognizer", "url": "/api/entityrecognizer" },
       { "text": "EntityRuler", "url": "/api/entityruler" },
+      { "text": "Large Language Models", "url": "/api/large-language-models" },
       { "text": "Lemmatizer", "url": "/api/lemmatizer" },
       { "text": "Morphologizer", "url": "/api/morphologizer" },
       { "text": "SentenceRecognizer", "url": "/api/sentencerecognizer" },
@@ -17,6 +17,31 @@
         "category": ["extension"],
         "tags": []
     },
+    {
+        "id": "sayswho",
+        "title": "SaysWho",
+        "slogan": "Quote identification, attribution and resolution",
+        "description": "A Python package for identifying and attributing quotes in text. It uses a combination of spaCy functionality, logic and grammar to find quotes and their speakers, then uses the spaCy coreferencing model to better clarify who is speaking. Currently English only.",
+        "github": "afriedman412/sayswho",
+        "pip": "sayswho",
+        "code_language": "python",
+        "author": "Andy Friedman",
+        "author_links": {
+            "twitter": "@steadynappin",
+            "github": "afriedman412"
+        },
+        "code_example": [
+            "from sayswho import SaysWho",
+            "text = open(\"path/to/your/text_file.txt\").read()",
+            "sw = SaysWho()",
+            "sw.attribute(text)",
+            "",
+            "sw.expand_match() # see quote/cluster matches",
+            "sw.render_to_html() # output your text, quotes and cluster matches to an html file called \"temp.html\""
+        ],
+        "category": ["standalone"],
+        "tags": ["attribution", "coref", "text-processing"]
+    },
     {
         "id": "parsigs",
         "title": "parsigs",
@@ -67,6 +92,33 @@
         "category": ["pipeline", "research"],
         "tags": ["latin"]
     },
+    {
+        "id": "odycy",
+        "title": "OdyCy",
+        "slogan": "General-purpose language pipelines for premodern Greek.",
+        "description": "Academically validated modular NLP pipelines for premodern Greek. odyCy achieves state of the art performance on multiple tasks on unseen test data from the Universal Dependencies Perseus treebank, and performs second best on the PROIEL treebank’s test set on even more tasks. In addition performance also seems relatively stable across the two evaluation datasets in comparison with other NLP pipelines. OdyCy is being used at the Center for Humanities Computing for preprocessing and analyzing Ancient Greek corpora for New Testament research, meaning that you can expect consistent maintenance and improvements.",
+        "github": "centre-for-humanities-computing/odyCy",
+        "code_example": [
+            "# To install the high-accuracy transformer-based pipeline",
+            "# pip install https://huggingface.co/chcaa/grc_odycy_joint_trf/resolve/main/grc_odycy_joint_trf-any-py3-none-any.whl",
+            "import spacy",
+            "",
+            "nlp = spacy.load('grc_odycy_joint_trf')",
+            "",
+            "doc = nlp('τὴν γοῦν Ἀττικὴν ἐκ τοῦ ἐπὶ πλεῖστον διὰ τὸ λεπτόγεων ἀστασίαστον οὖσαν ἄνθρωποι ᾤκουν οἱ αὐτοὶ αἰεί.')"
+        ],
+        "code_language": "python",
+        "url": "https://centre-for-humanities-computing.github.io/odyCy/",
+        "thumb": "https://raw.githubusercontent.com/centre-for-humanities-computing/odyCy/7b94fec60679d06272dca88a4dcfe0f329779aea/docs/_static/logo.svg",
+        "image": "https://github.com/centre-for-humanities-computing/odyCy/raw/main/docs/_static/logo_with_text_below.svg",
+        "author": "Jan Kostkan, Márton Kardos (Center for Humanities Computing, Aarhus University)",
+        "author_links": {
+            "github": "centre-for-humanities-computing",
+            "website": "https://chc.au.dk/"
+        },
+        "category": ["pipeline", "standalone", "research"],
+        "tags": ["ancient Greek"]
+    },
     {
         "id": "spacy-wasm",
         "title": "spacy-wasm",
@@ -2754,7 +2806,7 @@
             "",
             "# see github repo for examples on sentence-transformers and Huggingface",
             "nlp = spacy.load('en_core_web_md')",
-            "nlp.add_pipe(\"text_categorizer\", ",
+            "nlp.add_pipe(\"classy_classification\", ",
             " config={",
             " \"data\": data,",
             " \"model\": \"spacy\"",
@@ -2958,8 +3010,8 @@
             "# Load the spaCy language model:",
             "nlp = spacy.load(\"en_core_web_sm\")",
             "",
-            "# Add the \"text_categorizer\" pipeline component to the spaCy model, and configure it with SetFit parameters:",
-            "nlp.add_pipe(\"text_categorizer\", config={",
+            "# Add the \"spacy_setfit\" pipeline component to the spaCy model, and configure it with SetFit parameters:",
+            "nlp.add_pipe(\"spacy_setfit\", config={",
             " \"pretrained_model_name_or_path\": \"paraphrase-MiniLM-L3-v2\",",
             " \"setfit_trainer_args\": {",
             " \"train_dataset\": train_dataset",
@@ -4392,6 +4444,62 @@
         },
         "category": ["pipeline", "standalone", "scientific"],
         "tags": ["ner"]
+    },
+    {
+        "id": "hobbit-spacy",
+        "title": "Hobbit spaCy",
+        "slogan": "NLP for Middle Earth",
+        "description": "Hobbit spaCy is a custom spaCy pipeline designed specifically for working with Middle Earth and texts from the world of J.R.R. Tolkien.",
+        "github": "wjbmattingly/hobbit-spacy",
+        "pip": "en-hobbit",
+        "code_example": [
+            "import spacy",
+            "",
+            "nlp = spacy.load('en_hobbit')",
+            "doc = nlp('Frodo saw Glorfindel and Glóin; and in a corner alone Strider was sitting, clad in his old travel - worn clothes again')"
+        ],
+        "code_language": "python",
+        "thumb": "https://github.com/wjbmattingly/hobbit-spacy/blob/main/images/hobbit-thumbnail.png?raw=true",
+        "image": "https://github.com/wjbmattingly/hobbit-spacy/raw/main/images/hobbitspacy.png",
+        "author": "W.J.B. Mattingly",
+        "author_links": {
+            "twitter": "wjb_mattingly",
+            "github": "wjbmattingly",
+            "website": "https://wjbmattingly.com"
+        },
+        "category": ["pipeline", "standalone"],
+        "tags": ["spans", "rules", "ner"]
+    },
+    {
+        "id": "rolegal",
+        "title": "A spaCy Package for Romanian Legal Document Processing",
+        "thumb": "https://raw.githubusercontent.com/senisioi/rolegal/main/img/paper200x200.jpeg",
+        "slogan": "rolegal: a spaCy Package for Noisy Romanian Legal Document Processing",
+        "description": "This is a spaCy language model for Romanian legal domain trained with floret 4-gram to 5-gram embeddings and `LEGAL` entity recognition. Useful for processing OCR-resulted noisy legal documents.",
+        "github": "senisioi/rolegal",
+        "pip": "ro-legal-fl",
+        "tags": ["legal", "floret", "ner", "romanian"],
+        "code_example": [
+            "import spacy",
+            "nlp = spacy.load(\"ro_legal_fl\")",
+            "",
+            "doc = nlp(\"Titlul III din LEGEA nr. 255 din 19 iulie 2013, publicată în MONITORUL OFICIAL\")",
+            "# legal entity identification",
+            "for entity in doc.ents:",
+            " print('entity: ', entity, '; entity type: ', entity.label_)",
+            "",
+            "# floret n-gram embeddings robust to typos",
+            "print(nlp('achizit1e public@').similarity(nlp('achiziții publice')))",
+            "# 0.7393895566928835",
+            "print(nlp('achizitii publice').similarity(nlp('achiziții publice')))",
+            "# 0.8996480808279399"
+        ],
+        "author": "Sergiu Nisioi",
+        "author_links": {
+            "github": "senisioi",
+            "website": "https://nlp.unibuc.ro/people/snisioi.html"
+        },
+        "category": ["pipeline", "training", "models"]
     }
 ],
 
@@ -16,3 +16,9 @@ NETLIFY_NEXT_PLUGIN_SKIP = "true"
 
 [[plugins]]
 package = "@netlify/plugin-nextjs"
+
+[[headers]]
+  for = "/*"
+  [headers.values]
+    X-Frame-Options = "DENY"
+    X-XSS-Protection = "1; mode=block"
@@ -106,50 +106,21 @@ const Landing = () => {
 
 <LandingBannerGrid>
     <LandingBanner
-        to="https://explosion.ai/custom-solutions"
+        label="NEW"
+        title="Large Language Models: Integrating LLMs into structured NLP pipelines"
+        to="/usage/large-language-models"
         button="Learn more"
-        background="#E4F4F9"
-        color="#1e1935"
        small
    >
        <p>
-            <Link to="https://explosion.ai/custom-solutions" hidden>
-                <ImageFill
-                    image={tailoredPipelinesImage}
-                    alt="spaCy Tailored Pipelines"
-                />
-            </Link>
+            <Link to="https://github.com/explosion/spacy-llm">
+                The spacy-llm package
+            </Link>{' '}
+            integrates Large Language Models (LLMs) into spaCy, featuring a modular
+            system for <strong>fast prototyping</strong> and <strong>prompting</strong>,
+            and turning unstructured responses into <strong>robust outputs</strong> for
+            various NLP tasks, <strong>no training data</strong> required.
        </p>
-        <p>
-            <strong>
-                Get a custom spaCy pipeline, tailor-made for your NLP problem by
-                spaCy's core developers.
-            </strong>
-        </p>
-        <Ul>
-            <Li emoji="🔥">
-                <strong>Streamlined.</strong> Nobody knows spaCy better than we do. Send
-                us your pipeline requirements and we'll be ready to start producing
-                your solution in no time at all.
-            </Li>
-            <Li emoji="🐿 ">
-                <strong>Production ready.</strong> spaCy pipelines are robust and easy
-                to deploy. You'll get a complete spaCy project folder which is
-                ready to <InlineCode>spacy project run</InlineCode>.
-            </Li>
-            <Li emoji="🔮">
-                <strong>Predictable.</strong> You'll know exactly what you're
-                going to get and what it's going to cost. We quote fees up-front,
-                let you try before you buy, and don't charge for over-runs at our
-                end — all the risk is on us.
-            </Li>
-            <Li emoji="🛠">
-                <strong>Maintainable.</strong> spaCy is an industry standard, and
-                we'll deliver your pipeline with full code, data, tests and
-                documentation, so your team can retrain, update and extend the solution
-                as your requirements change.
-            </Li>
-        </Ul>
    </LandingBanner>
 
    <LandingBanner
@@ -240,21 +211,50 @@ const Landing = () => {
 
 <LandingBannerGrid>
    <LandingBanner
-        label="New in v3.0"
-        title="Transformer-based pipelines, new training system, project templates & more"
-        to="/usage/v3"
-        button="See what's new"
+        to="https://explosion.ai/custom-solutions"
+        button="Learn more"
+        background="#E4F4F9"
+        color="#1e1935"
        small
    >
        <p>
-            spaCy v3.0 features all new <strong>transformer-based pipelines</strong>{' '}
-            that bring spaCy's accuracy right up to the current{' '}
-            <strong>state-of-the-art</strong>. You can use any pretrained transformer to
-            train your own pipelines, and even share one transformer between multiple
-            components with <strong>multi-task learning</strong>. Training is now fully
-            configurable and extensible, and you can define your own custom models using{' '}
-            <strong>PyTorch</strong>, <strong>TensorFlow</strong> and other frameworks.
+            <Link to="https://explosion.ai/custom-solutions" noLinkLayout>
+                <ImageFill
+                    image={tailoredPipelinesImage}
+                    alt="spaCy Tailored Pipelines"
+                />
+            </Link>
        </p>
+        <p>
+            <strong>
+                Get a custom spaCy pipeline, tailor-made for your NLP problem by
+                spaCy's core developers.
+            </strong>
+        </p>
+        <Ul>
+            <Li emoji="🔥">
+                <strong>Streamlined.</strong> Nobody knows spaCy better than we do. Send
+                us your pipeline requirements and we'll be ready to start producing
+                your solution in no time at all.
+            </Li>
+            <Li emoji="🐿 ">
+                <strong>Production ready.</strong> spaCy pipelines are robust and easy
+                to deploy. You'll get a complete spaCy project folder which is
+                ready to <InlineCode>spacy project run</InlineCode>.
+            </Li>
+            <Li emoji="🔮">
+                <strong>Predictable.</strong> You'll know exactly what you're
+                going to get and what it's going to cost. We quote fees up-front,
+                let you try before you buy, and don't charge for over-runs at our
+                end — all the risk is on us.
+            </Li>
+            <Li emoji="🛠">
+                <strong>Maintainable.</strong> spaCy is an industry standard, and
+                we'll deliver your pipeline with full code, data, tests and
+                documentation, so your team can retrain, update and extend the solution
+                as your requirements change.
+            </Li>
+        </Ul>
    </LandingBanner>
    <LandingBanner
        to="https://course.spacy.io"
@@ -264,7 +264,7 @@ const Landing = () => {
        small
    >
        <p>
-            <Link to="https://course.spacy.io" hidden>
+            <Link to="https://course.spacy.io" noLinkLayout>
                <ImageFill
                    image={courseImage}
                    alt="Advanced NLP with spaCy: A free online course"
@@ -10,15 +10,19 @@ const DEFAULT_PLATFORM = 'x86'
 const DEFAULT_MODELS = ['en']
 const DEFAULT_OPT = 'efficiency'
 const DEFAULT_HARDWARE = 'cpu'
-const DEFAULT_CUDA = 'cuda-autodetect'
+const DEFAULT_CUDA = 'cuda11x'
 const CUDA = {
     '8.0': 'cuda80',
     '9.0': 'cuda90',
-    9.1: 'cuda91',
-    9.2: 'cuda92',
+    '9.1': 'cuda91',
+    '9.2': 'cuda92',
     '10.0': 'cuda100',
-    10.1: 'cuda101',
-    '10.2, 11.0+': 'cuda-autodetect',
+    '10.1': 'cuda101',
+    '10.2': 'cuda102',
+    '11.0': 'cuda110',
+    '11.1': 'cuda111',
+    '11.2-11.x': 'cuda11x',
+    '12.x': 'cuda12x',
 }
 const LANG_EXTRAS = ['ja'] // only for languages with models