diff --git a/.github/ISSUE_TEMPLATE/01_bugs.md b/.github/ISSUE_TEMPLATE/01_bugs.md index 768832c24..255a5241e 100644 --- a/.github/ISSUE_TEMPLATE/01_bugs.md +++ b/.github/ISSUE_TEMPLATE/01_bugs.md @@ -4,6 +4,8 @@ about: Use this template if you came across a bug or unexpected behaviour differ --- + + ## How to reproduce the behaviour diff --git a/.github/ISSUE_TEMPLATE/config.yml b/.github/ISSUE_TEMPLATE/config.yml index fce1a1064..31f89f917 100644 --- a/.github/ISSUE_TEMPLATE/config.yml +++ b/.github/ISSUE_TEMPLATE/config.yml @@ -1,8 +1,5 @@ blank_issues_enabled: false contact_links: - - name: ⚠️ Python 3.10 Support - url: https://github.com/explosion/spaCy/discussions/9418 - about: Python 3.10 wheels haven't been released yet, see the link for details. - name: 🗯 Discussions Forum url: https://github.com/explosion/spaCy/discussions about: Install issues, usage questions, general discussion and anything else that isn't a bug report. diff --git a/.github/workflows/gputests.yml b/.github/workflows/gputests.yml new file mode 100644 index 000000000..bb7f51d29 --- /dev/null +++ b/.github/workflows/gputests.yml @@ -0,0 +1,21 @@ +name: Weekly GPU tests + +on: + schedule: + - cron: '0 1 * * MON' + +jobs: + weekly-gputests: + strategy: + fail-fast: false + matrix: + branch: [master, v4] + runs-on: ubuntu-latest + steps: + - name: Trigger buildkite build + uses: buildkite/trigger-pipeline-action@v1.2.0 + env: + PIPELINE: explosion-ai/spacy-slow-gpu-tests + BRANCH: ${{ matrix.branch }} + MESSAGE: ":github: Weekly GPU + slow tests - triggered from a GitHub Action" + BUILDKITE_API_ACCESS_TOKEN: ${{ secrets.BUILDKITE_SECRET }} diff --git a/.github/workflows/slowtests.yml b/.github/workflows/slowtests.yml new file mode 100644 index 000000000..1a99c751c --- /dev/null +++ b/.github/workflows/slowtests.yml @@ -0,0 +1,37 @@ +name: Daily slow tests + +on: + schedule: + - cron: '0 0 * * *' + +jobs: + daily-slowtests: + strategy: + fail-fast: false + matrix: + branch: [master, v4] + runs-on: ubuntu-latest + steps: + - name: Checkout + uses: actions/checkout@v1 + with: + ref: ${{ matrix.branch }} + - name: Get commits from past 24 hours + id: check_commits + run: | + today=$(date '+%Y-%m-%d %H:%M:%S') + yesterday=$(date -d "yesterday" '+%Y-%m-%d %H:%M:%S') + if git log --after="$yesterday" --before="$today" | grep commit ; then + echo "::set-output name=run_tests::true" + else + echo "::set-output name=run_tests::false" + fi + + - name: Trigger buildkite build + if: steps.check_commits.outputs.run_tests == 'true' + uses: buildkite/trigger-pipeline-action@v1.2.0 + env: + PIPELINE: explosion-ai/spacy-slow-tests + BRANCH: ${{ matrix.branch }} + MESSAGE: ":github: Daily slow tests - triggered from a GitHub Action" + BUILDKITE_API_ACCESS_TOKEN: ${{ secrets.BUILDKITE_SECRET }} diff --git a/.gitignore b/.gitignore index 60036a475..ac72f2bbf 100644 --- a/.gitignore +++ b/.gitignore @@ -9,7 +9,6 @@ keys/ spacy/tests/package/setup.cfg spacy/tests/package/pyproject.toml spacy/tests/package/requirements.txt -spacy/tests/universe/universe.json # Website website/.cache/ diff --git a/README.md b/README.md index 57d76fb45..05c912ffa 100644 --- a/README.md +++ b/README.md @@ -32,19 +32,20 @@ open-source software, released under the MIT license. ## 📖 Documentation -| Documentation | | -| -------------------------- | -------------------------------------------------------------- | -| ⭐️ **[spaCy 101]** | New to spaCy? Here's everything you need to know! | -| 📚 **[Usage Guides]** | How to use spaCy and its features. 
| -| 🚀 **[New in v3.0]** | New features, backwards incompatibilities and migration guide. | -| 🪐 **[Project Templates]** | End-to-end workflows you can clone, modify and run. | -| 🎛 **[API Reference]** | The detailed reference for spaCy's API. | -| 📦 **[Models]** | Download trained pipelines for spaCy. | -| 🌌 **[Universe]** | Plugins, extensions, demos and books from the spaCy ecosystem. | -| 👩‍🏫 **[Online Course]** | Learn spaCy in this free and interactive online course. | -| 📺 **[Videos]** | Our YouTube channel with video tutorials, talks and more. | -| 🛠 **[Changelog]** | Changes and version history. | -| 💝 **[Contribute]** | How to contribute to the spaCy project and code base. | +| Documentation | | +| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| ⭐️ **[spaCy 101]** | New to spaCy? Here's everything you need to know! | +| 📚 **[Usage Guides]** | How to use spaCy and its features. | +| 🚀 **[New in v3.0]** | New features, backwards incompatibilities and migration guide. | +| 🪐 **[Project Templates]** | End-to-end workflows you can clone, modify and run. | +| 🎛 **[API Reference]** | The detailed reference for spaCy's API. | +| 📦 **[Models]** | Download trained pipelines for spaCy. | +| 🌌 **[Universe]** | Plugins, extensions, demos and books from the spaCy ecosystem. | +| 👩‍🏫 **[Online Course]** | Learn spaCy in this free and interactive online course. | +| 📺 **[Videos]** | Our YouTube channel with video tutorials, talks and more. | +| 🛠 **[Changelog]** | Changes and version history. | +| 💝 **[Contribute]** | How to contribute to the spaCy project and code base. | +| spaCy Tailored Pipelines | Get a custom spaCy pipeline, tailor-made for your NLP problem by spaCy's core developers. Streamlined, production-ready, predictable and maintainable. Start by completing our 5-minute questionnaire to tell us what you need and we'll be in touch! **[Learn more →](https://explosion.ai/spacy-tailored-pipelines)** | [spacy 101]: https://spacy.io/usage/spacy-101 [new in v3.0]: https://spacy.io/usage/v3 @@ -60,9 +61,7 @@ open-source software, released under the MIT license. ## 💬 Where to ask questions -The spaCy project is maintained by **[@honnibal](https://github.com/honnibal)**, -**[@ines](https://github.com/ines)**, **[@svlandeg](https://github.com/svlandeg)**, -**[@adrianeboyd](https://github.com/adrianeboyd)** and **[@polm](https://github.com/polm)**. +The spaCy project is maintained by the [spaCy team](https://explosion.ai/about). Please understand that we won't be able to provide individual support via email. We also believe that help is much more valuable if it's shared publicly, so that more people can benefit from it. 
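The new `.github/workflows/slowtests.yml` added above only triggers the Buildkite pipeline when the branch has seen commits in the past 24 hours. Below is a rough local sketch of that commit-window check for reference; it is an illustration only (the `has_recent_commits` helper is not part of the patch) and assumes it is run from a checkout of the branch in question.

```python
import subprocess
from datetime import datetime, timedelta


def has_recent_commits(hours: int = 24) -> bool:
    """Return True if the checked-out branch has commits within the last
    `hours` hours, mirroring the check step in slowtests.yml."""
    after = (datetime.now() - timedelta(hours=hours)).strftime("%Y-%m-%d %H:%M:%S")
    # `git log --after=...` prints nothing when there are no commits in the window
    log = subprocess.run(
        ["git", "log", f"--after={after}"],
        capture_output=True,
        text=True,
        check=True,
    ).stdout
    return "commit" in log


if __name__ == "__main__":
    # The workflow exposes this as the run_tests step output; here we just print it
    print(f"run_tests={'true' if has_recent_commits() else 'false'}")
```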
diff --git a/azure-pipelines.yml b/azure-pipelines.yml index 71a793911..4624b2eb2 100644 --- a/azure-pipelines.yml +++ b/azure-pipelines.yml @@ -11,12 +11,14 @@ trigger: exclude: - "website/*" - "*.md" + - ".github/workflows/*" pr: - paths: + paths: exclude: - "*.md" - "website/docs/*" - "website/src/*" + - ".github/workflows/*" jobs: # Perform basic checks for most important errors (syntax etc.) Uses the config diff --git a/extra/DEVELOPER_DOCS/Code Conventions.md b/extra/DEVELOPER_DOCS/Code Conventions.md index eba466c46..37cd8ff27 100644 --- a/extra/DEVELOPER_DOCS/Code Conventions.md +++ b/extra/DEVELOPER_DOCS/Code Conventions.md @@ -137,7 +137,7 @@ If any of the TODOs you've added are important and should be fixed soon, you sho ## Type hints -We use Python type hints across the `.py` files wherever possible. This makes it easy to understand what a function expects and returns, and modern editors will be able to show this information to you when you call an annotated function. Type hints are not currently used in the `.pyx` (Cython) code, except for definitions of registered functions and component factories, where they're used for config validation. +We use Python type hints across the `.py` files wherever possible. This makes it easy to understand what a function expects and returns, and modern editors will be able to show this information to you when you call an annotated function. Type hints are not currently used in the `.pyx` (Cython) code, except for definitions of registered functions and component factories, where they're used for config validation. Ideally when developing, run `mypy spacy` on the code base to inspect any issues. If possible, you should always use the more descriptive type hints like `List[str]` or even `List[Any]` instead of only `list`. We also annotate arguments and return types of `Callable` – although, you can simplify this if the type otherwise gets too verbose (e.g. functions that return factories to create callbacks). Remember that `Callable` takes two values: a **list** of the argument type(s) in order, and the return values. @@ -155,6 +155,13 @@ def create_callback(some_arg: bool) -> Callable[[str, int], List[str]]: return callback ``` +For typing variables, we prefer the explicit format. + +```diff +- var = value # type: Type ++ var: Type = value +``` + For model architectures, Thinc also provides a collection of [custom types](https://thinc.ai/docs/api-types), including more specific types for arrays and model inputs/outputs. Even outside of static type checking, using these types will make the code a lot easier to read and follow, since it's always clear what array types are expected (and what might go wrong if the output is different from the expected type). 
```python diff --git a/pyproject.toml b/pyproject.toml index f81484d43..a43b4c814 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -5,7 +5,7 @@ requires = [ "cymem>=2.0.2,<2.1.0", "preshed>=3.0.2,<3.1.0", "murmurhash>=0.28.0,<1.1.0", - "thinc>=8.0.12,<8.1.0", + "thinc>=8.0.14,<8.1.0", "blis>=0.4.0,<0.8.0", "pathy", "numpy>=1.15.0", diff --git a/requirements.txt b/requirements.txt index 8d7372cfe..7b9d343a9 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,9 +1,9 @@ # Our libraries -spacy-legacy>=3.0.8,<3.1.0 +spacy-legacy>=3.0.9,<3.1.0 spacy-loggers>=1.0.0,<2.0.0 cymem>=2.0.2,<2.1.0 preshed>=3.0.2,<3.1.0 -thinc>=8.0.12,<8.1.0 +thinc>=8.0.14,<8.1.0 blis>=0.4.0,<0.8.0 ml_datasets>=0.2.0,<0.3.0 murmurhash>=0.28.0,<1.1.0 @@ -26,7 +26,7 @@ typing_extensions>=3.7.4.1,<4.0.0.0; python_version < "3.8" # Development dependencies pre-commit>=2.13.0 cython>=0.25,<3.0 -pytest>=5.2.0 +pytest>=5.2.0,!=7.1.0 pytest-timeout>=1.3.0,<2.0.0 mock>=2.0.0,<3.0.0 flake8>=3.8.0,<3.10.0 @@ -35,3 +35,4 @@ mypy==0.910 types-dataclasses>=0.1.3; python_version < "3.7" types-mock>=0.1.1 types-requests +black>=22.0,<23.0 diff --git a/setup.cfg b/setup.cfg index 586a044ff..3c5ba884a 100644 --- a/setup.cfg +++ b/setup.cfg @@ -38,15 +38,15 @@ setup_requires = cymem>=2.0.2,<2.1.0 preshed>=3.0.2,<3.1.0 murmurhash>=0.28.0,<1.1.0 - thinc>=8.0.12,<8.1.0 + thinc>=8.0.14,<8.1.0 install_requires = # Our libraries - spacy-legacy>=3.0.8,<3.1.0 + spacy-legacy>=3.0.9,<3.1.0 spacy-loggers>=1.0.0,<2.0.0 murmurhash>=0.28.0,<1.1.0 cymem>=2.0.2,<2.1.0 preshed>=3.0.2,<3.1.0 - thinc>=8.0.12,<8.1.0 + thinc>=8.0.14,<8.1.0 blis>=0.4.0,<0.8.0 wasabi>=0.8.1,<1.1.0 srsly>=2.4.1,<3.0.0 diff --git a/setup.py b/setup.py index 486324184..397a9e8e5 100755 --- a/setup.py +++ b/setup.py @@ -23,6 +23,7 @@ Options.docstrings = True PACKAGES = find_packages() MOD_NAMES = [ + "spacy.training.alignment_array", "spacy.training.example", "spacy.parts_of_speech", "spacy.strings", @@ -31,6 +32,7 @@ MOD_NAMES = [ "spacy.attrs", "spacy.kb", "spacy.morphology", + "spacy.pipeline._edit_tree_internals.edit_trees", "spacy.pipeline.morphologizer", "spacy.pipeline.multitask", "spacy.pipeline.pipe", @@ -78,7 +80,6 @@ COPY_FILES = { ROOT / "setup.cfg": PACKAGE_ROOT / "tests" / "package", ROOT / "pyproject.toml": PACKAGE_ROOT / "tests" / "package", ROOT / "requirements.txt": PACKAGE_ROOT / "tests" / "package", - ROOT / "website" / "meta" / "universe.json": PACKAGE_ROOT / "tests" / "universe", } diff --git a/spacy/about.py b/spacy/about.py index c253d5052..d01b278c9 100644 --- a/spacy/about.py +++ b/spacy/about.py @@ -1,6 +1,6 @@ # fmt: off __title__ = "spacy" -__version__ = "3.2.1" +__version__ = "3.2.2" __download_url__ = "https://github.com/explosion/spacy-models/releases/download" __compatibility__ = "https://raw.githubusercontent.com/explosion/spacy-models/master/compatibility.json" __projects__ = "https://github.com/explosion/projects" diff --git a/spacy/cli/_util.py b/spacy/cli/_util.py index fb680d888..df98e711f 100644 --- a/spacy/cli/_util.py +++ b/spacy/cli/_util.py @@ -360,7 +360,7 @@ def download_file(src: Union[str, "Pathy"], dest: Path, *, force: bool = False) src = str(src) with smart_open.open(src, mode="rb", ignore_ext=True) as input_file: with dest.open(mode="wb") as output_file: - output_file.write(input_file.read()) + shutil.copyfileobj(input_file, output_file) def ensure_pathy(path): diff --git a/spacy/cli/debug_data.py b/spacy/cli/debug_data.py index b9831fe0c..f94319d1d 100644 --- a/spacy/cli/debug_data.py +++ b/spacy/cli/debug_data.py @@ 
-19,6 +19,7 @@ from ..morphology import Morphology from ..language import Language from ..util import registry, resolve_dot_names from ..compat import Literal +from ..vectors import Mode as VectorsMode from .. import util @@ -170,29 +171,101 @@ def debug_data( show=verbose, ) if len(nlp.vocab.vectors): - msg.info( - f"{len(nlp.vocab.vectors)} vectors ({nlp.vocab.vectors.n_keys} " - f"unique keys, {nlp.vocab.vectors_length} dimensions)" - ) - n_missing_vectors = sum(gold_train_data["words_missing_vectors"].values()) - msg.warn( - "{} words in training data without vectors ({:.0f}%)".format( - n_missing_vectors, - 100 * (n_missing_vectors / gold_train_data["n_words"]), - ), - ) - msg.text( - "10 most common words without vectors: {}".format( - _format_labels( - gold_train_data["words_missing_vectors"].most_common(10), - counts=True, - ) - ), - show=verbose, - ) + if nlp.vocab.vectors.mode == VectorsMode.floret: + msg.info( + f"floret vectors with {len(nlp.vocab.vectors)} vectors, " + f"{nlp.vocab.vectors_length} dimensions, " + f"{nlp.vocab.vectors.minn}-{nlp.vocab.vectors.maxn} char " + f"n-gram subwords" + ) + else: + msg.info( + f"{len(nlp.vocab.vectors)} vectors ({nlp.vocab.vectors.n_keys} " + f"unique keys, {nlp.vocab.vectors_length} dimensions)" + ) + n_missing_vectors = sum(gold_train_data["words_missing_vectors"].values()) + msg.warn( + "{} words in training data without vectors ({:.0f}%)".format( + n_missing_vectors, + 100 * (n_missing_vectors / gold_train_data["n_words"]), + ), + ) + msg.text( + "10 most common words without vectors: {}".format( + _format_labels( + gold_train_data["words_missing_vectors"].most_common(10), + counts=True, + ) + ), + show=verbose, + ) else: msg.info("No word vectors present in the package") + if "spancat" in factory_names: + model_labels_spancat = _get_labels_from_spancat(nlp) + has_low_data_warning = False + has_no_neg_warning = False + + msg.divider("Span Categorization") + msg.table(model_labels_spancat, header=["Spans Key", "Labels"], divider=True) + + msg.text("Label counts in train data: ", show=verbose) + for spans_key, data_labels in gold_train_data["spancat"].items(): + msg.text( + f"Key: {spans_key}, {_format_labels(data_labels.items(), counts=True)}", + show=verbose, + ) + # Data checks: only take the spans keys in the actual spancat components + data_labels_in_component = { + spans_key: gold_train_data["spancat"][spans_key] + for spans_key in model_labels_spancat.keys() + } + for spans_key, data_labels in data_labels_in_component.items(): + for label, count in data_labels.items(): + # Check for missing labels + spans_key_in_model = spans_key in model_labels_spancat.keys() + if (spans_key_in_model) and ( + label not in model_labels_spancat[spans_key] + ): + msg.warn( + f"Label '{label}' is not present in the model labels of key '{spans_key}'. " + "Performance may degrade after training." 
+ ) + # Check for low number of examples per label + if count <= NEW_LABEL_THRESHOLD: + msg.warn( + f"Low number of examples for label '{label}' in key '{spans_key}' ({count})" + ) + has_low_data_warning = True + # Check for negative examples + with msg.loading("Analyzing label distribution..."): + neg_docs = _get_examples_without_label( + train_dataset, label, "spancat", spans_key + ) + if neg_docs == 0: + msg.warn(f"No examples for texts WITHOUT new label '{label}'") + has_no_neg_warning = True + + if has_low_data_warning: + msg.text( + f"To train a new span type, your data should include at " + f"least {NEW_LABEL_THRESHOLD} instances of the new label", + show=verbose, + ) + else: + msg.good("Good amount of examples for all labels") + + if has_no_neg_warning: + msg.text( + "Training data should always include examples of spans " + "in context, as well as examples without a given span " + "type.", + show=verbose, + ) + else: + msg.good("Examples without ocurrences available for all labels") + if "ner" in factory_names: # Get all unique NER labels present in the data labels = set( @@ -238,7 +311,7 @@ def debug_data( has_low_data_warning = True with msg.loading("Analyzing label distribution..."): - neg_docs = _get_examples_without_label(train_dataset, label) + neg_docs = _get_examples_without_label(train_dataset, label, "ner") if neg_docs == 0: msg.warn(f"No examples for texts WITHOUT new label '{label}'") has_no_neg_warning = True @@ -573,6 +646,7 @@ def _compile_gold( "deps": Counter(), "words": Counter(), "roots": Counter(), + "spancat": dict(), "ws_ents": 0, "boundary_cross_ents": 0, "n_words": 0, @@ -603,6 +677,7 @@ def _compile_gold( if nlp.vocab.strings[word] not in nlp.vocab.vectors: data["words_missing_vectors"].update([word]) if "ner" in factory_names: + sent_starts = eg.get_aligned_sent_starts() for i, label in enumerate(eg.get_aligned_ner()): if label is None: continue @@ -612,10 +687,19 @@ def _compile_gold( if label.startswith(("B-", "U-")): combined_label = label.split("-")[1] data["ner"][combined_label] += 1 - if gold[i].is_sent_start and label.startswith(("I-", "L-")): + if sent_starts[i] == True and label.startswith(("I-", "L-")): data["boundary_cross_ents"] += 1 elif label == "-": data["ner"]["-"] += 1 + if "spancat" in factory_names: + for span_key in list(eg.reference.spans.keys()): + if span_key not in data["spancat"]: + data["spancat"][span_key] = Counter() + for i, span in enumerate(eg.reference.spans[span_key]): + if span.label_ is None: + continue + else: + data["spancat"][span_key][span.label_] += 1 if "textcat" in factory_names or "textcat_multilabel" in factory_names: data["cats"].update(gold.cats) if any(val not in (0, 1) for val in gold.cats.values()): @@ -686,22 +770,34 @@ def _format_labels( return ", ".join([f"'{l}'" for l in cast(Iterable[str], labels)]) -def _get_examples_without_label(data: Sequence[Example], label: str) -> int: +def _get_examples_without_label( + data: Sequence[Example], + label: str, + component: Literal["ner", "spancat"] = "ner", + spans_key: Optional[str] = "sc", +) -> int: count = 0 for eg in data: - labels = [ - label.split("-")[1] - for label in eg.get_aligned_ner() - if label not in ("O", "-", None) - ] + if component == "ner": + labels = [ + label.split("-")[1] + for label in eg.get_aligned_ner() + if label not in ("O", "-", None) + ] + + if component == "spancat": + labels = ( + [span.label_ for span in eg.reference.spans[spans_key]] + if spans_key in eg.reference.spans + else [] + ) + if label not in labels: count += 1 return 
count -def _get_labels_from_model( - nlp: Language, factory_name: str -) -> Set[str]: +def _get_labels_from_model(nlp: Language, factory_name: str) -> Set[str]: pipe_names = [ pipe_name for pipe_name in nlp.pipe_names @@ -714,9 +810,7 @@ def _get_labels_from_model( return labels -def _get_labels_from_spancat( - nlp: Language -) -> Dict[str, Set[str]]: +def _get_labels_from_spancat(nlp: Language) -> Dict[str, Set[str]]: pipe_names = [ pipe_name for pipe_name in nlp.pipe_names diff --git a/spacy/cli/package.py b/spacy/cli/package.py index f9d2a9af2..b8c8397b6 100644 --- a/spacy/cli/package.py +++ b/spacy/cli/package.py @@ -7,6 +7,7 @@ from collections import defaultdict from catalogue import RegistryError import srsly import sys +import re from ._util import app, Arg, Opt, string_to_list, WHEEL_SUFFIX, SDIST_SUFFIX from ..schemas import validate, ModelMetaSchema @@ -109,6 +110,24 @@ def package( ", ".join(meta["requirements"]), ) if name is not None: + if not name.isidentifier(): + msg.fail( + f"Model name ('{name}') is not a valid module name. " + "This is required so it can be imported as a module.", + "We recommend names that use ASCII A-Z, a-z, _ (underscore), " + "and 0-9. " + "For specific details see: https://docs.python.org/3/reference/lexical_analysis.html#identifiers", + exits=1, + ) + if not _is_permitted_package_name(name): + msg.fail( + f"Model name ('{name}') is not a permitted package name. " + "This is required to correctly load the model with spacy.load.", + "We recommend names that use ASCII A-Z, a-z, _ (underscore), " + "and 0-9. " + "For specific details see: https://www.python.org/dev/peps/pep-0426/#name", + exits=1, + ) meta["name"] = name if version is not None: meta["version"] = version @@ -162,7 +181,7 @@ def package( imports="\n".join(f"from . import {m}" for m in imports) ) create_file(package_path / "__init__.py", init_py) - msg.good(f"Successfully created package '{model_name_v}'", main_path) + msg.good(f"Successfully created package directory '{model_name_v}'", main_path) if create_sdist: with util.working_dir(main_path): util.run_command([sys.executable, "setup.py", "sdist"], capture=False) @@ -171,8 +190,14 @@ def package( if create_wheel: with util.working_dir(main_path): util.run_command([sys.executable, "setup.py", "bdist_wheel"], capture=False) - wheel = main_path / "dist" / f"{model_name_v}{WHEEL_SUFFIX}" + wheel_name_squashed = re.sub("_+", "_", model_name_v) + wheel = main_path / "dist" / f"{wheel_name_squashed}{WHEEL_SUFFIX}" msg.good(f"Successfully created binary wheel", wheel) + if "__" in model_name: + msg.warn( + f"Model name ('{model_name}') contains a run of underscores. " + "Runs of underscores are not significant in installed package names.", + ) def has_wheel() -> bool: @@ -422,6 +447,14 @@ def _format_label_scheme(data: Dict[str, Any]) -> str: return md.text +def _is_permitted_package_name(package_name: str) -> bool: + # regex from: https://www.python.org/dev/peps/pep-0426/#name + permitted_match = re.search( + r"^([A-Z0-9]|[A-Z0-9][A-Z0-9._-]*[A-Z0-9])$", package_name, re.IGNORECASE + ) + return permitted_match is not None + + TEMPLATE_SETUP = """ #!/usr/bin/env python import io diff --git a/spacy/cli/templates/quickstart_training.jinja b/spacy/cli/templates/quickstart_training.jinja index cd51e1aff..0ff1779a8 100644 --- a/spacy/cli/templates/quickstart_training.jinja +++ b/spacy/cli/templates/quickstart_training.jinja @@ -3,9 +3,15 @@ the docs and the init config command. 
It encodes various best practices and can help generate the best possible configuration, given a user's requirements. #} {%- set use_transformer = hardware != "cpu" -%} {%- set transformer = transformer_data[optimize] if use_transformer else {} -%} +{%- set listener_components = ["tagger", "morphologizer", "parser", "ner", "textcat", "textcat_multilabel", "entity_linker", "spancat", "trainable_lemmatizer"] -%} [paths] train = null dev = null +{% if use_transformer or optimize == "efficiency" or not word_vectors -%} +vectors = null +{% else -%} +vectors = "{{ word_vectors }}" +{% endif -%} [system] {% if use_transformer -%} @@ -19,10 +25,10 @@ lang = "{{ lang }}" {%- set has_textcat = ("textcat" in components or "textcat_multilabel" in components) -%} {%- set with_accuracy = optimize == "accuracy" -%} {%- set has_accurate_textcat = has_textcat and with_accuracy -%} -{%- if ("tagger" in components or "morphologizer" in components or "parser" in components or "ner" in components or "entity_linker" in components or has_accurate_textcat) -%} -{%- set full_pipeline = ["transformer" if use_transformer else "tok2vec"] + components %} +{%- if ("tagger" in components or "morphologizer" in components or "parser" in components or "ner" in components or "spancat" in components or "trainable_lemmatizer" in components or "entity_linker" in components or has_accurate_textcat) -%} +{%- set full_pipeline = ["transformer" if use_transformer else "tok2vec"] + components -%} {%- else -%} -{%- set full_pipeline = components %} +{%- set full_pipeline = components -%} {%- endif %} pipeline = {{ full_pipeline|pprint()|replace("'", '"')|safe }} batch_size = {{ 128 if hardware == "gpu" else 1000 }} @@ -49,7 +55,7 @@ stride = 96 factory = "morphologizer" [components.morphologizer.model] -@architectures = "spacy.Tagger.v1" +@architectures = "spacy.Tagger.v2" nO = null [components.morphologizer.model.tok2vec] @@ -65,7 +71,7 @@ grad_factor = 1.0 factory = "tagger" [components.tagger.model] -@architectures = "spacy.Tagger.v1" +@architectures = "spacy.Tagger.v2" nO = null [components.tagger.model.tok2vec] @@ -116,6 +122,60 @@ grad_factor = 1.0 @layers = "reduce_mean.v1" {% endif -%} +{% if "spancat" in components -%} +[components.spancat] +factory = "spancat" +max_positive = null +scorer = {"@scorers":"spacy.spancat_scorer.v1"} +spans_key = "sc" +threshold = 0.5 + +[components.spancat.model] +@architectures = "spacy.SpanCategorizer.v1" + +[components.spancat.model.reducer] +@layers = "spacy.mean_max_reducer.v1" +hidden_size = 128 + +[components.spancat.model.scorer] +@layers = "spacy.LinearLogistic.v1" +nO = null +nI = null + +[components.spancat.model.tok2vec] +@architectures = "spacy-transformers.TransformerListener.v1" +grad_factor = 1.0 + +[components.spancat.model.tok2vec.pooling] +@layers = "reduce_mean.v1" + +[components.spancat.suggester] +@misc = "spacy.ngram_suggester.v1" +sizes = [1,2,3] +{% endif -%} + +{% if "trainable_lemmatizer" in components -%} +[components.trainable_lemmatizer] +factory = "trainable_lemmatizer" +backoff = "orth" +min_tree_freq = 3 +overwrite = false +scorer = {"@scorers":"spacy.lemmatizer_scorer.v1"} +top_k = 1 + +[components.trainable_lemmatizer.model] +@architectures = "spacy.Tagger.v2" +nO = null +normalize = false + +[components.trainable_lemmatizer.model.tok2vec] +@architectures = "spacy-transformers.TransformerListener.v1" +grad_factor = 1.0 + +[components.trainable_lemmatizer.model.tok2vec.pooling] +@layers = "reduce_mean.v1" +{% endif -%} + {% if "entity_linker" in components -%} 
[components.entity_linker] factory = "entity_linker" @@ -124,7 +184,7 @@ incl_context = true incl_prior = true [components.entity_linker.model] -@architectures = "spacy.EntityLinker.v1" +@architectures = "spacy.EntityLinker.v2" nO = null [components.entity_linker.model.tok2vec] @@ -231,7 +291,7 @@ maxout_pieces = 3 factory = "morphologizer" [components.morphologizer.model] -@architectures = "spacy.Tagger.v1" +@architectures = "spacy.Tagger.v2" nO = null [components.morphologizer.model.tok2vec] @@ -244,7 +304,7 @@ width = ${components.tok2vec.model.encode.width} factory = "tagger" [components.tagger.model] -@architectures = "spacy.Tagger.v1" +@architectures = "spacy.Tagger.v2" nO = null [components.tagger.model.tok2vec] @@ -286,6 +346,54 @@ nO = null width = ${components.tok2vec.model.encode.width} {% endif %} +{% if "spancat" in components %} +[components.spancat] +factory = "spancat" +max_positive = null +scorer = {"@scorers":"spacy.spancat_scorer.v1"} +spans_key = "sc" +threshold = 0.5 + +[components.spancat.model] +@architectures = "spacy.SpanCategorizer.v1" + +[components.spancat.model.reducer] +@layers = "spacy.mean_max_reducer.v1" +hidden_size = 128 + +[components.spancat.model.scorer] +@layers = "spacy.LinearLogistic.v1" +nO = null +nI = null + +[components.spancat.model.tok2vec] +@architectures = "spacy.Tok2VecListener.v1" +width = ${components.tok2vec.model.encode.width} + +[components.spancat.suggester] +@misc = "spacy.ngram_suggester.v1" +sizes = [1,2,3] +{% endif %} + +{% if "trainable_lemmatizer" in components -%} +[components.trainable_lemmatizer] +factory = "trainable_lemmatizer" +backoff = "orth" +min_tree_freq = 3 +overwrite = false +scorer = {"@scorers":"spacy.lemmatizer_scorer.v1"} +top_k = 1 + +[components.trainable_lemmatizer.model] +@architectures = "spacy.Tagger.v2" +nO = null +normalize = false + +[components.trainable_lemmatizer.model.tok2vec] +@architectures = "spacy.Tok2VecListener.v1" +width = ${components.tok2vec.model.encode.width} +{% endif -%} + {% if "entity_linker" in components -%} [components.entity_linker] factory = "entity_linker" @@ -294,7 +402,7 @@ incl_context = true incl_prior = true [components.entity_linker.model] -@architectures = "spacy.EntityLinker.v1" +@architectures = "spacy.EntityLinker.v2" nO = null [components.entity_linker.model.tok2vec] @@ -360,7 +468,7 @@ no_output_layer = false {% endif %} {% for pipe in components %} -{% if pipe not in ["tagger", "morphologizer", "parser", "ner", "textcat", "textcat_multilabel", "entity_linker"] %} +{% if pipe not in listener_components %} {# Other components defined by the user: we just assume they're factories #} [components.{{ pipe }}] factory = "{{ pipe }}" @@ -417,8 +525,4 @@ compound = 1.001 {% endif %} [initialize] -{% if use_transformer or optimize == "efficiency" or not word_vectors -%} vectors = ${paths.vectors} -{% else -%} -vectors = "{{ word_vectors }}" -{% endif -%} diff --git a/spacy/displacy/__init__.py b/spacy/displacy/__init__.py index 25d530c83..aa00c95d8 100644 --- a/spacy/displacy/__init__.py +++ b/spacy/displacy/__init__.py @@ -4,10 +4,10 @@ spaCy's built in visualization suite for dependencies and named entities. 
DOCS: https://spacy.io/api/top-level#displacy USAGE: https://spacy.io/usage/visualizers """ -from typing import Union, Iterable, Optional, Dict, Any, Callable +from typing import List, Union, Iterable, Optional, Dict, Any, Callable import warnings -from .render import DependencyRenderer, EntityRenderer +from .render import DependencyRenderer, EntityRenderer, SpanRenderer from ..tokens import Doc, Span from ..errors import Errors, Warnings from ..util import is_in_jupyter @@ -44,6 +44,7 @@ def render( factories = { "dep": (DependencyRenderer, parse_deps), "ent": (EntityRenderer, parse_ents), + "span": (SpanRenderer, parse_spans), } if style not in factories: raise ValueError(Errors.E087.format(style=style)) @@ -203,6 +204,42 @@ def parse_ents(doc: Doc, options: Dict[str, Any] = {}) -> Dict[str, Any]: return {"text": doc.text, "ents": ents, "title": title, "settings": settings} +def parse_spans(doc: Doc, options: Dict[str, Any] = {}) -> Dict[str, Any]: + """Generate spans in [{start: i, end: i, label: 'label'}] format. + + doc (Doc): Document to parse. + options (Dict[str, any]): Span-specific visualisation options. + RETURNS (dict): Generated span types keyed by text (original text) and spans. + """ + kb_url_template = options.get("kb_url_template", None) + spans_key = options.get("spans_key", "sc") + spans = [ + { + "start": span.start_char, + "end": span.end_char, + "start_token": span.start, + "end_token": span.end, + "label": span.label_, + "kb_id": span.kb_id_ if span.kb_id_ else "", + "kb_url": kb_url_template.format(span.kb_id_) if kb_url_template else "#", + } + for span in doc.spans[spans_key] + ] + tokens = [token.text for token in doc] + + if not spans: + warnings.warn(Warnings.W117.format(spans_key=spans_key)) + title = doc.user_data.get("title", None) if hasattr(doc, "user_data") else None + settings = get_doc_settings(doc) + return { + "text": doc.text, + "spans": spans, + "title": title, + "settings": settings, + "tokens": tokens, + } + + def set_render_wrapper(func: Callable[[str], str]) -> None: """Set an optional wrapper function that is called around the generated HTML markup on displacy.render. 
This can be used to allow integration into diff --git a/spacy/displacy/render.py b/spacy/displacy/render.py index a032d843b..2925c68a0 100644 --- a/spacy/displacy/render.py +++ b/spacy/displacy/render.py @@ -1,12 +1,15 @@ -from typing import Dict, Any, List, Optional, Union +from typing import Any, Dict, List, Optional, Union import uuid +import itertools -from .templates import TPL_DEP_SVG, TPL_DEP_WORDS, TPL_DEP_WORDS_LEMMA, TPL_DEP_ARCS -from .templates import TPL_ENT, TPL_ENT_RTL, TPL_FIGURE, TPL_TITLE, TPL_PAGE -from .templates import TPL_ENTS, TPL_KB_LINK -from ..util import minify_html, escape_html, registry from ..errors import Errors - +from ..util import escape_html, minify_html, registry +from .templates import TPL_DEP_ARCS, TPL_DEP_SVG, TPL_DEP_WORDS +from .templates import TPL_DEP_WORDS_LEMMA, TPL_ENT, TPL_ENT_RTL, TPL_ENTS +from .templates import TPL_FIGURE, TPL_KB_LINK, TPL_PAGE, TPL_SPAN +from .templates import TPL_SPAN_RTL, TPL_SPAN_SLICE, TPL_SPAN_SLICE_RTL +from .templates import TPL_SPAN_START, TPL_SPAN_START_RTL, TPL_SPANS +from .templates import TPL_TITLE DEFAULT_LANG = "en" DEFAULT_DIR = "ltr" @@ -33,6 +36,168 @@ DEFAULT_LABEL_COLORS = { } +class SpanRenderer: + """Render Spans as SVGs.""" + + style = "span" + + def __init__(self, options: Dict[str, Any] = {}) -> None: + """Initialise span renderer + + options (dict): Visualiser-specific options (colors, spans) + """ + # Set up the colors and overall look + colors = dict(DEFAULT_LABEL_COLORS) + user_colors = registry.displacy_colors.get_all() + for user_color in user_colors.values(): + if callable(user_color): + # Since this comes from the function registry, we want to make + # sure we support functions that *return* a dict of colors + user_color = user_color() + if not isinstance(user_color, dict): + raise ValueError(Errors.E925.format(obj=type(user_color))) + colors.update(user_color) + colors.update(options.get("colors", {})) + self.default_color = DEFAULT_ENTITY_COLOR + self.colors = {label.upper(): color for label, color in colors.items()} + + # Set up how the text and labels will be rendered + self.direction = DEFAULT_DIR + self.lang = DEFAULT_LANG + self.top_offset = options.get("top_offset", 40) + self.top_offset_step = options.get("top_offset_step", 17) + + # Set up which templates will be used + template = options.get("template") + if template: + self.span_template = template["span"] + self.span_slice_template = template["slice"] + self.span_start_template = template["start"] + else: + if self.direction == "rtl": + self.span_template = TPL_SPAN_RTL + self.span_slice_template = TPL_SPAN_SLICE_RTL + self.span_start_template = TPL_SPAN_START_RTL + else: + self.span_template = TPL_SPAN + self.span_slice_template = TPL_SPAN_SLICE + self.span_start_template = TPL_SPAN_START + + def render( + self, parsed: List[Dict[str, Any]], page: bool = False, minify: bool = False + ) -> str: + """Render complete markup. + + parsed (list): Dependency parses to render. + page (bool): Render parses wrapped as full HTML page. + minify (bool): Minify HTML markup. + RETURNS (str): Rendered HTML markup. 
+ """ + rendered = [] + for i, p in enumerate(parsed): + if i == 0: + settings = p.get("settings", {}) + self.direction = settings.get("direction", DEFAULT_DIR) + self.lang = settings.get("lang", DEFAULT_LANG) + rendered.append(self.render_spans(p["tokens"], p["spans"], p.get("title"))) + + if page: + docs = "".join([TPL_FIGURE.format(content=doc) for doc in rendered]) + markup = TPL_PAGE.format(content=docs, lang=self.lang, dir=self.direction) + else: + markup = "".join(rendered) + if minify: + return minify_html(markup) + return markup + + def render_spans( + self, + tokens: List[str], + spans: List[Dict[str, Any]], + title: Optional[str], + ) -> str: + """Render span types in text. + + Spans are rendered per-token, this means that for each token, we check if it's part + of a span slice (a member of a span type) or a span start (the starting token of a + given span type). + + tokens (list): Individual tokens in the text + spans (list): Individual entity spans and their start, end, label, kb_id and kb_url. + title (str / None): Document title set in Doc.user_data['title']. + """ + per_token_info = [] + for idx, token in enumerate(tokens): + # Identify if a token belongs to a Span (and which) and if it's a + # start token of said Span. We'll use this for the final HTML render + token_markup: Dict[str, Any] = {} + token_markup["text"] = token + entities = [] + for span in spans: + ent = {} + if span["start_token"] <= idx < span["end_token"]: + ent["label"] = span["label"] + ent["is_start"] = True if idx == span["start_token"] else False + kb_id = span.get("kb_id", "") + kb_url = span.get("kb_url", "#") + ent["kb_link"] = ( + TPL_KB_LINK.format(kb_id=kb_id, kb_url=kb_url) if kb_id else "" + ) + entities.append(ent) + token_markup["entities"] = entities + per_token_info.append(token_markup) + + markup = self._render_markup(per_token_info) + markup = TPL_SPANS.format(content=markup, dir=self.direction) + if title: + markup = TPL_TITLE.format(title=title) + markup + return markup + + def _render_markup(self, per_token_info: List[Dict[str, Any]]) -> str: + """Render the markup from per-token information""" + markup = "" + for token in per_token_info: + entities = sorted(token["entities"], key=lambda d: d["label"]) + if entities: + slices = self._get_span_slices(token["entities"]) + starts = self._get_span_starts(token["entities"]) + markup += self.span_template.format( + text=token["text"], span_slices=slices, span_starts=starts + ) + else: + markup += escape_html(token["text"] + " ") + return markup + + def _get_span_slices(self, entities: List[Dict]) -> str: + """Get the rendered markup of all Span slices""" + span_slices = [] + for entity, step in zip(entities, itertools.count(step=self.top_offset_step)): + color = self.colors.get(entity["label"].upper(), self.default_color) + span_slice = self.span_slice_template.format( + bg=color, top_offset=self.top_offset + step + ) + span_slices.append(span_slice) + return "".join(span_slices) + + def _get_span_starts(self, entities: List[Dict]) -> str: + """Get the rendered markup of all Span start tokens""" + span_starts = [] + for entity, step in zip(entities, itertools.count(step=self.top_offset_step)): + color = self.colors.get(entity["label"].upper(), self.default_color) + span_start = ( + self.span_start_template.format( + bg=color, + top_offset=self.top_offset + step, + label=entity["label"], + kb_link=entity["kb_link"], + ) + if entity["is_start"] + else "" + ) + span_starts.append(span_start) + return "".join(span_starts) + + class 
DependencyRenderer: """Render dependency parses as SVGs.""" @@ -242,7 +407,7 @@ class EntityRenderer: style = "ent" def __init__(self, options: Dict[str, Any] = {}) -> None: - """Initialise dependency renderer. + """Initialise entity renderer. options (dict): Visualiser-specific options (colors, ents) """ diff --git a/spacy/displacy/templates.py b/spacy/displacy/templates.py index e7d3d4266..ff81e7a1d 100644 --- a/spacy/displacy/templates.py +++ b/spacy/displacy/templates.py @@ -62,6 +62,55 @@ TPL_ENT_RTL = """ """ +TPL_SPANS = """ +
{content}
+""" + +TPL_SPAN = """ + + {text} + {span_slices} + {span_starts} + +""" + +TPL_SPAN_SLICE = """ + + +""" + + +TPL_SPAN_START = """ + + + {label}{kb_link} + + + +""" + +TPL_SPAN_RTL = """ + + {text} + {span_slices} + {span_starts} + +""" + +TPL_SPAN_SLICE_RTL = """ + + +""" + +TPL_SPAN_START_RTL = """ + + + {label}{kb_link} + + +""" + + # Important: this needs to start with a space! TPL_KB_LINK = """ {kb_id} diff --git a/spacy/errors.py b/spacy/errors.py index 390612123..24a9f0339 100644 --- a/spacy/errors.py +++ b/spacy/errors.py @@ -192,6 +192,10 @@ class Warnings(metaclass=ErrorsWithCodes): W115 = ("Skipping {method}: the floret vector table cannot be modified. " "Vectors are calculated from character ngrams.") W116 = ("Unable to clean attribute '{attr}'.") + W117 = ("No spans to visualize found in Doc object with spans_key: '{spans_key}'. If this is " + "surprising to you, make sure the Doc was processed using a model " + "that supports span categorization, and check the `doc.spans[spans_key]` " + "property manually if necessary.") class Errors(metaclass=ErrorsWithCodes): @@ -483,7 +487,7 @@ class Errors(metaclass=ErrorsWithCodes): "components, since spans are only views of the Doc. Use Doc and " "Token attributes (or custom extension attributes) only and remove " "the following: {attrs}") - E181 = ("Received invalid attributes for unkown object {obj}: {attrs}. " + E181 = ("Received invalid attributes for unknown object {obj}: {attrs}. " "Only Doc and Token attributes are supported.") E182 = ("Received invalid attribute declaration: {attr}\nDid you forget " "to define the attribute? For example: `{attr}.???`") @@ -520,10 +524,11 @@ class Errors(metaclass=ErrorsWithCodes): E202 = ("Unsupported {name} mode '{mode}'. Supported modes: {modes}.") # New errors added in v3.x + E857 = ("Entry '{name}' not found in edit tree lemmatizer labels.") E858 = ("The {mode} vector table does not support this operation. " "{alternative}") E859 = ("The floret vector table cannot be modified.") - E860 = ("Can't truncate fasttext-bloom vectors.") + E860 = ("Can't truncate floret vectors.") E861 = ("No 'keys' should be provided when initializing floret vectors " "with 'minn' and 'maxn'.") E862 = ("'hash_count' must be between 1-4 for floret vectors.") @@ -566,9 +571,6 @@ class Errors(metaclass=ErrorsWithCodes): E879 = ("Unexpected type for 'spans' data. Provide a dictionary mapping keys to " "a list of spans, with each span represented by a tuple (start_char, end_char). " "The tuple can be optionally extended with a label and a KB ID.") - E880 = ("The 'wandb' library could not be found - did you install it? " - "Alternatively, specify the 'ConsoleLogger' in the 'training.logger' " - "config section, instead of the 'WandbLogger'.") E884 = ("The pipeline could not be initialized because the vectors " "could not be found at '{vectors}'. If your pipeline was already " "initialized/trained before, call 'resume_training' instead of 'initialize', " @@ -894,6 +896,9 @@ class Errors(metaclass=ErrorsWithCodes): "patterns.") E1025 = ("Cannot intify the value '{value}' as an IOB string. 
The only " "supported values are: 'I', 'O', 'B' and ''") + E1026 = ("Edit tree has an invalid format:\n{errors}") + E1027 = ("AlignmentArray only supports slicing with a step of 1.") + E1028 = ("AlignmentArray only supports indexing using an int or a slice.") # Deprecated model shortcuts, only used in errors and warnings diff --git a/spacy/glossary.py b/spacy/glossary.py index e45704fc5..57254330f 100644 --- a/spacy/glossary.py +++ b/spacy/glossary.py @@ -310,7 +310,6 @@ GLOSSARY = { "re": "repeated element", "rs": "reported speech", "sb": "subject", - "sb": "subject", "sbp": "passivized subject (PP)", "sp": "subject or predicate", "svp": "separable verb prefix", diff --git a/spacy/lang/char_classes.py b/spacy/lang/char_classes.py index 9e5441a4f..b15bb3cf3 100644 --- a/spacy/lang/char_classes.py +++ b/spacy/lang/char_classes.py @@ -45,6 +45,10 @@ _hangul_syllables = r"\uAC00-\uD7AF" _hangul_jamo = r"\u1100-\u11FF" _hangul = _hangul_syllables + _hangul_jamo +_hiragana = r"\u3040-\u309F" +_katakana = r"\u30A0-\u30FFー" +_kana = _hiragana + _katakana + # letters with diacritics - Catalan, Czech, Latin, Latvian, Lithuanian, Polish, Slovak, Turkish, Welsh _latin_u_extendedA = ( r"\u0100\u0102\u0104\u0106\u0108\u010A\u010C\u010E\u0110\u0112\u0114\u0116\u0118\u011A\u011C" @@ -244,6 +248,7 @@ _uncased = ( + _tamil + _telugu + _hangul + + _kana + _cjk ) diff --git a/spacy/lang/dsb/__init__.py b/spacy/lang/dsb/__init__.py new file mode 100644 index 000000000..c66092a0c --- /dev/null +++ b/spacy/lang/dsb/__init__.py @@ -0,0 +1,16 @@ +from .lex_attrs import LEX_ATTRS +from .stop_words import STOP_WORDS +from ...language import Language, BaseDefaults + + +class LowerSorbianDefaults(BaseDefaults): + lex_attr_getters = LEX_ATTRS + stop_words = STOP_WORDS + + +class LowerSorbian(Language): + lang = "dsb" + Defaults = LowerSorbianDefaults + + +__all__ = ["LowerSorbian"] diff --git a/spacy/lang/dsb/examples.py b/spacy/lang/dsb/examples.py new file mode 100644 index 000000000..6e9143826 --- /dev/null +++ b/spacy/lang/dsb/examples.py @@ -0,0 +1,15 @@ +""" +Example sentences to test spaCy and its language models. 
+ +>>> from spacy.lang.dsb.examples import sentences +>>> docs = nlp.pipe(sentences) +""" + + +sentences = [ + "Z tym stwori so wuměnjenje a zakład za dalše wobdźěłanje přez analyzu tekstoweje struktury a semantisku anotaciju a z tym tež za tu předstajenu digitalnu online-wersiju.", + "Mi so tu jara derje spodoba.", + "Kotre nowniny chceće měć?", + "Tak ako w slědnem lěśe jo teke lětosa jano doma zapustowaś móžno.", + "Zwóstanjo pótakem hyšći wjele źěła.", +] diff --git a/spacy/lang/dsb/lex_attrs.py b/spacy/lang/dsb/lex_attrs.py new file mode 100644 index 000000000..367b3afb8 --- /dev/null +++ b/spacy/lang/dsb/lex_attrs.py @@ -0,0 +1,113 @@ +from ...attrs import LIKE_NUM + +_num_words = [ + "nul", + "jaden", + "jadna", + "jadno", + "dwa", + "dwě", + "tśi", + "tśo", + "styri", + "styrjo", + "pěś", + "pěśo", + "šesć", + "šesćo", + "sedym", + "sedymjo", + "wósym", + "wósymjo", + "źewjeś", + "źewjeśo", + "źaseś", + "źaseśo", + "jadnassćo", + "dwanassćo", + "tśinasćo", + "styrnasćo", + "pěśnasćo", + "šesnasćo", + "sedymnasćo", + "wósymnasćo", + "źewjeśnasćo", + "dwanasćo", + "dwaźasća", + "tśiźasća", + "styrźasća", + "pěśźaset", + "šesćźaset", + "sedymźaset", + "wósymźaset", + "źewjeśźaset", + "sto", + "tysac", + "milion", + "miliarda", + "bilion", + "biliarda", + "trilion", + "triliarda", +] + +_ordinal_words = [ + "prědny", + "prědna", + "prědne", + "drugi", + "druga", + "druge", + "tśeśi", + "tśeśa", + "tśeśe", + "stwórty", + "stwórta", + "stwórte", + "pêty", + "pěta", + "pête", + "šesty", + "šesta", + "šeste", + "sedymy", + "sedyma", + "sedyme", + "wósymy", + "wósyma", + "wósyme", + "źewjety", + "źewjeta", + "źewjete", + "źasety", + "źaseta", + "źasete", + "jadnasty", + "jadnasta", + "jadnaste", + "dwanasty", + "dwanasta", + "dwanaste", +] + + +def like_num(text): + if text.startswith(("+", "-", "±", "~")): + text = text[1:] + text = text.replace(",", "").replace(".", "") + if text.isdigit(): + return True + if text.count("/") == 1: + num, denom = text.split("/") + if num.isdigit() and denom.isdigit(): + return True + text_lower = text.lower() + if text_lower in _num_words: + return True + # Check ordinal number + if text_lower in _ordinal_words: + return True + return False + + +LEX_ATTRS = {LIKE_NUM: like_num} diff --git a/spacy/lang/dsb/stop_words.py b/spacy/lang/dsb/stop_words.py new file mode 100644 index 000000000..376e04aa6 --- /dev/null +++ b/spacy/lang/dsb/stop_words.py @@ -0,0 +1,15 @@ +STOP_WORDS = set( + """ +a abo aby ako ale až + +daniž dokulaž + +gaž + +jolic + +pak pótom + +teke togodla +""".split() +) diff --git a/spacy/lang/en/tokenizer_exceptions.py b/spacy/lang/en/tokenizer_exceptions.py index 55b544e42..2c20b8c27 100644 --- a/spacy/lang/en/tokenizer_exceptions.py +++ b/spacy/lang/en/tokenizer_exceptions.py @@ -447,7 +447,6 @@ for exc_data in [ {ORTH: "La.", NORM: "Louisiana"}, {ORTH: "Mar.", NORM: "March"}, {ORTH: "Mass.", NORM: "Massachusetts"}, - {ORTH: "May.", NORM: "May"}, {ORTH: "Mich.", NORM: "Michigan"}, {ORTH: "Minn.", NORM: "Minnesota"}, {ORTH: "Miss.", NORM: "Mississippi"}, diff --git a/spacy/lang/es/lex_attrs.py b/spacy/lang/es/lex_attrs.py index 988dbaba1..9d1fa93b8 100644 --- a/spacy/lang/es/lex_attrs.py +++ b/spacy/lang/es/lex_attrs.py @@ -47,6 +47,41 @@ _num_words = [ ] +_ordinal_words = [ + "primero", + "segundo", + "tercero", + "cuarto", + "quinto", + "sexto", + "séptimo", + "octavo", + "noveno", + "décimo", + "undécimo", + "duodécimo", + "decimotercero", + "decimocuarto", + "decimoquinto", + "decimosexto", + "decimoséptimo", + "decimoctavo", + 
"decimonoveno", + "vigésimo", + "trigésimo", + "cuadragésimo", + "quincuagésimo", + "sexagésimo", + "septuagésimo", + "octogésima", + "nonagésima", + "centésima", + "milésima", + "millonésima", + "billonésima", +] + + def like_num(text): if text.startswith(("+", "-", "±", "~")): text = text[1:] @@ -57,7 +92,11 @@ def like_num(text): num, denom = text.split("/") if num.isdigit() and denom.isdigit(): return True - if text.lower() in _num_words: + text_lower = text.lower() + if text_lower in _num_words: + return True + # Check ordinal number + if text_lower in _ordinal_words: return True return False diff --git a/spacy/lang/fi/__init__.py b/spacy/lang/fi/__init__.py index 86a834170..c3a0cf451 100644 --- a/spacy/lang/fi/__init__.py +++ b/spacy/lang/fi/__init__.py @@ -2,6 +2,7 @@ from .tokenizer_exceptions import TOKENIZER_EXCEPTIONS from .stop_words import STOP_WORDS from .lex_attrs import LEX_ATTRS from .punctuation import TOKENIZER_INFIXES, TOKENIZER_SUFFIXES +from .syntax_iterators import SYNTAX_ITERATORS from ...language import Language, BaseDefaults @@ -11,6 +12,7 @@ class FinnishDefaults(BaseDefaults): tokenizer_exceptions = TOKENIZER_EXCEPTIONS lex_attr_getters = LEX_ATTRS stop_words = STOP_WORDS + syntax_iterators = SYNTAX_ITERATORS class Finnish(Language): diff --git a/spacy/lang/fi/syntax_iterators.py b/spacy/lang/fi/syntax_iterators.py new file mode 100644 index 000000000..6b481e51f --- /dev/null +++ b/spacy/lang/fi/syntax_iterators.py @@ -0,0 +1,79 @@ +from typing import Iterator, Tuple, Union +from ...tokens import Doc, Span +from ...symbols import NOUN, PROPN, PRON +from ...errors import Errors + + +def noun_chunks(doclike: Union[Doc, Span]) -> Iterator[Tuple[int, int, int]]: + """Detect base noun phrases from a dependency parse. Works on both Doc and Span.""" + labels = [ + "appos", + "nsubj", + "nsubj:cop", + "obj", + "obl", + "ROOT", + ] + extend_labels = [ + "amod", + "compound", + "compound:nn", + "flat:name", + "nmod", + "nmod:gobj", + "nmod:gsubj", + "nmod:poss", + "nummod", + ] + + def potential_np_head(word): + return word.pos in (NOUN, PROPN) and ( + word.dep in np_deps or word.head.pos == PRON + ) + + doc = doclike.doc # Ensure works on both Doc and Span. + if not doc.has_annotation("DEP"): + raise ValueError(Errors.E029) + + np_deps = [doc.vocab.strings[label] for label in labels] + extend_deps = [doc.vocab.strings[label] for label in extend_labels] + np_label = doc.vocab.strings.add("NP") + conj_label = doc.vocab.strings.add("conj") + + rbracket = 0 + prev_end = -1 + for i, word in enumerate(doclike): + if i < rbracket: + continue + + # Is this a potential independent NP head or coordinated with + # a NOUN that is itself an independent NP head? + # + # e.g. "Terveyden ja hyvinvoinnin laitos" + if potential_np_head(word) or ( + word.dep == conj_label and potential_np_head(word.head) + ): + # Try to extend to the left to include adjective/num + # modifiers, compound words etc. 
+ lbracket = word.i + for ldep in word.lefts: + if ldep.dep in extend_deps: + lbracket = ldep.left_edge.i + break + + # Prevent nested chunks from being produced + if lbracket <= prev_end: + continue + + rbracket = word.i + # Try to extend the span to the right to capture + # appositions and noun modifiers + for rdep in word.rights: + if rdep.dep in extend_deps: + rbracket = rdep.i + prev_end = rbracket + + yield lbracket, rbracket + 1, np_label + + +SYNTAX_ITERATORS = {"noun_chunks": noun_chunks} diff --git a/spacy/lang/fr/syntax_iterators.py b/spacy/lang/fr/syntax_iterators.py index d86662693..5849c40b3 100644 --- a/spacy/lang/fr/syntax_iterators.py +++ b/spacy/lang/fr/syntax_iterators.py @@ -6,16 +6,35 @@ from ...tokens import Doc, Span def noun_chunks(doclike: Union[Doc, Span]) -> Iterator[Tuple[int, int, int]]: - """Detect base noun phrases from a dependency parse. Works on Doc and Span.""" - # fmt: off - labels = ["nsubj", "nsubj:pass", "obj", "iobj", "ROOT", "appos", "nmod", "nmod:poss"] - # fmt: on + """ + Detect base noun phrases from a dependency parse. Works on both Doc and Span. + """ + labels = [ + "nsubj", + "nsubj:pass", + "obj", + "obl", + "obl:agent", + "obl:arg", + "obl:mod", + "nmod", + "pcomp", + "appos", + "ROOT", + ] + post_modifiers = ["flat", "flat:name", "flat:foreign", "fixed", "compound"] doc = doclike.doc # Ensure works on both Doc and Span. if not doc.has_annotation("DEP"): raise ValueError(Errors.E029) - np_deps = [doc.vocab.strings[label] for label in labels] - conj = doc.vocab.strings.add("conj") + np_deps = {doc.vocab.strings.add(label) for label in labels} + np_modifs = {doc.vocab.strings.add(modifier) for modifier in post_modifiers} np_label = doc.vocab.strings.add("NP") + adj_label = doc.vocab.strings.add("amod") + det_label = doc.vocab.strings.add("det") + det_pos = doc.vocab.strings.add("DET") + adp_pos = doc.vocab.strings.add("ADP") + conj_label = doc.vocab.strings.add("conj") + conj_pos = doc.vocab.strings.add("CCONJ") prev_end = -1 for i, word in enumerate(doclike): if word.pos not in (NOUN, PROPN, PRON): @@ -24,16 +43,43 @@ def noun_chunks(doclike: Union[Doc, Span]) -> Iterator[Tuple[int, int, int]]: if word.left_edge.i <= prev_end: continue if word.dep in np_deps: - prev_end = word.right_edge.i - yield word.left_edge.i, word.right_edge.i + 1, np_label - elif word.dep == conj: + right_childs = list(word.rights) + right_child = right_childs[0] if right_childs else None + + if right_child: + if ( + right_child.dep == adj_label + ): # allow chain of adjectives by expanding to right + right_end = right_child.right_edge + elif ( + right_child.dep == det_label and right_child.pos == det_pos + ): # cut relative pronouns here + right_end = right_child + elif right_child.dep in np_modifs: # Check if we can expand to right + right_end = word.right_edge + else: + right_end = word + else: + right_end = word + prev_end = right_end.i + + left_index = word.left_edge.i + left_index = left_index + 1 if word.left_edge.pos == adp_pos else left_index + + yield left_index, right_end.i + 1, np_label + elif word.dep == conj_label: head = word.head - while head.dep == conj and head.head.i < head.i: + while head.dep == conj_label and head.head.i < head.i: head = head.head # If the head is an NP, and we're coordinated to it, we're an NP if head.dep in np_deps: - prev_end = word.right_edge.i - yield word.left_edge.i, word.right_edge.i + 1, np_label + prev_end = word.i + + left_index = word.left_edge.i # eliminate left attached conjunction + left_index = ( + left_index + 1 if 
word.left_edge.pos == conj_pos else left_index + ) + yield left_index, word.i + 1, np_label SYNTAX_ITERATORS = {"noun_chunks": noun_chunks} diff --git a/spacy/lang/hi/lex_attrs.py b/spacy/lang/hi/lex_attrs.py index a18c2e513..ee845e8b1 100644 --- a/spacy/lang/hi/lex_attrs.py +++ b/spacy/lang/hi/lex_attrs.py @@ -90,7 +90,7 @@ _eleven_to_beyond = [ "अड़सठ", "उनहत्तर", "सत्तर", - "इकहत्तर" + "इकहत्तर", "बहत्तर", "तिहत्तर", "चौहत्तर", diff --git a/spacy/lang/hsb/__init__.py b/spacy/lang/hsb/__init__.py new file mode 100644 index 000000000..034d82319 --- /dev/null +++ b/spacy/lang/hsb/__init__.py @@ -0,0 +1,18 @@ +from .lex_attrs import LEX_ATTRS +from .stop_words import STOP_WORDS +from .tokenizer_exceptions import TOKENIZER_EXCEPTIONS +from ...language import Language, BaseDefaults + + +class UpperSorbianDefaults(BaseDefaults): + lex_attr_getters = LEX_ATTRS + stop_words = STOP_WORDS + tokenizer_exceptions = TOKENIZER_EXCEPTIONS + + +class UpperSorbian(Language): + lang = "hsb" + Defaults = UpperSorbianDefaults + + +__all__ = ["UpperSorbian"] diff --git a/spacy/lang/hsb/examples.py b/spacy/lang/hsb/examples.py new file mode 100644 index 000000000..21f6f7584 --- /dev/null +++ b/spacy/lang/hsb/examples.py @@ -0,0 +1,15 @@ +""" +Example sentences to test spaCy and its language models. + +>>> from spacy.lang.hsb.examples import sentences +>>> docs = nlp.pipe(sentences) +""" + + +sentences = [ + "To běšo wjelgin raźone a jo se wót luźi derje pśiwzeło. Tak som dožywiła wjelgin", + "Jogo pśewóźowarce stej groniłej, až how w serbskich stronach njama Santa Claus nic pytaś.", + "A ten sobuźěłaśeŕ Statneje biblioteki w Barlinju jo pśimjeł drogotne knigły bźez rukajcowu z nagima rukoma!", + "Take wobchadanje z našym kulturnym derbstwom zewšym njejźo.", + "Wopśimjeśe drugich pśinoskow jo było na wusokem niwowje, ako pśecej.", +] diff --git a/spacy/lang/hsb/lex_attrs.py b/spacy/lang/hsb/lex_attrs.py new file mode 100644 index 000000000..5f300a73d --- /dev/null +++ b/spacy/lang/hsb/lex_attrs.py @@ -0,0 +1,106 @@ +from ...attrs import LIKE_NUM + +_num_words = [ + "nul", + "jedyn", + "jedna", + "jedne", + "dwaj", + "dwě", + "tři", + "třo", + "štyri", + "štyrjo", + "pjeć", + "šěsć", + "sydom", + "wosom", + "dźewjeć", + "dźesać", + "jědnaće", + "dwanaće", + "třinaće", + "štyrnaće", + "pjatnaće", + "šěsnaće", + "sydomnaće", + "wosomnaće", + "dźewjatnaće", + "dwaceći", + "třiceći", + "štyrceći", + "pjećdźesat", + "šěsćdźesat", + "sydomdźesat", + "wosomdźesat", + "dźewjećdźesat", + "sto", + "tysac", + "milion", + "miliarda", + "bilion", + "biliarda", + "trilion", + "triliarda", +] + +_ordinal_words = [ + "prěni", + "prěnja", + "prěnje", + "druhi", + "druha", + "druhe", + "třeći", + "třeća", + "třeće", + "štwórty", + "štwórta", + "štwórte", + "pjaty", + "pjata", + "pjate", + "šěsty", + "šěsta", + "šěste", + "sydmy", + "sydma", + "sydme", + "wosmy", + "wosma", + "wosme", + "dźewjaty", + "dźewjata", + "dźewjate", + "dźesaty", + "dźesata", + "dźesate", + "jědnaty", + "jědnata", + "jědnate", + "dwanaty", + "dwanata", + "dwanate", +] + + +def like_num(text): + if text.startswith(("+", "-", "±", "~")): + text = text[1:] + text = text.replace(",", "").replace(".", "") + if text.isdigit(): + return True + if text.count("/") == 1: + num, denom = text.split("/") + if num.isdigit() and denom.isdigit(): + return True + text_lower = text.lower() + if text_lower in _num_words: + return True + # Check ordinal number + if text_lower in _ordinal_words: + return True + return False + + +LEX_ATTRS = {LIKE_NUM: like_num} diff --git 
a/spacy/lang/hsb/stop_words.py b/spacy/lang/hsb/stop_words.py new file mode 100644 index 000000000..e6fedaf4c --- /dev/null +++ b/spacy/lang/hsb/stop_words.py @@ -0,0 +1,19 @@ +STOP_WORDS = set( + """ +a abo ale ani + +dokelž + +hdyž + +jeli jelizo + +kaž + +pak potom + +tež tohodla + +zo zoby +""".split() +) diff --git a/spacy/lang/hsb/tokenizer_exceptions.py b/spacy/lang/hsb/tokenizer_exceptions.py new file mode 100644 index 000000000..4b9a4f98a --- /dev/null +++ b/spacy/lang/hsb/tokenizer_exceptions.py @@ -0,0 +1,18 @@ +from ..tokenizer_exceptions import BASE_EXCEPTIONS +from ...symbols import ORTH, NORM +from ...util import update_exc + +_exc = dict() +for exc_data in [ + {ORTH: "mil.", NORM: "milion"}, + {ORTH: "wob.", NORM: "wobydler"}, +]: + _exc[exc_data[ORTH]] = [exc_data] + +for orth in [ + "resp.", +]: + _exc[orth] = [{ORTH: orth}] + + +TOKENIZER_EXCEPTIONS = update_exc(BASE_EXCEPTIONS, _exc) diff --git a/spacy/lang/it/__init__.py b/spacy/lang/it/__init__.py index 1edebc837..ecf322bd7 100644 --- a/spacy/lang/it/__init__.py +++ b/spacy/lang/it/__init__.py @@ -6,13 +6,15 @@ from .tokenizer_exceptions import TOKENIZER_EXCEPTIONS from .punctuation import TOKENIZER_PREFIXES, TOKENIZER_INFIXES from ...language import Language, BaseDefaults from .lemmatizer import ItalianLemmatizer +from .syntax_iterators import SYNTAX_ITERATORS class ItalianDefaults(BaseDefaults): tokenizer_exceptions = TOKENIZER_EXCEPTIONS - stop_words = STOP_WORDS prefixes = TOKENIZER_PREFIXES infixes = TOKENIZER_INFIXES + stop_words = STOP_WORDS + syntax_iterators = SYNTAX_ITERATORS class Italian(Language): diff --git a/spacy/lang/it/stop_words.py b/spacy/lang/it/stop_words.py index 4178ed452..42adc7904 100644 --- a/spacy/lang/it/stop_words.py +++ b/spacy/lang/it/stop_words.py @@ -10,18 +10,18 @@ avresti avrete avrà avrò avuta avute avuti avuto basta bene benissimo brava bravo -casa caso cento certa certe certi certo che chi chicchessia chiunque ci +casa caso cento certa certe certi certo che chi chicchessia chiunque ci c' ciascuna ciascuno cima cio cioe circa citta città co codesta codesti codesto cogli coi col colei coll coloro colui come cominci comunque con concernente conciliarsi conclusione consiglio contro cortesia cos cosa cosi così cui -da dagl dagli dai dal dall dalla dalle dallo dappertutto davanti degl degli -dei del dell della delle dello dentro detto deve di dice dietro dire +d' da dagl dagli dai dal dall dall' dalla dalle dallo dappertutto davanti degl degli +dei del dell dell' della delle dello dentro detto deve di dice dietro dire dirimpetto diventa diventare diventato dopo dov dove dovra dovrà dovunque due dunque durante -ebbe ebbero ebbi ecc ecco ed effettivamente egli ella entrambi eppure era -erano eravamo eravate eri ero esempio esse essendo esser essere essi ex +e ebbe ebbero ebbi ecc ecco ed effettivamente egli ella entrambi eppure era +erano eravamo eravate eri ero esempio esse essendo esser essere essi ex è fa faccia facciamo facciano facciate faccio facemmo facendo facesse facessero facessi facessimo faceste facesti faceva facevamo facevano facevate facevi @@ -30,21 +30,21 @@ fareste faresti farete farà farò fatto favore fece fecero feci fin finalmente finche fine fino forse forza fosse fossero fossi fossimo foste fosti fra frattempo fu fui fummo fuori furono futuro generale -gia già giacche giorni giorno gli gliela gliele glieli glielo gliene governo +gia già giacche giorni giorno gli gl' gliela gliele glieli glielo gliene governo grande grazie gruppo ha haha hai hanno ho ieri il 
improvviso in inc infatti inoltre insieme intanto intorno invece io -la là lasciato lato lavoro le lei li lo lontano loro lui lungo luogo +l' la là lasciato lato lavoro le lei li lo lontano loro lui lungo luogo -ma macche magari maggior mai male malgrado malissimo mancanza marche me +m' ma macche magari maggior mai male malgrado malissimo mancanza marche me medesimo mediante meglio meno mentre mesi mezzo mi mia mie miei mila miliardi milioni minimi ministro mio modo molti moltissimo molto momento mondo mosto -nazionale ne negl negli nei nel nell nella nelle nello nemmeno neppure nessun -nessuna nessuno niente no noi non nondimeno nonostante nonsia nostra nostre +nazionale ne negl negli nei nel nell nella nelle nello nemmeno neppure nessun nessun' +nessuna nessuno nient' niente no noi non nondimeno nonostante nonsia nostra nostre nostri nostro novanta nove nulla nuovo od oggi ogni ognuna ognuno oltre oppure ora ore osi ossia ottanta otto @@ -56,12 +56,12 @@ potrebbe preferibilmente presa press prima primo principalmente probabilmente proprio puo può pure purtroppo qualche qualcosa qualcuna qualcuno quale quali qualunque quando quanta quante -quanti quanto quantunque quasi quattro quel quella quelle quelli quello quest +quanti quanto quantunque quasi quattro quel quel' quella quelle quelli quello quest quest' questa queste questi questo qui quindi realmente recente recentemente registrazione relativo riecco salvo -sara sarà sarai saranno sarebbe sarebbero sarei saremmo saremo sareste +s' sara sarà sarai saranno sarebbe sarebbero sarei saremmo saremo sareste saresti sarete saro sarò scola scopo scorso se secondo seguente seguito sei sembra sembrare sembrato sembri sempre senza sette si sia siamo siano siate siete sig solito solo soltanto sono sopra sotto spesso srl sta stai stando @@ -72,12 +72,12 @@ steste stesti stette stettero stetti stia stiamo stiano stiate sto su sua subito successivamente successivo sue sugl sugli sui sul sull sulla sulle sullo suo suoi -tale tali talvolta tanto te tempo ti titolo tra tranne tre trenta +t' tale tali talvolta tanto te tempo ti titolo tra tranne tre trenta troppo trovato tu tua tue tuo tuoi tutta tuttavia tutte tutti tutto -uguali ulteriore ultimo un una uno uomo +uguali ulteriore ultimo un un' una uno uomo -va vale vari varia varie vario verso vi via vicino visto vita voi volta volte +v' va vale vari varia varie vario verso vi via vicino visto vita voi volta volte vostra vostre vostri vostro """.split() ) diff --git a/spacy/lang/it/syntax_iterators.py b/spacy/lang/it/syntax_iterators.py new file mode 100644 index 000000000..f63df3fad --- /dev/null +++ b/spacy/lang/it/syntax_iterators.py @@ -0,0 +1,86 @@ +from typing import Union, Iterator, Tuple + +from ...symbols import NOUN, PROPN, PRON +from ...errors import Errors +from ...tokens import Doc, Span + + +def noun_chunks(doclike: Union[Doc, Span]) -> Iterator[Tuple[int, int, int]]: + """ + Detect base noun phrases from a dependency parse. Works on both Doc and Span. + """ + labels = [ + "nsubj", + "nsubj:pass", + "obj", + "obl", + "obl:agent", + "nmod", + "pcomp", + "appos", + "ROOT", + ] + post_modifiers = ["flat", "flat:name", "fixed", "compound"] + dets = ["det", "det:poss"] + doc = doclike.doc # Ensure works on both Doc and Span. 
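+    # Hypothetical usage sketch: once SYNTAX_ITERATORS is registered, these
+    # chunks are exposed through Doc.noun_chunks on a parsed Italian doc:
+    #   >>> import spacy
+    #   >>> nlp = spacy.load("it_core_news_sm")
+    #   >>> [c.text for c in nlp("Una bella casa rossa").noun_chunks]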
+ if not doc.has_annotation("DEP"): + raise ValueError(Errors.E029) + np_deps = {doc.vocab.strings.add(label) for label in labels} + np_modifs = {doc.vocab.strings.add(modifier) for modifier in post_modifiers} + np_label = doc.vocab.strings.add("NP") + adj_label = doc.vocab.strings.add("amod") + det_labels = {doc.vocab.strings.add(det) for det in dets} + det_pos = doc.vocab.strings.add("DET") + adp_label = doc.vocab.strings.add("ADP") + conj = doc.vocab.strings.add("conj") + conj_pos = doc.vocab.strings.add("CCONJ") + prev_end = -1 + for i, word in enumerate(doclike): + if word.pos not in (NOUN, PROPN, PRON): + continue + # Prevent nested chunks from being produced + if word.left_edge.i <= prev_end: + continue + if word.dep in np_deps: + right_childs = list(word.rights) + right_child = right_childs[0] if right_childs else None + + if right_child: + if ( + right_child.dep == adj_label + ): # allow chain of adjectives by expanding to right + right_end = right_child.right_edge + elif ( + right_child.dep in det_labels and right_child.pos == det_pos + ): # cut relative pronouns here + right_end = right_child + elif right_child.dep in np_modifs: # Check if we can expand to right + right_end = word.right_edge + else: + right_end = word + else: + right_end = word + prev_end = right_end.i + + left_index = word.left_edge.i + left_index = ( + left_index + 1 if word.left_edge.pos == adp_label else left_index + ) + + yield left_index, right_end.i + 1, np_label + elif word.dep == conj: + head = word.head + while head.dep == conj and head.head.i < head.i: + head = head.head + # If the head is an NP, and we're coordinated to it, we're an NP + if head.dep in np_deps: + prev_end = word.i + + left_index = word.left_edge.i # eliminate left attached conjunction + left_index = ( + left_index + 1 if word.left_edge.pos == conj_pos else left_index + ) + yield left_index, word.i + 1, np_label + + +SYNTAX_ITERATORS = {"noun_chunks": noun_chunks} diff --git a/spacy/lang/ko/__init__.py b/spacy/lang/ko/__init__.py index 05fc67e79..0e02e4a2d 100644 --- a/spacy/lang/ko/__init__.py +++ b/spacy/lang/ko/__init__.py @@ -1,12 +1,13 @@ from typing import Iterator, Any, Dict +from .punctuation import TOKENIZER_INFIXES from .stop_words import STOP_WORDS from .tag_map import TAG_MAP from .lex_attrs import LEX_ATTRS from ...language import Language, BaseDefaults from ...tokens import Doc from ...scorer import Scorer -from ...symbols import POS +from ...symbols import POS, X from ...training import validate_examples from ...util import DummyTokenizer, registry, load_config_from_str from ...vocab import Vocab @@ -31,15 +32,24 @@ def create_tokenizer(): class KoreanTokenizer(DummyTokenizer): def __init__(self, vocab: Vocab): self.vocab = vocab - MeCab = try_mecab_import() # type: ignore[func-returns-value] - self.mecab_tokenizer = MeCab("-F%f[0],%f[7]") + self._mecab = try_mecab_import() # type: ignore[func-returns-value] + self._mecab_tokenizer = None + + @property + def mecab_tokenizer(self): + # This is a property so that initializing a pipeline with blank:ko is + # possible without actually requiring mecab-ko, e.g. to run + # `spacy init vectors ko` for a pipeline that will have a different + # tokenizer in the end. The languages need to match for the vectors + # to be imported and there's no way to pass a custom config to + # `init vectors`. 
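+        # Rough sketch of the intended behaviour: spacy.blank("ko") only
+        # stores the MeCab class, and the instance (which needs mecab-ko and
+        # mecab-ko-dic installed) is built on the first tokenizer call, e.g.
+        #   >>> nlp = spacy.blank("ko")   # no MeCab instance created yet
+        #   >>> doc = nlp("안녕하세요")    # lazily builds the tokenizer here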
+ if self._mecab_tokenizer is None: + self._mecab_tokenizer = self._mecab("-F%f[0],%f[7]") + return self._mecab_tokenizer def __reduce__(self): return KoreanTokenizer, (self.vocab,) - def __del__(self): - self.mecab_tokenizer.__del__() - def __call__(self, text: str) -> Doc: dtokens = list(self.detailed_tokens(text)) surfaces = [dt["surface"] for dt in dtokens] @@ -47,7 +57,10 @@ class KoreanTokenizer(DummyTokenizer): for token, dtoken in zip(doc, dtokens): first_tag, sep, eomi_tags = dtoken["tag"].partition("+") token.tag_ = first_tag # stem(어간) or pre-final(선어말 어미) - token.pos = TAG_MAP[token.tag_][POS] + if token.tag_ in TAG_MAP: + token.pos = TAG_MAP[token.tag_][POS] + else: + token.pos = X token.lemma_ = dtoken["lemma"] doc.user_data["full_tags"] = [dt["tag"] for dt in dtokens] return doc @@ -76,6 +89,7 @@ class KoreanDefaults(BaseDefaults): lex_attr_getters = LEX_ATTRS stop_words = STOP_WORDS writing_system = {"direction": "ltr", "has_case": False, "has_letters": False} + infixes = TOKENIZER_INFIXES class Korean(Language): @@ -90,7 +104,8 @@ def try_mecab_import() -> None: return MeCab except ImportError: raise ImportError( - "Korean support requires [mecab-ko](https://bitbucket.org/eunjeon/mecab-ko/src/master/README.md), " + 'The Korean tokenizer ("spacy.ko.KoreanTokenizer") requires ' + "[mecab-ko](https://bitbucket.org/eunjeon/mecab-ko/src/master/README.md), " "[mecab-ko-dic](https://bitbucket.org/eunjeon/mecab-ko-dic), " "and [natto-py](https://github.com/buruzaemon/natto-py)" ) from None diff --git a/spacy/lang/ko/punctuation.py b/spacy/lang/ko/punctuation.py new file mode 100644 index 000000000..7f7b40c5b --- /dev/null +++ b/spacy/lang/ko/punctuation.py @@ -0,0 +1,12 @@ +from ..char_classes import LIST_QUOTES +from ..punctuation import TOKENIZER_INFIXES as BASE_TOKENIZER_INFIXES + + +_infixes = ( + ["·", "ㆍ", "\(", "\)"] + + [r"(?<=[0-9])~(?=[0-9-])"] + + LIST_QUOTES + + BASE_TOKENIZER_INFIXES +) + +TOKENIZER_INFIXES = _infixes diff --git a/spacy/lang/nb/stop_words.py b/spacy/lang/nb/stop_words.py index fd65dd788..d9ed414ef 100644 --- a/spacy/lang/nb/stop_words.py +++ b/spacy/lang/nb/stop_words.py @@ -4,46 +4,42 @@ alle allerede alt and andre annen annet at av bak bare bedre beste blant ble bli blir blitt bris by både -da dag de del dem den denne der dermed det dette disse drept du +da dag de del dem den denne der dermed det dette disse du eller en enn er et ett etter -fem fikk fire fjor flere folk for fortsatt fotball fra fram frankrike fredag +fem fikk fire fjor flere folk for fortsatt fra fram funnet få får fått før først første gang gi gikk gjennom gjorde gjort gjør gjøre god godt grunn gå går -ha hadde ham han hans har hele helt henne hennes her hun hva hvor hvordan -hvorfor +ha hadde ham han hans har hele helt henne hennes her hun i ifølge igjen ikke ingen inn ja jeg kamp kampen kan kl klart kom komme kommer kontakt kort kroner kunne kveld -kvinner -la laget land landet langt leder ligger like litt løpet lørdag +la laget land landet langt leder ligger like litt løpet -man mandag mange mannen mars med meg mellom men mener menn mennesker mens mer -millioner minutter mot msci mye må mål måtte +man mange med meg mellom men mener mennesker mens mer mot mye må mål måtte -ned neste noe noen nok norge norsk norske ntb ny nye nå når +ned neste noe noen nok ny nye nå når -og også om onsdag opp opplyser oslo oss over +og også om opp opplyser oss over -personer plass poeng politidistrikt politiet president prosent på +personer plass poeng på -regjeringen runde rundt russland +runde 
rundt -sa saken samme sammen samtidig satt se seg seks selv senere september ser sett +sa saken samme sammen samtidig satt se seg seks selv senere ser sett siden sier sin sine siste sitt skal skriver skulle slik som sted stedet stor -store står sverige svært så søndag +store står svært så -ta tatt tid tidligere til tilbake tillegg tirsdag to tok torsdag tre tror -tyskland +ta tatt tid tidligere til tilbake tillegg tok tror -under usa ut uten utenfor +under ut uten utenfor vant var ved veldig vi videre viktig vil ville viser vår være vært diff --git a/spacy/lang/ru/lex_attrs.py b/spacy/lang/ru/lex_attrs.py index 7979c7ea6..2afe47623 100644 --- a/spacy/lang/ru/lex_attrs.py +++ b/spacy/lang/ru/lex_attrs.py @@ -1,56 +1,219 @@ from ...attrs import LIKE_NUM -_num_words = [ - "ноль", - "один", - "два", - "три", - "четыре", - "пять", - "шесть", - "семь", - "восемь", - "девять", - "десять", - "одиннадцать", - "двенадцать", - "тринадцать", - "четырнадцать", - "пятнадцать", - "шестнадцать", - "семнадцать", - "восемнадцать", - "девятнадцать", - "двадцать", - "тридцать", - "сорок", - "пятьдесят", - "шестьдесят", - "семьдесят", - "восемьдесят", - "девяносто", - "сто", - "двести", - "триста", - "четыреста", - "пятьсот", - "шестьсот", - "семьсот", - "восемьсот", - "девятьсот", - "тысяча", - "миллион", - "миллиард", - "триллион", - "квадриллион", - "квинтиллион", -] +_num_words = list( + set( + """ +ноль ноля нолю нолём ноле нулевой нулевого нулевому нулевым нулевом нулевая нулевую нулевое нулевые нулевых нулевыми + +четверть четверти четвертью четвертей четвертям четвертями четвертях + +треть трети третью третей третям третями третях + +половина половины половине половину половиной половин половинам половинами половинах половиною + +один одного одному одним одном +первой первого первому первом первый первым первых +во-первых +единица единицы единице единицу единицей единиц единицам единицами единицах единицею + +два двумя двум двух двоих двое две +второго второму второй втором вторым вторых +двойка двойки двойке двойку двойкой двоек двойкам двойками двойках двойкою +во-вторых +оба обе обеим обеими обеих обоим обоими обоих + +полтора полторы полутора + +три третьего третьему третьем третьим третий тремя трем трех трое троих трёх +тройка тройки тройке тройку тройкою троек тройкам тройками тройках тройкой +троечка троечки троечке троечку троечкой троечек троечкам троечками троечках троечкой +трешка трешки трешке трешку трешкой трешек трешкам трешками трешках трешкою +трёшка трёшки трёшке трёшку трёшкой трёшек трёшкам трёшками трёшках трёшкою +трояк трояка трояку трояком трояке трояки трояков троякам трояками трояках +треха треху трехой +трёха трёху трёхой +втроем втроём + +четыре четвертого четвертому четвертом четвертый четвертым четверка четырьмя четырем четырех четверо четырёх четверым +четверых +вчетвером + +пять пятого пятому пятом пятый пятым пятью пяти пятеро пятерых пятерыми +впятером +пятерочка пятерочки пятерочке пятерочками пятерочкой пятерочку пятерочкой пятерочками +пятёрочка пятёрочки пятёрочке пятёрочками пятёрочкой пятёрочку пятёрочкой пятёрочками +пятерка пятерки пятерке пятерками пятеркой пятерку пятерками +пятёрка пятёрки пятёрке пятёрками пятёркой пятёрку пятёрками +пятёра пятёры пятёре пятёрами пятёрой пятёру пятёрами +пятера пятеры пятере пятерами пятерой пятеру пятерами +пятак пятаки пятаке пятаками пятаком пятаку пятаками + +шесть шестерка шестого шестому шестой шестом шестым шестью шести шестеро шестерых +вшестером + +семь семерка седьмого седьмому седьмой седьмом седьмым семью семи 
семеро седьмых +всемером + +восемь восьмерка восьмого восьмому восемью восьмой восьмом восьмым восеми восьмером восьми восьмью +восьмерых +ввосьмером + +девять девятого девятому девятка девятом девятый девятым девятью девяти девятером вдевятером девятерых +вдевятером + +десять десятого десятому десятка десятом десятый десятым десятью десяти десятером десятых +вдесятером + +одиннадцать одиннадцатого одиннадцатому одиннадцатом одиннадцатый одиннадцатым одиннадцатью одиннадцати +одиннадцатых + +двенадцать двенадцатого двенадцатому двенадцатом двенадцатый двенадцатым двенадцатью двенадцати +двенадцатых + +тринадцать тринадцатого тринадцатому тринадцатом тринадцатый тринадцатым тринадцатью тринадцати +тринадцатых + +четырнадцать четырнадцатого четырнадцатому четырнадцатом четырнадцатый четырнадцатым четырнадцатью четырнадцати +четырнадцатых + +пятнадцать пятнадцатого пятнадцатому пятнадцатом пятнадцатый пятнадцатым пятнадцатью пятнадцати +пятнадцатых +пятнарик пятнарику пятнариком пятнарики + +шестнадцать шестнадцатого шестнадцатому шестнадцатом шестнадцатый шестнадцатым шестнадцатью шестнадцати +шестнадцатых + +семнадцать семнадцатого семнадцатому семнадцатом семнадцатый семнадцатым семнадцатью семнадцати семнадцатых + +восемнадцать восемнадцатого восемнадцатому восемнадцатом восемнадцатый восемнадцатым восемнадцатью восемнадцати +восемнадцатых + +девятнадцать девятнадцатого девятнадцатому девятнадцатом девятнадцатый девятнадцатым девятнадцатью девятнадцати +девятнадцатых + +двадцать двадцатого двадцатому двадцатом двадцатый двадцатым двадцатью двадцати двадцатых + +четвертак четвертака четвертаке четвертаку четвертаки четвертаком четвертаками + +тридцать тридцатого тридцатому тридцатом тридцатый тридцатым тридцатью тридцати тридцатых +тридцадка тридцадку тридцадке тридцадки тридцадкой тридцадкою тридцадками + +тридевять тридевяти тридевятью + +сорок сорокового сороковому сороковом сороковым сороковой сороковых +сорокет сорокета сорокету сорокете сорокеты сорокетом сорокетами сорокетам + +пятьдесят пятьдесятого пятьдесятому пятьюдесятью пятьдесятом пятьдесятый пятьдесятым пятидесяти пятьдесятых +полтинник полтинника полтиннике полтиннику полтинники полтинником полтинниками полтинникам полтинниках +пятидесятка пятидесятке пятидесятку пятидесятки пятидесяткой пятидесятками пятидесяткам пятидесятках +полтос полтоса полтосе полтосу полтосы полтосом полтосами полтосам полтосах + +шестьдесят шестьдесятого шестьдесятому шестьюдесятью шестьдесятом шестьдесятый шестьдесятым шестидесятые шестидесяти +шестьдесятых + +семьдесят семьдесятого семьдесятому семьюдесятью семьдесятом семьдесятый семьдесятым семидесяти семьдесятых + +восемьдесят восемьдесятого восемьдесятому восемьюдесятью восемьдесятом восемьдесятый восемьдесятым восемидесяти +восьмидесяти восьмидесятых + +девяносто девяностого девяностому девяностом девяностый девяностым девяноста девяностых + +сто сотого сотому сотом сотен сотый сотым ста +стольник стольника стольнику стольнике стольники стольником стольниками +сотка сотки сотке соткой сотками соткам сотках +сотня сотни сотне сотней сотнями сотням сотнях + +двести двумястами двухсотого двухсотому двухсотом двухсотый двухсотым двумстам двухстах двухсот + +триста тремястами трехсотого трехсотому трехсотом трехсотый трехсотым тремстам трехстах трехсот + +четыреста четырехсотого четырехсотому четырьмястами четырехсотом четырехсотый четырехсотым четыремстам четырехстах +четырехсот + +пятьсот пятисотого пятисотому пятьюстами пятисотом пятисотый пятисотым пятистам пятистах пятисот +пятисотка 
пятисотки пятисотке пятисоткой пятисотками пятисоткам пятисоткою пятисотках +пятихатка пятихатки пятихатке пятихаткой пятихатками пятихаткам пятихаткою пятихатках +пятифан пятифаны пятифане пятифаном пятифанами пятифанах + +шестьсот шестисотого шестисотому шестьюстами шестисотом шестисотый шестисотым шестистам шестистах шестисот + +семьсот семисотого семисотому семьюстами семисотом семисотый семисотым семистам семистах семисот + +восемьсот восемисотого восемисотому восемисотом восемисотый восемисотым восьмистами восьмистам восьмистах восьмисот + +девятьсот девятисотого девятисотому девятьюстами девятисотом девятисотый девятисотым девятистам девятистах девятисот + +тысяча тысячного тысячному тысячном тысячный тысячным тысячам тысячах тысячей тысяч тысячи тыс +косарь косаря косару косарем косарями косарях косарям косарей + +десятитысячный десятитысячного десятитысячному десятитысячным десятитысячном десятитысячная десятитысячной +десятитысячную десятитысячною десятитысячное десятитысячные десятитысячных десятитысячными + +двадцатитысячный двадцатитысячного двадцатитысячному двадцатитысячным двадцатитысячном двадцатитысячная +двадцатитысячной двадцатитысячную двадцатитысячною двадцатитысячное двадцатитысячные двадцатитысячных +двадцатитысячными + +тридцатитысячный тридцатитысячного тридцатитысячному тридцатитысячным тридцатитысячном тридцатитысячная +тридцатитысячной тридцатитысячную тридцатитысячною тридцатитысячное тридцатитысячные тридцатитысячных +тридцатитысячными + +сорокатысячный сорокатысячного сорокатысячному сорокатысячным сорокатысячном сорокатысячная +сорокатысячной сорокатысячную сорокатысячною сорокатысячное сорокатысячные сорокатысячных +сорокатысячными + +пятидесятитысячный пятидесятитысячного пятидесятитысячному пятидесятитысячным пятидесятитысячном пятидесятитысячная +пятидесятитысячной пятидесятитысячную пятидесятитысячною пятидесятитысячное пятидесятитысячные пятидесятитысячных +пятидесятитысячными + +шестидесятитысячный шестидесятитысячного шестидесятитысячному шестидесятитысячным шестидесятитысячном шестидесятитысячная +шестидесятитысячной шестидесятитысячную шестидесятитысячною шестидесятитысячное шестидесятитысячные шестидесятитысячных +шестидесятитысячными + +семидесятитысячный семидесятитысячного семидесятитысячному семидесятитысячным семидесятитысячном семидесятитысячная +семидесятитысячной семидесятитысячную семидесятитысячною семидесятитысячное семидесятитысячные семидесятитысячных +семидесятитысячными + +восьмидесятитысячный восьмидесятитысячного восьмидесятитысячному восьмидесятитысячным восьмидесятитысячном восьмидесятитысячная +восьмидесятитысячной восьмидесятитысячную восьмидесятитысячною восьмидесятитысячное восьмидесятитысячные восьмидесятитысячных +восьмидесятитысячными + +стотысячный стотысячного стотысячному стотысячным стотысячном стотысячная стотысячной стотысячную стотысячное +стотысячные стотысячных стотысячными стотысячною + +миллион миллионного миллионов миллионному миллионном миллионный миллионным миллионом миллиона миллионе миллиону +миллионов +лям ляма лямы лямом лямами лямах лямов +млн + +десятимиллионная десятимиллионной десятимиллионными десятимиллионный десятимиллионным десятимиллионному +десятимиллионными десятимиллионную десятимиллионное десятимиллионные десятимиллионных десятимиллионною + +миллиард миллиардного миллиардному миллиардном миллиардный миллиардным миллиардом миллиарда миллиарде миллиарду +миллиардов +лярд лярда лярды лярдом лярдами лярдах лярдов +млрд + +триллион триллионного триллионному триллионном триллионный триллионным 
триллионом триллиона триллионе триллиону +триллионов трлн + +квадриллион квадриллионного квадриллионному квадриллионный квадриллионным квадриллионом квадриллиона квадриллионе +квадриллиону квадриллионов квадрлн + +квинтиллион квинтиллионного квинтиллионному квинтиллионный квинтиллионным квинтиллионом квинтиллиона квинтиллионе +квинтиллиону квинтиллионов квинтлн + +i ii iii iv v vi vii viii ix x xi xii xiii xiv xv xvi xvii xviii xix xx xxi xxii xxiii xxiv xxv xxvi xxvii xxvii xxix +""".split() + ) +) def like_num(text): if text.startswith(("+", "-", "±", "~")): text = text[1:] + if text.endswith("%"): + text = text[:-1] text = text.replace(",", "").replace(".", "") if text.isdigit(): return True diff --git a/spacy/lang/ru/stop_words.py b/spacy/lang/ru/stop_words.py index 16cb55ef9..d6ea6b42a 100644 --- a/spacy/lang/ru/stop_words.py +++ b/spacy/lang/ru/stop_words.py @@ -1,52 +1,111 @@ STOP_WORDS = set( """ -а +а авось ага агу аж ай али алло ау ах ая -будем будет будете будешь буду будут будучи будь будьте бы был была были было -быть +б будем будет будете будешь буду будут будучи будь будьте бы был была были было +быть бац без безусловно бишь благо благодаря ближайшие близко более больше +будто бывает бывала бывали бываю бывают бытует в вам вами вас весь во вот все всё всего всей всем всём всеми всему всех всею -всея всю вся вы +всея всю вся вы ваш ваша ваше ваши вдали вдобавок вдруг ведь везде вернее +взаимно взаправду видно вишь включая вместо внакладе вначале вне вниз внизу +вновь вовсе возможно воистину вокруг вон вообще вопреки вперекор вплоть +вполне вправду вправе впрочем впрямь вресноту вроде вряд всегда всюду +всякий всякого всякой всячески вчеред -да для до +г го где гораздо гав -его едим едят ее её ей ел ела ем ему емъ если ест есть ешь еще ещё ею +д да для до дабы давайте давно давным даже далее далеко дальше данная +данного данное данной данном данному данные данный данных дану данунах +даром де действительно довольно доколе доколь долго должен должна +должно должны должный дополнительно другая другие другим другими +других другое другой -же +е его едим едят ее её ей ел ела ем ему емъ если ест есть ешь еще ещё ею едва +ежели еле -за +ж же -и из или им ими имъ их +з за затем зато зачем здесь значит зря + +и из или им ими имъ их ибо иль имеет имел имела имело именно иметь иначе +иногда иным иными итак ишь + +й к как кем ко когда кого ком кому комья которая которого которое которой котором -которому которою которую которые который которым которыми которых кто +которому которою которую которые который которым которыми которых кто ка кабы +каждая каждое каждые каждый кажется казалась казались казалось казался казаться +какая какие каким какими каков какого какой какому какою касательно кой коли +коль конечно короче кроме кстати ку куда -меня мне мной мною мог моги могите могла могли могло могу могут мое моё моего +л ли либо лишь любая любого любое любой любом любую любыми любых + +м меня мне мной мною мог моги могите могла могли могло могу могут мое моё моего моей моем моём моему моею можем может можете можешь мои мой моим моими моих -мочь мою моя мы +мочь мою моя мы мало меж между менее меньше мимо многие много многого многое +многом многому можно мол му -на нам нами нас наса наш наша наше нашего нашей нашем нашему нашею наши нашим +н на нам нами нас наса наш наша наше нашего нашей нашем нашему нашею наши нашим нашими наших нашу не него нее неё ней нем нём нему нет нею ним ними них но +наверняка наверху навряд навыворот над надо назад наиболее наизворот +наизнанку наипаче накануне 
наконец наоборот наперед наперекор наподобие +например напротив напрямую насилу настоящая настоящее настоящие настоящий +насчет нате находиться начала начале неважно негде недавно недалеко незачем +некем некогда некому некоторая некоторые некоторый некоторых некто некуда +нельзя немногие немногим немного необходимо необходимости необходимые +необходимым неоткуда непрерывно нередко несколько нету неужели нечего +нечем нечему нечто нешто нибудь нигде ниже низко никак никакой никем +никогда никого никому никто никуда ниоткуда нипочем ничего ничем ничему +ничто ну нужная нужно нужного нужные нужный нужных ныне нынешнее нынешней +нынешних нынче о об один одна одни одним одними одних одно одного одной одном одному одною -одну он она оне они оно от +одну он она оне они оно от оба общую обычно ого однажды однако ой около оный +оп опять особенно особо особую особые откуда отнелижа отнелиже отовсюду +отсюда оттого оттот оттуда отчего отчему ох очевидно очень ом -по при +п по при паче перед под подавно поди подобная подобно подобного подобные +подобный подобным подобных поелику пожалуй пожалуйста позже поистине +пока покамест поколе поколь покуда покудова помимо понеже поприще пор +пора посему поскольку после посреди посредством потом потому потомушта +похожем почему почти поэтому прежде притом причем про просто прочего +прочее прочему прочими проще прям пусть + +р ради разве ранее рано раньше рядом с сам сама сами самим самими самих само самого самом самому саму свое своё своего своей своем своём своему своею свои свой своим своими своих свою своя -себе себя собой собою +себе себя собой собою самая самое самой самый самых сверх свыше се сего сей +сейчас сие сих сквозь сколько скорее скоро следует слишком смогут сможет +сначала снова со собственно совсем сперва спокону спустя сразу среди сродни +стал стала стали стало стать суть сызнова -та так такая такие таким такими таких такого такое такой таком такому такою -такую те тебе тебя тем теми тех то тобой тобою того той только том томах тому -тот тою ту ты +та то ту ты ти так такая такие таким такими таких такого такое такой таком такому такою +такую те тебе тебя тем теми тех тобой тобою того той только том томах тому +тот тою также таки таков такова там твои твоим твоих твой твоя твоё +теперь тогда тоже тотчас точно туда тут тьфу тая -у уже +у уже увы уж ура ух ую -чего чем чём чему что чтобы +ф фу -эта эти этим этими этих это этого этой этом этому этот этою эту +х ха хе хорошо хотел хотела хотелось хотеть хоть хотя хочешь хочу хуже -я +ч чего чем чём чему что чтобы часто чаще чей через чтоб чуть чхать чьим +чьих чьё чё + +ш ша + +щ ща щас + +ы ых ые ый + +э эта эти этим этими этих это этого этой этом этому этот этою эту эдак эдакий +эй эка экий этак этакий эх + +ю + +я явно явных яко якобы якоже """.split() ) diff --git a/spacy/lang/ru/tokenizer_exceptions.py b/spacy/lang/ru/tokenizer_exceptions.py index 1dc363fae..f3756e26c 100644 --- a/spacy/lang/ru/tokenizer_exceptions.py +++ b/spacy/lang/ru/tokenizer_exceptions.py @@ -2,7 +2,6 @@ from ..tokenizer_exceptions import BASE_EXCEPTIONS from ...symbols import ORTH, NORM from ...util import update_exc - _exc = {} _abbrev_exc = [ @@ -42,7 +41,6 @@ _abbrev_exc = [ {ORTH: "дек", NORM: "декабрь"}, ] - for abbrev_desc in _abbrev_exc: abbrev = abbrev_desc[ORTH] for orth in (abbrev, abbrev.capitalize(), abbrev.upper()): @@ -50,17 +48,354 @@ for abbrev_desc in _abbrev_exc: _exc[orth + "."] = [{ORTH: orth + ".", NORM: abbrev_desc[NORM]}] -_slang_exc = [ +for abbr in [ + # Year slang abbreviations {ORTH: "2к15", 
NORM: "2015"}, {ORTH: "2к16", NORM: "2016"}, {ORTH: "2к17", NORM: "2017"}, {ORTH: "2к18", NORM: "2018"}, {ORTH: "2к19", NORM: "2019"}, {ORTH: "2к20", NORM: "2020"}, -] + {ORTH: "2к21", NORM: "2021"}, + {ORTH: "2к22", NORM: "2022"}, + {ORTH: "2к23", NORM: "2023"}, + {ORTH: "2к24", NORM: "2024"}, + {ORTH: "2к25", NORM: "2025"}, +]: + _exc[abbr[ORTH]] = [abbr] -for slang_desc in _slang_exc: - _exc[slang_desc[ORTH]] = [slang_desc] +for abbr in [ + # Profession and academic titles abbreviations + {ORTH: "ак.", NORM: "академик"}, + {ORTH: "акад.", NORM: "академик"}, + {ORTH: "д-р архитектуры", NORM: "доктор архитектуры"}, + {ORTH: "д-р биол. наук", NORM: "доктор биологических наук"}, + {ORTH: "д-р ветеринар. наук", NORM: "доктор ветеринарных наук"}, + {ORTH: "д-р воен. наук", NORM: "доктор военных наук"}, + {ORTH: "д-р геогр. наук", NORM: "доктор географических наук"}, + {ORTH: "д-р геол.-минерал. наук", NORM: "доктор геолого-минералогических наук"}, + {ORTH: "д-р искусствоведения", NORM: "доктор искусствоведения"}, + {ORTH: "д-р ист. наук", NORM: "доктор исторических наук"}, + {ORTH: "д-р культурологии", NORM: "доктор культурологии"}, + {ORTH: "д-р мед. наук", NORM: "доктор медицинских наук"}, + {ORTH: "д-р пед. наук", NORM: "доктор педагогических наук"}, + {ORTH: "д-р полит. наук", NORM: "доктор политических наук"}, + {ORTH: "д-р психол. наук", NORM: "доктор психологических наук"}, + {ORTH: "д-р с.-х. наук", NORM: "доктор сельскохозяйственных наук"}, + {ORTH: "д-р социол. наук", NORM: "доктор социологических наук"}, + {ORTH: "д-р техн. наук", NORM: "доктор технических наук"}, + {ORTH: "д-р фармацевт. наук", NORM: "доктор фармацевтических наук"}, + {ORTH: "д-р физ.-мат. наук", NORM: "доктор физико-математических наук"}, + {ORTH: "д-р филол. наук", NORM: "доктор филологических наук"}, + {ORTH: "д-р филос. наук", NORM: "доктор философских наук"}, + {ORTH: "д-р хим. наук", NORM: "доктор химических наук"}, + {ORTH: "д-р экон. наук", NORM: "доктор экономических наук"}, + {ORTH: "д-р юрид. 
наук", NORM: "доктор юридических наук"}, + {ORTH: "д-р", NORM: "доктор"}, + {ORTH: "д.б.н.", NORM: "доктор биологических наук"}, + {ORTH: "д.г.-м.н.", NORM: "доктор геолого-минералогических наук"}, + {ORTH: "д.г.н.", NORM: "доктор географических наук"}, + {ORTH: "д.и.н.", NORM: "доктор исторических наук"}, + {ORTH: "д.иск.", NORM: "доктор искусствоведения"}, + {ORTH: "д.м.н.", NORM: "доктор медицинских наук"}, + {ORTH: "д.п.н.", NORM: "доктор психологических наук"}, + {ORTH: "д.пед.н.", NORM: "доктор педагогических наук"}, + {ORTH: "д.полит.н.", NORM: "доктор политических наук"}, + {ORTH: "д.с.-х.н.", NORM: "доктор сельскохозяйственных наук"}, + {ORTH: "д.социол.н.", NORM: "доктор социологических наук"}, + {ORTH: "д.т.н.", NORM: "доктор технических наук"}, + {ORTH: "д.т.н", NORM: "доктор технических наук"}, + {ORTH: "д.ф.-м.н.", NORM: "доктор физико-математических наук"}, + {ORTH: "д.ф.н.", NORM: "доктор филологических наук"}, + {ORTH: "д.филос.н.", NORM: "доктор философских наук"}, + {ORTH: "д.фил.н.", NORM: "доктор филологических наук"}, + {ORTH: "д.х.н.", NORM: "доктор химических наук"}, + {ORTH: "д.э.н.", NORM: "доктор экономических наук"}, + {ORTH: "д.э.н", NORM: "доктор экономических наук"}, + {ORTH: "д.ю.н.", NORM: "доктор юридических наук"}, + {ORTH: "доц.", NORM: "доцент"}, + {ORTH: "и.о.", NORM: "исполняющий обязанности"}, + {ORTH: "к.б.н.", NORM: "кандидат биологических наук"}, + {ORTH: "к.воен.н.", NORM: "кандидат военных наук"}, + {ORTH: "к.г.-м.н.", NORM: "кандидат геолого-минералогических наук"}, + {ORTH: "к.г.н.", NORM: "кандидат географических наук"}, + {ORTH: "к.геогр.н", NORM: "кандидат географических наук"}, + {ORTH: "к.геогр.наук", NORM: "кандидат географических наук"}, + {ORTH: "к.и.н.", NORM: "кандидат исторических наук"}, + {ORTH: "к.иск.", NORM: "кандидат искусствоведения"}, + {ORTH: "к.м.н.", NORM: "кандидат медицинских наук"}, + {ORTH: "к.п.н.", NORM: "кандидат психологических наук"}, + {ORTH: "к.псх.н.", NORM: "кандидат психологических наук"}, + {ORTH: "к.пед.н.", NORM: "кандидат педагогических наук"}, + {ORTH: "канд.пед.наук", NORM: "кандидат педагогических наук"}, + {ORTH: "к.полит.н.", NORM: "кандидат политических наук"}, + {ORTH: "к.с.-х.н.", NORM: "кандидат сельскохозяйственных наук"}, + {ORTH: "к.социол.н.", NORM: "кандидат социологических наук"}, + {ORTH: "к.с.н.", NORM: "кандидат социологических наук"}, + {ORTH: "к.т.н.", NORM: "кандидат технических наук"}, + {ORTH: "к.ф.-м.н.", NORM: "кандидат физико-математических наук"}, + {ORTH: "к.ф.н.", NORM: "кандидат филологических наук"}, + {ORTH: "к.фил.н.", NORM: "кандидат филологических наук"}, + {ORTH: "к.филол.н", NORM: "кандидат филологических наук"}, + {ORTH: "к.фарм.наук", NORM: "кандидат фармакологических наук"}, + {ORTH: "к.фарм.н.", NORM: "кандидат фармакологических наук"}, + {ORTH: "к.фарм.н", NORM: "кандидат фармакологических наук"}, + {ORTH: "к.филос.наук", NORM: "кандидат философских наук"}, + {ORTH: "к.филос.н.", NORM: "кандидат философских наук"}, + {ORTH: "к.филос.н", NORM: "кандидат философских наук"}, + {ORTH: "к.х.н.", NORM: "кандидат химических наук"}, + {ORTH: "к.х.н", NORM: "кандидат химических наук"}, + {ORTH: "к.э.н.", NORM: "кандидат экономических наук"}, + {ORTH: "к.э.н", NORM: "кандидат экономических наук"}, + {ORTH: "к.ю.н.", NORM: "кандидат юридических наук"}, + {ORTH: "к.ю.н", NORM: "кандидат юридических наук"}, + {ORTH: "канд. архитектуры", NORM: "кандидат архитектуры"}, + {ORTH: "канд. биол. наук", NORM: "кандидат биологических наук"}, + {ORTH: "канд. ветеринар. 
наук", NORM: "кандидат ветеринарных наук"}, + {ORTH: "канд. воен. наук", NORM: "кандидат военных наук"}, + {ORTH: "канд. геогр. наук", NORM: "кандидат географических наук"}, + {ORTH: "канд. геол.-минерал. наук", NORM: "кандидат геолого-минералогических наук"}, + {ORTH: "канд. искусствоведения", NORM: "кандидат искусствоведения"}, + {ORTH: "канд. ист. наук", NORM: "кандидат исторических наук"}, + {ORTH: "к.ист.н.", NORM: "кандидат исторических наук"}, + {ORTH: "канд. культурологии", NORM: "кандидат культурологии"}, + {ORTH: "канд. мед. наук", NORM: "кандидат медицинских наук"}, + {ORTH: "канд. пед. наук", NORM: "кандидат педагогических наук"}, + {ORTH: "канд. полит. наук", NORM: "кандидат политических наук"}, + {ORTH: "канд. психол. наук", NORM: "кандидат психологических наук"}, + {ORTH: "канд. с.-х. наук", NORM: "кандидат сельскохозяйственных наук"}, + {ORTH: "канд. социол. наук", NORM: "кандидат социологических наук"}, + {ORTH: "к.соц.наук", NORM: "кандидат социологических наук"}, + {ORTH: "к.соц.н.", NORM: "кандидат социологических наук"}, + {ORTH: "к.соц.н", NORM: "кандидат социологических наук"}, + {ORTH: "канд. техн. наук", NORM: "кандидат технических наук"}, + {ORTH: "канд. фармацевт. наук", NORM: "кандидат фармацевтических наук"}, + {ORTH: "канд. физ.-мат. наук", NORM: "кандидат физико-математических наук"}, + {ORTH: "канд. филол. наук", NORM: "кандидат филологических наук"}, + {ORTH: "канд. филос. наук", NORM: "кандидат философских наук"}, + {ORTH: "канд. хим. наук", NORM: "кандидат химических наук"}, + {ORTH: "канд. экон. наук", NORM: "кандидат экономических наук"}, + {ORTH: "канд. юрид. наук", NORM: "кандидат юридических наук"}, + {ORTH: "в.н.с.", NORM: "ведущий научный сотрудник"}, + {ORTH: "мл. науч. сотр.", NORM: "младший научный сотрудник"}, + {ORTH: "м.н.с.", NORM: "младший научный сотрудник"}, + {ORTH: "проф.", NORM: "профессор"}, + {ORTH: "профессор.кафедры", NORM: "профессор кафедры"}, + {ORTH: "ст. науч. сотр.", NORM: "старший научный сотрудник"}, + {ORTH: "чл.-к.", NORM: "член корреспондент"}, + {ORTH: "чл.-корр.", NORM: "член-корреспондент"}, + {ORTH: "чл.-кор.", NORM: "член-корреспондент"}, + {ORTH: "дир.", NORM: "директор"}, + {ORTH: "зам. дир.", NORM: "заместитель директора"}, + {ORTH: "зав. каф.", NORM: "заведующий кафедрой"}, + {ORTH: "зав.кафедрой", NORM: "заведующий кафедрой"}, + {ORTH: "зав. кафедрой", NORM: "заведующий кафедрой"}, + {ORTH: "асп.", NORM: "аспирант"}, + {ORTH: "гл. науч. сотр.", NORM: "главный научный сотрудник"}, + {ORTH: "вед. науч. сотр.", NORM: "ведущий научный сотрудник"}, + {ORTH: "науч. 
сотр.", NORM: "научный сотрудник"}, + {ORTH: "к.м.с.", NORM: "кандидат в мастера спорта"}, +]: + _exc[abbr[ORTH]] = [abbr] + + +for abbr in [ + # Literary phrases abbreviations + {ORTH: "и т.д.", NORM: "и так далее"}, + {ORTH: "и т.п.", NORM: "и тому подобное"}, + {ORTH: "т.д.", NORM: "так далее"}, + {ORTH: "т.п.", NORM: "тому подобное"}, + {ORTH: "т.е.", NORM: "то есть"}, + {ORTH: "т.к.", NORM: "так как"}, + {ORTH: "в т.ч.", NORM: "в том числе"}, + {ORTH: "и пр.", NORM: "и прочие"}, + {ORTH: "и др.", NORM: "и другие"}, + {ORTH: "т.н.", NORM: "так называемый"}, +]: + _exc[abbr[ORTH]] = [abbr] + + +for abbr in [ + # Appeal to a person abbreviations + {ORTH: "г-н", NORM: "господин"}, + {ORTH: "г-да", NORM: "господа"}, + {ORTH: "г-жа", NORM: "госпожа"}, + {ORTH: "тов.", NORM: "товарищ"}, +]: + _exc[abbr[ORTH]] = [abbr] + + +for abbr in [ + # Time periods abbreviations + {ORTH: "до н.э.", NORM: "до нашей эры"}, + {ORTH: "по н.в.", NORM: "по настоящее время"}, + {ORTH: "в н.в.", NORM: "в настоящее время"}, + {ORTH: "наст.", NORM: "настоящий"}, + {ORTH: "наст. время", NORM: "настоящее время"}, + {ORTH: "г.г.", NORM: "годы"}, + {ORTH: "гг.", NORM: "годы"}, + {ORTH: "т.г.", NORM: "текущий год"}, +]: + _exc[abbr[ORTH]] = [abbr] + + +for abbr in [ + # Address forming elements abbreviations + {ORTH: "респ.", NORM: "республика"}, + {ORTH: "обл.", NORM: "область"}, + {ORTH: "г.ф.з.", NORM: "город федерального значения"}, + {ORTH: "а.обл.", NORM: "автономная область"}, + {ORTH: "а.окр.", NORM: "автономный округ"}, + {ORTH: "м.р-н", NORM: "муниципальный район"}, + {ORTH: "г.о.", NORM: "городской округ"}, + {ORTH: "г.п.", NORM: "городское поселение"}, + {ORTH: "с.п.", NORM: "сельское поселение"}, + {ORTH: "вн.р-н", NORM: "внутригородской район"}, + {ORTH: "вн.тер.г.", NORM: "внутригородская территория города"}, + {ORTH: "пос.", NORM: "поселение"}, + {ORTH: "р-н", NORM: "район"}, + {ORTH: "с/с", NORM: "сельсовет"}, + {ORTH: "г.", NORM: "город"}, + {ORTH: "п.г.т.", NORM: "поселок городского типа"}, + {ORTH: "пгт.", NORM: "поселок городского типа"}, + {ORTH: "р.п.", NORM: "рабочий поселок"}, + {ORTH: "рп.", NORM: "рабочий поселок"}, + {ORTH: "кп.", NORM: "курортный поселок"}, + {ORTH: "гп.", NORM: "городской поселок"}, + {ORTH: "п.", NORM: "поселок"}, + {ORTH: "в-ки", NORM: "выселки"}, + {ORTH: "г-к", NORM: "городок"}, + {ORTH: "з-ка", NORM: "заимка"}, + {ORTH: "п-к", NORM: "починок"}, + {ORTH: "киш.", NORM: "кишлак"}, + {ORTH: "п. ст. ", NORM: "поселок станция"}, + {ORTH: "п. ж/д ст. ", NORM: "поселок при железнодорожной станции"}, + {ORTH: "ж/д бл-ст", NORM: "железнодорожный блокпост"}, + {ORTH: "ж/д б-ка", NORM: "железнодорожная будка"}, + {ORTH: "ж/д в-ка", NORM: "железнодорожная ветка"}, + {ORTH: "ж/д к-ма", NORM: "железнодорожная казарма"}, + {ORTH: "ж/д к-т", NORM: "железнодорожный комбинат"}, + {ORTH: "ж/д пл-ма", NORM: "железнодорожная платформа"}, + {ORTH: "ж/д пл-ка", NORM: "железнодорожная площадка"}, + {ORTH: "ж/д п.п.", NORM: "железнодорожный путевой пост"}, + {ORTH: "ж/д о.п.", NORM: "железнодорожный остановочный пункт"}, + {ORTH: "ж/д рзд.", NORM: "железнодорожный разъезд"}, + {ORTH: "ж/д ст. ", NORM: "железнодорожная станция"}, + {ORTH: "м-ко", NORM: "местечко"}, + {ORTH: "д.", NORM: "деревня"}, + {ORTH: "с.", NORM: "село"}, + {ORTH: "сл.", NORM: "слобода"}, + {ORTH: "ст. 
", NORM: "станция"}, + {ORTH: "ст-ца", NORM: "станица"}, + {ORTH: "у.", NORM: "улус"}, + {ORTH: "х.", NORM: "хутор"}, + {ORTH: "рзд.", NORM: "разъезд"}, + {ORTH: "зим.", NORM: "зимовье"}, + {ORTH: "б-г", NORM: "берег"}, + {ORTH: "ж/р", NORM: "жилой район"}, + {ORTH: "кв-л", NORM: "квартал"}, + {ORTH: "мкр.", NORM: "микрорайон"}, + {ORTH: "ост-в", NORM: "остров"}, + {ORTH: "платф.", NORM: "платформа"}, + {ORTH: "п/р", NORM: "промышленный район"}, + {ORTH: "р-н", NORM: "район"}, + {ORTH: "тер.", NORM: "территория"}, + { + ORTH: "тер. СНО", + NORM: "территория садоводческих некоммерческих объединений граждан", + }, + { + ORTH: "тер. ОНО", + NORM: "территория огороднических некоммерческих объединений граждан", + }, + {ORTH: "тер. ДНО", NORM: "территория дачных некоммерческих объединений граждан"}, + {ORTH: "тер. СНТ", NORM: "территория садоводческих некоммерческих товариществ"}, + {ORTH: "тер. ОНТ", NORM: "территория огороднических некоммерческих товариществ"}, + {ORTH: "тер. ДНТ", NORM: "территория дачных некоммерческих товариществ"}, + {ORTH: "тер. СПК", NORM: "территория садоводческих потребительских кооперативов"}, + {ORTH: "тер. ОПК", NORM: "территория огороднических потребительских кооперативов"}, + {ORTH: "тер. ДПК", NORM: "территория дачных потребительских кооперативов"}, + {ORTH: "тер. СНП", NORM: "территория садоводческих некоммерческих партнерств"}, + {ORTH: "тер. ОНП", NORM: "территория огороднических некоммерческих партнерств"}, + {ORTH: "тер. ДНП", NORM: "территория дачных некоммерческих партнерств"}, + {ORTH: "тер. ТСН", NORM: "территория товарищества собственников недвижимости"}, + {ORTH: "тер. ГСК", NORM: "территория гаражно-строительного кооператива"}, + {ORTH: "ус.", NORM: "усадьба"}, + {ORTH: "тер.ф.х.", NORM: "территория фермерского хозяйства"}, + {ORTH: "ю.", NORM: "юрты"}, + {ORTH: "ал.", NORM: "аллея"}, + {ORTH: "б-р", NORM: "бульвар"}, + {ORTH: "взв.", NORM: "взвоз"}, + {ORTH: "взд.", NORM: "въезд"}, + {ORTH: "дор.", NORM: "дорога"}, + {ORTH: "ззд.", NORM: "заезд"}, + {ORTH: "км", NORM: "километр"}, + {ORTH: "к-цо", NORM: "кольцо"}, + {ORTH: "лн.", NORM: "линия"}, + {ORTH: "мгстр.", NORM: "магистраль"}, + {ORTH: "наб.", NORM: "набережная"}, + {ORTH: "пер-д", NORM: "переезд"}, + {ORTH: "пер.", NORM: "переулок"}, + {ORTH: "пл-ка", NORM: "площадка"}, + {ORTH: "пл.", NORM: "площадь"}, + {ORTH: "пр-д", NORM: "проезд"}, + {ORTH: "пр-к", NORM: "просек"}, + {ORTH: "пр-ка", NORM: "просека"}, + {ORTH: "пр-лок", NORM: "проселок"}, + {ORTH: "пр-кт", NORM: "проспект"}, + {ORTH: "проул.", NORM: "проулок"}, + {ORTH: "рзд.", NORM: "разъезд"}, + {ORTH: "ряд", NORM: "ряд(ы)"}, + {ORTH: "с-р", NORM: "сквер"}, + {ORTH: "с-к", NORM: "спуск"}, + {ORTH: "сзд.", NORM: "съезд"}, + {ORTH: "туп.", NORM: "тупик"}, + {ORTH: "ул.", NORM: "улица"}, + {ORTH: "ш.", NORM: "шоссе"}, + {ORTH: "влд.", NORM: "владение"}, + {ORTH: "г-ж", NORM: "гараж"}, + {ORTH: "д.", NORM: "дом"}, + {ORTH: "двлд.", NORM: "домовладение"}, + {ORTH: "зд.", NORM: "здание"}, + {ORTH: "з/у", NORM: "земельный участок"}, + {ORTH: "кв.", NORM: "квартира"}, + {ORTH: "ком.", NORM: "комната"}, + {ORTH: "подв.", NORM: "подвал"}, + {ORTH: "кот.", NORM: "котельная"}, + {ORTH: "п-б", NORM: "погреб"}, + {ORTH: "к.", NORM: "корпус"}, + {ORTH: "ОНС", NORM: "объект незавершенного строительства"}, + {ORTH: "оф.", NORM: "офис"}, + {ORTH: "пав.", NORM: "павильон"}, + {ORTH: "помещ.", NORM: "помещение"}, + {ORTH: "раб.уч.", NORM: "рабочий участок"}, + {ORTH: "скл.", NORM: "склад"}, + {ORTH: "coop.", NORM: "сооружение"}, + {ORTH: "стр.", NORM: 
"строение"}, + {ORTH: "торг.зал", NORM: "торговый зал"}, + {ORTH: "а/п", NORM: "аэропорт"}, + {ORTH: "им.", NORM: "имени"}, +]: + _exc[abbr[ORTH]] = [abbr] + + +for abbr in [ + # Others abbreviations + {ORTH: "тыс.руб.", NORM: "тысяч рублей"}, + {ORTH: "тыс.", NORM: "тысяч"}, + {ORTH: "руб.", NORM: "рубль"}, + {ORTH: "долл.", NORM: "доллар"}, + {ORTH: "прим.", NORM: "примечание"}, + {ORTH: "прим.ред.", NORM: "примечание редакции"}, + {ORTH: "см. также", NORM: "смотри также"}, + {ORTH: "кв.м.", NORM: "квадрантный метр"}, + {ORTH: "м2", NORM: "квадрантный метр"}, + {ORTH: "б/у", NORM: "бывший в употреблении"}, + {ORTH: "сокр.", NORM: "сокращение"}, + {ORTH: "чел.", NORM: "человек"}, + {ORTH: "б.п.", NORM: "базисный пункт"}, +]: + _exc[abbr[ORTH]] = [abbr] TOKENIZER_EXCEPTIONS = update_exc(BASE_EXCEPTIONS, _exc) diff --git a/spacy/lang/sl/examples.py b/spacy/lang/sl/examples.py new file mode 100644 index 000000000..bf483c6a4 --- /dev/null +++ b/spacy/lang/sl/examples.py @@ -0,0 +1,18 @@ +""" +Example sentences to test spaCy and its language models. + +>>> from spacy.lang.sl.examples import sentences +>>> docs = nlp.pipe(sentences) +""" + + +sentences = [ + "Apple načrtuje nakup britanskega startupa za 1 bilijon dolarjev", + "France Prešeren je umrl 8. februarja 1849 v Kranju", + "Staro ljubljansko letališče Moste bo obnovila družba BTC", + "London je največje mesto v Združenem kraljestvu.", + "Kje se skrivaš?", + "Kdo je predsednik Francije?", + "Katero je glavno mesto Združenih držav Amerike?", + "Kdaj je bil rojen Milan Kučan?", +] diff --git a/spacy/lang/sl/stop_words.py b/spacy/lang/sl/stop_words.py index 6fb01a183..c9004ed5d 100644 --- a/spacy/lang/sl/stop_words.py +++ b/spacy/lang/sl/stop_words.py @@ -1,13 +1,10 @@ # Source: https://github.com/stopwords-iso/stopwords-sl -# TODO: probably needs to be tidied up – the list seems to have month names in -# it, which shouldn't be considered stop words. +# Removed various words that are not normally considered stop words, such as months. STOP_WORDS = set( """ a ali -april -avgust b bi bil @@ -19,7 +16,6 @@ biti blizu bo bodo -bojo bolj bom bomo @@ -37,16 +33,6 @@ da daleč dan danes -datum -december -deset -deseta -deseti -deseto -devet -deveta -deveti -deveto do dober dobra @@ -54,16 +40,7 @@ dobri dobro dokler dol -dolg -dolga -dolgi dovolj -drug -druga -drugi -drugo -dva -dve e eden en @@ -74,7 +51,6 @@ enkrat eno etc. f -februar g g. ga @@ -93,16 +69,12 @@ iv ix iz j -januar jaz je ji jih jim jo -julij -junij -jutri k kadarkoli kaj @@ -123,41 +95,23 @@ kje kjer kjerkoli ko -koder koderkoli koga komu kot -kratek -kratka -kratke -kratki l -lahka -lahke -lahki -lahko le lep lepa lepe lepi lepo -leto m -maj -majhen -majhna -majhni -malce -malo manj -marec me med medtem mene -mesec mi midva midve @@ -183,7 +137,6 @@ najmanj naju največ nam -narobe nas nato nazaj @@ -192,7 +145,6 @@ naša naše ne nedavno -nedelja nek neka nekaj @@ -236,7 +188,6 @@ njuna njuno no nocoj -november npr. o ob @@ -244,51 +195,23 @@ oba obe oboje od -odprt -odprta -odprti okoli -oktober on onadva one oni onidve -osem -osma -osmi -osmo oz. p pa -pet -peta -petek -peti -peto po pod pogosto poleg -poln -polna -polni -polno ponavadi -ponedeljek ponovno potem povsod -pozdravljen -pozdravljeni -prav -prava -prave -pravi -pravo -prazen -prazna -prazno prbl. precej pred @@ -297,19 +220,10 @@ preko pri pribl. 
približno -primer -pripravljen -pripravljena -pripravljeni proti -prva -prvi -prvo r -ravno redko res -reč s saj sam @@ -321,29 +235,17 @@ se sebe sebi sedaj -sedem -sedma -sedmi -sedmo sem -september seveda si sicer skoraj skozi -slab smo so -sobota spet -sreda -srednja -srednji sta ste -stran -stvar sva t ta @@ -358,10 +260,6 @@ te tebe tebi tega -težak -težka -težki -težko ti tista tiste @@ -371,11 +269,6 @@ tj. tja to toda -torek -tretja -tretje -tretji -tri tu tudi tukaj @@ -392,10 +285,6 @@ vaša vaše ve vedno -velik -velika -veliki -veliko vendar ves več @@ -403,10 +292,6 @@ vi vidva vii viii -visok -visoka -visoke -visoki vsa vsaj vsak @@ -420,34 +305,21 @@ vsega vsi vso včasih -včeraj x z za zadaj zadnji zakaj -zaprta -zaprti -zaprto zdaj zelo zunaj č če često -četrta -četrtek -četrti -četrto čez čigav š -šest -šesta -šesti -šesto -štiri ž že """.split() diff --git a/spacy/lang/tr/lex_attrs.py b/spacy/lang/tr/lex_attrs.py index f7416837d..6d9f4f388 100644 --- a/spacy/lang/tr/lex_attrs.py +++ b/spacy/lang/tr/lex_attrs.py @@ -53,7 +53,7 @@ _ordinal_words = [ "doksanıncı", "yüzüncü", "bininci", - "mliyonuncu", + "milyonuncu", "milyarıncı", "trilyonuncu", "katrilyonuncu", diff --git a/spacy/lang/uk/tokenizer_exceptions.py b/spacy/lang/uk/tokenizer_exceptions.py index 94016fd52..7e168a27c 100644 --- a/spacy/lang/uk/tokenizer_exceptions.py +++ b/spacy/lang/uk/tokenizer_exceptions.py @@ -6,19 +6,30 @@ from ...util import update_exc _exc = {} for exc_data in [ + {ORTH: "обл.", NORM: "область"}, + {ORTH: "р-н.", NORM: "район"}, + {ORTH: "р-н", NORM: "район"}, + {ORTH: "м.", NORM: "місто"}, {ORTH: "вул.", NORM: "вулиця"}, - {ORTH: "ім.", NORM: "імені"}, {ORTH: "просп.", NORM: "проспект"}, + {ORTH: "пр-кт", NORM: "проспект"}, {ORTH: "бул.", NORM: "бульвар"}, {ORTH: "пров.", NORM: "провулок"}, {ORTH: "пл.", NORM: "площа"}, + {ORTH: "майд.", NORM: "майдан"}, + {ORTH: "мкр.", NORM: "мікрорайон"}, + {ORTH: "ст.", NORM: "станція"}, + {ORTH: "ж/м", NORM: "житловий масив"}, + {ORTH: "наб.", NORM: "набережна"}, + {ORTH: "в/ч", NORM: "військова частина"}, + {ORTH: "в/м", NORM: "військове містечко"}, + {ORTH: "оз.", NORM: "озеро"}, + {ORTH: "ім.", NORM: "імені"}, {ORTH: "г.", NORM: "гора"}, {ORTH: "п.", NORM: "пан"}, - {ORTH: "м.", NORM: "місто"}, {ORTH: "проф.", NORM: "професор"}, {ORTH: "акад.", NORM: "академік"}, {ORTH: "доц.", NORM: "доцент"}, - {ORTH: "оз.", NORM: "озеро"}, ]: _exc[exc_data[ORTH]] = [exc_data] diff --git a/spacy/lang/xx/examples.py b/spacy/lang/xx/examples.py index 8d63c3c20..34570d747 100644 --- a/spacy/lang/xx/examples.py +++ b/spacy/lang/xx/examples.py @@ -59,7 +59,7 @@ sentences = [ "Czy w ciągu ostatnich 48 godzin spożyłeś leki zawierające paracetamol?", "Kto ma ochotę zapoznać się z innymi niż w książkach przygodami Muminków i ich przyjaciół, temu polecam komiks Tove Jansson „Muminki i morze”.", "Apple está querendo comprar uma startup do Reino Unido por 100 milhões de dólares.", - "Carros autônomos empurram a responsabilidade do seguro para os fabricantes.." 
+ "Carros autônomos empurram a responsabilidade do seguro para os fabricantes..", "São Francisco considera banir os robôs de entrega que andam pelas calçadas.", "Londres é a maior cidade do Reino Unido.", # Translations from English: diff --git a/spacy/language.py b/spacy/language.py index 798254b80..bab403f0e 100644 --- a/spacy/language.py +++ b/spacy/language.py @@ -131,7 +131,7 @@ class Language: self, vocab: Union[Vocab, bool] = True, *, - max_length: int = 10 ** 6, + max_length: int = 10**6, meta: Dict[str, Any] = {}, create_tokenizer: Optional[Callable[["Language"], Callable[[str], Doc]]] = None, batch_size: int = 1000, @@ -354,12 +354,15 @@ class Language: @property def pipe_labels(self) -> Dict[str, List[str]]: """Get the labels set by the pipeline components, if available (if - the component exposes a labels property). + the component exposes a labels property and the labels are not + hidden). RETURNS (Dict[str, List[str]]): Labels keyed by component name. """ labels = {} for name, pipe in self._components: + if hasattr(pipe, "hide_labels") and pipe.hide_labels is True: + continue if hasattr(pipe, "labels"): labels[name] = list(pipe.labels) return SimpleFrozenDict(labels) @@ -522,7 +525,7 @@ class Language: requires: Iterable[str] = SimpleFrozenList(), retokenizes: bool = False, func: Optional["Pipe"] = None, - ) -> Callable: + ) -> Callable[..., Any]: """Register a new pipeline component. Can be used for stateless function components that don't require a separate factory. Can be used as a decorator on a function or classmethod, or called as a function with the @@ -1219,8 +1222,9 @@ class Language: component_cfg = {} grads = {} - def get_grads(W, dW, key=None): + def get_grads(key, W, dW): grads[key] = (W, dW) + return W, dW get_grads.learn_rate = sgd.learn_rate # type: ignore[attr-defined, union-attr] get_grads.b1 = sgd.b1 # type: ignore[attr-defined, union-attr] @@ -1233,7 +1237,7 @@ class Language: examples, sgd=get_grads, losses=losses, **component_cfg.get(name, {}) ) for key, (W, dW) in grads.items(): - sgd(W, dW, key=key) # type: ignore[call-arg, misc] + sgd(key, W, dW) # type: ignore[call-arg, misc] return losses def begin_training( diff --git a/spacy/matcher/dependencymatcher.pyi b/spacy/matcher/dependencymatcher.pyi new file mode 100644 index 000000000..c19d3a71c --- /dev/null +++ b/spacy/matcher/dependencymatcher.pyi @@ -0,0 +1,66 @@ +from typing import Any, Callable, Dict, List, Optional, Tuple, Union +from .matcher import Matcher +from ..vocab import Vocab +from ..tokens.doc import Doc +from ..tokens.span import Span + +class DependencyMatcher: + """Match dependency parse tree based on pattern rules.""" + + _patterns: Dict[str, List[Any]] + _raw_patterns: Dict[str, List[Any]] + _tokens_to_key: Dict[str, List[Any]] + _root: Dict[str, List[Any]] + _tree: Dict[str, List[Any]] + _callbacks: Dict[ + Any, Callable[[DependencyMatcher, Doc, int, List[Tuple[int, List[int]]]], Any] + ] + _ops: Dict[str, Any] + vocab: Vocab + _matcher: Matcher + def __init__(self, vocab: Vocab, *, validate: bool = ...) -> None: ... + def __reduce__( + self, + ) -> Tuple[ + Callable[ + [Vocab, Dict[str, Any], Dict[str, Callable[..., Any]]], DependencyMatcher + ], + Tuple[ + Vocab, + Dict[str, List[Any]], + Dict[ + str, + Callable[ + [DependencyMatcher, Doc, int, List[Tuple[int, List[int]]]], Any + ], + ], + ], + None, + None, + ]: ... + def __len__(self) -> int: ... + def __contains__(self, key: Union[str, int]) -> bool: ... 
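+    # Per the callback type above, on_match receives
+    # (matcher, doc, match_index, matches), where matches is a list of
+    # (match_id, [token_ids]) tuples; a minimal (hypothetical) sketch:
+    #   >>> def on_match(matcher, doc, i, matches): print(matches[i])
+    #   >>> matcher.add("PATTERN", [pattern], on_match=on_match)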
+ def add( + self, + key: Union[str, int], + patterns: List[List[Dict[str, Any]]], + *, + on_match: Optional[ + Callable[[DependencyMatcher, Doc, int, List[Tuple[int, List[int]]]], Any] + ] = ... + ) -> None: ... + def has_key(self, key: Union[str, int]) -> bool: ... + def get( + self, key: Union[str, int], default: Optional[Any] = ... + ) -> Tuple[ + Optional[ + Callable[[DependencyMatcher, Doc, int, List[Tuple[int, List[int]]]], Any] + ], + List[List[Dict[str, Any]]], + ]: ... + def remove(self, key: Union[str, int]) -> None: ... + def __call__(self, doclike: Union[Doc, Span]) -> List[Tuple[int, List[int]]]: ... + +def unpickle_matcher( + vocab: Vocab, patterns: Dict[str, Any], callbacks: Dict[str, Callable[..., Any]] +) -> DependencyMatcher: ... diff --git a/spacy/matcher/matcher.pyi b/spacy/matcher/matcher.pyi index ec4a88eaf..390629ff8 100644 --- a/spacy/matcher/matcher.pyi +++ b/spacy/matcher/matcher.pyi @@ -1,4 +1,6 @@ -from typing import Any, List, Dict, Tuple, Optional, Callable, Union, Iterator, Iterable +from typing import Any, List, Dict, Tuple, Optional, Callable, Union +from typing import Iterator, Iterable, overload +from ..compat import Literal from ..vocab import Vocab from ..tokens import Doc, Span @@ -31,12 +33,22 @@ class Matcher: ) -> Union[ Iterator[Tuple[Tuple[Doc, Any], Any]], Iterator[Tuple[Doc, Any]], Iterator[Doc] ]: ... + @overload def __call__( self, doclike: Union[Doc, Span], *, - as_spans: bool = ..., + as_spans: Literal[False] = ..., allow_missing: bool = ..., with_alignments: bool = ... - ) -> Union[List[Tuple[int, int, int]], List[Span]]: ... + ) -> List[Tuple[int, int, int]]: ... + @overload + def __call__( + self, + doclike: Union[Doc, Span], + *, + as_spans: Literal[True], + allow_missing: bool = ..., + with_alignments: bool = ... + ) -> List[Span]: ... def _normalize_key(self, key: Any) -> Any: ... diff --git a/spacy/matcher/matcher.pyx b/spacy/matcher/matcher.pyx index 6aa58f0e3..e75ee9ce2 100644 --- a/spacy/matcher/matcher.pyx +++ b/spacy/matcher/matcher.pyx @@ -244,8 +244,12 @@ cdef class Matcher: pipe = "parser" error_msg = Errors.E155.format(pipe=pipe, attr=self.vocab.strings.as_string(attr)) raise ValueError(error_msg) - matches = find_matches(&self.patterns[0], self.patterns.size(), doclike, length, - extensions=self._extensions, predicates=self._extra_predicates, with_alignments=with_alignments) + + if self.patterns.empty(): + matches = [] + else: + matches = find_matches(&self.patterns[0], self.patterns.size(), doclike, length, + extensions=self._extensions, predicates=self._extra_predicates, with_alignments=with_alignments) final_matches = [] pairs_by_id = {} # For each key, either add all matches, or only the filtered, diff --git a/spacy/matcher/phrasematcher.pyi b/spacy/matcher/phrasematcher.pyi index 741bf7bb6..68e3386e4 100644 --- a/spacy/matcher/phrasematcher.pyi +++ b/spacy/matcher/phrasematcher.pyi @@ -1,6 +1,6 @@ -from typing import List, Tuple, Union, Optional, Callable, Any, Dict - -from . import Matcher +from typing import List, Tuple, Union, Optional, Callable, Any, Dict, overload +from ..compat import Literal +from .matcher import Matcher from ..vocab import Vocab from ..tokens import Doc, Span @@ -14,16 +14,24 @@ class PhraseMatcher: def add( self, key: str, - docs: List[List[Dict[str, Any]]], + docs: List[Doc], *, on_match: Optional[ Callable[[Matcher, Doc, int, List[Tuple[Any, ...]]], Any] ] = ..., ) -> None: ... def remove(self, key: str) -> None: ... 
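+    # The overloads below let type checkers narrow the return type from the
+    # as_spans argument: as_spans=False yields (match_id, start, end) tuples,
+    # as_spans=True yields Span objects, e.g.
+    #   >>> spans = matcher(doc, as_spans=True)  # inferred as List[Span]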
+ @overload def __call__( self, doclike: Union[Doc, Span], *, - as_spans: bool = ..., - ) -> Union[List[Tuple[int, int, int]], List[Span]]: ... + as_spans: Literal[False] = ..., + ) -> List[Tuple[int, int, int]]: ... + @overload + def __call__( + self, + doclike: Union[Doc, Span], + *, + as_spans: Literal[True], + ) -> List[Span]: ... diff --git a/spacy/ml/extract_spans.py b/spacy/ml/extract_spans.py index edc86ff9c..d5e9bc07c 100644 --- a/spacy/ml/extract_spans.py +++ b/spacy/ml/extract_spans.py @@ -63,4 +63,4 @@ def _get_span_indices(ops, spans: Ragged, lengths: Ints1d) -> Ints1d: def _ensure_cpu(spans: Ragged, lengths: Ints1d) -> Tuple[Ragged, Ints1d]: - return (Ragged(to_numpy(spans.dataXd), to_numpy(spans.lengths)), to_numpy(lengths)) + return Ragged(to_numpy(spans.dataXd), to_numpy(spans.lengths)), to_numpy(lengths) diff --git a/spacy/ml/models/entity_linker.py b/spacy/ml/models/entity_linker.py index 831fee90f..0149bea89 100644 --- a/spacy/ml/models/entity_linker.py +++ b/spacy/ml/models/entity_linker.py @@ -1,34 +1,82 @@ from pathlib import Path -from typing import Optional, Callable, Iterable, List +from typing import Optional, Callable, Iterable, List, Tuple from thinc.types import Floats2d from thinc.api import chain, clone, list2ragged, reduce_mean, residual -from thinc.api import Model, Maxout, Linear +from thinc.api import Model, Maxout, Linear, noop, tuplify, Ragged from ...util import registry from ...kb import KnowledgeBase, Candidate, get_candidates from ...vocab import Vocab from ...tokens import Span, Doc +from ..extract_spans import extract_spans +from ...errors import Errors -@registry.architectures("spacy.EntityLinker.v1") +@registry.architectures("spacy.EntityLinker.v2") def build_nel_encoder( tok2vec: Model, nO: Optional[int] = None ) -> Model[List[Doc], Floats2d]: - with Model.define_operators({">>": chain, "**": clone}): + with Model.define_operators({">>": chain, "&": tuplify}): token_width = tok2vec.maybe_get_dim("nO") output_layer = Linear(nO=nO, nI=token_width) model = ( - tok2vec - >> list2ragged() + ((tok2vec >> list2ragged()) & build_span_maker()) + >> extract_spans() >> reduce_mean() >> residual(Maxout(nO=token_width, nI=token_width, nP=2, dropout=0.0)) # type: ignore[arg-type] >> output_layer ) model.set_ref("output_layer", output_layer) model.set_ref("tok2vec", tok2vec) + # flag to show this isn't legacy + model.attrs["include_span_maker"] = True return model +def build_span_maker(n_sents: int = 0) -> Model: + model: Model = Model("span_maker", forward=span_maker_forward) + model.attrs["n_sents"] = n_sents + return model + + +def span_maker_forward(model, docs: List[Doc], is_train) -> Tuple[Ragged, Callable]: + ops = model.ops + n_sents = model.attrs["n_sents"] + candidates = [] + for doc in docs: + cands = [] + try: + sentences = [s for s in doc.sents] + except ValueError: + # no sentence info, normal in initialization + for tok in doc: + tok.is_sent_start = tok.i == 0 + sentences = [doc[:]] + for ent in doc.ents: + try: + # find the sentence in the list of sentences. 
+ sent_index = sentences.index(ent.sent) + except AttributeError: + # Catch the exception when ent.sent is None and provide a user-friendly warning + raise RuntimeError(Errors.E030) from None + # get n previous sentences, if there are any + start_sentence = max(0, sent_index - n_sents) + # get n posterior sentences, or as many < n as there are + end_sentence = min(len(sentences) - 1, sent_index + n_sents) + # get token positions + start_token = sentences[start_sentence].start + end_token = sentences[end_sentence].end + # save positions for extraction + cands.append((start_token, end_token)) + + candidates.append(ops.asarray2i(cands)) + candlens = ops.asarray1i([len(cands) for cands in candidates]) + candidates = ops.xp.concatenate(candidates) + outputs = Ragged(candidates, candlens) + # because this is just rearranging docs, the backprop does nothing + return outputs, lambda x: [] + + @registry.misc("spacy.KBFromFile.v1") def load_kb(kb_path: Path) -> Callable[[Vocab], KnowledgeBase]: def kb_from_file(vocab): diff --git a/spacy/ml/models/multi_task.py b/spacy/ml/models/multi_task.py index 9e1face63..a7d67c6dd 100644 --- a/spacy/ml/models/multi_task.py +++ b/spacy/ml/models/multi_task.py @@ -85,7 +85,7 @@ def get_characters_loss(ops, docs, prediction, nr_char): target = ops.asarray(to_categorical(target_ids, n_classes=256), dtype="f") target = target.reshape((-1, 256 * nr_char)) diff = prediction - target - loss = (diff ** 2).sum() + loss = (diff**2).sum() d_target = diff / float(prediction.shape[0]) return loss, d_target diff --git a/spacy/ml/models/tagger.py b/spacy/ml/models/tagger.py index 9c7fe042d..9f8ef7b2b 100644 --- a/spacy/ml/models/tagger.py +++ b/spacy/ml/models/tagger.py @@ -1,14 +1,14 @@ from typing import Optional, List -from thinc.api import zero_init, with_array, Softmax, chain, Model +from thinc.api import zero_init, with_array, Softmax_v2, chain, Model from thinc.types import Floats2d from ...util import registry from ...tokens import Doc -@registry.architectures("spacy.Tagger.v1") +@registry.architectures("spacy.Tagger.v2") def build_tagger_model( - tok2vec: Model[List[Doc], List[Floats2d]], nO: Optional[int] = None + tok2vec: Model[List[Doc], List[Floats2d]], nO: Optional[int] = None, normalize=False ) -> Model[List[Doc], List[Floats2d]]: """Build a tagger model, using a provided token-to-vector component. The tagger model simply adds a linear layer with softmax activation to predict scores @@ -19,7 +19,9 @@ def build_tagger_model( """ # TODO: glorot_uniform_init seems to work a bit better than zero_init here?! 
 t2v_width = tok2vec.get_dim("nO") if tok2vec.has_dim("nO") else None
-    output_layer = Softmax(nO, t2v_width, init_W=zero_init)
+    output_layer = Softmax_v2(
+        nO, t2v_width, init_W=zero_init, normalize_outputs=normalize
+    )
     softmax = with_array(output_layer)  # type: ignore
     model = chain(tok2vec, softmax)
     model.set_ref("tok2vec", tok2vec)
diff --git a/spacy/pipeline/__init__.py b/spacy/pipeline/__init__.py
index 7b483724c..938ab08c6 100644
--- a/spacy/pipeline/__init__.py
+++ b/spacy/pipeline/__init__.py
@@ -1,5 +1,6 @@
 from .attributeruler import AttributeRuler
 from .dep_parser import DependencyParser
+from .edit_tree_lemmatizer import EditTreeLemmatizer
 from .entity_linker import EntityLinker
 from .ner import EntityRecognizer
 from .entityruler import EntityRuler
diff --git a/spacy/pipeline/_edit_tree_internals/__init__.py b/spacy/pipeline/_edit_tree_internals/__init__.py
new file mode 100644
index 000000000..e69de29bb
diff --git a/spacy/pipeline/_edit_tree_internals/edit_trees.pxd b/spacy/pipeline/_edit_tree_internals/edit_trees.pxd
new file mode 100644
index 000000000..dc4289f37
--- /dev/null
+++ b/spacy/pipeline/_edit_tree_internals/edit_trees.pxd
@@ -0,0 +1,93 @@
+from libc.stdint cimport uint32_t, uint64_t
+from libcpp.unordered_map cimport unordered_map
+from libcpp.vector cimport vector
+
+from ...typedefs cimport attr_t, hash_t, len_t
+from ...strings cimport StringStore
+
+cdef extern from "<algorithm>" namespace "std" nogil:
+    void swap[T](T& a, T& b) except +  # Only available in Cython 3.
+
+# An edit tree (Müller et al., 2015) is a tree structure that consists of
+# edit operations. The two types of operations are string matches
+# and string substitutions. Given an input string s and an output string t,
+# substitution and match nodes should be interpreted as follows:
+#
+# * Substitution node: consists of an original string and substitute string.
+#   If s matches the original string, then t is the substitute. Otherwise,
+#   the node does not apply.
+# * Match node: consists of a prefix length, suffix length, prefix edit tree,
+#   and suffix edit tree. If s is composed of a prefix, middle part, and suffix
+#   with the given suffix and prefix lengths, then t is the concatenation
+#   prefix_tree(prefix) + middle + suffix_tree(suffix).
+#
+# For efficiency, we represent strings in substitution nodes as integers, with
+# the actual strings stored in a StringStore. Subtrees in match nodes are stored
+# as tree identifiers (rather than pointers) to simplify serialization.
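+#
+# For example, rewriting the form "gegooid" into its lemma "gooien" keeps the
+# shared middle "gooi" and delegates the prefix pair ("ge", "") and the suffix
+# pair ("d", "en") to substitution subtrees (see the worked example in
+# edit_trees.pyx).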
+ +cdef uint32_t NULL_TREE_ID + +cdef struct MatchNodeC: + len_t prefix_len + len_t suffix_len + uint32_t prefix_tree + uint32_t suffix_tree + +cdef struct SubstNodeC: + attr_t orig + attr_t subst + +cdef union NodeC: + MatchNodeC match_node + SubstNodeC subst_node + +cdef struct EditTreeC: + bint is_match_node + NodeC inner + +cdef inline EditTreeC edittree_new_match(len_t prefix_len, len_t suffix_len, + uint32_t prefix_tree, uint32_t suffix_tree): + cdef MatchNodeC match_node = MatchNodeC(prefix_len=prefix_len, + suffix_len=suffix_len, prefix_tree=prefix_tree, + suffix_tree=suffix_tree) + cdef NodeC inner = NodeC(match_node=match_node) + return EditTreeC(is_match_node=True, inner=inner) + +cdef inline EditTreeC edittree_new_subst(attr_t orig, attr_t subst): + cdef EditTreeC node + cdef SubstNodeC subst_node = SubstNodeC(orig=orig, subst=subst) + cdef NodeC inner = NodeC(subst_node=subst_node) + return EditTreeC(is_match_node=False, inner=inner) + +cdef inline uint64_t edittree_hash(EditTreeC tree): + cdef MatchNodeC match_node + cdef SubstNodeC subst_node + + if tree.is_match_node: + match_node = tree.inner.match_node + return hash((match_node.prefix_len, match_node.suffix_len, match_node.prefix_tree, match_node.suffix_tree)) + else: + subst_node = tree.inner.subst_node + return hash((subst_node.orig, subst_node.subst)) + +cdef struct LCS: + int source_begin + int source_end + int target_begin + int target_end + +cdef inline bint lcs_is_empty(LCS lcs): + return lcs.source_begin == 0 and lcs.source_end == 0 and lcs.target_begin == 0 and lcs.target_end == 0 + +cdef class EditTrees: + cdef vector[EditTreeC] trees + cdef unordered_map[hash_t, uint32_t] map + cdef StringStore strings + + cpdef uint32_t add(self, str form, str lemma) + cpdef str apply(self, uint32_t tree_id, str form) + cpdef unicode tree_to_str(self, uint32_t tree_id) + + cdef uint32_t _add(self, str form, str lemma) + cdef _apply(self, uint32_t tree_id, str form_part, list lemma_pieces) + cdef uint32_t _tree_id(self, EditTreeC tree) diff --git a/spacy/pipeline/_edit_tree_internals/edit_trees.pyx b/spacy/pipeline/_edit_tree_internals/edit_trees.pyx new file mode 100644 index 000000000..02907b67a --- /dev/null +++ b/spacy/pipeline/_edit_tree_internals/edit_trees.pyx @@ -0,0 +1,305 @@ +# cython: infer_types=True, binding=True +from cython.operator cimport dereference as deref +from libc.stdint cimport uint32_t +from libc.stdint cimport UINT32_MAX +from libc.string cimport memset +from libcpp.pair cimport pair +from libcpp.vector cimport vector + +from pathlib import Path + +from ...typedefs cimport hash_t + +from ... import util +from ...errors import Errors +from ...strings import StringStore +from .schemas import validate_edit_tree + + +NULL_TREE_ID = UINT32_MAX + +cdef LCS find_lcs(str source, str target): + """ + Find the longest common subsequence (LCS) between two strings. If there are + multiple LCSes, only one of them is returned. + + source (str): The first string. + target (str): The second string. + RETURNS (LCS): The spans of the longest common subsequences. 
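+
+    For example, find_lcs("gegooid", "gooien") marks the shared substring
+    "gooi", i.e. the source span [2, 6) and the target span [0, 4).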
+ """ + cdef Py_ssize_t source_len = len(source) + cdef Py_ssize_t target_len = len(target) + cdef size_t longest_align = 0; + cdef int source_idx, target_idx + cdef LCS lcs + cdef Py_UCS4 source_cp, target_cp + + memset(&lcs, 0, sizeof(lcs)) + + cdef vector[size_t] prev_aligns = vector[size_t](target_len); + cdef vector[size_t] cur_aligns = vector[size_t](target_len); + + for (source_idx, source_cp) in enumerate(source): + for (target_idx, target_cp) in enumerate(target): + if source_cp == target_cp: + if source_idx == 0 or target_idx == 0: + cur_aligns[target_idx] = 1 + else: + cur_aligns[target_idx] = prev_aligns[target_idx - 1] + 1 + + # Check if this is the longest alignment and replace previous + # best alignment when this is the case. + if cur_aligns[target_idx] > longest_align: + longest_align = cur_aligns[target_idx] + lcs.source_begin = source_idx - longest_align + 1 + lcs.source_end = source_idx + 1 + lcs.target_begin = target_idx - longest_align + 1 + lcs.target_end = target_idx + 1 + else: + # No match, we start with a zero-length alignment. + cur_aligns[target_idx] = 0 + swap(prev_aligns, cur_aligns) + + return lcs + +cdef class EditTrees: + """Container for constructing and storing edit trees.""" + def __init__(self, strings: StringStore): + """Create a container for edit trees. + + strings (StringStore): the string store to use.""" + self.strings = strings + + cpdef uint32_t add(self, str form, str lemma): + """Add an edit tree that rewrites the given string into the given lemma. + + RETURNS (int): identifier of the edit tree in the container. + """ + # Treat two empty strings as a special case. Generating an edit + # tree for identical strings results in a match node. However, + # since two empty strings have a zero-length LCS, a substitution + # node would be created. Since we do not want to clutter the + # recursive tree construction with logic for this case, handle + # it in this wrapper method. + if len(form) == 0 and len(lemma) == 0: + tree = edittree_new_match(0, 0, NULL_TREE_ID, NULL_TREE_ID) + return self._tree_id(tree) + + return self._add(form, lemma) + + cdef uint32_t _add(self, str form, str lemma): + cdef LCS lcs = find_lcs(form, lemma) + + cdef EditTreeC tree + cdef uint32_t tree_id, prefix_tree, suffix_tree + if lcs_is_empty(lcs): + tree = edittree_new_subst(self.strings.add(form), self.strings.add(lemma)) + else: + # If we have a non-empty LCS, such as "gooi" in "ge[gooi]d" and "[gooi]en", + # create edit trees for the prefix pair ("ge"/"") and the suffix pair ("d"/"en"). + prefix_tree = NULL_TREE_ID + if lcs.source_begin != 0 or lcs.target_begin != 0: + prefix_tree = self.add(form[:lcs.source_begin], lemma[:lcs.target_begin]) + + suffix_tree = NULL_TREE_ID + if lcs.source_end != len(form) or lcs.target_end != len(lemma): + suffix_tree = self.add(form[lcs.source_end:], lemma[lcs.target_end:]) + + tree = edittree_new_match(lcs.source_begin, len(form) - lcs.source_end, prefix_tree, suffix_tree) + + return self._tree_id(tree) + + cdef uint32_t _tree_id(self, EditTreeC tree): + # If this tree has been constructed before, return its identifier. + cdef hash_t hash = edittree_hash(tree) + cdef unordered_map[hash_t, uint32_t].iterator iter = self.map.find(hash) + if iter != self.map.end(): + return deref(iter).second + + # The tree hasn't been seen before, store it. 
+        cdef uint32_t tree_id = self.trees.size()
+        self.trees.push_back(tree)
+        self.map.insert(pair[hash_t, uint32_t](hash, tree_id))
+
+        return tree_id
+
+    cpdef str apply(self, uint32_t tree_id, str form):
+        """Apply an edit tree to a form.
+
+        tree_id (uint32_t): the identifier of the edit tree to apply.
+        form (str): the form to apply the edit tree to.
+        RETURNS (str): the transformed form or None if the edit tree
+            could not be applied to the form.
+        """
+        if tree_id >= self.trees.size():
+            raise IndexError("Edit tree identifier out of range")
+
+        lemma_pieces = []
+        try:
+            self._apply(tree_id, form, lemma_pieces)
+        except ValueError:
+            return None
+        return "".join(lemma_pieces)
+
+    cdef _apply(self, uint32_t tree_id, str form_part, list lemma_pieces):
+        """Recursively apply an edit tree to a form, adding pieces to
+        the lemma_pieces list."""
+        assert tree_id <= self.trees.size()
+
+        cdef EditTreeC tree = self.trees[tree_id]
+        cdef MatchNodeC match_node
+        cdef int suffix_start
+
+        if tree.is_match_node:
+            match_node = tree.inner.match_node
+
+            if match_node.prefix_len + match_node.suffix_len > len(form_part):
+                raise ValueError("Edit tree cannot be applied to form")
+
+            suffix_start = len(form_part) - match_node.suffix_len
+
+            if match_node.prefix_tree != NULL_TREE_ID:
+                self._apply(match_node.prefix_tree, form_part[:match_node.prefix_len], lemma_pieces)
+
+            lemma_pieces.append(form_part[match_node.prefix_len:suffix_start])
+
+            if match_node.suffix_tree != NULL_TREE_ID:
+                self._apply(match_node.suffix_tree, form_part[suffix_start:], lemma_pieces)
+        else:
+            if form_part == self.strings[tree.inner.subst_node.orig]:
+                lemma_pieces.append(self.strings[tree.inner.subst_node.subst])
+            else:
+                raise ValueError("Edit tree cannot be applied to form")
+
+    cpdef unicode tree_to_str(self, uint32_t tree_id):
+        """Return the tree as a string. The tree string is formatted
+        like an S-expression. This is primarily useful for debugging. Match
+        nodes have the following format:
+
+        (m prefix_len suffix_len prefix_tree suffix_tree)
+
+        Substitution nodes have the following format:
+
+        (s original substitute)
+
+        tree_id (uint32_t): the identifier of the edit tree.
+        RETURNS (str): the tree as an S-expression.
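+
+        For example, the tree built for the pair ("gegooid", "gooien") renders
+        as:
+
+            (m 2 1 (s 'ge' '') (s 'd' 'en'))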
+ """ + + if tree_id >= self.trees.size(): + raise IndexError("Edit tree identifier out of range") + + cdef EditTreeC tree = self.trees[tree_id] + cdef SubstNodeC subst_node + + if not tree.is_match_node: + subst_node = tree.inner.subst_node + return f"(s '{self.strings[subst_node.orig]}' '{self.strings[subst_node.subst]}')" + + cdef MatchNodeC match_node = tree.inner.match_node + + prefix_tree = "()" + if match_node.prefix_tree != NULL_TREE_ID: + prefix_tree = self.tree_to_str(match_node.prefix_tree) + + suffix_tree = "()" + if match_node.suffix_tree != NULL_TREE_ID: + suffix_tree = self.tree_to_str(match_node.suffix_tree) + + return f"(m {match_node.prefix_len} {match_node.suffix_len} {prefix_tree} {suffix_tree})" + + def from_json(self, trees: list) -> "EditTrees": + self.trees.clear() + + for tree in trees: + tree = _dict2tree(tree) + self.trees.push_back(tree) + + self._rebuild_tree_map() + + def from_bytes(self, bytes_data: bytes, *) -> "EditTrees": + def deserialize_trees(tree_dicts): + cdef EditTreeC c_tree + for tree_dict in tree_dicts: + c_tree = _dict2tree(tree_dict) + self.trees.push_back(c_tree) + + deserializers = {} + deserializers["trees"] = lambda n: deserialize_trees(n) + util.from_bytes(bytes_data, deserializers, []) + + self._rebuild_tree_map() + + return self + + def to_bytes(self, **kwargs) -> bytes: + tree_dicts = [] + for tree in self.trees: + tree = _tree2dict(tree) + tree_dicts.append(tree) + + serializers = {} + serializers["trees"] = lambda: tree_dicts + + return util.to_bytes(serializers, []) + + def to_disk(self, path, **kwargs) -> "EditTrees": + path = util.ensure_path(path) + with path.open("wb") as file_: + file_.write(self.to_bytes()) + + def from_disk(self, path, **kwargs) -> "EditTrees": + path = util.ensure_path(path) + if path.exists(): + with path.open("rb") as file_: + data = file_.read() + return self.from_bytes(data) + + return self + + def __getitem__(self, idx): + return _tree2dict(self.trees[idx]) + + def __len__(self): + return self.trees.size() + + def _rebuild_tree_map(self): + """Rebuild the tree hash -> tree id mapping""" + cdef EditTreeC c_tree + cdef uint32_t tree_id + cdef hash_t tree_hash + + self.map.clear() + + for tree_id in range(self.trees.size()): + c_tree = self.trees[tree_id] + tree_hash = edittree_hash(c_tree) + self.map.insert(pair[hash_t, uint32_t](tree_hash, tree_id)) + + def __reduce__(self): + return (unpickle_edittrees, (self.strings, self.to_bytes())) + + +def unpickle_edittrees(strings, trees_data): + return EditTrees(strings).from_bytes(trees_data) + + +def _tree2dict(tree): + if tree["is_match_node"]: + tree = tree["inner"]["match_node"] + else: + tree = tree["inner"]["subst_node"] + return(dict(tree)) + +def _dict2tree(tree): + errors = validate_edit_tree(tree) + if errors: + raise ValueError(Errors.E1026.format(errors="\n".join(errors))) + + tree = dict(tree) + if "prefix_len" in tree: + tree = {"is_match_node": True, "inner": {"match_node": tree}} + else: + tree = {"is_match_node": False, "inner": {"subst_node": tree}} + + return tree diff --git a/spacy/pipeline/_edit_tree_internals/schemas.py b/spacy/pipeline/_edit_tree_internals/schemas.py new file mode 100644 index 000000000..c01d0632e --- /dev/null +++ b/spacy/pipeline/_edit_tree_internals/schemas.py @@ -0,0 +1,44 @@ +from typing import Any, Dict, List, Union +from collections import defaultdict +from pydantic import BaseModel, Field, ValidationError +from pydantic.types import StrictBool, StrictInt, StrictStr + + +class MatchNodeSchema(BaseModel): + 
prefix_len: StrictInt = Field(..., title="Prefix length") + suffix_len: StrictInt = Field(..., title="Suffix length") + prefix_tree: StrictInt = Field(..., title="Prefix tree") + suffix_tree: StrictInt = Field(..., title="Suffix tree") + + class Config: + extra = "forbid" + + +class SubstNodeSchema(BaseModel): + orig: Union[int, StrictStr] = Field(..., title="Original substring") + subst: Union[int, StrictStr] = Field(..., title="Replacement substring") + + class Config: + extra = "forbid" + + +class EditTreeSchema(BaseModel): + __root__: Union[MatchNodeSchema, SubstNodeSchema] + + +def validate_edit_tree(obj: Dict[str, Any]) -> List[str]: + """Validate edit tree. + + obj (Dict[str, Any]): JSON-serializable data to validate. + RETURNS (List[str]): A list of error messages, if available. + """ + try: + EditTreeSchema.parse_obj(obj) + return [] + except ValidationError as e: + errors = e.errors() + data = defaultdict(list) + for error in errors: + err_loc = " -> ".join([str(p) for p in error.get("loc", [])]) + data[err_loc].append(error.get("msg")) + return [f"[{loc}] {', '.join(msg)}" for loc, msg in data.items()] # type: ignore[arg-type] diff --git a/spacy/pipeline/_parser_internals/_state.pxd b/spacy/pipeline/_parser_internals/_state.pxd index 9d93814cf..10d6ef3c0 100644 --- a/spacy/pipeline/_parser_internals/_state.pxd +++ b/spacy/pipeline/_parser_internals/_state.pxd @@ -3,6 +3,7 @@ from libc.string cimport memcpy, memset from libc.stdlib cimport calloc, free from libc.stdint cimport uint32_t, uint64_t cimport libcpp +from libcpp.unordered_map cimport unordered_map from libcpp.vector cimport vector from libcpp.set cimport set from cpython.exc cimport PyErr_CheckSignals, PyErr_SetFromErrno @@ -30,8 +31,8 @@ cdef cppclass StateC: vector[int] _stack vector[int] _rebuffer vector[SpanC] _ents - vector[ArcC] _left_arcs - vector[ArcC] _right_arcs + unordered_map[int, vector[ArcC]] _left_arcs + unordered_map[int, vector[ArcC]] _right_arcs vector[libcpp.bool] _unshiftable vector[int] history set[int] _sent_starts @@ -161,15 +162,22 @@ cdef cppclass StateC: else: return &this._sent[i] - void get_arcs(vector[ArcC]* arcs) nogil const: - for i in range(this._left_arcs.size()): - arc = this._left_arcs.at(i) - if arc.head != -1 and arc.child != -1: - arcs.push_back(arc) - for i in range(this._right_arcs.size()): - arc = this._right_arcs.at(i) - if arc.head != -1 and arc.child != -1: - arcs.push_back(arc) + void map_get_arcs(const unordered_map[int, vector[ArcC]] &heads_arcs, vector[ArcC]* out) nogil const: + cdef const vector[ArcC]* arcs + head_arcs_it = heads_arcs.const_begin() + while head_arcs_it != heads_arcs.const_end(): + arcs = &deref(head_arcs_it).second + arcs_it = arcs.const_begin() + while arcs_it != arcs.const_end(): + arc = deref(arcs_it) + if arc.head != -1 and arc.child != -1: + out.push_back(arc) + incr(arcs_it) + incr(head_arcs_it) + + void get_arcs(vector[ArcC]* out) nogil const: + this.map_get_arcs(this._left_arcs, out) + this.map_get_arcs(this._right_arcs, out) int H(int child) nogil const: if child >= this.length or child < 0: @@ -183,37 +191,35 @@ cdef cppclass StateC: else: return this._ents.back().start - int L(int head, int idx) nogil const: - if idx < 1 or this._left_arcs.size() == 0: + int nth_child(const unordered_map[int, vector[ArcC]]& heads_arcs, int head, int idx) nogil const: + if idx < 1: return -1 - # Work backwards through left-arcs to find the arc at the + head_arcs_it = heads_arcs.const_find(head) + if head_arcs_it == heads_arcs.const_end(): + return -1 + + cdef 
const vector[ArcC]* arcs = &deref(head_arcs_it).second + + # Work backwards through arcs to find the arc at the # requested index more quickly. cdef size_t child_index = 0 - it = this._left_arcs.const_rbegin() - while it != this._left_arcs.rend(): - arc = deref(it) - if arc.head == head and arc.child != -1 and arc.child < head: + arcs_it = arcs.const_rbegin() + while arcs_it != arcs.const_rend() and child_index != idx: + arc = deref(arcs_it) + if arc.child != -1: child_index += 1 if child_index == idx: return arc.child - incr(it) + incr(arcs_it) return -1 + int L(int head, int idx) nogil const: + return this.nth_child(this._left_arcs, head, idx) + int R(int head, int idx) nogil const: - if idx < 1 or this._right_arcs.size() == 0: - return -1 - cdef vector[int] rights - for i in range(this._right_arcs.size()): - arc = this._right_arcs.at(i) - if arc.head == head and arc.child != -1 and arc.child > head: - rights.push_back(arc.child) - idx = (rights.size()) - idx - if idx < 0: - return -1 - else: - return rights.at(idx) + return this.nth_child(this._right_arcs, head, idx) bint empty() nogil const: return this._stack.size() == 0 @@ -254,22 +260,29 @@ cdef cppclass StateC: int r_edge(int word) nogil const: return word - - int n_L(int head) nogil const: + + int n_arcs(const unordered_map[int, vector[ArcC]] &heads_arcs, int head) nogil const: cdef int n = 0 - for i in range(this._left_arcs.size()): - arc = this._left_arcs.at(i) - if arc.head == head and arc.child != -1 and arc.child < arc.head: + head_arcs_it = heads_arcs.const_find(head) + if head_arcs_it == heads_arcs.const_end(): + return n + + cdef const vector[ArcC]* arcs = &deref(head_arcs_it).second + arcs_it = arcs.const_begin() + while arcs_it != arcs.end(): + arc = deref(arcs_it) + if arc.child != -1: n += 1 + incr(arcs_it) + return n + + int n_L(int head) nogil const: + return n_arcs(this._left_arcs, head) + int n_R(int head) nogil const: - cdef int n = 0 - for i in range(this._right_arcs.size()): - arc = this._right_arcs.at(i) - if arc.head == head and arc.child != -1 and arc.child > arc.head: - n += 1 - return n + return n_arcs(this._right_arcs, head) bint stack_is_connected() nogil const: return False @@ -329,19 +342,20 @@ cdef cppclass StateC: arc.child = child arc.label = label if head > child: - this._left_arcs.push_back(arc) + this._left_arcs[arc.head].push_back(arc) else: - this._right_arcs.push_back(arc) + this._right_arcs[arc.head].push_back(arc) this._heads[child] = head - void del_arc(int h_i, int c_i) nogil: - cdef vector[ArcC]* arcs - if h_i > c_i: - arcs = &this._left_arcs - else: - arcs = &this._right_arcs + void map_del_arc(unordered_map[int, vector[ArcC]]* heads_arcs, int h_i, int c_i) nogil: + arcs_it = heads_arcs.find(h_i) + if arcs_it == heads_arcs.end(): + return + + arcs = &deref(arcs_it).second if arcs.size() == 0: return + arc = arcs.back() if arc.head == h_i and arc.child == c_i: arcs.pop_back() @@ -354,6 +368,12 @@ cdef cppclass StateC: arc.label = 0 break + void del_arc(int h_i, int c_i) nogil: + if h_i > c_i: + this.map_del_arc(&this._left_arcs, h_i, c_i) + else: + this.map_del_arc(&this._right_arcs, h_i, c_i) + SpanC get_ent() nogil const: cdef SpanC ent if this._ents.size() == 0: diff --git a/spacy/pipeline/_parser_internals/arc_eager.pyx b/spacy/pipeline/_parser_internals/arc_eager.pyx index 33c7c23b2..a5dfc9707 100644 --- a/spacy/pipeline/_parser_internals/arc_eager.pyx +++ b/spacy/pipeline/_parser_internals/arc_eager.pyx @@ -218,7 +218,7 @@ def _get_aligned_sent_starts(example): sent_starts = [False] 
* len(example.x) seen_words = set() for y_sent in example.y.sents: - x_indices = list(align[y_sent.start : y_sent.end].dataXd) + x_indices = list(align[y_sent.start : y_sent.end]) if any(x_idx in seen_words for x_idx in x_indices): # If there are any tokens in X that align across two sentences, # regard the sentence annotations as missing, as we can't diff --git a/spacy/pipeline/_parser_internals/nonproj.pyx b/spacy/pipeline/_parser_internals/nonproj.pyx index 82070cd27..36163fcc3 100644 --- a/spacy/pipeline/_parser_internals/nonproj.pyx +++ b/spacy/pipeline/_parser_internals/nonproj.pyx @@ -4,6 +4,10 @@ for doing pseudo-projective parsing implementation uses the HEAD decoration scheme. """ from copy import copy +from libc.limits cimport INT_MAX +from libc.stdlib cimport abs +from libcpp cimport bool +from libcpp.vector cimport vector from ...tokens.doc cimport Doc, set_children_from_heads @@ -41,13 +45,18 @@ def contains_cycle(heads): def is_nonproj_arc(tokenid, heads): + cdef vector[int] c_heads = _heads_to_c(heads) + return _is_nonproj_arc(tokenid, c_heads) + + +cdef bool _is_nonproj_arc(int tokenid, const vector[int]& heads) nogil: # definition (e.g. Havelka 2007): an arc h -> d, h < d is non-projective # if there is a token k, h < k < d such that h is not # an ancestor of k. Same for h -> d, h > d head = heads[tokenid] if head == tokenid: # root arcs cannot be non-projective return False - elif head is None: # unattached tokens cannot be non-projective + elif head < 0: # unattached tokens cannot be non-projective return False cdef int start, end @@ -56,19 +65,29 @@ def is_nonproj_arc(tokenid, heads): else: start, end = (tokenid+1, head) for k in range(start, end): - for ancestor in ancestors(k, heads): - if ancestor is None: # for unattached tokens/subtrees - break - elif ancestor == head: # normal case: k dominated by h - break + if _has_head_as_ancestor(k, head, heads): + continue else: # head not in ancestors: d -> h is non-projective return True return False +cdef bool _has_head_as_ancestor(int tokenid, int head, const vector[int]& heads) nogil: + ancestor = tokenid + cnt = 0 + while cnt < heads.size(): + if heads[ancestor] == head or heads[ancestor] < 0: + return True + ancestor = heads[ancestor] + cnt += 1 + + return False + + def is_nonproj_tree(heads): + cdef vector[int] c_heads = _heads_to_c(heads) # a tree is non-projective if at least one arc is non-projective - return any(is_nonproj_arc(word, heads) for word in range(len(heads))) + return any(_is_nonproj_arc(word, c_heads) for word in range(len(heads))) def decompose(label): @@ -98,16 +117,31 @@ def projectivize(heads, labels): # tree, i.e. connected and cycle-free. Returns a new pair (heads, labels) # which encode a projective and decorated tree. 
proj_heads = copy(heads) - smallest_np_arc = _get_smallest_nonproj_arc(proj_heads) - if smallest_np_arc is None: # this sentence is already projective + + cdef int new_head + cdef vector[int] c_proj_heads = _heads_to_c(proj_heads) + cdef int smallest_np_arc = _get_smallest_nonproj_arc(c_proj_heads) + if smallest_np_arc == -1: # this sentence is already projective return proj_heads, copy(labels) - while smallest_np_arc is not None: - _lift(smallest_np_arc, proj_heads) - smallest_np_arc = _get_smallest_nonproj_arc(proj_heads) + while smallest_np_arc != -1: + new_head = _lift(smallest_np_arc, proj_heads) + c_proj_heads[smallest_np_arc] = new_head + smallest_np_arc = _get_smallest_nonproj_arc(c_proj_heads) deco_labels = _decorate(heads, proj_heads, labels) return proj_heads, deco_labels +cdef vector[int] _heads_to_c(heads): + cdef vector[int] c_heads; + for head in heads: + if head == None: + c_heads.push_back(-1) + else: + assert head < len(heads) + c_heads.push_back(head) + return c_heads + + cpdef deprojectivize(Doc doc): # Reattach arcs with decorated labels (following HEAD scheme). For each # decorated arc X||Y, search top-down, left-to-right, breadth-first until @@ -137,27 +171,38 @@ def _decorate(heads, proj_heads, labels): deco_labels.append(labels[tokenid]) return deco_labels +def get_smallest_nonproj_arc_slow(heads): + cdef vector[int] c_heads = _heads_to_c(heads) + return _get_smallest_nonproj_arc(c_heads) -def _get_smallest_nonproj_arc(heads): + +cdef int _get_smallest_nonproj_arc(const vector[int]& heads) nogil: # return the smallest non-proj arc or None # where size is defined as the distance between dep and head # and ties are broken left to right - smallest_size = float('inf') - smallest_np_arc = None - for tokenid, head in enumerate(heads): + cdef int smallest_size = INT_MAX + cdef int smallest_np_arc = -1 + cdef int size + cdef int tokenid + cdef int head + + for tokenid in range(heads.size()): + head = heads[tokenid] size = abs(tokenid-head) - if size < smallest_size and is_nonproj_arc(tokenid, heads): + if size < smallest_size and _is_nonproj_arc(tokenid, heads): smallest_size = size smallest_np_arc = tokenid return smallest_np_arc -def _lift(tokenid, heads): +cpdef int _lift(tokenid, heads): # reattaches a word to it's grandfather head = heads[tokenid] ghead = heads[head] + cdef int new_head = ghead if head != ghead else tokenid # attach to ghead if head isn't attached to root else attach to root - heads[tokenid] = ghead if head != ghead else tokenid + heads[tokenid] = new_head + return new_head def _find_new_head(token, headlabel): diff --git a/spacy/pipeline/edit_tree_lemmatizer.py b/spacy/pipeline/edit_tree_lemmatizer.py new file mode 100644 index 000000000..54a7030dc --- /dev/null +++ b/spacy/pipeline/edit_tree_lemmatizer.py @@ -0,0 +1,379 @@ +from typing import cast, Any, Callable, Dict, Iterable, List, Optional +from typing import Sequence, Tuple, Union +from collections import Counter +from copy import deepcopy +from itertools import islice +import numpy as np + +import srsly +from thinc.api import Config, Model, SequenceCategoricalCrossentropy +from thinc.types import Floats2d, Ints1d, Ints2d + +from ._edit_tree_internals.edit_trees import EditTrees +from ._edit_tree_internals.schemas import validate_edit_tree +from .lemmatizer import lemmatizer_score +from .trainable_pipe import TrainablePipe +from ..errors import Errors +from ..language import Language +from ..tokens import Doc +from ..training import Example, validate_examples, validate_get_examples +from 
..vocab import Vocab +from .. import util + + +default_model_config = """ +[model] +@architectures = "spacy.Tagger.v2" + +[model.tok2vec] +@architectures = "spacy.HashEmbedCNN.v2" +pretrained_vectors = null +width = 96 +depth = 4 +embed_size = 2000 +window_size = 1 +maxout_pieces = 3 +subword_features = true +""" +DEFAULT_EDIT_TREE_LEMMATIZER_MODEL = Config().from_str(default_model_config)["model"] + + +@Language.factory( + "trainable_lemmatizer", + assigns=["token.lemma"], + requires=[], + default_config={ + "model": DEFAULT_EDIT_TREE_LEMMATIZER_MODEL, + "backoff": "orth", + "min_tree_freq": 3, + "overwrite": False, + "top_k": 1, + "scorer": {"@scorers": "spacy.lemmatizer_scorer.v1"}, + }, + default_score_weights={"lemma_acc": 1.0}, +) +def make_edit_tree_lemmatizer( + nlp: Language, + name: str, + model: Model, + backoff: Optional[str], + min_tree_freq: int, + overwrite: bool, + top_k: int, + scorer: Optional[Callable], +): + """Construct an EditTreeLemmatizer component.""" + return EditTreeLemmatizer( + nlp.vocab, + model, + name, + backoff=backoff, + min_tree_freq=min_tree_freq, + overwrite=overwrite, + top_k=top_k, + scorer=scorer, + ) + + +class EditTreeLemmatizer(TrainablePipe): + """ + Lemmatizer that lemmatizes each word using a predicted edit tree. + """ + + def __init__( + self, + vocab: Vocab, + model: Model, + name: str = "trainable_lemmatizer", + *, + backoff: Optional[str] = "orth", + min_tree_freq: int = 3, + overwrite: bool = False, + top_k: int = 1, + scorer: Optional[Callable] = lemmatizer_score, + ): + """ + Construct an edit tree lemmatizer. + + backoff (Optional[str]): backoff to use when the predicted edit trees + are not applicable. Must be an attribute of Token or None (leave the + lemma unset). + min_tree_freq (int): prune trees that are applied less than this + frequency in the training data. + overwrite (bool): overwrite existing lemma annotations. + top_k (int): try to apply at most the k most probable edit trees. + """ + self.vocab = vocab + self.model = model + self.name = name + self.backoff = backoff + self.min_tree_freq = min_tree_freq + self.overwrite = overwrite + self.top_k = top_k + + self.trees = EditTrees(self.vocab.strings) + self.tree2label: Dict[int, int] = {} + + self.cfg: Dict[str, Any] = {"labels": []} + self.scorer = scorer + + def get_loss( + self, examples: Iterable[Example], scores: List[Floats2d] + ) -> Tuple[float, List[Floats2d]]: + validate_examples(examples, "EditTreeLemmatizer.get_loss") + loss_func = SequenceCategoricalCrossentropy(normalize=False, missing_value=-1) + + truths = [] + for eg in examples: + eg_truths = [] + for (predicted, gold_lemma) in zip( + eg.predicted, eg.get_aligned("LEMMA", as_string=True) + ): + if gold_lemma is None: + label = -1 + else: + tree_id = self.trees.add(predicted.text, gold_lemma) + label = self.tree2label.get(tree_id, 0) + eg_truths.append(label) + + truths.append(eg_truths) + + d_scores, loss = loss_func(scores, truths) # type: ignore + if self.model.ops.xp.isnan(loss): + raise ValueError(Errors.E910.format(name=self.name)) + + return float(loss), d_scores + + def predict(self, docs: Iterable[Doc]) -> List[Ints2d]: + n_docs = len(list(docs)) + if not any(len(doc) for doc in docs): + # Handle cases where there are no tokens in any docs. 
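+            # Return an empty (0, n_labels) guess matrix for each doc.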
+ n_labels = len(self.cfg["labels"]) + guesses: List[Ints2d] = [ + self.model.ops.alloc((0, n_labels), dtype="i") for doc in docs + ] + assert len(guesses) == n_docs + return guesses + scores = self.model.predict(docs) + assert len(scores) == n_docs + guesses = self._scores2guesses(docs, scores) + assert len(guesses) == n_docs + return guesses + + def _scores2guesses(self, docs, scores): + guesses = [] + for doc, doc_scores in zip(docs, scores): + if self.top_k == 1: + doc_guesses = doc_scores.argmax(axis=1).reshape(-1, 1) + else: + doc_guesses = np.argsort(doc_scores)[..., : -self.top_k - 1 : -1] + + if not isinstance(doc_guesses, np.ndarray): + doc_guesses = doc_guesses.get() + + doc_compat_guesses = [] + for token, candidates in zip(doc, doc_guesses): + tree_id = -1 + for candidate in candidates: + candidate_tree_id = self.cfg["labels"][candidate] + + if self.trees.apply(candidate_tree_id, token.text) is not None: + tree_id = candidate_tree_id + break + doc_compat_guesses.append(tree_id) + + guesses.append(np.array(doc_compat_guesses)) + + return guesses + + def set_annotations(self, docs: Iterable[Doc], batch_tree_ids): + for i, doc in enumerate(docs): + doc_tree_ids = batch_tree_ids[i] + if hasattr(doc_tree_ids, "get"): + doc_tree_ids = doc_tree_ids.get() + for j, tree_id in enumerate(doc_tree_ids): + if self.overwrite or doc[j].lemma == 0: + # If no applicable tree could be found during prediction, + # the special identifier -1 is used. Otherwise the tree + # is guaranteed to be applicable. + if tree_id == -1: + if self.backoff is not None: + doc[j].lemma = getattr(doc[j], self.backoff) + else: + lemma = self.trees.apply(tree_id, doc[j].text) + doc[j].lemma_ = lemma + + @property + def labels(self) -> Tuple[int, ...]: + """Returns the labels currently added to the component.""" + return tuple(self.cfg["labels"]) + + @property + def hide_labels(self) -> bool: + return True + + @property + def label_data(self) -> Dict: + trees = [] + for tree_id in range(len(self.trees)): + tree = self.trees[tree_id] + if "orig" in tree: + tree["orig"] = self.vocab.strings[tree["orig"]] + if "subst" in tree: + tree["subst"] = self.vocab.strings[tree["subst"]] + trees.append(tree) + return dict(trees=trees, labels=tuple(self.cfg["labels"])) + + def initialize( + self, + get_examples: Callable[[], Iterable[Example]], + *, + nlp: Optional[Language] = None, + labels: Optional[Dict] = None, + ): + validate_get_examples(get_examples, "EditTreeLemmatizer.initialize") + + if labels is None: + self._labels_from_data(get_examples) + else: + self._add_labels(labels) + + # Sample for the model. 
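+        # Keep each predicted doc and build, for every token, a one-hot target
+        # over the currently known tree labels (all zeros when the gold lemma
+        # is missing).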
+        doc_sample = []
+        label_sample = []
+        for example in islice(get_examples(), 10):
+            doc_sample.append(example.x)
+            gold_labels: List[List[float]] = []
+            for token in example.reference:
+                if token.lemma == 0:
+                    gold_label = None
+                else:
+                    gold_label = self._pair2label(token.text, token.lemma_)
+
+                gold_labels.append(
+                    [
+                        1.0 if label == gold_label else 0.0
+                        for label in self.cfg["labels"]
+                    ]
+                )
+
+            gold_labels = cast(Floats2d, gold_labels)
+            label_sample.append(self.model.ops.asarray(gold_labels, dtype="float32"))
+
+        self._require_labels()
+        assert len(doc_sample) > 0, Errors.E923.format(name=self.name)
+        assert len(label_sample) > 0, Errors.E923.format(name=self.name)
+
+        self.model.initialize(X=doc_sample, Y=label_sample)
+
+    def from_bytes(self, bytes_data, *, exclude=tuple()):
+        deserializers = {
+            "cfg": lambda b: self.cfg.update(srsly.json_loads(b)),
+            "model": lambda b: self.model.from_bytes(b),
+            "vocab": lambda b: self.vocab.from_bytes(b, exclude=exclude),
+            "trees": lambda b: self.trees.from_bytes(b),
+        }
+
+        util.from_bytes(bytes_data, deserializers, exclude)
+
+        return self
+
+    def to_bytes(self, *, exclude=tuple()):
+        serializers = {
+            "cfg": lambda: srsly.json_dumps(self.cfg),
+            "model": lambda: self.model.to_bytes(),
+            "vocab": lambda: self.vocab.to_bytes(exclude=exclude),
+            "trees": lambda: self.trees.to_bytes(),
+        }
+
+        return util.to_bytes(serializers, exclude)
+
+    def to_disk(self, path, exclude=tuple()):
+        path = util.ensure_path(path)
+        serializers = {
+            "cfg": lambda p: srsly.write_json(p, self.cfg),
+            "model": lambda p: self.model.to_disk(p),
+            "vocab": lambda p: self.vocab.to_disk(p, exclude=exclude),
+            "trees": lambda p: self.trees.to_disk(p),
+        }
+        util.to_disk(path, serializers, exclude)
+
+    def from_disk(self, path, exclude=tuple()):
+        def load_model(p):
+            try:
+                with open(p, "rb") as mfile:
+                    self.model.from_bytes(mfile.read())
+            except AttributeError:
+                raise ValueError(Errors.E149) from None
+
+        deserializers = {
+            "cfg": lambda p: self.cfg.update(srsly.read_json(p)),
+            "model": load_model,
+            "vocab": lambda p: self.vocab.from_disk(p, exclude=exclude),
+            "trees": lambda p: self.trees.from_disk(p),
+        }
+
+        util.from_disk(path, deserializers, exclude)
+        return self
+
+    def _add_labels(self, labels: Dict):
+        if "labels" not in labels:
+            raise ValueError(Errors.E857.format(name="labels"))
+        if "trees" not in labels:
+            raise ValueError(Errors.E857.format(name="trees"))
+
+        self.cfg["labels"] = list(labels["labels"])
+        trees = []
+        for tree in labels["trees"]:
+            errors = validate_edit_tree(tree)
+            if errors:
+                raise ValueError(Errors.E1026.format(errors="\n".join(errors)))
+
+            tree = dict(tree)
+            if "orig" in tree:
+                tree["orig"] = self.vocab.strings[tree["orig"]]
+            if "subst" in tree:
+                tree["subst"] = self.vocab.strings[tree["subst"]]
+
+            trees.append(tree)
+
+        self.trees.from_json(trees)
+
+        for label, tree in enumerate(self.labels):
+            self.tree2label[tree] = label
+
+    def _labels_from_data(self, get_examples: Callable[[], Iterable[Example]]):
+        # Count corpus tree frequencies in ad-hoc storage to avoid cluttering
+        # the final pipe/string store.
+ vocab = Vocab() + trees = EditTrees(vocab.strings) + tree_freqs: Counter = Counter() + repr_pairs: Dict = {} + for example in get_examples(): + for token in example.reference: + if token.lemma != 0: + tree_id = trees.add(token.text, token.lemma_) + tree_freqs[tree_id] += 1 + repr_pairs[tree_id] = (token.text, token.lemma_) + + # Construct trees that make the frequency cut-off using representative + # form - token pairs. + for tree_id, freq in tree_freqs.items(): + if freq >= self.min_tree_freq: + form, lemma = repr_pairs[tree_id] + self._pair2label(form, lemma, add_label=True) + + def _pair2label(self, form, lemma, add_label=False): + """ + Look up the edit tree identifier for a form/label pair. If the edit + tree is unknown and "add_label" is set, the edit tree will be added to + the labels. + """ + tree_id = self.trees.add(form, lemma) + if tree_id not in self.tree2label: + if not add_label: + return None + + self.tree2label[tree_id] = len(self.cfg["labels"]) + self.cfg["labels"].append(tree_id) + return self.tree2label[tree_id] diff --git a/spacy/pipeline/entity_linker.py b/spacy/pipeline/entity_linker.py index 1169e898d..89e7576bf 100644 --- a/spacy/pipeline/entity_linker.py +++ b/spacy/pipeline/entity_linker.py @@ -6,17 +6,17 @@ import srsly import random from thinc.api import CosineDistance, Model, Optimizer, Config from thinc.api import set_dropout_rate -import warnings from ..kb import KnowledgeBase, Candidate from ..ml import empty_kb from ..tokens import Doc, Span from .pipe import deserialize_config +from .legacy.entity_linker import EntityLinker_v1 from .trainable_pipe import TrainablePipe from ..language import Language from ..vocab import Vocab from ..training import Example, validate_examples, validate_get_examples -from ..errors import Errors, Warnings +from ..errors import Errors from ..util import SimpleFrozenList, registry from .. import util from ..scorer import Scorer @@ -26,7 +26,7 @@ BACKWARD_OVERWRITE = True default_model_config = """ [model] -@architectures = "spacy.EntityLinker.v1" +@architectures = "spacy.EntityLinker.v2" [model.tok2vec] @architectures = "spacy.HashEmbedCNN.v2" @@ -55,6 +55,7 @@ DEFAULT_NEL_MODEL = Config().from_str(default_model_config)["model"] "get_candidates": {"@misc": "spacy.CandidateGenerator.v1"}, "overwrite": True, "scorer": {"@scorers": "spacy.entity_linker_scorer.v1"}, + "use_gold_ents": True, }, default_score_weights={ "nel_micro_f": 1.0, @@ -75,6 +76,7 @@ def make_entity_linker( get_candidates: Callable[[KnowledgeBase, Span], Iterable[Candidate]], overwrite: bool, scorer: Optional[Callable], + use_gold_ents: bool, ): """Construct an EntityLinker component. @@ -90,6 +92,22 @@ def make_entity_linker( produces a list of candidates, given a certain knowledge base and a textual mention. scorer (Optional[Callable]): The scoring method. 
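+    use_gold_ents (bool): Whether to copy entities from gold docs or not. If false,
+        another component must provide entity annotations.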
""" + + if not model.attrs.get("include_span_maker", False): + # The only difference in arguments here is that use_gold_ents is not available + return EntityLinker_v1( + nlp.vocab, + model, + name, + labels_discard=labels_discard, + n_sents=n_sents, + incl_prior=incl_prior, + incl_context=incl_context, + entity_vector_length=entity_vector_length, + get_candidates=get_candidates, + overwrite=overwrite, + scorer=scorer, + ) return EntityLinker( nlp.vocab, model, @@ -102,6 +120,7 @@ def make_entity_linker( get_candidates=get_candidates, overwrite=overwrite, scorer=scorer, + use_gold_ents=use_gold_ents, ) @@ -136,6 +155,7 @@ class EntityLinker(TrainablePipe): get_candidates: Callable[[KnowledgeBase, Span], Iterable[Candidate]], overwrite: bool = BACKWARD_OVERWRITE, scorer: Optional[Callable] = entity_linker_score, + use_gold_ents: bool, ) -> None: """Initialize an entity linker. @@ -152,6 +172,8 @@ class EntityLinker(TrainablePipe): produces a list of candidates, given a certain knowledge base and a textual mention. scorer (Optional[Callable]): The scoring method. Defaults to Scorer.score_links. + use_gold_ents (bool): Whether to copy entities from gold docs or not. If false, another + component must provide entity annotations. DOCS: https://spacy.io/api/entitylinker#init """ @@ -169,6 +191,7 @@ class EntityLinker(TrainablePipe): # create an empty KB by default. If you want to load a predefined one, specify it in 'initialize'. self.kb = empty_kb(entity_vector_length)(self.vocab) self.scorer = scorer + self.use_gold_ents = use_gold_ents def set_kb(self, kb_loader: Callable[[Vocab], KnowledgeBase]): """Define the KB of this pipe by providing a function that will @@ -212,14 +235,48 @@ class EntityLinker(TrainablePipe): doc_sample = [] vector_sample = [] for example in islice(get_examples(), 10): - doc_sample.append(example.x) + doc = example.x + if self.use_gold_ents: + doc.ents = example.y.ents + doc_sample.append(doc) vector_sample.append(self.model.ops.alloc1f(nO)) assert len(doc_sample) > 0, Errors.E923.format(name=self.name) assert len(vector_sample) > 0, Errors.E923.format(name=self.name) + + # XXX In order for size estimation to work, there has to be at least + # one entity. It's not used for training so it doesn't have to be real, + # so we add a fake one if none are present. + # We can't use Doc.has_annotation here because it can be True for docs + # that have been through an NER component but got no entities. + has_annotations = any([doc.ents for doc in doc_sample]) + if not has_annotations: + doc = doc_sample[0] + ent = doc[0:1] + ent.label_ = "XXX" + doc.ents = (ent,) + self.model.initialize( X=doc_sample, Y=self.model.ops.asarray(vector_sample, dtype="float32") ) + if not has_annotations: + # Clean up dummy annotation + doc.ents = [] + + def batch_has_learnable_example(self, examples): + """Check if a batch contains a learnable example. + + If one isn't present, then the update step needs to be skipped. 
+ """ + + for eg in examples: + for ent in eg.predicted.ents: + candidates = list(self.get_candidates(self.kb, ent)) + if candidates: + return True + + return False + def update( self, examples: Iterable[Example], @@ -247,35 +304,29 @@ class EntityLinker(TrainablePipe): if not examples: return losses validate_examples(examples, "EntityLinker.update") - sentence_docs = [] - for eg in examples: - sentences = [s for s in eg.reference.sents] - kb_ids = eg.get_aligned("ENT_KB_ID", as_string=True) - for ent in eg.reference.ents: - # KB ID of the first token is the same as the whole span - kb_id = kb_ids[ent.start] - if kb_id: - try: - # find the sentence in the list of sentences. - sent_index = sentences.index(ent.sent) - except AttributeError: - # Catch the exception when ent.sent is None and provide a user-friendly warning - raise RuntimeError(Errors.E030) from None - # get n previous sentences, if there are any - start_sentence = max(0, sent_index - self.n_sents) - # get n posterior sentences, or as many < n as there are - end_sentence = min(len(sentences) - 1, sent_index + self.n_sents) - # get token positions - start_token = sentences[start_sentence].start - end_token = sentences[end_sentence].end - # append that span as a doc to training - sent_doc = eg.predicted[start_token:end_token].as_doc() - sentence_docs.append(sent_doc) + set_dropout_rate(self.model, drop) - if not sentence_docs: - warnings.warn(Warnings.W093.format(name="Entity Linker")) + docs = [eg.predicted for eg in examples] + # save to restore later + old_ents = [doc.ents for doc in docs] + + for doc, ex in zip(docs, examples): + if self.use_gold_ents: + doc.ents = ex.reference.ents + else: + # only keep matching ents + doc.ents = ex.get_matching_ents() + + # make sure we have something to learn from, if not, short-circuit + if not self.batch_has_learnable_example(examples): return losses - sentence_encodings, bp_context = self.model.begin_update(sentence_docs) + + sentence_encodings, bp_context = self.model.begin_update(docs) + + # now restore the ents + for doc, old in zip(docs, old_ents): + doc.ents = old + loss, d_scores = self.get_loss( sentence_encodings=sentence_encodings, examples=examples ) @@ -288,24 +339,38 @@ class EntityLinker(TrainablePipe): def get_loss(self, examples: Iterable[Example], sentence_encodings: Floats2d): validate_examples(examples, "EntityLinker.get_loss") entity_encodings = [] + eidx = 0 # indices in gold entities to keep + keep_ents = [] # indices in sentence_encodings to keep + for eg in examples: kb_ids = eg.get_aligned("ENT_KB_ID", as_string=True) + for ent in eg.reference.ents: kb_id = kb_ids[ent.start] if kb_id: entity_encoding = self.kb.get_vector(kb_id) entity_encodings.append(entity_encoding) + keep_ents.append(eidx) + + eidx += 1 entity_encodings = self.model.ops.asarray(entity_encodings, dtype="float32") - if sentence_encodings.shape != entity_encodings.shape: + selected_encodings = sentence_encodings[keep_ents] + + # If the entity encodings list is empty, then + if selected_encodings.shape != entity_encodings.shape: err = Errors.E147.format( method="get_loss", msg="gold entities do not match up" ) raise RuntimeError(err) # TODO: fix typing issue here - gradients = self.distance.get_grad(sentence_encodings, entity_encodings) # type: ignore - loss = self.distance.get_loss(sentence_encodings, entity_encodings) # type: ignore + gradients = self.distance.get_grad(selected_encodings, entity_encodings) # type: ignore + # to match the input size, we need to give a zero gradient for items not 
in the kb + out = self.model.ops.alloc2f(*sentence_encodings.shape) + out[keep_ents] = gradients + + loss = self.distance.get_loss(selected_encodings, entity_encodings) # type: ignore loss = loss / len(entity_encodings) - return float(loss), gradients + return float(loss), out def predict(self, docs: Iterable[Doc]) -> List[str]: """Apply the pipeline's model to a batch of docs, without modifying them. diff --git a/spacy/pipeline/legacy/__init__.py b/spacy/pipeline/legacy/__init__.py new file mode 100644 index 000000000..f216840dc --- /dev/null +++ b/spacy/pipeline/legacy/__init__.py @@ -0,0 +1,3 @@ +from .entity_linker import EntityLinker_v1 + +__all__ = ["EntityLinker_v1"] diff --git a/spacy/pipeline/legacy/entity_linker.py b/spacy/pipeline/legacy/entity_linker.py new file mode 100644 index 000000000..6440c18e5 --- /dev/null +++ b/spacy/pipeline/legacy/entity_linker.py @@ -0,0 +1,427 @@ +# This file is present to provide a prior version of the EntityLinker component +# for backwards compatability. For details see #9669. + +from typing import Optional, Iterable, Callable, Dict, Union, List, Any +from thinc.types import Floats2d +from pathlib import Path +from itertools import islice +import srsly +import random +from thinc.api import CosineDistance, Model, Optimizer, Config +from thinc.api import set_dropout_rate +import warnings + +from ...kb import KnowledgeBase, Candidate +from ...ml import empty_kb +from ...tokens import Doc, Span +from ..pipe import deserialize_config +from ..trainable_pipe import TrainablePipe +from ...language import Language +from ...vocab import Vocab +from ...training import Example, validate_examples, validate_get_examples +from ...errors import Errors, Warnings +from ...util import SimpleFrozenList, registry +from ... import util +from ...scorer import Scorer + +# See #9050 +BACKWARD_OVERWRITE = True + + +def entity_linker_score(examples, **kwargs): + return Scorer.score_links(examples, negative_labels=[EntityLinker_v1.NIL], **kwargs) + + +class EntityLinker_v1(TrainablePipe): + """Pipeline component for named entity linking. + + DOCS: https://spacy.io/api/entitylinker + """ + + NIL = "NIL" # string used to refer to a non-existing link + + def __init__( + self, + vocab: Vocab, + model: Model, + name: str = "entity_linker", + *, + labels_discard: Iterable[str], + n_sents: int, + incl_prior: bool, + incl_context: bool, + entity_vector_length: int, + get_candidates: Callable[[KnowledgeBase, Span], Iterable[Candidate]], + overwrite: bool = BACKWARD_OVERWRITE, + scorer: Optional[Callable] = entity_linker_score, + ) -> None: + """Initialize an entity linker. + + vocab (Vocab): The shared vocabulary. + model (thinc.api.Model): The Thinc Model powering the pipeline component. + name (str): The component instance name, used to add entries to the + losses during training. + labels_discard (Iterable[str]): NER labels that will automatically get a "NIL" prediction. + n_sents (int): The number of neighbouring sentences to take into account. + incl_prior (bool): Whether or not to include prior probabilities from the KB in the model. + incl_context (bool): Whether or not to include the local context in the model. + entity_vector_length (int): Size of encoding vectors in the KB. + get_candidates (Callable[[KnowledgeBase, Span], Iterable[Candidate]]): Function that + produces a list of candidates, given a certain knowledge base and a textual mention. + scorer (Optional[Callable]): The scoring method. Defaults to + Scorer.score_links. 
+ + DOCS: https://spacy.io/api/entitylinker#init + """ + self.vocab = vocab + self.model = model + self.name = name + self.labels_discard = list(labels_discard) + self.n_sents = n_sents + self.incl_prior = incl_prior + self.incl_context = incl_context + self.get_candidates = get_candidates + self.cfg: Dict[str, Any] = {"overwrite": overwrite} + self.distance = CosineDistance(normalize=False) + # how many neighbour sentences to take into account + # create an empty KB by default. If you want to load a predefined one, specify it in 'initialize'. + self.kb = empty_kb(entity_vector_length)(self.vocab) + self.scorer = scorer + + def set_kb(self, kb_loader: Callable[[Vocab], KnowledgeBase]): + """Define the KB of this pipe by providing a function that will + create it using this object's vocab.""" + if not callable(kb_loader): + raise ValueError(Errors.E885.format(arg_type=type(kb_loader))) + + self.kb = kb_loader(self.vocab) + + def validate_kb(self) -> None: + # Raise an error if the knowledge base is not initialized. + if self.kb is None: + raise ValueError(Errors.E1018.format(name=self.name)) + if len(self.kb) == 0: + raise ValueError(Errors.E139.format(name=self.name)) + + def initialize( + self, + get_examples: Callable[[], Iterable[Example]], + *, + nlp: Optional[Language] = None, + kb_loader: Optional[Callable[[Vocab], KnowledgeBase]] = None, + ): + """Initialize the pipe for training, using a representative set + of data examples. + + get_examples (Callable[[], Iterable[Example]]): Function that + returns a representative sample of gold-standard Example objects. + nlp (Language): The current nlp object the component is part of. + kb_loader (Callable[[Vocab], KnowledgeBase]): A function that creates a KnowledgeBase from a Vocab instance. + Note that providing this argument, will overwrite all data accumulated in the current KB. + Use this only when loading a KB as-such from file. + + DOCS: https://spacy.io/api/entitylinker#initialize + """ + validate_get_examples(get_examples, "EntityLinker_v1.initialize") + if kb_loader is not None: + self.set_kb(kb_loader) + self.validate_kb() + nO = self.kb.entity_vector_length + doc_sample = [] + vector_sample = [] + for example in islice(get_examples(), 10): + doc_sample.append(example.x) + vector_sample.append(self.model.ops.alloc1f(nO)) + assert len(doc_sample) > 0, Errors.E923.format(name=self.name) + assert len(vector_sample) > 0, Errors.E923.format(name=self.name) + self.model.initialize( + X=doc_sample, Y=self.model.ops.asarray(vector_sample, dtype="float32") + ) + + def update( + self, + examples: Iterable[Example], + *, + drop: float = 0.0, + sgd: Optional[Optimizer] = None, + losses: Optional[Dict[str, float]] = None, + ) -> Dict[str, float]: + """Learn from a batch of documents and gold-standard information, + updating the pipe's model. Delegates to predict and get_loss. + + examples (Iterable[Example]): A batch of Example objects. + drop (float): The dropout rate. + sgd (thinc.api.Optimizer): The optimizer. + losses (Dict[str, float]): Optional record of the loss during training. + Updated using the component name as the key. + RETURNS (Dict[str, float]): The updated losses dictionary. 
+ + DOCS: https://spacy.io/api/entitylinker#update + """ + self.validate_kb() + if losses is None: + losses = {} + losses.setdefault(self.name, 0.0) + if not examples: + return losses + validate_examples(examples, "EntityLinker_v1.update") + sentence_docs = [] + for eg in examples: + sentences = [s for s in eg.reference.sents] + kb_ids = eg.get_aligned("ENT_KB_ID", as_string=True) + for ent in eg.reference.ents: + # KB ID of the first token is the same as the whole span + kb_id = kb_ids[ent.start] + if kb_id: + try: + # find the sentence in the list of sentences. + sent_index = sentences.index(ent.sent) + except AttributeError: + # Catch the exception when ent.sent is None and provide a user-friendly warning + raise RuntimeError(Errors.E030) from None + # get n previous sentences, if there are any + start_sentence = max(0, sent_index - self.n_sents) + # get n posterior sentences, or as many < n as there are + end_sentence = min(len(sentences) - 1, sent_index + self.n_sents) + # get token positions + start_token = sentences[start_sentence].start + end_token = sentences[end_sentence].end + # append that span as a doc to training + sent_doc = eg.predicted[start_token:end_token].as_doc() + sentence_docs.append(sent_doc) + set_dropout_rate(self.model, drop) + if not sentence_docs: + warnings.warn(Warnings.W093.format(name="Entity Linker")) + return losses + sentence_encodings, bp_context = self.model.begin_update(sentence_docs) + loss, d_scores = self.get_loss( + sentence_encodings=sentence_encodings, examples=examples + ) + bp_context(d_scores) + if sgd is not None: + self.finish_update(sgd) + losses[self.name] += loss + return losses + + def get_loss(self, examples: Iterable[Example], sentence_encodings: Floats2d): + validate_examples(examples, "EntityLinker_v1.get_loss") + entity_encodings = [] + for eg in examples: + kb_ids = eg.get_aligned("ENT_KB_ID", as_string=True) + for ent in eg.reference.ents: + kb_id = kb_ids[ent.start] + if kb_id: + entity_encoding = self.kb.get_vector(kb_id) + entity_encodings.append(entity_encoding) + entity_encodings = self.model.ops.asarray(entity_encodings, dtype="float32") + if sentence_encodings.shape != entity_encodings.shape: + err = Errors.E147.format( + method="get_loss", msg="gold entities do not match up" + ) + raise RuntimeError(err) + # TODO: fix typing issue here + gradients = self.distance.get_grad(sentence_encodings, entity_encodings) # type: ignore + loss = self.distance.get_loss(sentence_encodings, entity_encodings) # type: ignore + loss = loss / len(entity_encodings) + return float(loss), gradients + + def predict(self, docs: Iterable[Doc]) -> List[str]: + """Apply the pipeline's model to a batch of docs, without modifying them. + Returns the KB IDs for each entity in each doc, including NIL if there is + no prediction. + + docs (Iterable[Doc]): The documents to predict. + RETURNS (List[str]): The models prediction for each document. 
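+
+        Usage sketch (assumes `docs` already carry sentence boundaries and
+        entities set by earlier pipeline components, e.g. a parser or senter
+        plus an NER component):
+
+            kb_ids = entity_linker.predict(docs)
+            entity_linker.set_annotations(docs, kb_ids)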
+ + DOCS: https://spacy.io/api/entitylinker#predict + """ + self.validate_kb() + entity_count = 0 + final_kb_ids: List[str] = [] + if not docs: + return final_kb_ids + if isinstance(docs, Doc): + docs = [docs] + for i, doc in enumerate(docs): + sentences = [s for s in doc.sents] + if len(doc) > 0: + # Looping through each entity (TODO: rewrite) + for ent in doc.ents: + sent = ent.sent + sent_index = sentences.index(sent) + assert sent_index >= 0 + # get n_neighbour sentences, clipped to the length of the document + start_sentence = max(0, sent_index - self.n_sents) + end_sentence = min(len(sentences) - 1, sent_index + self.n_sents) + start_token = sentences[start_sentence].start + end_token = sentences[end_sentence].end + sent_doc = doc[start_token:end_token].as_doc() + # currently, the context is the same for each entity in a sentence (should be refined) + xp = self.model.ops.xp + if self.incl_context: + sentence_encoding = self.model.predict([sent_doc])[0] + sentence_encoding_t = sentence_encoding.T + sentence_norm = xp.linalg.norm(sentence_encoding_t) + entity_count += 1 + if ent.label_ in self.labels_discard: + # ignoring this entity - setting to NIL + final_kb_ids.append(self.NIL) + else: + candidates = list(self.get_candidates(self.kb, ent)) + if not candidates: + # no prediction possible for this entity - setting to NIL + final_kb_ids.append(self.NIL) + elif len(candidates) == 1: + # shortcut for efficiency reasons: take the 1 candidate + # TODO: thresholding + final_kb_ids.append(candidates[0].entity_) + else: + random.shuffle(candidates) + # set all prior probabilities to 0 if incl_prior=False + prior_probs = xp.asarray([c.prior_prob for c in candidates]) + if not self.incl_prior: + prior_probs = xp.asarray([0.0 for _ in candidates]) + scores = prior_probs + # add in similarity from the context + if self.incl_context: + entity_encodings = xp.asarray( + [c.entity_vector for c in candidates] + ) + entity_norm = xp.linalg.norm(entity_encodings, axis=1) + if len(entity_encodings) != len(prior_probs): + raise RuntimeError( + Errors.E147.format( + method="predict", + msg="vectors not of equal length", + ) + ) + # cosine similarity + sims = xp.dot(entity_encodings, sentence_encoding_t) / ( + sentence_norm * entity_norm + ) + if sims.shape != prior_probs.shape: + raise ValueError(Errors.E161) + scores = prior_probs + sims - (prior_probs * sims) + # TODO: thresholding + best_index = scores.argmax().item() + best_candidate = candidates[best_index] + final_kb_ids.append(best_candidate.entity_) + if not (len(final_kb_ids) == entity_count): + err = Errors.E147.format( + method="predict", msg="result variables not of equal length" + ) + raise RuntimeError(err) + return final_kb_ids + + def set_annotations(self, docs: Iterable[Doc], kb_ids: List[str]) -> None: + """Modify a batch of documents, using pre-computed scores. + + docs (Iterable[Doc]): The documents to modify. + kb_ids (List[str]): The IDs to set, produced by EntityLinker.predict. + + DOCS: https://spacy.io/api/entitylinker#set_annotations + """ + count_ents = len([ent for doc in docs for ent in doc.ents]) + if count_ents != len(kb_ids): + raise ValueError(Errors.E148.format(ents=count_ents, ids=len(kb_ids))) + i = 0 + overwrite = self.cfg["overwrite"] + for doc in docs: + for ent in doc.ents: + kb_id = kb_ids[i] + i += 1 + for token in ent: + if token.ent_kb_id == 0 or overwrite: + token.ent_kb_id_ = kb_id + + def to_bytes(self, *, exclude=tuple()): + """Serialize the pipe to a bytestring. 
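+        The serialized data includes the component config, the shared vocab,
+        the knowledge base and the model weights, unless excluded.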
+ + exclude (Iterable[str]): String names of serialization fields to exclude. + RETURNS (bytes): The serialized object. + + DOCS: https://spacy.io/api/entitylinker#to_bytes + """ + self._validate_serialization_attrs() + serialize = {} + if hasattr(self, "cfg") and self.cfg is not None: + serialize["cfg"] = lambda: srsly.json_dumps(self.cfg) + serialize["vocab"] = lambda: self.vocab.to_bytes(exclude=exclude) + serialize["kb"] = self.kb.to_bytes + serialize["model"] = self.model.to_bytes + return util.to_bytes(serialize, exclude) + + def from_bytes(self, bytes_data, *, exclude=tuple()): + """Load the pipe from a bytestring. + + exclude (Iterable[str]): String names of serialization fields to exclude. + RETURNS (TrainablePipe): The loaded object. + + DOCS: https://spacy.io/api/entitylinker#from_bytes + """ + self._validate_serialization_attrs() + + def load_model(b): + try: + self.model.from_bytes(b) + except AttributeError: + raise ValueError(Errors.E149) from None + + deserialize = {} + if hasattr(self, "cfg") and self.cfg is not None: + deserialize["cfg"] = lambda b: self.cfg.update(srsly.json_loads(b)) + deserialize["vocab"] = lambda b: self.vocab.from_bytes(b, exclude=exclude) + deserialize["kb"] = lambda b: self.kb.from_bytes(b) + deserialize["model"] = load_model + util.from_bytes(bytes_data, deserialize, exclude) + return self + + def to_disk( + self, path: Union[str, Path], *, exclude: Iterable[str] = SimpleFrozenList() + ) -> None: + """Serialize the pipe to disk. + + path (str / Path): Path to a directory. + exclude (Iterable[str]): String names of serialization fields to exclude. + + DOCS: https://spacy.io/api/entitylinker#to_disk + """ + serialize = {} + serialize["vocab"] = lambda p: self.vocab.to_disk(p, exclude=exclude) + serialize["cfg"] = lambda p: srsly.write_json(p, self.cfg) + serialize["kb"] = lambda p: self.kb.to_disk(p) + serialize["model"] = lambda p: self.model.to_disk(p) + util.to_disk(path, serialize, exclude) + + def from_disk( + self, path: Union[str, Path], *, exclude: Iterable[str] = SimpleFrozenList() + ) -> "EntityLinker_v1": + """Load the pipe from disk. Modifies the object in place and returns it. + + path (str / Path): Path to a directory. + exclude (Iterable[str]): String names of serialization fields to exclude. + RETURNS (EntityLinker): The modified EntityLinker object. 
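+
+        Usage sketch (assumes the directory was written by a matching
+        `to_disk()` call on a component with the same configuration):
+
+            entity_linker.from_disk("/path/to/entity_linker")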
+ + DOCS: https://spacy.io/api/entitylinker#from_disk + """ + + def load_model(p): + try: + with p.open("rb") as infile: + self.model.from_bytes(infile.read()) + except AttributeError: + raise ValueError(Errors.E149) from None + + deserialize: Dict[str, Callable[[Any], Any]] = {} + deserialize["cfg"] = lambda p: self.cfg.update(deserialize_config(p)) + deserialize["vocab"] = lambda p: self.vocab.from_disk(p, exclude=exclude) + deserialize["kb"] = lambda p: self.kb.from_disk(p) + deserialize["model"] = load_model + util.from_disk(path, deserialize, exclude) + return self + + def rehearse(self, examples, *, sgd=None, losses=None, **config): + raise NotImplementedError + + def add_label(self, label): + raise NotImplementedError diff --git a/spacy/pipeline/morphologizer.pyx b/spacy/pipeline/morphologizer.pyx index 73d3799b1..24f98508f 100644 --- a/spacy/pipeline/morphologizer.pyx +++ b/spacy/pipeline/morphologizer.pyx @@ -25,7 +25,7 @@ BACKWARD_EXTEND = False default_model_config = """ [model] -@architectures = "spacy.Tagger.v1" +@architectures = "spacy.Tagger.v2" [model.tok2vec] @architectures = "spacy.Tok2Vec.v2" diff --git a/spacy/pipeline/pipe.pyi b/spacy/pipeline/pipe.pyi index c7c0568f9..9dd6a9d50 100644 --- a/spacy/pipeline/pipe.pyi +++ b/spacy/pipeline/pipe.pyi @@ -26,6 +26,8 @@ class Pipe: @property def labels(self) -> Tuple[str, ...]: ... @property + def hide_labels(self) -> bool: ... + @property def label_data(self) -> Any: ... def _require_labels(self) -> None: ... def set_error_handler( diff --git a/spacy/pipeline/pipe.pyx b/spacy/pipeline/pipe.pyx index 9eddc1e3f..d24e4d574 100644 --- a/spacy/pipeline/pipe.pyx +++ b/spacy/pipeline/pipe.pyx @@ -102,6 +102,10 @@ cdef class Pipe: def labels(self) -> Tuple[str, ...]: return tuple() + @property + def hide_labels(self) -> bool: + return False + @property def label_data(self): """Optional JSON-serializable data that would be sufficient to recreate diff --git a/spacy/pipeline/senter.pyx b/spacy/pipeline/senter.pyx index 54ce021af..6808fe70e 100644 --- a/spacy/pipeline/senter.pyx +++ b/spacy/pipeline/senter.pyx @@ -1,6 +1,6 @@ # cython: infer_types=True, profile=True, binding=True -from itertools import islice from typing import Optional, Callable +from itertools import islice import srsly from thinc.api import Model, SequenceCategoricalCrossentropy, Config @@ -20,7 +20,7 @@ BACKWARD_OVERWRITE = False default_model_config = """ [model] -@architectures = "spacy.Tagger.v1" +@architectures = "spacy.Tagger.v2" [model.tok2vec] @architectures = "spacy.HashEmbedCNN.v2" @@ -99,6 +99,10 @@ class SentenceRecognizer(Tagger): # are 0 return tuple(["I", "S"]) + @property + def hide_labels(self): + return True + @property def label_data(self): return None diff --git a/spacy/pipeline/spancat.py b/spacy/pipeline/spancat.py index 829def1eb..0a6138fbc 100644 --- a/spacy/pipeline/spancat.py +++ b/spacy/pipeline/spancat.py @@ -1,9 +1,10 @@ -import numpy from typing import List, Dict, Callable, Tuple, Optional, Iterable, Any, cast from thinc.api import Config, Model, get_current_ops, set_dropout_rate, Ops from thinc.api import Optimizer from thinc.types import Ragged, Ints2d, Floats2d, Ints1d +import numpy + from ..compat import Protocol, runtime_checkable from ..scorer import Scorer from ..language import Language @@ -271,6 +272,24 @@ class SpanCategorizer(TrainablePipe): scores = self.model.predict((docs, indices)) # type: ignore return indices, scores + def set_candidates( + self, docs: Iterable[Doc], *, candidates_key: str = "candidates" + ) -> None: + 
"""Use the spancat suggester to add a list of span candidates to a list of docs. + This method is intended to be used for debugging purposes. + + docs (Iterable[Doc]): The documents to modify. + candidates_key (str): Key of the Doc.spans dict to save the candidate spans under. + + DOCS: https://spacy.io/api/spancategorizer#set_candidates + """ + suggester_output = self.suggester(docs, ops=self.model.ops) + + for candidates, doc in zip(suggester_output, docs): # type: ignore + doc.spans[candidates_key] = [] + for index in candidates.dataXd: + doc.spans[candidates_key].append(doc[index[0] : index[1]]) + def set_annotations(self, docs: Iterable[Doc], indices_scores) -> None: """Modify a batch of Doc objects, using pre-computed scores. @@ -377,7 +396,7 @@ class SpanCategorizer(TrainablePipe): # If the prediction is 0.9 and it's false, the gradient will be # 0.9 (0.9 - 0.0) d_scores = scores - target - loss = float((d_scores ** 2).sum()) + loss = float((d_scores**2).sum()) return loss, d_scores def initialize( @@ -412,7 +431,7 @@ class SpanCategorizer(TrainablePipe): self._require_labels() if subbatch: docs = [eg.x for eg in subbatch] - spans = self.suggester(docs) + spans = build_ngram_suggester(sizes=[1])(docs) Y = self.model.ops.alloc2f(spans.dataXd.shape[0], len(self.labels)) self.model.initialize(X=(docs, spans), Y=Y) else: diff --git a/spacy/pipeline/tagger.pyx b/spacy/pipeline/tagger.pyx index a2bec888e..d6ecbf084 100644 --- a/spacy/pipeline/tagger.pyx +++ b/spacy/pipeline/tagger.pyx @@ -27,7 +27,7 @@ BACKWARD_OVERWRITE = False default_model_config = """ [model] -@architectures = "spacy.Tagger.v1" +@architectures = "spacy.Tagger.v2" [model.tok2vec] @architectures = "spacy.HashEmbedCNN.v2" @@ -225,6 +225,7 @@ class Tagger(TrainablePipe): DOCS: https://spacy.io/api/tagger#rehearse """ + loss_func = SequenceCategoricalCrossentropy() if losses is None: losses = {} losses.setdefault(self.name, 0.0) @@ -236,12 +237,12 @@ class Tagger(TrainablePipe): # Handle cases where there are no tokens in any docs. return losses set_dropout_rate(self.model, drop) - guesses, backprop = self.model.begin_update(docs) - target = self._rehearsal_model(examples) - gradient = guesses - target - backprop(gradient) + tag_scores, bp_tag_scores = self.model.begin_update(docs) + tutor_tag_scores, _ = self._rehearsal_model.begin_update(docs) + grads, loss = loss_func(tag_scores, tutor_tag_scores) + bp_tag_scores(grads) self.finish_update(sgd) - losses[self.name] += (gradient**2).sum() + losses[self.name] += loss return losses def get_loss(self, examples, scores): diff --git a/spacy/pipeline/textcat.py b/spacy/pipeline/textcat.py index 30a65ec52..bc3f127fc 100644 --- a/spacy/pipeline/textcat.py +++ b/spacy/pipeline/textcat.py @@ -1,8 +1,8 @@ -from itertools import islice from typing import Iterable, Tuple, Optional, Dict, List, Callable, Any from thinc.api import get_array_module, Model, Optimizer, set_dropout_rate, Config from thinc.types import Floats2d import numpy +from itertools import islice from .trainable_pipe import TrainablePipe from ..language import Language @@ -158,6 +158,13 @@ class TextCategorizer(TrainablePipe): self.cfg = dict(cfg) self.scorer = scorer + @property + def support_missing_values(self): + # There are no missing values as the textcat should always + # predict exactly one label. All other labels are 0.0 + # Subclasses may override this property to change internal behaviour. 
+ return False + @property def labels(self) -> Tuple[str]: """RETURNS (Tuple[str]): The labels currently added to the component. @@ -276,12 +283,12 @@ class TextCategorizer(TrainablePipe): return losses set_dropout_rate(self.model, drop) scores, bp_scores = self.model.begin_update(docs) - target = self._rehearsal_model(examples) + target, _ = self._rehearsal_model.begin_update(docs) gradient = scores - target bp_scores(gradient) if sgd is not None: self.finish_update(sgd) - losses[self.name] += (gradient ** 2).sum() + losses[self.name] += (gradient**2).sum() return losses def _examples_to_truth( @@ -294,7 +301,7 @@ class TextCategorizer(TrainablePipe): for j, label in enumerate(self.labels): if label in eg.reference.cats: truths[i, j] = eg.reference.cats[label] - else: + elif self.support_missing_values: not_missing[i, j] = 0.0 truths = self.model.ops.asarray(truths) # type: ignore return truths, not_missing # type: ignore @@ -313,9 +320,9 @@ class TextCategorizer(TrainablePipe): self._validate_categories(examples) truths, not_missing = self._examples_to_truth(examples) not_missing = self.model.ops.asarray(not_missing) # type: ignore - d_scores = (scores - truths) / scores.shape[0] + d_scores = scores - truths d_scores *= not_missing - mean_square_error = (d_scores ** 2).sum(axis=1).mean() + mean_square_error = (d_scores**2).mean() return float(mean_square_error), d_scores def add_label(self, label: str) -> int: diff --git a/spacy/pipeline/textcat_multilabel.py b/spacy/pipeline/textcat_multilabel.py index a7bfacca7..e33a885f8 100644 --- a/spacy/pipeline/textcat_multilabel.py +++ b/spacy/pipeline/textcat_multilabel.py @@ -1,8 +1,8 @@ -from itertools import islice from typing import Iterable, Optional, Dict, List, Callable, Any - -from thinc.api import Model, Config from thinc.types import Floats2d +from thinc.api import Model, Config + +from itertools import islice from ..language import Language from ..training import Example, validate_get_examples @@ -158,6 +158,10 @@ class MultiLabel_TextCategorizer(TextCategorizer): self.cfg = dict(cfg) self.scorer = scorer + @property + def support_missing_values(self): + return True + def initialize( # type: ignore[override] self, get_examples: Callable[[], Iterable[Example]], diff --git a/spacy/pipeline/tok2vec.py b/spacy/pipeline/tok2vec.py index cb601e5dc..2e3dde3cb 100644 --- a/spacy/pipeline/tok2vec.py +++ b/spacy/pipeline/tok2vec.py @@ -118,6 +118,10 @@ class Tok2Vec(TrainablePipe): DOCS: https://spacy.io/api/tok2vec#predict """ + if not any(len(doc) for doc in docs): + # Handle cases where there are no tokens in any docs. 
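+            # Return a width-nO array with zero rows for every doc, so callers
+            # still receive one correctly shaped output per input doc.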
+ width = self.model.get_dim("nO") + return [self.model.ops.alloc((0, width)) for doc in docs] tokvecs = self.model.predict(docs) batch_id = Tok2VecListener.get_batch_id(docs) for listener in self.listeners: diff --git a/spacy/scorer.py b/spacy/scorer.py index 4d596b5e1..8cd755ac4 100644 --- a/spacy/scorer.py +++ b/spacy/scorer.py @@ -228,7 +228,7 @@ class Scorer: if token.orth_.isspace(): continue if align.x2y.lengths[token.i] == 1: - gold_i = align.x2y[token.i].dataXd[0, 0] + gold_i = align.x2y[token.i][0] if gold_i not in missing_indices: pred_tags.add((gold_i, getter(token, attr))) tag_score.score_set(pred_tags, gold_tags) @@ -287,7 +287,7 @@ class Scorer: if token.orth_.isspace(): continue if align.x2y.lengths[token.i] == 1: - gold_i = align.x2y[token.i].dataXd[0, 0] + gold_i = align.x2y[token.i][0] if gold_i not in missing_indices: value = getter(token, attr) morph = gold_doc.vocab.strings[value] @@ -445,7 +445,8 @@ class Scorer: getter(doc, attr) should return the values for the individual doc. labels (Iterable[str]): The set of possible labels. Defaults to []. multi_label (bool): Whether the attribute allows multiple labels. - Defaults to True. + Defaults to True. When set to False (exclusive labels), missing + gold labels are interpreted as 0.0. positive_label (str): The positive label for a binary task with exclusive classes. Defaults to None. threshold (float): Cutoff to consider a prediction "positive". Defaults @@ -484,13 +485,15 @@ class Scorer: for label in labels: pred_score = pred_cats.get(label, 0.0) - gold_score = gold_cats.get(label, 0.0) + gold_score = gold_cats.get(label) + if not gold_score and not multi_label: + gold_score = 0.0 if gold_score is not None: auc_per_type[label].score_set(pred_score, gold_score) if multi_label: for label in labels: pred_score = pred_cats.get(label, 0.0) - gold_score = gold_cats.get(label, 0.0) + gold_score = gold_cats.get(label) if gold_score is not None: if pred_score >= threshold and gold_score > 0: f_per_type[label].tp += 1 @@ -502,16 +505,15 @@ class Scorer: # Get the highest-scoring for each. 
pred_label, pred_score = max(pred_cats.items(), key=lambda it: it[1]) gold_label, gold_score = max(gold_cats.items(), key=lambda it: it[1]) - if gold_score is not None: - if pred_label == gold_label and pred_score >= threshold: - f_per_type[pred_label].tp += 1 - else: - f_per_type[gold_label].fn += 1 - if pred_score >= threshold: - f_per_type[pred_label].fp += 1 + if pred_label == gold_label and pred_score >= threshold: + f_per_type[pred_label].tp += 1 + else: + f_per_type[gold_label].fn += 1 + if pred_score >= threshold: + f_per_type[pred_label].fp += 1 elif gold_cats: gold_label, gold_score = max(gold_cats, key=lambda it: it[1]) - if gold_score is not None and gold_score > 0: + if gold_score > 0: f_per_type[gold_label].fn += 1 elif pred_cats: pred_label, pred_score = max(pred_cats.items(), key=lambda it: it[1]) @@ -692,13 +694,13 @@ class Scorer: if align.x2y.lengths[token.i] != 1: gold_i = None # type: ignore else: - gold_i = align.x2y[token.i].dataXd[0, 0] + gold_i = align.x2y[token.i][0] if gold_i not in missing_indices: dep = getter(token, attr) head = head_getter(token, head_attr) if dep not in ignore_labels and token.orth_.strip(): if align.x2y.lengths[head.i] == 1: - gold_head = align.x2y[head.i].dataXd[0, 0] + gold_head = align.x2y[head.i][0] else: gold_head = None # None is indistinct, so we can't just add it to the set @@ -748,7 +750,7 @@ def get_ner_prf(examples: Iterable[Example], **kwargs) -> Dict[str, Any]: for pred_ent in eg.x.ents: if pred_ent.label_ not in score_per_type: score_per_type[pred_ent.label_] = PRFScore() - indices = align_x2y[pred_ent.start : pred_ent.end].dataXd.ravel() + indices = align_x2y[pred_ent.start : pred_ent.end] if len(indices): g_span = eg.y[indices[0] : indices[-1] + 1] # Check we aren't missing annotation on this span. 
If so, diff --git a/spacy/tests/conftest.py b/spacy/tests/conftest.py index ffca79bb9..24474c71e 100644 --- a/spacy/tests/conftest.py +++ b/spacy/tests/conftest.py @@ -99,6 +99,11 @@ def de_vocab(): return get_lang_class("de")().vocab +@pytest.fixture(scope="session") +def dsb_tokenizer(): + return get_lang_class("dsb")().tokenizer + + @pytest.fixture(scope="session") def el_tokenizer(): return get_lang_class("el")().tokenizer @@ -155,6 +160,11 @@ def fr_tokenizer(): return get_lang_class("fr")().tokenizer +@pytest.fixture(scope="session") +def fr_vocab(): + return get_lang_class("fr")().vocab + + @pytest.fixture(scope="session") def ga_tokenizer(): return get_lang_class("ga")().tokenizer @@ -205,18 +215,41 @@ def it_tokenizer(): return get_lang_class("it")().tokenizer +@pytest.fixture(scope="session") +def it_vocab(): + return get_lang_class("it")().vocab + + @pytest.fixture(scope="session") def ja_tokenizer(): pytest.importorskip("sudachipy") return get_lang_class("ja")().tokenizer +@pytest.fixture(scope="session") +def hsb_tokenizer(): + return get_lang_class("hsb")().tokenizer + + @pytest.fixture(scope="session") def ko_tokenizer(): pytest.importorskip("natto") return get_lang_class("ko")().tokenizer +@pytest.fixture(scope="session") +def ko_tokenizer_tokenizer(): + config = { + "nlp": { + "tokenizer": { + "@tokenizers": "spacy.Tokenizer.v1", + } + } + } + nlp = get_lang_class("ko").from_config(config) + return nlp.tokenizer + + @pytest.fixture(scope="session") def lb_tokenizer(): return get_lang_class("lb")().tokenizer diff --git a/spacy/tests/doc/test_doc_api.py b/spacy/tests/doc/test_doc_api.py index 10700b787..858c7cbb6 100644 --- a/spacy/tests/doc/test_doc_api.py +++ b/spacy/tests/doc/test_doc_api.py @@ -684,6 +684,7 @@ def test_has_annotation(en_vocab): attrs = ("TAG", "POS", "MORPH", "LEMMA", "DEP", "HEAD", "ENT_IOB", "ENT_TYPE") for attr in attrs: assert not doc.has_annotation(attr) + assert not doc.has_annotation(attr, require_complete=True) doc[0].tag_ = "A" doc[0].pos_ = "X" @@ -709,6 +710,27 @@ def test_has_annotation(en_vocab): assert doc.has_annotation(attr, require_complete=True) +def test_has_annotation_sents(en_vocab): + doc = Doc(en_vocab, words=["Hello", "beautiful", "world"]) + attrs = ("SENT_START", "IS_SENT_START", "IS_SENT_END") + for attr in attrs: + assert not doc.has_annotation(attr) + assert not doc.has_annotation(attr, require_complete=True) + + # The first token (index 0) is always assumed to be a sentence start, + # and ignored by the check in doc.has_annotation + + doc[1].is_sent_start = False + for attr in attrs: + assert doc.has_annotation(attr) + assert not doc.has_annotation(attr, require_complete=True) + + doc[2].is_sent_start = False + for attr in attrs: + assert doc.has_annotation(attr) + assert doc.has_annotation(attr, require_complete=True) + + def test_is_flags_deprecated(en_tokenizer): doc = en_tokenizer("test") with pytest.deprecated_call(): diff --git a/spacy/tests/doc/test_span.py b/spacy/tests/doc/test_span.py index 10aba5b94..c0496cabf 100644 --- a/spacy/tests/doc/test_span.py +++ b/spacy/tests/doc/test_span.py @@ -573,6 +573,55 @@ def test_span_with_vectors(doc): doc.vocab.vectors = prev_vectors +# fmt: off +def test_span_comparison(doc): + + # Identical start, end, only differ in label and kb_id + assert Span(doc, 0, 3) == Span(doc, 0, 3) + assert Span(doc, 0, 3, "LABEL") == Span(doc, 0, 3, "LABEL") + assert Span(doc, 0, 3, "LABEL", kb_id="KB_ID") == Span(doc, 0, 3, "LABEL", kb_id="KB_ID") + + assert Span(doc, 0, 3) != Span(doc, 0, 3, 
"LABEL") + assert Span(doc, 0, 3) != Span(doc, 0, 3, "LABEL", kb_id="KB_ID") + assert Span(doc, 0, 3, "LABEL") != Span(doc, 0, 3, "LABEL", kb_id="KB_ID") + + assert Span(doc, 0, 3) <= Span(doc, 0, 3) and Span(doc, 0, 3) >= Span(doc, 0, 3) + assert Span(doc, 0, 3, "LABEL") <= Span(doc, 0, 3, "LABEL") and Span(doc, 0, 3, "LABEL") >= Span(doc, 0, 3, "LABEL") + assert Span(doc, 0, 3, "LABEL", kb_id="KB_ID") <= Span(doc, 0, 3, "LABEL", kb_id="KB_ID") + assert Span(doc, 0, 3, "LABEL", kb_id="KB_ID") >= Span(doc, 0, 3, "LABEL", kb_id="KB_ID") + + assert (Span(doc, 0, 3) < Span(doc, 0, 3, "", kb_id="KB_ID") < Span(doc, 0, 3, "LABEL") < Span(doc, 0, 3, "LABEL", kb_id="KB_ID")) + assert (Span(doc, 0, 3) <= Span(doc, 0, 3, "", kb_id="KB_ID") <= Span(doc, 0, 3, "LABEL") <= Span(doc, 0, 3, "LABEL", kb_id="KB_ID")) + + assert (Span(doc, 0, 3, "LABEL", kb_id="KB_ID") > Span(doc, 0, 3, "LABEL") > Span(doc, 0, 3, "", kb_id="KB_ID") > Span(doc, 0, 3)) + assert (Span(doc, 0, 3, "LABEL", kb_id="KB_ID") >= Span(doc, 0, 3, "LABEL") >= Span(doc, 0, 3, "", kb_id="KB_ID") >= Span(doc, 0, 3)) + + # Different end + assert Span(doc, 0, 3, "LABEL", kb_id="KB_ID") < Span(doc, 0, 4, "LABEL", kb_id="KB_ID") + + assert Span(doc, 0, 3, "LABEL", kb_id="KB_ID") < Span(doc, 0, 4) + assert Span(doc, 0, 3, "LABEL", kb_id="KB_ID") <= Span(doc, 0, 4) + assert Span(doc, 0, 4) > Span(doc, 0, 3, "LABEL", kb_id="KB_ID") + assert Span(doc, 0, 4) >= Span(doc, 0, 3, "LABEL", kb_id="KB_ID") + + # Different start + assert Span(doc, 0, 3, "LABEL", kb_id="KB_ID") != Span(doc, 1, 3, "LABEL", kb_id="KB_ID") + + assert Span(doc, 0, 3, "LABEL", kb_id="KB_ID") < Span(doc, 1, 3) + assert Span(doc, 0, 3, "LABEL", kb_id="KB_ID") <= Span(doc, 1, 3) + assert Span(doc, 1, 3) > Span(doc, 0, 3, "LABEL", kb_id="KB_ID") + assert Span(doc, 1, 3) >= Span(doc, 0, 3, "LABEL", kb_id="KB_ID") + + # Different start & different end + assert Span(doc, 0, 4, "LABEL", kb_id="KB_ID") != Span(doc, 1, 3, "LABEL", kb_id="KB_ID") + + assert Span(doc, 0, 4, "LABEL", kb_id="KB_ID") < Span(doc, 1, 3) + assert Span(doc, 0, 4, "LABEL", kb_id="KB_ID") <= Span(doc, 1, 3) + assert Span(doc, 1, 3) > Span(doc, 0, 4, "LABEL", kb_id="KB_ID") + assert Span(doc, 1, 3) >= Span(doc, 0, 4, "LABEL", kb_id="KB_ID") +# fmt: on + + @pytest.mark.parametrize( "start,end,expected_sentences,expected_sentences_with_hook", [ @@ -606,3 +655,16 @@ def test_span_sents(doc, start, end, expected_sentences, expected_sentences_with def test_span_sents_not_parsed(doc_not_parsed): with pytest.raises(ValueError): list(Span(doc_not_parsed, 0, 3).sents) + + +def test_span_group_copy(doc): + doc.spans["test"] = [doc[0:1], doc[2:4]] + assert len(doc.spans["test"]) == 2 + doc_copy = doc.copy() + # check that the spans were indeed copied + assert len(doc_copy.spans["test"]) == 2 + # add a new span to the original doc + doc.spans["test"].append(doc[3:4]) + assert len(doc.spans["test"]) == 3 + # check that the copy spans were not modified and this is an isolated doc + assert len(doc_copy.spans["test"]) == 2 diff --git a/spacy/tests/doc/test_to_json.py b/spacy/tests/doc/test_to_json.py index 9ebee6c88..202281654 100644 --- a/spacy/tests/doc/test_to_json.py +++ b/spacy/tests/doc/test_to_json.py @@ -1,5 +1,5 @@ import pytest -from spacy.tokens import Doc +from spacy.tokens import Doc, Span @pytest.fixture() @@ -60,3 +60,13 @@ def test_doc_to_json_underscore_error_serialize(doc): Doc.set_extension("json_test4", method=lambda doc: doc.text) with pytest.raises(ValueError): doc.to_json(underscore=["json_test4"]) + + +def 
test_doc_to_json_span(doc): + """Test that Doc.to_json() includes spans""" + doc.spans["test"] = [Span(doc, 0, 2, "test"), Span(doc, 0, 1, "test")] + json_doc = doc.to_json() + assert "spans" in json_doc + assert len(json_doc["spans"]) == 1 + assert len(json_doc["spans"]["test"]) == 2 + assert json_doc["spans"]["test"][0]["start"] == 0 diff --git a/spacy/tests/lang/dsb/__init__.py b/spacy/tests/lang/dsb/__init__.py new file mode 100644 index 000000000..e69de29bb diff --git a/spacy/tests/lang/dsb/test_text.py b/spacy/tests/lang/dsb/test_text.py new file mode 100644 index 000000000..40f2c15e0 --- /dev/null +++ b/spacy/tests/lang/dsb/test_text.py @@ -0,0 +1,25 @@ +import pytest + + +@pytest.mark.parametrize( + "text,match", + [ + ("10", True), + ("1", True), + ("10,000", True), + ("10,00", True), + ("jadno", True), + ("dwanassćo", True), + ("milion", True), + ("sto", True), + ("ceła", False), + ("kopica", False), + ("narěcow", False), + (",", False), + ("1/2", True), + ], +) +def test_lex_attrs_like_number(dsb_tokenizer, text, match): + tokens = dsb_tokenizer(text) + assert len(tokens) == 1 + assert tokens[0].like_num == match diff --git a/spacy/tests/lang/dsb/test_tokenizer.py b/spacy/tests/lang/dsb/test_tokenizer.py new file mode 100644 index 000000000..135974fb8 --- /dev/null +++ b/spacy/tests/lang/dsb/test_tokenizer.py @@ -0,0 +1,29 @@ +import pytest + +DSB_BASIC_TOKENIZATION_TESTS = [ + ( + "Ale eksistěrujo mimo togo ceła kopica narěcow, ako na pśikład slěpjańska.", + [ + "Ale", + "eksistěrujo", + "mimo", + "togo", + "ceła", + "kopica", + "narěcow", + ",", + "ako", + "na", + "pśikład", + "slěpjańska", + ".", + ], + ), +] + + +@pytest.mark.parametrize("text,expected_tokens", DSB_BASIC_TOKENIZATION_TESTS) +def test_dsb_tokenizer_basic(dsb_tokenizer, text, expected_tokens): + tokens = dsb_tokenizer(text) + token_list = [token.text for token in tokens if not token.is_space] + assert expected_tokens == token_list diff --git a/spacy/tests/lang/fi/test_noun_chunks.py b/spacy/tests/lang/fi/test_noun_chunks.py new file mode 100644 index 000000000..cab84b311 --- /dev/null +++ b/spacy/tests/lang/fi/test_noun_chunks.py @@ -0,0 +1,189 @@ +import pytest +from spacy.tokens import Doc + + +FI_NP_TEST_EXAMPLES = [ + ( + "Kaksi tyttöä potkii punaista palloa", + ["NUM", "NOUN", "VERB", "ADJ", "NOUN"], + ["nummod", "nsubj", "ROOT", "amod", "obj"], + [1, 1, 0, 1, -2], + ["Kaksi tyttöä", "punaista palloa"], + ), + ( + "Erittäin vaarallinen leijona karkasi kiertävän sirkuksen eläintenkesyttäjältä", + ["ADV", "ADJ", "NOUN", "VERB", "ADJ", "NOUN", "NOUN"], + ["advmod", "amod", "nsubj", "ROOT", "amod", "nmod:poss", "obl"], + [1, 1, 1, 0, 1, 1, -3], + ["Erittäin vaarallinen leijona", "kiertävän sirkuksen eläintenkesyttäjältä"], + ), + ( + "Leijona raidallisine tassuineen piileksii Porin kaupungin lähellä", + ["NOUN", "ADJ", "NOUN", "VERB", "PROPN", "NOUN", "ADP"], + ["nsubj", "amod", "nmod", "ROOT", "nmod:poss", "obl", "case"], + [3, 1, -2, 0, 1, -2, -1], + ["Leijona raidallisine tassuineen", "Porin kaupungin"], + ), + ( + "Lounaalla nautittiin salaattia, maukasta kanaa ja raikasta vettä", + ["NOUN", "VERB", "NOUN", "PUNCT", "ADJ", "NOUN", "CCONJ", "ADJ", "NOUN"], + ["obl", "ROOT", "obj", "punct", "amod", "conj", "cc", "amod", "conj"], + [1, 0, -1, 2, 1, -3, 2, 1, -6], + ["Lounaalla", "salaattia", "maukasta kanaa", "raikasta vettä"], + ), + ( + "Minua houkuttaa maalle muuttaminen talven jälkeen", + ["PRON", "VERB", "NOUN", "NOUN", "NOUN", "ADP"], + ["obj", "ROOT", "nmod", "nsubj", "obl", "case"], + [1, 0, 1, -2, 
-3, -1], + ["maalle muuttaminen", "talven"], + ), + ( + "Päivän kohokohta oli vierailu museossa kummilasten kanssa", + ["NOUN", "NOUN", "AUX", "NOUN", "NOUN", "NOUN", "ADP"], + ["nmod:poss", "nsubj:cop", "cop", "ROOT", "nmod", "obl", "case"], + [1, 2, 1, 0, -1, -2, -1], + ["Päivän kohokohta", "vierailu museossa", "kummilasten"], + ), + ( + "Yrittäjät maksoivat tuomioistuimen määräämät korvaukset", + ["NOUN", "VERB", "NOUN", "VERB", "NOUN"], + ["nsubj", "ROOT", "nsubj", "acl", "obj"], + [1, 0, 1, 1, -3], + ["Yrittäjät", "tuomioistuimen", "korvaukset"], + ), + ( + "Julkisoikeudelliset tai niihin rinnastettavat saatavat ovat suoraan ulosottokelpoisia", + ["ADJ", "CCONJ", "PRON", "VERB", "NOUN", "AUX", "ADV", "NOUN"], + ["amod", "cc", "obl", "acl", "nsubj:cop", "cop", "advmod", "ROOT"], + [4, 3, 1, 1, 3, 2, 1, 0], + ["Julkisoikeudelliset tai niihin rinnastettavat saatavat", "ulosottokelpoisia"], + ), + ( + "Se oli ala-arvoista käytöstä kaikilta oppilailta, myös valvojaoppilailta", + ["PRON", "AUX", "ADJ", "NOUN", "PRON", "NOUN", "PUNCT", "ADV", "NOUN"], + ["nsubj:cop", "cop", "amod", "ROOT", "det", "nmod", "punct", "advmod", "appos"], + [3, 2, 1, 0, 1, -2, 2, 1, -3], + ["ala-arvoista käytöstä kaikilta oppilailta", "valvojaoppilailta"], + ), + ( + "Isä souti veneellä, jonka hän oli vuokrannut", + ["NOUN", "VERB", "NOUN", "PUNCT", "PRON", "PRON", "AUX", "VERB"], + ["nsubj", "ROOT", "obl", "punct", "obj", "nsubj", "aux", "acl:relcl"], + [1, 0, -1, 4, 3, 2, 1, -5], + ["Isä", "veneellä"], + ), + ( + "Kirja, jonka poimin hyllystä, kertoo norsuista", + ["NOUN", "PUNCT", "PRON", "VERB", "NOUN", "PUNCT", "VERB", "NOUN"], + ["nsubj", "punct", "obj", "acl:relcl", "obl", "punct", "ROOT", "obl"], + [6, 2, 1, -3, -1, 1, 0, -1], + ["Kirja", "hyllystä", "norsuista"], + ), + ( + "Huomenna on päivä, jota olemme odottaneet", + ["NOUN", "AUX", "NOUN", "PUNCT", "PRON", "AUX", "VERB"], + ["ROOT", "cop", "nsubj:cop", "punct", "obj", "aux", "acl:relcl"], + [0, -1, -2, 3, 2, 1, -4], + ["Huomenna", "päivä"], + ), + ( + "Liikkuvuuden lisääminen on yksi korkeakoulutuksen keskeisistä kehittämiskohteista", + ["NOUN", "NOUN", "AUX", "PRON", "NOUN", "ADJ", "NOUN"], + ["nmod:gobj", "nsubj:cop", "cop", "ROOT", "nmod:poss", "amod", "nmod"], + [1, 2, 1, 0, 2, 1, -3], + [ + "Liikkuvuuden lisääminen", + "korkeakoulutuksen keskeisistä kehittämiskohteista", + ], + ), + ( + "Kaupalliset palvelut jätetään yksityisten palveluntarjoajien tarjottavaksi", + ["ADJ", "NOUN", "VERB", "ADJ", "NOUN", "NOUN"], + ["amod", "obj", "ROOT", "amod", "nmod:gsubj", "obl"], + [1, 1, 0, 1, 1, -3], + ["Kaupalliset palvelut", "yksityisten palveluntarjoajien tarjottavaksi"], + ), + ( + "New York tunnetaan kaupunkina, joka ei koskaan nuku", + ["PROPN", "PROPN", "VERB", "NOUN", "PUNCT", "PRON", "AUX", "ADV", "VERB"], + [ + "obj", + "flat:name", + "ROOT", + "obl", + "punct", + "nsubj", + "aux", + "advmod", + "acl:relcl", + ], + [2, -1, 0, -1, 4, 3, 2, 1, -5], + ["New York", "kaupunkina"], + ), + ( + "Loput vihjeet saat herra Möttöseltä", + ["NOUN", "NOUN", "VERB", "NOUN", "PROPN"], + ["compound:nn", "obj", "ROOT", "compound:nn", "obj"], + [1, 1, 0, 1, -2], + ["Loput vihjeet", "herra Möttöseltä"], + ), + ( + "mahdollisuus tukea muita päivystysyksiköitä", + ["NOUN", "VERB", "PRON", "NOUN"], + ["ROOT", "acl", "det", "obj"], + [0, -1, 1, -2], + ["mahdollisuus", "päivystysyksiköitä"], + ), + ( + "sairaanhoitopiirit harjoittavat leikkaustoimintaa alueellaan useammassa sairaalassa", + ["NOUN", "VERB", "NOUN", "NOUN", "ADJ", "NOUN"], + ["nsubj", "ROOT", "obj", "obl", 
"amod", "obl"], + [1, 0, -1, -1, 1, -3], + [ + "sairaanhoitopiirit", + "leikkaustoimintaa", + "alueellaan", + "useammassa sairaalassa", + ], + ), + ( + "Lain mukaan varhaiskasvatus on suunnitelmallista toimintaa", + ["NOUN", "ADP", "NOUN", "AUX", "ADJ", "NOUN"], + ["obl", "case", "nsubj:cop", "cop", "amod", "ROOT"], + [5, -1, 3, 2, 1, 0], + ["Lain", "varhaiskasvatus", "suunnitelmallista toimintaa"], + ), +] + + +def test_noun_chunks_is_parsed(fi_tokenizer): + """Test that noun_chunks raises Value Error for 'fi' language if Doc is not parsed. + To check this test, we're constructing a Doc + with a new Vocab here and forcing is_parsed to 'False' + to make sure the noun chunks don't run. + """ + doc = fi_tokenizer("Tämä on testi") + with pytest.raises(ValueError): + list(doc.noun_chunks) + + +@pytest.mark.parametrize( + "text,pos,deps,heads,expected_noun_chunks", FI_NP_TEST_EXAMPLES +) +def test_fi_noun_chunks(fi_tokenizer, text, pos, deps, heads, expected_noun_chunks): + tokens = fi_tokenizer(text) + + assert len(heads) == len(pos) + doc = Doc( + tokens.vocab, + words=[t.text for t in tokens], + heads=[head + i for i, head in enumerate(heads)], + deps=deps, + pos=pos, + ) + + noun_chunks = list(doc.noun_chunks) + assert len(noun_chunks) == len(expected_noun_chunks) + for i, np in enumerate(noun_chunks): + assert np.text == expected_noun_chunks[i] diff --git a/spacy/tests/lang/fr/test_noun_chunks.py b/spacy/tests/lang/fr/test_noun_chunks.py index 48ac88ead..25b95f566 100644 --- a/spacy/tests/lang/fr/test_noun_chunks.py +++ b/spacy/tests/lang/fr/test_noun_chunks.py @@ -1,8 +1,230 @@ +from spacy.tokens import Doc import pytest +# fmt: off +@pytest.mark.parametrize( + "words,heads,deps,pos,chunk_offsets", + [ + # determiner + noun + # un nom -> un nom + ( + ["un", "nom"], + [1, 1], + ["det", "ROOT"], + ["DET", "NOUN"], + [(0, 2)], + ), + # determiner + noun starting with vowel + # l'heure -> l'heure + ( + ["l'", "heure"], + [1, 1], + ["det", "ROOT"], + ["DET", "NOUN"], + [(0, 2)], + ), + # determiner + plural noun + # les romans -> les romans + ( + ["les", "romans"], + [1, 1], + ["det", "ROOT"], + ["DET", "NOUN"], + [(0, 2)], + ), + # det + adj + noun + # Le vieux Londres -> Le vieux Londres + ( + ['Les', 'vieux', 'Londres'], + [2, 2, 2], + ["det", "amod", "ROOT"], + ["DET", "ADJ", "NOUN"], + [(0,3)] + ), + # det + noun + adj + # le nom propre -> le nom propre a proper noun + ( + ["le", "nom", "propre"], + [1, 1, 1], + ["det", "ROOT", "amod"], + ["DET", "NOUN", "ADJ"], + [(0, 3)], + ), + # det + noun + adj plural + # Les chiens bruns -> les chiens bruns + ( + ["Les", "chiens", "bruns"], + [1, 1, 1], + ["det", "ROOT", "amod"], + ["DET", "NOUN", "ADJ"], + [(0, 3)], + ), + # multiple adjectives: one adj before the noun, one adj after the noun + # un nouveau film intéressant -> un nouveau film intéressant + ( + ["un", "nouveau", "film", "intéressant"], + [2, 2, 2, 2], + ["det", "amod", "ROOT", "amod"], + ["DET", "ADJ", "NOUN", "ADJ"], + [(0,4)] + ), + # multiple adjectives, both adjs after the noun + # une personne intelligente et drôle -> une personne intelligente et drôle + ( + ["une", "personne", "intelligente", "et", "drôle"], + [1, 1, 1, 4, 2], + ["det", "ROOT", "amod", "cc", "conj"], + ["DET", "NOUN", "ADJ", "CCONJ", "ADJ"], + [(0,5)] + ), + # relative pronoun + # un bus qui va au ville -> un bus, qui, ville + ( + ['un', 'bus', 'qui', 'va', 'au', 'ville'], + [1, 1, 3, 1, 5, 3], + ['det', 'ROOT', 'nsubj', 'acl:relcl', 'case', 'obl:arg'], + ['DET', 'NOUN', 'PRON', 'VERB', 'ADP', 'NOUN'], + 
[(0,2), (2,3), (5,6)] + ), + # relative subclause + # Voilà la maison que nous voulons acheter -> la maison, nous That's the house that we want to buy. + ( + ['Voilà', 'la', 'maison', 'que', 'nous', 'voulons', 'acheter'], + [0, 2, 0, 5, 5, 2, 5], + ['ROOT', 'det', 'obj', 'mark', 'nsubj', 'acl:relcl', 'xcomp'], + ['VERB', 'DET', 'NOUN', 'SCONJ', 'PRON', 'VERB', 'VERB'], + [(1,3), (4,5)] + ), + # Person name and title by flat + # Louis XIV -> Louis XIV + ( + ["Louis", "XIV"], + [0, 0], + ["ROOT", "flat:name"], + ["PROPN", "PROPN"], + [(0,2)] + ), + # Organization name by flat + # Nations Unies -> Nations Unies + ( + ["Nations", "Unies"], + [0, 0], + ["ROOT", "flat:name"], + ["PROPN", "PROPN"], + [(0,2)] + ), + # Noun compound, person name created by two flats + # Louise de Bratagne -> Louise de Bratagne + ( + ["Louise", "de", "Bratagne"], + [0, 0, 0], + ["ROOT", "flat:name", "flat:name"], + ["PROPN", "PROPN", "PROPN"], + [(0,3)] + ), + # Noun compound, person name created by two flats + # Louis François Joseph -> Louis François Joseph + ( + ["Louis", "François", "Joseph"], + [0, 0, 0], + ["ROOT", "flat:name", "flat:name"], + ["PROPN", "PROPN", "PROPN"], + [(0,3)] + ), + # one determiner + one noun + one adjective qualified by an adverb + # quelques agriculteurs très riches -> quelques agriculteurs très riches + ( + ["quelques", "agriculteurs", "très", "riches"], + [1, 1, 3, 1], + ['det', 'ROOT', 'advmod', 'amod'], + ['DET', 'NOUN', 'ADV', 'ADJ'], + [(0,4)] + ), + # Two NPs conjuncted + # Il a un chien et un chat -> Il, un chien, un chat + ( + ['Il', 'a', 'un', 'chien', 'et', 'un', 'chat'], + [1, 1, 3, 1, 6, 6, 3], + ['nsubj', 'ROOT', 'det', 'obj', 'cc', 'det', 'conj'], + ['PRON', 'VERB', 'DET', 'NOUN', 'CCONJ', 'DET', 'NOUN'], + [(0,1), (2,4), (5,7)] + + ), + # Two NPs together + # l'écrivain brésilien Aníbal Machado -> l'écrivain brésilien, Aníbal Machado + ( + ["l'", 'écrivain', 'brésilien', 'Aníbal', 'Machado'], + [1, 1, 1, 1, 3], + ['det', 'ROOT', 'amod', 'appos', 'flat:name'], + ['DET', 'NOUN', 'ADJ', 'PROPN', 'PROPN'], + [(0, 3), (3, 5)] + ), + # nmod relation between NPs + # la destruction de la ville -> la destruction, la ville + ( + ['la', 'destruction', 'de', 'la', 'ville'], + [1, 1, 4, 4, 1], + ['det', 'ROOT', 'case', 'det', 'nmod'], + ['DET', 'NOUN', 'ADP', 'DET', 'NOUN'], + [(0,2), (3,5)] + ), + # nmod relation between NPs + # Archiduchesse d’Autriche -> Archiduchesse, Autriche + ( + ['Archiduchesse', 'd’', 'Autriche'], + [0, 2, 0], + ['ROOT', 'case', 'nmod'], + ['NOUN', 'ADP', 'PROPN'], + [(0,1), (2,3)] + ), + # Compounding by nmod, several NPs chained together + # la première usine de drogue du gouvernement -> la première usine, drogue, gouvernement + ( + ["la", "première", "usine", "de", "drogue", "du", "gouvernement"], + [2, 2, 2, 4, 2, 6, 2], + ['det', 'amod', 'ROOT', 'case', 'nmod', 'case', 'nmod'], + ['DET', 'ADJ', 'NOUN', 'ADP', 'NOUN', 'ADP', 'NOUN'], + [(0, 3), (4, 5), (6, 7)] + ), + # several NPs + # Traduction du rapport de Susana -> Traduction, rapport, Susana + ( + ['Traduction', 'du', 'raport', 'de', 'Susana'], + [0, 2, 0, 4, 2], + ['ROOT', 'case', 'nmod', 'case', 'nmod'], + ['NOUN', 'ADP', 'NOUN', 'ADP', 'PROPN'], + [(0,1), (2,3), (4,5)] + + ), + # Several NPs + # Le gros chat de Susana et son amie -> Le gros chat, Susana, son amie + ( + ['Le', 'gros', 'chat', 'de', 'Susana', 'et', 'son', 'amie'], + [2, 2, 2, 4, 2, 7, 7, 2], + ['det', 'amod', 'ROOT', 'case', 'nmod', 'cc', 'det', 'conj'], + ['DET', 'ADJ', 'NOUN', 'ADP', 'PROPN', 'CCONJ', 'DET', 'NOUN'], + [(0,3), 
(4,5), (6,8)] + ), + # Passive subject + # Les nouvelles dépenses sont alimentées par le grand compte bancaire de Clinton -> Les nouvelles dépenses, le grand compte bancaire, Clinton + ( + ['Les', 'nouvelles', 'dépenses', 'sont', 'alimentées', 'par', 'le', 'grand', 'compte', 'bancaire', 'de', 'Clinton'], + [2, 2, 4, 4, 4, 8, 8, 8, 4, 8, 11, 8], + ['det', 'amod', 'nsubj:pass', 'aux:pass', 'ROOT', 'case', 'det', 'amod', 'obl:agent', 'amod', 'case', 'nmod'], + ['DET', 'ADJ', 'NOUN', 'AUX', 'VERB', 'ADP', 'DET', 'ADJ', 'NOUN', 'ADJ', 'ADP', 'PROPN'], + [(0, 3), (6, 10), (11, 12)] + ) + ], +) +# fmt: on +def test_fr_noun_chunks(fr_vocab, words, heads, deps, pos, chunk_offsets): + doc = Doc(fr_vocab, words=words, heads=heads, deps=deps, pos=pos) + assert [(c.start, c.end) for c in doc.noun_chunks] == chunk_offsets + + def test_noun_chunks_is_parsed_fr(fr_tokenizer): """Test that noun_chunks raises Value Error for 'fr' language if Doc is not parsed.""" - doc = fr_tokenizer("trouver des travaux antérieurs") + doc = fr_tokenizer("Je suis allé à l'école") with pytest.raises(ValueError): list(doc.noun_chunks) diff --git a/spacy/tests/lang/hsb/__init__.py b/spacy/tests/lang/hsb/__init__.py new file mode 100644 index 000000000..e69de29bb diff --git a/spacy/tests/lang/hsb/test_text.py b/spacy/tests/lang/hsb/test_text.py new file mode 100644 index 000000000..aaa4984eb --- /dev/null +++ b/spacy/tests/lang/hsb/test_text.py @@ -0,0 +1,25 @@ +import pytest + + +@pytest.mark.parametrize( + "text,match", + [ + ("10", True), + ("1", True), + ("10,000", True), + ("10,00", True), + ("jedne", True), + ("dwanaće", True), + ("milion", True), + ("sto", True), + ("załožene", False), + ("wona", False), + ("powšitkownej", False), + (",", False), + ("1/2", True), + ], +) +def test_lex_attrs_like_number(hsb_tokenizer, text, match): + tokens = hsb_tokenizer(text) + assert len(tokens) == 1 + assert tokens[0].like_num == match diff --git a/spacy/tests/lang/hsb/test_tokenizer.py b/spacy/tests/lang/hsb/test_tokenizer.py new file mode 100644 index 000000000..a3ec89ba0 --- /dev/null +++ b/spacy/tests/lang/hsb/test_tokenizer.py @@ -0,0 +1,32 @@ +import pytest + +HSB_BASIC_TOKENIZATION_TESTS = [ + ( + "Hornjoserbšćina wobsteji resp. wobsteješe z wjacorych dialektow, kotrež so zdźěla chětro wot so rozeznawachu.", + [ + "Hornjoserbšćina", + "wobsteji", + "resp.", + "wobsteješe", + "z", + "wjacorych", + "dialektow", + ",", + "kotrež", + "so", + "zdźěla", + "chětro", + "wot", + "so", + "rozeznawachu", + ".", + ], + ), +] + + +@pytest.mark.parametrize("text,expected_tokens", HSB_BASIC_TOKENIZATION_TESTS) +def test_hsb_tokenizer_basic(hsb_tokenizer, text, expected_tokens): + tokens = hsb_tokenizer(text) + token_list = [token.text for token in tokens if not token.is_space] + assert expected_tokens == token_list diff --git a/spacy/tests/lang/it/test_noun_chunks.py b/spacy/tests/lang/it/test_noun_chunks.py new file mode 100644 index 000000000..0a8c10e79 --- /dev/null +++ b/spacy/tests/lang/it/test_noun_chunks.py @@ -0,0 +1,221 @@ +from spacy.tokens import Doc +import pytest + + +# fmt: off +@pytest.mark.parametrize( + "words,heads,deps,pos,chunk_offsets", + [ + # determiner + noun + # un pollo -> un pollo + ( + ["un", "pollo"], + [1, 1], + ["det", "ROOT"], + ["DET", "NOUN"], + [(0,2)], + ), + # two determiners + noun + # il mio cane -> il mio cane + ( + ["il", "mio", "cane"], + [2, 2, 2], + ["det", "det:poss", "ROOT"], + ["DET", "DET", "NOUN"], + [(0,3)], + ), + # two determiners, one is after noun. 
rare usage but still testing + # il cane mio-> il cane mio + ( + ["il", "cane", "mio"], + [1, 1, 1], + ["det", "ROOT", "det:poss"], + ["DET", "NOUN", "DET"], + [(0,3)], + ), + # relative pronoun + # È molto bello il vestito che hai acquistat -> il vestito, che the dress that you bought is very pretty. + ( + ["È", "molto", "bello", "il", "vestito", "che", "hai", "acquistato"], + [2, 2, 2, 4, 2, 7, 7, 4], + ['cop', 'advmod', 'ROOT', 'det', 'nsubj', 'obj', 'aux', 'acl:relcl'], + ['AUX', 'ADV', 'ADJ', 'DET', 'NOUN', 'PRON', 'AUX', 'VERB'], + [(3,5), (5,6)] + ), + # relative subclause + # il computer che hai comprato -> il computer, che the computer that you bought + ( + ['il', 'computer', 'che', 'hai', 'comprato'], + [1, 1, 4, 4, 1], + ['det', 'ROOT', 'nsubj', 'aux', 'acl:relcl'], + ['DET', 'NOUN', 'PRON', 'AUX', 'VERB'], + [(0,2), (2,3)] + ), + # det + noun + adj + # Una macchina grande -> Una macchina grande + ( + ["Una", "macchina", "grande"], + [1, 1, 1], + ["det", "ROOT", "amod"], + ["DET", "NOUN", "ADJ"], + [(0,3)], + ), + # noun + adj plural + # mucche bianche + ( + ["mucche", "bianche"], + [0, 0], + ["ROOT", "amod"], + ["NOUN", "ADJ"], + [(0,2)], + ), + # det + adj + noun + # Una grande macchina -> Una grande macchina + ( + ['Una', 'grande', 'macchina'], + [2, 2, 2], + ["det", "amod", "ROOT"], + ["DET", "ADJ", "NOUN"], + [(0,3)] + ), + # det + adj + noun, det with apostrophe + # un'importante associazione -> un'importante associazione + ( + ["Un'", 'importante', 'associazione'], + [2, 2, 2], + ["det", "amod", "ROOT"], + ["DET", "ADJ", "NOUN"], + [(0,3)] + ), + # multiple adjectives + # Un cane piccolo e marrone -> Un cane piccolo e marrone + ( + ["Un", "cane", "piccolo", "e", "marrone"], + [1, 1, 1, 4, 2], + ["det", "ROOT", "amod", "cc", "conj"], + ["DET", "NOUN", "ADJ", "CCONJ", "ADJ"], + [(0,5)] + ), + # determiner, adjective, compound created by flat + # le Nazioni Unite -> le Nazioni Unite + ( + ["le", "Nazioni", "Unite"], + [1, 1, 1], + ["det", "ROOT", "flat:name"], + ["DET", "PROPN", "PROPN"], + [(0,3)] + ), + # one determiner + one noun + one adjective qualified by an adverb + # alcuni contadini molto ricchi -> alcuni contadini molto ricchi some very rich farmers + ( + ['alcuni', 'contadini', 'molto', 'ricchi'], + [1, 1, 3, 1], + ['det', 'ROOT', 'advmod', 'amod'], + ['DET', 'NOUN', 'ADV', 'ADJ'], + [(0,4)] + ), + # Two NPs conjuncted + # Ho un cane e un gatto -> un cane, un gatto + ( + ['Ho', 'un', 'cane', 'e', 'un', 'gatto'], + [0, 2, 0, 5, 5, 0], + ['ROOT', 'det', 'obj', 'cc', 'det', 'conj'], + ['VERB', 'DET', 'NOUN', 'CCONJ', 'DET', 'NOUN'], + [(1,3), (4,6)] + + ), + # Two NPs together + # lo scrittore brasiliano Aníbal Machado -> lo scrittore brasiliano, Aníbal Machado + ( + ['lo', 'scrittore', 'brasiliano', 'Aníbal', 'Machado'], + [1, 1, 1, 1, 3], + ['det', 'ROOT', 'amod', 'nmod', 'flat:name'], + ['DET', 'NOUN', 'ADJ', 'PROPN', 'PROPN'], + [(0, 3), (3, 5)] + ), + # Noun compound, person name and titles + # Dom Pedro II -> Dom Pedro II + ( + ["Dom", "Pedro", "II"], + [0, 0, 0], + ["ROOT", "flat:name", "flat:name"], + ["PROPN", "PROPN", "PROPN"], + [(0,3)] + ), + # Noun compound created by flat + # gli Stati Uniti + ( + ["gli", "Stati", "Uniti"], + [1, 1, 1], + ["det", "ROOT", "flat:name"], + ["DET", "PROPN", "PROPN"], + [(0,3)] + ), + # nmod relation between NPs + # la distruzione della città -> la distruzione, città + ( + ['la', 'distruzione', 'della', 'città'], + [1, 1, 3, 1], + ['det', 'ROOT', 'case', 'nmod'], + ['DET', 'NOUN', 'ADP', 'NOUN'], + [(0,2), (3,4)] + ), + # 
Compounding by nmod, several NPs chained together + # la prima fabbrica di droga del governo -> la prima fabbrica, droga, governo + ( + ["la", "prima", "fabbrica", "di", "droga", "del", "governo"], + [2, 2, 2, 4, 2, 6, 2], + ['det', 'amod', 'ROOT', 'case', 'nmod', 'case', 'nmod'], + ['DET', 'ADJ', 'NOUN', 'ADP', 'NOUN', 'ADP', 'NOUN'], + [(0, 3), (4, 5), (6, 7)] + ), + # several NPs + # Traduzione del rapporto di Susana -> Traduzione, rapporto, Susana + ( + ['Traduzione', 'del', 'rapporto', 'di', 'Susana'], + [0, 2, 0, 4, 2], + ['ROOT', 'case', 'nmod', 'case', 'nmod'], + ['NOUN', 'ADP', 'NOUN', 'ADP', 'PROPN'], + [(0,1), (2,3), (4,5)] + + ), + # Several NPs + # Il gatto grasso di Susana e la sua amica -> Il gatto grasso, Susana, sua amica + ( + ['Il', 'gatto', 'grasso', 'di', 'Susana', 'e', 'la', 'sua', 'amica'], + [1, 1, 1, 4, 1, 8, 8, 8, 1], + ['det', 'ROOT', 'amod', 'case', 'nmod', 'cc', 'det', 'det:poss', 'conj'], + ['DET', 'NOUN', 'ADJ', 'ADP', 'PROPN', 'CCONJ', 'DET', 'DET', 'NOUN'], + [(0,3), (4,5), (6,9)] + ), + # Passive subject + # La nuova spesa è alimentata dal grande conto in banca di Clinton -> Le nuova spesa, grande conto, banca, Clinton + ( + ['La', 'nuova', 'spesa', 'è', 'alimentata', 'dal', 'grande', 'conto', 'in', 'banca', 'di', 'Clinton'], + [2, 2, 4, 4, 4, 7, 7, 4, 9, 7, 11, 9], + ['det', 'amod', 'nsubj:pass', 'aux:pass', 'ROOT', 'case', 'amod', 'obl:agent', 'case', 'nmod', 'case', 'nmod'], + ['DET', 'ADJ', 'NOUN', 'AUX', 'VERB', 'ADP', 'ADJ', 'NOUN', 'ADP', 'NOUN', 'ADP', 'PROPN'], + [(0, 3), (6, 8), (9, 10), (11,12)] + ), + # Misc + # Ma mentre questo prestito possa ora sembrare gestibile, un improvviso cambiamento delle circostanze potrebbe portare a problemi di debiti -> questo prestiti, un provisso cambiento, circostanze, problemi, debiti + ( + ['Ma', 'mentre', 'questo', 'prestito', 'possa', 'ora', 'sembrare', 'gestibile', ',', 'un', 'improvviso', 'cambiamento', 'delle', 'circostanze', 'potrebbe', 'portare', 'a', 'problemi', 'di', 'debitii'], + [15, 6, 3, 6, 6, 6, 15, 6, 6, 11, 11, 15, 13, 11, 15, 15, 17, 15, 19, 17], + ['cc', 'mark', 'det', 'nsubj', 'aux', 'advmod', 'advcl', 'xcomp', 'punct', 'det', 'amod', 'nsubj', 'case', 'nmod', 'aux', 'ROOT', 'case', 'obl', 'case', 'nmod'], + ['CCONJ', 'SCONJ', 'DET', 'NOUN', 'AUX', 'ADV', 'VERB', 'ADJ', 'PUNCT', 'DET', 'ADJ', 'NOUN', 'ADP', 'NOUN', 'AUX', 'VERB', 'ADP', 'NOUN', 'ADP', 'NOUN'], + [(2,4), (9,12), (13,14), (17,18), (19,20)] + ) + ], +) +# fmt: on +def test_it_noun_chunks(it_vocab, words, heads, deps, pos, chunk_offsets): + doc = Doc(it_vocab, words=words, heads=heads, deps=deps, pos=pos) + assert [(c.start, c.end) for c in doc.noun_chunks] == chunk_offsets + + +def test_noun_chunks_is_parsed_it(it_tokenizer): + """Test that noun_chunks raises Value Error for 'it' language if Doc is not parsed.""" + doc = it_tokenizer("Sei andato a Oxford") + with pytest.raises(ValueError): + list(doc.noun_chunks) diff --git a/spacy/tests/lang/it/test_stopwords.py b/spacy/tests/lang/it/test_stopwords.py new file mode 100644 index 000000000..954913164 --- /dev/null +++ b/spacy/tests/lang/it/test_stopwords.py @@ -0,0 +1,17 @@ +import pytest + + +@pytest.mark.parametrize( + "word", ["un", "lo", "dell", "dall", "si", "ti", "mi", "quest", "quel", "quello"] +) +def test_stopwords_basic(it_tokenizer, word): + tok = it_tokenizer(word)[0] + assert tok.is_stop + + +@pytest.mark.parametrize( + "word", ["quest'uomo", "l'ho", "un'amica", "dell'olio", "s'arrende", "m'ascolti"] +) +def test_stopwords_elided(it_tokenizer, word): + tok = 
it_tokenizer(word)[0] + assert tok.is_stop diff --git a/spacy/tests/lang/ko/test_tokenizer.py b/spacy/tests/lang/ko/test_tokenizer.py index eac309857..6e06e405e 100644 --- a/spacy/tests/lang/ko/test_tokenizer.py +++ b/spacy/tests/lang/ko/test_tokenizer.py @@ -47,3 +47,29 @@ def test_ko_tokenizer_pos(ko_tokenizer, text, expected_pos): def test_ko_empty_doc(ko_tokenizer): tokens = ko_tokenizer("") assert len(tokens) == 0 + + +@pytest.mark.issue(10535) +def test_ko_tokenizer_unknown_tag(ko_tokenizer): + tokens = ko_tokenizer("미닛 리피터") + assert tokens[1].pos_ == "X" + + +# fmt: off +SPACY_TOKENIZER_TESTS = [ + ("있다.", "있다 ."), + ("'예'는", "' 예 ' 는"), + ("부 (富) 는", "부 ( 富 ) 는"), + ("부(富)는", "부 ( 富 ) 는"), + ("1982~1983.", "1982 ~ 1983 ."), + ("사과·배·복숭아·수박은 모두 과일이다.", "사과 · 배 · 복숭아 · 수박은 모두 과일이다 ."), + ("그렇구나~", "그렇구나~"), + ("『9시 반의 당구』,", "『 9시 반의 당구 』 ,"), +] +# fmt: on + + +@pytest.mark.parametrize("text,expected_tokens", SPACY_TOKENIZER_TESTS) +def test_ko_spacy_tokenizer(ko_tokenizer_tokenizer, text, expected_tokens): + tokens = [token.text for token in ko_tokenizer_tokenizer(text)] + assert tokens == expected_tokens.split() diff --git a/spacy/tests/lang/tr/test_text.py b/spacy/tests/lang/tr/test_text.py index a12971e82..323b11bd1 100644 --- a/spacy/tests/lang/tr/test_text.py +++ b/spacy/tests/lang/tr/test_text.py @@ -41,7 +41,7 @@ def test_tr_lex_attrs_like_number_cardinal_ordinal(word): assert like_num(word) -@pytest.mark.parametrize("word", ["beş", "yedi", "yedinci", "birinci"]) +@pytest.mark.parametrize("word", ["beş", "yedi", "yedinci", "birinci", "milyonuncu"]) def test_tr_lex_attrs_capitals(word): assert like_num(word) assert like_num(word.upper()) diff --git a/spacy/tests/package/test_requirements.py b/spacy/tests/package/test_requirements.py index 75908df59..e20227455 100644 --- a/spacy/tests/package/test_requirements.py +++ b/spacy/tests/package/test_requirements.py @@ -12,6 +12,7 @@ def test_build_dependencies(): "flake8", "hypothesis", "pre-commit", + "black", "mypy", "types-dataclasses", "types-mock", diff --git a/spacy/tests/parser/test_nonproj.py b/spacy/tests/parser/test_nonproj.py index 3957e4d77..60d000c44 100644 --- a/spacy/tests/parser/test_nonproj.py +++ b/spacy/tests/parser/test_nonproj.py @@ -93,8 +93,8 @@ def test_parser_pseudoprojectivity(en_vocab): assert nonproj.is_decorated("X") is False nonproj._lift(0, tree) assert tree == [2, 2, 2] - assert nonproj._get_smallest_nonproj_arc(nonproj_tree) == 7 - assert nonproj._get_smallest_nonproj_arc(nonproj_tree2) == 10 + assert nonproj.get_smallest_nonproj_arc_slow(nonproj_tree) == 7 + assert nonproj.get_smallest_nonproj_arc_slow(nonproj_tree2) == 10 # fmt: off proj_heads, deco_labels = nonproj.projectivize(nonproj_tree, labels) assert proj_heads == [1, 2, 2, 4, 5, 2, 7, 5, 2] diff --git a/spacy/tests/pipeline/test_edit_tree_lemmatizer.py b/spacy/tests/pipeline/test_edit_tree_lemmatizer.py new file mode 100644 index 000000000..cf541e301 --- /dev/null +++ b/spacy/tests/pipeline/test_edit_tree_lemmatizer.py @@ -0,0 +1,280 @@ +import pickle +import pytest +from hypothesis import given +import hypothesis.strategies as st +from spacy import util +from spacy.lang.en import English +from spacy.language import Language +from spacy.pipeline._edit_tree_internals.edit_trees import EditTrees +from spacy.training import Example +from spacy.strings import StringStore +from spacy.util import make_tempdir + + +TRAIN_DATA = [ + ("She likes green eggs", {"lemmas": ["she", "like", "green", "egg"]}), + ("Eat blue ham", {"lemmas": ["eat", "blue", 
"ham"]}), +] + +PARTIAL_DATA = [ + # partial annotation + ("She likes green eggs", {"lemmas": ["", "like", "green", ""]}), + # misaligned partial annotation + ( + "He hates green eggs", + { + "words": ["He", "hat", "es", "green", "eggs"], + "lemmas": ["", "hat", "e", "green", ""], + }, + ), +] + + +def test_initialize_examples(): + nlp = Language() + lemmatizer = nlp.add_pipe("trainable_lemmatizer") + train_examples = [] + for t in TRAIN_DATA: + train_examples.append(Example.from_dict(nlp.make_doc(t[0]), t[1])) + # you shouldn't really call this more than once, but for testing it should be fine + nlp.initialize(get_examples=lambda: train_examples) + with pytest.raises(TypeError): + nlp.initialize(get_examples=lambda: None) + with pytest.raises(TypeError): + nlp.initialize(get_examples=lambda: train_examples[0]) + with pytest.raises(TypeError): + nlp.initialize(get_examples=lambda: []) + with pytest.raises(TypeError): + nlp.initialize(get_examples=train_examples) + + +def test_initialize_from_labels(): + nlp = Language() + lemmatizer = nlp.add_pipe("trainable_lemmatizer") + lemmatizer.min_tree_freq = 1 + train_examples = [] + for t in TRAIN_DATA: + train_examples.append(Example.from_dict(nlp.make_doc(t[0]), t[1])) + nlp.initialize(get_examples=lambda: train_examples) + + nlp2 = Language() + lemmatizer2 = nlp2.add_pipe("trainable_lemmatizer") + lemmatizer2.initialize( + get_examples=lambda: train_examples, + labels=lemmatizer.label_data, + ) + assert lemmatizer2.tree2label == {1: 0, 3: 1, 4: 2, 6: 3} + + +def test_no_data(): + # Test that the lemmatizer provides a nice error when there's no tagging data / labels + TEXTCAT_DATA = [ + ("I'm so happy.", {"cats": {"POSITIVE": 1.0, "NEGATIVE": 0.0}}), + ("I'm so angry", {"cats": {"POSITIVE": 0.0, "NEGATIVE": 1.0}}), + ] + nlp = English() + nlp.add_pipe("trainable_lemmatizer") + nlp.add_pipe("textcat") + + train_examples = [] + for t in TEXTCAT_DATA: + train_examples.append(Example.from_dict(nlp.make_doc(t[0]), t[1])) + + with pytest.raises(ValueError): + nlp.initialize(get_examples=lambda: train_examples) + + +def test_incomplete_data(): + # Test that the lemmatizer works with incomplete information + nlp = English() + lemmatizer = nlp.add_pipe("trainable_lemmatizer") + lemmatizer.min_tree_freq = 1 + train_examples = [] + for t in PARTIAL_DATA: + train_examples.append(Example.from_dict(nlp.make_doc(t[0]), t[1])) + optimizer = nlp.initialize(get_examples=lambda: train_examples) + for i in range(50): + losses = {} + nlp.update(train_examples, sgd=optimizer, losses=losses) + assert losses["trainable_lemmatizer"] < 0.00001 + + # test the trained model + test_text = "She likes blue eggs" + doc = nlp(test_text) + assert doc[1].lemma_ == "like" + assert doc[2].lemma_ == "blue" + + +def test_overfitting_IO(): + nlp = English() + lemmatizer = nlp.add_pipe("trainable_lemmatizer") + lemmatizer.min_tree_freq = 1 + train_examples = [] + for t in TRAIN_DATA: + train_examples.append(Example.from_dict(nlp.make_doc(t[0]), t[1])) + + optimizer = nlp.initialize(get_examples=lambda: train_examples) + + for i in range(50): + losses = {} + nlp.update(train_examples, sgd=optimizer, losses=losses) + assert losses["trainable_lemmatizer"] < 0.00001 + + test_text = "She likes blue eggs" + doc = nlp(test_text) + assert doc[0].lemma_ == "she" + assert doc[1].lemma_ == "like" + assert doc[2].lemma_ == "blue" + assert doc[3].lemma_ == "egg" + + # Check model after a {to,from}_disk roundtrip + with util.make_tempdir() as tmp_dir: + nlp.to_disk(tmp_dir) + nlp2 = 
util.load_model_from_path(tmp_dir) + doc2 = nlp2(test_text) + assert doc2[0].lemma_ == "she" + assert doc2[1].lemma_ == "like" + assert doc2[2].lemma_ == "blue" + assert doc2[3].lemma_ == "egg" + + # Check model after a {to,from}_bytes roundtrip + nlp_bytes = nlp.to_bytes() + nlp3 = English() + nlp3.add_pipe("trainable_lemmatizer") + nlp3.from_bytes(nlp_bytes) + doc3 = nlp3(test_text) + assert doc3[0].lemma_ == "she" + assert doc3[1].lemma_ == "like" + assert doc3[2].lemma_ == "blue" + assert doc3[3].lemma_ == "egg" + + # Check model after a pickle roundtrip. + nlp_bytes = pickle.dumps(nlp) + nlp4 = pickle.loads(nlp_bytes) + doc4 = nlp4(test_text) + assert doc4[0].lemma_ == "she" + assert doc4[1].lemma_ == "like" + assert doc4[2].lemma_ == "blue" + assert doc4[3].lemma_ == "egg" + + +def test_lemmatizer_requires_labels(): + nlp = English() + nlp.add_pipe("trainable_lemmatizer") + with pytest.raises(ValueError): + nlp.initialize() + + +def test_lemmatizer_label_data(): + nlp = English() + lemmatizer = nlp.add_pipe("trainable_lemmatizer") + lemmatizer.min_tree_freq = 1 + train_examples = [] + for t in TRAIN_DATA: + train_examples.append(Example.from_dict(nlp.make_doc(t[0]), t[1])) + + nlp.initialize(get_examples=lambda: train_examples) + + nlp2 = English() + lemmatizer2 = nlp2.add_pipe("trainable_lemmatizer") + lemmatizer2.initialize( + get_examples=lambda: train_examples, labels=lemmatizer.label_data + ) + + # Verify that the labels and trees are the same. + assert lemmatizer.labels == lemmatizer2.labels + assert lemmatizer.trees.to_bytes() == lemmatizer2.trees.to_bytes() + + +def test_dutch(): + strings = StringStore() + trees = EditTrees(strings) + tree = trees.add("deelt", "delen") + assert trees.tree_to_str(tree) == "(m 0 3 () (m 0 2 (s '' 'l') (s 'lt' 'n')))" + + tree = trees.add("gedeeld", "delen") + assert ( + trees.tree_to_str(tree) == "(m 2 3 (s 'ge' '') (m 0 2 (s '' 'l') (s 'ld' 'n')))" + ) + + +def test_from_to_bytes(): + strings = StringStore() + trees = EditTrees(strings) + trees.add("deelt", "delen") + trees.add("gedeeld", "delen") + + b = trees.to_bytes() + + trees2 = EditTrees(strings) + trees2.from_bytes(b) + + # Verify that the nodes did not change. + assert len(trees) == len(trees2) + for i in range(len(trees)): + assert trees.tree_to_str(i) == trees2.tree_to_str(i) + + # Reinserting the same trees should not add new nodes. + trees2.add("deelt", "delen") + trees2.add("gedeeld", "delen") + assert len(trees) == len(trees2) + + +def test_from_to_disk(): + strings = StringStore() + trees = EditTrees(strings) + trees.add("deelt", "delen") + trees.add("gedeeld", "delen") + + trees2 = EditTrees(strings) + with make_tempdir() as temp_dir: + trees_file = temp_dir / "edit_trees.bin" + trees.to_disk(trees_file) + trees2 = trees2.from_disk(trees_file) + + # Verify that the nodes did not change. + assert len(trees) == len(trees2) + for i in range(len(trees)): + assert trees.tree_to_str(i) == trees2.tree_to_str(i) + + # Reinserting the same trees should not add new nodes. + trees2.add("deelt", "delen") + trees2.add("gedeeld", "delen") + assert len(trees) == len(trees2) + + +@given(st.text(), st.text()) +def test_roundtrip(form, lemma): + strings = StringStore() + trees = EditTrees(strings) + tree = trees.add(form, lemma) + assert trees.apply(tree, form) == lemma + + +@given(st.text(alphabet="ab"), st.text(alphabet="ab")) +def test_roundtrip_small_alphabet(form, lemma): + # Test with small alphabets to have more overlap. 
+ strings = StringStore() + trees = EditTrees(strings) + tree = trees.add(form, lemma) + assert trees.apply(tree, form) == lemma + + +def test_unapplicable_trees(): + strings = StringStore() + trees = EditTrees(strings) + tree3 = trees.add("deelt", "delen") + + # Replacement fails. + assert trees.apply(tree3, "deeld") == None + + # Suffix + prefix are too large. + assert trees.apply(tree3, "de") == None + + +def test_empty_strings(): + strings = StringStore() + trees = EditTrees(strings) + no_change = trees.add("xyz", "xyz") + empty = trees.add("", "") + assert no_change == empty diff --git a/spacy/tests/pipeline/test_entity_linker.py b/spacy/tests/pipeline/test_entity_linker.py index 3740e430e..83d5bf0e2 100644 --- a/spacy/tests/pipeline/test_entity_linker.py +++ b/spacy/tests/pipeline/test_entity_linker.py @@ -9,6 +9,9 @@ from spacy.compat import pickle from spacy.kb import Candidate, KnowledgeBase, get_candidates from spacy.lang.en import English from spacy.ml import load_kb +from spacy.pipeline import EntityLinker +from spacy.pipeline.legacy import EntityLinker_v1 +from spacy.pipeline.tok2vec import DEFAULT_TOK2VEC_MODEL from spacy.scorer import Scorer from spacy.tests.util import make_tempdir from spacy.tokens import Span @@ -168,6 +171,45 @@ def test_issue7065_b(): assert doc +def test_no_entities(): + # Test that having no entities doesn't crash the model + TRAIN_DATA = [ + ( + "The sky is blue.", + { + "sent_starts": [1, 0, 0, 0, 0], + }, + ) + ] + nlp = English() + vector_length = 3 + train_examples = [] + for text, annotation in TRAIN_DATA: + doc = nlp(text) + train_examples.append(Example.from_dict(doc, annotation)) + + def create_kb(vocab): + # create artificial KB + mykb = KnowledgeBase(vocab, entity_vector_length=vector_length) + mykb.add_entity(entity="Q2146908", freq=12, entity_vector=[6, -4, 3]) + mykb.add_alias("Russ Cochran", ["Q2146908"], [0.9]) + return mykb + + # Create and train the Entity Linker + entity_linker = nlp.add_pipe("entity_linker", last=True) + entity_linker.set_kb(create_kb) + optimizer = nlp.initialize(get_examples=lambda: train_examples) + for i in range(2): + losses = {} + nlp.update(train_examples, sgd=optimizer, losses=losses) + + # adding additional components that are required for the entity_linker + nlp.add_pipe("sentencizer", first=True) + + # this will run the pipeline on the examples and shouldn't crash + results = nlp.evaluate(train_examples) + + def test_partial_links(): # Test that having some entities on the doc without gold links, doesn't crash TRAIN_DATA = [ @@ -650,7 +692,7 @@ TRAIN_DATA = [ "sent_starts": [1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}), ("Russ Cochran his reprints include EC Comics.", {"links": {(0, 12): {"Q7381115": 1.0, "Q2146908": 0.0}}, - "entities": [(0, 12, "PERSON")], + "entities": [(0, 12, "PERSON"), (34, 43, "ART")], "sent_starts": [1, -1, 0, 0, 0, 0, 0, 0]}), ("Russ Cochran has been publishing comic art.", {"links": {(0, 12): {"Q7381115": 1.0, "Q2146908": 0.0}}, @@ -693,6 +735,7 @@ def test_overfitting_IO(): # Create the Entity Linker component and add it to the pipeline entity_linker = nlp.add_pipe("entity_linker", last=True) + assert isinstance(entity_linker, EntityLinker) entity_linker.set_kb(create_kb) assert "Q2146908" in entity_linker.vocab.strings assert "Q2146908" in entity_linker.kb.vocab.strings @@ -922,3 +965,113 @@ def test_scorer_links(): assert scores["nel_micro_p"] == 2 / 3 assert scores["nel_micro_r"] == 2 / 4 + + +# fmt: off +@pytest.mark.parametrize( + "name,config", + [ + ("entity_linker", 
{"@architectures": "spacy.EntityLinker.v1", "tok2vec": DEFAULT_TOK2VEC_MODEL}), + ("entity_linker", {"@architectures": "spacy.EntityLinker.v2", "tok2vec": DEFAULT_TOK2VEC_MODEL}), + ], +) +# fmt: on +def test_legacy_architectures(name, config): + # Ensure that the legacy architectures still work + vector_length = 3 + nlp = English() + + train_examples = [] + for text, annotation in TRAIN_DATA: + doc = nlp.make_doc(text) + train_examples.append(Example.from_dict(doc, annotation)) + + def create_kb(vocab): + mykb = KnowledgeBase(vocab, entity_vector_length=vector_length) + mykb.add_entity(entity="Q2146908", freq=12, entity_vector=[6, -4, 3]) + mykb.add_entity(entity="Q7381115", freq=12, entity_vector=[9, 1, -7]) + mykb.add_alias( + alias="Russ Cochran", + entities=["Q2146908", "Q7381115"], + probabilities=[0.5, 0.5], + ) + return mykb + + entity_linker = nlp.add_pipe(name, config={"model": config}) + if config["@architectures"] == "spacy.EntityLinker.v1": + assert isinstance(entity_linker, EntityLinker_v1) + else: + assert isinstance(entity_linker, EntityLinker) + entity_linker.set_kb(create_kb) + optimizer = nlp.initialize(get_examples=lambda: train_examples) + + for i in range(2): + losses = {} + nlp.update(train_examples, sgd=optimizer, losses=losses) + + +@pytest.mark.parametrize( + "patterns", + [ + # perfect case + [{"label": "CHARACTER", "pattern": "Kirby"}], + # typo for false negative + [{"label": "PERSON", "pattern": "Korby"}], + # random stuff for false positive + [{"label": "IS", "pattern": "is"}, {"label": "COLOR", "pattern": "pink"}], + ], +) +def test_no_gold_ents(patterns): + # test that annotating components work + TRAIN_DATA = [ + ( + "Kirby is pink", + { + "links": {(0, 5): {"Q613241": 1.0}}, + "entities": [(0, 5, "CHARACTER")], + "sent_starts": [1, 0, 0], + }, + ) + ] + nlp = English() + vector_length = 3 + train_examples = [] + for text, annotation in TRAIN_DATA: + doc = nlp(text) + train_examples.append(Example.from_dict(doc, annotation)) + + # Create a ruler to mark entities + ruler = nlp.add_pipe("entity_ruler") + ruler.add_patterns(patterns) + + # Apply ruler to examples. In a real pipeline this would be an annotating component. 
+ for eg in train_examples: + eg.predicted = ruler(eg.predicted) + + def create_kb(vocab): + # create artificial KB + mykb = KnowledgeBase(vocab, entity_vector_length=vector_length) + mykb.add_entity(entity="Q613241", freq=12, entity_vector=[6, -4, 3]) + mykb.add_alias("Kirby", ["Q613241"], [0.9]) + # Placeholder + mykb.add_entity(entity="pink", freq=12, entity_vector=[7, 2, -5]) + mykb.add_alias("pink", ["pink"], [0.9]) + return mykb + + # Create and train the Entity Linker + entity_linker = nlp.add_pipe( + "entity_linker", config={"use_gold_ents": False}, last=True + ) + entity_linker.set_kb(create_kb) + assert entity_linker.use_gold_ents == False + + optimizer = nlp.initialize(get_examples=lambda: train_examples) + for i in range(2): + losses = {} + nlp.update(train_examples, sgd=optimizer, losses=losses) + + # adding additional components that are required for the entity_linker + nlp.add_pipe("sentencizer", first=True) + + # this will run the pipeline on the examples and shouldn't crash + results = nlp.evaluate(train_examples) diff --git a/spacy/tests/pipeline/test_senter.py b/spacy/tests/pipeline/test_senter.py index 7a256f79b..047f59bef 100644 --- a/spacy/tests/pipeline/test_senter.py +++ b/spacy/tests/pipeline/test_senter.py @@ -97,3 +97,7 @@ def test_overfitting_IO(): ] assert_equal(batch_deps_1, batch_deps_2) assert_equal(batch_deps_1, no_batch_deps) + + # test internal pipe labels vs. Language.pipe_labels with hidden labels + assert nlp.get_pipe("senter").labels == ("I", "S") + assert "senter" not in nlp.pipe_labels diff --git a/spacy/tests/pipeline/test_spancat.py b/spacy/tests/pipeline/test_spancat.py index 39d2e97da..15256a763 100644 --- a/spacy/tests/pipeline/test_spancat.py +++ b/spacy/tests/pipeline/test_spancat.py @@ -79,7 +79,8 @@ def test_explicit_labels(): nlp.initialize() assert spancat.labels == ("PERSON", "LOC") -#TODO figure out why this is flaky + +# TODO figure out why this is flaky @pytest.mark.skip(reason="Test is unreliable for unknown reason") def test_doc_gc(): # If the Doc object is garbage collected, the spans won't be functional afterwards @@ -396,3 +397,25 @@ def test_zero_suggestions(): assert set(spancat.labels) == {"LOC", "PERSON"} nlp.update(train_examples, sgd=optimizer) + + +def test_set_candidates(): + nlp = Language() + spancat = nlp.add_pipe("spancat", config={"spans_key": SPAN_KEY}) + train_examples = make_examples(nlp) + nlp.initialize(get_examples=lambda: train_examples) + texts = [ + "Just a sentence.", + "I like London and Berlin", + "I like Berlin", + "I eat ham.", + ] + + docs = [nlp(text) for text in texts] + spancat.set_candidates(docs) + + assert len(docs) == len(texts) + assert type(docs[0].spans["candidates"]) == SpanGroup + assert len(docs[0].spans["candidates"]) == 9 + assert docs[0].spans["candidates"][0].text == "Just" + assert docs[0].spans["candidates"][4].text == "Just a" diff --git a/spacy/tests/pipeline/test_textcat.py b/spacy/tests/pipeline/test_textcat.py index 282789f2b..798dd165e 100644 --- a/spacy/tests/pipeline/test_textcat.py +++ b/spacy/tests/pipeline/test_textcat.py @@ -277,6 +277,21 @@ def test_issue7019(): print_prf_per_type(msg, scores, name="foo", type="bar") +@pytest.mark.issue(9904) +def test_issue9904(): + nlp = Language() + textcat = nlp.add_pipe("textcat") + get_examples = make_get_examples_single_label(nlp) + nlp.initialize(get_examples) + + examples = get_examples() + scores = textcat.predict([eg.predicted for eg in examples]) + + loss = textcat.get_loss(examples, scores)[0] + loss_double_bs = 
textcat.get_loss(examples * 2, scores.repeat(2, axis=0))[0] + assert loss == pytest.approx(loss_double_bs) + + @pytest.mark.skip(reason="Test is flakey when run with others") def test_simple_train(): nlp = Language() @@ -725,6 +740,72 @@ def test_textcat_evaluation(): assert scores["cats_micro_r"] == 4 / 6 +@pytest.mark.parametrize( + "multi_label,spring_p", + [(True, 1 / 1), (False, 1 / 2)], +) +def test_textcat_eval_missing(multi_label: bool, spring_p: float): + """ + multi-label: the missing 'spring' in gold_doc_2 doesn't incur a penalty + exclusive labels: the missing 'spring' in gold_doc_2 is interpreted as 0.0""" + train_examples = [] + nlp = English() + + ref1 = nlp("one") + ref1.cats = {"winter": 0.0, "summer": 0.0, "autumn": 0.0, "spring": 1.0} + pred1 = nlp("one") + pred1.cats = {"winter": 0.0, "summer": 0.0, "autumn": 0.0, "spring": 1.0} + train_examples.append(Example(ref1, pred1)) + + ref2 = nlp("two") + # reference 'spring' is missing, pred 'spring' is 1 + ref2.cats = {"winter": 0.0, "summer": 0.0, "autumn": 1.0} + pred2 = nlp("two") + pred2.cats = {"winter": 0.0, "summer": 0.0, "autumn": 0.0, "spring": 1.0} + train_examples.append(Example(pred2, ref2)) + + scores = Scorer().score_cats( + train_examples, + "cats", + labels=["winter", "summer", "spring", "autumn"], + multi_label=multi_label, + ) + assert scores["cats_f_per_type"]["spring"]["p"] == spring_p + assert scores["cats_f_per_type"]["spring"]["r"] == 1 / 1 + + +@pytest.mark.parametrize( + "multi_label,expected_loss", + [(True, 0), (False, 0.125)], +) +def test_textcat_loss(multi_label: bool, expected_loss: float): + """ + multi-label: the missing 'spring' in gold_doc_2 doesn't incur an increase in loss + exclusive labels: the missing 'spring' in gold_doc_2 is interpreted as 0.0 and adds to the loss""" + train_examples = [] + nlp = English() + + doc1 = nlp("one") + cats1 = {"winter": 0.0, "summer": 0.0, "autumn": 0.0, "spring": 1.0} + train_examples.append(Example.from_dict(doc1, {"cats": cats1})) + + doc2 = nlp("two") + cats2 = {"winter": 0.0, "summer": 0.0, "autumn": 1.0} + train_examples.append(Example.from_dict(doc2, {"cats": cats2})) + + if multi_label: + textcat = nlp.add_pipe("textcat_multilabel") + else: + textcat = nlp.add_pipe("textcat") + textcat.initialize(lambda: train_examples) + assert isinstance(textcat, TextCategorizer) + scores = textcat.model.ops.asarray( + [[0.0, 0.0, 0.0, 1.0], [0.0, 0.0, 1.0, 1.0]], dtype="f" # type: ignore + ) + loss, d_scores = textcat.get_loss(train_examples, scores) + assert loss == expected_loss + + def test_textcat_threshold(): # Ensure the scorer can be called with a different threshold nlp = English() diff --git a/spacy/tests/pipeline/test_tok2vec.py b/spacy/tests/pipeline/test_tok2vec.py index 50c4b90ce..59c680e98 100644 --- a/spacy/tests/pipeline/test_tok2vec.py +++ b/spacy/tests/pipeline/test_tok2vec.py @@ -11,7 +11,7 @@ from spacy.lang.en import English from thinc.api import Config, get_current_ops from numpy.testing import assert_array_equal -from ..util import get_batch, make_tempdir +from ..util import get_batch, make_tempdir, add_vecs_to_vocab def test_empty_doc(): @@ -100,7 +100,7 @@ cfg_string = """ factory = "tagger" [components.tagger.model] - @architectures = "spacy.Tagger.v1" + @architectures = "spacy.Tagger.v2" nO = null [components.tagger.model.tok2vec] @@ -140,9 +140,25 @@ TRAIN_DATA = [ ] -def test_tok2vec_listener(): +@pytest.mark.parametrize("with_vectors", (False, True)) +def test_tok2vec_listener(with_vectors): orig_config = 
Config().from_str(cfg_string) + orig_config["components"]["tok2vec"]["model"]["embed"][ + "include_static_vectors" + ] = with_vectors nlp = util.load_model_from_config(orig_config, auto_fill=True, validate=True) + + if with_vectors: + ops = get_current_ops() + vectors = [ + ("apple", ops.asarray([1, 2, 3])), + ("orange", ops.asarray([-1, -2, -3])), + ("and", ops.asarray([-1, -1, -1])), + ("juice", ops.asarray([5, 5, 10])), + ("pie", ops.asarray([7, 6.3, 8.9])), + ] + add_vecs_to_vocab(nlp.vocab, vectors) + assert nlp.pipe_names == ["tok2vec", "tagger"] tagger = nlp.get_pipe("tagger") tok2vec = nlp.get_pipe("tok2vec") @@ -169,6 +185,9 @@ def test_tok2vec_listener(): ops = get_current_ops() assert_array_equal(ops.to_numpy(doc.tensor), ops.to_numpy(doc_tensor)) + # test with empty doc + doc = nlp("") + # TODO: should this warn or error? nlp.select_pipes(disable="tok2vec") assert nlp.pipe_names == ["tagger"] @@ -244,7 +263,7 @@ cfg_string_multi = """ factory = "tagger" [components.tagger.model] - @architectures = "spacy.Tagger.v1" + @architectures = "spacy.Tagger.v2" nO = null [components.tagger.model.tok2vec] @@ -354,7 +373,7 @@ cfg_string_multi_textcat = """ factory = "tagger" [components.tagger.model] - @architectures = "spacy.Tagger.v1" + @architectures = "spacy.Tagger.v2" nO = null [components.tagger.model.tok2vec] diff --git a/spacy/tests/serialize/test_serialize_config.py b/spacy/tests/serialize/test_serialize_config.py index f7b75c759..238f308e7 100644 --- a/spacy/tests/serialize/test_serialize_config.py +++ b/spacy/tests/serialize/test_serialize_config.py @@ -59,7 +59,7 @@ subword_features = true factory = "tagger" [components.tagger.model] -@architectures = "spacy.Tagger.v1" +@architectures = "spacy.Tagger.v2" [components.tagger.model.tok2vec] @architectures = "spacy.Tok2VecListener.v1" @@ -110,7 +110,7 @@ subword_features = true factory = "tagger" [components.tagger.model] -@architectures = "spacy.Tagger.v1" +@architectures = "spacy.Tagger.v2" [components.tagger.model.tok2vec] @architectures = "spacy.Tok2VecListener.v1" diff --git a/spacy/tests/serialize/test_serialize_language.py b/spacy/tests/serialize/test_serialize_language.py index 6e7fa0e4e..c03287548 100644 --- a/spacy/tests/serialize/test_serialize_language.py +++ b/spacy/tests/serialize/test_serialize_language.py @@ -70,7 +70,7 @@ factory = "ner" factory = "tagger" [components.tagger.model] -@architectures = "spacy.Tagger.v1" +@architectures = "spacy.Tagger.v2" nO = null [components.tagger.model.tok2vec] diff --git a/spacy/tests/serialize/test_serialize_tokenizer.py b/spacy/tests/serialize/test_serialize_tokenizer.py index e271f7707..9b74d7721 100644 --- a/spacy/tests/serialize/test_serialize_tokenizer.py +++ b/spacy/tests/serialize/test_serialize_tokenizer.py @@ -70,6 +70,7 @@ def test_issue4190(): suffix_search=suffix_re.search, infix_finditer=infix_re.finditer, token_match=nlp.tokenizer.token_match, + faster_heuristics=False, ) nlp.tokenizer = new_tokenizer @@ -90,6 +91,7 @@ def test_issue4190(): doc_2 = nlp_2(test_string) result_2 = [token.text for token in doc_2] assert result_1b == result_2 + assert nlp_2.tokenizer.faster_heuristics is False def test_serialize_custom_tokenizer(en_vocab, en_tokenizer): diff --git a/spacy/tests/test_cli.py b/spacy/tests/test_cli.py index 253469909..ec512b839 100644 --- a/spacy/tests/test_cli.py +++ b/spacy/tests/test_cli.py @@ -12,16 +12,18 @@ from spacy.cli._util import is_subpath_of, load_project_config from spacy.cli._util import parse_config_overrides, string_to_list from 
spacy.cli._util import substitute_project_variables from spacy.cli._util import validate_project_commands -from spacy.cli.debug_data import _get_labels_from_model +from spacy.cli.debug_data import _compile_gold, _get_labels_from_model from spacy.cli.debug_data import _get_labels_from_spancat from spacy.cli.download import get_compatibility, get_version from spacy.cli.init_config import RECOMMENDATIONS, init_config, fill_config from spacy.cli.package import get_third_party_dependencies +from spacy.cli.package import _is_permitted_package_name from spacy.cli.validate import get_model_pkgs from spacy.lang.en import English from spacy.lang.nl import Dutch from spacy.language import Language from spacy.schemas import ProjectConfigSchema, RecommendationSchema, validate +from spacy.tokens import Doc from spacy.training import Example, docs_to_json, offsets_to_biluo_tags from spacy.training.converters import conll_ner_to_docs, conllu_to_docs from spacy.training.converters import iob_to_docs @@ -32,7 +34,7 @@ from .util import make_tempdir @pytest.mark.issue(4665) -def test_issue4665(): +def test_cli_converters_conllu_empty_heads_ner(): """ conllu_to_docs should not raise an exception if the HEAD column contains an underscore @@ -57,7 +59,11 @@ def test_issue4665(): 17 . _ PUNCT . _ _ punct _ _ 18 ] _ PUNCT -RRB- _ _ punct _ _ """ - conllu_to_docs(input_data) + docs = list(conllu_to_docs(input_data)) + # heads are all 0 + assert not all([t.head.i for t in docs[0]]) + # NER is unset + assert not docs[0].has_annotation("ENT_IOB") @pytest.mark.issue(4924) @@ -692,3 +698,39 @@ def test_get_labels_from_model(factory_name, pipe_name): assert _get_labels_from_spancat(nlp)[pipe.key] == set(labels) else: assert _get_labels_from_model(nlp, factory_name) == set(labels) + + +def test_permitted_package_names(): + # https://www.python.org/dev/peps/pep-0426/#name + assert _is_permitted_package_name("Meine_Bäume") == False + assert _is_permitted_package_name("_package") == False + assert _is_permitted_package_name("package_") == False + assert _is_permitted_package_name(".package") == False + assert _is_permitted_package_name("package.") == False + assert _is_permitted_package_name("-package") == False + assert _is_permitted_package_name("package-") == False + + +def test_debug_data_compile_gold(): + nlp = English() + pred = Doc(nlp.vocab, words=["Token", ".", "New", "York", "City"]) + ref = Doc( + nlp.vocab, + words=["Token", ".", "New York City"], + sent_starts=[True, False, True], + ents=["O", "O", "B-ENT"], + ) + eg = Example(pred, ref) + data = _compile_gold([eg], ["ner"], nlp, True) + assert data["boundary_cross_ents"] == 0 + + pred = Doc(nlp.vocab, words=["Token", ".", "New", "York", "City"]) + ref = Doc( + nlp.vocab, + words=["Token", ".", "New York City"], + sent_starts=[True, False, True], + ents=["O", "B-ENT", "I-ENT"], + ) + eg = Example(pred, ref) + data = _compile_gold([eg], ["ner"], nlp, True) + assert data["boundary_cross_ents"] == 1 diff --git a/spacy/tests/test_displacy.py b/spacy/tests/test_displacy.py index 392c95e42..ccad7e342 100644 --- a/spacy/tests/test_displacy.py +++ b/spacy/tests/test_displacy.py @@ -96,6 +96,92 @@ def test_issue5838(): assert found == 4 +def test_displacy_parse_spans(en_vocab): + """Test that spans on a Doc are converted into displaCy's format.""" + doc = Doc(en_vocab, words=["Welcome", "to", "the", "Bank", "of", "China"]) + doc.spans["sc"] = [Span(doc, 3, 6, "ORG"), Span(doc, 5, 6, "GPE")] + spans = displacy.parse_spans(doc) + assert isinstance(spans, dict) + assert 
spans["text"] == "Welcome to the Bank of China " + assert spans["spans"] == [ + { + "start": 15, + "end": 28, + "start_token": 3, + "end_token": 6, + "label": "ORG", + "kb_id": "", + "kb_url": "#", + }, + { + "start": 23, + "end": 28, + "start_token": 5, + "end_token": 6, + "label": "GPE", + "kb_id": "", + "kb_url": "#", + }, + ] + + +def test_displacy_parse_spans_with_kb_id_options(en_vocab): + """Test that spans with kb_id on a Doc are converted into displaCy's format""" + doc = Doc(en_vocab, words=["Welcome", "to", "the", "Bank", "of", "China"]) + doc.spans["sc"] = [ + Span(doc, 3, 6, "ORG", kb_id="Q790068"), + Span(doc, 5, 6, "GPE", kb_id="Q148"), + ] + + spans = displacy.parse_spans( + doc, {"kb_url_template": "https://wikidata.org/wiki/{}"} + ) + assert isinstance(spans, dict) + assert spans["text"] == "Welcome to the Bank of China " + assert spans["spans"] == [ + { + "start": 15, + "end": 28, + "start_token": 3, + "end_token": 6, + "label": "ORG", + "kb_id": "Q790068", + "kb_url": "https://wikidata.org/wiki/Q790068", + }, + { + "start": 23, + "end": 28, + "start_token": 5, + "end_token": 6, + "label": "GPE", + "kb_id": "Q148", + "kb_url": "https://wikidata.org/wiki/Q148", + }, + ] + + +def test_displacy_parse_spans_different_spans_key(en_vocab): + """Test that spans in a different spans key will be parsed""" + doc = Doc(en_vocab, words=["Welcome", "to", "the", "Bank", "of", "China"]) + doc.spans["sc"] = [Span(doc, 3, 6, "ORG"), Span(doc, 5, 6, "GPE")] + doc.spans["custom"] = [Span(doc, 3, 6, "BANK")] + spans = displacy.parse_spans(doc, options={"spans_key": "custom"}) + + assert isinstance(spans, dict) + assert spans["text"] == "Welcome to the Bank of China " + assert spans["spans"] == [ + { + "start": 15, + "end": 28, + "start_token": 3, + "end_token": 6, + "label": "BANK", + "kb_id": "", + "kb_url": "#", + } + ] + + def test_displacy_parse_ents(en_vocab): """Test that named entities on a Doc are converted into displaCy's format.""" doc = Doc(en_vocab, words=["But", "Google", "is", "starting", "from", "behind"]) diff --git a/spacy/tests/tokenizer/test_tokenizer.py b/spacy/tests/tokenizer/test_tokenizer.py index c2aeffcb5..6af58b344 100644 --- a/spacy/tests/tokenizer/test_tokenizer.py +++ b/spacy/tests/tokenizer/test_tokenizer.py @@ -9,6 +9,7 @@ from spacy.tokenizer import Tokenizer from spacy.tokens import Doc from spacy.training import Example from spacy.util import compile_prefix_regex, compile_suffix_regex, ensure_path +from spacy.util import compile_infix_regex from spacy.vocab import Vocab from spacy.symbols import ORTH @@ -503,3 +504,50 @@ def test_tokenizer_prefix_suffix_overlap_lookbehind(en_vocab): assert tokens == ["a", "10", "."] explain_tokens = [t[1] for t in tokenizer.explain("a10.")] assert tokens == explain_tokens + + +def test_tokenizer_infix_prefix(en_vocab): + # the prefix and suffix matches overlap in the suffix lookbehind + infixes = ["±"] + suffixes = ["%"] + infix_re = compile_infix_regex(infixes) + suffix_re = compile_suffix_regex(suffixes) + tokenizer = Tokenizer( + en_vocab, + infix_finditer=infix_re.finditer, + suffix_search=suffix_re.search, + ) + tokens = [t.text for t in tokenizer("±10%")] + assert tokens == ["±10", "%"] + explain_tokens = [t[1] for t in tokenizer.explain("±10%")] + assert tokens == explain_tokens + + +@pytest.mark.issue(10086) +def test_issue10086(en_tokenizer): + """Test special case works when part of infix substring.""" + text = "No--don't see" + + # without heuristics: do n't + en_tokenizer.faster_heuristics = False + doc = 
en_tokenizer(text) + assert "n't" in [w.text for w in doc] + assert "do" in [w.text for w in doc] + + # with (default) heuristics: don't + en_tokenizer.faster_heuristics = True + doc = en_tokenizer(text) + assert "don't" in [w.text for w in doc] + + +def test_tokenizer_initial_special_case_explain(en_vocab): + tokenizer = Tokenizer( + en_vocab, + token_match=re.compile("^id$").match, + rules={ + "id": [{"ORTH": "i"}, {"ORTH": "d"}], + }, + ) + tokens = [t.text for t in tokenizer("id")] + explain_tokens = [t[1] for t in tokenizer.explain("id")] + assert tokens == explain_tokens diff --git a/spacy/tests/training/test_augmenters.py b/spacy/tests/training/test_augmenters.py index 43a78e4b0..e3639c5da 100644 --- a/spacy/tests/training/test_augmenters.py +++ b/spacy/tests/training/test_augmenters.py @@ -1,9 +1,11 @@ import pytest -from spacy.training import Corpus +from spacy.pipeline._parser_internals.nonproj import contains_cycle +from spacy.training import Corpus, Example from spacy.training.augment import create_orth_variants_augmenter from spacy.training.augment import create_lower_casing_augmenter +from spacy.training.augment import make_whitespace_variant from spacy.lang.en import English -from spacy.tokens import DocBin, Doc +from spacy.tokens import DocBin, Doc, Span from contextlib import contextmanager import random @@ -153,3 +155,84 @@ def test_custom_data_augmentation(nlp, doc): ents = [(e.start, e.end, e.label) for e in doc.ents] assert [(e.start, e.end, e.label) for e in corpus[0].reference.ents] == ents assert [(e.start, e.end, e.label) for e in corpus[1].reference.ents] == ents + + +def test_make_whitespace_variant(nlp): + # fmt: off + text = "They flew to New York City.\nThen they drove to Washington, D.C." + words = ["They", "flew", "to", "New", "York", "City", ".", "\n", "Then", "they", "drove", "to", "Washington", ",", "D.C."] + spaces = [True, True, True, True, True, False, False, False, True, True, True, True, False, True, False] + tags = ["PRP", "VBD", "IN", "NNP", "NNP", "NNP", ".", "_SP", "RB", "PRP", "VBD", "IN", "NNP", ",", "NNP"] + lemmas = ["they", "fly", "to", "New", "York", "City", ".", "\n", "then", "they", "drive", "to", "Washington", ",", "D.C."] + heads = [1, 1, 1, 4, 5, 2, 1, 10, 10, 10, 10, 10, 11, 12, 12] + deps = ["nsubj", "ROOT", "prep", "compound", "compound", "pobj", "punct", "dep", "advmod", "nsubj", "ROOT", "prep", "pobj", "punct", "appos"] + ents = ["O", "O", "O", "B-GPE", "I-GPE", "I-GPE", "O", "O", "O", "O", "O", "O", "B-GPE", "O", "B-GPE"] + # fmt: on + doc = Doc( + nlp.vocab, + words=words, + spaces=spaces, + tags=tags, + lemmas=lemmas, + heads=heads, + deps=deps, + ents=ents, + ) + assert doc.text == text + example = Example(nlp.make_doc(text), doc) + # whitespace is only added internally in entity spans + mod_ex = make_whitespace_variant(nlp, example, " ", 3) + assert mod_ex.reference.ents[0].text == "New York City" + mod_ex = make_whitespace_variant(nlp, example, " ", 4) + assert mod_ex.reference.ents[0].text == "New York City" + mod_ex = make_whitespace_variant(nlp, example, " ", 5) + assert mod_ex.reference.ents[0].text == "New York City" + mod_ex = make_whitespace_variant(nlp, example, " ", 6) + assert mod_ex.reference.ents[0].text == "New York City" + # add a space at every possible position + for i in range(len(doc) + 1): + mod_ex = make_whitespace_variant(nlp, example, " ", i) + assert mod_ex.reference[i].is_space + # adds annotation when the doc contains at least partial annotation + assert [t.tag_ for t in mod_ex.reference] == tags[:i] 
+ ["_SP"] + tags[i:] + assert [t.lemma_ for t in mod_ex.reference] == lemmas[:i] + [" "] + lemmas[i:] + assert [t.dep_ for t in mod_ex.reference] == deps[:i] + ["dep"] + deps[i:] + # does not add partial annotation if doc does not contain this feature + assert not mod_ex.reference.has_annotation("POS") + assert not mod_ex.reference.has_annotation("MORPH") + # produces well-formed trees + assert not contains_cycle([t.head.i for t in mod_ex.reference]) + assert len(list(doc.sents)) == 2 + if i == 0: + assert mod_ex.reference[i].head.i == 1 + else: + assert mod_ex.reference[i].head.i == i - 1 + # adding another space also produces well-formed trees + for j in (3, 8, 10): + mod_ex2 = make_whitespace_variant(nlp, mod_ex, "\t\t\n", j) + assert not contains_cycle([t.head.i for t in mod_ex2.reference]) + assert len(list(doc.sents)) == 2 + assert mod_ex2.reference[j].head.i == j - 1 + # entities are well-formed + assert len(doc.ents) == len(mod_ex.reference.ents) + for ent in mod_ex.reference.ents: + assert not ent[0].is_space + assert not ent[-1].is_space + + # no modifications if: + # partial dependencies + example.reference[0].dep_ = "" + mod_ex = make_whitespace_variant(nlp, example, " ", 5) + assert mod_ex.text == example.reference.text + example.reference[0].dep_ = "nsubj" # reset + + # spans + example.reference.spans["spans"] = [example.reference[0:5]] + mod_ex = make_whitespace_variant(nlp, example, " ", 5) + assert mod_ex.text == example.reference.text + del example.reference.spans["spans"] # reset + + # links + example.reference.ents = [Span(doc, 0, 2, label="ENT", kb_id="Q123")] + mod_ex = make_whitespace_variant(nlp, example, " ", 5) + assert mod_ex.text == example.reference.text diff --git a/spacy/tests/training/test_new_example.py b/spacy/tests/training/test_new_example.py index 4dd90f416..a39d40ded 100644 --- a/spacy/tests/training/test_new_example.py +++ b/spacy/tests/training/test_new_example.py @@ -421,3 +421,13 @@ def test_Example_missing_heads(): # Ensure that the missing head doesn't create an artificial new sentence start expected = [True, False, False, False, False, False] assert example.get_aligned_sent_starts() == expected + + +def test_Example_aligned_whitespace(en_vocab): + words = ["a", " ", "b"] + tags = ["A", "SPACE", "B"] + predicted = Doc(en_vocab, words=words) + reference = Doc(en_vocab, words=words, tags=tags) + + example = Example(predicted, reference) + assert example.get_aligned("TAG", as_string=True) == tags diff --git a/spacy/tests/training/test_pretraining.py b/spacy/tests/training/test_pretraining.py index 8ee54b544..9359c8485 100644 --- a/spacy/tests/training/test_pretraining.py +++ b/spacy/tests/training/test_pretraining.py @@ -38,7 +38,7 @@ subword_features = true factory = "tagger" [components.tagger.model] -@architectures = "spacy.Tagger.v1" +@architectures = "spacy.Tagger.v2" [components.tagger.model.tok2vec] @architectures = "spacy.Tok2VecListener.v1" @@ -62,7 +62,7 @@ pipeline = ["tagger"] factory = "tagger" [components.tagger.model] -@architectures = "spacy.Tagger.v1" +@architectures = "spacy.Tagger.v2" [components.tagger.model.tok2vec] @architectures = "spacy.HashEmbedCNN.v1" @@ -106,7 +106,7 @@ subword_features = true factory = "tagger" [components.tagger.model] -@architectures = "spacy.Tagger.v1" +@architectures = "spacy.Tagger.v2" [components.tagger.model.tok2vec] @architectures = "spacy.Tok2VecListener.v1" diff --git a/spacy/tests/training/test_rehearse.py b/spacy/tests/training/test_rehearse.py new file mode 100644 index 000000000..84c507702 
--- /dev/null +++ b/spacy/tests/training/test_rehearse.py @@ -0,0 +1,211 @@ +import pytest +import spacy + +from typing import List +from spacy.training import Example + + +TRAIN_DATA = [ + ( + "Who is Kofi Annan?", + { + "entities": [(7, 18, "PERSON")], + "tags": ["PRON", "AUX", "PROPN", "PRON", "PUNCT"], + "heads": [1, 1, 3, 1, 1], + "deps": ["attr", "ROOT", "compound", "nsubj", "punct"], + "morphs": [ + "", + "Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin", + "Number=Sing", + "Number=Sing", + "PunctType=Peri", + ], + "cats": {"question": 1.0}, + }, + ), + ( + "Who is Steve Jobs?", + { + "entities": [(7, 17, "PERSON")], + "tags": ["PRON", "AUX", "PROPN", "PRON", "PUNCT"], + "heads": [1, 1, 3, 1, 1], + "deps": ["attr", "ROOT", "compound", "nsubj", "punct"], + "morphs": [ + "", + "Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin", + "Number=Sing", + "Number=Sing", + "PunctType=Peri", + ], + "cats": {"question": 1.0}, + }, + ), + ( + "Bob is a nice person.", + { + "entities": [(0, 3, "PERSON")], + "tags": ["PROPN", "AUX", "DET", "ADJ", "NOUN", "PUNCT"], + "heads": [1, 1, 4, 4, 1, 1], + "deps": ["nsubj", "ROOT", "det", "amod", "attr", "punct"], + "morphs": [ + "Number=Sing", + "Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin", + "Definite=Ind|PronType=Art", + "Degree=Pos", + "Number=Sing", + "PunctType=Peri", + ], + "cats": {"statement": 1.0}, + }, + ), + ( + "Hi Anil, how are you?", + { + "entities": [(3, 7, "PERSON")], + "tags": ["INTJ", "PROPN", "PUNCT", "ADV", "AUX", "PRON", "PUNCT"], + "deps": ["intj", "npadvmod", "punct", "advmod", "ROOT", "nsubj", "punct"], + "heads": [4, 0, 4, 4, 4, 4, 4], + "morphs": [ + "", + "Number=Sing", + "PunctType=Comm", + "", + "Mood=Ind|Tense=Pres|VerbForm=Fin", + "Case=Nom|Person=2|PronType=Prs", + "PunctType=Peri", + ], + "cats": {"greeting": 1.0, "question": 1.0}, + }, + ), + ( + "I like London and Berlin.", + { + "entities": [(7, 13, "LOC"), (18, 24, "LOC")], + "tags": ["PROPN", "VERB", "PROPN", "CCONJ", "PROPN", "PUNCT"], + "deps": ["nsubj", "ROOT", "dobj", "cc", "conj", "punct"], + "heads": [1, 1, 1, 2, 2, 1], + "morphs": [ + "Case=Nom|Number=Sing|Person=1|PronType=Prs", + "Tense=Pres|VerbForm=Fin", + "Number=Sing", + "ConjType=Cmp", + "Number=Sing", + "PunctType=Peri", + ], + "cats": {"statement": 1.0}, + }, + ), +] + +REHEARSE_DATA = [ + ( + "Hi Anil", + { + "entities": [(3, 7, "PERSON")], + "tags": ["INTJ", "PROPN"], + "deps": ["ROOT", "npadvmod"], + "heads": [0, 0], + "morphs": ["", "Number=Sing"], + "cats": {"greeting": 1.0}, + }, + ), + ( + "Hi Ravish, how you doing?", + { + "entities": [(3, 9, "PERSON")], + "tags": ["INTJ", "PROPN", "PUNCT", "ADV", "AUX", "PRON", "PUNCT"], + "deps": ["intj", "ROOT", "punct", "advmod", "nsubj", "advcl", "punct"], + "heads": [1, 1, 1, 5, 5, 1, 1], + "morphs": [ + "", + "VerbForm=Inf", + "PunctType=Comm", + "", + "Case=Nom|Person=2|PronType=Prs", + "Aspect=Prog|Tense=Pres|VerbForm=Part", + "PunctType=Peri", + ], + "cats": {"greeting": 1.0, "question": 1.0}, + }, + ), + # UTENSIL new label + ( + "Natasha bought new forks.", + { + "entities": [(0, 7, "PERSON"), (19, 24, "UTENSIL")], + "tags": ["PROPN", "VERB", "ADJ", "NOUN", "PUNCT"], + "deps": ["nsubj", "ROOT", "amod", "dobj", "punct"], + "heads": [1, 1, 3, 1, 1], + "morphs": [ + "Number=Sing", + "Tense=Past|VerbForm=Fin", + "Degree=Pos", + "Number=Plur", + "PunctType=Peri", + ], + "cats": {"statement": 1.0}, + }, + ), +] + + +def _add_ner_label(ner, data): + for _, annotations in data: + for ent in annotations["entities"]: + 
ner.add_label(ent[2]) + + +def _add_tagger_label(tagger, data): + for _, annotations in data: + for tag in annotations["tags"]: + tagger.add_label(tag) + + +def _add_parser_label(parser, data): + for _, annotations in data: + for dep in annotations["deps"]: + parser.add_label(dep) + + +def _add_textcat_label(textcat, data): + for _, annotations in data: + for cat in annotations["cats"]: + textcat.add_label(cat) + + +def _optimize(nlp, component: str, data: List, rehearse: bool): + """Run either train or rehearse.""" + pipe = nlp.get_pipe(component) + if component == "ner": + _add_ner_label(pipe, data) + elif component == "tagger": + _add_tagger_label(pipe, data) + elif component == "parser": + _add_tagger_label(pipe, data) + elif component == "textcat_multilabel": + _add_textcat_label(pipe, data) + else: + raise NotImplementedError + + if rehearse: + optimizer = nlp.resume_training() + else: + optimizer = nlp.initialize() + + for _ in range(5): + for text, annotation in data: + doc = nlp.make_doc(text) + example = Example.from_dict(doc, annotation) + if rehearse: + nlp.rehearse([example], sgd=optimizer) + else: + nlp.update([example], sgd=optimizer) + return nlp + + +@pytest.mark.parametrize("component", ["ner", "tagger", "parser", "textcat_multilabel"]) +def test_rehearse(component): + nlp = spacy.blank("en") + nlp.add_pipe(component) + nlp = _optimize(nlp, component, TRAIN_DATA, False) + _optimize(nlp, component, REHEARSE_DATA, True) diff --git a/spacy/tests/training/test_training.py b/spacy/tests/training/test_training.py index 0d73300d8..8e08a25fb 100644 --- a/spacy/tests/training/test_training.py +++ b/spacy/tests/training/test_training.py @@ -8,6 +8,7 @@ from spacy.tokens import Doc, DocBin from spacy.training import Alignment, Corpus, Example, biluo_tags_to_offsets from spacy.training import biluo_tags_to_spans, docs_to_json, iob_to_biluo from spacy.training import offsets_to_biluo_tags +from spacy.training.alignment_array import AlignmentArray from spacy.training.align import get_alignments from spacy.training.converters import json_to_docs from spacy.util import get_words_and_spaces, load_model_from_path, minibatch @@ -241,7 +242,7 @@ maxout_pieces = 3 factory = "tagger" [components.tagger.model] -@architectures = "spacy.Tagger.v1" +@architectures = "spacy.Tagger.v2" nO = null [components.tagger.model.tok2vec] @@ -908,9 +909,41 @@ def test_alignment(): spacy_tokens = ["i", "listened", "to", "obama", "'s", "podcasts", "."] align = Alignment.from_strings(other_tokens, spacy_tokens) assert list(align.x2y.lengths) == [1, 1, 1, 1, 1, 1, 1, 1] - assert list(align.x2y.dataXd) == [0, 1, 2, 3, 4, 4, 5, 6] + assert list(align.x2y.data) == [0, 1, 2, 3, 4, 4, 5, 6] assert list(align.y2x.lengths) == [1, 1, 1, 1, 2, 1, 1] - assert list(align.y2x.dataXd) == [0, 1, 2, 3, 4, 5, 6, 7] + assert list(align.y2x.data) == [0, 1, 2, 3, 4, 5, 6, 7] + + +def test_alignment_array(): + a = AlignmentArray([[0, 1, 2], [3], [], [4, 5, 6, 7], [8, 9]]) + assert list(a.data) == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] + assert list(a.lengths) == [3, 1, 0, 4, 2] + assert list(a[3]) == [4, 5, 6, 7] + assert list(a[2]) == [] + assert list(a[-2]) == [4, 5, 6, 7] + assert list(a[1:4]) == [3, 4, 5, 6, 7] + assert list(a[1:]) == [3, 4, 5, 6, 7, 8, 9] + assert list(a[:3]) == [0, 1, 2, 3] + assert list(a[:]) == list(a.data) + assert list(a[0:0]) == [] + assert list(a[3:3]) == [] + assert list(a[-1:-1]) == [] + with pytest.raises(ValueError, match=r"only supports slicing with a step of 1"): + a[:4:-1] + with pytest.raises( + 
ValueError, match=r"only supports indexing using an int or a slice" + ): + a[[0, 1, 3]] + + a = AlignmentArray([[], [1, 2, 3], [4, 5]]) + assert list(a[0]) == [] + assert list(a[0:1]) == [] + assert list(a[2]) == [4, 5] + assert list(a[0:2]) == [1, 2, 3] + + a = AlignmentArray([[1, 2, 3], [4, 5], []]) + assert list(a[-1]) == [] + assert list(a[-2:]) == [4, 5] def test_alignment_case_insensitive(): @@ -918,9 +951,9 @@ def test_alignment_case_insensitive(): spacy_tokens = ["i", "listened", "to", "Obama", "'s", "PODCASTS", "."] align = Alignment.from_strings(other_tokens, spacy_tokens) assert list(align.x2y.lengths) == [1, 1, 1, 1, 1, 1, 1, 1] - assert list(align.x2y.dataXd) == [0, 1, 2, 3, 4, 4, 5, 6] + assert list(align.x2y.data) == [0, 1, 2, 3, 4, 4, 5, 6] assert list(align.y2x.lengths) == [1, 1, 1, 1, 2, 1, 1] - assert list(align.y2x.dataXd) == [0, 1, 2, 3, 4, 5, 6, 7] + assert list(align.y2x.data) == [0, 1, 2, 3, 4, 5, 6, 7] def test_alignment_complex(): @@ -928,9 +961,9 @@ def test_alignment_complex(): spacy_tokens = ["i", "listened", "to", "obama", "'s", "podcasts."] align = Alignment.from_strings(other_tokens, spacy_tokens) assert list(align.x2y.lengths) == [3, 1, 1, 1, 1, 1] - assert list(align.x2y.dataXd) == [0, 1, 2, 3, 4, 4, 5, 5] + assert list(align.x2y.data) == [0, 1, 2, 3, 4, 4, 5, 5] assert list(align.y2x.lengths) == [1, 1, 1, 1, 2, 2] - assert list(align.y2x.dataXd) == [0, 0, 0, 1, 2, 3, 4, 5] + assert list(align.y2x.data) == [0, 0, 0, 1, 2, 3, 4, 5] def test_alignment_complex_example(en_vocab): @@ -947,9 +980,9 @@ def test_alignment_complex_example(en_vocab): example = Example(predicted, reference) align = example.alignment assert list(align.x2y.lengths) == [3, 1, 1, 1, 1, 1] - assert list(align.x2y.dataXd) == [0, 1, 2, 3, 4, 4, 5, 5] + assert list(align.x2y.data) == [0, 1, 2, 3, 4, 4, 5, 5] assert list(align.y2x.lengths) == [1, 1, 1, 1, 2, 2] - assert list(align.y2x.dataXd) == [0, 0, 0, 1, 2, 3, 4, 5] + assert list(align.y2x.data) == [0, 0, 0, 1, 2, 3, 4, 5] def test_alignment_different_texts(): @@ -965,70 +998,70 @@ def test_alignment_spaces(en_vocab): spacy_tokens = ["i", "listened", "to", "obama", "'s", "podcasts."] align = Alignment.from_strings(other_tokens, spacy_tokens) assert list(align.x2y.lengths) == [0, 3, 1, 1, 1, 1, 1] - assert list(align.x2y.dataXd) == [0, 1, 2, 3, 4, 4, 5, 5] + assert list(align.x2y.data) == [0, 1, 2, 3, 4, 4, 5, 5] assert list(align.y2x.lengths) == [1, 1, 1, 1, 2, 2] - assert list(align.y2x.dataXd) == [1, 1, 1, 2, 3, 4, 5, 6] + assert list(align.y2x.data) == [1, 1, 1, 2, 3, 4, 5, 6] # multiple leading whitespace tokens other_tokens = [" ", " ", "i listened to", "obama", "'", "s", "podcasts", "."] spacy_tokens = ["i", "listened", "to", "obama", "'s", "podcasts."] align = Alignment.from_strings(other_tokens, spacy_tokens) assert list(align.x2y.lengths) == [0, 0, 3, 1, 1, 1, 1, 1] - assert list(align.x2y.dataXd) == [0, 1, 2, 3, 4, 4, 5, 5] + assert list(align.x2y.data) == [0, 1, 2, 3, 4, 4, 5, 5] assert list(align.y2x.lengths) == [1, 1, 1, 1, 2, 2] - assert list(align.y2x.dataXd) == [2, 2, 2, 3, 4, 5, 6, 7] + assert list(align.y2x.data) == [2, 2, 2, 3, 4, 5, 6, 7] # both with leading whitespace, not identical other_tokens = [" ", " ", "i listened to", "obama", "'", "s", "podcasts", "."] spacy_tokens = [" ", "i", "listened", "to", "obama", "'s", "podcasts."] align = Alignment.from_strings(other_tokens, spacy_tokens) assert list(align.x2y.lengths) == [1, 0, 3, 1, 1, 1, 1, 1] - assert list(align.x2y.dataXd) == [0, 1, 2, 3, 4, 5, 5, 6, 6] + assert 
list(align.x2y.data) == [0, 1, 2, 3, 4, 5, 5, 6, 6] assert list(align.y2x.lengths) == [1, 1, 1, 1, 1, 2, 2] - assert list(align.y2x.dataXd) == [0, 2, 2, 2, 3, 4, 5, 6, 7] + assert list(align.y2x.data) == [0, 2, 2, 2, 3, 4, 5, 6, 7] # same leading whitespace, different tokenization other_tokens = [" ", " ", "i listened to", "obama", "'", "s", "podcasts", "."] spacy_tokens = [" ", "i", "listened", "to", "obama", "'s", "podcasts."] align = Alignment.from_strings(other_tokens, spacy_tokens) assert list(align.x2y.lengths) == [1, 1, 3, 1, 1, 1, 1, 1] - assert list(align.x2y.dataXd) == [0, 0, 1, 2, 3, 4, 5, 5, 6, 6] + assert list(align.x2y.data) == [0, 0, 1, 2, 3, 4, 5, 5, 6, 6] assert list(align.y2x.lengths) == [2, 1, 1, 1, 1, 2, 2] - assert list(align.y2x.dataXd) == [0, 1, 2, 2, 2, 3, 4, 5, 6, 7] + assert list(align.y2x.data) == [0, 1, 2, 2, 2, 3, 4, 5, 6, 7] # only one with trailing whitespace other_tokens = ["i listened to", "obama", "'", "s", "podcasts", ".", " "] spacy_tokens = ["i", "listened", "to", "obama", "'s", "podcasts."] align = Alignment.from_strings(other_tokens, spacy_tokens) assert list(align.x2y.lengths) == [3, 1, 1, 1, 1, 1, 0] - assert list(align.x2y.dataXd) == [0, 1, 2, 3, 4, 4, 5, 5] + assert list(align.x2y.data) == [0, 1, 2, 3, 4, 4, 5, 5] assert list(align.y2x.lengths) == [1, 1, 1, 1, 2, 2] - assert list(align.y2x.dataXd) == [0, 0, 0, 1, 2, 3, 4, 5] + assert list(align.y2x.data) == [0, 0, 0, 1, 2, 3, 4, 5] # different trailing whitespace other_tokens = ["i listened to", "obama", "'", "s", "podcasts", ".", " ", " "] spacy_tokens = ["i", "listened", "to", "obama", "'s", "podcasts.", " "] align = Alignment.from_strings(other_tokens, spacy_tokens) assert list(align.x2y.lengths) == [3, 1, 1, 1, 1, 1, 1, 0] - assert list(align.x2y.dataXd) == [0, 1, 2, 3, 4, 4, 5, 5, 6] + assert list(align.x2y.data) == [0, 1, 2, 3, 4, 4, 5, 5, 6] assert list(align.y2x.lengths) == [1, 1, 1, 1, 2, 2, 1] - assert list(align.y2x.dataXd) == [0, 0, 0, 1, 2, 3, 4, 5, 6] + assert list(align.y2x.data) == [0, 0, 0, 1, 2, 3, 4, 5, 6] # same trailing whitespace, different tokenization other_tokens = ["i listened to", "obama", "'", "s", "podcasts", ".", " ", " "] spacy_tokens = ["i", "listened", "to", "obama", "'s", "podcasts.", " "] align = Alignment.from_strings(other_tokens, spacy_tokens) assert list(align.x2y.lengths) == [3, 1, 1, 1, 1, 1, 1, 1] - assert list(align.x2y.dataXd) == [0, 1, 2, 3, 4, 4, 5, 5, 6, 6] + assert list(align.x2y.data) == [0, 1, 2, 3, 4, 4, 5, 5, 6, 6] assert list(align.y2x.lengths) == [1, 1, 1, 1, 2, 2, 2] - assert list(align.y2x.dataXd) == [0, 0, 0, 1, 2, 3, 4, 5, 6, 7] + assert list(align.y2x.data) == [0, 0, 0, 1, 2, 3, 4, 5, 6, 7] # differing whitespace is allowed other_tokens = ["a", " \n ", "b", "c"] spacy_tokens = ["a", "b", " ", "c"] align = Alignment.from_strings(other_tokens, spacy_tokens) - assert list(align.x2y.dataXd) == [0, 1, 3] - assert list(align.y2x.dataXd) == [0, 2, 3] + assert list(align.x2y.data) == [0, 1, 3] + assert list(align.y2x.data) == [0, 2, 3] # other differences in whitespace are allowed other_tokens = [" ", "a"] diff --git a/spacy/tests/universe/test_universe_json.py b/spacy/tests/universe/test_universe_json.py deleted file mode 100644 index 295889186..000000000 --- a/spacy/tests/universe/test_universe_json.py +++ /dev/null @@ -1,17 +0,0 @@ -import json -import re -from pathlib import Path - - -def test_universe_json(): - - root_dir = Path(__file__).parent - universe_file = root_dir / "universe.json" - - with universe_file.open() as f: - universe_data = 
json.load(f) - for entry in universe_data["resources"]: - if "github" in entry: - assert not re.match( - r"^(http:)|^(https:)", entry["github"] - ), "Github field should be user/repo, not a url" diff --git a/spacy/tests/vocab_vectors/test_vectors.py b/spacy/tests/vocab_vectors/test_vectors.py index 0650a7487..e3ad206f4 100644 --- a/spacy/tests/vocab_vectors/test_vectors.py +++ b/spacy/tests/vocab_vectors/test_vectors.py @@ -455,6 +455,39 @@ def test_vectors_get_batch(): assert_equal(OPS.to_numpy(vecs), OPS.to_numpy(v.get_batch(words))) +def test_vectors_deduplicate(): + data = OPS.asarray([[1, 1], [2, 2], [3, 4], [1, 1], [3, 4]], dtype="f") + v = Vectors(data=data, keys=["a1", "b1", "c1", "a2", "c2"]) + vocab = Vocab() + vocab.vectors = v + # duplicate vectors do not use the same keys + assert ( + vocab.vectors.key2row[v.strings["a1"]] != vocab.vectors.key2row[v.strings["a2"]] + ) + assert ( + vocab.vectors.key2row[v.strings["c1"]] != vocab.vectors.key2row[v.strings["c2"]] + ) + vocab.deduplicate_vectors() + # there are three unique vectors + assert vocab.vectors.shape[0] == 3 + # the uniqued data is the same as the deduplicated data + assert_equal( + numpy.unique(OPS.to_numpy(vocab.vectors.data), axis=0), + OPS.to_numpy(vocab.vectors.data), + ) + # duplicate vectors use the same keys now + assert ( + vocab.vectors.key2row[v.strings["a1"]] == vocab.vectors.key2row[v.strings["a2"]] + ) + assert ( + vocab.vectors.key2row[v.strings["c1"]] == vocab.vectors.key2row[v.strings["c2"]] + ) + # deduplicating again makes no changes + vocab_b = vocab.to_bytes() + vocab.deduplicate_vectors() + assert vocab_b == vocab.to_bytes() + + @pytest.fixture() def floret_vectors_hashvec_str(): """The full hashvec table from floret with the settings: @@ -535,6 +568,10 @@ def test_floret_vectors(floret_vectors_vec_str, floret_vectors_hashvec_str): # every word has a vector assert nlp.vocab[word * 5].has_vector + # n_keys is -1 for floret + assert nlp_plain.vocab.vectors.n_keys > 0 + assert nlp.vocab.vectors.n_keys == -1 + # check that single and batched vector lookups are identical words = [s for s in nlp_plain.vocab.vectors] single_vecs = OPS.to_numpy(OPS.asarray([nlp.vocab[word].vector for word in words])) diff --git a/spacy/tokenizer.pxd b/spacy/tokenizer.pxd index fa38a1015..e6a072053 100644 --- a/spacy/tokenizer.pxd +++ b/spacy/tokenizer.pxd @@ -23,9 +23,10 @@ cdef class Tokenizer: cdef object _infix_finditer cdef object _rules cdef PhraseMatcher _special_matcher - # TODO next two are unused and should be removed in v4 + # TODO convert to bool in v4 + cdef int _faster_heuristics + # TODO next one is unused and should be removed in v4 # https://github.com/explosion/spaCy/pull/9150 - cdef int _unused_int1 cdef int _unused_int2 cdef Doc _tokenize_affixes(self, str string, bint with_special_cases) diff --git a/spacy/tokenizer.pyx b/spacy/tokenizer.pyx index 4a148b356..0e75b5f7a 100644 --- a/spacy/tokenizer.pyx +++ b/spacy/tokenizer.pyx @@ -34,7 +34,7 @@ cdef class Tokenizer: """ def __init__(self, Vocab vocab, rules=None, prefix_search=None, suffix_search=None, infix_finditer=None, token_match=None, - url_match=None): + url_match=None, faster_heuristics=True): """Create a `Tokenizer`, to create `Doc` objects given unicode text. vocab (Vocab): A storage container for lexical types. @@ -43,7 +43,7 @@ cdef class Tokenizer: `re.compile(string).search` to match prefixes. suffix_search (callable): A function matching the signature of `re.compile(string).search` to match suffixes. 
- `infix_finditer` (callable): A function matching the signature of + infix_finditer (callable): A function matching the signature of `re.compile(string).finditer` to find infixes. token_match (callable): A function matching the signature of `re.compile(string).match`, for matching strings to be @@ -51,6 +51,9 @@ cdef class Tokenizer: url_match (callable): A function matching the signature of `re.compile(string).match`, for matching strings to be recognized as urls. + faster_heuristics (bool): Whether to restrict the final + Matcher-based pass for rules to those containing affixes or space. + Defaults to True. EXAMPLE: >>> tokenizer = Tokenizer(nlp.vocab) @@ -66,6 +69,7 @@ cdef class Tokenizer: self.suffix_search = suffix_search self.infix_finditer = infix_finditer self.vocab = vocab + self.faster_heuristics = faster_heuristics self._rules = {} self._special_matcher = PhraseMatcher(self.vocab) self._load_special_cases(rules) @@ -122,6 +126,14 @@ cdef class Tokenizer: self._specials = PreshMap() self._load_special_cases(rules) + property faster_heuristics: + def __get__(self): + return bool(self._faster_heuristics) + + def __set__(self, faster_heuristics): + self._faster_heuristics = bool(faster_heuristics) + self._reload_special_cases() + def __reduce__(self): args = (self.vocab, self.rules, @@ -287,7 +299,7 @@ cdef class Tokenizer: spans = [doc[match.start:match.end] for match in filtered] cdef bint modify_in_place = True cdef int curr_length = doc.length - cdef int max_length + cdef int max_length = 0 cdef int span_length_diff = 0 span_data = {} for span in spans: @@ -602,7 +614,7 @@ cdef class Tokenizer: self.mem.free(stale_special) self._rules[string] = substrings self._flush_cache() - if self.find_prefix(string) or self.find_infix(string) or self.find_suffix(string) or " " in string: + if not self.faster_heuristics or self.find_prefix(string) or self.find_infix(string) or self.find_suffix(string) or " " in string: self._special_matcher.add(string, None, self._tokenize_affixes(string, False)) def _reload_special_cases(self): @@ -643,6 +655,10 @@ cdef class Tokenizer: for substring in text.split(): suffixes = [] while substring: + if substring in special_cases: + tokens.extend(("SPECIAL-" + str(i + 1), self.vocab.strings[e[ORTH]]) for i, e in enumerate(special_cases[substring])) + substring = '' + continue while prefix_search(substring) or suffix_search(substring): if token_match(substring): tokens.append(("TOKEN_MATCH", substring)) @@ -683,6 +699,8 @@ cdef class Tokenizer: infixes = infix_finditer(substring) offset = 0 for match in infixes: + if offset == 0 and match.start() == 0: + continue if substring[offset : match.start()]: tokens.append(("TOKEN", substring[offset : match.start()])) if substring[match.start() : match.end()]: @@ -771,7 +789,8 @@ cdef class Tokenizer: "infix_finditer": lambda: _get_regex_pattern(self.infix_finditer), "token_match": lambda: _get_regex_pattern(self.token_match), "url_match": lambda: _get_regex_pattern(self.url_match), - "exceptions": lambda: dict(sorted(self._rules.items())) + "exceptions": lambda: dict(sorted(self._rules.items())), + "faster_heuristics": lambda: self.faster_heuristics, } return util.to_bytes(serializers, exclude) @@ -792,7 +811,8 @@ cdef class Tokenizer: "infix_finditer": lambda b: data.setdefault("infix_finditer", b), "token_match": lambda b: data.setdefault("token_match", b), "url_match": lambda b: data.setdefault("url_match", b), - "exceptions": lambda b: data.setdefault("rules", b) + "exceptions": lambda b: 
data.setdefault("rules", b), + "faster_heuristics": lambda b: data.setdefault("faster_heuristics", b), } # reset all properties and flush all caches (through rules), # reset rules first so that _reload_special_cases is trivial/fast as @@ -816,6 +836,8 @@ cdef class Tokenizer: self.url_match = re.compile(data["url_match"]).match if "rules" in data and isinstance(data["rules"], dict): self.rules = data["rules"] + if "faster_heuristics" in data: + self.faster_heuristics = data["faster_heuristics"] return self diff --git a/spacy/tokens/_dict_proxies.py b/spacy/tokens/_dict_proxies.py index 470d3430f..8643243fa 100644 --- a/spacy/tokens/_dict_proxies.py +++ b/spacy/tokens/_dict_proxies.py @@ -6,6 +6,7 @@ import srsly from .span_group import SpanGroup from ..errors import Errors + if TYPE_CHECKING: # This lets us add type hints for mypy etc. without causing circular imports from .doc import Doc # noqa: F401 @@ -19,6 +20,8 @@ if TYPE_CHECKING: class SpanGroups(UserDict): """A dict-like proxy held by the Doc, to control access to span groups.""" + _EMPTY_BYTES = srsly.msgpack_dumps([]) + def __init__( self, doc: "Doc", items: Iterable[Tuple[str, SpanGroup]] = tuple() ) -> None: @@ -43,11 +46,13 @@ class SpanGroups(UserDict): def to_bytes(self) -> bytes: # We don't need to serialize this as a dict, because the groups # know their names. + if len(self) == 0: + return self._EMPTY_BYTES msg = [value.to_bytes() for value in self.values()] return srsly.msgpack_dumps(msg) def from_bytes(self, bytes_data: bytes) -> "SpanGroups": - msg = srsly.msgpack_loads(bytes_data) + msg = [] if bytes_data == self._EMPTY_BYTES else srsly.msgpack_loads(bytes_data) self.clear() doc = self._ensure_doc() for value_bytes in msg: diff --git a/spacy/tokens/_serialize.py b/spacy/tokens/_serialize.py index bd2bdb811..c4e8f26f4 100644 --- a/spacy/tokens/_serialize.py +++ b/spacy/tokens/_serialize.py @@ -12,6 +12,7 @@ from ..compat import copy_reg from ..attrs import SPACY, ORTH, intify_attr, IDS from ..errors import Errors from ..util import ensure_path, SimpleFrozenList +from ._dict_proxies import SpanGroups # fmt: off ALL_ATTRS = ("ORTH", "NORM", "TAG", "HEAD", "DEP", "ENT_IOB", "ENT_TYPE", "ENT_KB_ID", "ENT_ID", "LEMMA", "MORPH", "POS", "SENT_START") @@ -146,7 +147,8 @@ class DocBin: doc = Doc(vocab, words=tokens[:, orth_col], spaces=spaces) # type: ignore doc = doc.from_array(self.attrs, tokens) # type: ignore doc.cats = self.cats[i] - if self.span_groups[i]: + # backwards-compatibility: may be b'' or serialized empty list + if self.span_groups[i] and self.span_groups[i] != SpanGroups._EMPTY_BYTES: doc.spans.from_bytes(self.span_groups[i]) else: doc.spans.clear() diff --git a/spacy/tokens/doc.pyi b/spacy/tokens/doc.pyi index f540002c9..7e9340d58 100644 --- a/spacy/tokens/doc.pyi +++ b/spacy/tokens/doc.pyi @@ -10,7 +10,7 @@ from ..lexeme import Lexeme from ..vocab import Vocab from .underscore import Underscore from pathlib import Path -import numpy +import numpy as np class DocMethod(Protocol): def __call__(self: Doc, *args: Any, **kwargs: Any) -> Any: ... # type: ignore[misc] @@ -26,7 +26,7 @@ class Doc: user_hooks: Dict[str, Callable[..., Any]] user_token_hooks: Dict[str, Callable[..., Any]] user_span_hooks: Dict[str, Callable[..., Any]] - tensor: numpy.ndarray + tensor: np.ndarray[Any, np.dtype[np.float_]] user_data: Dict[str, Any] has_unknown_spaces: bool _context: Any @@ -144,7 +144,7 @@ class Doc: ) -> Doc: ... def to_array( self, py_attr_ids: Union[int, str, List[Union[int, str]]] - ) -> numpy.ndarray: ... 
+ ) -> np.ndarray[Any, np.dtype[np.float_]]: ... @staticmethod def from_docs( docs: List[Doc], diff --git a/spacy/tokens/doc.pyx b/spacy/tokens/doc.pyx index 5a0db115d..1a48705fd 100644 --- a/spacy/tokens/doc.pyx +++ b/spacy/tokens/doc.pyx @@ -420,6 +420,8 @@ cdef class Doc: cdef int range_start = 0 if attr == "IS_SENT_START" or attr == self.vocab.strings["IS_SENT_START"]: attr = SENT_START + elif attr == "IS_SENT_END" or attr == self.vocab.strings["IS_SENT_END"]: + attr = SENT_START attr = intify_attr(attr) # adjust attributes if attr == HEAD: @@ -1455,7 +1457,7 @@ cdef class Doc: underscore (list): Optional list of string names of custom doc._. attributes. Attribute values need to be JSON-serializable. Values will be added to an "_" key in the data, e.g. "_": {"foo": "bar"}. - RETURNS (dict): The data in spaCy's JSON format. + RETURNS (dict): The data in JSON format. """ data = {"text": self.text} if self.has_annotation("ENT_IOB"): @@ -1484,6 +1486,15 @@ cdef class Doc: token_data["dep"] = token.dep_ token_data["head"] = token.head.i data["tokens"].append(token_data) + + if self.spans: + data["spans"] = {} + for span_group in self.spans: + data["spans"][span_group] = [] + for span in self.spans[span_group]: + span_data = {"start": span.start_char, "end": span.end_char, "label": span.label_, "kb_id": span.kb_id_} + data["spans"][span_group].append(span_data) + if underscore: data["_"] = {} for attr in underscore: diff --git a/spacy/tokens/span.pyx b/spacy/tokens/span.pyx index f7ddc5136..4b0c724e5 100644 --- a/spacy/tokens/span.pyx +++ b/spacy/tokens/span.pyx @@ -126,38 +126,26 @@ cdef class Span: return False else: return True + self_tuple = (self.c.start_char, self.c.end_char, self.c.label, self.c.kb_id, self.doc) + other_tuple = (other.c.start_char, other.c.end_char, other.c.label, other.c.kb_id, other.doc) # < if op == 0: - return self.c.start_char < other.c.start_char + return self_tuple < other_tuple # <= elif op == 1: - return self.c.start_char <= other.c.start_char + return self_tuple <= other_tuple # == elif op == 2: - # Do the cheap comparisons first - return ( - (self.c.start_char == other.c.start_char) and \ - (self.c.end_char == other.c.end_char) and \ - (self.c.label == other.c.label) and \ - (self.c.kb_id == other.c.kb_id) and \ - (self.doc == other.doc) - ) + return self_tuple == other_tuple # != elif op == 3: - # Do the cheap comparisons first - return not ( - (self.c.start_char == other.c.start_char) and \ - (self.c.end_char == other.c.end_char) and \ - (self.c.label == other.c.label) and \ - (self.c.kb_id == other.c.kb_id) and \ - (self.doc == other.doc) - ) + return self_tuple != other_tuple # > elif op == 4: - return self.c.start_char > other.c.start_char + return self_tuple > other_tuple # >= elif op == 5: - return self.c.start_char >= other.c.start_char + return self_tuple >= other_tuple def __hash__(self): return hash((self.doc, self.c.start_char, self.c.end_char, self.c.label, self.c.kb_id)) @@ -471,8 +459,8 @@ cdef class Span: @property def ents(self): - """The named entities in the span. Returns a tuple of named entity - `Span` objects, if the entity recognizer has been applied. + """The named entities that fall completely within the span. Returns + a tuple of `Span` objects. RETURNS (tuple): Entities in the span, one `Span` per entity. 
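As a quick illustration of the `Span.__richcmp__` change above (this example is not part of the diff; the text and spans are invented, and it assumes a spaCy build that includes this change): spans are now ordered by the tuple `(start_char, end_char, label, kb_id, doc)` rather than by `start_char` alone, so spans that share a start offset are tie-broken by their end offset.

```python
import spacy

nlp = spacy.blank("en")
doc = nlp("New York City is large.")

# Spans with the same start_char are now tie-broken by end_char
# (and then label and kb_id) instead of comparing start_char only.
spans = [doc[0:3], doc[0:2], doc[3:4]]  # "New York City", "New York", "is"
print(sorted(spans))  # [New York, New York City, is]
```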
diff --git a/spacy/tokens/token.pyx b/spacy/tokens/token.pyx index b515ab67b..d14930348 100644 --- a/spacy/tokens/token.pyx +++ b/spacy/tokens/token.pyx @@ -487,8 +487,6 @@ cdef class Token: RETURNS (bool / None): Whether the token starts a sentence. None if unknown. - - DOCS: https://spacy.io/api/token#is_sent_start """ def __get__(self): if self.c.sent_start == 0: diff --git a/spacy/tokens/underscore.py b/spacy/tokens/underscore.py index 7fa7bf095..e9a4e1862 100644 --- a/spacy/tokens/underscore.py +++ b/spacy/tokens/underscore.py @@ -1,17 +1,31 @@ -from typing import Dict, Any +from typing import Dict, Any, List, Optional, Tuple, Union, TYPE_CHECKING import functools import copy - from ..errors import Errors +if TYPE_CHECKING: + from .doc import Doc + from .span import Span + from .token import Token + class Underscore: mutable_types = (dict, list, set) doc_extensions: Dict[Any, Any] = {} span_extensions: Dict[Any, Any] = {} token_extensions: Dict[Any, Any] = {} + _extensions: Dict[str, Any] + _obj: Union["Doc", "Span", "Token"] + _start: Optional[int] + _end: Optional[int] - def __init__(self, extensions, obj, start=None, end=None): + def __init__( + self, + extensions: Dict[str, Any], + obj: Union["Doc", "Span", "Token"], + start: Optional[int] = None, + end: Optional[int] = None, + ): object.__setattr__(self, "_extensions", extensions) object.__setattr__(self, "_obj", obj) # Assumption is that for doc values, _start and _end will both be None @@ -23,12 +37,12 @@ class Underscore: object.__setattr__(self, "_start", start) object.__setattr__(self, "_end", end) - def __dir__(self): + def __dir__(self) -> List[str]: # Hack to enable autocomplete on custom extensions extensions = list(self._extensions.keys()) return ["set", "get", "has"] + extensions - def __getattr__(self, name): + def __getattr__(self, name: str) -> Any: if name not in self._extensions: raise AttributeError(Errors.E046.format(name=name)) default, method, getter, setter = self._extensions[name] @@ -56,7 +70,7 @@ class Underscore: return new_default return default - def __setattr__(self, name, value): + def __setattr__(self, name: str, value: Any): if name not in self._extensions: raise AttributeError(Errors.E047.format(name=name)) default, method, getter, setter = self._extensions[name] @@ -65,28 +79,30 @@ class Underscore: else: self._doc.user_data[self._get_key(name)] = value - def set(self, name, value): + def set(self, name: str, value: Any): return self.__setattr__(name, value) - def get(self, name): + def get(self, name: str) -> Any: return self.__getattr__(name) - def has(self, name): + def has(self, name: str) -> bool: return name in self._extensions - def _get_key(self, name): + def _get_key(self, name: str) -> Tuple[str, str, Optional[int], Optional[int]]: return ("._.", name, self._start, self._end) @classmethod - def get_state(cls): + def get_state(cls) -> Tuple[Dict[Any, Any], Dict[Any, Any], Dict[Any, Any]]: return cls.token_extensions, cls.span_extensions, cls.doc_extensions @classmethod - def load_state(cls, state): + def load_state( + cls, state: Tuple[Dict[Any, Any], Dict[Any, Any], Dict[Any, Any]] + ) -> None: cls.token_extensions, cls.span_extensions, cls.doc_extensions = state -def get_ext_args(**kwargs): +def get_ext_args(**kwargs: Any): """Validate and convert arguments. 
Reused in Doc, Token and Span."""
    default = kwargs.get("default")
    getter = kwargs.get("getter")
diff --git a/spacy/training/alignment.py b/spacy/training/alignment.py
index 3e3b60ca6..6d24714bf 100644
--- a/spacy/training/alignment.py
+++ b/spacy/training/alignment.py
@@ -1,31 +1,22 @@
 from typing import List
-import numpy
-from thinc.types import Ragged
 from dataclasses import dataclass
 
 from .align import get_alignments
+from .alignment_array import AlignmentArray
 
 
 @dataclass
 class Alignment:
-    x2y: Ragged
-    y2x: Ragged
+    x2y: AlignmentArray
+    y2x: AlignmentArray
 
     @classmethod
     def from_indices(cls, x2y: List[List[int]], y2x: List[List[int]]) -> "Alignment":
-        x2y = _make_ragged(x2y)
-        y2x = _make_ragged(y2x)
+        x2y = AlignmentArray(x2y)
+        y2x = AlignmentArray(y2x)
         return Alignment(x2y=x2y, y2x=y2x)
 
     @classmethod
     def from_strings(cls, A: List[str], B: List[str]) -> "Alignment":
         x2y, y2x = get_alignments(A, B)
         return Alignment.from_indices(x2y=x2y, y2x=y2x)
-
-
-def _make_ragged(indices):
-    lengths = numpy.array([len(x) for x in indices], dtype="i")
-    flat = []
-    for x in indices:
-        flat.extend(x)
-    return Ragged(numpy.array(flat, dtype="i"), lengths)
diff --git a/spacy/training/alignment_array.pxd b/spacy/training/alignment_array.pxd
new file mode 100644
index 000000000..056f5bef3
--- /dev/null
+++ b/spacy/training/alignment_array.pxd
@@ -0,0 +1,7 @@
+from libcpp.vector cimport vector
+cimport numpy as np
+
+cdef class AlignmentArray:
+    cdef np.ndarray _data
+    cdef np.ndarray _lengths
+    cdef np.ndarray _starts_ends
diff --git a/spacy/training/alignment_array.pyx b/spacy/training/alignment_array.pyx
new file mode 100644
index 000000000..b58f08786
--- /dev/null
+++ b/spacy/training/alignment_array.pyx
@@ -0,0 +1,68 @@
+from typing import List
+from ..errors import Errors
+import numpy
+
+
+cdef class AlignmentArray:
+    """AlignmentArray is similar to Thinc's Ragged with two simplifications:
+    indexing returns numpy arrays and this type can only be used for CPU arrays.
+ However, these changes make AlginmentArray more efficient for indexing in a + tight loop.""" + + __slots__ = [] + + def __init__(self, alignment: List[List[int]]): + self._lengths = None + self._starts_ends = numpy.zeros(len(alignment) + 1, dtype="i") + + cdef int data_len = 0 + cdef int outer_len + cdef int idx + for idx, outer in enumerate(alignment): + outer_len = len(outer) + self._starts_ends[idx + 1] = self._starts_ends[idx] + outer_len + data_len += outer_len + + self._data = numpy.empty(data_len, dtype="i") + idx = 0 + for outer in alignment: + for inner in outer: + self._data[idx] = inner + idx += 1 + + def __getitem__(self, idx): + starts = self._starts_ends[:-1] + ends = self._starts_ends[1:] + if isinstance(idx, int): + start = starts[idx] + end = ends[idx] + elif isinstance(idx, slice): + if not (idx.step is None or idx.step == 1): + raise ValueError(Errors.E1027) + start = starts[idx] + if len(start) == 0: + return self._data[0:0] + start = start[0] + end = ends[idx][-1] + else: + raise ValueError(Errors.E1028) + + return self._data[start:end] + + @property + def data(self): + return self._data + + @property + def lengths(self): + if self._lengths is None: + self._lengths = self.ends - self.starts + return self._lengths + + @property + def ends(self): + return self._starts_ends[1:] + + @property + def starts(self): + return self._starts_ends[:-1] diff --git a/spacy/training/augment.py b/spacy/training/augment.py index 63b54034c..59a39c7ee 100644 --- a/spacy/training/augment.py +++ b/spacy/training/augment.py @@ -1,4 +1,5 @@ from typing import Callable, Iterator, Dict, List, Tuple, TYPE_CHECKING +from typing import Optional import random import itertools from functools import partial @@ -11,32 +12,87 @@ if TYPE_CHECKING: from ..language import Language # noqa: F401 -class OrthVariantsSingle(BaseModel): - tags: List[StrictStr] - variants: List[StrictStr] +@registry.augmenters("spacy.combined_augmenter.v1") +def create_combined_augmenter( + lower_level: float, + orth_level: float, + orth_variants: Optional[Dict[str, List[Dict]]], + whitespace_level: float, + whitespace_per_token: float, + whitespace_variants: Optional[List[str]], +) -> Callable[["Language", Example], Iterator[Example]]: + """Create a data augmentation callback that uses orth-variant replacement. + The callback can be added to a corpus or other data iterator during training. + + lower_level (float): The percentage of texts that will be lowercased. + orth_level (float): The percentage of texts that will be augmented. + orth_variants (Optional[Dict[str, List[Dict]]]): A dictionary containing the + single and paired orth variants. Typically loaded from a JSON file. + whitespace_level (float): The percentage of texts that will have whitespace + tokens inserted. + whitespace_per_token (float): The number of whitespace tokens to insert in + the modified doc as a percentage of the doc length. + whitespace_variants (Optional[List[str]]): The whitespace token texts. + RETURNS (Callable[[Language, Example], Iterator[Example]]): The augmenter. 
+ """ + return partial( + combined_augmenter, + lower_level=lower_level, + orth_level=orth_level, + orth_variants=orth_variants, + whitespace_level=whitespace_level, + whitespace_per_token=whitespace_per_token, + whitespace_variants=whitespace_variants, + ) -class OrthVariantsPaired(BaseModel): - tags: List[StrictStr] - variants: List[List[StrictStr]] - - -class OrthVariants(BaseModel): - paired: List[OrthVariantsPaired] = [] - single: List[OrthVariantsSingle] = [] +def combined_augmenter( + nlp: "Language", + example: Example, + *, + lower_level: float = 0.0, + orth_level: float = 0.0, + orth_variants: Optional[Dict[str, List[Dict]]] = None, + whitespace_level: float = 0.0, + whitespace_per_token: float = 0.0, + whitespace_variants: Optional[List[str]] = None, +) -> Iterator[Example]: + if random.random() < lower_level: + example = make_lowercase_variant(nlp, example) + if orth_variants and random.random() < orth_level: + raw_text = example.text + orig_dict = example.to_dict() + variant_text, variant_token_annot = make_orth_variants( + nlp, + raw_text, + orig_dict["token_annotation"], + orth_variants, + lower=False, + ) + orig_dict["token_annotation"] = variant_token_annot + example = example.from_dict(nlp.make_doc(variant_text), orig_dict) + if whitespace_variants and random.random() < whitespace_level: + for _ in range(int(len(example.reference) * whitespace_per_token)): + example = make_whitespace_variant( + nlp, + example, + random.choice(whitespace_variants), + random.randrange(0, len(example.reference)), + ) + yield example @registry.augmenters("spacy.orth_variants.v1") def create_orth_variants_augmenter( - level: float, lower: float, orth_variants: OrthVariants + level: float, lower: float, orth_variants: Dict[str, List[Dict]] ) -> Callable[["Language", Example], Iterator[Example]]: """Create a data augmentation callback that uses orth-variant replacement. The callback can be added to a corpus or other data iterator during training. level (float): The percentage of texts that will be augmented. lower (float): The percentage of texts that will be lowercased. - orth_variants (Dict[str, dict]): A dictionary containing the single and - paired orth variants. Typically loaded from a JSON file. + orth_variants (Dict[str, List[Dict]]): A dictionary containing + the single and paired orth variants. Typically loaded from a JSON file. RETURNS (Callable[[Language, Example], Iterator[Example]]): The augmenter. 
""" return partial( @@ -67,16 +123,20 @@ def lower_casing_augmenter( if random.random() >= level: yield example else: - example_dict = example.to_dict() - doc = nlp.make_doc(example.text.lower()) - example_dict["token_annotation"]["ORTH"] = [t.lower_ for t in example.reference] - yield example.from_dict(doc, example_dict) + yield make_lowercase_variant(nlp, example) + + +def make_lowercase_variant(nlp: "Language", example: Example): + example_dict = example.to_dict() + doc = nlp.make_doc(example.text.lower()) + example_dict["token_annotation"]["ORTH"] = [t.lower_ for t in example.reference] + return example.from_dict(doc, example_dict) def orth_variants_augmenter( nlp: "Language", example: Example, - orth_variants: Dict, + orth_variants: Dict[str, List[Dict]], *, level: float = 0.0, lower: float = 0.0, @@ -148,10 +208,132 @@ def make_orth_variants( pair_idx = pair.index(words[word_idx]) words[word_idx] = punct_choices[punct_idx][pair_idx] token_dict["ORTH"] = words - # construct modified raw text from words and spaces + raw = construct_modified_raw_text(token_dict) + return raw, token_dict + + +def make_whitespace_variant( + nlp: "Language", + example: Example, + whitespace: str, + position: int, +) -> Example: + """Insert the whitespace token at the specified token offset in the doc. + This is primarily intended for v2-compatible training data that doesn't + include links or spans. If the document includes links, spans, or partial + dependency annotation, it is returned without modifications. + + The augmentation follows the basics of the v2 space attachment policy, but + without a distinction between "real" and other tokens, so space tokens + may be attached to space tokens: + - at the beginning of a sentence attach the space token to the following + token + - otherwise attach the space token to the preceding token + + The augmenter does not attempt to consolidate adjacent whitespace in the + same way that the tokenizer would. + + The following annotation is used for the space token: + TAG: "_SP" + MORPH: "" + POS: "SPACE" + LEMMA: ORTH + DEP: "dep" + SENT_START: False + + The annotation for each attribute is only set for the space token if there + is already at least partial annotation for that attribute in the original + example. + + RETURNS (Example): Example with one additional space token. 
+ """ + example_dict = example.to_dict() + doc_dict = example_dict.get("doc_annotation", {}) + token_dict = example_dict.get("token_annotation", {}) + # returned unmodified if: + # - doc is empty + # - words are not defined + # - links are defined (only character-based offsets, which is more a quirk + # of Example.to_dict than a technical constraint) + # - spans are defined + # - there are partial dependencies + if ( + len(example.reference) == 0 + or "ORTH" not in token_dict + or len(doc_dict.get("links", [])) > 0 + or len(example.reference.spans) > 0 + or ( + example.reference.has_annotation("DEP") + and not example.reference.has_annotation("DEP", require_complete=True) + ) + ): + return example + words = token_dict.get("ORTH", []) + length = len(words) + assert 0 <= position <= length + if example.reference.has_annotation("ENT_TYPE"): + # I-ENTITY if between B/I-ENTITY and I/L-ENTITY otherwise O + entity = "O" + if position > 1 and position < length: + ent_prev = doc_dict["entities"][position - 1] + ent_next = doc_dict["entities"][position] + if "-" in ent_prev and "-" in ent_next: + ent_iob_prev = ent_prev.split("-")[0] + ent_type_prev = ent_prev.split("-", 1)[1] + ent_iob_next = ent_next.split("-")[0] + ent_type_next = ent_next.split("-", 1)[1] + if ( + ent_iob_prev in ("B", "I") + and ent_iob_next in ("I", "L") + and ent_type_prev == ent_type_next + ): + entity = f"I-{ent_type_prev}" + doc_dict["entities"].insert(position, entity) + else: + del doc_dict["entities"] + token_dict["ORTH"].insert(position, whitespace) + token_dict["SPACY"].insert(position, False) + if example.reference.has_annotation("TAG"): + token_dict["TAG"].insert(position, "_SP") + else: + del token_dict["TAG"] + if example.reference.has_annotation("LEMMA"): + token_dict["LEMMA"].insert(position, whitespace) + else: + del token_dict["LEMMA"] + if example.reference.has_annotation("POS"): + token_dict["POS"].insert(position, "SPACE") + else: + del token_dict["POS"] + if example.reference.has_annotation("MORPH"): + token_dict["MORPH"].insert(position, "") + else: + del token_dict["MORPH"] + if example.reference.has_annotation("DEP", require_complete=True): + if position == 0: + token_dict["HEAD"].insert(position, 0) + else: + token_dict["HEAD"].insert(position, position - 1) + for i in range(len(token_dict["HEAD"])): + if token_dict["HEAD"][i] >= position: + token_dict["HEAD"][i] += 1 + token_dict["DEP"].insert(position, "dep") + else: + del token_dict["HEAD"] + del token_dict["DEP"] + if example.reference.has_annotation("SENT_START"): + token_dict["SENT_START"].insert(position, False) + else: + del token_dict["SENT_START"] + raw = construct_modified_raw_text(token_dict) + return Example.from_dict(nlp.make_doc(raw), example_dict) + + +def construct_modified_raw_text(token_dict): + """Construct modified raw text from words and spaces.""" raw = "" for orth, spacy in zip(token_dict["ORTH"], token_dict["SPACY"]): raw += orth if spacy: raw += " " - return raw, token_dict + return raw diff --git a/spacy/training/converters/conllu_to_docs.py b/spacy/training/converters/conllu_to_docs.py index 66156b6e5..7052504cc 100644 --- a/spacy/training/converters/conllu_to_docs.py +++ b/spacy/training/converters/conllu_to_docs.py @@ -71,6 +71,7 @@ def read_conllx( ): """Yield docs, one for each sentence""" vocab = Vocab() # need vocab to make a minimal Doc + set_ents = has_ner(input_data, ner_tag_pattern) for sent in input_data.strip().split("\n\n"): lines = sent.strip().split("\n") if lines: @@ -83,6 +84,7 @@ def read_conllx( 
merge_subtokens=merge_subtokens, append_morphology=append_morphology, ner_map=ner_map, + set_ents=set_ents, ) yield doc @@ -133,6 +135,7 @@ def conllu_sentence_to_doc( merge_subtokens=False, append_morphology=False, ner_map=None, + set_ents=False, ): """Create an Example from the lines for one CoNLL-U sentence, merging subtokens and appending morphology to tags if required. @@ -188,6 +191,7 @@ def conllu_sentence_to_doc( id_ = int(id_) - 1 head = (int(head) - 1) if head not in ("0", "_") else id_ tag = pos if tag == "_" else tag + pos = pos if pos != "_" else "" morph = morph if morph != "_" else "" dep = "ROOT" if dep == "root" else dep lemmas.append(lemma) @@ -213,8 +217,10 @@ def conllu_sentence_to_doc( doc[i]._.merged_morph = morphs[i] doc[i]._.merged_lemma = lemmas[i] doc[i]._.merged_spaceafter = spaces[i] - ents = get_entities(lines, ner_tag_pattern, ner_map) - doc.ents = biluo_tags_to_spans(doc, ents) + ents = None + if set_ents: + ents = get_entities(lines, ner_tag_pattern, ner_map) + doc.ents = biluo_tags_to_spans(doc, ents) if merge_subtokens: doc = merge_conllu_subtokens(lines, doc) @@ -246,7 +252,10 @@ def conllu_sentence_to_doc( deps=deps, heads=heads, ) - doc_x.ents = [Span(doc_x, ent.start, ent.end, label=ent.label) for ent in doc.ents] + if set_ents: + doc_x.ents = [ + Span(doc_x, ent.start, ent.end, label=ent.label) for ent in doc.ents + ] return doc_x diff --git a/spacy/training/example.pyx b/spacy/training/example.pyx index 5357b5c0b..008be1b99 100644 --- a/spacy/training/example.pyx +++ b/spacy/training/example.pyx @@ -158,20 +158,17 @@ cdef class Example: gold_values = self.reference.to_array([field]) output = [None] * len(self.predicted) for token in self.predicted: - if token.is_space: + values = gold_values[align[token.i]] + values = values.ravel() + if len(values) == 0: output[token.i] = None + elif len(values) == 1: + output[token.i] = values[0] + elif len(set(list(values))) == 1: + # If all aligned tokens have the same value, use it. + output[token.i] = values[0] else: - values = gold_values[align[token.i].dataXd] - values = values.ravel() - if len(values) == 0: - output[token.i] = None - elif len(values) == 1: - output[token.i] = values[0] - elif len(set(list(values))) == 1: - # If all aligned tokens have the same value, use it. 
- output[token.i] = values[0] - else: - output[token.i] = None + output[token.i] = None if as_string and field not in ["ENT_IOB", "SENT_START"]: output = [vocab.strings[o] if o is not None else o for o in output] return output @@ -192,9 +189,9 @@ cdef class Example: deps = [d if has_deps[i] else deps[i] for i, d in enumerate(proj_deps)] for cand_i in range(self.x.length): if cand_to_gold.lengths[cand_i] == 1: - gold_i = cand_to_gold[cand_i].dataXd[0, 0] + gold_i = cand_to_gold[cand_i][0] if gold_to_cand.lengths[heads[gold_i]] == 1: - aligned_heads[cand_i] = int(gold_to_cand[heads[gold_i]].dataXd[0, 0]) + aligned_heads[cand_i] = int(gold_to_cand[heads[gold_i]][0]) aligned_deps[cand_i] = deps[gold_i] return aligned_heads, aligned_deps @@ -206,7 +203,7 @@ cdef class Example: align = self.alignment.y2x sent_starts = [False] * len(self.x) for y_sent in self.y.sents: - x_start = int(align[y_sent.start].dataXd[0]) + x_start = int(align[y_sent.start][0]) sent_starts[x_start] = True return sent_starts else: @@ -222,7 +219,7 @@ cdef class Example: seen = set() output = [] for span in spans: - indices = align[span.start : span.end].data.ravel() + indices = align[span.start : span.end] if not allow_overlap: indices = [idx for idx in indices if idx not in seen] if len(indices) >= 1: @@ -258,6 +255,29 @@ cdef class Example: x_ents, x_tags = self.get_aligned_ents_and_ner() return x_tags + def get_matching_ents(self, check_label=True): + """Return entities that are shared between predicted and reference docs. + + If `check_label` is True, entities must have matching labels to be + kept. Otherwise only the character indices need to match. + """ + gold = {} + for ent in self.reference.ents: + gold[(ent.start_char, ent.end_char)] = ent.label + + keep = [] + for ent in self.predicted.ents: + key = (ent.start_char, ent.end_char) + if key not in gold: + continue + + if check_label and ent.label != gold[key]: + continue + + keep.append(ent) + + return keep + def to_dict(self): return { "doc_annotation": { @@ -295,7 +315,7 @@ cdef class Example: seen_indices = set() output = [] for y_sent in self.reference.sents: - indices = align[y_sent.start : y_sent.end].data.ravel() + indices = align[y_sent.start : y_sent.end] indices = [idx for idx in indices if idx not in seen_indices] if indices: x_sent = self.predicted[indices[0] : indices[-1] + 1] diff --git a/spacy/training/initialize.py b/spacy/training/initialize.py index b59288e38..48ff7b589 100644 --- a/spacy/training/initialize.py +++ b/spacy/training/initialize.py @@ -213,6 +213,7 @@ def convert_vectors( for lex in nlp.vocab: if lex.rank and lex.rank != OOV_RANK: nlp.vocab.vectors.add(lex.orth, row=lex.rank) # type: ignore[attr-defined] + nlp.vocab.deduplicate_vectors() else: if vectors_loc: logger.info(f"Reading vectors from {vectors_loc}") @@ -239,6 +240,7 @@ def convert_vectors( nlp.vocab.vectors = Vectors( strings=nlp.vocab.strings, data=vectors_data, keys=vector_keys ) + nlp.vocab.deduplicate_vectors() if name is None: # TODO: Is this correct? Does this matter? 
nlp.vocab.vectors.name = f"{nlp.meta['lang']}_{nlp.meta['name']}.vectors" diff --git a/spacy/util.py b/spacy/util.py index 14714143c..66e257dd8 100644 --- a/spacy/util.py +++ b/spacy/util.py @@ -485,13 +485,16 @@ def load_model_from_path( config_path = model_path / "config.cfg" overrides = dict_to_dot(config) config = load_config(config_path, overrides=overrides) - nlp = load_model_from_config(config, vocab=vocab, disable=disable, exclude=exclude) + nlp = load_model_from_config( + config, vocab=vocab, disable=disable, exclude=exclude, meta=meta + ) return nlp.from_disk(model_path, exclude=exclude, overrides=overrides) def load_model_from_config( config: Union[Dict[str, Any], Config], *, + meta: Dict[str, Any] = SimpleFrozenDict(), vocab: Union["Vocab", bool] = True, disable: Iterable[str] = SimpleFrozenList(), exclude: Iterable[str] = SimpleFrozenList(), @@ -529,6 +532,7 @@ def load_model_from_config( exclude=exclude, auto_fill=auto_fill, validate=validate, + meta=meta, ) return nlp @@ -871,7 +875,6 @@ def get_package_path(name: str) -> Path: name (str): Package name. RETURNS (Path): Path to installed package. """ - name = name.lower() # use lowercase version to be safe # Here we're importing the module just to find it. This is worryingly # indirect, but it's otherwise very difficult to find the package. pkg = importlib.import_module(name) diff --git a/spacy/vectors.pyx b/spacy/vectors.pyx index bc4863703..bcba9d03f 100644 --- a/spacy/vectors.pyx +++ b/spacy/vectors.pyx @@ -170,6 +170,8 @@ cdef class Vectors: DOCS: https://spacy.io/api/vectors#n_keys """ + if self.mode == Mode.floret: + return -1 return len(self.key2row) def __reduce__(self): @@ -563,8 +565,9 @@ cdef class Vectors: # the source of numpy.save indicates that the file object is closed after use. # but it seems that somehow this does not happen, as ResourceWarnings are raised here. # in order to not rely on this, wrap in context manager. + ops = get_current_ops() with path.open("wb") as _file: - save_array(self.data, _file) + save_array(ops.to_numpy(self.data, byte_order="<"), _file) serializers = { "strings": lambda p: self.strings.to_disk(p.with_suffix(".json")), @@ -600,6 +603,7 @@ cdef class Vectors: ops = get_current_ops() if path.exists(): self.data = ops.xp.load(str(path)) + self.to_ops(ops) def load_settings(path): if path.exists(): @@ -629,7 +633,8 @@ cdef class Vectors: if hasattr(self.data, "to_bytes"): return self.data.to_bytes() else: - return srsly.msgpack_dumps(self.data) + ops = get_current_ops() + return srsly.msgpack_dumps(ops.to_numpy(self.data, byte_order="<")) serializers = { "strings": lambda: self.strings.to_bytes(), @@ -654,6 +659,8 @@ cdef class Vectors: else: xp = get_array_module(self.data) self.data = xp.asarray(srsly.msgpack_loads(b)) + ops = get_current_ops() + self.to_ops(ops) deserializers = { "strings": lambda b: self.strings.from_bytes(b), diff --git a/spacy/vocab.pyi b/spacy/vocab.pyi index 713e85c01..4cc359c47 100644 --- a/spacy/vocab.pyi +++ b/spacy/vocab.pyi @@ -46,6 +46,7 @@ class Vocab: def reset_vectors( self, *, width: Optional[int] = ..., shape: Optional[int] = ... ) -> None: ... + def deduplicate_vectors(self) -> None: ... def prune_vectors(self, nr_row: int, batch_size: int = ...) -> Dict[str, float]: ... 
def get_vector( self, diff --git a/spacy/vocab.pyx b/spacy/vocab.pyx index badd291ed..428cadd82 100644 --- a/spacy/vocab.pyx +++ b/spacy/vocab.pyx @@ -1,6 +1,7 @@ # cython: profile=True from libc.string cimport memcpy +import numpy import srsly from thinc.api import get_array_module, get_current_ops import functools @@ -297,6 +298,33 @@ cdef class Vocab: width = width if width is not None else self.vectors.shape[1] self.vectors = Vectors(strings=self.strings, shape=(self.vectors.shape[0], width)) + def deduplicate_vectors(self): + if self.vectors.mode != VectorsMode.default: + raise ValueError(Errors.E858.format( + mode=self.vectors.mode, + alternative="" + )) + ops = get_current_ops() + xp = get_array_module(self.vectors.data) + filled = xp.asarray( + sorted(list({row for row in self.vectors.key2row.values()})) + ) + # deduplicate data and remap keys + data = numpy.unique(ops.to_numpy(self.vectors.data[filled]), axis=0) + data = ops.asarray(data) + if data.shape == self.vectors.data.shape: + # nothing to deduplicate + return + row_by_bytes = {row.tobytes(): i for i, row in enumerate(data)} + key2row = { + key: row_by_bytes[self.vectors.data[row].tobytes()] + for key, row in self.vectors.key2row.items() + } + # replace vectors with deduplicated version + self.vectors = Vectors(strings=self.strings, data=data, name=self.vectors.name) + for key, row in key2row.items(): + self.vectors.add(key, row=row) + def prune_vectors(self, nr_row, batch_size=1024): """Reduce the current vector table to `nr_row` unique entries. Words mapped to the discarded vectors will be remapped to the closest vector @@ -325,7 +353,10 @@ cdef class Vocab: DOCS: https://spacy.io/api/vocab#prune_vectors """ if self.vectors.mode != VectorsMode.default: - raise ValueError(Errors.E866) + raise ValueError(Errors.E858.format( + mode=self.vectors.mode, + alternative="" + )) ops = get_current_ops() xp = get_array_module(self.vectors.data) # Make sure all vectors are in the vocab @@ -354,8 +385,9 @@ cdef class Vocab: def get_vector(self, orth): """Retrieve a vector for a word in the vocabulary. Words can be looked - up by string or int ID. If no vectors data is loaded, ValueError is - raised. + up by string or int ID. If the current vectors do not contain an entry + for the word, a 0-vector with the same number of dimensions as the + current vectors is returned. orth (int / unicode): The hash value of a word, or its unicode string. RETURNS (numpy.ndarray or cupy.ndarray): A word vector. Size diff --git a/website/Dockerfile b/website/Dockerfile new file mode 100644 index 000000000..f71733e55 --- /dev/null +++ b/website/Dockerfile @@ -0,0 +1,16 @@ +FROM node:11.15.0 + +WORKDIR /spacy-io + +RUN npm install -g gatsby-cli@2.7.4 + +COPY package.json . +COPY package-lock.json . + +RUN npm install + +# This is so the installed node_modules will be up one directory +# from where a user mounts files, so that they don't accidentally mount +# their own node_modules from a different build +# https://nodejs.org/api/modules.html#modules_loading_from_node_modules_folders +WORKDIR /spacy-io/website/ diff --git a/website/README.md b/website/README.md index 076032d92..db050cf03 100644 --- a/website/README.md +++ b/website/README.md @@ -554,6 +554,42 @@ extensions for your code editor. The [`.prettierrc`](https://github.com/explosion/spaCy/tree/master/website/.prettierrc) file in the root defines the settings used in this codebase. 
+## Building & developing the site with Docker {#docker} +Sometimes it's hard to get a local environment working due to rapid updates to node dependencies, +so it may be easier to use docker for building the docs. + +If you'd like to do this, +**be sure you do *not* include your local `node_modules` folder**, +since there are some dependencies that need to be built for the image system. +Rename it before using. + +```bash +docker run -it \ + -v $(pwd):/spacy-io/website \ + -p 8000:8000 \ + ghcr.io/explosion/spacy-io \ + gatsby develop -H 0.0.0.0 +``` + +This will allow you to access the built website at http://0.0.0.0:8000/ +in your browser, and still edit code in your editor while having the site +reflect those changes. + +**Note**: If you're working on a Mac with an M1 processor, +you might see segfault errors from `qemu` if you use the default image. +To fix this use the `arm64` tagged image in the `docker run` command +(ghcr.io/explosion/spacy-io:arm64). + +### Building the Docker image {#docker-build} + +If you'd like to build the image locally, you can do so like this: + +```bash +docker build -t spacy-io . +``` + +This will take some time, so if you want to use the prebuilt image you'll save a bit of time. + ## Markdown reference {#markdown} All page content and page meta lives in the `.md` files in the `/docs` diff --git a/website/docs/api/architectures.md b/website/docs/api/architectures.md index 7a3d26b41..93f7b762c 100644 --- a/website/docs/api/architectures.md +++ b/website/docs/api/architectures.md @@ -104,7 +104,7 @@ consisting of a CNN and a layer-normalized maxout activation function. > factory = "tagger" > > [components.tagger.model] -> @architectures = "spacy.Tagger.v1" +> @architectures = "spacy.Tagger.v2" > > [components.tagger.model.tok2vec] > @architectures = "spacy.Tok2VecListener.v1" @@ -158,8 +158,8 @@ be configured with the `attrs` argument. The suggested attributes are `NORM`, `PREFIX`, `SUFFIX` and `SHAPE`. This lets the model take into account some subword information, without construction a fully character-based representation. If pretrained vectors are available, they can be included in the -representation as well, with the vectors table kept static (i.e. it's -not updated). +representation as well, with the vectors table kept static (i.e. it's not +updated). | Name | Description | | ------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | @@ -611,14 +611,15 @@ same signature, but the `use_upper` argument was `True` by default. ## Tagging architectures {#tagger source="spacy/ml/models/tagger.py"} -### spacy.Tagger.v1 {#Tagger} +### spacy.Tagger.v2 {#Tagger} > #### Example Config > > ```ini > [model] -> @architectures = "spacy.Tagger.v1" +> @architectures = "spacy.Tagger.v2" > nO = null +> normalize = false > > [model.tok2vec] > # ... @@ -632,8 +633,18 @@ the token vectors. | ----------- | ------------------------------------------------------------------------------------------ | | `tok2vec` | Subnetwork to map tokens into vector representations. ~~Model[List[Doc], List[Floats2d]]~~ | | `nO` | The number of tags to output. Inferred from the data if `None`. 
~~Optional[int]~~ | +| `normalize` | Normalize probabilities during inference. Defaults to `False`. ~~bool~~ | | **CREATES** | The model using the architecture. ~~Model[List[Doc], List[Floats2d]]~~ | + + +- The `normalize` argument was added in `spacy.Tagger.v2`. `spacy.Tagger.v1` + always normalizes probabilities during inference. + +The other arguments are shared between all versions. + + + ## Text classification architectures {#textcat source="spacy/ml/models/textcat.py"} A text classification architecture needs to take a [`Doc`](/api/doc) as input, @@ -856,13 +867,13 @@ into the "real world". This requires 3 main components: - A machine learning [`Model`](https://thinc.ai/docs/api-model) that picks the most plausible ID from the set of candidates. -### spacy.EntityLinker.v1 {#EntityLinker} +### spacy.EntityLinker.v2 {#EntityLinker} > #### Example Config > > ```ini > [model] -> @architectures = "spacy.EntityLinker.v1" +> @architectures = "spacy.EntityLinker.v2" > nO = null > > [model.tok2vec] diff --git a/website/docs/api/corpus.md b/website/docs/api/corpus.md index 986c6f458..35afc8fea 100644 --- a/website/docs/api/corpus.md +++ b/website/docs/api/corpus.md @@ -79,6 +79,7 @@ train/test skew. | `max_length` | Maximum document length. Longer documents will be split into sentences, if sentence boundaries are available. Defaults to `0` for no limit. ~~int~~ | | `limit` | Limit corpus to a subset of examples, e.g. for debugging. Defaults to `0` for no limit. ~~int~~ | | `augmenter` | Optional data augmentation callback. ~~Callable[[Language, Example], Iterable[Example]]~~ | +| `shuffle` | Whether to shuffle the examples. Defaults to `False`. ~~bool~~ | ## Corpus.\_\_call\_\_ {#call tag="method"} diff --git a/website/docs/api/dependencyparser.md b/website/docs/api/dependencyparser.md index 118cdc611..103e0826e 100644 --- a/website/docs/api/dependencyparser.md +++ b/website/docs/api/dependencyparser.md @@ -100,7 +100,7 @@ shortcut for this and instantiate the component using its string name and | `vocab` | The shared vocabulary. ~~Vocab~~ | | `model` | The [`Model`](https://thinc.ai/docs/api-model) powering the pipeline component. ~~Model[List[Doc], List[Floats2d]]~~ | | `name` | String name of the component instance. Used to add entries to the `losses` during training. ~~str~~ | -| `moves` | A list of transition names. Inferred from the data if not provided. ~~Optional[List[str]]~~ | +| `moves` | A list of transition names. Inferred from the data if not provided. ~~Optional[TransitionSystem]~~ | | _keyword-only_ | | | `update_with_oracle_cut_size` | During training, cut long sequences into shorter segments by creating intermediate states based on the gold-standard history. The model is not very sensitive to this parameter, so you usually won't need to change it. Defaults to `100`. ~~int~~ | | `learn_tokens` | Whether to learn to merge subtokens that are split relative to the gold standard. Experimental. Defaults to `False`. ~~bool~~ | diff --git a/website/docs/api/doc.md b/website/docs/api/doc.md index 9836b8c21..c21328caf 100644 --- a/website/docs/api/doc.md +++ b/website/docs/api/doc.md @@ -304,7 +304,7 @@ ancestor is found, e.g. if span excludes a necessary ancestor. ## Doc.has_annotation {#has_annotation tag="method"} -Check whether the doc contains annotation on a token attribute. +Check whether the doc contains annotation on a [`Token` attribute](/api/token#attributes). 
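To illustrate the `Doc` changes in this diff (the new `"spans"` section in `Doc.to_json` and `has_annotation` taking a `Token` attribute name), here is a minimal sketch. It is not part of the diff, the span group name `"cities"` is made up for the example, and it assumes a build that includes these changes:

```python
import spacy
from spacy.tokens import Span

nlp = spacy.blank("en")
doc = nlp("I like London and Berlin.")
doc.spans["cities"] = [Span(doc, 2, 3, label="GPE"), Span(doc, 4, 5, label="GPE")]

# has_annotation() checks a Token attribute such as "TAG" or "ENT_IOB";
# nothing sets TAG in a blank pipeline, so this is False
assert not doc.has_annotation("TAG")

# to_json() now also serializes span groups, keyed by group name, with
# character offsets, label and kb_id for each span
data = doc.to_json()
print(data["spans"]["cities"][0])  # {'start': 7, 'end': 13, 'label': 'GPE', 'kb_id': ''}
```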
diff --git a/website/docs/api/edittreelemmatizer.md b/website/docs/api/edittreelemmatizer.md new file mode 100644 index 000000000..99a705f5e --- /dev/null +++ b/website/docs/api/edittreelemmatizer.md @@ -0,0 +1,409 @@ +--- +title: EditTreeLemmatizer +tag: class +source: spacy/pipeline/edit_tree_lemmatizer.py +new: 3.3 +teaser: 'Pipeline component for lemmatization' +api_base_class: /api/pipe +api_string_name: trainable_lemmatizer +api_trainable: true +--- + +A trainable component for assigning base forms to tokens. This lemmatizer uses +**edit trees** to transform tokens into base forms. The lemmatization model +predicts which edit tree is applicable to a token. The edit tree data structure +and construction method used by this lemmatizer were proposed in +[Joint Lemmatization and Morphological Tagging with Lemming](https://aclanthology.org/D15-1272.pdf) +(Thomas Müller et al., 2015). + +For a lookup and rule-based lemmatizer, see [`Lemmatizer`](/api/lemmatizer). + +## Assigned Attributes {#assigned-attributes} + +Predictions are assigned to `Token.lemma`. + +| Location | Value | +| -------------- | ------------------------- | +| `Token.lemma` | The lemma (hash). ~~int~~ | +| `Token.lemma_` | The lemma. ~~str~~ | + +## Config and implementation {#config} + +The default config is defined by the pipeline component factory and describes +how the component should be configured. You can override its settings via the +`config` argument on [`nlp.add_pipe`](/api/language#add_pipe) or in your +[`config.cfg` for training](/usage/training#config). See the +[model architectures](/api/architectures) documentation for details on the +architectures and their arguments and hyperparameters. + +> #### Example +> +> ```python +> from spacy.pipeline.edit_tree_lemmatizer import DEFAULT_EDIT_TREE_LEMMATIZER_MODEL +> config = {"model": DEFAULT_EDIT_TREE_LEMMATIZER_MODEL} +> nlp.add_pipe("trainable_lemmatizer", config=config, name="lemmatizer") +> ``` + +| Setting | Description | +| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `model` | A model instance that predicts the edit tree probabilities. The output vectors should match the number of edit trees in size, and be normalized as probabilities (all scores between 0 and 1, with the rows summing to `1`). Defaults to [Tagger](/api/architectures#Tagger). ~~Model[List[Doc], List[Floats2d]]~~ | +| `backoff` | ~~Token~~ attribute to use when no applicable edit tree is found. Defaults to `orth`. ~~str~~ | +| `min_tree_freq` | Minimum frequency of an edit tree in the training set to be used. Defaults to `3`. ~~int~~ | +| `overwrite` | Whether existing annotation is overwritten. Defaults to `False`. ~~bool~~ | +| `top_k` | The number of most probable edit trees to try before resorting to `backoff`. Defaults to `1`. ~~int~~ | +| `scorer` | The scoring method. Defaults to [`Scorer.score_token_attr`](/api/scorer#score_token_attr) for the attribute `"lemma"`. 
~~Optional[Callable]~~ | + +```python +%%GITHUB_SPACY/spacy/pipeline/edit_tree_lemmatizer.py +``` + +## EditTreeLemmatizer.\_\_init\_\_ {#init tag="method"} + +> #### Example +> +> ```python +> # Construction via add_pipe with default model +> lemmatizer = nlp.add_pipe("trainable_lemmatizer", name="lemmatizer") +> +> # Construction via create_pipe with custom model +> config = {"model": {"@architectures": "my_tagger"}} +> lemmatizer = nlp.add_pipe("trainable_lemmatizer", config=config, name="lemmatizer") +> +> # Construction from class +> from spacy.pipeline import EditTreeLemmatizer +> lemmatizer = EditTreeLemmatizer(nlp.vocab, model) +> ``` + +Create a new pipeline instance. In your application, you would normally use a +shortcut for this and instantiate the component using its string name and +[`nlp.add_pipe`](/api/language#add_pipe). + +| Name | Description | +| --------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `vocab` | The shared vocabulary. ~~Vocab~~ | +| `model` | A model instance that predicts the edit tree probabilities. The output vectors should match the number of edit trees in size, and be normalized as probabilities (all scores between 0 and 1, with the rows summing to `1`). ~~Model[List[Doc], List[Floats2d]]~~ | +| `name` | String name of the component instance. Used to add entries to the `losses` during training. ~~str~~ | +| _keyword-only_ | | +| `backoff` | ~~Token~~ attribute to use when no applicable edit tree is found. Defaults to `orth`. ~~str~~ | +| `min_tree_freq` | Minimum frequency of an edit tree in the training set to be used. Defaults to `3`. ~~int~~ | +| `overwrite` | Whether existing annotation is overwritten. Defaults to `False`. ~~bool~~ | +| `top_k` | The number of most probable edit trees to try before resorting to `backoff`. Defaults to `1`. ~~int~~ | +| `scorer` | The scoring method. Defaults to [`Scorer.score_token_attr`](/api/scorer#score_token_attr) for the attribute `"lemma"`. ~~Optional[Callable]~~ | + +## EditTreeLemmatizer.\_\_call\_\_ {#call tag="method"} + +Apply the pipe to one document. The document is modified in place, and returned. +This usually happens under the hood when the `nlp` object is called on a text +and all pipeline components are applied to the `Doc` in order. Both +[`__call__`](/api/edittreelemmatizer#call) and +[`pipe`](/api/edittreelemmatizer#pipe) delegate to the +[`predict`](/api/edittreelemmatizer#predict) and +[`set_annotations`](/api/edittreelemmatizer#set_annotations) methods. + +> #### Example +> +> ```python +> doc = nlp("This is a sentence.") +> lemmatizer = nlp.add_pipe("trainable_lemmatizer", name="lemmatizer") +> # This usually happens under the hood +> processed = lemmatizer(doc) +> ``` + +| Name | Description | +| ----------- | -------------------------------- | +| `doc` | The document to process. ~~Doc~~ | +| **RETURNS** | The processed document. ~~Doc~~ | + +## EditTreeLemmatizer.pipe {#pipe tag="method"} + +Apply the pipe to a stream of documents. This usually happens under the hood +when the `nlp` object is called on a text and all pipeline components are +applied to the `Doc` in order. 
Both [`__call__`](/api/edittreelemmatizer#call) +and [`pipe`](/api/edittreelemmatizer#pipe) delegate to the +[`predict`](/api/edittreelemmatizer#predict) and +[`set_annotations`](/api/edittreelemmatizer#set_annotations) methods. + +> #### Example +> +> ```python +> lemmatizer = nlp.add_pipe("trainable_lemmatizer", name="lemmatizer") +> for doc in lemmatizer.pipe(docs, batch_size=50): +> pass +> ``` + +| Name | Description | +| -------------- | ------------------------------------------------------------- | +| `stream` | A stream of documents. ~~Iterable[Doc]~~ | +| _keyword-only_ | | +| `batch_size` | The number of documents to buffer. Defaults to `128`. ~~int~~ | +| **YIELDS** | The processed documents in order. ~~Doc~~ | + +## EditTreeLemmatizer.initialize {#initialize tag="method" new="3"} + +Initialize the component for training. `get_examples` should be a function that +returns an iterable of [`Example`](/api/example) objects. The data examples are +used to **initialize the model** of the component and can either be the full +training data or a representative sample. Initialization includes validating the +network, +[inferring missing shapes](https://thinc.ai/docs/usage-models#validation) and +setting up the label scheme based on the data. This method is typically called +by [`Language.initialize`](/api/language#initialize) and lets you customize +arguments it receives via the +[`[initialize.components]`](/api/data-formats#config-initialize) block in the +config. + +> #### Example +> +> ```python +> lemmatizer = nlp.add_pipe("trainable_lemmatizer", name="lemmatizer") +> lemmatizer.initialize(lambda: [], nlp=nlp) +> ``` +> +> ```ini +> ### config.cfg +> [initialize.components.lemmatizer] +> +> [initialize.components.lemmatizer.labels] +> @readers = "spacy.read_labels.v1" +> path = "corpus/labels/lemmatizer.json +> ``` + +| Name | Description | +| -------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `get_examples` | Function that returns gold-standard annotations in the form of [`Example`](/api/example) objects. ~~Callable[[], Iterable[Example]]~~ | +| _keyword-only_ | | +| `nlp` | The current `nlp` object. Defaults to `None`. ~~Optional[Language]~~ | +| `labels` | The label information to add to the component, as provided by the [`label_data`](#label_data) property after initialization. To generate a reusable JSON file from your data, you should run the [`init labels`](/api/cli#init-labels) command. If no labels are provided, the `get_examples` callback is used to extract the labels from the data, which may be a lot slower. ~~Optional[Iterable[str]]~~ | + +## EditTreeLemmatizer.predict {#predict tag="method"} + +Apply the component's model to a batch of [`Doc`](/api/doc) objects, without +modifying them. + +> #### Example +> +> ```python +> lemmatizer = nlp.add_pipe("trainable_lemmatizer", name="lemmatizer") +> tree_ids = lemmatizer.predict([doc1, doc2]) +> ``` + +| Name | Description | +| ----------- | ------------------------------------------- | +| `docs` | The documents to predict. ~~Iterable[Doc]~~ | +| **RETURNS** | The model's prediction for each document. 
| + +## EditTreeLemmatizer.set_annotations {#set_annotations tag="method"} + +Modify a batch of [`Doc`](/api/doc) objects, using pre-computed tree +identifiers. + +> #### Example +> +> ```python +> lemmatizer = nlp.add_pipe("trainable_lemmatizer", name="lemmatizer") +> tree_ids = lemmatizer.predict([doc1, doc2]) +> lemmatizer.set_annotations([doc1, doc2], tree_ids) +> ``` + +| Name | Description | +| ---------- | ------------------------------------------------------------------------------------- | +| `docs` | The documents to modify. ~~Iterable[Doc]~~ | +| `tree_ids` | The identifiers of the edit trees to apply, produced by `EditTreeLemmatizer.predict`. | + +## EditTreeLemmatizer.update {#update tag="method"} + +Learn from a batch of [`Example`](/api/example) objects containing the +predictions and gold-standard annotations, and update the component's model. +Delegates to [`predict`](/api/edittreelemmatizer#predict) and +[`get_loss`](/api/edittreelemmatizer#get_loss). + +> #### Example +> +> ```python +> lemmatizer = nlp.add_pipe("trainable_lemmatizer", name="lemmatizer") +> optimizer = nlp.initialize() +> losses = lemmatizer.update(examples, sgd=optimizer) +> ``` + +| Name | Description | +| -------------- | ------------------------------------------------------------------------------------------------------------------------ | +| `examples` | A batch of [`Example`](/api/example) objects to learn from. ~~Iterable[Example]~~ | +| _keyword-only_ | | +| `drop` | The dropout rate. ~~float~~ | +| `sgd` | An optimizer. Will be created via [`create_optimizer`](#create_optimizer) if not set. ~~Optional[Optimizer]~~ | +| `losses` | Optional record of the loss during training. Updated using the component name as the key. ~~Optional[Dict[str, float]]~~ | +| **RETURNS** | The updated `losses` dictionary. ~~Dict[str, float]~~ | + +## EditTreeLemmatizer.get_loss {#get_loss tag="method"} + +Find the loss and gradient of loss for the batch of documents and their +predicted scores. + +> #### Example +> +> ```python +> lemmatizer = nlp.add_pipe("trainable_lemmatizer", name="lemmatizer") +> scores = lemmatizer.model.begin_update([eg.predicted for eg in examples]) +> loss, d_loss = lemmatizer.get_loss(examples, scores) +> ``` + +| Name | Description | +| ----------- | --------------------------------------------------------------------------- | +| `examples` | The batch of examples. ~~Iterable[Example]~~ | +| `scores` | Scores representing the model's predictions. | +| **RETURNS** | The loss and the gradient, i.e. `(loss, gradient)`. ~~Tuple[float, float]~~ | + +## EditTreeLemmatizer.create_optimizer {#create_optimizer tag="method"} + +Create an optimizer for the pipeline component. + +> #### Example +> +> ```python +> lemmatizer = nlp.add_pipe("trainable_lemmatizer", name="lemmatizer") +> optimizer = lemmatizer.create_optimizer() +> ``` + +| Name | Description | +| ----------- | ---------------------------- | +| **RETURNS** | The optimizer. ~~Optimizer~~ | + +## EditTreeLemmatizer.use_params {#use_params tag="method, contextmanager"} + +Modify the pipe's model, to use the given parameter values. At the end of the +context, the original parameters are restored. + +> #### Example +> +> ```python +> lemmatizer = nlp.add_pipe("trainable_lemmatizer", name="lemmatizer") +> with lemmatizer.use_params(optimizer.averages): +> lemmatizer.to_disk("/best_model") +> ``` + +| Name | Description | +| -------- | -------------------------------------------------- | +| `params` | The parameter values to use in the model. 
~~dict~~ | + +## EditTreeLemmatizer.to_disk {#to_disk tag="method"} + +Serialize the pipe to disk. + +> #### Example +> +> ```python +> lemmatizer = nlp.add_pipe("trainable_lemmatizer", name="lemmatizer") +> lemmatizer.to_disk("/path/to/lemmatizer") +> ``` + +| Name | Description | +| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------ | +| `path` | A path to a directory, which will be created if it doesn't exist. Paths may be either strings or `Path`-like objects. ~~Union[str, Path]~~ | +| _keyword-only_ | | +| `exclude` | String names of [serialization fields](#serialization-fields) to exclude. ~~Iterable[str]~~ | + +## EditTreeLemmatizer.from_disk {#from_disk tag="method"} + +Load the pipe from disk. Modifies the object in place and returns it. + +> #### Example +> +> ```python +> lemmatizer = nlp.add_pipe("trainable_lemmatizer", name="lemmatizer") +> lemmatizer.from_disk("/path/to/lemmatizer") +> ``` + +| Name | Description | +| -------------- | ----------------------------------------------------------------------------------------------- | +| `path` | A path to a directory. Paths may be either strings or `Path`-like objects. ~~Union[str, Path]~~ | +| _keyword-only_ | | +| `exclude` | String names of [serialization fields](#serialization-fields) to exclude. ~~Iterable[str]~~ | +| **RETURNS** | The modified `EditTreeLemmatizer` object. ~~EditTreeLemmatizer~~ | + +## EditTreeLemmatizer.to_bytes {#to_bytes tag="method"} + +> #### Example +> +> ```python +> lemmatizer = nlp.add_pipe("trainable_lemmatizer", name="lemmatizer") +> lemmatizer_bytes = lemmatizer.to_bytes() +> ``` + +Serialize the pipe to a bytestring. + +| Name | Description | +| -------------- | ------------------------------------------------------------------------------------------- | +| _keyword-only_ | | +| `exclude` | String names of [serialization fields](#serialization-fields) to exclude. ~~Iterable[str]~~ | +| **RETURNS** | The serialized form of the `EditTreeLemmatizer` object. ~~bytes~~ | + +## EditTreeLemmatizer.from_bytes {#from_bytes tag="method"} + +Load the pipe from a bytestring. Modifies the object in place and returns it. + +> #### Example +> +> ```python +> lemmatizer_bytes = lemmatizer.to_bytes() +> lemmatizer = nlp.add_pipe("trainable_lemmatizer", name="lemmatizer") +> lemmatizer.from_bytes(lemmatizer_bytes) +> ``` + +| Name | Description | +| -------------- | ------------------------------------------------------------------------------------------- | +| `bytes_data` | The data to load from. ~~bytes~~ | +| _keyword-only_ | | +| `exclude` | String names of [serialization fields](#serialization-fields) to exclude. ~~Iterable[str]~~ | +| **RETURNS** | The `EditTreeLemmatizer` object. ~~EditTreeLemmatizer~~ | + +## EditTreeLemmatizer.labels {#labels tag="property"} + +The labels currently added to the component. + + + +The `EditTreeLemmatizer` labels are not useful by themselves, since they are +identifiers of edit trees. + + + +| Name | Description | +| ----------- | ------------------------------------------------------ | +| **RETURNS** | The labels added to the component. ~~Tuple[str, ...]~~ | + +## EditTreeLemmatizer.label_data {#label_data tag="property" new="3"} + +The labels currently added to the component and their internal meta information. 
+This is the data generated by [`init labels`](/api/cli#init-labels) and used by +[`EditTreeLemmatizer.initialize`](/api/edittreelemmatizer#initialize) to +initialize the model with a pre-defined label set. + +> #### Example +> +> ```python +> labels = lemmatizer.label_data +> lemmatizer.initialize(lambda: [], nlp=nlp, labels=labels) +> ``` + +| Name | Description | +| ----------- | ---------------------------------------------------------- | +| **RETURNS** | The label data added to the component. ~~Tuple[str, ...]~~ | + +## Serialization fields {#serialization-fields} + +During serialization, spaCy will export several data fields used to restore +different aspects of the object. If needed, you can exclude them from +serialization by passing in the string names via the `exclude` argument. + +> #### Example +> +> ```python +> data = lemmatizer.to_disk("/path", exclude=["vocab"]) +> ``` + +| Name | Description | +| ------- | -------------------------------------------------------------- | +| `vocab` | The shared [`Vocab`](/api/vocab). | +| `cfg` | The config file. You usually don't want to exclude this. | +| `model` | The binary model data. You usually don't want to exclude this. | +| `trees` | The edit trees. You usually don't want to exclude this. | diff --git a/website/docs/api/entitylinker.md b/website/docs/api/entitylinker.md index 3d3372679..8e0d6087a 100644 --- a/website/docs/api/entitylinker.md +++ b/website/docs/api/entitylinker.md @@ -59,6 +59,7 @@ architectures and their arguments and hyperparameters. | `incl_context` | Whether or not to include the local context in the model. Defaults to `True`. ~~bool~~ | | `model` | The [`Model`](https://thinc.ai/docs/api-model) powering the pipeline component. Defaults to [EntityLinker](/api/architectures#EntityLinker). ~~Model~~ | | `entity_vector_length` | Size of encoding vectors in the KB. Defaults to `64`. ~~int~~ | +| `use_gold_ents` | Whether to copy entities from the gold docs or not. Defaults to `True`. If `False`, entities must be set in the training data or by an annotating component in the pipeline. ~~int~~ | | `get_candidates` | Function that generates plausible candidates for a given `Span` object. Defaults to [CandidateGenerator](/api/architectures#CandidateGenerator), a function looking up exact, case-dependent aliases in the KB. ~~Callable[[KnowledgeBase, Span], Iterable[Candidate]]~~ | | `overwrite` 3.2 | Whether existing annotation is overwritten. Defaults to `True`. ~~bool~~ | | `scorer` 3.2 | The scoring method. Defaults to [`Scorer.score_links`](/api/scorer#score_links). ~~Optional[Callable]~~ | diff --git a/website/docs/api/entityrecognizer.md b/website/docs/api/entityrecognizer.md index 14b6fece4..7c153f064 100644 --- a/website/docs/api/entityrecognizer.md +++ b/website/docs/api/entityrecognizer.md @@ -62,7 +62,7 @@ architectures and their arguments and hyperparameters. | Setting | Description | | ----------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `moves` | A list of transition names. Inferred from the data if not provided. Defaults to `None`. ~~Optional[List[str]]~~ | +| `moves` | A list of transition names. Inferred from the data if not provided. Defaults to `None`. 
~~Optional[TransitionSystem]~~ | | `update_with_oracle_cut_size` | During training, cut long sequences into shorter segments by creating intermediate states based on the gold-standard history. The model is not very sensitive to this parameter, so you usually won't need to change it. Defaults to `100`. ~~int~~ | | `model` | The [`Model`](https://thinc.ai/docs/api-model) powering the pipeline component. Defaults to [TransitionBasedParser](/api/architectures#TransitionBasedParser). ~~Model[List[Doc], List[Floats2d]]~~ | | `incorrect_spans_key` | This key refers to a `SpanGroup` in `doc.spans` that specifies incorrect spans. The NER will learn not to predict (exactly) those spans. Defaults to `None`. ~~Optional[str]~~ | @@ -98,7 +98,7 @@ shortcut for this and instantiate the component using its string name and | `vocab` | The shared vocabulary. ~~Vocab~~ | | `model` | The [`Model`](https://thinc.ai/docs/api-model) powering the pipeline component. ~~Model[List[Doc], List[Floats2d]]~~ | | `name` | String name of the component instance. Used to add entries to the `losses` during training. ~~str~~ | -| `moves` | A list of transition names. Inferred from the data if set to `None`, which is the default. ~~Optional[List[str]]~~ | +| `moves` | A list of transition names. Inferred from the data if set to `None`, which is the default. ~~Optional[TransitionSystem]~~ | | _keyword-only_ | | | `update_with_oracle_cut_size` | During training, cut long sequences into shorter segments by creating intermediate states based on the gold-standard history. The model is not very sensitive to this parameter, so you usually won't need to change it. Defaults to `100`. ~~int~~ | | `incorrect_spans_key` | Identifies spans that are known to be incorrect entity annotations. The incorrect entity annotations can be stored in the span group in [`Doc.spans`](/api/doc#spans), under this key. Defaults to `None`. ~~Optional[str]~~ | diff --git a/website/docs/api/legacy.md b/website/docs/api/legacy.md index 916a5bf7f..e24c37d77 100644 --- a/website/docs/api/legacy.md +++ b/website/docs/api/legacy.md @@ -248,23 +248,6 @@ the others, but may not be as accurate, especially if texts are short. ## Loggers {#loggers} -These functions are available from `@spacy.registry.loggers`. +Logging utilities for spaCy are implemented in the [`spacy-loggers`](https://github.com/explosion/spacy-loggers) repo, and the functions are typically available from `@spacy.registry.loggers`. -### spacy.WandbLogger.v1 {#WandbLogger_v1} - -The first version of the [`WandbLogger`](/api/top-level#WandbLogger) did not yet -support the `log_dataset_dir` and `model_log_interval` arguments. - -> #### Example config -> -> ```ini -> [training.logger] -> @loggers = "spacy.WandbLogger.v1" -> project_name = "monitor_spacy_training" -> remove_config_values = ["paths.train", "paths.dev", "corpora.train.path", "corpora.dev.path"] -> ``` -> -> | Name | Description | -> | ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------- | -> | `project_name` | The name of the project in the Weights & Biases interface. The project will be created automatically if it doesn't exist yet. ~~str~~ | -> | `remove_config_values` | A list of values to include from the config before it is uploaded to W&B (default: empty). ~~List[str]~~ | +More documentation can be found in that repo's [readme](https://github.com/explosion/spacy-loggers/blob/main/README.md) file. 
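As a quick illustration of the `spacy-loggers` note above: the loggers in that package are ordinary registered factories, so they can be looked up by name through `@spacy.registry.loggers`. The sketch below is illustrative only: it uses the built-in `spacy.ConsoleLogger.v1` so it runs without extra dependencies, and the same lookup applies to the loggers documented in the `spacy-loggers` readme.

```python
# Minimal sketch: resolve a registered logger factory by name and instantiate it.
# The same pattern works for loggers provided by the spacy-loggers package
# (installed as a spaCy dependency), e.g. the Weights & Biases logger.
from spacy import registry

logger_factory = registry.loggers.get("spacy.ConsoleLogger.v1")
make_logger = logger_factory(progress_bar=False)  # setup callback used during training
```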
diff --git a/website/docs/api/lemmatizer.md b/website/docs/api/lemmatizer.md index 2fa040917..75387305a 100644 --- a/website/docs/api/lemmatizer.md +++ b/website/docs/api/lemmatizer.md @@ -9,14 +9,15 @@ api_trainable: false --- Component for assigning base forms to tokens using rules based on part-of-speech -tags, or lookup tables. Functionality to train the component is coming soon. -Different [`Language`](/api/language) subclasses can implement their own -lemmatizer components via +tags, or lookup tables. Different [`Language`](/api/language) subclasses can +implement their own lemmatizer components via [language-specific factories](/usage/processing-pipelines#factories-language). The default data used is provided by the [`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data) extension package. +For a trainable lemmatizer, see [`EditTreeLemmatizer`](/api/edittreelemmatizer). + As of v3.0, the `Lemmatizer` is a **standalone pipeline component** that can be diff --git a/website/docs/api/matcher.md b/website/docs/api/matcher.md index 3e7f9dc04..273c202ca 100644 --- a/website/docs/api/matcher.md +++ b/website/docs/api/matcher.md @@ -34,6 +34,7 @@ rule-based matching are: | ----------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------- | | `ORTH` | The exact verbatim text of a token. ~~str~~ | | `TEXT` 2.1 | The exact verbatim text of a token. ~~str~~ | +| `NORM` | The normalized form of the token text. ~~str~~ | | `LOWER` | The lowercase form of the token text. ~~str~~ | |  `LENGTH` | The length of the token text. ~~int~~ | |  `IS_ALPHA`, `IS_ASCII`, `IS_DIGIT` | Token text consists of alphabetic characters, ASCII characters, digits. ~~bool~~ | diff --git a/website/docs/api/span.md b/website/docs/api/span.md index 7ecebf93e..ff7905bc0 100644 --- a/website/docs/api/span.md +++ b/website/docs/api/span.md @@ -257,8 +257,8 @@ shape `(N, M)`, where `N` is the length of the document. The values will be ## Span.ents {#ents tag="property" new="2.0.13" model="ner"} -The named entities in the span. Returns a tuple of named entity `Span` objects, -if the entity recognizer has been applied. +The named entities that fall completely within the span. Returns a tuple of +`Span` objects. > #### Example > diff --git a/website/docs/api/spancategorizer.md b/website/docs/api/spancategorizer.md index 26fcaefdf..fc666aaf7 100644 --- a/website/docs/api/spancategorizer.md +++ b/website/docs/api/spancategorizer.md @@ -239,6 +239,24 @@ Delegates to [`predict`](/api/spancategorizer#predict) and | `losses` | Optional record of the loss during training. Updated using the component name as the key. ~~Optional[Dict[str, float]]~~ | | **RETURNS** | The updated `losses` dictionary. ~~Dict[str, float]~~ | +## SpanCategorizer.set_candidates {#set_candidates tag="method", new="3.3"} + +Use the suggester to add a list of [`Span`](/api/span) candidates to a list of +[`Doc`](/api/doc) objects. This method is intended to be used for debugging +purposes. + +> #### Example +> +> ```python +> spancat = nlp.add_pipe("spancat") +> spancat.set_candidates(docs, "candidates") +> ``` + +| Name | Description | +| ---------------- | -------------------------------------------------------------------- | +| `docs` | The documents to modify. ~~Iterable[Doc]~~ | +| `candidates_key` | Key of the Doc.spans dict to save the candidate spans under. 
~~str~~ | + ## SpanCategorizer.get_loss {#get_loss tag="method"} Find the loss and gradient of loss for the batch of documents and their diff --git a/website/docs/api/textcategorizer.md b/website/docs/api/textcategorizer.md index 47f868637..2ff569bad 100644 --- a/website/docs/api/textcategorizer.md +++ b/website/docs/api/textcategorizer.md @@ -34,7 +34,11 @@ only. Predictions will be saved to `doc.cats` as a dictionary, where the key is the name of the category and the value is a score between 0 and 1 (inclusive). For `textcat` (exclusive categories), the scores will sum to 1, while for -`textcat_multilabel` there is no particular guarantee about their sum. +`textcat_multilabel` there is no particular guarantee about their sum. This also +means that for `textcat`, missing values are equated to a value of 0 (i.e. +`False`) and are counted as such towards the loss and scoring metrics. This is +not the case for `textcat_multilabel`, where missing values in the gold standard +data do not influence the loss or accuracy calculations. Note that when assigning values to create training data, the score of each category must be 0 or 1. Using other values, for example to create a document diff --git a/website/docs/api/token.md b/website/docs/api/token.md index 44a2ea9e8..3c3d12d54 100644 --- a/website/docs/api/token.md +++ b/website/docs/api/token.md @@ -349,23 +349,6 @@ A sequence containing the token and all the token's syntactic descendants. | ---------- | ------------------------------------------------------------------------------------ | | **YIELDS** | A descendant token such that `self.is_ancestor(token)` or `token == self`. ~~Token~~ | -## Token.is_sent_start {#is_sent_start tag="property" new="2"} - -A boolean value indicating whether the token starts a sentence. `None` if -unknown. Defaults to `True` for the first token in the `Doc`. - -> #### Example -> -> ```python -> doc = nlp("Give it back! He pleaded.") -> assert doc[4].is_sent_start -> assert not doc[5].is_sent_start -> ``` - -| Name | Description | -| ----------- | ------------------------------------------------------- | -| **RETURNS** | Whether the token starts a sentence. ~~Optional[bool]~~ | - ## Token.has_vector {#has_vector tag="property" model="vectors"} A boolean value indicating whether a word vector is associated with the token. @@ -465,6 +448,8 @@ The L2 norm of the token's vector representation. | `is_punct` | Is the token punctuation? ~~bool~~ | | `is_left_punct` | Is the token a left punctuation mark, e.g. `"("` ? ~~bool~~ | | `is_right_punct` | Is the token a right punctuation mark, e.g. `")"` ? ~~bool~~ | +| `is_sent_start` | Does the token start a sentence? ~~bool~~ or `None` if unknown. Defaults to `True` for the first token in the `Doc`. | +| `is_sent_end` | Does the token end a sentence? ~~bool~~ or `None` if unknown. | | `is_space` | Does the token consist of whitespace characters? Equivalent to `token.text.isspace()`. ~~bool~~ | | `is_bracket` | Is the token a bracket? ~~bool~~ | | `is_quote` | Is the token a quotation mark? 
~~bool~~ | diff --git a/website/docs/api/tokenizer.md b/website/docs/api/tokenizer.md index 8809c10bc..6eb7e8024 100644 --- a/website/docs/api/tokenizer.md +++ b/website/docs/api/tokenizer.md @@ -44,15 +44,16 @@ how to construct a custom tokenizer with different tokenization rules, see the > tokenizer = nlp.tokenizer > ``` -| Name | Description | -| ---------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `vocab` | A storage container for lexical types. ~~Vocab~~ | -| `rules` | Exceptions and special-cases for the tokenizer. ~~Optional[Dict[str, List[Dict[int, str]]]]~~ | -| `prefix_search` | A function matching the signature of `re.compile(string).search` to match prefixes. ~~Optional[Callable[[str], Optional[Match]]]~~ | -| `suffix_search` | A function matching the signature of `re.compile(string).search` to match suffixes. ~~Optional[Callable[[str], Optional[Match]]]~~ | -| `infix_finditer` | A function matching the signature of `re.compile(string).finditer` to find infixes. ~~Optional[Callable[[str], Iterator[Match]]]~~ | -| `token_match` | A function matching the signature of `re.compile(string).match` to find token matches. ~~Optional[Callable[[str], Optional[Match]]]~~ | -| `url_match` | A function matching the signature of `re.compile(string).match` to find token matches after considering prefixes and suffixes. ~~Optional[Callable[[str], Optional[Match]]]~~ | +| Name | Description | +| -------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `vocab` | A storage container for lexical types. ~~Vocab~~ | +| `rules` | Exceptions and special-cases for the tokenizer. ~~Optional[Dict[str, List[Dict[int, str]]]]~~ | +| `prefix_search` | A function matching the signature of `re.compile(string).search` to match prefixes. ~~Optional[Callable[[str], Optional[Match]]]~~ | +| `suffix_search` | A function matching the signature of `re.compile(string).search` to match suffixes. ~~Optional[Callable[[str], Optional[Match]]]~~ | +| `infix_finditer` | A function matching the signature of `re.compile(string).finditer` to find infixes. ~~Optional[Callable[[str], Iterator[Match]]]~~ | +| `token_match` | A function matching the signature of `re.compile(string).match` to find token matches. ~~Optional[Callable[[str], Optional[Match]]]~~ | +| `url_match` | A function matching the signature of `re.compile(string).match` to find token matches after considering prefixes and suffixes. ~~Optional[Callable[[str], Optional[Match]]]~~ | +| `faster_heuristics` 3.3.0 | Whether to restrict the final `Matcher`-based pass for rules to those containing affixes or space. Defaults to `True`. ~~bool~~ | ## Tokenizer.\_\_call\_\_ {#call tag="method"} diff --git a/website/docs/api/top-level.md b/website/docs/api/top-level.md index be19f9c3a..6d7431f28 100644 --- a/website/docs/api/top-level.md +++ b/website/docs/api/top-level.md @@ -320,12 +320,31 @@ If a setting is not present in the options, the default value will be used. | `template` 2.2 | Optional template to overwrite the HTML used to render entity spans. Should be a format string and can use `{bg}`, `{text}` and `{label}`. See [`templates.py`](%%GITHUB_SPACY/spacy/displacy/templates.py) for examples. 
~~Optional[str]~~ | | `kb_url_template` 3.2.1 | Optional template to construct the KB url for the entity to link to. Expects a python f-string format with single field to fill in. ~~Optional[str]~~ | -By default, displaCy comes with colors for all entity types used by -[spaCy's trained pipelines](/models). If you're using custom entity types, you -can use the `colors` setting to add your own colors for them. Your application -or pipeline package can also expose a -[`spacy_displacy_colors` entry point](/usage/saving-loading#entry-points-displacy) -to add custom labels and their colors automatically. + +#### Span Visualizer options {#displacy_options-span} + +> #### Example +> +> ```python +> options = {"spans_key": "sc"} +> displacy.serve(doc, style="span", options=options) +> ``` + +| Name | Description | +|-----------------|---------------------------------------------------------------------------------------------------------------------------------------------------------| +| `spans_key` | Which spans key to render spans from. Default is `"sc"`. ~~str~~ | +| `templates` | Dictionary containing the keys `"span"`, `"slice"`, and `"start"`. These dictate how the overall span, a span slice, and the starting token will be rendered. ~~Optional[Dict[str, str]~~ | +| `kb_url_template` | Optional template to construct the KB url for the entity to link to. Expects a python f-string format with single field to fill in ~~Optional[str]~~ | +| `colors` | Color overrides. Entity types should be mapped to color names or values. ~~Dict[str, str]~~ | + + +By default, displaCy comes with colors for all entity types used by [spaCy's +trained pipelines](/models) for both entity and span visualizer. If you're +using custom entity types, you can use the `colors` setting to add your own +colors for them. Your application or pipeline package can also expose a +[`spacy_displacy_colors` entry +point](/usage/saving-loading#entry-points-displacy) to add custom labels and +their colors automatically. By default, displaCy links to `#` for entities without a `kb_id` set on their span. If you wish to link an entity to their URL then consider using the @@ -335,6 +354,7 @@ span. If you wish to link an entity to their URL then consider using the should redirect you to their Wikidata page, in this case `https://www.wikidata.org/wiki/Q95`. + ## registry {#registry source="spacy/util.py" new="3"} spaCy's function registry extends @@ -423,7 +443,7 @@ and the accuracy scores on the development set. The built-in, default logger is the ConsoleLogger, which prints results to the console in tabular format. The [spacy-loggers](https://github.com/explosion/spacy-loggers) package, included as -a dependency of spaCy, enables other loggers: currently it provides one that +a dependency of spaCy, enables other loggers, such as one that sends results to a [Weights & Biases](https://www.wandb.com/) dashboard. Instead of using one of the built-in loggers, you can diff --git a/website/docs/api/vectors.md b/website/docs/api/vectors.md index b3bee822c..a651c23b0 100644 --- a/website/docs/api/vectors.md +++ b/website/docs/api/vectors.md @@ -327,9 +327,9 @@ will be counted individually. In `floret` mode, the keys table is not used. > assert vectors.n_keys == 0 > ``` -| Name | Description | -| ----------- | -------------------------------------------- | -| **RETURNS** | The number of all keys in the table. 
~~int~~ | +| Name | Description | +| ----------- | ----------------------------------------------------------------------------- | +| **RETURNS** | The number of all keys in the table. Returns `-1` for floret vectors. ~~int~~ | ## Vectors.most_similar {#most_similar tag="method"} @@ -348,7 +348,7 @@ supported for `floret` mode. > ``` | Name | Description | -| -------------- | --------------------------------------------------------------------------- | +| -------------- | --------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------- | | `queries` | An array with one or more vectors. ~~numpy.ndarray~~ | | _keyword-only_ | | | `batch_size` | The batch size to use. Default to `1024`. ~~int~~ | @@ -385,7 +385,7 @@ Change the embedding matrix to use different Thinc ops. > ``` | Name | Description | -|-------|----------------------------------------------------------| +| ----- | -------------------------------------------------------- | | `ops` | The Thinc ops to switch the embedding matrix to. ~~Ops~~ | ## Vectors.to_disk {#to_disk tag="method"} diff --git a/website/docs/api/vocab.md b/website/docs/api/vocab.md index c0a269d95..2e4a206ec 100644 --- a/website/docs/api/vocab.md +++ b/website/docs/api/vocab.md @@ -156,7 +156,7 @@ cosines are calculated in minibatches to reduce memory usage. > > ```python > nlp.vocab.prune_vectors(10000) -> assert len(nlp.vocab.vectors) <= 1000 +> assert len(nlp.vocab.vectors) <= 10000 > ``` | Name | Description | @@ -165,26 +165,34 @@ cosines are calculated in minibatches to reduce memory usage. | `batch_size` | Batch of vectors for calculating the similarities. Larger batch sizes might be faster, while temporarily requiring more memory. ~~int~~ | | **RETURNS** | A dictionary keyed by removed words mapped to `(string, score)` tuples, where `string` is the entry the removed word was mapped to, and `score` the similarity score between the two words. ~~Dict[str, Tuple[str, float]]~~ | +## Vocab.deduplicate_vectors {#deduplicate_vectors tag="method" new="3.3"} + +> #### Example +> +> ```python +> nlp.vocab.deduplicate_vectors() +> ``` + +Remove any duplicate rows from the current vector table, maintaining the +mappings for all words in the vectors. + ## Vocab.get_vector {#get_vector tag="method" new="2"} Retrieve a vector for a word in the vocabulary. Words can be looked up by string -or hash value. If no vectors data is loaded, a `ValueError` is raised. If `minn` -is defined, then the resulting vector uses [FastText](https://fasttext.cc/)'s -subword features by average over n-grams of `orth` (introduced in spaCy `v2.1`). +or hash value. If the current vectors do not contain an entry for the word, a +0-vector with the same number of dimensions +([`Vocab.vectors_length`](#attributes)) as the current vectors is returned. > #### Example > > ```python > nlp.vocab.get_vector("apple") -> nlp.vocab.get_vector("apple", minn=1, maxn=5) > ``` -| Name | Description | -| ----------------------------------- | ---------------------------------------------------------------------------------------------------------------------- | -| `orth` | The hash value of a word, or its unicode string. ~~Union[int, str]~~ | -| `minn` 2.1 | Minimum n-gram length used for FastText's n-gram computation. Defaults to the length of `orth`. ~~int~~ | -| `maxn` 2.1 | Maximum n-gram length used for FastText's n-gram computation. Defaults to the length of `orth`. 
~~int~~ | -| **RETURNS** | A word vector. Size and shape are determined by the `Vocab.vectors` instance. ~~numpy.ndarray[ndim=1, dtype=float32]~~ | +| Name | Description | +| ----------- | ---------------------------------------------------------------------------------------------------------------------- | +| `orth` | The hash value of a word, or its unicode string. ~~Union[int, str]~~ | +| **RETURNS** | A word vector. Size and shape are determined by the `Vocab.vectors` instance. ~~numpy.ndarray[ndim=1, dtype=float32]~~ | ## Vocab.set_vector {#set_vector tag="method" new="2"} diff --git a/website/docs/images/displacy-span-custom.html b/website/docs/images/displacy-span-custom.html new file mode 100644 index 000000000..97dd3b140 --- /dev/null +++ b/website/docs/images/displacy-span-custom.html @@ -0,0 +1,31 @@ +
+ [stripped HTML markup: renders "Welcome to the Bank of China." with "Bank of China" highlighted as a custom span labeled BANK]
\ No newline at end of file diff --git a/website/docs/images/displacy-span.html b/website/docs/images/displacy-span.html new file mode 100644 index 000000000..9bbc6403c --- /dev/null +++ b/website/docs/images/displacy-span.html @@ -0,0 +1,41 @@ +
+ [stripped HTML markup: renders "Welcome to the Bank of China." with overlapping spans "Bank of China" (ORG) and "China" (GPE) highlighted]
\ No newline at end of file diff --git a/website/docs/images/spacy-tailored-pipelines_wide.png b/website/docs/images/spacy-tailored-pipelines_wide.png new file mode 100644 index 000000000..d1a762ebe Binary files /dev/null and b/website/docs/images/spacy-tailored-pipelines_wide.png differ diff --git a/website/docs/usage/101/_architecture.md b/website/docs/usage/101/_architecture.md index 8fb452895..22e2b961e 100644 --- a/website/docs/usage/101/_architecture.md +++ b/website/docs/usage/101/_architecture.md @@ -45,10 +45,11 @@ components for different language processing tasks and also allows adding | ----------------------------------------------- | ------------------------------------------------------------------------------------------- | | [`AttributeRuler`](/api/attributeruler) | Set token attributes using matcher rules. | | [`DependencyParser`](/api/dependencyparser) | Predict syntactic dependencies. | +| [`EditTreeLemmatizer`](/api/edittreelemmatizer) | Predict base forms of words. | | [`EntityLinker`](/api/entitylinker) | Disambiguate named entities to nodes in a knowledge base. | | [`EntityRecognizer`](/api/entityrecognizer) | Predict named entities, e.g. persons or products. | | [`EntityRuler`](/api/entityruler) | Add entity spans to the `Doc` using token-based rules or exact phrase matches. | -| [`Lemmatizer`](/api/lemmatizer) | Determine the base forms of words. | +| [`Lemmatizer`](/api/lemmatizer) | Determine the base forms of words using rules and lookups. | | [`Morphologizer`](/api/morphologizer) | Predict morphological features and coarse-grained part-of-speech tags. | | [`SentenceRecognizer`](/api/sentencerecognizer) | Predict sentence boundaries. | | [`Sentencizer`](/api/sentencizer) | Implement rule-based sentence boundary detection that doesn't require the dependency parse. | diff --git a/website/docs/usage/embeddings-transformers.md b/website/docs/usage/embeddings-transformers.md index 2b74b6c57..6b02d7cae 100644 --- a/website/docs/usage/embeddings-transformers.md +++ b/website/docs/usage/embeddings-transformers.md @@ -211,23 +211,23 @@ PyTorch as a dependency below, but it may not find the best version for your setup. ```bash -### Example: Install PyTorch 1.7.1 for CUDA 10.1 with pip +### Example: Install PyTorch 1.11.0 for CUDA 11.3 with pip # See: https://pytorch.org/get-started/locally/ -$ pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html +$ pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html ``` Next, install spaCy with the extras for your CUDA version and transformers. The -CUDA extra (e.g., `cuda92`, `cuda102`, `cuda111`) installs the correct version -of [`cupy`](https://docs.cupy.dev/en/stable/install.html#installing-cupy), which +CUDA extra (e.g., `cuda102`, `cuda113`) installs the correct version of +[`cupy`](https://docs.cupy.dev/en/stable/install.html#installing-cupy), which is just like `numpy`, but for GPU. You may also need to set the `CUDA_PATH` environment variable if your CUDA runtime is installed in a non-standard -location. Putting it all together, if you had installed CUDA 10.2 in +location. 
Putting it all together, if you had installed CUDA 11.3 in `/opt/nvidia/cuda`, you would run: ```bash ### Installation with CUDA $ export CUDA_PATH="/opt/nvidia/cuda" -$ pip install -U %%SPACY_PKG_NAME[cuda102,transformers]%%SPACY_PKG_FLAGS +$ pip install -U %%SPACY_PKG_NAME[cuda113,transformers]%%SPACY_PKG_FLAGS ``` For [`transformers`](https://huggingface.co/transformers/) v4.0.0+ and models diff --git a/website/docs/usage/linguistic-features.md b/website/docs/usage/linguistic-features.md index f748fa8d6..b3b896a54 100644 --- a/website/docs/usage/linguistic-features.md +++ b/website/docs/usage/linguistic-features.md @@ -120,10 +120,13 @@ print(doc[2].pos_) # 'PRON' ## Lemmatization {#lemmatization model="lemmatizer" new="3"} -The [`Lemmatizer`](/api/lemmatizer) is a pipeline component that provides lookup -and rule-based lemmatization methods in a configurable component. An individual -language can extend the `Lemmatizer` as part of its -[language data](#language-data). +spaCy provides two pipeline components for lemmatization: + +1. The [`Lemmatizer`](/api/lemmatizer) component provides lookup and rule-based + lemmatization methods in a configurable component. An individual language can + extend the `Lemmatizer` as part of its [language data](#language-data). +2. The [`EditTreeLemmatizer`](/api/edittreelemmatizer) + 3.3 component provides a trainable lemmatizer. ```python ### {executable="true"} @@ -197,6 +200,20 @@ information, without consulting the context of the token. The rule-based lemmatizer also accepts list-based exception files. For English, these are acquired from [WordNet](https://wordnet.princeton.edu/). +### Trainable lemmatizer + +The [`EditTreeLemmatizer`](/api/edittreelemmatizer) can learn form-to-lemma +transformations from a training corpus that includes lemma annotations. This +removes the need to write language-specific rules and can (in many cases) +provide higher accuracies than lookup and rule-based lemmatizers. + +```python +import spacy + +nlp = spacy.blank("de") +nlp.add_pipe("trainable_lemmatizer", name="lemmatizer") +``` + ## Dependency Parsing {#dependency-parse model="parser"} spaCy features a fast and accurate syntactic dependency parser, and has a rich @@ -799,6 +816,10 @@ def tokenizer_pseudo_code( for substring in text.split(): suffixes = [] while substring: + if substring in special_cases: + tokens.extend(special_cases[substring]) + substring = "" + continue while prefix_search(substring) or suffix_search(substring): if token_match(substring): tokens.append(substring) @@ -831,6 +852,8 @@ def tokenizer_pseudo_code( infixes = infix_finditer(substring) offset = 0 for match in infixes: + if offset == 0 and match.start() == 0: + continue tokens.append(substring[offset : match.start()]) tokens.append(substring[match.start() : match.end()]) offset = match.end() @@ -849,20 +872,22 @@ def tokenizer_pseudo_code( The algorithm can be summarized as follows: 1. Iterate over space-separated substrings. -2. Look for a token match. If there is a match, stop processing and keep this - token. -3. Check whether we have an explicitly defined special case for this substring. +2. Check whether we have an explicitly defined special case for this substring. If we do, use it. -4. Otherwise, try to consume one prefix. If we consumed a prefix, go back to #2, +3. Look for a token match. If there is a match, stop processing and keep this + token. +4. Check whether we have an explicitly defined special case for this substring. + If we do, use it. +5. 
Otherwise, try to consume one prefix. If we consumed a prefix, go back to #3, so that the token match and special cases always get priority. -5. If we didn't consume a prefix, try to consume a suffix and then go back to - #2. -6. If we can't consume a prefix or a suffix, look for a URL match. -7. If there's no URL match, then look for a special case. -8. Look for "infixes" – stuff like hyphens etc. and split the substring into +6. If we didn't consume a prefix, try to consume a suffix and then go back to + #3. +7. If we can't consume a prefix or a suffix, look for a URL match. +8. If there's no URL match, then look for a special case. +9. Look for "infixes" – stuff like hyphens etc. and split the substring into tokens on all infixes. -9. Once we can't consume any more of the string, handle it as a single token. -10. Make a final pass over the text to check for special cases that include +10. Once we can't consume any more of the string, handle it as a single token. +11. Make a final pass over the text to check for special cases that include spaces or that were missed due to the incremental processing of affixes. @@ -1181,7 +1206,7 @@ class WhitespaceTokenizer: spaces = spaces[0:-1] else: spaces[-1] = False - + return Doc(self.vocab, words=words, spaces=spaces) nlp = spacy.blank("en") @@ -1261,8 +1286,8 @@ hyperparameters, pipeline and tokenizer used for constructing and training the pipeline. The `[nlp.tokenizer]` block refers to a **registered function** that takes the `nlp` object and returns a tokenizer. Here, we're registering a function called `whitespace_tokenizer` in the -[`@tokenizers` registry](/api/top-level#registry). To make sure spaCy knows how to -construct your tokenizer during training, you can pass in your Python file by +[`@tokenizers` registry](/api/top-level#registry). To make sure spaCy knows how +to construct your tokenizer during training, you can pass in your Python file by setting `--code functions.py` when you run [`spacy train`](/api/cli#train). > #### config.cfg diff --git a/website/docs/usage/models.md b/website/docs/usage/models.md index 3b79c4d0d..f82da44d9 100644 --- a/website/docs/usage/models.md +++ b/website/docs/usage/models.md @@ -259,6 +259,45 @@ used for training the current [Japanese pipelines](/models/ja).
+### Korean language support {#korean} + +> #### mecab-ko tokenizer +> +> ```python +> nlp = spacy.blank("ko") +> ``` + +The default MeCab-based Korean tokenizer requires: + +- [mecab-ko](https://bitbucket.org/eunjeon/mecab-ko/src/master/README.md) +- [mecab-ko-dic](https://bitbucket.org/eunjeon/mecab-ko-dic) +- [natto-py](https://github.com/buruzaemon/natto-py) + +For some Korean datasets and tasks, the +[rule-based tokenizer](/usage/linguistic-features#tokenization) is better-suited +than MeCab. To configure a Korean pipeline with the rule-based tokenizer: + +> #### Rule-based tokenizer +> +> ```python +> config = {"nlp": {"tokenizer": {"@tokenizers": "spacy.Tokenizer.v1"}}} +> nlp = spacy.blank("ko", config=config) +> ``` + +```ini +### config.cfg +[nlp] +lang = "ko" +tokenizer = {"@tokenizers" = "spacy.Tokenizer.v1"} +``` + + + +The [Korean trained pipelines](/models/ko) use the rule-based tokenizer, so no +additional dependencies are required. + + + ## Installing and using trained pipelines {#download} The easiest way to download a trained pipeline is via spaCy's @@ -417,10 +456,10 @@ doc = nlp("This is a sentence.") You can use the [`info`](/api/cli#info) command or -[`spacy.info()`](/api/top-level#spacy.info) method to print a pipeline -package's meta data before loading it. Each `Language` object with a loaded -pipeline also exposes the pipeline's meta data as the attribute `meta`. For -example, `nlp.meta['version']` will return the package version. +[`spacy.info()`](/api/top-level#spacy.info) method to print a pipeline package's +meta data before loading it. Each `Language` object with a loaded pipeline also +exposes the pipeline's meta data as the attribute `meta`. For example, +`nlp.meta['version']` will return the package version. diff --git a/website/docs/usage/processing-pipelines.md b/website/docs/usage/processing-pipelines.md index 11fd1459d..4f75b5193 100644 --- a/website/docs/usage/processing-pipelines.md +++ b/website/docs/usage/processing-pipelines.md @@ -303,22 +303,23 @@ available pipeline components and component functions. > ruler = nlp.add_pipe("entity_ruler") > ``` -| String name | Component | Description | -| -------------------- | ---------------------------------------------------- | ----------------------------------------------------------------------------------------- | -| `tagger` | [`Tagger`](/api/tagger) | Assign part-of-speech-tags. | -| `parser` | [`DependencyParser`](/api/dependencyparser) | Assign dependency labels. | -| `ner` | [`EntityRecognizer`](/api/entityrecognizer) | Assign named entities. | -| `entity_linker` | [`EntityLinker`](/api/entitylinker) | Assign knowledge base IDs to named entities. Should be added after the entity recognizer. | -| `entity_ruler` | [`EntityRuler`](/api/entityruler) | Assign named entities based on pattern rules and dictionaries. | -| `textcat` | [`TextCategorizer`](/api/textcategorizer) | Assign text categories: exactly one category is predicted per document. | -| `textcat_multilabel` | [`MultiLabel_TextCategorizer`](/api/textcategorizer) | Assign text categories in a multi-label setting: zero, one or more labels per document. | -| `lemmatizer` | [`Lemmatizer`](/api/lemmatizer) | Assign base forms to words. | -| `morphologizer` | [`Morphologizer`](/api/morphologizer) | Assign morphological features and coarse-grained POS tags. | -| `attribute_ruler` | [`AttributeRuler`](/api/attributeruler) | Assign token attribute mappings and rule-based exceptions. 
| -| `senter` | [`SentenceRecognizer`](/api/sentencerecognizer) | Assign sentence boundaries. | -| `sentencizer` | [`Sentencizer`](/api/sentencizer) | Add rule-based sentence segmentation without the dependency parse. | -| `tok2vec` | [`Tok2Vec`](/api/tok2vec) | Assign token-to-vector embeddings. | -| `transformer` | [`Transformer`](/api/transformer) | Assign the tokens and outputs of a transformer model. | +| String name | Component | Description | +| ---------------------- | ---------------------------------------------------- | ----------------------------------------------------------------------------------------- | +| `tagger` | [`Tagger`](/api/tagger) | Assign part-of-speech-tags. | +| `parser` | [`DependencyParser`](/api/dependencyparser) | Assign dependency labels. | +| `ner` | [`EntityRecognizer`](/api/entityrecognizer) | Assign named entities. | +| `entity_linker` | [`EntityLinker`](/api/entitylinker) | Assign knowledge base IDs to named entities. Should be added after the entity recognizer. | +| `entity_ruler` | [`EntityRuler`](/api/entityruler) | Assign named entities based on pattern rules and dictionaries. | +| `textcat` | [`TextCategorizer`](/api/textcategorizer) | Assign text categories: exactly one category is predicted per document. | +| `textcat_multilabel` | [`MultiLabel_TextCategorizer`](/api/textcategorizer) | Assign text categories in a multi-label setting: zero, one or more labels per document. | +| `lemmatizer` | [`Lemmatizer`](/api/lemmatizer) | Assign base forms to words using rules and lookups. | +| `trainable_lemmatizer` | [`EditTreeLemmatizer`](/api/edittreelemmatizer) | Assign base forms to words. | +| `morphologizer` | [`Morphologizer`](/api/morphologizer) | Assign morphological features and coarse-grained POS tags. | +| `attribute_ruler` | [`AttributeRuler`](/api/attributeruler) | Assign token attribute mappings and rule-based exceptions. | +| `senter` | [`SentenceRecognizer`](/api/sentencerecognizer) | Assign sentence boundaries. | +| `sentencizer` | [`Sentencizer`](/api/sentencizer) | Add rule-based sentence segmentation without the dependency parse. | +| `tok2vec` | [`Tok2Vec`](/api/tok2vec) | Assign token-to-vector embeddings. | +| `transformer` | [`Transformer`](/api/transformer) | Assign the tokens and outputs of a transformer model. | ### Disabling, excluding and modifying components {#disabling} @@ -1081,13 +1082,17 @@ on [serialization methods](/usage/saving-loading/#serialization-methods). > directory. ```python -### Custom serialization methods {highlight="6-7,9-11"} +### Custom serialization methods {highlight="7-11,13-15"} import srsly +from spacy.util import ensure_path class AcronymComponent: # other methods here... def to_disk(self, path, exclude=tuple()): + path = ensure_path(path) + if not path.exists(): + path.mkdir() srsly.write_json(path / "data.json", self.data) def from_disk(self, path, exclude=tuple()): diff --git a/website/docs/usage/projects.md b/website/docs/usage/projects.md index e0e787a1d..57d226913 100644 --- a/website/docs/usage/projects.md +++ b/website/docs/usage/projects.md @@ -213,6 +213,12 @@ format, train a pipeline, evaluate it and export metrics, package it and spin up a quick web demo. It looks pretty similar to a config file used to define CI pipelines. +> #### Tip: Multi-line YAML syntax for long values +> +> YAML has [multi-line syntax](https://yaml-multiline.info/) that can be +> helpful for readability with longer values such as project descriptions or +> commands that take several arguments. 
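To make the multi-line YAML tip above concrete, here is a small standalone sketch (separate from the `project.yml` include below). It assumes PyYAML is available purely for demonstration; spaCy reads `project.yml` with its own helpers, but the folded-scalar behaviour is the same.

```python
# Illustrative only: a folded block scalar (">") joins wrapped lines with spaces,
# so a long description or command stays readable in the YAML file but loads as
# a single string value.
import yaml  # assumed installed just for this demo

snippet = """
description: >
  This long project description is written across
  several lines but is loaded as a single string.
"""
data = yaml.safe_load(snippet)
print(data["description"])
```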
+ ```yaml %%GITHUB_PROJECTS/pipelines/tagger_parser_ud/project.yml ``` diff --git a/website/docs/usage/rule-based-matching.md b/website/docs/usage/rule-based-matching.md index 74bb10304..710c52dfd 100644 --- a/website/docs/usage/rule-based-matching.md +++ b/website/docs/usage/rule-based-matching.md @@ -162,6 +162,7 @@ rule-based matching are: | ----------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `ORTH` | The exact verbatim text of a token. ~~str~~ | | `TEXT` 2.1 | The exact verbatim text of a token. ~~str~~ | +| `NORM` | The normalized form of the token text. ~~str~~ | | `LOWER` | The lowercase form of the token text. ~~str~~ | |  `LENGTH` | The length of the token text. ~~int~~ | |  `IS_ALPHA`, `IS_ASCII`, `IS_DIGIT` | Token text consists of alphabetic characters, ASCII characters, digits. ~~bool~~ | diff --git a/website/docs/usage/saving-loading.md b/website/docs/usage/saving-loading.md index 9dad077e7..af140e7a7 100644 --- a/website/docs/usage/saving-loading.md +++ b/website/docs/usage/saving-loading.md @@ -202,7 +202,9 @@ the data to and from a JSON file. > rules _with_ the component data. ```python -### {highlight="14-18,20-25"} +### {highlight="16-23,25-30"} +from spacy.util import ensure_path + @Language.factory("my_component") class CustomComponent: def __init__(self): @@ -218,6 +220,9 @@ class CustomComponent: def to_disk(self, path, exclude=tuple()): # This will receive the directory path + /my_component + path = ensure_path(path) + if not path.exists(): + path.mkdir() data_path = path / "data.json" with data_path.open("w", encoding="utf8") as f: f.write(json.dumps(self.data)) @@ -467,7 +472,12 @@ pipeline package. When you save out a pipeline using `nlp.to_disk` and the component exposes a `to_disk` method, it will be called with the disk path. ```python +from spacy.util import ensure_path + def to_disk(self, path, exclude=tuple()): + path = ensure_path(path) + if not path.exists(): + path.mkdir() snek_path = path / "snek.txt" with snek_path.open("w", encoding="utf8") as snek_file: snek_file.write(self.snek) diff --git a/website/docs/usage/visualizers.md b/website/docs/usage/visualizers.md index 072718f91..f98c43224 100644 --- a/website/docs/usage/visualizers.md +++ b/website/docs/usage/visualizers.md @@ -167,6 +167,59 @@ This feature is especially handy if you're using displaCy to compare performance at different stages of a process, e.g. during training. Here you could use the title for a brief description of the text example and the number of iterations. +## Visualizing spans {#span} + +The span visualizer, `span`, highlights overlapping spans in a text. + +```python +### Span example +import spacy +from spacy import displacy +from spacy.tokens import Span + +text = "Welcome to the Bank of China." + +nlp = spacy.blank("en") +doc = nlp(text) + +doc.spans["sc"] = [ + Span(doc, 3, 6, "ORG"), + Span(doc, 5, 6, "GPE"), +] + +displacy.serve(doc, style="span") +``` + +import DisplacySpanHtml from 'images/displacy-span.html' + +