mirror of
https://github.com/explosion/spaCy.git
synced 2024-12-25 17:36:30 +03:00
657af5f91f
* 🚨 Ignore all existing Mypy errors
* 🏗 Add Mypy check to CI
* Add types-mock and types-requests as dev requirements
* Add additional type ignore directives
* Add types packages to dev-only list in reqs test
* Add types-dataclasses for Python 3.6
* Add ignore to pretrain
* 🏷 Improve type annotation on `run_command` helper

  The `run_command` helper previously declared that it returned an
  `Optional[subprocess.CompletedProcess]`, but it isn't actually possible for
  the function to return `None`. These changes modify the type annotation of
  the `run_command` helper and remove all now-unnecessary `# type: ignore`
  directives.

* 🔧 Allow variable type redefinition in limited contexts

  These changes modify how Mypy is configured to allow variables to have
  their type automatically redefined under certain conditions. The Mypy
  documentation contains the following example:

  ```python
  def process(items: List[str]) -> None:
      # 'items' has type List[str]
      items = [item.split() for item in items]
      # 'items' now has type List[List[str]]
      ...
  ```

  This configuration change is especially helpful in reducing the number of
  `# type: ignore` directives needed to handle the common pattern of:

  * Accepting a filepath as a string
  * Overwriting the variable using `filepath = ensure_path(filepath)`

  These changes enable redefinition and remove all `# type: ignore`
  directives rendered redundant by this change.

* 🏷 Add type annotation to converters mapping
* 🚨 Fix Mypy error in convert CLI argument verification
* 🏷 Improve type annotation on `resolve_dot_names` helper
* 🏷 Add type annotations for `Vocab` attributes `strings` and `vectors`
* 🏷 Add type annotations for more `Vocab` attributes
* 🏷 Add loose type annotation for gold data compilation
* 🏷 Improve `_format_labels` type annotation
* 🏷 Fix `get_lang_class` type annotation
* 🏷 Loosen return type of `Language.evaluate`
* 🏷 Don't accept `Scorer` in `handle_scores_per_type`
* 🏷 Add `string_to_list` overloads
* 🏷 Fix non-Optional command-line options
* 🙈 Ignore redefinition of `wandb_logger` in `loggers.py`
* ➕ Install `typing_extensions` in Python 3.8+

  The `typing_extensions` package states that it should be used when "writing
  code that must be compatible with multiple Python versions". Since spaCy
  needs to support multiple Python versions, it should be used when newer
  `typing` module members are required. One example of this is `Literal`,
  which is available starting with Python 3.8.

  Previously spaCy tried to import `Literal` from `typing`, falling back to
  `typing_extensions` if the import failed. However, Mypy doesn't seem to be
  able to understand what `Literal` means when the initial import fails.
  Therefore, these changes modify how `compat` imports `Literal` by always
  importing it from `typing_extensions`.

  These changes also modify how `typing_extensions` is installed, so that it
  is a requirement for all Python versions, including those greater than or
  equal to 3.8.

* 🏷 Improve type annotation for `Language.pipe`

  These changes add a missing overload variant to the type signature of
  `Language.pipe`. Additionally, the type signature is enhanced to allow type
  checkers to differentiate between the two overload variants based on the
  `as_tuple` parameter.

  Fixes #8772

* ➖ Don't install `typing-extensions` in Python 3.8+

  After more detailed analysis of how to implement Python version-specific
  type annotations in spaCy, it has been determined that branching on a
  comparison against `sys.version_info` can be statically analyzed by Mypy
  well enough to enable us to conditionally use `typing_extensions.Literal`.
  This means that we no longer need to install `typing_extensions` for
  Python versions greater than or equal to 3.8! 🎉

  These changes revert previous changes installing `typing-extensions`
  regardless of Python version and modify how we import the `Literal` type
  to ensure that Mypy treats it properly.

* resolve mypy errors for Strict pydantic types
* refactor code to avoid missing return statement
* fix types of convert CLI command
* avoid list-set confusion in debug_data
* fix typo and formatting
* small fixes to avoid type ignores
* fix types in profile CLI command and make it more efficient
* type fixes in projects CLI
* put one ignore back
* type fixes for render
* fix render types - the sequel
* fix BaseDefault in language definitions
* fix type of noun_chunks iterator - yields tuple instead of span
* fix types in language-specific modules
* 🏷 Expand accepted inputs of `get_string_id`

  `get_string_id` accepts either a string (in which case it returns its ID)
  or an ID (in which case it immediately returns the ID). These changes
  extend the type annotation of `get_string_id` to indicate that it can
  accept either strings or IDs.

* 🏷 Handle override types in `combine_score_weights`

  The `combine_score_weights` function allows users to pass an `overrides`
  mapping to override data extracted from the `weights` argument. Since it
  allows `Optional` dictionary values, the return value may also include
  `Optional` dictionary values. These changes update the type annotations
  for `combine_score_weights` to reflect this fact.

* 🏷 Fix tokenizer serialization method signatures in `DummyTokenizer`
* 🏷 Fix redefinition of `wandb_logger`

  These changes fix the redefinition of `wandb_logger` by giving a separate
  name to each `WandbLogger` version. For backwards-compatibility,
  `spacy.train` still exports `wandb_logger_v3` as `wandb_logger` for now.

* more fixes for typing in language
* type fixes in model definitions
* 🏷 Annotate `_RandomWords.probs` as `NDArray`
* 🏷 Annotate `tok2vec` layers to help Mypy
* 🐛 Fix `_RandomWords.probs` type annotations for Python 3.6

  Also remove an import that I forgot to move to the top of the module 😅

* more fixes for matchers and other pipeline components
* quick fix for entity linker
* fixing types for spancat, textcat, etc.
* bugfix for tok2vec
* type annotations for scorer
* add runtime_checkable for Protocol
* type and import fixes in tests
* mypy fixes for training utilities
* few fixes in util
* fix import
* 🐵 Remove unused `# type: ignore` directives
* 🏷 Annotate `Language._components`
* 🏷 Annotate `spacy.pipeline.Pipe`
* add doc as property to span.pyi
* small fixes and cleanup
* explicit type annotations instead of via comment

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
Co-authored-by: svlandeg <sofie.vanlandeghem@gmail.com>
Co-authored-by: svlandeg <svlandeg@github.com>
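A minimal sketch of the version-branched import pattern described above (illustrative only; the actual change lives in spaCy's `compat` module, and the `Mode` alias here is a hypothetical example):

```python
import sys

if sys.version_info >= (3, 8):
    from typing import Literal
else:
    from typing_extensions import Literal

# Mypy evaluates the version check statically, so `Literal` resolves on
# every supported Python version without needing `# type: ignore`.
Mode = Literal["train", "evaluate"]
```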
439 lines
16 KiB
Python
import numpy
from typing import List, Dict, Callable, Tuple, Optional, Iterable, Any, cast
from typing_extensions import Protocol, runtime_checkable
from thinc.api import Config, Model, get_current_ops, set_dropout_rate, Ops
from thinc.api import Optimizer
from thinc.types import Ragged, Ints2d, Floats2d, Ints1d

from ..scorer import Scorer
from ..language import Language
from .trainable_pipe import TrainablePipe
from ..tokens import Doc, SpanGroup, Span
from ..vocab import Vocab
from ..training import Example, validate_examples
from ..errors import Errors
from ..util import registry


spancat_default_config = """
[model]
@architectures = "spacy.SpanCategorizer.v1"
scorer = {"@layers": "spacy.LinearLogistic.v1"}

[model.reducer]
@layers = "spacy.mean_max_reducer.v1"
hidden_size = 128

[model.tok2vec]
@architectures = "spacy.Tok2Vec.v1"

[model.tok2vec.embed]
@architectures = "spacy.MultiHashEmbed.v1"
width = 96
rows = [5000, 2000, 1000, 1000]
attrs = ["ORTH", "PREFIX", "SUFFIX", "SHAPE"]
include_static_vectors = false

[model.tok2vec.encode]
@architectures = "spacy.MaxoutWindowEncoder.v1"
width = ${model.tok2vec.embed.width}
window_size = 1
maxout_pieces = 3
depth = 4
"""

DEFAULT_SPANCAT_MODEL = Config().from_str(spancat_default_config)["model"]


@runtime_checkable
class Suggester(Protocol):
    def __call__(self, docs: Iterable[Doc], *, ops: Optional[Ops] = None) -> Ragged:
        ...
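
# An illustrative sketch (not part of the original module) of a minimal
# custom suggester that satisfies the protocol above, proposing one
# candidate span per doc covering the whole document:
#
#     def whole_doc_suggester(
#         docs: Iterable[Doc], *, ops: Optional[Ops] = None
#     ) -> Ragged:
#         if ops is None:
#             ops = get_current_ops()
#         docs = list(docs)
#         starts_ends = ops.xp.asarray([[0, len(doc)] for doc in docs], dtype="i")
#         lengths = cast(Ints1d, ops.asarray([1 for _ in docs], dtype="i"))
#         return Ragged(starts_ends, lengths)
#
# Because `Suggester` is @runtime_checkable, `isinstance(whole_doc_suggester,
# Suggester)` passes at runtime (the check only verifies callability, not the
# argument types).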
@registry.misc("spacy.ngram_suggester.v1")
|
|
def build_ngram_suggester(sizes: List[int]) -> Suggester:
|
|
"""Suggest all spans of the given lengths. Spans are returned as a ragged
|
|
array of integers. The array has two columns, indicating the start and end
|
|
position."""
|
|
|
|
def ngram_suggester(docs: Iterable[Doc], *, ops: Optional[Ops] = None) -> Ragged:
|
|
if ops is None:
|
|
ops = get_current_ops()
|
|
spans = []
|
|
lengths = []
|
|
for doc in docs:
|
|
starts = ops.xp.arange(len(doc), dtype="i")
|
|
starts = starts.reshape((-1, 1))
|
|
length = 0
|
|
for size in sizes:
|
|
if size <= len(doc):
|
|
starts_size = starts[: len(doc) - (size - 1)]
|
|
spans.append(ops.xp.hstack((starts_size, starts_size + size)))
|
|
length += spans[-1].shape[0]
|
|
if spans:
|
|
assert spans[-1].ndim == 2, spans[-1].shape
|
|
lengths.append(length)
|
|
lengths_array = cast(Ints1d, ops.asarray(lengths, dtype="i"))
|
|
if len(spans) > 0:
|
|
output = Ragged(ops.xp.vstack(spans), lengths_array)
|
|
else:
|
|
output = Ragged(ops.xp.zeros((0, 0)), lengths_array)
|
|
|
|
assert output.dataXd.ndim == 2
|
|
return output
|
|
|
|
return ngram_suggester
|
|
|
|
|
|
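
# An illustrative sketch of the suggester's output. For a three-token doc and
# sizes=[1, 2], the ragged array holds one (start, end) row per candidate
# span, grouped per doc by `lengths` (assumes any loaded pipeline `nlp`):
#
#     suggester = build_ngram_suggester(sizes=[1, 2])
#     ragged = suggester([nlp("a b c")])
#     # ragged.dataXd -> [[0, 1], [1, 2], [2, 3], [0, 2], [1, 3]]
#     # ragged.lengths -> [5]  (all five rows belong to the single doc)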
@registry.misc("spacy.ngram_range_suggester.v1")
|
|
def build_ngram_range_suggester(min_size: int, max_size: int) -> Suggester:
|
|
"""Suggest all spans of the given lengths between a given min and max value - both inclusive.
|
|
Spans are returned as a ragged array of integers. The array has two columns,
|
|
indicating the start and end position."""
|
|
sizes = list(range(min_size, max_size + 1))
|
|
return build_ngram_suggester(sizes)
|
|
|
|
|
|


@Language.factory(
    "spancat",
    assigns=["doc.spans"],
    default_config={
        "threshold": 0.5,
        "spans_key": "sc",
        "max_positive": None,
        "model": DEFAULT_SPANCAT_MODEL,
        "suggester": {"@misc": "spacy.ngram_suggester.v1", "sizes": [1, 2, 3]},
    },
    default_score_weights={"spans_sc_f": 1.0, "spans_sc_p": 0.0, "spans_sc_r": 0.0},
)
def make_spancat(
    nlp: Language,
    name: str,
    suggester: Suggester,
    model: Model[Tuple[List[Doc], Ragged], Floats2d],
    spans_key: str,
    threshold: float = 0.5,
    max_positive: Optional[int] = None,
) -> "SpanCategorizer":
    """Create a SpanCategorizer component. The span categorizer consists of two
    parts: a suggester function that proposes candidate spans, and a labeller
    model that predicts one or more labels for each span.

    suggester (Callable[[Iterable[Doc], Optional[Ops]], Ragged]): A function that suggests spans.
        Spans are returned as a ragged array with two integer columns, for the
        start and end positions.
    model (Model[Tuple[List[Doc], Ragged], Floats2d]): A model instance that
        is given a list of documents and (start, end) indices representing
        candidate span offsets. The model predicts a probability for each
        category for each span.
    spans_key (str): Key of the doc.spans dict to save the spans under. During
        initialization and training, the component will look for spans on the
        reference document under the same key.
    threshold (float): Minimum probability to consider a prediction positive.
        Spans with a positive prediction will be saved on the Doc. Defaults to
        0.5.
    max_positive (Optional[int]): Maximum number of labels to consider positive
        per span. Defaults to None, indicating no limit.
    """
    return SpanCategorizer(
        nlp.vocab,
        suggester=suggester,
        model=model,
        spans_key=spans_key,
        threshold=threshold,
        max_positive=max_positive,
        name=name,
    )
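
# An illustrative sketch (not part of the original module) of wiring this
# factory into a pipeline; the config values shown mirror the defaults
# registered above, and the label "PERSON" is arbitrary:
#
#     import spacy
#
#     nlp = spacy.blank("en")
#     spancat = nlp.add_pipe(
#         "spancat",
#         config={
#             "spans_key": "sc",
#             "suggester": {"@misc": "spacy.ngram_suggester.v1", "sizes": [1, 2, 3]},
#         },
#     )
#     spancat.add_label("PERSON")
#     nlp.initialize()
#     # After training, predicted spans appear under doc.spans["sc"].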


class SpanCategorizer(TrainablePipe):
    """Pipeline component to label spans of text.

    DOCS: https://spacy.io/api/spancategorizer
    """

    def __init__(
        self,
        vocab: Vocab,
        model: Model[Tuple[List[Doc], Ragged], Floats2d],
        suggester: Suggester,
        name: str = "spancat",
        *,
        spans_key: str = "spans",
        threshold: float = 0.5,
        max_positive: Optional[int] = None,
    ) -> None:
        """Initialize the span categorizer.

        DOCS: https://spacy.io/api/spancategorizer#init
        """
        self.cfg = {
            "labels": [],
            "spans_key": spans_key,
            "threshold": threshold,
            "max_positive": max_positive,
        }
        self.vocab = vocab
        self.suggester = suggester
        self.model = model
        self.name = name

    @property
    def key(self) -> str:
        """Key of the doc.spans dict to save the spans under. During
        initialization and training, the component will look for spans on the
        reference document under the same key.
        """
        return str(self.cfg["spans_key"])

    def add_label(self, label: str) -> int:
        """Add a new label to the pipe.

        label (str): The label to add.
        RETURNS (int): 0 if label is already present, otherwise 1.

        DOCS: https://spacy.io/api/spancategorizer#add_label
        """
        if not isinstance(label, str):
            raise ValueError(Errors.E187)
        if label in self.labels:
            return 0
        self._allow_extra_label()
        self.cfg["labels"].append(label)  # type: ignore
        self.vocab.strings.add(label)
        return 1

    @property
    def labels(self) -> Tuple[str]:
        """RETURNS (Tuple[str]): The labels currently added to the component.

        DOCS: https://spacy.io/api/spancategorizer#labels
        """
        return tuple(self.cfg["labels"])  # type: ignore

    @property
    def label_data(self) -> List[str]:
        """RETURNS (List[str]): Information about the component's labels.

        DOCS: https://spacy.io/api/spancategorizer#label_data
        """
        return list(self.labels)

    def predict(self, docs: Iterable[Doc]):
        """Apply the pipeline's model to a batch of docs, without modifying them.

        docs (Iterable[Doc]): The documents to predict.
        RETURNS: The model's prediction for each document.

        DOCS: https://spacy.io/api/spancategorizer#predict
        """
        indices = self.suggester(docs, ops=self.model.ops)
        scores = self.model.predict((docs, indices))  # type: ignore
        return indices, scores

    def set_annotations(self, docs: Iterable[Doc], indices_scores) -> None:
        """Modify a batch of Doc objects, using pre-computed scores.

        docs (Iterable[Doc]): The documents to modify.
        indices_scores: The indices and scores to set, produced by
            SpanCategorizer.predict.

        DOCS: https://spacy.io/api/spancategorizer#set_annotations
        """
        labels = self.labels
        indices, scores = indices_scores
        offset = 0
        for i, doc in enumerate(docs):
            indices_i = indices[i].dataXd
            doc.spans[self.key] = self._make_span_group(
                doc, indices_i, scores[offset : offset + indices.lengths[i]], labels  # type: ignore[arg-type]
            )
            offset += indices.lengths[i]

    def update(
        self,
        examples: Iterable[Example],
        *,
        drop: float = 0.0,
        sgd: Optional[Optimizer] = None,
        losses: Optional[Dict[str, float]] = None,
    ) -> Dict[str, float]:
        """Learn from a batch of documents and gold-standard information,
        updating the pipe's model. Delegates to predict and get_loss.

        examples (Iterable[Example]): A batch of Example objects.
        drop (float): The dropout rate.
        sgd (thinc.api.Optimizer): The optimizer.
        losses (Dict[str, float]): Optional record of the loss during training.
            Updated using the component name as the key.
        RETURNS (Dict[str, float]): The updated losses dictionary.

        DOCS: https://spacy.io/api/spancategorizer#update
        """
        if losses is None:
            losses = {}
        losses.setdefault(self.name, 0.0)
        validate_examples(examples, "SpanCategorizer.update")
        self._validate_categories(examples)
        if not any(len(eg.predicted) if eg.predicted else 0 for eg in examples):
            # Handle cases where there are no tokens in any docs.
            return losses
        docs = [eg.predicted for eg in examples]
        spans = self.suggester(docs, ops=self.model.ops)
        if spans.lengths.sum() == 0:
            return losses
        set_dropout_rate(self.model, drop)
        scores, backprop_scores = self.model.begin_update((docs, spans))
        loss, d_scores = self.get_loss(examples, (spans, scores))
        backprop_scores(d_scores)  # type: ignore
        if sgd is not None:
            self.finish_update(sgd)
        losses[self.name] += loss
        return losses
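
    # An illustrative sketch (not part of the original module) of driving
    # update() from a training loop, where `examples` is a list of
    # spacy.training.Example objects with gold spans under the spans_key:
    #
    #     optimizer = nlp.initialize(lambda: examples)
    #     for _ in range(20):
    #         losses = nlp.get_pipe("spancat").update(examples, sgd=optimizer)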

    def get_loss(
        self, examples: Iterable[Example], spans_scores: Tuple[Ragged, Floats2d]
    ) -> Tuple[float, Floats2d]:
        """Find the loss and gradient of loss for the batch of documents and
        their predicted scores.

        examples (Iterable[Example]): The batch of examples.
        spans_scores: Scores representing the model's predictions.
        RETURNS (Tuple[float, Floats2d]): The loss and the gradient.

        DOCS: https://spacy.io/api/spancategorizer#get_loss
        """
        spans, scores = spans_scores
        spans = Ragged(
            self.model.ops.to_numpy(spans.data), self.model.ops.to_numpy(spans.lengths)
        )
        label_map = {label: i for i, label in enumerate(self.labels)}
        target = numpy.zeros(scores.shape, dtype=scores.dtype)
        offset = 0
        for i, eg in enumerate(examples):
            # Map the (start, end) offsets of spans to rows in the d_scores
            # array, so that we can adjust the gradient for predictions that
            # were in the gold standard.
            spans_index = {}
            spans_i = spans[i].dataXd
            for j in range(spans.lengths[i]):
                start = int(spans_i[j, 0])  # type: ignore
                end = int(spans_i[j, 1])  # type: ignore
                spans_index[(start, end)] = offset + j
            for gold_span in self._get_aligned_spans(eg):
                key = (gold_span.start, gold_span.end)
                if key in spans_index:
                    row = spans_index[key]
                    k = label_map[gold_span.label_]
                    target[row, k] = 1.0
            # The target is a flat array for all docs. Track the position
            # we're at within the flat array.
            offset += spans.lengths[i]
        target = self.model.ops.asarray(target, dtype="f")  # type: ignore
        # The target will have the values 0 (for untrue predictions) or 1
        # (for true predictions).
        # The scores should be in the range [0, 1].
        # If the prediction is 0.9 and it's true, the gradient
        # will be -0.1 (0.9 - 1.0).
        # If the prediction is 0.9 and it's false, the gradient will be
        # 0.9 (0.9 - 0.0).
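        # An illustrative worked example of the arithmetic above: with one
        # candidate span and two labels, scores = [[0.9, 0.2]] and a gold
        # label in column 0 give target = [[1.0, 0.0]], so
        # d_scores = [[-0.1, 0.2]] and loss = (-0.1) ** 2 + 0.2 ** 2 = 0.05.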
        d_scores = scores - target
        loss = float((d_scores ** 2).sum())
        return loss, d_scores

    def initialize(
        self,
        get_examples: Callable[[], Iterable[Example]],
        *,
        nlp: Optional[Language] = None,
        labels: Optional[List[str]] = None,
    ) -> None:
        """Initialize the pipe for training, using a representative set
        of data examples.

        get_examples (Callable[[], Iterable[Example]]): Function that
            returns a representative sample of gold-standard Example objects.
        nlp (Optional[Language]): The current nlp object the component is part of.
        labels (Optional[List[str]]): The labels to add to the component, typically
            generated by the `init labels` command. If no labels are provided, the
            get_examples callback is used to extract the labels from the data.

        DOCS: https://spacy.io/api/spancategorizer#initialize
        """
        subbatch: List[Example] = []
        if labels is not None:
            for label in labels:
                self.add_label(label)
        for eg in get_examples():
            if labels is None:
                for span in eg.reference.spans.get(self.key, []):
                    self.add_label(span.label_)
            if len(subbatch) < 10:
                subbatch.append(eg)
        self._require_labels()
        if subbatch:
            docs = [eg.x for eg in subbatch]
            spans = self.suggester(docs)
            Y = self.model.ops.alloc2f(spans.dataXd.shape[0], len(self.labels))
            self.model.initialize(X=(docs, spans), Y=Y)
        else:
            self.model.initialize()

    def score(self, examples: Iterable[Example], **kwargs) -> Dict[str, Any]:
        """Score a batch of examples.

        examples (Iterable[Example]): The examples to score.
        RETURNS (Dict[str, Any]): The scores, produced by Scorer.score_spans.

        DOCS: https://spacy.io/api/spancategorizer#score
        """
        validate_examples(examples, "SpanCategorizer.score")
        self._validate_categories(examples)
        kwargs = dict(kwargs)
        attr_prefix = "spans_"
        kwargs.setdefault("attr", f"{attr_prefix}{self.key}")
        kwargs.setdefault("allow_overlap", True)
        kwargs.setdefault(
            "getter", lambda doc, key: doc.spans.get(key[len(attr_prefix) :], [])
        )
        kwargs.setdefault("has_annotation", lambda doc: self.key in doc.spans)
        return Scorer.score_spans(examples, **kwargs)
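
    # An illustrative note: with the default spans_key "sc", score() reports
    # keys such as "spans_sc_p", "spans_sc_r" and "spans_sc_f", matching the
    # default_score_weights registered with the "spancat" factory above.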

    def _validate_categories(self, examples: Iterable[Example]):
        # TODO
        pass

    def _get_aligned_spans(self, eg: Example):
        return eg.get_aligned_spans_y2x(
            eg.reference.spans.get(self.key, []), allow_overlap=True
        )

    def _make_span_group(
        self, doc: Doc, indices: Ints2d, scores: Floats2d, labels: List[str]
    ) -> SpanGroup:
        spans = SpanGroup(doc, name=self.key)
        max_positive = self.cfg["max_positive"]
        threshold = self.cfg["threshold"]

        keeps = scores >= threshold
        ranked = (scores * -1).argsort()  # type: ignore
        if max_positive is not None:
            assert isinstance(max_positive, int)
            span_filter = ranked[:, max_positive:]
            for i, row in enumerate(span_filter):
                keeps[i, row] = False
        spans.attrs["scores"] = scores[keeps].flatten()

        indices = self.model.ops.to_numpy(indices)
        keeps = self.model.ops.to_numpy(keeps)

        for i in range(indices.shape[0]):
            start = indices[i, 0]
            end = indices[i, 1]

            for j, keep in enumerate(keeps[i]):
                if keep:
                    spans.append(Span(doc, start, end, label=labels[j]))

        return spans
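
    # An illustrative worked example of the filtering above: with
    # threshold=0.5, max_positive=1 and scores = [[0.7, 0.9]] for a single
    # span, both labels pass the threshold, but the ranking keeps only the
    # top-scoring column (label 1 at 0.9), so the span is emitted once and
    # the 0.7 label (column 0) is masked out of `keeps`.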