Mirror of https://github.com/explosion/spaCy.git, synced 2024-12-25 17:36:30 +03:00

Merge branch 'master' into spacy.io

Commit 764359c952
.github/contributors/Bharat123rox.md (new file, 106 lines)
@@ -0,0 +1,106 @@

# spaCy contributor agreement

This spaCy Contributor Agreement (**"SCA"**) is based on the
[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
The SCA applies to any contribution that you make to any product or project
managed by us (the **"project"**), and sets out the intellectual property rights
you grant to us in the contributed materials. The term **"us"** shall mean
[ExplosionAI GmbH](https://explosion.ai/legal). The term
**"you"** shall mean the person or entity identified below.

If you agree to be bound by these terms, fill in the information requested
below and include the filled-in version with your first pull request, under the
folder [`.github/contributors/`](/.github/contributors/). The name of the file
should be your GitHub username, with the extension `.md`. For example, the user
example_user would create the file `.github/contributors/example_user.md`.

Read this agreement carefully before signing. These terms and conditions
constitute a binding legal agreement.

## Contributor Agreement

1. The term "contribution" or "contributed materials" means any source code,
object code, patch, tool, sample, graphic, specification, manual,
documentation, or any other material posted or submitted by you to the project.

2. With respect to any worldwide copyrights, or copyright applications and
registrations, in your contribution:

    * you hereby assign to us joint ownership, and to the extent that such
    assignment is or becomes invalid, ineffective or unenforceable, you hereby
    grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
    royalty-free, unrestricted license to exercise all rights under those
    copyrights. This includes, at our option, the right to sublicense these same
    rights to third parties through multiple levels of sublicensees or other
    licensing arrangements;

    * you agree that each of us can do all things in relation to your
    contribution as if each of us were the sole owners, and if one of us makes
    a derivative work of your contribution, the one who makes the derivative
    work (or has it made) will be the sole owner of that derivative work;

    * you agree that you will not assert any moral rights in your contribution
    against us, our licensees or transferees;

    * you agree that we may register a copyright in your contribution and
    exercise all ownership rights associated with it; and

    * you agree that neither of us has any duty to consult with, obtain the
    consent of, pay or render an accounting to the other for any use or
    distribution of your contribution.

3. With respect to any patents you own, or that you can license without payment
to any third party, you hereby grant to us a perpetual, irrevocable,
non-exclusive, worldwide, no-charge, royalty-free license to:

    * make, have made, use, sell, offer to sell, import, and otherwise transfer
    your contribution in whole or in part, alone or in combination with or
    included in any product, work or materials arising out of the project to
    which your contribution was submitted, and

    * at our option, to sublicense these same rights to third parties through
    multiple levels of sublicensees or other licensing arrangements.

4. Except as set out above, you keep all right, title, and interest in your
contribution. The rights that you grant to us under these terms are effective
on the date you first submitted a contribution to us, even if your submission
took place before the date you sign these terms.

5. You covenant, represent, warrant and agree that:

    * Each contribution that you submit is and shall be an original work of
    authorship and you can legally grant the rights set out in this SCA;

    * to the best of your knowledge, each contribution will not violate any
    third party's copyrights, trademarks, patents, or other intellectual
    property rights; and

    * each contribution shall be in compliance with U.S. export control laws and
    other applicable export and import laws. You agree to notify us if you
    become aware of any circumstance which would make any of the foregoing
    representations inaccurate in any respect. We may publicly disclose your
    participation in the project, including the fact that you have signed the SCA.

6. This SCA is governed by the laws of the State of California and applicable
U.S. Federal law. Any choice of law rules will not apply.

7. Please place an “x” on one of the applicable statements below. Please do NOT
mark both statements:

    * [x] I am signing on behalf of myself as an individual and no other person
    or entity, including my employer, has or will have rights with respect to my
    contributions.

    * [ ] I am signing on behalf of my employer or a legal entity and I have the
    actual authority to contractually bind that entity.

## Contributor Details

| Field                          | Entry                |
|------------------------------- | -------------------- |
| Name                           | Bharat Raghunathan   |
| Company name (if applicable)   |                      |
| Title or role (if applicable)  |                      |
| Date                           | 19 - 03 - 2019       |
| GitHub username                | Bharat123rox         |
| Website (optional)             |                      |
@@ -13,14 +13,14 @@ import srsly
 
 import spacy
 import spacy.util
-from ...tokens import Token, Doc
-from ...gold import GoldParse
-from ...util import compounding, minibatch_by_words
-from ...syntax.nonproj import projectivize
-from ...matcher import Matcher
+from spacy.tokens import Token, Doc
+from spacy.gold import GoldParse
+from spacy.util import compounding, minibatch_by_words
+from spacy.syntax.nonproj import projectivize
+from spacy.matcher import Matcher
 
-# from ...morphology import Fused_begin, Fused_inside
-from ... import displacy
+# from spacy.morphology import Fused_begin, Fused_inside
+from spacy import displacy
 from collections import defaultdict, Counter
 from timeit import default_timer as timer
 
@@ -33,10 +33,10 @@ import numpy.random
 
 from . import conll17_ud_eval
 
-from ... import lang
-from ...lang import zh
-from ...lang import ja
-from ...lang import ru
+from spacy import lang
+from spacy.lang import zh
+from spacy.lang import ja
+from spacy.lang import ru
 
 
 ################
@@ -13,12 +13,12 @@ import json
 
 import spacy
 import spacy.util
-from ...tokens import Token, Doc
-from ...gold import GoldParse
-from ...util import compounding, minibatch, minibatch_by_words
-from ...syntax.nonproj import projectivize
-from ...matcher import Matcher
-from ... import displacy
+from spacy.tokens import Token, Doc
+from spacy.gold import GoldParse
+from spacy.util import compounding, minibatch, minibatch_by_words
+from spacy.syntax.nonproj import projectivize
+from spacy.matcher import Matcher
+from spacy import displacy
 from collections import defaultdict, Counter
 from timeit import default_timer as timer
 
@@ -28,9 +28,9 @@ import numpy.random
 
 from . import conll17_ud_eval
 
-from ... import lang
-from ...lang import zh
-from ...lang import ja
+from spacy import lang
+from spacy.lang import zh
+from spacy.lang import ja
 
 try:
     import torch
setup.py (12 lines changed)
@@ -238,12 +238,12 @@ def setup_package():
         ],
         setup_requires=["wheel"],
         extras_require={
-            "cuda": ["cupy>=4.0"],
-            "cuda80": ["cupy-cuda80>=4.0"],
-            "cuda90": ["cupy-cuda90>=4.0"],
-            "cuda91": ["cupy-cuda91>=4.0"],
-            "cuda92": ["cupy-cuda92>=4.0"],
-            "cuda100": ["cupy-cuda100>=4.0"],
+            "cuda": ["thinc_gpu_ops>=0.0.1,<0.1.0", "cupy>=5.0.0b4"],
+            "cuda80": ["thinc_gpu_ops>=0.0.1,<0.1.0", "cupy-cuda80>=5.0.0b4"],
+            "cuda90": ["thinc_gpu_ops>=0.0.1,<0.1.0", "cupy-cuda90>=5.0.0b4"],
+            "cuda91": ["thinc_gpu_ops>=0.0.1,<0.1.0", "cupy-cuda91>=5.0.0b4"],
+            "cuda92": ["thinc_gpu_ops>=0.0.1,<0.1.0", "cupy-cuda92>=5.0.0b4"],
+            "cuda100": ["thinc_gpu_ops>=0.0.1,<0.1.0", "cupy-cuda100>=5.0.0b4"],
             # Language tokenizers with external dependencies
             "ja": ["mecab-python3==0.7"],
         },
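Note: pip resolves these extras at install time, so an install along the lines of `pip install spacy[cuda92]` would now pull in `thinc_gpu_ops` together with the matching CuPy wheel, whereas the older pins allowed any `cupy>=4.0` on its own.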
@@ -1,6 +1,7 @@
 # coding: utf8
 from __future__ import unicode_literals
 import warnings
+import sys
 
 warnings.filterwarnings("ignore", message="numpy.dtype size changed")
 warnings.filterwarnings("ignore", message="numpy.ufunc size changed")

@@ -11,10 +12,14 @@ from thinc.neural.util import prefer_gpu, require_gpu
 from .cli.info import info as cli_info
 from .glossary import explain
 from .about import __version__
-from .errors import Warnings, deprecation_warning
+from .errors import Errors, Warnings, deprecation_warning
 from . import util
 
 
+if sys.maxunicode == 65535:
+    raise SystemError(Errors.E130)
+
+
 def load(name, **overrides):
     depr_path = overrides.get("path")
     if depr_path not in (True, False, None):
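Note: the new guard above refuses to run on narrow unicode builds. A quick standard-library check of which build you have (a minimal sketch, nothing spaCy-specific):

```python
import sys

# 65535 (0xFFFF) indicates a narrow UCS-2 build, which spaCy >= 2.1 rejects;
# 1114111 (0x10FFFF) indicates a wide build. CPython 3.3+ is always wide.
print(sys.maxunicode)
```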
@@ -9,8 +9,7 @@ if __name__ == "__main__":
     import sys
     from wasabi import Printer
     from spacy.cli import download, link, info, package, train, pretrain, convert
-    from spacy.cli import init_model, profile, evaluate, validate
-    from spacy.cli import ud_train, ud_evaluate, debug_data
+    from spacy.cli import init_model, profile, evaluate, validate, debug_data
 
     msg = Printer()
 

@@ -21,9 +20,7 @@ if __name__ == "__main__":
         "train": train,
         "pretrain": pretrain,
         "debug-data": debug_data,
-        "ud-train": ud_train,
         "evaluate": evaluate,
-        "ud-evaluate": ud_evaluate,
         "convert": convert,
         "package": package,
         "init-model": init_model,
@@ -4,7 +4,7 @@
 # fmt: off
 
 __title__ = "spacy"
-__version__ = "2.1.0"
+__version__ = "2.1.1"
 __summary__ = "Industrial-strength Natural Language Processing (NLP) with Python and Cython"
 __uri__ = "https://spacy.io"
 __author__ = "Explosion AI"
@@ -10,4 +10,3 @@ from .evaluate import evaluate  # noqa: F401
 from .convert import convert  # noqa: F401
 from .init_model import init_model  # noqa: F401
 from .validate import validate  # noqa: F401
-from .ud import ud_train, ud_evaluate  # noqa: F401
@@ -9,7 +9,7 @@ from collections import Counter
 from pathlib import Path
 from thinc.v2v import Affine, Maxout
 from thinc.misc import LayerNorm as LN
-from thinc.neural.util import prefer_gpu
+from thinc.neural.util import prefer_gpu, get_array_module
 from wasabi import Printer
 import srsly
 
@@ -27,6 +27,7 @@ from .. import util
     width=("Width of CNN layers", "option", "cw", int),
     depth=("Depth of CNN layers", "option", "cd", int),
     embed_rows=("Embedding rows", "option", "er", int),
+    loss_func=("Loss to use for the objective. L2 or cosine", "option", "L", str),
     use_vectors=("Whether to use the static vectors as input features", "flag", "uv"),
     dropout=("Dropout", "option", "d", float),
     batch_size=("Number of words per training batch", "option", "bs", int),
@@ -42,6 +43,7 @@ def pretrain(
     width=96,
     depth=4,
     embed_rows=2000,
+    loss_func="cosine",
     use_vectors=False,
     dropout=0.2,
     nr_iter=1000,
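Note: together with the `-L` option defined above, the objective should now be selectable from the command line; assuming the usual `pretrain` positional arguments, an invocation like `python -m spacy pretrain texts.jsonl vectors_model output_dir -L L2` (hypothetical paths) would restore the previous squared-error objective, with cosine as the new default.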
@@ -123,7 +125,7 @@ def pretrain(
                 max_length=max_length,
                 min_length=min_length,
             )
-            loss = make_update(model, docs, optimizer, drop=dropout)
+            loss = make_update(model, docs, optimizer, objective=loss_func, drop=dropout)
             progress = tracker.update(epoch, loss, docs)
             if progress:
                 msg.row(progress, **row_settings)
@@ -196,11 +198,26 @@ def get_vectors_loss(ops, docs, prediction, objective="L2"):
     ids = ops.flatten([doc.to_array(ID).ravel() for doc in docs])
     target = docs[0].vocab.vectors.data[ids]
     if objective == "L2":
-        d_scores = prediction - target
-        loss = (d_scores ** 2).sum()
-    else:
-        raise NotImplementedError(objective)
-    return loss, d_scores
+        d_target = prediction - target
+        loss = (d_target ** 2).sum()
+    elif objective == "cosine":
+        loss, d_target = get_cossim_loss(prediction, target)
+    return loss, d_target
 
 
+def get_cossim_loss(yh, y):
+    # Add a small constant to avoid 0 vectors
+    yh = yh + 1e-8
+    y = y + 1e-8
+    # https://math.stackexchange.com/questions/1923613/partial-derivative-of-cosine-similarity
+    xp = get_array_module(yh)
+    norm_yh = xp.linalg.norm(yh, axis=1, keepdims=True)
+    norm_y = xp.linalg.norm(y, axis=1, keepdims=True)
+    mul_norms = norm_yh * norm_y
+    cosine = (yh * y).sum(axis=1, keepdims=True) / mul_norms
+    d_yh = (y / mul_norms) - (cosine * (yh / norm_yh ** 2))
+    loss = xp.abs(cosine - 1).sum()
+    return loss, -d_yh
+
+
 def create_pretraining_model(nlp, tok2vec):
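Note: since `cosine <= 1`, the loss `xp.abs(cosine - 1).sum()` is effectively `(1 - cosine).sum()`, so negating `d_yh` (the derivative of the cosine itself) yields the loss gradient. A numpy-only sketch, independent of the spaCy internals above, that checks the returned gradient against a finite-difference estimate:

```python
import numpy

def cossim_loss(yh, y):
    # Mirror of the cosine objective above, numpy-only for illustration.
    yh, y = yh + 1e-8, y + 1e-8
    norm_yh = numpy.linalg.norm(yh, axis=1, keepdims=True)
    norm_y = numpy.linalg.norm(y, axis=1, keepdims=True)
    mul_norms = norm_yh * norm_y
    cosine = (yh * y).sum(axis=1, keepdims=True) / mul_norms
    d_yh = (y / mul_norms) - cosine * (yh / norm_yh ** 2)
    return numpy.abs(cosine - 1).sum(), -d_yh

rng = numpy.random.RandomState(0)
yh, y = rng.randn(4, 8), rng.randn(4, 8)
loss, grad = cossim_loss(yh, y)
eps = 1e-6
bumped = yh.copy()
bumped[0, 0] += eps  # perturb one coordinate
loss2, _ = cossim_loss(bumped, y)
print((loss2 - loss) / eps, grad[0, 0])  # the two numbers should agree closely
```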
@@ -367,6 +367,10 @@ class Errors(object):
             "Instead, create a new Span object and specify the `label` keyword argument, "
             "for example:\nfrom spacy.tokens import Span\n"
             "span = Span(doc, start={start}, end={end}, label='{label}')")
+    E130 = ("You are running a narrow unicode build, which is incompatible "
+            "with spacy >= 2.1.0. To fix this, reinstall Python and use a wide "
+            "unicode build instead. You can also rebuild Python and set the "
+            "--enable-unicode=ucs4 flag.")
 
 
 @add_codes
@@ -139784,7 +139784,7 @@ LOOKUP = {
     "nomadisant": ("nomadiser",),
     "nomadisent": ("nomadiser",),
     "nomadismes": ("nomadisme",),
-    "nombres": ("nombrer",),
+    "nombres": ("nombre",),
     "nombreuse": ("nombreux",),
     "nombreuses": ("nombreux",),
     "nombrilismes": ("nombrilisme",),
@@ -416,8 +416,9 @@ cdef class Doc:
                 return self.user_hooks["vector"](self)
             if self._vector is not None:
                 return self._vector
-            elif not len(self):
-                self._vector = numpy.zeros((self.vocab.vectors_length,), dtype="f")
+            xp = get_array_module(self.vocab.vectors.data)
+            if not len(self):
+                self._vector = xp.zeros((self.vocab.vectors_length,), dtype="f")
                 return self._vector
             elif self.vocab.vectors.data.size > 0:
                 self._vector = sum(t.vector for t in self) / len(self)

@@ -426,7 +427,7 @@ cdef class Doc:
                 self._vector = self.tensor.mean(axis=0)
                 return self._vector
             else:
-                return numpy.zeros((self.vocab.vectors_length,), dtype="float32")
+                return xp.zeros((self.vocab.vectors_length,), dtype="float32")
 
         def __set__(self, value):
             self._vector = value
@@ -420,13 +420,11 @@ cdef class Span:
         """
         if "vector_norm" in self.doc.user_span_hooks:
            return self.doc.user_span_hooks["vector"](self)
-        cdef float value
-        cdef double norm = 0
+        vector = self.vector
+        xp = get_array_module(vector)
         if self._vector_norm is None:
-            norm = 0
-            for value in self.vector:
-                norm += value * value
-            self._vector_norm = sqrt(norm) if norm != 0 else 0
+            total = (vector * vector).sum()
+            self._vector_norm = xp.sqrt(total) if total != 0. else 0.
         return self._vector_norm
 
     @property
@@ -404,7 +404,9 @@ cdef class Token:
         if "vector_norm" in self.doc.user_token_hooks:
             return self.doc.user_token_hooks["vector_norm"](self)
         vector = self.vector
-        return numpy.sqrt((vector ** 2).sum())
+        xp = get_array_module(vector)
+        total = (vector ** 2).sum()
+        return xp.sqrt(total) if total != 0. else 0.
 
     @property
     def n_lefts(self):
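Note: the three hunks above share one pattern: `get_array_module` returns numpy for CPU arrays and cupy for GPU arrays, so the same norm arithmetic runs on either device. A standalone sketch of the pattern (the toy `vector` is hypothetical; assumes thinc 7.x is installed):

```python
import numpy
from thinc.neural.util import get_array_module

vector = numpy.asarray([3.0, 4.0], dtype="f")  # would be a cupy array on GPU
xp = get_array_module(vector)  # numpy here; cupy for device arrays
total = (vector ** 2).sum()
norm = xp.sqrt(total) if total != 0.0 else 0.0  # guard against the zero vector
print(norm)  # 5.0
```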
@@ -19,11 +19,11 @@ Create a Span object from the slice `doc[start : end]`.
 > ```
 
 | Name        | Type                                     | Description                                              |
-| ----------- | ---------------------------------------- | ------------------------------------------------------- |
+| ----------- | ---------------------------------------- | ----------------------------------------------------------------------------------------------------------- |
 | `doc`       | `Doc`                                    | The parent document.                                     |
 | `start`     | int                                      | The index of the first token of the span.                |
 | `end`       | int                                      | The index of the first token after the span.             |
-| `label`     | int                                      | A label to attach to the span, e.g. for named entities.  |
+| `label`     | int / unicode                            | A label to attach to the span, e.g. for named entities. As of v2.1, the label can also be a unicode string. |
 | `vector`    | `numpy.ndarray[ndim=1, dtype='float32']` | A meaning representation of the span.                    |
 | **RETURNS** | `Span`                                   | The newly constructed object.                            |
 
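Note: the unicode-label form documented above can be tried directly; a small sketch with a hypothetical example sentence and label:

```python
import spacy
from spacy.tokens import Span

nlp = spacy.blank("en")  # a tokenizer-only pipeline is enough for this demo
doc = nlp("Welcome to the Bank of China")
# As of v2.1, Span accepts a unicode label as well as an integer one.
span = Span(doc, 3, 6, label="ORG")
print(span.text, span.label_)  # Bank of China ORG
```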