mirror of https://github.com/explosion/spaCy.git
synced 2025-07-12 17:22:25 +03:00

Merge pull request #5703 from adrianeboyd/v2.3.x: Update v2.3.x from master

Commit 899f7b4460

.github/contributors/hertelm.md (vendored, new file, 106 lines)

@@ -0,0 +1,106 @@
# spaCy contributor agreement

This spaCy Contributor Agreement (**"SCA"**) is based on the
[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
The SCA applies to any contribution that you make to any product or project
managed by us (the **"project"**), and sets out the intellectual property rights
you grant to us in the contributed materials. The term **"us"** shall mean
[ExplosionAI GmbH](https://explosion.ai/legal). The term
**"you"** shall mean the person or entity identified below.

If you agree to be bound by these terms, fill in the information requested
below and include the filled-in version with your first pull request, under the
folder [`.github/contributors/`](/.github/contributors/). The name of the file
should be your GitHub username, with the extension `.md`. For example, the user
example_user would create the file `.github/contributors/example_user.md`.

Read this agreement carefully before signing. These terms and conditions
constitute a binding legal agreement.

## Contributor Agreement

1. The term "contribution" or "contributed materials" means any source code,
object code, patch, tool, sample, graphic, specification, manual,
documentation, or any other material posted or submitted by you to the project.

2. With respect to any worldwide copyrights, or copyright applications and
registrations, in your contribution:

    * you hereby assign to us joint ownership, and to the extent that such
    assignment is or becomes invalid, ineffective or unenforceable, you hereby
    grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
    royalty-free, unrestricted license to exercise all rights under those
    copyrights. This includes, at our option, the right to sublicense these same
    rights to third parties through multiple levels of sublicensees or other
    licensing arrangements;

    * you agree that each of us can do all things in relation to your
    contribution as if each of us were the sole owners, and if one of us makes
    a derivative work of your contribution, the one who makes the derivative
    work (or has it made) will be the sole owner of that derivative work;

    * you agree that you will not assert any moral rights in your contribution
    against us, our licensees or transferees;

    * you agree that we may register a copyright in your contribution and
    exercise all ownership rights associated with it; and

    * you agree that neither of us has any duty to consult with, obtain the
    consent of, pay or render an accounting to the other for any use or
    distribution of your contribution.

3. With respect to any patents you own, or that you can license without payment
to any third party, you hereby grant to us a perpetual, irrevocable,
non-exclusive, worldwide, no-charge, royalty-free license to:

    * make, have made, use, sell, offer to sell, import, and otherwise transfer
    your contribution in whole or in part, alone or in combination with or
    included in any product, work or materials arising out of the project to
    which your contribution was submitted, and

    * at our option, to sublicense these same rights to third parties through
    multiple levels of sublicensees or other licensing arrangements.

4. Except as set out above, you keep all right, title, and interest in your
contribution. The rights that you grant to us under these terms are effective
on the date you first submitted a contribution to us, even if your submission
took place before the date you sign these terms.

5. You covenant, represent, warrant and agree that:

    * Each contribution that you submit is and shall be an original work of
    authorship and you can legally grant the rights set out in this SCA;

    * to the best of your knowledge, each contribution will not violate any
    third party's copyrights, trademarks, patents, or other intellectual
    property rights; and

    * each contribution shall be in compliance with U.S. export control laws and
    other applicable export and import laws. You agree to notify us if you
    become aware of any circumstance which would make any of the foregoing
    representations inaccurate in any respect. We may publicly disclose your
    participation in the project, including the fact that you have signed the SCA.

6. This SCA is governed by the laws of the State of California and applicable
U.S. Federal law. Any choice of law rules will not apply.

7. Please place an “x” on one of the applicable statements below. Please do NOT
mark both statements:

    * [x] I am signing on behalf of myself as an individual and no other person
    or entity, including my employer, has or will have rights with respect to my
    contributions.

    * [ ] I am signing on behalf of my employer or a legal entity and I have the
    actual authority to contractually bind that entity.

## Contributor Details

| Field                          | Entry           |
| ------------------------------ | --------------- |
| Name                           | Matthias Hertel |
| Company name (if applicable)   |                 |
| Title or role (if applicable)  |                 |
| Date                           | June 29, 2020   |
| GitHub username                | hertelm         |
| Website (optional)             |                 |

.gitignore (vendored, 1 addition)

@@ -70,6 +70,7 @@ Pipfile.lock
 *.egg
 .eggs
 MANIFEST
+spacy/git_info.py

 # Temporary files
 *.~*

MANIFEST.in

@@ -6,3 +6,4 @@ include bin/spacy
 include pyproject.toml
 recursive-exclude spacy/lang *.json
 recursive-include spacy/lang *.json.gz
+recursive-include licenses *

examples/training/rehearsal.py

@@ -67,7 +67,7 @@ def main(model_name, unlabelled_loc):
     pipe_exceptions = ["ner", "trf_wordpiecer", "trf_tok2vec"]
     other_pipes = [pipe for pipe in nlp.pipe_names if pipe not in pipe_exceptions]
     sizes = compounding(1.0, 4.0, 1.001)
-    with nlp.disable_pipes(*other_pipes) and warnings.catch_warnings():
+    with nlp.disable_pipes(*other_pipes), warnings.catch_warnings():
         # show warnings for misaligned entity spans once
         warnings.filterwarnings("once", category=UserWarning, module='spacy')
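
Why this one-character fix matters: `with A and B:` first evaluates the expression `A and B`, which yields only one of the two objects, so only that object is entered as a context manager and the other never runs its setup or cleanup. The comma form enters both. A minimal standalone sketch of the difference, using `contextlib` rather than spaCy:

```python
from contextlib import contextmanager

@contextmanager
def cm(name):
    print("enter", name)
    yield
    print("exit", name)

with cm("a") and cm("b"):  # `and` returns cm("b"); cm("a") is never entered
    pass                   # prints: enter b / exit b

with cm("a"), cm("b"):     # the comma form enters (and exits) both
    pass                   # prints: enter a / enter b / exit b / exit a
```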

examples/training/train_entity_linker.py

@@ -60,12 +60,12 @@ TRAIN_DATA = sample_train_data()
     output_dir=("Optional output directory", "option", "o", Path),
     n_iter=("Number of training iterations", "option", "n", int),
 )
-def main(kb_path, vocab_path=None, output_dir=None, n_iter=50):
+def main(kb_path, vocab_path, output_dir=None, n_iter=50):
     """Create a blank model with the specified vocab, set up the pipeline and train the entity linker.
     The `vocab` should be the one used during creation of the KB."""
-    vocab = Vocab().from_disk(vocab_path)
     # create blank English model with correct vocab
-    nlp = spacy.blank("en", vocab=vocab)
+    nlp = spacy.blank("en")
+    nlp.vocab.from_disk(vocab_path)
     nlp.vocab.vectors.name = "spacy_pretrained_vectors"
     print("Created blank 'en' model with vocab from '%s'" % vocab_path)
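
For context on the vocab handling: a `KnowledgeBase` stores its entities and aliases as hashes in a specific `Vocab`, so the training script must reuse exactly the vocab the KB was created with, and it does so by filling the pipeline's own `Vocab` in place rather than swapping in a detached one. A minimal sketch of the corrected pattern (paths are hypothetical; `KnowledgeBase.load_bulk` and `EntityLinker.set_kb` follow the v2.x entity-linking API):

```python
import spacy
from spacy.kb import KnowledgeBase

kb_path = "output/kb"        # hypothetical: written by the KB-creation script
vocab_path = "output/vocab"  # hypothetical: the vocab saved alongside the KB

nlp = spacy.blank("en")
nlp.vocab.from_disk(vocab_path)      # fill the pipeline's own Vocab in place

kb = KnowledgeBase(vocab=nlp.vocab)  # KB hashes resolve against this vocab
kb.load_bulk(kb_path)

entity_linker = nlp.create_pipe("entity_linker")
entity_linker.set_kb(kb)
nlp.add_pipe(entity_linker, last=True)
```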

examples/training/train_ner.py

@@ -59,7 +59,7 @@ def main(model=None, output_dir=None, n_iter=100):
     pipe_exceptions = ["ner", "trf_wordpiecer", "trf_tok2vec"]
     other_pipes = [pipe for pipe in nlp.pipe_names if pipe not in pipe_exceptions]
     # only train NER
-    with nlp.disable_pipes(*other_pipes) and warnings.catch_warnings():
+    with nlp.disable_pipes(*other_pipes), warnings.catch_warnings():
         # show warnings for misaligned entity spans once
         warnings.filterwarnings("once", category=UserWarning, module='spacy')

examples/training/train_new_entity_type.py

@@ -99,7 +99,7 @@ def main(model=None, new_model_name="animal", output_dir=None, n_iter=30):
     pipe_exceptions = ["ner", "trf_wordpiecer", "trf_tok2vec"]
     other_pipes = [pipe for pipe in nlp.pipe_names if pipe not in pipe_exceptions]
     # only train NER
-    with nlp.disable_pipes(*other_pipes) and warnings.catch_warnings():
+    with nlp.disable_pipes(*other_pipes), warnings.catch_warnings():
         # show warnings for misaligned entity spans once
         warnings.filterwarnings("once", category=UserWarning, module='spacy')

netlify.toml

@@ -1,6 +1,8 @@
 redirects = [
   # Netlify
   {from = "https://spacy.netlify.com/*", to = "https://spacy.io/:splat", force = true},
+  # Subdomain for branches
+  {from = "https://nightly.spacy.io/*", to = "https://nightly-spacy-io.spacy.io/:splat", force = true, status = 200},
   # Old subdomains
   {from = "https://survey.spacy.io/*", to = "https://spacy.io", force = true},
   {from = "http://survey.spacy.io/*", to = "https://spacy.io", force = true},

setup.py (51 additions)

@@ -118,6 +118,55 @@ def is_source_release(path):
     return os.path.exists(os.path.join(path, "PKG-INFO"))


+# Include the git version in the build (adapted from NumPy)
+# Copyright (c) 2005-2020, NumPy Developers.
+# BSD 3-Clause license, see licenses/3rd_party_licenses.txt
+def write_git_info_py(filename="spacy/git_info.py"):
+    def _minimal_ext_cmd(cmd):
+        # construct minimal environment
+        env = {}
+        for k in ["SYSTEMROOT", "PATH", "HOME"]:
+            v = os.environ.get(k)
+            if v is not None:
+                env[k] = v
+        # LANGUAGE is used on win32
+        env["LANGUAGE"] = "C"
+        env["LANG"] = "C"
+        env["LC_ALL"] = "C"
+        out = subprocess.check_output(cmd, stderr=subprocess.STDOUT, env=env)
+        return out
+
+    git_version = "Unknown"
+    if os.path.exists(".git"):
+        try:
+            out = _minimal_ext_cmd(["git", "rev-parse", "--short", "HEAD"])
+            git_version = out.strip().decode("ascii")
+        except:
+            pass
+    elif os.path.exists(filename):
+        # must be a source distribution, use existing version file
+        try:
+            a = open(filename, "r")
+            lines = a.readlines()
+            git_version = lines[-1].split('"')[1]
+        except:
+            pass
+        finally:
+            a.close()
+
+    text = """# THIS FILE IS GENERATED FROM SPACY SETUP.PY
+#
+GIT_VERSION = "%(git_version)s"
+"""
+    a = open(filename, "w")
+    try:
+        a.write(
+            text % {"git_version": git_version,}
+        )
+    finally:
+        a.close()
+
+
 def clean(path):
     for name in MOD_NAMES:
         name = name.replace(".", "/")

@@ -140,6 +189,8 @@ def chdir(new_dir):


 def setup_package():
+    write_git_info_py()
+
     root = os.path.abspath(os.path.dirname(__file__))

     if len(sys.argv) > 1 and sys.argv[1] == "clean":
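
The net effect is that every build (or source checkout with git available) ships a generated `spacy/git_info.py` that looks roughly like this, with an illustrative short hash in place of the real one:

```python
# THIS FILE IS GENERATED FROM SPACY SETUP.PY
#
GIT_VERSION = "899f7b4"  # illustrative value; "Unknown" if git wasn't available
```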

spacy/language.py

@@ -34,6 +34,7 @@ from .lang.tag_map import TAG_MAP
 from .tokens import Doc
 from .lang.lex_attrs import LEX_ATTRS, is_stop
 from .errors import Errors, Warnings
+from .git_info import GIT_VERSION
 from . import util
 from . import about

@@ -206,6 +207,7 @@ class Language(object):
         self._meta.setdefault("email", "")
         self._meta.setdefault("url", "")
         self._meta.setdefault("license", "")
+        self._meta.setdefault("spacy_git_version", GIT_VERSION)
         self._meta["vectors"] = {
             "width": self.vocab.vectors_length,
             "vectors": len(self.vocab.vectors),
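
With the import and the `setdefault` in place, the commit spaCy was built from becomes visible through the ordinary meta API; a quick sketch (the value depends on the build):

```python
import spacy

nlp = spacy.blank("en")
# A short commit hash for git builds, "Unknown" otherwise
print(nlp.meta.get("spacy_git_version"))
```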

spacy/tests/doc/test_doc_api.py

@@ -106,10 +106,16 @@ def test_doc_api_getitem(en_tokenizer):
 )
 def test_doc_api_serialize(en_tokenizer, text):
     tokens = en_tokenizer(text)
+    tokens[0].lemma_ = "lemma"
+    tokens[0].norm_ = "norm"
+    tokens[0].ent_kb_id_ = "ent_kb_id"
     new_tokens = Doc(tokens.vocab).from_bytes(tokens.to_bytes())
     assert tokens.text == new_tokens.text
     assert [t.text for t in tokens] == [t.text for t in new_tokens]
     assert [t.orth for t in tokens] == [t.orth for t in new_tokens]
+    assert new_tokens[0].lemma_ == "lemma"
+    assert new_tokens[0].norm_ == "norm"
+    assert new_tokens[0].ent_kb_id_ == "ent_kb_id"

     new_tokens = Doc(tokens.vocab).from_bytes(
         tokens.to_bytes(exclude=["tensor"]), exclude=["tensor"]

spacy/tokens/doc.pyx

@@ -892,7 +892,7 @@ cdef class Doc:

         DOCS: https://spacy.io/api/doc#to_bytes
         """
-        array_head = [LENGTH, SPACY, LEMMA, ENT_IOB, ENT_TYPE, ENT_ID, NORM] # TODO: ENT_KB_ID ?
+        array_head = [LENGTH, SPACY, LEMMA, ENT_IOB, ENT_TYPE, ENT_ID, NORM, ENT_KB_ID]
         if self.is_tagged:
             array_head.extend([TAG, POS])
         # If doc parsed add head and dep attribute

@@ -901,6 +901,14 @@ cdef class Doc:
         # Otherwise add sent_start
         else:
             array_head.append(SENT_START)
+        strings = set()
+        for token in self:
+            strings.add(token.tag_)
+            strings.add(token.lemma_)
+            strings.add(token.dep_)
+            strings.add(token.ent_type_)
+            strings.add(token.ent_kb_id_)
+            strings.add(token.norm_)
         # Msgpack doesn't distinguish between lists and tuples, which is
         # vexing for user data. As a best guess, we *know* that within
         # keys, we must have tuples. In values we just have to hope

@@ -912,6 +920,7 @@ cdef class Doc:
             "sentiment": lambda: self.sentiment,
             "tensor": lambda: self.tensor,
             "cats": lambda: self.cats,
+            "strings": lambda: list(strings),
         }
         for key in kwargs:
             if key in serializers or key in ("user_data", "user_data_keys", "user_data_values"):

@@ -942,6 +951,7 @@ cdef class Doc:
             "sentiment": lambda b: None,
             "tensor": lambda b: None,
             "cats": lambda b: None,
+            "strings": lambda b: None,
             "user_data_keys": lambda b: None,
             "user_data_values": lambda b: None,
         }

@@ -965,6 +975,9 @@ cdef class Doc:
             self.tensor = msg["tensor"]
         if "cats" not in exclude and "cats" in msg:
             self.cats = msg["cats"]
+        if "strings" not in exclude and "strings" in msg:
+            for s in msg["strings"]:
+                self.vocab.strings.add(s)
         start = 0
         cdef const LexemeC* lex
         cdef unicode orth_
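
The practical payoff of the new `strings` payload shows up when a serialized `Doc` is loaded into a fresh `Vocab` whose `StringStore` has never seen the original strings: the hashes stored in the token arrays can now be resolved back to text. A minimal sketch (assuming spaCy v2.3; the norm value is arbitrary):

```python
import spacy
from spacy.tokens import Doc
from spacy.vocab import Vocab

nlp = spacy.blank("en")
doc = nlp("Berlin is nice")
doc[0].norm_ = "CITY_NORM"  # arbitrary custom string attribute
data = doc.to_bytes()

# Deserialize into an empty Vocab: the hash for "CITY_NORM" resolves because
# to_bytes() now ships the strings and from_bytes() re-adds them.
new_doc = Doc(Vocab()).from_bytes(data)
assert new_doc[0].norm_ == "CITY_NORM"
```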

website/docs/api/phrasematcher.md

@@ -91,7 +91,7 @@ Match a stream of documents, yielding them in turn.

 > ```python
 > from spacy.matcher import PhraseMatcher
 > matcher = PhraseMatcher(nlp.vocab)
-> for doc in matcher.pipe(texts, batch_size=50):
+> for doc in matcher.pipe(docs, batch_size=50):
 >     pass
 > ```
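
The docs fix reflects that `PhraseMatcher.pipe` consumes `Doc` objects, not raw strings; a minimal sketch of a correct call (names are illustrative):

```python
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.blank("en")
matcher = PhraseMatcher(nlp.vocab)
matcher.add("OBAMA", None, nlp.make_doc("Barack Obama"))

# Convert texts to Doc objects before streaming them through the matcher
docs = (nlp.make_doc(text) for text in ["Barack Obama was the president."])
for doc in matcher.pipe(docs, batch_size=50):
    print(matcher(doc))  # [(match_id, start, end), ...]
```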

website/docs/usage/rule-based-matching.md

@@ -122,7 +122,7 @@ for match_id, start, end in matches:
 ```

 The matcher returns a list of `(match_id, start, end)` tuples – in this case,
-`[('15578876784678163569', 0, 2)]`, which maps to the span `doc[0:2]` of our
+`[('15578876784678163569', 0, 3)]`, which maps to the span `doc[0:3]` of our
 original document. The `match_id` is the [hash value](/usage/spacy-101#vocab) of
 the string ID "HelloWorld". To get the string value, you can look up the ID in
 the [`StringStore`](/api/stringstore).
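
The corrected indices follow from the docs' three-token "HelloWorld" pattern, which matches `hello`, a punctuation token, and `world`, so the span covers tokens 0 through 2, i.e. `doc[0:3]`. Reconstructed for context (a blank English pipeline is enough for tokenization):

```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab)
pattern = [{"LOWER": "hello"}, {"IS_PUNCT": True}, {"LOWER": "world"}]
matcher.add("HelloWorld", None, pattern)

doc = nlp("Hello, world!")
print(matcher(doc))  # [(15578876784678163569, 0, 3)] -> doc[0:3] == "Hello, world"
```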