Merge remote-tracking branch 'upstream/master' into bugfix/noun-chunks

# Conflicts:
#	spacy/lang/el/syntax_iterators.py
#	spacy/lang/en/syntax_iterators.py
#	spacy/lang/fa/syntax_iterators.py
#	spacy/lang/fr/syntax_iterators.py
#	spacy/lang/id/syntax_iterators.py
#	spacy/lang/nb/syntax_iterators.py
#	spacy/lang/sv/syntax_iterators.py
svlandeg · 2020-05-21 19:19:50 +02:00 · commit 84d5b7ad0a
61 changed files with 361 additions and 149 deletions

.github/contributors/kevinlu1248.md (new file, 106 lines)

@ -0,0 +1,106 @@
# spaCy contributor agreement
This spaCy Contributor Agreement (**"SCA"**) is based on the
[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
The SCA applies to any contribution that you make to any product or project
managed by us (the **"project"**), and sets out the intellectual property rights
you grant to us in the contributed materials. The term **"us"** shall mean
[ExplosionAI GmbH](https://explosion.ai/legal). The term
**"you"** shall mean the person or entity identified below.
If you agree to be bound by these terms, fill in the information requested
below and include the filled-in version with your first pull request, under the
folder [`.github/contributors/`](/.github/contributors/). The name of the file
should be your GitHub username, with the extension `.md`. For example, the user
example_user would create the file `.github/contributors/example_user.md`.
Read this agreement carefully before signing. These terms and conditions
constitute a binding legal agreement.
## Contributor Agreement
1. The term "contribution" or "contributed materials" means any source code,
object code, patch, tool, sample, graphic, specification, manual,
documentation, or any other material posted or submitted by you to the project.
2. With respect to any worldwide copyrights, or copyright applications and
registrations, in your contribution:
* you hereby assign to us joint ownership, and to the extent that such
assignment is or becomes invalid, ineffective or unenforceable, you hereby
grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
royalty-free, unrestricted license to exercise all rights under those
copyrights. This includes, at our option, the right to sublicense these same
rights to third parties through multiple levels of sublicensees or other
licensing arrangements;
* you agree that each of us can do all things in relation to your
contribution as if each of us were the sole owners, and if one of us makes
a derivative work of your contribution, the one who makes the derivative
work (or has it made) will be the sole owner of that derivative work;
* you agree that you will not assert any moral rights in your contribution
against us, our licensees or transferees;
* you agree that we may register a copyright in your contribution and
exercise all ownership rights associated with it; and
* you agree that neither of us has any duty to consult with, obtain the
consent of, pay or render an accounting to the other for any use or
distribution of your contribution.
3. With respect to any patents you own, or that you can license without payment
to any third party, you hereby grant to us a perpetual, irrevocable,
non-exclusive, worldwide, no-charge, royalty-free license to:
* make, have made, use, sell, offer to sell, import, and otherwise transfer
your contribution in whole or in part, alone or in combination with or
included in any product, work or materials arising out of the project to
which your contribution was submitted, and
* at our option, to sublicense these same rights to third parties through
multiple levels of sublicensees or other licensing arrangements.
4. Except as set out above, you keep all right, title, and interest in your
contribution. The rights that you grant to us under these terms are effective
on the date you first submitted a contribution to us, even if your submission
took place before the date you sign these terms.
5. You covenant, represent, warrant and agree that:
* Each contribution that you submit is and shall be an original work of
authorship and you can legally grant the rights set out in this SCA;
* to the best of your knowledge, each contribution will not violate any
third party's copyrights, trademarks, patents, or other intellectual
property rights; and
* each contribution shall be in compliance with U.S. export control laws and
other applicable export and import laws. You agree to notify us if you
become aware of any circumstance which would make any of the foregoing
representations inaccurate in any respect. We may publicly disclose your
participation in the project, including the fact that you have signed the SCA.
6. This SCA is governed by the laws of the State of California and applicable
U.S. Federal law. Any choice of law rules will not apply.
7. Please place an “x” on one of the applicable statements below. Please do NOT
mark both statements:
* [x] I am signing on behalf of myself as an individual and no other person
or entity, including my employer, has or will have rights with respect to my
contributions.
* [ ] I am signing on behalf of my employer or a legal entity and I have the
actual authority to contractually bind that entity.
## Contributor Details
| Field                         | Entry       |
| ----------------------------- | ----------- |
| Name                          | Kevin Lu    |
| Company name (if applicable)  |             |
| Title or role (if applicable) | Student     |
| Date                          |             |
| GitHub username               | kevinlu1248 |
| Website (optional)            |             |


@ -187,12 +187,17 @@ def debug_data(
n_missing_vectors = sum(gold_train_data["words_missing_vectors"].values())
msg.warn(
"{} words in training data without vectors ({:0.2f}%)".format(
n_missing_vectors,
n_missing_vectors / gold_train_data["n_words"],
n_missing_vectors, n_missing_vectors / gold_train_data["n_words"],
),
)
msg.text(
"10 most common words without vectors: {}".format(_format_labels(gold_train_data["words_missing_vectors"].most_common(10), counts=True)), show=verbose,
"10 most common words without vectors: {}".format(
_format_labels(
gold_train_data["words_missing_vectors"].most_common(10),
counts=True,
)
),
show=verbose,
)
else:
msg.info("No word vectors present in the model")


@ -18,6 +18,8 @@ from wasabi import msg
from ..vectors import Vectors
from ..errors import Errors, Warnings
from ..util import ensure_path, get_lang_class, load_model, OOV_RANK
from ..lookups import Lookups
try:
import ftfy
@ -49,6 +51,7 @@ DEFAULT_OOV_PROB = -20
str,
),
model_name=("Optional name for the model meta", "option", "mn", str),
omit_extra_lookups=("Don't include extra lookups in model", "flag", "OEL", bool),
base_model=("Base model (for languages with custom tokenizers)", "option", "b", str),
)
def init_model(
@ -62,6 +65,7 @@ def init_model(
prune_vectors=-1,
vectors_name=None,
model_name=None,
omit_extra_lookups=False,
base_model=None,
):
"""
@ -95,6 +99,15 @@ def init_model(
with msg.loading("Creating model..."):
nlp = create_model(lang, lex_attrs, name=model_name, base_model=base_model)
# Create empty extra lexeme tables so the data from spacy-lookups-data
# isn't loaded if these features are accessed
if omit_extra_lookups:
nlp.vocab.lookups_extra = Lookups()
nlp.vocab.lookups_extra.add_table("lexeme_cluster")
nlp.vocab.lookups_extra.add_table("lexeme_prob")
nlp.vocab.lookups_extra.add_table("lexeme_settings")
msg.good("Successfully created model")
if vectors_loc is not None:
add_vectors(nlp, vectors_loc, truncate_vectors, prune_vectors, vectors_name)
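
The new `omit_extra_lookups` flag registers empty `lexeme_cluster`, `lexeme_prob` and `lexeme_settings` tables so that `spacy-lookups-data` is not pulled in lazily when those lexeme attributes are first accessed. A minimal sketch of doing the same thing by hand on a pipeline object, mirroring the hunk above (not taken from the diff's own tests):

```python
# Sketch: register empty extra lexeme tables so spacy-lookups-data is not
# loaded lazily when attributes such as Lexeme.cluster or Lexeme.prob are read.
from spacy.lang.en import English
from spacy.lookups import Lookups

nlp = English()
nlp.vocab.lookups_extra = Lookups()
for table in ("lexeme_cluster", "lexeme_prob", "lexeme_settings"):
    nlp.vocab.lookups_extra.add_table(table)
```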


@ -17,6 +17,7 @@ from .._ml import create_default_optimizer
from ..util import use_gpu as set_gpu
from ..gold import GoldCorpus
from ..compat import path2str
from ..lookups import Lookups
from .. import util
from .. import about
@ -57,6 +58,7 @@ from .. import about
textcat_arch=("Textcat model architecture", "option", "ta", str),
textcat_positive_label=("Textcat positive label for binary classes with two labels", "option", "tpl", str),
tag_map_path=("Location of JSON-formatted tag map", "option", "tm", Path),
omit_extra_lookups=("Don't include extra lookups in model", "flag", "OEL", bool),
verbose=("Display more information for debug", "flag", "VV", bool),
debug=("Run data diagnostics before training", "flag", "D", bool),
# fmt: on
@ -96,6 +98,7 @@ def train(
textcat_arch="bow",
textcat_positive_label=None,
tag_map_path=None,
omit_extra_lookups=False,
verbose=False,
debug=False,
):
@ -247,6 +250,14 @@ def train(
# Update tag map with provided mapping
nlp.vocab.morphology.tag_map.update(tag_map)
# Create empty extra lexeme tables so the data from spacy-lookups-data
# isn't loaded if these features are accessed
if omit_extra_lookups:
nlp.vocab.lookups_extra = Lookups()
nlp.vocab.lookups_extra.add_table("lexeme_cluster")
nlp.vocab.lookups_extra.add_table("lexeme_prob")
nlp.vocab.lookups_extra.add_table("lexeme_settings")
if vectors:
msg.text("Loading vector from model '{}'".format(vectors))
_load_vectors(nlp, vectors)


@ -8,7 +8,7 @@ def add_codes(err_cls):
class ErrorsWithCodes(err_cls):
def __getattribute__(self, code):
msg = super().__getattribute__(code)
if code.startswith('__'): # python system attributes like __class__
if code.startswith("__"): # python system attributes like __class__
return msg
else:
return "[{code}] {msg}".format(code=code, msg=msg)
@ -116,6 +116,7 @@ class Warnings(object):
" to check the alignment. Misaligned entities ('-') will be "
"ignored during training.")
@add_codes
class Errors(object):
E001 = ("No component '{name}' found in pipeline. Available names: {opts}")
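
Only the quoting style changes in the hunk above, but the `add_codes` pattern it touches is worth making concrete: attribute access on the wrapped error container returns the message prefixed with its code. A minimal sketch, assuming (as in spaCy's `errors.py`) that the decorator returns an instance of the wrapped subclass:

```python
def add_codes(err_cls):
    """Add error codes to string messages via attribute access (sketch)."""
    class ErrorsWithCodes(err_cls):
        def __getattribute__(self, code):
            msg = super().__getattribute__(code)
            if code.startswith("__"):  # python system attributes like __class__
                return msg
            return "[{code}] {msg}".format(code=code, msg=msg)
    return ErrorsWithCodes()

@add_codes
class Errors(object):
    E001 = "No component '{name}' found in pipeline. Available names: {opts}"

print(Errors.E001)  # -> "[E001] No component '{name}' found in pipeline. ..."
```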


@ -9,7 +9,6 @@ from .morph_rules import MORPH_RULES
from ..tag_map import TAG_MAP
from ..tokenizer_exceptions import BASE_EXCEPTIONS
from ..norm_exceptions import BASE_NORMS
from ...language import Language
from ...attrs import LANG
from ...util import update_exc


@ -47,7 +47,7 @@ kleines kommen kommt können könnt konnte könnte konnten kurz
lang lange leicht leider lieber los
machen macht machte mag magst man manche manchem manchen mancher manches mehr
mein meine meinem meinen meiner meines mich mir mit mittel mochte möchte mochten
mein meine meinem meinen meiner meines mich mir mit mittel mochte möchte mochten
mögen möglich mögt morgen muss muß müssen musst müsst musste mussten
na nach nachdem nahm natürlich neben nein neue neuen neun neunte neunten neunter


@ -5,7 +5,7 @@ from ...symbols import NOUN, PROPN, PRON
from ...errors import Errors
def noun_chunks(obj):
def noun_chunks(doclike):
"""
Detect base noun phrases from a dependency parse. Works on both Doc and Span.
"""
@ -28,7 +28,7 @@ def noun_chunks(obj):
"og",
"app",
]
doc = obj.doc # Ensure works on both Doc and Span.
doc = doclike.doc # Ensure works on both Doc and Span.
if not doc.is_parsed:
raise ValueError(Errors.E029)
@ -38,7 +38,7 @@ def noun_chunks(obj):
close_app = doc.vocab.strings.add("nk")
rbracket = 0
for i, word in enumerate(obj):
for i, word in enumerate(doclike):
if i < rbracket:
continue
if word.pos in (NOUN, PROPN, PRON) and word.dep in np_deps:
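
The `obj` → `doclike` rename here and in the hunks that follow repeats across all the language-specific syntax iterators: the argument may be a `Doc` or a `Span`, and the iterator always resolves the underlying `Doc` via `doclike.doc`. A usage sketch, assuming the small English model is installed (any parsed pipeline would do):

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumption: this model is installed
doc = nlp("The quick brown fox jumps over the lazy dog.")
span = doc[4:]  # a Span works too after this change

print([chunk.text for chunk in doc.noun_chunks])
print([chunk.text for chunk in span.noun_chunks])
```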


@ -5,7 +5,7 @@ from ...symbols import NOUN, PROPN, PRON
from ...errors import Errors
def noun_chunks(obj):
def noun_chunks(doclike):
"""
Detect base noun phrases. Works on both Doc and Span.
"""
@ -14,7 +14,7 @@ def noun_chunks(obj):
# obj tag corrects some DEP tagger mistakes.
# Further improvement of the models will eliminate the need for this tag.
labels = ["nsubj", "obj", "iobj", "appos", "ROOT", "obl"]
doc = obj.doc # Ensure works on both Doc and Span.
doc = doclike.doc # Ensure works on both Doc and Span.
if not doc.is_parsed:
raise ValueError(Errors.E029)
@ -24,7 +24,7 @@ def noun_chunks(obj):
nmod = doc.vocab.strings.add("nmod")
np_label = doc.vocab.strings.add("NP")
prev_end = -1
for i, word in enumerate(obj):
for i, word in enumerate(doclike):
if word.pos not in (NOUN, PROPN, PRON):
continue
# Prevent nested chunks from being produced


@ -5,7 +5,7 @@ from ...symbols import NOUN, PROPN, PRON
from ...errors import Errors
def noun_chunks(obj):
def noun_chunks(doclike):
"""
Detect base noun phrases from a dependency parse. Works on both Doc and Span.
"""
@ -20,7 +20,7 @@ def noun_chunks(obj):
"attr",
"ROOT",
]
doc = obj.doc # Ensure works on both Doc and Span.
doc = doclike.doc # Ensure works on both Doc and Span.
if not doc.is_parsed:
raise ValueError(Errors.E029)
@ -29,7 +29,7 @@ def noun_chunks(obj):
conj = doc.vocab.strings.add("conj")
np_label = doc.vocab.strings.add("NP")
prev_end = -1
for i, word in enumerate(obj):
for i, word in enumerate(doclike):
if word.pos not in (NOUN, PROPN, PRON):
continue
# Prevent nested chunks from being produced


@ -197,7 +197,7 @@ for word in ["who", "what", "when", "where", "why", "how", "there", "that"]:
_exc[orth + "d"] = [
{ORTH: orth, LEMMA: word, NORM: word},
{ORTH: "d", NORM: "'d"}
{ORTH: "d", NORM: "'d"},
]
_exc[orth + "'d've"] = [


@ -5,7 +5,6 @@ from ..char_classes import LIST_PUNCT, LIST_ELLIPSES, LIST_QUOTES
from ..char_classes import LIST_ICONS, CURRENCY, LIST_UNITS, PUNCT
from ..char_classes import CONCAT_QUOTES, ALPHA_LOWER, ALPHA_UPPER, ALPHA
from ..char_classes import merge_chars
from ..punctuation import TOKENIZER_PREFIXES as BASE_TOKENIZER_PREFIXES
_list_units = [u for u in LIST_UNITS if u != "%"]


@ -5,8 +5,8 @@ from ...symbols import NOUN, PROPN, PRON, VERB, AUX
from ...errors import Errors
def noun_chunks(obj):
doc = obj.doc
def noun_chunks(doclike):
doc = doclike.doc
if not doc.is_parsed:
raise ValueError(Errors.E029)
@ -21,7 +21,7 @@ def noun_chunks(obj):
np_right_deps = [doc.vocab.strings.add(label) for label in right_labels]
stop_deps = [doc.vocab.strings.add(label) for label in stop_labels]
token = doc[0]
while token and token.i < len(doc):
while token and token.i < len(doclike):
if token.pos in [PROPN, NOUN, PRON]:
left, right = noun_bounds(
doc, token, np_left_deps, np_right_deps, stop_deps


@ -5,7 +5,7 @@ from ...symbols import NOUN, PROPN, PRON
from ...errors import Errors
def noun_chunks(obj):
def noun_chunks(doclike):
"""
Detect base noun phrases from a dependency parse. Works on both Doc and Span.
"""
@ -20,7 +20,7 @@ def noun_chunks(obj):
"attr",
"ROOT",
]
doc = obj.doc # Ensure works on both Doc and Span.
doc = doclike.doc # Ensure works on both Doc and Span.
if not doc.is_parsed:
raise ValueError(Errors.E029)
@ -29,7 +29,7 @@ def noun_chunks(obj):
conj = doc.vocab.strings.add("conj")
np_label = doc.vocab.strings.add("NP")
prev_end = -1
for i, word in enumerate(obj):
for i, word in enumerate(doclike):
if word.pos not in (NOUN, PROPN, PRON):
continue
# Prevent nested chunks from being produced


@ -5,7 +5,7 @@ from ...symbols import NOUN, PROPN, PRON
from ...errors import Errors
def noun_chunks(obj):
def noun_chunks(doclike):
"""
Detect base noun phrases from a dependency parse. Works on both Doc and Span.
"""
@ -19,7 +19,7 @@ def noun_chunks(obj):
"nmod",
"nmod:poss",
]
doc = obj.doc # Ensure works on both Doc and Span.
doc = doclike.doc # Ensure works on both Doc and Span.
if not doc.is_parsed:
raise ValueError(Errors.E029)
@ -28,7 +28,7 @@ def noun_chunks(obj):
conj = doc.vocab.strings.add("conj")
np_label = doc.vocab.strings.add("NP")
prev_end = -1
for i, word in enumerate(obj):
for i, word in enumerate(doclike):
if word.pos not in (NOUN, PROPN, PRON):
continue
# Prevent nested chunks from being produced


@ -461,5 +461,5 @@ _regular_exp.append(URL_PATTERN)
TOKENIZER_EXCEPTIONS = _exc
TOKEN_MATCH = re.compile(
"(?iu)" + "|".join("(?:{})".format(m) for m in _regular_exp)
"(?iu)" + "|".join("(?:{})".format(m) for m in _regular_exp)
).match


@ -3,7 +3,7 @@ from __future__ import unicode_literals
STOP_WORDS = set(
"""
એમ
એમ
રહ
@ -24,7 +24,7 @@ STOP_WORDS = set(
મન
મન
મણ
મન
મન
અન
અહ
@ -33,12 +33,12 @@ STOP_WORDS = set(
પણ
@ -69,12 +69,12 @@ STOP_WORDS = set(
કર
કર
કર
કર
રબ
રબ
તથ


@ -1,11 +1,12 @@
# coding: utf8
from __future__ import unicode_literals
from .stop_words import STOP_WORDS
from .lex_attrs import LEX_ATTRS
from .tag_map import TAG_MAP
from ...attrs import LANG
from ...language import Language
from ...tokens import Doc
class ArmenianDefaults(Language.Defaults):


@ -1,6 +1,6 @@
# coding: utf8
from __future__ import unicode_literals
"""
Example sentences to test spaCy and its language models.
>>> from spacy.lang.hy.examples import sentences


@ -1,3 +1,4 @@
# coding: utf8
from __future__ import unicode_literals
from ...attrs import LIKE_NUM


@ -1,6 +1,6 @@
# coding: utf8
from __future__ import unicode_literals
STOP_WORDS = set(
"""
նա
@ -105,6 +105,6 @@ STOP_WORDS = set(
յուրաքանչյուր
այս
մեջ
թ
թ
""".split()
)


@ -1,7 +1,7 @@
# coding: utf8
from __future__ import unicode_literals
from ...symbols import POS, SYM, ADJ, NUM, DET, ADV, ADP, X, VERB, NOUN
from ...symbols import POS, ADJ, NUM, DET, ADV, ADP, X, VERB, NOUN
from ...symbols import PROPN, PART, INTJ, PRON, SCONJ, AUX, CCONJ
TAG_MAP = {
@ -716,7 +716,7 @@ TAG_MAP = {
POS: NOUN,
"Animacy": "Nhum",
"Case": "Dat",
"Number": "Coll",
# "Number": "Coll",
"Number": "Sing",
"Person": "1",
},
@ -815,7 +815,7 @@ TAG_MAP = {
"Animacy": "Nhum",
"Case": "Nom",
"Definite": "Def",
"Number": "Plur",
# "Number": "Plur",
"Number": "Sing",
"Poss": "Yes",
},
@ -880,7 +880,7 @@ TAG_MAP = {
POS: NOUN,
"Animacy": "Nhum",
"Case": "Nom",
"Number": "Plur",
# "Number": "Plur",
"Number": "Sing",
"Person": "2",
},
@ -1223,9 +1223,9 @@ TAG_MAP = {
"PRON_Case=Nom|Number=Sing|Number=Plur|Person=3|Person=1|PronType=Emp": {
POS: PRON,
"Case": "Nom",
"Number": "Sing",
# "Number": "Sing",
"Number": "Plur",
"Person": "3",
# "Person": "3",
"Person": "1",
"PronType": "Emp",
},


@ -5,7 +5,7 @@ from ...symbols import NOUN, PROPN, PRON
from ...errors import Errors
def noun_chunks(obj):
def noun_chunks(doclike):
"""
Detect base noun phrases from a dependency parse. Works on both Doc and Span.
"""
@ -19,7 +19,7 @@ def noun_chunks(obj):
"nmod",
"nmod:poss",
]
doc = obj.doc # Ensure works on both Doc and Span.
doc = doclike.doc # Ensure works on both Doc and Span.
if not doc.is_parsed:
raise ValueError(Errors.E029)
@ -28,7 +28,7 @@ def noun_chunks(obj):
conj = doc.vocab.strings.add("conj")
np_label = doc.vocab.strings.add("NP")
prev_end = -1
for i, word in enumerate(obj):
for i, word in enumerate(doclike):
if word.pos not in (NOUN, PROPN, PRON):
continue
# Prevent nested chunks from being produced


@ -55,7 +55,7 @@ _num_words = [
"തൊണ്ണൂറ് ",
"നുറ് ",
"ആയിരം ",
"പത്തുലക്ഷം"
"പത്തുലക്ഷം",
]


@ -3,7 +3,6 @@ from __future__ import unicode_literals
STOP_WORDS = set(
"""
അത
ഇത


@ -5,7 +5,7 @@ from ...symbols import NOUN, PROPN, PRON
from ...errors import Errors
def noun_chunks(obj):
def noun_chunks(doclike):
"""
Detect base noun phrases from a dependency parse. Works on both Doc and Span.
"""
@ -19,7 +19,7 @@ def noun_chunks(obj):
"nmod",
"nmod:poss",
]
doc = obj.doc # Ensure works on both Doc and Span.
doc = doclike.doc # Ensure works on both Doc and Span.
if not doc.is_parsed:
raise ValueError(Errors.E029)
@ -28,7 +28,7 @@ def noun_chunks(obj):
conj = doc.vocab.strings.add("conj")
np_label = doc.vocab.strings.add("NP")
prev_end = -1
for i, word in enumerate(obj):
for i, word in enumerate(doclike):
if word.pos not in (NOUN, PROPN, PRON):
continue
# Prevent nested chunks from being produced


@ -12,7 +12,7 @@ from ..tokenizer_exceptions import BASE_EXCEPTIONS
from ..norm_exceptions import BASE_NORMS
from ...language import Language
from ...attrs import LANG, NORM
from ...util import update_exc, add_lookups
from ...util import add_lookups
from ...lookups import Lookups


@ -3,7 +3,6 @@ from __future__ import unicode_literals
from ...lemmatizer import Lemmatizer
from ...parts_of_speech import NAMES
from ...errors import Errors
class PolishLemmatizer(Lemmatizer):


@ -8,7 +8,9 @@ from ..punctuation import TOKENIZER_PREFIXES as BASE_TOKENIZER_PREFIXES
_quotes = CONCAT_QUOTES.replace("'", "")
_prefixes = _prefixes = [r"(długo|krótko|jedno|dwu|trzy|cztero)-"] + BASE_TOKENIZER_PREFIXES
_prefixes = _prefixes = [
r"(długo|krótko|jedno|dwu|trzy|cztero)-"
] + BASE_TOKENIZER_PREFIXES
_infixes = (
LIST_ELLIPSES


@ -40,7 +40,7 @@ _num_words = [
"miljard",
"biljon",
"biljard",
"kvadriljon"
"kvadriljon",
]


@ -5,7 +5,7 @@ from ...symbols import NOUN, PROPN, PRON
from ...errors import Errors
def noun_chunks(obj):
def noun_chunks(doclike):
"""
Detect base noun phrases from a dependency parse. Works on both Doc and Span.
"""
@ -20,7 +20,7 @@ def noun_chunks(obj):
"nmod",
"nmod:poss",
]
doc = obj.doc # Ensure works on both Doc and Span.
doc = doclike.doc # Ensure works on both Doc and Span.
if not doc.is_parsed:
raise ValueError(Errors.E029)
@ -29,7 +29,7 @@ def noun_chunks(obj):
conj = doc.vocab.strings.add("conj")
np_label = doc.vocab.strings.add("NP")
prev_end = -1
for i, word in enumerate(obj):
for i, word in enumerate(doclike):
if word.pos not in (NOUN, PROPN, PRON):
continue
# Prevent nested chunks from being produced


@ -38,7 +38,6 @@ TAG_MAP = {
"NNPC": {POS: PROPN},
"NNC": {POS: NOUN},
"PSP": {POS: ADP},
".": {POS: PUNCT},
",": {POS: PUNCT},
"-LRB-": {POS: PUNCT},


@ -109,6 +109,7 @@ class ChineseTokenizer(DummyTokenizer):
if reset:
try:
import pkuseg
self.pkuseg_seg.preprocesser = pkuseg.Preprocesser(None)
except ImportError:
if self.use_pkuseg:
@ -118,7 +119,7 @@ class ChineseTokenizer(DummyTokenizer):
)
raise ImportError(msg)
for word in words:
self.pkuseg_seg.preprocesser.insert(word.strip(), '')
self.pkuseg_seg.preprocesser.insert(word.strip(), "")
def _get_config(self):
config = OrderedDict(
@ -168,21 +169,16 @@ class ChineseTokenizer(DummyTokenizer):
return util.to_bytes(serializers, [])
def from_bytes(self, data, **kwargs):
pkuseg_features_b = b""
pkuseg_weights_b = b""
pkuseg_processors_data = None
pkuseg_data = {"features_b": b"", "weights_b": b"", "processors_data": None}
def deserialize_pkuseg_features(b):
nonlocal pkuseg_features_b
pkuseg_features_b = b
pkuseg_data["features_b"] = b
def deserialize_pkuseg_weights(b):
nonlocal pkuseg_weights_b
pkuseg_weights_b = b
pkuseg_data["weights_b"] = b
def deserialize_pkuseg_processors(b):
nonlocal pkuseg_processors_data
pkuseg_processors_data = srsly.msgpack_loads(b)
pkuseg_data["processors_data"] = srsly.msgpack_loads(b)
deserializers = OrderedDict(
(
@ -194,13 +190,13 @@ class ChineseTokenizer(DummyTokenizer):
)
util.from_bytes(data, deserializers, [])
if pkuseg_features_b and pkuseg_weights_b:
if pkuseg_data["features_b"] and pkuseg_data["weights_b"]:
with tempfile.TemporaryDirectory() as tempdir:
tempdir = Path(tempdir)
with open(tempdir / "features.pkl", "wb") as fileh:
fileh.write(pkuseg_features_b)
fileh.write(pkuseg_data["features_b"])
with open(tempdir / "weights.npz", "wb") as fileh:
fileh.write(pkuseg_weights_b)
fileh.write(pkuseg_data["weights_b"])
try:
import pkuseg
except ImportError:
@ -209,13 +205,9 @@ class ChineseTokenizer(DummyTokenizer):
+ _PKUSEG_INSTALL_MSG
)
self.pkuseg_seg = pkuseg.pkuseg(str(tempdir))
if pkuseg_processors_data:
(
user_dict,
do_process,
common_words,
other_words,
) = pkuseg_processors_data
if pkuseg_data["processors_data"]:
processors_data = pkuseg_data["processors_data"]
(user_dict, do_process, common_words, other_words) = processors_data
self.pkuseg_seg.preprocesser = pkuseg.Preprocesser(user_dict)
self.pkuseg_seg.postprocesser.do_process = do_process
self.pkuseg_seg.postprocesser.common_words = set(common_words)
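
The `from_bytes` rewrite above replaces three `nonlocal` variables with a single `pkuseg_data` dict that the deserializer callbacks mutate in place. A standalone sketch of that closure-over-dict pattern:

```python
# Callbacks mutate a shared dict instead of rebinding outer names, so no
# `nonlocal` declarations are needed.
pkuseg_data = {"features_b": b"", "weights_b": b"", "processors_data": None}

def deserialize_pkuseg_features(b):
    pkuseg_data["features_b"] = b  # mutation only, not rebinding

deserialize_pkuseg_features(b"\x00\x01")
assert pkuseg_data["features_b"] == b"\x00\x01"
```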


@ -79,7 +79,9 @@ class BaseDefaults(object):
lookups=lookups,
)
vocab.lex_attr_getters[NORM] = util.add_lookups(
vocab.lex_attr_getters.get(NORM, LEX_ATTRS[NORM]), BASE_NORMS, vocab.lookups.get_table("lexeme_norm")
vocab.lex_attr_getters.get(NORM, LEX_ATTRS[NORM]),
BASE_NORMS,
vocab.lookups.get_table("lexeme_norm"),
)
for tag_str, exc in cls.morph_rules.items():
for orth_str, attrs in exc.items():
@ -974,7 +976,9 @@ class Language(object):
serializers = OrderedDict()
serializers["vocab"] = lambda: self.vocab.to_bytes()
serializers["tokenizer"] = lambda: self.tokenizer.to_bytes(exclude=["vocab"])
serializers["meta.json"] = lambda: srsly.json_dumps(OrderedDict(sorted(self.meta.items())))
serializers["meta.json"] = lambda: srsly.json_dumps(
OrderedDict(sorted(self.meta.items()))
)
for name, proc in self.pipeline:
if name in exclude:
continue


@ -6,6 +6,7 @@ from collections import OrderedDict
from .symbols import NOUN, VERB, ADJ, PUNCT, PROPN
from .errors import Errors
from .lookups import Lookups
from .parts_of_speech import NAMES as UPOS_NAMES
class Lemmatizer(object):
@ -43,17 +44,11 @@ class Lemmatizer(object):
lookup_table = self.lookups.get_table("lemma_lookup", {})
if "lemma_rules" not in self.lookups:
return [lookup_table.get(string, string)]
if univ_pos in (NOUN, "NOUN", "noun"):
univ_pos = "noun"
elif univ_pos in (VERB, "VERB", "verb"):
univ_pos = "verb"
elif univ_pos in (ADJ, "ADJ", "adj"):
univ_pos = "adj"
elif univ_pos in (PUNCT, "PUNCT", "punct"):
univ_pos = "punct"
elif univ_pos in (PROPN, "PROPN"):
return [string]
else:
if isinstance(univ_pos, int):
univ_pos = UPOS_NAMES.get(univ_pos, "X")
univ_pos = univ_pos.lower()
if univ_pos in ("", "eol", "space"):
return [string.lower()]
# See Issue #435 for example of where this logic is required.
if self.is_base_form(univ_pos, morphology):
@ -61,6 +56,11 @@ class Lemmatizer(object):
index_table = self.lookups.get_table("lemma_index", {})
exc_table = self.lookups.get_table("lemma_exc", {})
rules_table = self.lookups.get_table("lemma_rules", {})
if not any((index_table.get(univ_pos), exc_table.get(univ_pos), rules_table.get(univ_pos))):
if univ_pos == "propn":
return [string]
else:
return [string.lower()]
lemmas = self.lemmatize(
string,
index_table.get(univ_pos, {}),
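
The rewritten `__call__` above normalizes the part of speech once (integer symbol IDs go through `UPOS_NAMES`, strings are lowercased) and bails out early when no lemma tables exist for that POS. A small sketch with hand-built toy tables, assuming the spaCy 2.x `Lemmatizer(lookups)` constructor:

```python
from spacy.lemmatizer import Lemmatizer
from spacy.lookups import Lookups
from spacy.parts_of_speech import IDS as POS_IDS

# Toy tables for illustration only; these are not spaCy's real lemma data.
lookups = Lookups()
lookups.add_table("lemma_index", {"verb": {}})
lookups.add_table("lemma_exc", {"verb": {"running": ["run"]}})
lookups.add_table("lemma_rules", {"verb": [["ing", ""]]})
lemmatizer = Lemmatizer(lookups)

print(lemmatizer("running", "VERB"))           # string POS
print(lemmatizer("running", POS_IDS["VERB"]))  # integer symbol ID
# both calls should print ['run'] with these toy tables
```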


@ -213,28 +213,28 @@ cdef class Matcher:
else:
yield doc
def __call__(self, object doc_or_span):
def __call__(self, object doclike):
"""Find all token sequences matching the supplied pattern.
doc_or_span (Doc or Span): The document to match over.
doclike (Doc or Span): The document to match over.
RETURNS (list): A list of `(key, start, end)` tuples,
describing the matches. A match tuple describes a span
`doc[start:end]`. The `label_id` and `key` are both integers.
"""
if isinstance(doc_or_span, Doc):
doc = doc_or_span
if isinstance(doclike, Doc):
doc = doclike
length = len(doc)
elif isinstance(doc_or_span, Span):
doc = doc_or_span.doc
length = doc_or_span.end - doc_or_span.start
elif isinstance(doclike, Span):
doc = doclike.doc
length = doclike.end - doclike.start
else:
raise ValueError(Errors.E195.format(good="Doc or Span", got=type(doc_or_span).__name__))
raise ValueError(Errors.E195.format(good="Doc or Span", got=type(doclike).__name__))
if len(set([LEMMA, POS, TAG]) & self._seen_attrs) > 0 \
and not doc.is_tagged:
raise ValueError(Errors.E155.format())
if DEP in self._seen_attrs and not doc.is_parsed:
raise ValueError(Errors.E156.format())
matches = find_matches(&self.patterns[0], self.patterns.size(), doc_or_span, length,
matches = find_matches(&self.patterns[0], self.patterns.size(), doclike, length,
extensions=self._extensions, predicates=self._extra_predicates)
for i, (key, start, end) in enumerate(matches):
on_match = self._callbacks.get(key, None)
@ -257,7 +257,7 @@ def unpickle_matcher(vocab, patterns, callbacks):
return matcher
cdef find_matches(TokenPatternC** patterns, int n, object doc_or_span, int length, extensions=None, predicates=tuple()):
cdef find_matches(TokenPatternC** patterns, int n, object doclike, int length, extensions=None, predicates=tuple()):
"""Find matches in a doc, with a compiled array of patterns. Matches are
returned as a list of (id, start, end) tuples.
@ -286,7 +286,7 @@ cdef find_matches(TokenPatternC** patterns, int n, object doc_or_span, int lengt
else:
nr_extra_attr = 0
extra_attr_values = <attr_t*>mem.alloc(length, sizeof(attr_t))
for i, token in enumerate(doc_or_span):
for i, token in enumerate(doclike):
for name, index in extensions.items():
value = token._.get(name)
if isinstance(value, basestring):
@ -298,7 +298,7 @@ cdef find_matches(TokenPatternC** patterns, int n, object doc_or_span, int lengt
for j in range(n):
states.push_back(PatternStateC(patterns[j], i, 0))
transition_states(states, matches, predicate_cache,
doc_or_span[i], extra_attr_values, predicates)
doclike[i], extra_attr_values, predicates)
extra_attr_values += nr_extra_attr
predicate_cache += len(predicates)
# Handle matches that end in 0-width patterns
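
The `Matcher` changes above are a rename (`doc_or_span` → `doclike`) plus formatting, but they document the same Doc-or-Span contract as the syntax iterators. A usage sketch with a blank English pipeline and the spaCy 2.x `add()` signature; the pattern uses only `LOWER`, so no tagger or parser is required:

```python
from spacy.lang.en import English
from spacy.matcher import Matcher

nlp = English()
doc = nlp("the quick brown fox jumps over the lazy dog")
matcher = Matcher(nlp.vocab)
matcher.add("LAZY_DOG", None, [{"LOWER": "lazy"}, {"LOWER": "dog"}])

print(matcher(doc))      # matches over the whole Doc
print(matcher(doc[5:]))  # a Span is accepted as well
```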


@ -112,6 +112,7 @@ def ga_tokenizer():
def gu_tokenizer():
return get_lang_class("gu").Defaults.create_tokenizer()
@pytest.fixture(scope="session")
def he_tokenizer():
return get_lang_class("he").Defaults.create_tokenizer()
@ -246,7 +247,9 @@ def yo_tokenizer():
@pytest.fixture(scope="session")
def zh_tokenizer_char():
return get_lang_class("zh").Defaults.create_tokenizer(config={"use_jieba": False, "use_pkuseg": False})
return get_lang_class("zh").Defaults.create_tokenizer(
config={"use_jieba": False, "use_pkuseg": False}
)
@pytest.fixture(scope="session")
@ -258,7 +261,9 @@ def zh_tokenizer_jieba():
@pytest.fixture(scope="session")
def zh_tokenizer_pkuseg():
pytest.importorskip("pkuseg")
return get_lang_class("zh").Defaults.create_tokenizer(config={"pkuseg_model": "default", "use_jieba": False, "use_pkuseg": True})
return get_lang_class("zh").Defaults.create_tokenizer(
config={"pkuseg_model": "default", "use_jieba": False, "use_pkuseg": True}
)
@pytest.fixture(scope="session")


@ -50,7 +50,9 @@ def test_create_from_words_and_text(vocab):
assert [t.text for t in doc] == [" ", "'", "dogs", "'", "\n\n", "run", " "]
assert [t.whitespace_ for t in doc] == ["", "", "", "", "", " ", ""]
assert doc.text == text
assert [t.text for t in doc if not t.text.isspace()] == [word for word in words if not word.isspace()]
assert [t.text for t in doc if not t.text.isspace()] == [
word for word in words if not word.isspace()
]
# partial whitespace in words
words = [" ", "'", "dogs", "'", "\n\n", "run", " "]
@ -60,7 +62,9 @@ def test_create_from_words_and_text(vocab):
assert [t.text for t in doc] == [" ", "'", "dogs", "'", "\n\n", "run", " "]
assert [t.whitespace_ for t in doc] == ["", "", "", "", "", " ", ""]
assert doc.text == text
assert [t.text for t in doc if not t.text.isspace()] == [word for word in words if not word.isspace()]
assert [t.text for t in doc if not t.text.isspace()] == [
word for word in words if not word.isspace()
]
# non-standard whitespace tokens
words = [" ", " ", "'", "dogs", "'", "\n\n", "run"]
@ -70,7 +74,9 @@ def test_create_from_words_and_text(vocab):
assert [t.text for t in doc] == [" ", "'", "dogs", "'", "\n\n", "run", " "]
assert [t.whitespace_ for t in doc] == ["", "", "", "", "", " ", ""]
assert doc.text == text
assert [t.text for t in doc if not t.text.isspace()] == [word for word in words if not word.isspace()]
assert [t.text for t in doc if not t.text.isspace()] == [
word for word in words if not word.isspace()
]
# mismatch between words and text
with pytest.raises(ValueError):


@ -181,6 +181,7 @@ def test_is_sent_start(en_tokenizer):
doc.is_parsed = True
assert len(list(doc.sents)) == 2
def test_is_sent_end(en_tokenizer):
doc = en_tokenizer("This is a sentence. This is another.")
assert doc[4].is_sent_end is None
@ -213,6 +214,7 @@ def test_token0_has_sent_start_true():
assert doc[1].is_sent_start is None
assert not doc.is_sentenced
def test_tokenlast_has_sent_end_true():
doc = Doc(Vocab(), words=["hello", "world"])
assert doc[0].is_sent_end is None


@ -5,9 +5,9 @@ import pytest
def test_noun_chunks_is_parsed_de(de_tokenizer):
"""Test that noun_chunks raises Value Error for 'de' language if Doc is not parsed.
"""Test that noun_chunks raises Value Error for 'de' language if Doc is not parsed.
To check this test, we're constructing a Doc
with a new Vocab here and forcing is_parsed to 'False'
with a new Vocab here and forcing is_parsed to 'False'
to make sure the noun chunks don't run.
"""
doc = de_tokenizer("Er lag auf seinem")


@ -5,9 +5,9 @@ import pytest
def test_noun_chunks_is_parsed_el(el_tokenizer):
"""Test that noun_chunks raises Value Error for 'el' language if Doc is not parsed.
"""Test that noun_chunks raises Value Error for 'el' language if Doc is not parsed.
To check this test, we're constructing a Doc
with a new Vocab here and forcing is_parsed to 'False'
with a new Vocab here and forcing is_parsed to 'False'
to make sure the noun chunks don't run.
"""
doc = el_tokenizer("είναι χώρα της νοτιοανατολικής")


@ -13,9 +13,9 @@ from ...util import get_doc
def test_noun_chunks_is_parsed(en_tokenizer):
"""Test that noun_chunks raises Value Error for 'en' language if Doc is not parsed.
"""Test that noun_chunks raises Value Error for 'en' language if Doc is not parsed.
To check this test, we're constructing a Doc
with a new Vocab here and forcing is_parsed to 'False'
with a new Vocab here and forcing is_parsed to 'False'
to make sure the noun chunks don't run.
"""
doc = en_tokenizer("This is a sentence")


@ -5,9 +5,9 @@ import pytest
def test_noun_chunks_is_parsed_es(es_tokenizer):
"""Test that noun_chunks raises Value Error for 'es' language if Doc is not parsed.
"""Test that noun_chunks raises Value Error for 'es' language if Doc is not parsed.
To check this test, we're constructing a Doc
with a new Vocab here and forcing is_parsed to 'False'
with a new Vocab here and forcing is_parsed to 'False'
to make sure the noun chunks don't run.
"""
doc = es_tokenizer("en Oxford este verano")


@ -62,4 +62,4 @@ def test_lex_attrs_like_number(es_tokenizer, text, match):
@pytest.mark.parametrize("word", ["once"])
def test_es_lex_attrs_capitals(word):
assert like_num(word)
assert like_num(word.upper())
assert like_num(word.upper())


@ -5,9 +5,9 @@ import pytest
def test_noun_chunks_is_parsed_fr(fr_tokenizer):
"""Test that noun_chunks raises Value Error for 'fr' language if Doc is not parsed.
"""Test that noun_chunks raises Value Error for 'fr' language if Doc is not parsed.
To check this test, we're constructing a Doc
with a new Vocab here and forcing is_parsed to 'False'
with a new Vocab here and forcing is_parsed to 'False'
to make sure the noun chunks don't run.
"""
doc = fr_tokenizer("trouver des travaux antérieurs")


@ -3,17 +3,16 @@ from __future__ import unicode_literals
import pytest
def test_gu_tokenizer_handlers_long_text(gu_tokenizer):
text = """પશ્ચિમ ભારતમાં આવેલું ગુજરાત રાજ્ય જે વ્યક્તિઓની માતૃભૂમિ છે"""
tokens = gu_tokenizer(text)
assert len(tokens) == 9
@pytest.mark.parametrize(
"text,length",
[
("ગુજરાતીઓ ખાવાના શોખીન માનવામાં આવે છે", 6),
("ખેતરની ખેડ કરવામાં આવે છે.", 5),
],
[("ગુજરાતીઓ ખાવાના શોખીન માનવામાં આવે છે", 6), ("ખેતરની ખેડ કરવામાં આવે છે.", 5)],
)
def test_gu_tokenizer_handles_cnts(gu_tokenizer, text, length):
tokens = gu_tokenizer(text)


@ -1,3 +1,4 @@
# coding: utf8
from __future__ import unicode_literals
import pytest


@ -1,3 +1,4 @@
# coding: utf8
from __future__ import unicode_literals
import pytest


@ -5,9 +5,9 @@ import pytest
def test_noun_chunks_is_parsed_id(id_tokenizer):
"""Test that noun_chunks raises Value Error for 'id' language if Doc is not parsed.
"""Test that noun_chunks raises Value Error for 'id' language if Doc is not parsed.
To check this test, we're constructing a Doc
with a new Vocab here and forcing is_parsed to 'False'
with a new Vocab here and forcing is_parsed to 'False'
to make sure the noun chunks don't run.
"""
doc = id_tokenizer("sebelas")


@ -10,7 +10,16 @@ def test_ml_tokenizer_handles_long_text(ml_tokenizer):
assert len(tokens) == 5
@pytest.mark.parametrize("text,length", [("എന്നാൽ അച്ചടിയുടെ ആവിർഭാവം ലിപിയിൽ കാര്യമായ മാറ്റങ്ങൾ വരുത്തിയത് കൂട്ടക്ഷരങ്ങളെ അണുഅക്ഷരങ്ങളായി പിരിച്ചുകൊണ്ടായിരുന്നു", 10), ("പരമ്പരാഗതമായി മലയാളം ഇടത്തുനിന്ന് വലത്തോട്ടാണ് എഴുതുന്നത്", 5)])
@pytest.mark.parametrize(
"text,length",
[
(
"എന്നാൽ അച്ചടിയുടെ ആവിർഭാവം ലിപിയിൽ കാര്യമായ മാറ്റങ്ങൾ വരുത്തിയത് കൂട്ടക്ഷരങ്ങളെ അണുഅക്ഷരങ്ങളായി പിരിച്ചുകൊണ്ടായിരുന്നു",
10,
),
("പരമ്പരാഗതമായി മലയാളം ഇടത്തുനിന്ന് വലത്തോട്ടാണ് എഴുതുന്നത്", 5),
],
)
def test_ml_tokenizer_handles_cnts(ml_tokenizer, text, length):
tokens = ml_tokenizer(text)
assert len(tokens) == length


@ -5,9 +5,9 @@ import pytest
def test_noun_chunks_is_parsed_nb(nb_tokenizer):
"""Test that noun_chunks raises Value Error for 'nb' language if Doc is not parsed.
"""Test that noun_chunks raises Value Error for 'nb' language if Doc is not parsed.
To check this test, we're constructing a Doc
with a new Vocab here and forcing is_parsed to 'False'
with a new Vocab here and forcing is_parsed to 'False'
to make sure the noun chunks don't run.
"""
doc = nb_tokenizer("Smørsausen brukes bl.a. til")


@ -7,9 +7,9 @@ from ...util import get_doc
def test_noun_chunks_is_parsed_sv(sv_tokenizer):
"""Test that noun_chunks raises Value Error for 'sv' language if Doc is not parsed.
"""Test that noun_chunks raises Value Error for 'sv' language if Doc is not parsed.
To check this test, we're constructing a Doc
with a new Vocab here and forcing is_parsed to 'False'
with a new Vocab here and forcing is_parsed to 'False'
to make sure the noun chunks don't run.
"""
doc = sv_tokenizer("Studenten läste den bästa boken")


@ -34,5 +34,15 @@ def test_zh_tokenizer_serialize_pkuseg(zh_tokenizer_pkuseg):
@pytest.mark.slow
def test_zh_tokenizer_serialize_pkuseg_with_processors(zh_tokenizer_pkuseg):
nlp = Chinese(meta={"tokenizer": {"config": {"use_jieba": False, "use_pkuseg": True, "pkuseg_model": "medicine"}}})
nlp = Chinese(
meta={
"tokenizer": {
"config": {
"use_jieba": False,
"use_pkuseg": True,
"pkuseg_model": "medicine",
}
}
}
)
zh_tokenizer_serialize(nlp.tokenizer)


@ -43,12 +43,16 @@ def test_zh_tokenizer_pkuseg(zh_tokenizer_pkuseg, text, expected_tokens):
def test_zh_tokenizer_pkuseg_user_dict(zh_tokenizer_pkuseg):
user_dict = _get_pkuseg_trie_data(zh_tokenizer_pkuseg.pkuseg_seg.preprocesser.trie)
zh_tokenizer_pkuseg.pkuseg_update_user_dict(["nonsense_asdf"])
updated_user_dict = _get_pkuseg_trie_data(zh_tokenizer_pkuseg.pkuseg_seg.preprocesser.trie)
updated_user_dict = _get_pkuseg_trie_data(
zh_tokenizer_pkuseg.pkuseg_seg.preprocesser.trie
)
assert len(user_dict) == len(updated_user_dict) - 1
# reset user dict
zh_tokenizer_pkuseg.pkuseg_update_user_dict([], reset=True)
reset_user_dict = _get_pkuseg_trie_data(zh_tokenizer_pkuseg.pkuseg_seg.preprocesser.trie)
reset_user_dict = _get_pkuseg_trie_data(
zh_tokenizer_pkuseg.pkuseg_seg.preprocesser.trie
)
assert len(reset_user_dict) == 0


@ -265,15 +265,15 @@ def test_matcher_regex_shape(en_vocab):
@pytest.mark.parametrize(
"cmp, bad",
"cmp, bad",
[
("==", ["a", "aaa"]),
("!=", ["aa"]),
(">=", ["a"]),
("<=", ["aaa"]),
(">", ["a", "aa"]),
("<", ["aa", "aaa"])
]
("<", ["aa", "aaa"]),
],
)
def test_matcher_compare_length(en_vocab, cmp, bad):
matcher = Matcher(en_vocab)


@ -106,7 +106,9 @@ def test_sentencizer_complex(en_vocab, words, sent_starts, sent_ends, n_sents):
),
],
)
def test_sentencizer_custom_punct(en_vocab, punct_chars, words, sent_starts, sent_ends, n_sents):
def test_sentencizer_custom_punct(
en_vocab, punct_chars, words, sent_starts, sent_ends, n_sents
):
doc = Doc(en_vocab, words=words)
sentencizer = Sentencizer(punct_chars=punct_chars)
doc = sentencizer(doc)


@ -37,7 +37,7 @@ def test_serialize_vocab_roundtrip_bytes(strings1, strings2):
assert vocab1.to_bytes() == vocab1_b
new_vocab1 = Vocab().from_bytes(vocab1_b)
assert new_vocab1.to_bytes() == vocab1_b
assert len(new_vocab1.strings) == len(strings1) + 1 # adds _SP
assert len(new_vocab1.strings) == len(strings1) + 1 # adds _SP
assert sorted([s for s in new_vocab1.strings]) == sorted(strings1 + ["_SP"])
@ -56,9 +56,13 @@ def test_serialize_vocab_roundtrip_disk(strings1, strings2):
assert strings1 == [s for s in vocab1_d.strings if s != "_SP"]
assert strings2 == [s for s in vocab2_d.strings if s != "_SP"]
if strings1 == strings2:
assert [s for s in vocab1_d.strings if s != "_SP"] == [s for s in vocab2_d.strings if s != "_SP"]
assert [s for s in vocab1_d.strings if s != "_SP"] == [
s for s in vocab2_d.strings if s != "_SP"
]
else:
assert [s for s in vocab1_d.strings if s != "_SP"] != [s for s in vocab2_d.strings if s != "_SP"]
assert [s for s in vocab1_d.strings if s != "_SP"] != [
s for s in vocab2_d.strings if s != "_SP"
]
@pytest.mark.parametrize("strings,lex_attr", test_strings_attrs)
@ -76,9 +80,8 @@ def test_serialize_vocab_lex_attrs_bytes(strings, lex_attr):
def test_deserialize_vocab_seen_entries(strings, lex_attr):
# Reported in #2153
vocab = Vocab(strings=strings)
length = len(vocab)
vocab.from_bytes(vocab.to_bytes())
assert len(vocab.strings) == len(strings) + 1 # adds _SP
assert len(vocab.strings) == len(strings) + 1 # adds _SP
@pytest.mark.parametrize("strings,lex_attr", test_strings_attrs)
@ -130,6 +133,7 @@ def test_serialize_stringstore_roundtrip_disk(strings1, strings2):
else:
assert list(sstore1_d) != list(sstore2_d)
@pytest.mark.parametrize("strings,lex_attr", test_strings_attrs)
def test_pickle_vocab(strings, lex_attr):
vocab = Vocab(strings=strings)


@ -112,7 +112,7 @@ def test_gold_biluo_different_tokenization(en_vocab, en_tokenizer):
data = (
"I'll return the ₹54 amount",
{
"words": ["I", "'ll", "return", "the", "", "54", "amount",],
"words": ["I", "'ll", "return", "the", "", "54", "amount"],
"entities": [(16, 19, "MONEY")],
},
)
@ -122,7 +122,7 @@ def test_gold_biluo_different_tokenization(en_vocab, en_tokenizer):
data = (
"I'll return the $54 amount",
{
"words": ["I", "'ll", "return", "the", "$", "54", "amount",],
"words": ["I", "'ll", "return", "the", "$", "54", "amount"],
"entities": [(16, 19, "MONEY")],
},
)


@ -366,6 +366,7 @@ def test_vectors_serialize():
assert row == row_r
assert_equal(v.data, v_r.data)
def test_vector_is_oov():
vocab = Vocab(vectors_name="test_vocab_is_oov")
data = numpy.ndarray((5, 3), dtype="f")
@ -375,4 +376,4 @@ def test_vector_is_oov():
vocab.set_vector("dog", data[1])
assert vocab["cat"].is_oov is True
assert vocab["dog"].is_oov is True
assert vocab["hamster"].is_oov is False
assert vocab["hamster"].is_oov is False


@ -774,7 +774,7 @@ def get_words_and_spaces(words, text):
except ValueError:
raise ValueError(Errors.E194.format(text=text, words=words))
if word_start > 0:
text_words.append(text[text_pos:text_pos+word_start])
text_words.append(text[text_pos : text_pos + word_start])
text_spaces.append(False)
text_pos += word_start
text_words.append(word)
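
The hunk above is only a slice-spacing fix, but the helper it touches is handy to see in action: `get_words_and_spaces` aligns a word list with the raw text and inserts whitespace-only tokens where the words alone don't cover it. A small sketch (the expected output is my reading of the 2.3 implementation, not taken from the diff):

```python
from spacy.util import get_words_and_spaces

words = ["hello", "world"]
text = "hello  world"  # note the double space
print(get_words_and_spaces(words, text))
# expected: (['hello', ' ', 'world'], [True, False, False])
```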


@ -2172,6 +2172,43 @@
"model_uri = f'runs:/{my_run_id}/model'",
"nlp2 = mlflow.spacy.load_model(model_uri=model_uri)"
]
},
{
"id": "pyate",
"title": "PyATE",
"slogan": "Python Automated Term Extraction",
"description": "PyATE is a term extraction library written in Python using Spacy POS tagging with Basic, Combo Basic, C-Value, TermExtractor, and Weirdness.",
"github": "kevinlu1248/pyate",
"pip": "pyate",
"code_example": [
"import spacy",
"from pyate.term_extraction_pipeline import TermExtractionPipeline",
"",
"nlp = spacy.load('en_core_web_sm')",
"nlp.add_pipe(TermExtractionPipeline())",
"# source: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1994795/",
"string = 'Central to the development of cancer are genetic changes that endow these “cancer cells” with many of the hallmarks of cancer, such as self-sufficient growth and resistance to anti-growth and pro-death signals. However, while the genetic changes that occur within cancer cells themselves, such as activated oncogenes or dysfunctional tumor suppressors, are responsible for many aspects of cancer development, they are not sufficient. Tumor promotion and progression are dependent on ancillary processes provided by cells of the tumor environment but that are not necessarily cancerous themselves. Inflammation has long been associated with the development of cancer. This review will discuss the reflexive relationship between cancer and inflammation with particular focus on how considering the role of inflammation in physiologic processes such as the maintenance of tissue homeostasis and repair may provide a logical framework for understanding the connection between the inflammatory response and cancer.'",
"",
"doc = nlp(string)",
"print(doc._.combo_basic.sort_values(ascending=False).head(5))",
"\"\"\"\"\"\"",
"dysfunctional tumor 1.443147",
"tumor suppressors 1.443147",
"genetic changes 1.386294",
"cancer cells 1.386294",
"dysfunctional tumor suppressors 1.298612",
"\"\"\"\"\"\""
],
"code_language": "python",
"url": "https://github.com/kevinlu1248/pyate",
"author": "Kevin Lu",
"author_links": {
"twitter": "kevinlu1248",
"github": "kevinlu1248",
"website": "https://github.com/kevinlu1248/pyate"
},
"category": ["pipeline", "research"],
"tags": ["term_extraction"]
}
],