mirror of https://github.com/explosion/spaCy.git
synced 2024-12-25 01:16:28 +03:00

Merge remote-tracking branch 'upstream/master' into bugfix/noun-chunks

# Conflicts:
#   spacy/lang/el/syntax_iterators.py
#   spacy/lang/en/syntax_iterators.py
#   spacy/lang/fa/syntax_iterators.py
#   spacy/lang/fr/syntax_iterators.py
#   spacy/lang/id/syntax_iterators.py
#   spacy/lang/nb/syntax_iterators.py
#   spacy/lang/sv/syntax_iterators.py

This commit is contained in: commit 84d5b7ad0a

.github/contributors/kevinlu1248.md (new file, 106 lines added)
# spaCy contributor agreement

This spaCy Contributor Agreement (**"SCA"**) is based on the
[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
The SCA applies to any contribution that you make to any product or project
managed by us (the **"project"**), and sets out the intellectual property rights
you grant to us in the contributed materials. The term **"us"** shall mean
[ExplosionAI GmbH](https://explosion.ai/legal). The term
**"you"** shall mean the person or entity identified below.

If you agree to be bound by these terms, fill in the information requested
below and include the filled-in version with your first pull request, under the
folder [`.github/contributors/`](/.github/contributors/). The name of the file
should be your GitHub username, with the extension `.md`. For example, the user
example_user would create the file `.github/contributors/example_user.md`.

Read this agreement carefully before signing. These terms and conditions
constitute a binding legal agreement.

## Contributor Agreement

1. The term "contribution" or "contributed materials" means any source code,
object code, patch, tool, sample, graphic, specification, manual,
documentation, or any other material posted or submitted by you to the project.

2. With respect to any worldwide copyrights, or copyright applications and
registrations, in your contribution:

    * you hereby assign to us joint ownership, and to the extent that such
    assignment is or becomes invalid, ineffective or unenforceable, you hereby
    grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
    royalty-free, unrestricted license to exercise all rights under those
    copyrights. This includes, at our option, the right to sublicense these same
    rights to third parties through multiple levels of sublicensees or other
    licensing arrangements;

    * you agree that each of us can do all things in relation to your
    contribution as if each of us were the sole owners, and if one of us makes
    a derivative work of your contribution, the one who makes the derivative
    work (or has it made) will be the sole owner of that derivative work;

    * you agree that you will not assert any moral rights in your contribution
    against us, our licensees or transferees;

    * you agree that we may register a copyright in your contribution and
    exercise all ownership rights associated with it; and

    * you agree that neither of us has any duty to consult with, obtain the
    consent of, pay or render an accounting to the other for any use or
    distribution of your contribution.

3. With respect to any patents you own, or that you can license without payment
to any third party, you hereby grant to us a perpetual, irrevocable,
non-exclusive, worldwide, no-charge, royalty-free license to:

    * make, have made, use, sell, offer to sell, import, and otherwise transfer
    your contribution in whole or in part, alone or in combination with or
    included in any product, work or materials arising out of the project to
    which your contribution was submitted, and

    * at our option, to sublicense these same rights to third parties through
    multiple levels of sublicensees or other licensing arrangements.

4. Except as set out above, you keep all right, title, and interest in your
contribution. The rights that you grant to us under these terms are effective
on the date you first submitted a contribution to us, even if your submission
took place before the date you sign these terms.

5. You covenant, represent, warrant and agree that:

    * Each contribution that you submit is and shall be an original work of
    authorship and you can legally grant the rights set out in this SCA;

    * to the best of your knowledge, each contribution will not violate any
    third party's copyrights, trademarks, patents, or other intellectual
    property rights; and

    * each contribution shall be in compliance with U.S. export control laws and
    other applicable export and import laws. You agree to notify us if you
    become aware of any circumstance which would make any of the foregoing
    representations inaccurate in any respect. We may publicly disclose your
    participation in the project, including the fact that you have signed the SCA.

6. This SCA is governed by the laws of the State of California and applicable
U.S. Federal law. Any choice of law rules will not apply.

7. Please place an “x” on one of the applicable statements below. Please do NOT
mark both statements:

    * [x] I am signing on behalf of myself as an individual and no other person
    or entity, including my employer, has or will have rights with respect to my
    contributions.

    * [ ] I am signing on behalf of my employer or a legal entity and I have the
    actual authority to contractually bind that entity.

## Contributor Details

| Field                         | Entry       |
| ----------------------------- | ----------- |
| Name                          | Kevin Lu    |
| Company name (if applicable)  |             |
| Title or role (if applicable) | Student     |
| Date                          |             |
| GitHub username               | kevinlu1248 |
| Website (optional)            |             |
```diff
@@ -187,12 +187,17 @@ def debug_data(
         n_missing_vectors = sum(gold_train_data["words_missing_vectors"].values())
         msg.warn(
             "{} words in training data without vectors ({:0.2f}%)".format(
-                n_missing_vectors,
-                n_missing_vectors / gold_train_data["n_words"],
+                n_missing_vectors, n_missing_vectors / gold_train_data["n_words"],
             ),
         )
         msg.text(
-            "10 most common words without vectors: {}".format(_format_labels(gold_train_data["words_missing_vectors"].most_common(10), counts=True)), show=verbose,
+            "10 most common words without vectors: {}".format(
+                _format_labels(
+                    gold_train_data["words_missing_vectors"].most_common(10),
+                    counts=True,
+                )
+            ),
+            show=verbose,
         )
     else:
         msg.info("No word vectors present in the model")
```
```diff
@@ -18,6 +18,8 @@ from wasabi import msg
 from ..vectors import Vectors
 from ..errors import Errors, Warnings
 from ..util import ensure_path, get_lang_class, load_model, OOV_RANK
+from ..lookups import Lookups
+
 
 try:
     import ftfy
@@ -49,6 +51,7 @@ DEFAULT_OOV_PROB = -20
         str,
     ),
     model_name=("Optional name for the model meta", "option", "mn", str),
+    omit_extra_lookups=("Don't include extra lookups in model", "flag", "OEL", bool),
     base_model=("Base model (for languages with custom tokenizers)", "option", "b", str),
 )
 def init_model(
@@ -62,6 +65,7 @@ def init_model(
     prune_vectors=-1,
     vectors_name=None,
     model_name=None,
+    omit_extra_lookups=False,
     base_model=None,
 ):
     """
@@ -95,6 +99,15 @@ def init_model(
 
     with msg.loading("Creating model..."):
         nlp = create_model(lang, lex_attrs, name=model_name, base_model=base_model)
+
+    # Create empty extra lexeme tables so the data from spacy-lookups-data
+    # isn't loaded if these features are accessed
+    if omit_extra_lookups:
+        nlp.vocab.lookups_extra = Lookups()
+        nlp.vocab.lookups_extra.add_table("lexeme_cluster")
+        nlp.vocab.lookups_extra.add_table("lexeme_prob")
+        nlp.vocab.lookups_extra.add_table("lexeme_settings")
+
     msg.good("Successfully created model")
     if vectors_loc is not None:
         add_vectors(nlp, vectors_loc, truncate_vectors, prune_vectors, vectors_name)
```
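For context, the new `omit_extra_lookups` option registers empty tables up front so the optional lexeme data shipped in `spacy-lookups-data` is never loaded lazily. A minimal sketch of the same idea using the v2.3-era `Lookups` API (the attribute and table names mirror the ones added above; treat the exact API details as assumptions):

```python
from spacy.lang.en import English
from spacy.lookups import Lookups

nlp = English()

# Empty tables mean later lookups find nothing instead of triggering a
# load of the full cluster/prob/settings data from spacy-lookups-data.
nlp.vocab.lookups_extra = Lookups()
for table in ("lexeme_cluster", "lexeme_prob", "lexeme_settings"):
    nlp.vocab.lookups_extra.add_table(table)

print(nlp.vocab.lookups_extra.tables)
# ['lexeme_cluster', 'lexeme_prob', 'lexeme_settings']
```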
```diff
@@ -17,6 +17,7 @@ from .._ml import create_default_optimizer
 from ..util import use_gpu as set_gpu
 from ..gold import GoldCorpus
 from ..compat import path2str
+from ..lookups import Lookups
 from .. import util
 from .. import about
 
@@ -57,6 +58,7 @@ from .. import about
     textcat_arch=("Textcat model architecture", "option", "ta", str),
     textcat_positive_label=("Textcat positive label for binary classes with two labels", "option", "tpl", str),
     tag_map_path=("Location of JSON-formatted tag map", "option", "tm", Path),
+    omit_extra_lookups=("Don't include extra lookups in model", "flag", "OEL", bool),
     verbose=("Display more information for debug", "flag", "VV", bool),
     debug=("Run data diagnostics before training", "flag", "D", bool),
     # fmt: on
@@ -96,6 +98,7 @@ def train(
     textcat_arch="bow",
     textcat_positive_label=None,
     tag_map_path=None,
+    omit_extra_lookups=False,
     verbose=False,
     debug=False,
 ):
@@ -247,6 +250,14 @@ def train(
         # Update tag map with provided mapping
         nlp.vocab.morphology.tag_map.update(tag_map)
 
+    # Create empty extra lexeme tables so the data from spacy-lookups-data
+    # isn't loaded if these features are accessed
+    if omit_extra_lookups:
+        nlp.vocab.lookups_extra = Lookups()
+        nlp.vocab.lookups_extra.add_table("lexeme_cluster")
+        nlp.vocab.lookups_extra.add_table("lexeme_prob")
+        nlp.vocab.lookups_extra.add_table("lexeme_settings")
+
     if vectors:
         msg.text("Loading vector from model '{}'".format(vectors))
         _load_vectors(nlp, vectors)
```
```diff
@@ -8,7 +8,7 @@ def add_codes(err_cls):
     class ErrorsWithCodes(err_cls):
         def __getattribute__(self, code):
             msg = super().__getattribute__(code)
-            if code.startswith('__'):  # python system attributes like __class__
+            if code.startswith("__"):  # python system attributes like __class__
                 return msg
             else:
                 return "[{code}] {msg}".format(code=code, msg=msg)
@@ -116,6 +116,7 @@ class Warnings(object):
             " to check the alignment. Misaligned entities ('-') will be "
             "ignored during training.")
 
+
 @add_codes
 class Errors(object):
     E001 = ("No component '{name}' found in pipeline. Available names: {opts}")
```
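For context, the `add_codes` decorator shown above wraps the error container so that attribute access returns the message prefixed with its code. A standalone sketch of that pattern, reduced from the code in the hunk (plain Python, not spaCy's actual module):

```python
def add_codes(err_cls):
    """Wrap an error container so attribute access prepends the code."""

    class ErrorsWithCodes(err_cls):
        def __getattribute__(self, code):
            msg = super().__getattribute__(code)
            if code.startswith("__"):  # python system attributes like __class__
                return msg
            else:
                return "[{code}] {msg}".format(code=code, msg=msg)

    return ErrorsWithCodes()


@add_codes
class Errors(object):
    E001 = "No component '{name}' found in pipeline. Available names: {opts}"


print(Errors.E001.format(name="ner", opts="tagger, parser"))
# [E001] No component 'ner' found in pipeline. Available names: tagger, parser
```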
```diff
@@ -9,7 +9,6 @@ from .morph_rules import MORPH_RULES
 from ..tag_map import TAG_MAP
 
 from ..tokenizer_exceptions import BASE_EXCEPTIONS
-from ..norm_exceptions import BASE_NORMS
 from ...language import Language
 from ...attrs import LANG
 from ...util import update_exc
```
```diff
@@ -5,7 +5,7 @@ from ...symbols import NOUN, PROPN, PRON
 from ...errors import Errors
 
 
-def noun_chunks(obj):
+def noun_chunks(doclike):
     """
     Detect base noun phrases from a dependency parse. Works on both Doc and Span.
     """
@@ -28,7 +28,7 @@ def noun_chunks(obj):
         "og",
         "app",
     ]
-    doc = obj.doc  # Ensure works on both Doc and Span.
+    doc = doclike.doc  # Ensure works on both Doc and Span.
 
     if not doc.is_parsed:
         raise ValueError(Errors.E029)
@@ -38,7 +38,7 @@ def noun_chunks(obj):
     close_app = doc.vocab.strings.add("nk")
 
     rbracket = 0
-    for i, word in enumerate(obj):
+    for i, word in enumerate(doclike):
         if i < rbracket:
             continue
         if word.pos in (NOUN, PROPN, PRON) and word.dep in np_deps:
```
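The `obj` to `doclike` rename, repeated across the language-specific `noun_chunks` iterators in the hunks that follow, makes explicit that the argument may be either a `Doc` or a `Span`. A spaCy-independent sketch of the normalization pattern these functions rely on (the two tiny classes are hypothetical stand-ins, not spaCy's types):

```python
class Doc:
    def __init__(self, words):
        self.words = words
        self.doc = self  # a Doc is its own .doc, mirroring spaCy

    def __iter__(self):
        return iter(self.words)


class Span:
    def __init__(self, doc, start, end):
        self.doc, self.start, self.end = doc, start, end

    def __iter__(self):
        return iter(self.doc.words[self.start:self.end])


def noun_chunks(doclike):
    # Works on both Doc and Span: both expose .doc for document-level
    # state (in spaCy: doc.vocab.strings, doc.is_parsed), while iterating
    # over doclike only visits the tokens of the given slice.
    doc = doclike.doc
    assert doc is not None  # real implementations check doc.is_parsed here
    return list(doclike)


d = Doc(["A", "small", "dog", "barked"])
print(noun_chunks(d))              # ['A', 'small', 'dog', 'barked']
print(noun_chunks(Span(d, 1, 3)))  # ['small', 'dog']
```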
```diff
@@ -5,7 +5,7 @@ from ...symbols import NOUN, PROPN, PRON
 from ...errors import Errors
 
 
-def noun_chunks(obj):
+def noun_chunks(doclike):
     """
     Detect base noun phrases. Works on both Doc and Span.
     """
@@ -14,7 +14,7 @@ def noun_chunks(obj):
     # obj tag corrects some DEP tagger mistakes.
     # Further improvement of the models will eliminate the need for this tag.
     labels = ["nsubj", "obj", "iobj", "appos", "ROOT", "obl"]
-    doc = obj.doc  # Ensure works on both Doc and Span.
+    doc = doclike.doc  # Ensure works on both Doc and Span.
 
     if not doc.is_parsed:
         raise ValueError(Errors.E029)
@@ -24,7 +24,7 @@ def noun_chunks(obj):
     nmod = doc.vocab.strings.add("nmod")
     np_label = doc.vocab.strings.add("NP")
     prev_end = -1
-    for i, word in enumerate(obj):
+    for i, word in enumerate(doclike):
         if word.pos not in (NOUN, PROPN, PRON):
             continue
         # Prevent nested chunks from being produced
```
```diff
@@ -5,7 +5,7 @@ from ...symbols import NOUN, PROPN, PRON
 from ...errors import Errors
 
 
-def noun_chunks(obj):
+def noun_chunks(doclike):
     """
     Detect base noun phrases from a dependency parse. Works on both Doc and Span.
     """
@@ -20,7 +20,7 @@ def noun_chunks(obj):
         "attr",
         "ROOT",
     ]
-    doc = obj.doc  # Ensure works on both Doc and Span.
+    doc = doclike.doc  # Ensure works on both Doc and Span.
 
     if not doc.is_parsed:
         raise ValueError(Errors.E029)
@@ -29,7 +29,7 @@ def noun_chunks(obj):
     conj = doc.vocab.strings.add("conj")
     np_label = doc.vocab.strings.add("NP")
     prev_end = -1
-    for i, word in enumerate(obj):
+    for i, word in enumerate(doclike):
         if word.pos not in (NOUN, PROPN, PRON):
             continue
         # Prevent nested chunks from being produced
```
```diff
@@ -197,7 +197,7 @@ for word in ["who", "what", "when", "where", "why", "how", "there", "that"]:
 
     _exc[orth + "d"] = [
         {ORTH: orth, LEMMA: word, NORM: word},
-        {ORTH: "d", NORM: "'d"}
+        {ORTH: "d", NORM: "'d"},
     ]
 
     _exc[orth + "'d've"] = [
```
```diff
@@ -5,7 +5,6 @@ from ..char_classes import LIST_PUNCT, LIST_ELLIPSES, LIST_QUOTES
 from ..char_classes import LIST_ICONS, CURRENCY, LIST_UNITS, PUNCT
 from ..char_classes import CONCAT_QUOTES, ALPHA_LOWER, ALPHA_UPPER, ALPHA
 from ..char_classes import merge_chars
-from ..punctuation import TOKENIZER_PREFIXES as BASE_TOKENIZER_PREFIXES
 
 
 _list_units = [u for u in LIST_UNITS if u != "%"]
```
```diff
@@ -5,8 +5,8 @@ from ...symbols import NOUN, PROPN, PRON, VERB, AUX
 from ...errors import Errors
 
 
-def noun_chunks(obj):
-    doc = obj.doc
+def noun_chunks(doclike):
+    doc = doclike.doc
 
     if not doc.is_parsed:
         raise ValueError(Errors.E029)
@@ -21,7 +21,7 @@ def noun_chunks(obj):
     np_right_deps = [doc.vocab.strings.add(label) for label in right_labels]
     stop_deps = [doc.vocab.strings.add(label) for label in stop_labels]
     token = doc[0]
-    while token and token.i < len(doc):
+    while token and token.i < len(doclike):
         if token.pos in [PROPN, NOUN, PRON]:
             left, right = noun_bounds(
                 doc, token, np_left_deps, np_right_deps, stop_deps
```
```diff
@@ -5,7 +5,7 @@ from ...symbols import NOUN, PROPN, PRON
 from ...errors import Errors
 
 
-def noun_chunks(obj):
+def noun_chunks(doclike):
     """
     Detect base noun phrases from a dependency parse. Works on both Doc and Span.
     """
@@ -20,7 +20,7 @@ def noun_chunks(obj):
         "attr",
         "ROOT",
     ]
-    doc = obj.doc  # Ensure works on both Doc and Span.
+    doc = doclike.doc  # Ensure works on both Doc and Span.
 
     if not doc.is_parsed:
         raise ValueError(Errors.E029)
@@ -29,7 +29,7 @@ def noun_chunks(obj):
     conj = doc.vocab.strings.add("conj")
     np_label = doc.vocab.strings.add("NP")
     prev_end = -1
-    for i, word in enumerate(obj):
+    for i, word in enumerate(doclike):
         if word.pos not in (NOUN, PROPN, PRON):
             continue
         # Prevent nested chunks from being produced
```
```diff
@@ -5,7 +5,7 @@ from ...symbols import NOUN, PROPN, PRON
 from ...errors import Errors
 
 
-def noun_chunks(obj):
+def noun_chunks(doclike):
     """
     Detect base noun phrases from a dependency parse. Works on both Doc and Span.
     """
@@ -19,7 +19,7 @@ def noun_chunks(obj):
         "nmod",
         "nmod:poss",
     ]
-    doc = obj.doc  # Ensure works on both Doc and Span.
+    doc = doclike.doc  # Ensure works on both Doc and Span.
 
     if not doc.is_parsed:
         raise ValueError(Errors.E029)
@@ -28,7 +28,7 @@ def noun_chunks(obj):
     conj = doc.vocab.strings.add("conj")
     np_label = doc.vocab.strings.add("NP")
     prev_end = -1
-    for i, word in enumerate(obj):
+    for i, word in enumerate(doclike):
         if word.pos not in (NOUN, PROPN, PRON):
             continue
         # Prevent nested chunks from being produced
```
```diff
@@ -1,11 +1,12 @@
+# coding: utf8
+from __future__ import unicode_literals
 from .stop_words import STOP_WORDS
 from .lex_attrs import LEX_ATTRS
 from .tag_map import TAG_MAP
 
 
 from ...attrs import LANG
 from ...language import Language
-from ...tokens import Doc
 
 
 class ArmenianDefaults(Language.Defaults):
```
```diff
@@ -1,6 +1,6 @@
+# coding: utf8
 from __future__ import unicode_literals
 
-
 """
 Example sentences to test spaCy and its language models.
 >>> from spacy.lang.hy.examples import sentences
```
```diff
@@ -1,3 +1,4 @@
+# coding: utf8
 from __future__ import unicode_literals
 
 from ...attrs import LIKE_NUM
```
```diff
@@ -1,6 +1,6 @@
+# coding: utf8
 from __future__ import unicode_literals
 
-
 STOP_WORDS = set(
     """
 նա
```
```diff
@@ -1,7 +1,7 @@
 # coding: utf8
 from __future__ import unicode_literals
 
-from ...symbols import POS, SYM, ADJ, NUM, DET, ADV, ADP, X, VERB, NOUN
+from ...symbols import POS, ADJ, NUM, DET, ADV, ADP, X, VERB, NOUN
 from ...symbols import PROPN, PART, INTJ, PRON, SCONJ, AUX, CCONJ
 
 TAG_MAP = {
@@ -716,7 +716,7 @@ TAG_MAP = {
         POS: NOUN,
         "Animacy": "Nhum",
         "Case": "Dat",
-        "Number": "Coll",
+        # "Number": "Coll",
         "Number": "Sing",
         "Person": "1",
     },
@@ -815,7 +815,7 @@ TAG_MAP = {
         "Animacy": "Nhum",
         "Case": "Nom",
         "Definite": "Def",
-        "Number": "Plur",
+        # "Number": "Plur",
         "Number": "Sing",
         "Poss": "Yes",
     },
@@ -880,7 +880,7 @@ TAG_MAP = {
         POS: NOUN,
         "Animacy": "Nhum",
         "Case": "Nom",
-        "Number": "Plur",
+        # "Number": "Plur",
         "Number": "Sing",
         "Person": "2",
     },
@@ -1223,9 +1223,9 @@ TAG_MAP = {
     "PRON_Case=Nom|Number=Sing|Number=Plur|Person=3|Person=1|PronType=Emp": {
         POS: PRON,
         "Case": "Nom",
-        "Number": "Sing",
+        # "Number": "Sing",
         "Number": "Plur",
-        "Person": "3",
+        # "Person": "3",
         "Person": "1",
         "PronType": "Emp",
     },
```
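The tag-map edits above comment out the first of two identical keys rather than deleting them outright: in a Python dict literal a repeated key is not an error, the last value silently wins, so the commented-out lines record which value was being shadowed. A two-line illustration:

```python
# In a dict literal, a repeated key keeps only the last value.
entry = {"Number": "Coll", "Number": "Sing"}
print(entry)  # {'Number': 'Sing'}
```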
```diff
@@ -5,7 +5,7 @@ from ...symbols import NOUN, PROPN, PRON
 from ...errors import Errors
 
 
-def noun_chunks(obj):
+def noun_chunks(doclike):
     """
     Detect base noun phrases from a dependency parse. Works on both Doc and Span.
     """
@@ -19,7 +19,7 @@ def noun_chunks(obj):
         "nmod",
         "nmod:poss",
     ]
-    doc = obj.doc  # Ensure works on both Doc and Span.
+    doc = doclike.doc  # Ensure works on both Doc and Span.
 
     if not doc.is_parsed:
         raise ValueError(Errors.E029)
@@ -28,7 +28,7 @@ def noun_chunks(obj):
     conj = doc.vocab.strings.add("conj")
     np_label = doc.vocab.strings.add("NP")
     prev_end = -1
-    for i, word in enumerate(obj):
+    for i, word in enumerate(doclike):
         if word.pos not in (NOUN, PROPN, PRON):
             continue
         # Prevent nested chunks from being produced
```
```diff
@@ -55,7 +55,7 @@ _num_words = [
     "തൊണ്ണൂറ് ",
     "നുറ് ",
     "ആയിരം ",
-    "പത്തുലക്ഷം"
+    "പത്തുലക്ഷം",
 ]
 
 
```
```diff
@@ -3,7 +3,6 @@ from __future__ import unicode_literals
 
 
 STOP_WORDS = set(
-
     """
 അത്
 ഇത്
```
```diff
@@ -5,7 +5,7 @@ from ...symbols import NOUN, PROPN, PRON
 from ...errors import Errors
 
 
-def noun_chunks(obj):
+def noun_chunks(doclike):
     """
     Detect base noun phrases from a dependency parse. Works on both Doc and Span.
     """
@@ -19,7 +19,7 @@ def noun_chunks(obj):
         "nmod",
         "nmod:poss",
     ]
-    doc = obj.doc  # Ensure works on both Doc and Span.
+    doc = doclike.doc  # Ensure works on both Doc and Span.
 
     if not doc.is_parsed:
         raise ValueError(Errors.E029)
@@ -28,7 +28,7 @@ def noun_chunks(obj):
     conj = doc.vocab.strings.add("conj")
     np_label = doc.vocab.strings.add("NP")
     prev_end = -1
-    for i, word in enumerate(obj):
+    for i, word in enumerate(doclike):
         if word.pos not in (NOUN, PROPN, PRON):
             continue
         # Prevent nested chunks from being produced
```
```diff
@@ -12,7 +12,7 @@ from ..tokenizer_exceptions import BASE_EXCEPTIONS
 from ..norm_exceptions import BASE_NORMS
 from ...language import Language
 from ...attrs import LANG, NORM
-from ...util import update_exc, add_lookups
+from ...util import add_lookups
 from ...lookups import Lookups
 
 
```
```diff
@@ -3,7 +3,6 @@ from __future__ import unicode_literals
 
 from ...lemmatizer import Lemmatizer
 from ...parts_of_speech import NAMES
-from ...errors import Errors
 
 
 class PolishLemmatizer(Lemmatizer):
```
```diff
@@ -8,7 +8,9 @@ from ..punctuation import TOKENIZER_PREFIXES as BASE_TOKENIZER_PREFIXES
 
 _quotes = CONCAT_QUOTES.replace("'", "")
 
-_prefixes = _prefixes = [r"(długo|krótko|jedno|dwu|trzy|cztero)-"] + BASE_TOKENIZER_PREFIXES
+_prefixes = _prefixes = [
+    r"(długo|krótko|jedno|dwu|trzy|cztero)-"
+] + BASE_TOKENIZER_PREFIXES
 
 _infixes = (
     LIST_ELLIPSES
```
```diff
@@ -40,7 +40,7 @@ _num_words = [
     "miljard",
     "biljon",
     "biljard",
-    "kvadriljon"
+    "kvadriljon",
 ]
 
 
```
```diff
@@ -5,7 +5,7 @@ from ...symbols import NOUN, PROPN, PRON
 from ...errors import Errors
 
 
-def noun_chunks(obj):
+def noun_chunks(doclike):
     """
     Detect base noun phrases from a dependency parse. Works on both Doc and Span.
     """
@@ -20,7 +20,7 @@ def noun_chunks(obj):
         "nmod",
         "nmod:poss",
     ]
-    doc = obj.doc  # Ensure works on both Doc and Span.
+    doc = doclike.doc  # Ensure works on both Doc and Span.
 
     if not doc.is_parsed:
         raise ValueError(Errors.E029)
@@ -29,7 +29,7 @@ def noun_chunks(obj):
     conj = doc.vocab.strings.add("conj")
     np_label = doc.vocab.strings.add("NP")
     prev_end = -1
-    for i, word in enumerate(obj):
+    for i, word in enumerate(doclike):
         if word.pos not in (NOUN, PROPN, PRON):
             continue
         # Prevent nested chunks from being produced
```
```diff
@@ -38,7 +38,6 @@ TAG_MAP = {
     "NNPC": {POS: PROPN},
     "NNC": {POS: NOUN},
     "PSP": {POS: ADP},
-
     ".": {POS: PUNCT},
     ",": {POS: PUNCT},
     "-LRB-": {POS: PUNCT},
```
```diff
@@ -109,6 +109,7 @@ class ChineseTokenizer(DummyTokenizer):
         if reset:
             try:
                 import pkuseg
+
                 self.pkuseg_seg.preprocesser = pkuseg.Preprocesser(None)
             except ImportError:
                 if self.use_pkuseg:
@@ -118,7 +119,7 @@ class ChineseTokenizer(DummyTokenizer):
                     )
                     raise ImportError(msg)
         for word in words:
-            self.pkuseg_seg.preprocesser.insert(word.strip(), '')
+            self.pkuseg_seg.preprocesser.insert(word.strip(), "")
 
     def _get_config(self):
         config = OrderedDict(
@@ -168,21 +169,16 @@ class ChineseTokenizer(DummyTokenizer):
         return util.to_bytes(serializers, [])
 
     def from_bytes(self, data, **kwargs):
-        pkuseg_features_b = b""
-        pkuseg_weights_b = b""
-        pkuseg_processors_data = None
+        pkuseg_data = {"features_b": b"", "weights_b": b"", "processors_data": None}
 
         def deserialize_pkuseg_features(b):
-            nonlocal pkuseg_features_b
-            pkuseg_features_b = b
+            pkuseg_data["features_b"] = b
 
         def deserialize_pkuseg_weights(b):
-            nonlocal pkuseg_weights_b
-            pkuseg_weights_b = b
+            pkuseg_data["weights_b"] = b
 
         def deserialize_pkuseg_processors(b):
-            nonlocal pkuseg_processors_data
-            pkuseg_processors_data = srsly.msgpack_loads(b)
+            pkuseg_data["processors_data"] = srsly.msgpack_loads(b)
 
         deserializers = OrderedDict(
             (
@@ -194,13 +190,13 @@ class ChineseTokenizer(DummyTokenizer):
         )
         util.from_bytes(data, deserializers, [])
 
-        if pkuseg_features_b and pkuseg_weights_b:
+        if pkuseg_data["features_b"] and pkuseg_data["weights_b"]:
             with tempfile.TemporaryDirectory() as tempdir:
                 tempdir = Path(tempdir)
                 with open(tempdir / "features.pkl", "wb") as fileh:
-                    fileh.write(pkuseg_features_b)
+                    fileh.write(pkuseg_data["features_b"])
                 with open(tempdir / "weights.npz", "wb") as fileh:
-                    fileh.write(pkuseg_weights_b)
+                    fileh.write(pkuseg_data["weights_b"])
                 try:
                     import pkuseg
                 except ImportError:
@@ -209,13 +205,9 @@ class ChineseTokenizer(DummyTokenizer):
                         + _PKUSEG_INSTALL_MSG
                     )
                 self.pkuseg_seg = pkuseg.pkuseg(str(tempdir))
-            if pkuseg_processors_data:
-                (
-                    user_dict,
-                    do_process,
-                    common_words,
-                    other_words,
-                ) = pkuseg_processors_data
+            if pkuseg_data["processors_data"]:
+                processors_data = pkuseg_data["processors_data"]
+                (user_dict, do_process, common_words, other_words) = processors_data
                 self.pkuseg_seg.preprocesser = pkuseg.Preprocesser(user_dict)
                 self.pkuseg_seg.postprocesser.do_process = do_process
                 self.pkuseg_seg.postprocesser.common_words = set(common_words)
```
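The `from_bytes` refactor above replaces three `nonlocal` variables with one mutable `pkuseg_data` dict that the nested deserializer callbacks write into, which keeps each callback to a single line and avoids a `nonlocal` declaration per field. A standalone sketch of that pattern (names are illustrative, not spaCy's API):

```python
def build_deserializers():
    # A single shared dict is mutated in place, so the nested callbacks
    # need no nonlocal declarations at all.
    data = {"features_b": b"", "weights_b": b"", "processors_data": None}

    def deserialize_features(b):
        data["features_b"] = b

    def deserialize_weights(b):
        data["weights_b"] = b

    return data, {"features": deserialize_features, "weights": deserialize_weights}


data, deserializers = build_deserializers()
deserializers["features"](b"\x00\x01")
deserializers["weights"](b"\x02")
print(data)
# {'features_b': b'\x00\x01', 'weights_b': b'\x02', 'processors_data': None}
```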
```diff
@@ -79,7 +79,9 @@ class BaseDefaults(object):
             lookups=lookups,
         )
         vocab.lex_attr_getters[NORM] = util.add_lookups(
-            vocab.lex_attr_getters.get(NORM, LEX_ATTRS[NORM]), BASE_NORMS, vocab.lookups.get_table("lexeme_norm")
+            vocab.lex_attr_getters.get(NORM, LEX_ATTRS[NORM]),
+            BASE_NORMS,
+            vocab.lookups.get_table("lexeme_norm"),
         )
         for tag_str, exc in cls.morph_rules.items():
             for orth_str, attrs in exc.items():
@@ -974,7 +976,9 @@ class Language(object):
         serializers = OrderedDict()
         serializers["vocab"] = lambda: self.vocab.to_bytes()
         serializers["tokenizer"] = lambda: self.tokenizer.to_bytes(exclude=["vocab"])
-        serializers["meta.json"] = lambda: srsly.json_dumps(OrderedDict(sorted(self.meta.items())))
+        serializers["meta.json"] = lambda: srsly.json_dumps(
+            OrderedDict(sorted(self.meta.items()))
+        )
         for name, proc in self.pipeline:
             if name in exclude:
                 continue
```
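The `meta.json` change above only re-wraps a long line, but the surrounding pattern is worth noting: `to_bytes` collects zero-argument callables in an ordered mapping, and each one is invoked only when the bundle is actually written out. A small standalone sketch of that pattern (using `json` rather than `srsly` to stay dependency-free):

```python
import json
from collections import OrderedDict

meta = {"name": "demo", "lang": "en"}

serializers = OrderedDict()
serializers["meta.json"] = lambda: json.dumps(OrderedDict(sorted(meta.items())))
serializers["data.txt"] = lambda: "payload"

# Each serializer runs lazily, at serialization time.
bundle = {name: serialize() for name, serialize in serializers.items()}
print(bundle["meta.json"])  # {"lang": "en", "name": "demo"}
```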
```diff
@@ -6,6 +6,7 @@ from collections import OrderedDict
 from .symbols import NOUN, VERB, ADJ, PUNCT, PROPN
 from .errors import Errors
 from .lookups import Lookups
+from .parts_of_speech import NAMES as UPOS_NAMES
 
 
 class Lemmatizer(object):
@@ -43,17 +44,11 @@ class Lemmatizer(object):
         lookup_table = self.lookups.get_table("lemma_lookup", {})
         if "lemma_rules" not in self.lookups:
             return [lookup_table.get(string, string)]
-        if univ_pos in (NOUN, "NOUN", "noun"):
-            univ_pos = "noun"
-        elif univ_pos in (VERB, "VERB", "verb"):
-            univ_pos = "verb"
-        elif univ_pos in (ADJ, "ADJ", "adj"):
-            univ_pos = "adj"
-        elif univ_pos in (PUNCT, "PUNCT", "punct"):
-            univ_pos = "punct"
-        elif univ_pos in (PROPN, "PROPN"):
-            return [string]
-        else:
+        if isinstance(univ_pos, int):
+            univ_pos = UPOS_NAMES.get(univ_pos, "X")
+        univ_pos = univ_pos.lower()
+
+        if univ_pos in ("", "eol", "space"):
             return [string.lower()]
         # See Issue #435 for example of where this logic is requied.
         if self.is_base_form(univ_pos, morphology):
@@ -61,6 +56,11 @@ class Lemmatizer(object):
         index_table = self.lookups.get_table("lemma_index", {})
         exc_table = self.lookups.get_table("lemma_exc", {})
         rules_table = self.lookups.get_table("lemma_rules", {})
+        if not any((index_table.get(univ_pos), exc_table.get(univ_pos), rules_table.get(univ_pos))):
+            if univ_pos == "propn":
+                return [string]
+            else:
+                return [string.lower()]
         lemmas = self.lemmatize(
             string,
             index_table.get(univ_pos, {}),
```
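The lemmatizer change above normalizes `univ_pos` once, mapping integer IDs to their UPOS names and lowercasing strings, instead of enumerating every accepted spelling per tag. A minimal sketch of that normalization step; the ID values below are hypothetical stand-ins for spaCy's `parts_of_speech.NAMES` table:

```python
# Hypothetical subset of an ID -> UPOS name table.
UPOS_NAMES = {92: "NOUN", 96: "PROPN", 100: "VERB"}


def normalize_pos(univ_pos):
    if isinstance(univ_pos, int):
        univ_pos = UPOS_NAMES.get(univ_pos, "X")
    return univ_pos.lower()


print(normalize_pos(92))      # noun
print(normalize_pos("VERB"))  # verb
print(normalize_pos(12345))   # x (unknown IDs fall back to "X")
```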
```diff
@@ -213,28 +213,28 @@ cdef class Matcher:
         else:
             yield doc
 
-    def __call__(self, object doc_or_span):
+    def __call__(self, object doclike):
         """Find all token sequences matching the supplied pattern.
 
-        doc_or_span (Doc or Span): The document to match over.
+        doclike (Doc or Span): The document to match over.
         RETURNS (list): A list of `(key, start, end)` tuples,
             describing the matches. A match tuple describes a span
             `doc[start:end]`. The `label_id` and `key` are both integers.
         """
-        if isinstance(doc_or_span, Doc):
-            doc = doc_or_span
+        if isinstance(doclike, Doc):
+            doc = doclike
             length = len(doc)
-        elif isinstance(doc_or_span, Span):
-            doc = doc_or_span.doc
-            length = doc_or_span.end - doc_or_span.start
+        elif isinstance(doclike, Span):
+            doc = doclike.doc
+            length = doclike.end - doclike.start
         else:
-            raise ValueError(Errors.E195.format(good="Doc or Span", got=type(doc_or_span).__name__))
+            raise ValueError(Errors.E195.format(good="Doc or Span", got=type(doclike).__name__))
         if len(set([LEMMA, POS, TAG]) & self._seen_attrs) > 0 \
                 and not doc.is_tagged:
             raise ValueError(Errors.E155.format())
         if DEP in self._seen_attrs and not doc.is_parsed:
             raise ValueError(Errors.E156.format())
-        matches = find_matches(&self.patterns[0], self.patterns.size(), doc_or_span, length,
+        matches = find_matches(&self.patterns[0], self.patterns.size(), doclike, length,
                                 extensions=self._extensions, predicates=self._extra_predicates)
         for i, (key, start, end) in enumerate(matches):
             on_match = self._callbacks.get(key, None)
@@ -257,7 +257,7 @@ def unpickle_matcher(vocab, patterns, callbacks):
     return matcher
 
 
-cdef find_matches(TokenPatternC** patterns, int n, object doc_or_span, int length, extensions=None, predicates=tuple()):
+cdef find_matches(TokenPatternC** patterns, int n, object doclike, int length, extensions=None, predicates=tuple()):
     """Find matches in a doc, with a compiled array of patterns. Matches are
     returned as a list of (id, start, end) tuples.
 
@@ -286,7 +286,7 @@ cdef find_matches(TokenPatternC** patterns, int n, object doclike, int lengt
     else:
         nr_extra_attr = 0
         extra_attr_values = <attr_t*>mem.alloc(length, sizeof(attr_t))
-    for i, token in enumerate(doc_or_span):
+    for i, token in enumerate(doclike):
         for name, index in extensions.items():
             value = token._.get(name)
             if isinstance(value, basestring):
@@ -298,7 +298,7 @@ cdef find_matches(TokenPatternC** patterns, int n, object doclike, int lengt
         for j in range(n):
             states.push_back(PatternStateC(patterns[j], i, 0))
         transition_states(states, matches, predicate_cache,
-                          doc_or_span[i], extra_attr_values, predicates)
+                          doclike[i], extra_attr_values, predicates)
         extra_attr_values += nr_extra_attr
         predicate_cache += len(predicates)
     # Handle matches that end in 0-width patterns
```
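With the `doclike` change above, the matcher accepts a `Span` as well as a `Doc`, computing the match length from `doclike.end - doclike.start`. Assuming a spaCy v2.3-style install, a short usage sketch:

```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")
doc = nlp("the quick brown fox jumps over the lazy dog")

matcher = Matcher(nlp.vocab)
matcher.add("BROWN_FOX", [[{"LOWER": "brown"}, {"LOWER": "fox"}]])

# Both calls are valid; matching over a Span only inspects that slice,
# and the offsets come back relative to the input that was passed in.
print(matcher(doc))        # e.g. [(match_id, 2, 4)]
print(matcher(doc[2:5]))   # e.g. [(match_id, 0, 2)]
```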
```diff
@@ -112,6 +112,7 @@ def ga_tokenizer():
 def gu_tokenizer():
     return get_lang_class("gu").Defaults.create_tokenizer()
 
+
 @pytest.fixture(scope="session")
 def he_tokenizer():
     return get_lang_class("he").Defaults.create_tokenizer()
@@ -246,7 +247,9 @@ def yo_tokenizer():
 
 @pytest.fixture(scope="session")
 def zh_tokenizer_char():
-    return get_lang_class("zh").Defaults.create_tokenizer(config={"use_jieba": False, "use_pkuseg": False})
+    return get_lang_class("zh").Defaults.create_tokenizer(
+        config={"use_jieba": False, "use_pkuseg": False}
+    )
 
 
 @pytest.fixture(scope="session")
@@ -258,7 +261,9 @@ def zh_tokenizer_jieba():
 @pytest.fixture(scope="session")
 def zh_tokenizer_pkuseg():
     pytest.importorskip("pkuseg")
-    return get_lang_class("zh").Defaults.create_tokenizer(config={"pkuseg_model": "default", "use_jieba": False, "use_pkuseg": True})
+    return get_lang_class("zh").Defaults.create_tokenizer(
+        config={"pkuseg_model": "default", "use_jieba": False, "use_pkuseg": True}
+    )
 
 
 @pytest.fixture(scope="session")
```
```diff
@@ -50,7 +50,9 @@ def test_create_from_words_and_text(vocab):
     assert [t.text for t in doc] == [" ", "'", "dogs", "'", "\n\n", "run", " "]
     assert [t.whitespace_ for t in doc] == ["", "", "", "", "", " ", ""]
     assert doc.text == text
-    assert [t.text for t in doc if not t.text.isspace()] == [word for word in words if not word.isspace()]
+    assert [t.text for t in doc if not t.text.isspace()] == [
+        word for word in words if not word.isspace()
+    ]
 
     # partial whitespace in words
     words = [" ", "'", "dogs", "'", "\n\n", "run", " "]
@@ -60,7 +62,9 @@ def test_create_from_words_and_text(vocab):
     assert [t.text for t in doc] == [" ", "'", "dogs", "'", "\n\n", "run", " "]
     assert [t.whitespace_ for t in doc] == ["", "", "", "", "", " ", ""]
     assert doc.text == text
-    assert [t.text for t in doc if not t.text.isspace()] == [word for word in words if not word.isspace()]
+    assert [t.text for t in doc if not t.text.isspace()] == [
+        word for word in words if not word.isspace()
+    ]
 
     # non-standard whitespace tokens
     words = [" ", " ", "'", "dogs", "'", "\n\n", "run"]
@@ -70,7 +74,9 @@ def test_create_from_words_and_text(vocab):
     assert [t.text for t in doc] == [" ", "'", "dogs", "'", "\n\n", "run", " "]
     assert [t.whitespace_ for t in doc] == ["", "", "", "", "", " ", ""]
     assert doc.text == text
-    assert [t.text for t in doc if not t.text.isspace()] == [word for word in words if not word.isspace()]
+    assert [t.text for t in doc if not t.text.isspace()] == [
+        word for word in words if not word.isspace()
+    ]
 
     # mismatch between words and text
     with pytest.raises(ValueError):
```
```diff
@@ -181,6 +181,7 @@ def test_is_sent_start(en_tokenizer):
     doc.is_parsed = True
     assert len(list(doc.sents)) == 2
 
+
 def test_is_sent_end(en_tokenizer):
     doc = en_tokenizer("This is a sentence. This is another.")
     assert doc[4].is_sent_end is None
@@ -213,6 +214,7 @@ def test_token0_has_sent_start_true():
     assert doc[1].is_sent_start is None
     assert not doc.is_sentenced
 
+
 def test_tokenlast_has_sent_end_true():
     doc = Doc(Vocab(), words=["hello", "world"])
     assert doc[0].is_sent_end is None
```
```diff
@@ -3,17 +3,16 @@ from __future__ import unicode_literals
 
 import pytest
 
+
 def test_gu_tokenizer_handlers_long_text(gu_tokenizer):
     text = """પશ્ચિમ ભારતમાં આવેલું ગુજરાત રાજ્ય જે વ્યક્તિઓની માતૃભૂમિ છે"""
     tokens = gu_tokenizer(text)
     assert len(tokens) == 9
 
+
 @pytest.mark.parametrize(
     "text,length",
-    [
-        ("ગુજરાતીઓ ખાવાના શોખીન માનવામાં આવે છે", 6),
-        ("ખેતરની ખેડ કરવામાં આવે છે.", 5),
-    ],
+    [("ગુજરાતીઓ ખાવાના શોખીન માનવામાં આવે છે", 6), ("ખેતરની ખેડ કરવામાં આવે છે.", 5)],
 )
 def test_gu_tokenizer_handles_cnts(gu_tokenizer, text, length):
     tokens = gu_tokenizer(text)
```
```diff
@@ -1,3 +1,4 @@
+# coding: utf8
 from __future__ import unicode_literals
 
 import pytest
```
```diff
@@ -1,3 +1,4 @@
+# coding: utf8
 from __future__ import unicode_literals
 
 import pytest
```
```diff
@@ -10,7 +10,16 @@ def test_ml_tokenizer_handles_long_text(ml_tokenizer):
     assert len(tokens) == 5
 
 
-@pytest.mark.parametrize("text,length", [("എന്നാൽ അച്ചടിയുടെ ആവിർഭാവം ലിപിയിൽ കാര്യമായ മാറ്റങ്ങൾ വരുത്തിയത് കൂട്ടക്ഷരങ്ങളെ അണുഅക്ഷരങ്ങളായി പിരിച്ചുകൊണ്ടായിരുന്നു", 10), ("പരമ്പരാഗതമായി മലയാളം ഇടത്തുനിന്ന് വലത്തോട്ടാണ് എഴുതുന്നത്", 5)])
+@pytest.mark.parametrize(
+    "text,length",
+    [
+        (
+            "എന്നാൽ അച്ചടിയുടെ ആവിർഭാവം ലിപിയിൽ കാര്യമായ മാറ്റങ്ങൾ വരുത്തിയത് കൂട്ടക്ഷരങ്ങളെ അണുഅക്ഷരങ്ങളായി പിരിച്ചുകൊണ്ടായിരുന്നു",
+            10,
+        ),
+        ("പരമ്പരാഗതമായി മലയാളം ഇടത്തുനിന്ന് വലത്തോട്ടാണ് എഴുതുന്നത്", 5),
+    ],
+)
 def test_ml_tokenizer_handles_cnts(ml_tokenizer, text, length):
     tokens = ml_tokenizer(text)
     assert len(tokens) == length
```
```diff
@@ -34,5 +34,15 @@ def test_zh_tokenizer_serialize_pkuseg(zh_tokenizer_pkuseg):
 
 @pytest.mark.slow
 def test_zh_tokenizer_serialize_pkuseg_with_processors(zh_tokenizer_pkuseg):
-    nlp = Chinese(meta={"tokenizer": {"config": {"use_jieba": False, "use_pkuseg": True, "pkuseg_model": "medicine"}}})
+    nlp = Chinese(
+        meta={
+            "tokenizer": {
+                "config": {
+                    "use_jieba": False,
+                    "use_pkuseg": True,
+                    "pkuseg_model": "medicine",
+                }
+            }
+        }
+    )
     zh_tokenizer_serialize(nlp.tokenizer)
```
```diff
@@ -43,12 +43,16 @@ def test_zh_tokenizer_pkuseg(zh_tokenizer_pkuseg, text, expected_tokens):
 def test_zh_tokenizer_pkuseg_user_dict(zh_tokenizer_pkuseg):
     user_dict = _get_pkuseg_trie_data(zh_tokenizer_pkuseg.pkuseg_seg.preprocesser.trie)
     zh_tokenizer_pkuseg.pkuseg_update_user_dict(["nonsense_asdf"])
-    updated_user_dict = _get_pkuseg_trie_data(zh_tokenizer_pkuseg.pkuseg_seg.preprocesser.trie)
+    updated_user_dict = _get_pkuseg_trie_data(
+        zh_tokenizer_pkuseg.pkuseg_seg.preprocesser.trie
+    )
     assert len(user_dict) == len(updated_user_dict) - 1
 
     # reset user dict
     zh_tokenizer_pkuseg.pkuseg_update_user_dict([], reset=True)
-    reset_user_dict = _get_pkuseg_trie_data(zh_tokenizer_pkuseg.pkuseg_seg.preprocesser.trie)
+    reset_user_dict = _get_pkuseg_trie_data(
+        zh_tokenizer_pkuseg.pkuseg_seg.preprocesser.trie
+    )
     assert len(reset_user_dict) == 0
 
 
```
```diff
@@ -272,8 +272,8 @@ def test_matcher_regex_shape(en_vocab):
         (">=", ["a"]),
         ("<=", ["aaa"]),
         (">", ["a", "aa"]),
-        ("<", ["aa", "aaa"])
-    ]
+        ("<", ["aa", "aaa"]),
+    ],
 )
 def test_matcher_compare_length(en_vocab, cmp, bad):
     matcher = Matcher(en_vocab)
```
@@ -106,7 +106,9 @@ def test_sentencizer_complex(en_vocab, words, sent_starts, sent_ends, n_sents):
         ),
     ],
 )
-def test_sentencizer_custom_punct(en_vocab, punct_chars, words, sent_starts, sent_ends, n_sents):
+def test_sentencizer_custom_punct(
+    en_vocab, punct_chars, words, sent_starts, sent_ends, n_sents
+):
     doc = Doc(en_vocab, words=words)
     sentencizer = Sentencizer(punct_chars=punct_chars)
     doc = sentencizer(doc)

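A minimal sketch of the Sentencizer API the reformatted test signature covers, using a blank English pipeline and illustrative text:

from spacy.lang.en import English
from spacy.pipeline import Sentencizer

nlp = English()
# Only "!" and "?" count as sentence boundaries here, so the trailing "." does not.
sentencizer = Sentencizer(punct_chars=["!", "?"])
doc = sentencizer(nlp.make_doc("Hello! How are you? Fine thanks."))
print([sent.text for sent in doc.sents])
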
@@ -56,9 +56,13 @@ def test_serialize_vocab_roundtrip_disk(strings1, strings2):
     assert strings1 == [s for s in vocab1_d.strings if s != "_SP"]
     assert strings2 == [s for s in vocab2_d.strings if s != "_SP"]
     if strings1 == strings2:
-        assert [s for s in vocab1_d.strings if s != "_SP"] == [s for s in vocab2_d.strings if s != "_SP"]
+        assert [s for s in vocab1_d.strings if s != "_SP"] == [
+            s for s in vocab2_d.strings if s != "_SP"
+        ]
     else:
-        assert [s for s in vocab1_d.strings if s != "_SP"] != [s for s in vocab2_d.strings if s != "_SP"]
+        assert [s for s in vocab1_d.strings if s != "_SP"] != [
+            s for s in vocab2_d.strings if s != "_SP"
+        ]


 @pytest.mark.parametrize("strings,lex_attr", test_strings_attrs)

@@ -76,7 +80,6 @@ def test_serialize_vocab_lex_attrs_bytes(strings, lex_attr):
 def test_deserialize_vocab_seen_entries(strings, lex_attr):
     # Reported in #2153
     vocab = Vocab(strings=strings)
-    length = len(vocab)
     vocab.from_bytes(vocab.to_bytes())
     assert len(vocab.strings) == len(strings) + 1  # adds _SP

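For reference, the serialization round trip these vocab hunks test, as a compact sketch; the reserved "_SP" string that the assertions filter out is re-added automatically on deserialization:

from spacy.vocab import Vocab

vocab = Vocab(strings=["apple", "orange"])
vocab2 = Vocab().from_bytes(vocab.to_bytes())
# The original strings survive the round trip (plus the reserved "_SP" entry).
assert all(s in vocab2.strings for s in ["apple", "orange"])
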
@@ -130,6 +133,7 @@ def test_serialize_stringstore_roundtrip_disk(strings1, strings2):
     else:
         assert list(sstore1_d) != list(sstore2_d)

+
 @pytest.mark.parametrize("strings,lex_attr", test_strings_attrs)
 def test_pickle_vocab(strings, lex_attr):
     vocab = Vocab(strings=strings)

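A hedged sketch of the pickling behaviour that test_pickle_vocab (whose first lines appear above) relies on; a standard pickle round trip is assumed here:

import pickle

from spacy.vocab import Vocab

vocab = Vocab(strings=["apple"])
vocab2 = pickle.loads(pickle.dumps(vocab))
assert "apple" in vocab2.strings
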
@@ -112,7 +112,7 @@ def test_gold_biluo_different_tokenization(en_vocab, en_tokenizer):
     data = (
         "I'll return the ₹54 amount",
         {
-            "words": ["I", "'ll", "return", "the", "₹", "54", "amount",],
+            "words": ["I", "'ll", "return", "the", "₹", "54", "amount"],
             "entities": [(16, 19, "MONEY")],
         },
     )

@@ -122,7 +122,7 @@ def test_gold_biluo_different_tokenization(en_vocab, en_tokenizer):
     data = (
         "I'll return the $54 amount",
         {
-            "words": ["I", "'ll", "return", "the", "$", "54", "amount",],
+            "words": ["I", "'ll", "return", "the", "$", "54", "amount"],
             "entities": [(16, 19, "MONEY")],
         },
     )

@@ -366,6 +366,7 @@ def test_vectors_serialize():
     assert row == row_r
     assert_equal(v.data, v_r.data)

+
 def test_vector_is_oov():
     vocab = Vocab(vectors_name="test_vocab_is_oov")
     data = numpy.ndarray((5, 3), dtype="f")

@@ -774,7 +774,7 @@ def get_words_and_spaces(words, text):
         except ValueError:
             raise ValueError(Errors.E194.format(text=text, words=words))
         if word_start > 0:
-            text_words.append(text[text_pos:text_pos+word_start])
+            text_words.append(text[text_pos : text_pos + word_start])
             text_spaces.append(False)
             text_pos += word_start
         text_words.append(word)

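The slice change above is whitespace-only; a quick check that both spellings are equivalent:

# PEP 8-style spacing around the slice bounds does not change the result.
text = "New York"
text_pos, word_start = 0, 3
assert text[text_pos:text_pos+word_start] == text[text_pos : text_pos + word_start] == "New"
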
@@ -2172,6 +2172,43 @@
             "model_uri = f'runs:/{my_run_id}/model'",
             "nlp2 = mlflow.spacy.load_model(model_uri=model_uri)"
         ]
+    },
+    {
+        "id": "pyate",
+        "title": "PyATE",
+        "slogan": "Python Automated Term Extraction",
+        "description": "PyATE is a term extraction library written in Python using Spacy POS tagging with Basic, Combo Basic, C-Value, TermExtractor, and Weirdness.",
+        "github": "kevinlu1248/pyate",
+        "pip": "pyate",
+        "code_example": [
+            "import spacy",
+            "from pyate.term_extraction_pipeline import TermExtractionPipeline",
+            "",
+            "nlp = spacy.load('en_core_web_sm')",
+            "nlp.add_pipe(TermExtractionPipeline())",
+            "# source: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1994795/",
+            "string = 'Central to the development of cancer are genetic changes that endow these “cancer cells” with many of the hallmarks of cancer, such as self-sufficient growth and resistance to anti-growth and pro-death signals. However, while the genetic changes that occur within cancer cells themselves, such as activated oncogenes or dysfunctional tumor suppressors, are responsible for many aspects of cancer development, they are not sufficient. Tumor promotion and progression are dependent on ancillary processes provided by cells of the tumor environment but that are not necessarily cancerous themselves. Inflammation has long been associated with the development of cancer. This review will discuss the reflexive relationship between cancer and inflammation with particular focus on how considering the role of inflammation in physiologic processes such as the maintenance of tissue homeostasis and repair may provide a logical framework for understanding the connection between the inflammatory response and cancer.'",
+            "",
+            "doc = nlp(string)",
+            "print(doc._.combo_basic.sort_values(ascending=False).head(5))",
+            "\"\"\"\"\"\"",
+            "dysfunctional tumor 1.443147",
+            "tumor suppressors 1.443147",
+            "genetic changes 1.386294",
+            "cancer cells 1.386294",
+            "dysfunctional tumor suppressors 1.298612",
+            "\"\"\"\"\"\""
+        ],
+        "code_language": "python",
+        "url": "https://github.com/kevinlu1248/pyate",
+        "author": "Kevin Lu",
+        "author_links": {
+            "twitter": "kevinlu1248",
+            "github": "kevinlu1248",
+            "website": "https://github.com/kevinlu1248/pyate"
+        },
+        "category": ["pipeline", "research"],
+        "tags": ["term_extraction"]
     }
 ],

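The code_example strings from the PyATE entry above, rendered as plain Python for readability (assumes the pyate package and the en_core_web_sm model are installed; the input text is shortened here):

import spacy
from pyate.term_extraction_pipeline import TermExtractionPipeline

nlp = spacy.load("en_core_web_sm")
nlp.add_pipe(TermExtractionPipeline())
# Shortened sample; the universe entry uses a longer passage from PMC1994795.
text = (
    "Central to the development of cancer are genetic changes that endow "
    "cancer cells with many of the hallmarks of cancer."
)
doc = nlp(text)
print(doc._.combo_basic.sort_values(ascending=False).head(5))
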