Merge branch 'master' into spacy.io

Ines Montani 2020-02-03 13:15:54 +01:00
commit 19dc77a738
19 changed files with 2257 additions and 50 deletions

.github/contributors/ceteri.md
@@ -0,0 +1,106 @@
# spaCy contributor agreement
This spaCy Contributor Agreement (**"SCA"**) is based on the
[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
The SCA applies to any contribution that you make to any product or project
managed by us (the **"project"**), and sets out the intellectual property rights
you grant to us in the contributed materials. The term **"us"** shall mean
[ExplosionAI GmbH](https://explosion.ai/legal). The term
**"you"** shall mean the person or entity identified below.
If you agree to be bound by these terms, fill in the information requested
below and include the filled-in version with your first pull request, under the
folder [`.github/contributors/`](/.github/contributors/). The name of the file
should be your GitHub username, with the extension `.md`. For example, the user
example_user would create the file `.github/contributors/example_user.md`.
Read this agreement carefully before signing. These terms and conditions
constitute a binding legal agreement.
## Contributor Agreement
1. The term "contribution" or "contributed materials" means any source code,
object code, patch, tool, sample, graphic, specification, manual,
documentation, or any other material posted or submitted by you to the project.
2. With respect to any worldwide copyrights, or copyright applications and
registrations, in your contribution:
* you hereby assign to us joint ownership, and to the extent that such
assignment is or becomes invalid, ineffective or unenforceable, you hereby
grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
royalty-free, unrestricted license to exercise all rights under those
copyrights. This includes, at our option, the right to sublicense these same
rights to third parties through multiple levels of sublicensees or other
licensing arrangements;
* you agree that each of us can do all things in relation to your
contribution as if each of us were the sole owners, and if one of us makes
a derivative work of your contribution, the one who makes the derivative
work (or has it made) will be the sole owner of that derivative work;
* you agree that you will not assert any moral rights in your contribution
against us, our licensees or transferees;
* you agree that we may register a copyright in your contribution and
exercise all ownership rights associated with it; and
* you agree that neither of us has any duty to consult with, obtain the
consent of, pay or render an accounting to the other for any use or
distribution of your contribution.
3. With respect to any patents you own, or that you can license without payment
to any third party, you hereby grant to us a perpetual, irrevocable,
non-exclusive, worldwide, no-charge, royalty-free license to:
* make, have made, use, sell, offer to sell, import, and otherwise transfer
your contribution in whole or in part, alone or in combination with or
included in any product, work or materials arising out of the project to
which your contribution was submitted, and
* at our option, to sublicense these same rights to third parties through
multiple levels of sublicensees or other licensing arrangements.
4. Except as set out above, you keep all right, title, and interest in your
contribution. The rights that you grant to us under these terms are effective
on the date you first submitted a contribution to us, even if your submission
took place before the date you sign these terms.
5. You covenant, represent, warrant and agree that:
* Each contribution that you submit is and shall be an original work of
authorship and you can legally grant the rights set out in this SCA;
* to the best of your knowledge, each contribution will not violate any
third party's copyrights, trademarks, patents, or other intellectual
property rights; and
* each contribution shall be in compliance with U.S. export control laws and
other applicable export and import laws. You agree to notify us if you
become aware of any circumstance which would make any of the foregoing
representations inaccurate in any respect. We may publicly disclose your
participation in the project, including the fact that you have signed the SCA.
6. This SCA is governed by the laws of the State of California and applicable
U.S. Federal law. Any choice of law rules will not apply.
7. Please place an “x” on one of the applicable statements below. Please do NOT
mark both statements:
* [ ] I am signing on behalf of myself as an individual and no other person
or entity, including my employer, has or will have rights with respect to my
contributions.
* [x] I am signing on behalf of my employer or a legal entity and I have the
actual authority to contractually bind that entity.
## Contributor Details
| Field | Entry |
|------------------------------- | ---------------------- |
| Name | Paco Nathan |
| Company name (if applicable) | Derwen, Inc. |
| Title or role (if applicable) | Managing Partner |
| Date | 2020-01-25 |
| GitHub username | ceteri |
| Website (optional) | https://derwen.ai/paco |

.github/contributors/drndos.md
@@ -0,0 +1,106 @@
# spaCy contributor agreement
This spaCy Contributor Agreement (**"SCA"**) is based on the
[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
The SCA applies to any contribution that you make to any product or project
managed by us (the **"project"**), and sets out the intellectual property rights
you grant to us in the contributed materials. The term **"us"** shall mean
[ExplosionAI GmbH](https://explosion.ai/legal). The term
**"you"** shall mean the person or entity identified below.
If you agree to be bound by these terms, fill in the information requested
below and include the filled-in version with your first pull request, under the
folder [`.github/contributors/`](/.github/contributors/). The name of the file
should be your GitHub username, with the extension `.md`. For example, the user
example_user would create the file `.github/contributors/example_user.md`.
Read this agreement carefully before signing. These terms and conditions
constitute a binding legal agreement.
## Contributor Agreement
1. The term "contribution" or "contributed materials" means any source code,
object code, patch, tool, sample, graphic, specification, manual,
documentation, or any other material posted or submitted by you to the project.
2. With respect to any worldwide copyrights, or copyright applications and
registrations, in your contribution:
* you hereby assign to us joint ownership, and to the extent that such
assignment is or becomes invalid, ineffective or unenforceable, you hereby
grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
royalty-free, unrestricted license to exercise all rights under those
copyrights. This includes, at our option, the right to sublicense these same
rights to third parties through multiple levels of sublicensees or other
licensing arrangements;
* you agree that each of us can do all things in relation to your
contribution as if each of us were the sole owners, and if one of us makes
a derivative work of your contribution, the one who makes the derivative
work (or has it made) will be the sole owner of that derivative work;
* you agree that you will not assert any moral rights in your contribution
against us, our licensees or transferees;
* you agree that we may register a copyright in your contribution and
exercise all ownership rights associated with it; and
* you agree that neither of us has any duty to consult with, obtain the
consent of, pay or render an accounting to the other for any use or
distribution of your contribution.
3. With respect to any patents you own, or that you can license without payment
to any third party, you hereby grant to us a perpetual, irrevocable,
non-exclusive, worldwide, no-charge, royalty-free license to:
* make, have made, use, sell, offer to sell, import, and otherwise transfer
your contribution in whole or in part, alone or in combination with or
included in any product, work or materials arising out of the project to
which your contribution was submitted, and
* at our option, to sublicense these same rights to third parties through
multiple levels of sublicensees or other licensing arrangements.
4. Except as set out above, you keep all right, title, and interest in your
contribution. The rights that you grant to us under these terms are effective
on the date you first submitted a contribution to us, even if your submission
took place before the date you sign these terms.
5. You covenant, represent, warrant and agree that:
* Each contribution that you submit is and shall be an original work of
authorship and you can legally grant the rights set out in this SCA;
* to the best of your knowledge, each contribution will not violate any
third party's copyrights, trademarks, patents, or other intellectual
property rights; and
* each contribution shall be in compliance with U.S. export control laws and
other applicable export and import laws. You agree to notify us if you
become aware of any circumstance which would make any of the foregoing
representations inaccurate in any respect. We may publicly disclose your
participation in the project, including the fact that you have signed the SCA.
6. This SCA is governed by the laws of the State of California and applicable
U.S. Federal law. Any choice of law rules will not apply.
7. Please place an “x” on one of the applicable statements below. Please do NOT
mark both statements:
* [ ] I am signing on behalf of myself as an individual and no other person
or entity, including my employer, has or will have rights with respect to my
contributions.
* [x] I am signing on behalf of my employer or a legal entity and I have the
actual authority to contractually bind that entity.
## Contributor Details
| Field | Entry |
|------------------------------- | -------------------- |
| Name | Filip Bednárik |
| Company name (if applicable) | Ardevop, s. r. o. |
| Title or role (if applicable) | IT Consultant |
| Date | 2020-01-26 |
| GitHub username | drndos |
| Website (optional) | https://ardevop.sk |

.github/contributors/onlyanegg.md
@@ -0,0 +1,106 @@
# spaCy contributor agreement
This spaCy Contributor Agreement (**"SCA"**) is based on the
[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
The SCA applies to any contribution that you make to any product or project
managed by us (the **"project"**), and sets out the intellectual property rights
you grant to us in the contributed materials. The term **"us"** shall mean
[ExplosionAI GmbH](https://explosion.ai/legal). The term
**"you"** shall mean the person or entity identified below.
If you agree to be bound by these terms, fill in the information requested
below and include the filled-in version with your first pull request, under the
folder [`.github/contributors/`](/.github/contributors/). The name of the file
should be your GitHub username, with the extension `.md`. For example, the user
example_user would create the file `.github/contributors/example_user.md`.
Read this agreement carefully before signing. These terms and conditions
constitute a binding legal agreement.
## Contributor Agreement
1. The term "contribution" or "contributed materials" means any source code,
object code, patch, tool, sample, graphic, specification, manual,
documentation, or any other material posted or submitted by you to the project.
2. With respect to any worldwide copyrights, or copyright applications and
registrations, in your contribution:
* you hereby assign to us joint ownership, and to the extent that such
assignment is or becomes invalid, ineffective or unenforceable, you hereby
grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
royalty-free, unrestricted license to exercise all rights under those
copyrights. This includes, at our option, the right to sublicense these same
rights to third parties through multiple levels of sublicensees or other
licensing arrangements;
* you agree that each of us can do all things in relation to your
contribution as if each of us were the sole owners, and if one of us makes
a derivative work of your contribution, the one who makes the derivative
work (or has it made) will be the sole owner of that derivative work;
* you agree that you will not assert any moral rights in your contribution
against us, our licensees or transferees;
* you agree that we may register a copyright in your contribution and
exercise all ownership rights associated with it; and
* you agree that neither of us has any duty to consult with, obtain the
consent of, pay or render an accounting to the other for any use or
distribution of your contribution.
3. With respect to any patents you own, or that you can license without payment
to any third party, you hereby grant to us a perpetual, irrevocable,
non-exclusive, worldwide, no-charge, royalty-free license to:
* make, have made, use, sell, offer to sell, import, and otherwise transfer
your contribution in whole or in part, alone or in combination with or
included in any product, work or materials arising out of the project to
which your contribution was submitted, and
* at our option, to sublicense these same rights to third parties through
multiple levels of sublicensees or other licensing arrangements.
4. Except as set out above, you keep all right, title, and interest in your
contribution. The rights that you grant to us under these terms are effective
on the date you first submitted a contribution to us, even if your submission
took place before the date you sign these terms.
5. You covenant, represent, warrant and agree that:
- Each contribution that you submit is and shall be an original work of
authorship and you can legally grant the rights set out in this SCA;
- to the best of your knowledge, each contribution will not violate any
third party's copyrights, trademarks, patents, or other intellectual
property rights; and
- each contribution shall be in compliance with U.S. export control laws and
other applicable export and import laws. You agree to notify us if you
become aware of any circumstance which would make any of the foregoing
representations inaccurate in any respect. We may publicly disclose your
participation in the project, including the fact that you have signed the SCA.
6. This SCA is governed by the laws of the State of California and applicable
U.S. Federal law. Any choice of law rules will not apply.
7. Please place an “x” on one of the applicable statements below. Please do NOT
mark both statements:
* [x] I am signing on behalf of myself as an individual and no other person
or entity, including my employer, has or will have rights with respect to my
contributions.
* [ ] I am signing on behalf of my employer or a legal entity and I have the
actual authority to contractually bind that entity.
## Contributor Details
| Field | Entry |
| ----------------------------- | ---------------- |
| Name | Tyler Couto |
| Company name (if applicable) | |
| Title or role (if applicable) | |
| Date | January 29, 2020 |
| GitHub username | onlyanegg |
| Website (optional) | |

@@ -50,6 +50,7 @@ install_requires =
srsly>=0.1.0,<1.1.0
catalogue>=0.0.7,<1.1.0
# Third-party dependencies
tqdm>=4.38.0,<5.0.0
setuptools
numpy>=1.15.0
plac>=0.9.6,<1.2.0

@@ -70,7 +70,7 @@ def read_conllx(input_data, use_morphology=False, n=0):
                    continue
                try:
                    id_ = int(id_) - 1
                    head = (int(head) - 1) if head != "0" else id_
                    head = (int(head) - 1) if head not in ["0", "_"] else id_
                    dep = "ROOT" if dep == "root" else dep
                    tag = pos if tag == "_" else tag
                    tag = tag + "__" + morph if use_morphology else tag
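For illustration only (not part of the commit), a minimal sketch of what the changed conditional does: a HEAD value of `"0"` (root) or `"_"` (unannotated) now falls back to the token's own index, where the old line tripped over `int("_")`.

```python
# Sketch: the fallback applied to a root token, an unannotated token and a
# normal head reference (0-based indices after the conversion).
for id_, head in [("2", "0"), ("3", "_"), ("4", "2")]:
    id_ = int(id_) - 1
    head = (int(head) - 1) if head not in ["0", "_"] else id_
    print(id_, head)  # -> 1 1, then 2 2, then 3 1
```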

@@ -694,6 +694,11 @@ cdef class GoldParse:
        self.cats = {} if cats is None else dict(cats)
        self.links = links
        # orig_annot is used as an iterator in `nlp.evaluate` even if
        # self.length == 0, so set it to an empty list to avoid errors.
        # If self.length > 0, it is modified later.
        self.orig_annot = []
        # avoid allocating memory if the doc does not contain any tokens
        if self.length > 0:
            if words is None:

@@ -2,13 +2,18 @@
from __future__ import unicode_literals
from .stop_words import STOP_WORDS
from .tag_map import TAG_MAP
from .lex_attrs import LEX_ATTRS
from ...language import Language
from ...attrs import LANG
class SlovakDefaults(Language.Defaults):
    lex_attr_getters = dict(Language.Defaults.lex_attr_getters)
    lex_attr_getters.update(LEX_ATTRS)
    lex_attr_getters[LANG] = lambda text: "sk"
    tag_map = TAG_MAP
    stop_words = STOP_WORDS
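A short usage sketch, assuming a spaCy install that includes this Slovak language data; `spacy.blank("sk")` builds a pipeline from the defaults above, so the stop words and the new `LIKE_NUM` attribute become available on tokens.

```python
import spacy

# Blank Slovak pipeline built from SlovakDefaults.
nlp = spacy.blank("sk")
doc = nlp("Dnes je nedeľa a mám 100 eur.")
# is_stop comes from STOP_WORDS, like_num from the new LEX_ATTRS.
print([(token.text, token.is_stop, token.like_num) for token in doc])
```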

spacy/lang/sk/examples.py
@@ -0,0 +1,27 @@
# coding: utf8
from __future__ import unicode_literals
"""
Example sentences to test spaCy and its language models.
>>> from spacy.lang.sk.examples import sentences
>>> docs = nlp.pipe(sentences)
"""
sentences = [
"Ardevop, s.r.o. je malá startup firma na území SR.",
"Samojazdiace autá presúvajú poistnú zodpovednosť na výrobcov automobilov.",
"Košice sú na východe.",
"Bratislava je hlavné mesto Slovenskej republiky.",
"Kde si?",
"Kto je prezidentom Francúzska?",
"Aké je hlavné mesto Slovenska?",
"Kedy sa narodil Andrej Kiska?",
"Včera som dostal 100€ na ruku.",
"Dnes je nedeľa 26.1.2020.",
"Narodil sa 15.4.1998 v Ružomberku.",
"Niekto mi povedal, že 500 eur je veľa peňazí.",
"Podaj mi ruku!",
]
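As the docstring above suggests, the sentences are meant to be piped through an `nlp` object; a minimal sketch with a blank Slovak pipeline (assumed setup, not part of the commit):

```python
import spacy
from spacy.lang.sk.examples import sentences

nlp = spacy.blank("sk")
for doc in nlp.pipe(sentences):
    # Print the first token and the token count of each example sentence.
    print(doc[0].text, len(doc))
```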

@@ -0,0 +1,62 @@
# coding: utf8
from __future__ import unicode_literals
from ...attrs import LIKE_NUM
_num_words = [
"nula",
"jeden",
"dva",
"tri",
"štyri",
"päť",
"šesť",
"sedem",
"osem",
"deväť",
"desať",
"jedenásť",
"dvanásť",
"trinásť",
"štrnásť",
"pätnásť",
"šestnásť",
"sedemnásť",
"osemnásť",
"devätnásť",
"dvadsať",
"tridsať",
"štyridsať",
"päťdesiat",
"šesťdesiat",
"sedemdesiat",
"osemdesiat",
"deväťdesiat",
"sto",
"tisíc",
"milión",
"miliarda",
"bilión",
"biliarda",
"trilión",
"triliarda",
"kvadrilión",
]
def like_num(text):
    if text.startswith(("+", "-", "±", "~")):
        text = text[1:]
    text = text.replace(",", "").replace(".", "")
    if text.isdigit():
        return True
    if text.count("/") == 1:
        num, denom = text.split("/")
        if num.isdigit() and denom.isdigit():
            return True
    if text.lower() in _num_words:
        return True
    return False


LEX_ATTRS = {LIKE_NUM: like_num}
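A few quick checks of `like_num` (a sketch; the import path assumes the module lives at `spacy/lang/sk/lex_attrs.py`, which is where the `SlovakDefaults` change above imports it from):

```python
from spacy.lang.sk.lex_attrs import like_num

assert like_num("100")            # plain digits
assert like_num("-3,5")           # leading sign stripped, separators removed
assert like_num("3/4")            # simple fractions
assert like_num("päťdesiat")      # Slovak number words
assert not like_num("Bratislava")
```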

@@ -2,7 +2,7 @@
from __future__ import unicode_literals
# Source: https://github.com/stopwords-iso/stopwords-sk
# Source: https://github.com/Ardevop-sk/stopwords-sk
STOP_WORDS = set(
"""
@@ -10,17 +10,41 @@ a
aby
aj
ak
akej
akejže
ako
akom
akomže
akou
akouže
akože
aká
akáže
aké
akého
akéhože
akému
akémuže
akéže
akú
akúže
aký
akých
akýchže
akým
akými
akýmiže
akýmže
akýže
ale
alebo
and
ani
asi
avšak
ba
bez
bezo
bol
bola
boli
@@ -31,23 +55,32 @@ budeme
budete
budeš
budú
buď
by
byť
cez
cezo
dnes
do
ešte
for
ho
hoci
i
iba
ich
im
inej
inom
iná
iné
iného
inému
iní
inú
iný
iných
iným
inými
ja
je
jeho
@@ -56,80 +89,185 @@ jemu
ju
k
kam
kamže
každou
každá
každé
každého
každému
každí
každú
každý
každých
každým
každými
kde
kedže
kej
kejže
keď
keďže
kie
kieho
kiehože
kiemu
kiemuže
kieže
koho
kom
komu
kou
kouže
kto
ktorej
ktorou
ktorá
ktoré
ktorí
ktorú
ktorý
ktorých
ktorým
ktorými
ku
káže
kéže
kúže
kýho
kýhože
kým
kýmu
kýmuže
kýže
lebo
leda
ledaže
len
ma
majú
mal
mala
mali
mať
medzi
menej
mi
mna
mne
mnou
moja
moje
mojej
mojich
mojim
mojimi
mojou
moju
možno
mu
musia
musieť
musí
musím
musíme
musíte
musíš
my
mám
máme
máte
máš
môcť
môj
môjho
môže
môžem
môžeme
môžete
môžeš
môžu
mňa
na
nad
nado
najmä
nami
naša
naše
našej
naši
našich
našim
našimi
našou
ne
nech
neho
nej
nejakej
nejakom
nejakou
nejaká
nejaké
nejakého
nejakému
nejakú
nejaký
nejakých
nejakým
nejakými
nemu
než
nich
nie
niektorej
niektorom
niektorou
niektorá
niektoré
niektorého
niektorému
niektorú
niektorý
niektorých
niektorým
niektorými
nielen
niečo
nim
nimi
nič
ničoho
ničom
ničomu
ničím
no
nová
nové
noví
nový
nám
nás
náš
nášho
ním
o
od
odo
of
on
ona
oni
ono
ony
oňho
po
pod
podo
podľa
pokiaľ
popod
popri
potom
poza
pre
pred
predo
@@ -137,42 +275,56 @@ preto
pretože
prečo
pri
prvá
prvé
prví
prvý
práve
pýta
s
sa
seba
sebe
sebou
sem
si
sme
so
som
späť
ste
svoj
svoja
svoje
svojho
svojich
svojim
svojimi
svojou
svoju
svojím
svojími
ta
tak
takej
takejto
taká
takáto
také
takého
takéhoto
takému
takémuto
takéto
takí
takú
takúto
taký
takýto
takže
tam
te
teba
tebe
tebou
teda
tej
tejto
ten
tento
the
ti
tie
tieto
@@ -180,52 +332,97 @@ tiež
to
toho
tohoto
tohto
tom
tomto
tomu
tomuto
toto
tou
touto
tu
tvoj
tvojími
tvoja
tvoje
tvojej
tvojho
tvoji
tvojich
tvojim
tvojimi
tvojím
ty
táto
títo
túto
tých
tým
tými
týmto
u
v
vami
vaša
vaše
vašej
vaši
vašich
vašim
vaším
veď
viac
vo
vy
vám
vás
váš
vášho
však
všetci
všetka
všetko
všetky
všetok
z
za
začo
začože
zo
a
áno
čej
či
čia
čie
čieho
čiemu
čiu
čo
čoho
čom
čomu
čou
čože
čí
čím
čími
ďalšia
ďalšie
ďalšieho
ďalšiemu
ďalšiu
ďalšom
ďalšou
ďalší
ďalších
ďalším
ďalšími
ňom
ňou
ňu
že
""".split()
)

spacy/lang/sk/tag_map.py
File diff suppressed because it is too large (+1467 lines).

@@ -1492,20 +1492,21 @@ class Sentencizer(object):
             return guesses
         guesses = []
         for doc in docs:
-            start = 0
-            seen_period = False
             doc_guesses = [False] * len(doc)
-            doc_guesses[0] = True
-            for i, token in enumerate(doc):
-                is_in_punct_chars = token.text in self.punct_chars
-                if seen_period and not token.is_punct and not is_in_punct_chars:
+            if len(doc) > 0:
+                start = 0
+                seen_period = False
+                doc_guesses[0] = True
+                for i, token in enumerate(doc):
+                    is_in_punct_chars = token.text in self.punct_chars
+                    if seen_period and not token.is_punct and not is_in_punct_chars:
+                        doc_guesses[start] = True
+                        start = token.i
+                        seen_period = False
+                    elif is_in_punct_chars:
+                        seen_period = True
+                if start < len(doc):
+                    doc_guesses[start] = True
-                    start = token.i
-                    seen_period = False
-                elif is_in_punct_chars:
-                    seen_period = True
-            if start < len(doc):
-                doc_guesses[start] = True
             guesses.append(doc_guesses)
         return guesses
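The new guard matters for zero-length docs, as exercised by the new test below; a minimal sketch of the fixed behaviour (spaCy v2 API, matching the tests in this commit):

```python
import spacy

nlp = spacy.blank("en")
nlp.add_pipe(nlp.create_pipe("sentencizer"))

# An empty doc no longer hits doc_guesses[0] on a zero-length list; it simply
# comes back with no sentence-start guesses.
doc = nlp("")
print(len(doc), [t.is_sent_start for t in doc])  # 0 []
```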

@@ -29,6 +29,22 @@ def test_sentencizer_pipe():
        assert len(list(doc.sents)) == 2


def test_sentencizer_empty_docs():
    one_empty_text = [""]
    many_empty_texts = ["", "", ""]
    some_empty_texts = ["hi", "", "This is a test. Here are two sentences.", ""]
    nlp = English()
    nlp.add_pipe(nlp.create_pipe("sentencizer"))
    for texts in [one_empty_text, many_empty_texts, some_empty_texts]:
        for doc in nlp.pipe(texts):
            assert doc.is_sentenced
            sent_starts = [t.is_sent_start for t in doc]
            if len(doc) == 0:
                assert sent_starts == []
            else:
                assert len(sent_starts) > 0


@pytest.mark.parametrize(
    "words,sent_starts,n_sents",
    [

@@ -0,0 +1,31 @@
from spacy.cli.converters.conllu2json import conllu2json
input_data = """
1 [ _ PUNCT -LRB- _ _ punct _ _
2 This _ DET DT _ _ det _ _
3 killing _ NOUN NN _ _ nsubj _ _
4 of _ ADP IN _ _ case _ _
5 a _ DET DT _ _ det _ _
6 respected _ ADJ JJ _ _ amod _ _
7 cleric _ NOUN NN _ _ nmod _ _
8 will _ AUX MD _ _ aux _ _
9 be _ AUX VB _ _ aux _ _
10 causing _ VERB VBG _ _ root _ _
11 us _ PRON PRP _ _ iobj _ _
12 trouble _ NOUN NN _ _ dobj _ _
13 for _ ADP IN _ _ case _ _
14 years _ NOUN NNS _ _ nmod _ _
15 to _ PART TO _ _ mark _ _
16 come _ VERB VB _ _ acl _ _
17 . _ PUNCT . _ _ punct _ _
18 ] _ PUNCT -RRB- _ _ punct _ _
"""
def test_issue4665():
    """
    conllu2json should not raise an exception if the HEAD column contains an
    underscore
    """
    conllu2json(input_data)

@@ -0,0 +1,16 @@
# coding: utf8
from __future__ import unicode_literals
import pytest
import spacy
@pytest.fixture
def nlp():
    return spacy.blank("en")


def test_evaluate(nlp):
    docs_golds = [("", {})]
    nlp.evaluate(docs_golds)

@@ -372,7 +372,7 @@ $ python -m spacy train [lang] [output_path] [train_path] [dev_path]
| `--n-iter`, `-n` | option | Number of iterations (default: `30`). |
| `--n-early-stopping`, `-ne` | option | Maximum number of training epochs without dev accuracy improvement. |
| `--n-examples`, `-ns` | option | Number of examples to use (defaults to `0` for all examples). |
| `--use-gpu`, `-g` | option | Whether to use GPU. Can be either `0`, `1` or `-1`. |
| `--use-gpu`, `-g` | option | GPU ID or `-1` for CPU only (default: `-1`). |
| `--version`, `-V` | option | Model version. Will be written out to the model's `meta.json` after training. |
| `--meta-path`, `-m` <Tag variant="new">2</Tag> | option | Optional path to model [`meta.json`](/usage/training#models-generating). All relevant properties like `lang`, `pipeline` and `spacy_version` will be overwritten. |
| `--init-tok2vec`, `-t2v` <Tag variant="new">2.1</Tag> | option | Path to pretrained weights for the token-to-vector parts of the models. See `spacy pretrain`. Experimental. |

@@ -38,7 +38,7 @@ be shown.
| Name | Type | Description |
| --------------------------------------- | --------------- | ------------------------------------------------------------------------------------------- |
| `vocab` | `Vocab` | The vocabulary object, which must be shared with the documents the matcher will operate on. |
| `max_length` | int | Deprecated argument - the `PhraseMatcher` does not have a phrase length limit anymore. |
| `max_length` | int | Deprecated argument - the `PhraseMatcher` does not have a phrase length limit anymore. |
| `attr` <Tag variant="new">2.1</Tag> | int / unicode | The token attribute to match on. Defaults to `ORTH`, i.e. the verbatim token text. |
| `validate` <Tag variant="new">2.1</Tag> | bool | Validate patterns added to the matcher. |
| **RETURNS** | `PhraseMatcher` | The newly constructed object. |
@@ -70,6 +70,18 @@ Find all token sequences matching the supplied patterns on the `Doc`.
| `doc` | `Doc` | The document to match over. |
| **RETURNS** | list | A list of `(match_id, start, end)` tuples, describing the matches. A match tuple describes a span `doc[start:end]`. The `match_id` is the ID of the added match pattern. |
<Infobox title="Note on retrieving the string representation of the match_id" variant="warning">
Because spaCy stores all strings as integers, the `match_id` you get back will
be an integer, too, but you can always get the string representation by looking
it up in the vocabulary's `StringStore`, i.e. `nlp.vocab.strings`:
```python
match_id_string = nlp.vocab.strings[match_id]
```
</Infobox>
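A slightly fuller sketch of that lookup (hypothetical pattern and text, spaCy v2 API):

```python
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.blank("en")
matcher = PhraseMatcher(nlp.vocab)
matcher.add("OBAMA", None, nlp("Barack Obama"))

doc = nlp("Barack Obama visited Berlin.")
for match_id, start, end in matcher(doc):
    # match_id is an integer hash; resolve it via the StringStore
    print(nlp.vocab.strings[match_id], doc[start:end].text)
```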
## PhraseMatcher.pipe {#pipe tag="method"}
Match a stream of documents, yielding them in turn.

@@ -5,8 +5,6 @@ next: /usage/spacy-101
menu:
- ['Feature Comparison', 'comparison']
- ['Benchmarks', 'benchmarks']
- ['Powered by spaCy', 'powered-by']
- ['Other Libraries', 'other-libraries']
---
## Feature comparison {#comparison}

@@ -1839,6 +1839,20 @@
"github": "microsoft"
}
},
{
"id": "presidio-research",
"title": "Presidio Research",
"slogan": "Toolbox for developing and evaluating PII detectors, NER models for PII and generating fake PII data",
"description": "This package features data-science related tasks for developing new recognizers for Microsoft Presidio. It is used for the evaluation of the entire system, as well as for evaluating specific PII recognizers or PII detection models. Anyone interested in evaluating an existing Microsoft Presidio instance, a specific PII recognizer or to develop new models or logic for detecting PII could leverage the preexisting work in this package. Additionally, anyone interested in generating new data based on previous datasets (e.g. to increase the coverage of entity values) for Named Entity Recognition models could leverage the data generator contained in this package.",
"url": "https://aka.ms/presidio-research",
"github": "microsoft/presidio-research",
"category": ["standalone"],
"thumb": "https://avatars0.githubusercontent.com/u/6154722",
"author": "Microsoft",
"author_links": {
"github": "microsoft"
}
},
{
"id": "python-sentence-boundary-disambiguation",
"title": "pySBD - python Sentence Boundary Disambiguation",
@@ -1903,6 +1917,43 @@
"twitter": "PatadiaYash",
"github": "yash1994"
}
},
{
"id": "spacy-pytextrank",
"title": "PyTextRank",
"slogan": "Py impl of TextRank for lightweight phrase extraction",
"description": "An implementation of TextRank in Python for use in spaCy pipelines which provides fast, effective phrase extraction from texts, along with extractive summarization. The graph algorithm works independent of a specific natural language and does not require domain knowledge. See (Mihalcea 2004) https://web.eecs.umich.edu/~mihalcea/papers/mihalcea.emnlp04.pdf",
"github": "DerwenAI/pytextrank",
"pip": "pytextrank",
"code_example": [
"import spacy",
"import pytextrank",
"",
"nlp = spacy.load('en_core_web_sm')",
"",
"tr = pytextrank.TextRank()",
"nlp.add_pipe(tr.PipelineComponent, name='textrank', last=True)",
"",
"text = 'Compatibility of systems of linear constraints over the set of natural numbers. Criteria of compatibility of a system of linear Diophantine equations, strict inequations, and nonstrict inequations are considered.'",
"doc = nlp(text)",
"",
"# examine the top-ranked phrases in the document",
"for p in doc._.phrases:",
" print('{:.4f} {:5d} {}'.format(p.rank, p.count, p.text))",
" print(p.chunks)"
],
"code_language": "python",
"url": "https://github.com/DerwenAI/pytextrank/wiki",
"thumb": "https://memegenerator.net/img/instances/66942896.jpg",
"image": "https://memegenerator.net/img/instances/66942896.jpg",
"author": "Paco Nathan",
"author_links": {
"twitter": "pacoid",
"github": "ceteri",
"website": "https://derwen.ai/paco"
},
"category": ["pipeline"],
"tags": ["phrase extraction", "ner", "summarization", "graph algorithms", "textrank"]
}
],