Mirror of https://github.com/explosion/spaCy.git, synced 2025-01-05 23:06:28 +03:00

Merge branch 'v2.x' into spacy.io

This commit is contained in commit cce428298b.
.github/contributors/jganseman.md (vendored, new file, 106 lines)

@@ -0,0 +1,106 @@
# spaCy contributor agreement

This spaCy Contributor Agreement (**"SCA"**) is based on the
[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
The SCA applies to any contribution that you make to any product or project
managed by us (the **"project"**), and sets out the intellectual property rights
you grant to us in the contributed materials. The term **"us"** shall mean
[ExplosionAI GmbH](https://explosion.ai/legal). The term
**"you"** shall mean the person or entity identified below.

If you agree to be bound by these terms, fill in the information requested
below and include the filled-in version with your first pull request, under the
folder [`.github/contributors/`](/.github/contributors/). The name of the file
should be your GitHub username, with the extension `.md`. For example, the user
example_user would create the file `.github/contributors/example_user.md`.

Read this agreement carefully before signing. These terms and conditions
constitute a binding legal agreement.

## Contributor Agreement

1. The term "contribution" or "contributed materials" means any source code,
object code, patch, tool, sample, graphic, specification, manual,
documentation, or any other material posted or submitted by you to the project.

2. With respect to any worldwide copyrights, or copyright applications and
registrations, in your contribution:

    * you hereby assign to us joint ownership, and to the extent that such
    assignment is or becomes invalid, ineffective or unenforceable, you hereby
    grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
    royalty-free, unrestricted license to exercise all rights under those
    copyrights. This includes, at our option, the right to sublicense these same
    rights to third parties through multiple levels of sublicensees or other
    licensing arrangements;

    * you agree that each of us can do all things in relation to your
    contribution as if each of us were the sole owners, and if one of us makes
    a derivative work of your contribution, the one who makes the derivative
    work (or has it made) will be the sole owner of that derivative work;

    * you agree that you will not assert any moral rights in your contribution
    against us, our licensees or transferees;

    * you agree that we may register a copyright in your contribution and
    exercise all ownership rights associated with it; and

    * you agree that neither of us has any duty to consult with, obtain the
    consent of, pay or render an accounting to the other for any use or
    distribution of your contribution.

3. With respect to any patents you own, or that you can license without payment
to any third party, you hereby grant to us a perpetual, irrevocable,
non-exclusive, worldwide, no-charge, royalty-free license to:

    * make, have made, use, sell, offer to sell, import, and otherwise transfer
    your contribution in whole or in part, alone or in combination with or
    included in any product, work or materials arising out of the project to
    which your contribution was submitted, and

    * at our option, to sublicense these same rights to third parties through
    multiple levels of sublicensees or other licensing arrangements.

4. Except as set out above, you keep all right, title, and interest in your
contribution. The rights that you grant to us under these terms are effective
on the date you first submitted a contribution to us, even if your submission
took place before the date you sign these terms.

5. You covenant, represent, warrant and agree that:

    * each contribution that you submit is and shall be an original work of
    authorship and you can legally grant the rights set out in this SCA;

    * to the best of your knowledge, each contribution will not violate any
    third party's copyrights, trademarks, patents, or other intellectual
    property rights; and

    * each contribution shall be in compliance with U.S. export control laws and
    other applicable export and import laws. You agree to notify us if you
    become aware of any circumstance which would make any of the foregoing
    representations inaccurate in any respect. We may publicly disclose your
    participation in the project, including the fact that you have signed the SCA.

6. This SCA is governed by the laws of the State of California and applicable
U.S. Federal law. Any choice of law rules will not apply.

7. Please place an “x” next to one of the applicable statements below. Please do
NOT mark both statements:

    * [x] I am signing on behalf of myself as an individual and no other person
    or entity, including my employer, has or will have rights with respect to my
    contributions.

    * [ ] I am signing on behalf of my employer or a legal entity and I have the
    actual authority to contractually bind that entity.

## Contributor Details

| Field                         | Entry            |
| ----------------------------- | ---------------- |
| Name                          | Joachim Ganseman |
| Company name (if applicable)  |                  |
| Title or role (if applicable) |                  |
| Date                          | 26/01/2021       |
| GitHub username               | jganseman        |
| Website (optional)            | www.ganseman.be  |
.github/contributors/jumasheff.md (vendored, new file, 106 lines)

@@ -0,0 +1,106 @@
# spaCy contributor agreement

This spaCy Contributor Agreement (**"SCA"**) is based on the
[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
The SCA applies to any contribution that you make to any product or project
managed by us (the **"project"**), and sets out the intellectual property rights
you grant to us in the contributed materials. The term **"us"** shall mean
[ExplosionAI GmbH](https://explosion.ai/legal). The term
**"you"** shall mean the person or entity identified below.

If you agree to be bound by these terms, fill in the information requested
below and include the filled-in version with your first pull request, under the
folder [`.github/contributors/`](/.github/contributors/). The name of the file
should be your GitHub username, with the extension `.md`. For example, the user
example_user would create the file `.github/contributors/example_user.md`.

Read this agreement carefully before signing. These terms and conditions
constitute a binding legal agreement.

## Contributor Agreement

1. The term "contribution" or "contributed materials" means any source code,
object code, patch, tool, sample, graphic, specification, manual,
documentation, or any other material posted or submitted by you to the project.

2. With respect to any worldwide copyrights, or copyright applications and
registrations, in your contribution:

    * you hereby assign to us joint ownership, and to the extent that such
    assignment is or becomes invalid, ineffective or unenforceable, you hereby
    grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
    royalty-free, unrestricted license to exercise all rights under those
    copyrights. This includes, at our option, the right to sublicense these same
    rights to third parties through multiple levels of sublicensees or other
    licensing arrangements;

    * you agree that each of us can do all things in relation to your
    contribution as if each of us were the sole owners, and if one of us makes
    a derivative work of your contribution, the one who makes the derivative
    work (or has it made) will be the sole owner of that derivative work;

    * you agree that you will not assert any moral rights in your contribution
    against us, our licensees or transferees;

    * you agree that we may register a copyright in your contribution and
    exercise all ownership rights associated with it; and

    * you agree that neither of us has any duty to consult with, obtain the
    consent of, pay or render an accounting to the other for any use or
    distribution of your contribution.

3. With respect to any patents you own, or that you can license without payment
to any third party, you hereby grant to us a perpetual, irrevocable,
non-exclusive, worldwide, no-charge, royalty-free license to:

    * make, have made, use, sell, offer to sell, import, and otherwise transfer
    your contribution in whole or in part, alone or in combination with or
    included in any product, work or materials arising out of the project to
    which your contribution was submitted, and

    * at our option, to sublicense these same rights to third parties through
    multiple levels of sublicensees or other licensing arrangements.

4. Except as set out above, you keep all right, title, and interest in your
contribution. The rights that you grant to us under these terms are effective
on the date you first submitted a contribution to us, even if your submission
took place before the date you sign these terms.

5. You covenant, represent, warrant and agree that:

    * each contribution that you submit is and shall be an original work of
    authorship and you can legally grant the rights set out in this SCA;

    * to the best of your knowledge, each contribution will not violate any
    third party's copyrights, trademarks, patents, or other intellectual
    property rights; and

    * each contribution shall be in compliance with U.S. export control laws and
    other applicable export and import laws. You agree to notify us if you
    become aware of any circumstance which would make any of the foregoing
    representations inaccurate in any respect. We may publicly disclose your
    participation in the project, including the fact that you have signed the SCA.

6. This SCA is governed by the laws of the State of California and applicable
U.S. Federal law. Any choice of law rules will not apply.

7. Please place an “x” next to one of the applicable statements below. Please do
NOT mark both statements:

    * [x] I am signing on behalf of myself as an individual and no other person
    or entity, including my employer, has or will have rights with respect to my
    contributions.

    * [ ] I am signing on behalf of my employer or a legal entity and I have the
    actual authority to contractually bind that entity.

## Contributor Details

| Field                         | Entry          |
| ----------------------------- | -------------- |
| Name                          | Murat Jumashev |
| Company name (if applicable)  |                |
| Title or role (if applicable) |                |
| Date                          | 25.01.2021     |
| GitHub username               | jumasheff      |
| Website (optional)            |                |
.github/contributors/tupui.md (vendored, new file, 106 lines)

@@ -0,0 +1,106 @@
# spaCy contributor agreement

This spaCy Contributor Agreement (**"SCA"**) is based on the
[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
The SCA applies to any contribution that you make to any product or project
managed by us (the **"project"**), and sets out the intellectual property rights
you grant to us in the contributed materials. The term **"us"** shall mean
[ExplosionAI GmbH](https://explosion.ai/legal). The term
**"you"** shall mean the person or entity identified below.

If you agree to be bound by these terms, fill in the information requested
below and include the filled-in version with your first pull request, under the
folder [`.github/contributors/`](/.github/contributors/). The name of the file
should be your GitHub username, with the extension `.md`. For example, the user
example_user would create the file `.github/contributors/example_user.md`.

Read this agreement carefully before signing. These terms and conditions
constitute a binding legal agreement.

## Contributor Agreement

1. The term "contribution" or "contributed materials" means any source code,
object code, patch, tool, sample, graphic, specification, manual,
documentation, or any other material posted or submitted by you to the project.

2. With respect to any worldwide copyrights, or copyright applications and
registrations, in your contribution:

    * you hereby assign to us joint ownership, and to the extent that such
    assignment is or becomes invalid, ineffective or unenforceable, you hereby
    grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
    royalty-free, unrestricted license to exercise all rights under those
    copyrights. This includes, at our option, the right to sublicense these same
    rights to third parties through multiple levels of sublicensees or other
    licensing arrangements;

    * you agree that each of us can do all things in relation to your
    contribution as if each of us were the sole owners, and if one of us makes
    a derivative work of your contribution, the one who makes the derivative
    work (or has it made) will be the sole owner of that derivative work;

    * you agree that you will not assert any moral rights in your contribution
    against us, our licensees or transferees;

    * you agree that we may register a copyright in your contribution and
    exercise all ownership rights associated with it; and

    * you agree that neither of us has any duty to consult with, obtain the
    consent of, pay or render an accounting to the other for any use or
    distribution of your contribution.

3. With respect to any patents you own, or that you can license without payment
to any third party, you hereby grant to us a perpetual, irrevocable,
non-exclusive, worldwide, no-charge, royalty-free license to:

    * make, have made, use, sell, offer to sell, import, and otherwise transfer
    your contribution in whole or in part, alone or in combination with or
    included in any product, work or materials arising out of the project to
    which your contribution was submitted, and

    * at our option, to sublicense these same rights to third parties through
    multiple levels of sublicensees or other licensing arrangements.

4. Except as set out above, you keep all right, title, and interest in your
contribution. The rights that you grant to us under these terms are effective
on the date you first submitted a contribution to us, even if your submission
took place before the date you sign these terms.

5. You covenant, represent, warrant and agree that:

    * each contribution that you submit is and shall be an original work of
    authorship and you can legally grant the rights set out in this SCA;

    * to the best of your knowledge, each contribution will not violate any
    third party's copyrights, trademarks, patents, or other intellectual
    property rights; and

    * each contribution shall be in compliance with U.S. export control laws and
    other applicable export and import laws. You agree to notify us if you
    become aware of any circumstance which would make any of the foregoing
    representations inaccurate in any respect. We may publicly disclose your
    participation in the project, including the fact that you have signed the SCA.

6. This SCA is governed by the laws of the State of California and applicable
U.S. Federal law. Any choice of law rules will not apply.

7. Please place an “x” next to one of the applicable statements below. Please do
NOT mark both statements:

    * [x] I am signing on behalf of myself as an individual and no other person
    or entity, including my employer, has or will have rights with respect to my
    contributions.

    * [ ] I am signing on behalf of my employer or a legal entity and I have the
    actual authority to contractually bind that entity.

## Contributor Details

| Field                         | Entry              |
| ----------------------------- | ------------------ |
| Name                          | Pamphile Roy       |
| Company name (if applicable)  | N/A                |
| Title or role (if applicable) | N/A                |
| Date                          | January 29th, 2021 |
| GitHub username               | tupui              |
| Website (optional)            | N/A                |
@@ -128,8 +128,6 @@ def get_version(model, comp):

 def download_model(filename, user_pip_args=None):
     download_url = about.__download_url__ + "/" + filename
-    pip_args = ["--no-cache-dir"]
-    if user_pip_args:
-        pip_args.extend(user_pip_args)
+    pip_args = user_pip_args if user_pip_args is not None else []
     cmd = [sys.executable, "-m", "pip", "install"] + pip_args + [download_url]
     return subprocess.call(cmd, env=os.environ.copy())
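For context on the hunk above: the download command previously always passed `--no-cache-dir` to pip and appended any user-supplied args; after this change, user-supplied args replace the defaults entirely. A minimal sketch of the resulting behavior (the helper name and URL are illustrative, not part of spaCy):

```python
import sys

def build_pip_cmd(download_url, user_pip_args=None):
    # User-supplied args now replace the defaults instead of extending them
    pip_args = user_pip_args if user_pip_args is not None else []
    return [sys.executable, "-m", "pip", "install"] + pip_args + [download_url]

url = "https://example.com/en_core_web_sm.tar.gz"  # placeholder URL
print(build_pip_cmd(url))                          # no --no-cache-dir by default
print(build_pip_cmd(url, ["--no-cache-dir"]))      # opt back in explicitly
```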
@@ -591,6 +591,7 @@ class Errors(object):
     E200 = ("Specifying a base model with a pretrained component '{component}' "
             "can not be combined with adding a pretrained Tok2Vec layer.")
     E201 = ("Span index out of range.")
+    E202 = ("Unsupported alignment mode '{mode}'. Supported modes: {modes}.")


 @add_codes
@@ -9,12 +9,6 @@ def noun_chunks(doclike):
    def is_verb_token(tok):
        return tok.pos in [VERB, AUX]

    def next_token(tok):
        try:
            return tok.nbor()
        except IndexError:
            return None

    def get_left_bound(doc, root):
        left_bound = root
        for tok in reversed(list(root.lefts)):
@@ -67,7 +61,6 @@ def noun_chunks(doclike):
    np_right_deps = [doc.vocab.strings.add(label) for label in right_labels]
    stop_deps = [doc.vocab.strings.add(label) for label in stop_labels]

    chunks = []
    prev_right = -1
    for token in doclike:
        if token.pos in [PROPN, NOUN, PRON]:
@@ -20,27 +20,23 @@ def noun_chunks(doclike):
    np_left_deps = [doc.vocab.strings.add(label) for label in left_labels]
    np_right_deps = [doc.vocab.strings.add(label) for label in right_labels]
    stop_deps = [doc.vocab.strings.add(label) for label in stop_labels]

    prev_right = -1
    for token in doclike:
        if token.pos in [PROPN, NOUN, PRON]:
            left, right = noun_bounds(
                doc, token, np_left_deps, np_right_deps, stop_deps
            )
            if left.i <= prev_right:
                continue
            yield left.i, right.i + 1, np_label
            token = right
            token = next_token(token)
            prev_right = right.i


def is_verb_token(token):
    return token.pos in [VERB, AUX]


def next_token(token):
    try:
        return token.nbor()
    except IndexError:
        return None


def noun_bounds(doc, root, np_left_deps, np_right_deps, stop_deps):
    left_bound = root
    for token in reversed(list(root.lefts)):
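Both syntax-iterator hunks above implement the same guard: the right edge of the last yielded chunk is tracked in `prev_right`, and any candidate chunk starting at or before it is skipped, so yielded spans can never overlap. A quick way to sanity-check noun chunks (a sketch; assumes an installed model with a dependency parser, e.g. `en_core_web_sm`):

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # any model with a parser will do
doc = nlp("Autonomous cars shift insurance liability toward manufacturers.")
for chunk in doc.noun_chunks:
    # Each chunk is a Span; the prev_right guard keeps them non-overlapping
    print(chunk.text, chunk.root.dep_)
```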
spacy/lang/ky/__init__.py (new file, 31 lines)

@@ -0,0 +1,31 @@
# coding: utf8
from __future__ import unicode_literals

from .lex_attrs import LEX_ATTRS
from .punctuation import TOKENIZER_INFIXES
from .stop_words import STOP_WORDS
from .tokenizer_exceptions import TOKENIZER_EXCEPTIONS
from ..tokenizer_exceptions import BASE_EXCEPTIONS
from ...attrs import LANG
from ...language import Language
from ...util import update_exc


class KyrgyzDefaults(Language.Defaults):
    lex_attr_getters = dict(Language.Defaults.lex_attr_getters)
    lex_attr_getters[LANG] = lambda text: "ky"

    lex_attr_getters.update(LEX_ATTRS)

    tokenizer_exceptions = update_exc(BASE_EXCEPTIONS, TOKENIZER_EXCEPTIONS)
    infixes = tuple(TOKENIZER_INFIXES)

    stop_words = STOP_WORDS


class Kyrgyz(Language):
    lang = "ky"
    Defaults = KyrgyzDefaults


__all__ = ["Kyrgyz"]
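With the language data above in place, the new language can be used directly for tokenization and lexical attributes, without a trained model:

```python
from spacy.lang.ky import Kyrgyz

nlp = Kyrgyz()  # blank pipeline: tokenizer plus lexical attributes only
doc = nlp("Барак Обама качан төрөлгөн?")
print([token.text for token in doc])
```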
spacy/lang/ky/examples.py (new file, 19 lines)

@@ -0,0 +1,19 @@
# coding: utf8
from __future__ import unicode_literals


"""
Example sentences to test spaCy and its language models.
>>> from spacy.lang.ky.examples import sentences
>>> docs = nlp.pipe(sentences)
"""


sentences = [
    "Apple Улуу Британия стартабын $1 миллиардга сатып алууну көздөөдө.",
    "Автоном автомобилдерди камсыздоо жоопкерчилиги өндүрүүчүлөргө артылды.",
    "Сан-Франциско тротуар менен жүрүүчү робот-курьерлерге тыю салууну караштырууда.",
    "Лондон - Улуу Британияда жайгашкан ири шаар.",
    "Кайдасың?",
    "Франциянын президенти ким?",
    "Америка Кошмо Штаттарынын борбор калаасы кайсы шаар?",
    "Барак Обама качан төрөлгөн?",
]
spacy/lang/ky/lex_attrs.py (new file, 51 lines)

@@ -0,0 +1,51 @@
# coding: utf8
from __future__ import unicode_literals

from ...attrs import LIKE_NUM

_num_words = [
    "нөл",
    "ноль",
    "бир",
    "эки",
    "үч",
    "төрт",
    "беш",
    "алты",
    "жети",
    "сегиз",
    "тогуз",
    "он",
    "жыйырма",
    "отуз",
    "кырк",
    "элүү",
    "алтымыш",
    "жетмиш",
    "сексен",
    "токсон",
    "жүз",
    "миң",
    "миллион",
    "миллиард",
    "триллион",
    "триллиард",
]


def like_num(text):
    if text.startswith(("+", "-", "±", "~")):
        text = text[1:]
    text = text.replace(",", "").replace(".", "")
    if text.isdigit():
        return True
    if text.count("/") == 1:
        num, denom = text.split("/")
        if num.isdigit() and denom.isdigit():
            return True
    if text in _num_words:
        return True
    return False


LEX_ATTRS = {LIKE_NUM: like_num}
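`like_num` handles signs, thousands separators, simple fractions, and the spelled-out numerals listed above. A few illustrative calls (note that the membership check is case-sensitive as written):

```python
from spacy.lang.ky.lex_attrs import like_num

print(like_num("42"))      # True: plain digits
print(like_num("-3,141"))  # True: sign stripped, separators removed
print(like_num("1/2"))     # True: simple fraction
print(like_num("миң"))     # True: "thousand", listed in _num_words
print(like_num("сөз"))     # False: "word" is not a numeral
```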
spacy/lang/ky/punctuation.py (new file, 24 lines)

@@ -0,0 +1,24 @@
# coding: utf8
from __future__ import unicode_literals

from ..char_classes import ALPHA, ALPHA_LOWER, ALPHA_UPPER, CONCAT_QUOTES, HYPHENS
from ..char_classes import LIST_ELLIPSES, LIST_ICONS

_hyphens_no_dash = HYPHENS.replace("-", "").strip("|").replace("||", "")
_infixes = (
    LIST_ELLIPSES
    + LIST_ICONS
    + [
        r"(?<=[{al}])\.(?=[{au}])".format(al=ALPHA_LOWER, au=ALPHA_UPPER),
        r"(?<=[{a}])[,!?/()]+(?=[{a}])".format(a=ALPHA),
        r"(?<=[{a}{q}])[:<>=](?=[{a}])".format(a=ALPHA, q=CONCAT_QUOTES),
        r"(?<=[{a}])--(?=[{a}])".format(a=ALPHA),
        r"(?<=[{a}]),(?=[{a}])".format(a=ALPHA),
        r"(?<=[{a}])([{q}\)\]\(\[])(?=[\-{a}])".format(a=ALPHA, q=CONCAT_QUOTES),
        r"(?<=[{a}])(?:{h})(?=[{a}])".format(a=ALPHA, h=_hyphens_no_dash),
        r"(?<=[0-9])-(?=[{a}])".format(a=ALPHA),
        r"(?<=[0-9])-(?=[0-9])",
    ]
)

TOKENIZER_INFIXES = _infixes
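These infix rules are plain regex fragments that spaCy compiles into a single pattern when the tokenizer is built. They can be inspected in isolation with spaCy's public utility (a sketch):

```python
from spacy.lang.ky.punctuation import TOKENIZER_INFIXES
from spacy.util import compile_infix_regex

infix_re = compile_infix_regex(TOKENIZER_INFIXES)
# A hyphen between digits is an infix, so "3-4" is split around the "-"
print([match.group() for match in infix_re.finditer("3-4")])  # ['-']
```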
spacy/lang/ky/stop_words.py (new file, 45 lines)

@@ -0,0 +1,45 @@
# encoding: utf8
from __future__ import unicode_literals

STOP_WORDS = set(
    """
ага адам айтты айтымында айтып ал алар
алардын алган алуу алып анда андан аны
анын ар

бар басма баш башка башкы башчысы берген
биз билдирген билдирди бир биринчи бирок
бишкек болгон болот болсо болуп боюнча
буга бул

гана

да дагы деген деди деп

жана жатат жаткан жаңы же жогорку жок жол
жолу

кабыл калган кандай карата каршы катары
келген керек кийин кол кылмыш кыргыз
күнү көп

маалымат мамлекеттик мен менен миң
мурдагы мыйзам мындай мүмкүн

ошол ошондой

сүрөт сөз

тарабынан турган тууралуу

укук учурда

чейин чек

экенин эки эл эле эмес эми эч

үч үчүн

өз
""".split()
)
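Once the language is constructed, stop words surface through the vocabulary's lexical attributes. A minimal check (sketch):

```python
from spacy.lang.ky import Kyrgyz

nlp = Kyrgyz()
print(nlp.vocab["жана"].is_stop)  # True: "and" is in STOP_WORDS
print(nlp.vocab["шаар"].is_stop)  # False: "city" is not
```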
spacy/lang/ky/tokenizer_exceptions.py (new file, 55 lines)

@@ -0,0 +1,55 @@
# coding: utf8
from __future__ import unicode_literals

from ...symbols import ORTH, LEMMA, NORM

_exc = {}

_abbrev_exc = [
    # Weekdays abbreviations
    {ORTH: "дүй", LEMMA: "дүйшөмбү"},
    {ORTH: "шей", LEMMA: "шейшемби"},
    {ORTH: "шар", LEMMA: "шаршемби"},
    {ORTH: "бей", LEMMA: "бейшемби"},
    {ORTH: "жум", LEMMA: "жума"},
    {ORTH: "ишм", LEMMA: "ишемби"},
    {ORTH: "жек", LEMMA: "жекшемби"},
    # Months abbreviations
    {ORTH: "янв", LEMMA: "январь"},
    {ORTH: "фев", LEMMA: "февраль"},
    {ORTH: "мар", LEMMA: "март"},
    {ORTH: "апр", LEMMA: "апрель"},
    {ORTH: "июн", LEMMA: "июнь"},
    {ORTH: "июл", LEMMA: "июль"},
    {ORTH: "авг", LEMMA: "август"},
    {ORTH: "сен", LEMMA: "сентябрь"},
    {ORTH: "окт", LEMMA: "октябрь"},
    {ORTH: "ноя", LEMMA: "ноябрь"},
    {ORTH: "дек", LEMMA: "декабрь"},
    # Number abbreviations
    {ORTH: "млрд", LEMMA: "миллиард"},
    {ORTH: "млн", LEMMA: "миллион"},
]

for abbr in _abbrev_exc:
    for orth in (abbr[ORTH], abbr[ORTH].capitalize(), abbr[ORTH].upper()):
        _exc[orth] = [{ORTH: orth, LEMMA: abbr[LEMMA], NORM: abbr[LEMMA]}]
        _exc[orth + "."] = [{ORTH: orth + ".", LEMMA: abbr[LEMMA], NORM: abbr[LEMMA]}]

for exc_data in [  # "etc." abbreviations
    {ORTH: "ж.б.у.с.", NORM: "жана башка ушул сыяктуу"},
    {ORTH: "ж.б.", NORM: "жана башка"},
    {ORTH: "ж.", NORM: "жыл"},
    {ORTH: "б.з.ч.", NORM: "биздин заманга чейин"},
    {ORTH: "б.з.", NORM: "биздин заман"},
    {ORTH: "кк.", NORM: "кылымдар"},
    {ORTH: "жж.", NORM: "жылдар"},
    {ORTH: "к.", NORM: "кылым"},
    {ORTH: "көч.", NORM: "көчөсү"},
    {ORTH: "м-н", NORM: "менен"},
    {ORTH: "б-ча", NORM: "боюнча"},
]:
    exc_data[LEMMA] = exc_data[NORM]
    _exc[exc_data[ORTH]] = [exc_data]

TOKENIZER_EXCEPTIONS = _exc
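The exception table keeps abbreviations such as `ж.б.у.с.` intact as single tokens and maps them to their full normalized form. A quick demonstration (sketch, reusing the blank Kyrgyz pipeline):

```python
from spacy.lang.ky import Kyrgyz

nlp = Kyrgyz()
doc = nlp("ит, мышык ж.б.у.с. үй жаныбарлары.")
# "ж.б.у.с." stays one token; its norm expands to the full phrase
print([(token.text, token.norm_) for token in doc])
```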
@@ -313,7 +313,8 @@ cdef find_matches(TokenPatternC** patterns, int n, object doclike, int length, e
         # We need to deduplicate, because we could otherwise arrive at the same
         # match through two paths, e.g. .?.? matching 'a'. Are we matching the
         # first .?, or the second .? -- it doesn't matter, it's just one match.
-        if match not in seen:
+        # Skip 0-length matches. (TODO: fix algorithm)
+        if match not in seen and matches[i].length > 0:
             output.append(match)
             seen.add(match)
     return output
@@ -8,6 +8,7 @@ from preshed.maps cimport map_init, map_set, map_get, map_clear, map_iter

 import warnings

+from ..attrs import IDS
 from ..attrs cimport ORTH, POS, TAG, DEP, LEMMA
 from ..structs cimport TokenC
 from ..tokens.token cimport Token

@@ -58,9 +59,11 @@ cdef class PhraseMatcher:
         attr = attr.upper()
         if attr == "TEXT":
             attr = "ORTH"
+        if attr == "IS_SENT_START":
+            attr = "SENT_START"
         if attr not in TOKEN_PATTERN_SCHEMA["items"]["properties"]:
             raise ValueError(Errors.E152.format(attr=attr))
-        self.attr = self.vocab.strings[attr]
+        self.attr = IDS.get(attr)

     def __len__(self):
         """Get the number of match IDs added to the matcher.
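The two hunks above let `PhraseMatcher` accept the alias `IS_SENT_START` and resolve attribute names through the `IDS` table instead of the string store. From the outside the API is unchanged (a sketch using the v2 `add` signature, where pattern docs follow the `on_match` callback):

```python
from spacy.lang.en import English
from spacy.matcher import PhraseMatcher

nlp = English()
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")  # aliases like TEXT or IS_SENT_START are normalized internally
matcher.add("HELLO", None, nlp("Hello world"))
doc = nlp("hello WORLD says spaCy")
print(matcher(doc))  # one match spanning the first two tokens
```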
@@ -262,6 +262,11 @@ def tt_tokenizer():
     return get_lang_class("tt").Defaults.create_tokenizer()


+@pytest.fixture(scope="session")
+def ky_tokenizer():
+    return get_lang_class("ky").Defaults.create_tokenizer()
+
+
 @pytest.fixture(scope="session")
 def uk_tokenizer():
     pytest.importorskip("pymorphy2")
@@ -197,6 +197,12 @@ def test_spans_by_character(doc):
     assert span1.end_char == span2.end_char
     assert span2.label_ == "GPE"

+    # unsupported alignment mode
+    with pytest.raises(ValueError):
+        span2 = doc.char_span(
+            span1.start_char + 1, span1.end_char, label="GPE", alignment_mode="unk"
+        )
+

 def test_span_to_array(doc):
     span = doc[1:-2]
spacy/tests/lang/ky/__init__.py (new file, empty)
spacy/tests/lang/ky/test_tokenizer.py (new file, 91 lines)

@@ -0,0 +1,91 @@
# coding: utf8
from __future__ import unicode_literals

import pytest


INFIX_HYPHEN_TESTS = [
    ("Бала-чака жакшыбы?", "Бала-чака жакшыбы ?".split()),
    ("Кыз-келиндер кийими.", "Кыз-келиндер кийими .".split()),
]

PUNC_INSIDE_WORDS_TESTS = [
    (
        "Пассажир саны - 2,13 млн — киши/күнүнө (2010), 783,9 млн. киши/жылына.",
        "Пассажир саны - 2,13 млн — киши / күнүнө ( 2010 ) ,"
        " 783,9 млн. киши / жылына .".split(),
    ),
    ('То"кой', 'То " кой'.split()),
]

MIXED_ORDINAL_NUMS_TESTS = [
    ("Эртең 22-январь...", "Эртең 22 - январь ...".split())
]

ABBREV_TESTS = [
    ("Маселе б-ча эртең келет", "Маселе б-ча эртең келет".split()),
    ("Ахунбаев көч. турат.", "Ахунбаев көч. турат .".split()),
    ("«3-жылы (б.з.ч.) туулган", "« 3 - жылы ( б.з.ч. ) туулган".split()),
    ("Жүгөрү ж.б. дандар колдонулат", "Жүгөрү ж.б. дандар колдонулат".split()),
    ("3-4 кк. курулган.", "3 - 4 кк. курулган .".split()),
]

NAME_ABBREV_TESTS = [
    ("М.Жумаш", "М.Жумаш".split()),
    ("М.жумаш", "М.жумаш".split()),
    ("м.Жумаш", "м . Жумаш".split()),
    ("Жумаш М.Н.", "Жумаш М.Н.".split()),
    ("Жумаш.", "Жумаш .".split()),
]

TYPOS_IN_PUNC_TESTS = [
    ("«3-жылда , туулган", "« 3 - жылда , туулган".split()),
    ("«3-жылда,туулган", "« 3 - жылда , туулган".split()),
    ("«3-жылда,туулган.", "« 3 - жылда , туулган .".split()),
    ("Ал иштейт(качан?)", "Ал иштейт ( качан ? )".split()),
    ("Ал (качан?)иштейт", "Ал ( качан ?) иштейт".split()),  # "?)" => "?)" or "? )"
]

LONG_TEXTS_TESTS = [
    (
        "Алыскы өлкөлөргө аздыр-көптүр татаалыраак жүрүштөргө чыккандар "
        "азыраак: ал бир топ кымбат жана логистика маселесинин айынан "
        "кыйла татаал. Мисалы, январдагы майрамдарда Мароккого үчүнчү "
        "категориядагы маршрутка (100 чакырымдан кем эмес) барып "
        "келгенге аракет кылдык.",
        "Алыскы өлкөлөргө аздыр-көптүр татаалыраак жүрүштөргө чыккандар "
        "азыраак : ал бир топ кымбат жана логистика маселесинин айынан "
        "кыйла татаал . Мисалы , январдагы майрамдарда Мароккого үчүнчү "
        "категориядагы маршрутка ( 100 чакырымдан кем эмес ) барып "
        "келгенге аракет кылдык .".split(),
    )
]

TESTCASES = (
    INFIX_HYPHEN_TESTS
    + PUNC_INSIDE_WORDS_TESTS
    + MIXED_ORDINAL_NUMS_TESTS
    + ABBREV_TESTS
    + NAME_ABBREV_TESTS
    + LONG_TEXTS_TESTS
    + TYPOS_IN_PUNC_TESTS
)

NORM_TESTCASES = [
    (
        "ит, мышык ж.б.у.с. үй жаныбарлары.",
        ["ит", ",", "мышык", "жана башка ушул сыяктуу", "үй", "жаныбарлары", "."],
    )
]


@pytest.mark.parametrize("text,expected_tokens", TESTCASES)
def test_ky_tokenizer_handles_testcases(ky_tokenizer, text, expected_tokens):
    tokens = [token.text for token in ky_tokenizer(text) if not token.is_space]
    assert expected_tokens == tokens


@pytest.mark.parametrize("text,norms", NORM_TESTCASES)
def test_ky_tokenizer_handles_norm_exceptions(ky_tokenizer, text, norms):
    tokens = ky_tokenizer(text)
    assert [token.norm_ for token in tokens] == norms
@@ -493,3 +493,13 @@ def test_matcher_remove_zero_operator(en_vocab):
     assert "Rule" in matcher
     matcher.remove("Rule")
     assert "Rule" not in matcher
+
+
+def test_matcher_no_zero_length(en_vocab):
+    doc = Doc(en_vocab, words=["a", "b"])
+    doc[0].tag_ = "A"
+    doc[1].tag_ = "B"
+    doc.is_tagged = True
+    matcher = Matcher(en_vocab)
+    matcher.add("TEST", [[{"TAG": "C", "OP": "?"}]])
+    assert len(matcher(doc)) == 0
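This test pins down the `matches[i].length > 0` guard added to `find_matches` earlier in this commit: a pattern consisting solely of optional tokens used to produce zero-length matches. The same behavior outside the test suite (a sketch):

```python
from spacy.lang.en import English
from spacy.matcher import Matcher

nlp = English()
matcher = Matcher(nlp.vocab)
# A single optional token can match zero tokens; such empty matches are now dropped
matcher.add("TEST", [[{"ORTH": "c", "OP": "?"}]])
print(matcher(nlp("a b")))  # []
```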
@@ -290,3 +290,8 @@ def test_phrase_matcher_pickle(en_vocab):
     # clunky way to vaguely check that callback is unpickled
     (vocab, docs, callbacks, attr) = matcher_unpickled.__reduce__()[1]
     assert isinstance(callbacks.get("TEST2"), Mock)
+
+
+@pytest.mark.parametrize("attr", ["SENT_START", "IS_SENT_START"])
+def test_phrase_matcher_sent_start(en_vocab, attr):
+    matcher = PhraseMatcher(en_vocab, attr=attr)
spacy/tests/regression/test_issue6755.py (new file, 9 lines)

@@ -0,0 +1,9 @@
# coding: utf8
from __future__ import unicode_literals


def test_issue6755(en_tokenizer):
    doc = en_tokenizer("This is a magnificent sentence.")
    span = doc[:0]
    assert span.text_with_ws == ""
    assert span.text == ""
@@ -2,6 +2,7 @@
 from __future__ import unicode_literals

 import pytest
+import re
 from spacy.util import get_lang_class
 from spacy.tokenizer import Tokenizer

@@ -22,6 +23,17 @@ def test_serialize_custom_tokenizer(en_vocab, en_tokenizer):
     tokenizer_bytes = tokenizer.to_bytes()
     Tokenizer(en_vocab).from_bytes(tokenizer_bytes)

+    # test that empty/unset values are set correctly on deserialization
+    tokenizer = get_lang_class("en").Defaults.create_tokenizer()
+    tokenizer.token_match = re.compile("test").match
+    assert tokenizer.rules != {}
+    assert tokenizer.token_match is not None
+    assert tokenizer.url_match is not None
+    tokenizer.from_bytes(tokenizer_bytes)
+    assert tokenizer.rules == {}
+    assert tokenizer.token_match is None
+    assert tokenizer.url_match is None
+
     tokenizer = Tokenizer(en_vocab, rules={"ABC.": [{"ORTH": "ABC"}, {"ORTH": "."}]})
     tokenizer.rules = {}
     tokenizer_bytes = tokenizer.to_bytes()
@@ -608,10 +608,16 @@ cdef class Tokenizer:
         self.suffix_search = re.compile(data["suffix_search"]).search
         if "infix_finditer" in data and isinstance(data["infix_finditer"], basestring_):
             self.infix_finditer = re.compile(data["infix_finditer"]).finditer
+        # for token_match and url_match, set to None to override the language
+        # defaults if no regex is provided
         if "token_match" in data and isinstance(data["token_match"], basestring_):
             self.token_match = re.compile(data["token_match"]).match
+        else:
+            self.token_match = None
         if "url_match" in data and isinstance(data["url_match"], basestring_):
             self.url_match = re.compile(data["url_match"]).match
+        else:
+            self.url_match = None
         if "rules" in data and isinstance(data["rules"], dict):
             # make sure to hard reset the cache to remove data from the default exceptions
             self._rules = {}
@@ -379,8 +379,9 @@ cdef class Doc:
         label = self.vocab.strings.add(label)
         if not isinstance(kb_id, int):
             kb_id = self.vocab.strings.add(kb_id)
-        if alignment_mode not in ("strict", "contract", "expand"):
-            alignment_mode = "strict"
+        alignment_modes = ("strict", "contract", "expand")
+        if alignment_mode not in alignment_modes:
+            raise ValueError(Errors.E202.format(mode=alignment_mode, modes=", ".join(alignment_modes)))
         cdef int start = token_by_char(self.c, self.length, start_idx)
         if start < 0 or (alignment_mode == "strict" and start_idx != self[start].idx):
             return None
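An invalid `alignment_mode` now raises `E202` instead of silently falling back to `"strict"`. The three supported modes control how character offsets snap to token boundaries (a sketch; assumes a spaCy build that supports `alignment_mode`, as in the diff above):

```python
from spacy.lang.en import English

nlp = English()
doc = nlp("I like New York")
# Characters 8-14 ("ew Yor") only partially cover the tokens "New" and "York"
print(doc.char_span(8, 14, alignment_mode="strict"))    # None: offsets must hit token boundaries
print(doc.char_span(8, 14, alignment_mode="contract"))  # None: no token lies fully inside
print(doc.char_span(8, 14, alignment_mode="expand"))    # New York: widened to all touched tokens
```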
@@ -500,7 +500,7 @@ cdef class Span:
     def text(self):
         """RETURNS (unicode): The original verbatim text of the span."""
         text = self.text_with_ws
-        if self[-1].whitespace_:
+        if len(self) > 0 and self[-1].whitespace_:
             text = text[:-1]
         return text

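Together with `test_issue6755` above, this fixes the degenerate empty span: on a zero-length span, `self[-1]` used to raise an `IndexError`. A quick check (sketch):

```python
from spacy.lang.en import English

nlp = English()
doc = nlp("This is a magnificent sentence.")
span = doc[:0]          # empty span; span.text previously raised
print(repr(span.text))  # ''
```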
@@ -513,7 +513,7 @@ def minibatch(items, size=8):
     size_ = size
     items = iter(items)
     while True:
-        batch_size = next(size_)
+        batch_size = next(size_, 0)  # StopIteration isn't handled in generators in Python >= 3.7.
         batch = list(itertools.islice(items, int(batch_size)))
         if len(batch) == 0:
             break
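The fix works around PEP 479: since Python 3.7, a `StopIteration` that escapes inside a generator is turned into a `RuntimeError` rather than quietly ending iteration. Passing a default to `next()` sidesteps the exception entirely. A self-contained sketch of the pattern (simplified from spaCy's `util.minibatch`; the `sizes` parameter is illustrative):

```python
import itertools

def minibatch(items, sizes):
    items = iter(items)
    while True:
        batch_size = next(sizes, 0)  # default 0 instead of raising StopIteration
        batch = list(itertools.islice(items, batch_size))
        if len(batch) == 0:
            break
        yield batch

print(list(minibatch(range(7), iter([2, 2, 2, 2]))))  # [[0, 1], [2, 3], [4, 5], [6]]
```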
@@ -250,7 +250,7 @@ POS tag set.
 <Infobox title="Annotation schemes for other models">

 For the label schemes used by the other models, see the respective `tag_map.py`
-in [`spacy/lang`](https://github.com/explosion/spaCy/tree/master/spacy/lang).
+in [`spacy/lang`](https://github.com/explosion/spaCy/tree/v2.x/spacy/lang).

 </Infobox>
@@ -564,7 +564,7 @@ Here's an example of dependencies, part-of-speech tags and named entities, taken
 from the English Wall Street Journal portion of the Penn Treebank:

 ```json
-https://github.com/explosion/spaCy/tree/master/examples/training/training-data.json
+https://github.com/explosion/spaCy/tree/v2.x/examples/training/training-data.json
 ```

 ### Lexical data for vocabulary {#vocab-jsonl new="2"}
@@ -619,5 +619,5 @@ data.
 Here's an example of the 20 most frequent lexemes in the English training data:

 ```json
-https://github.com/explosion/spaCy/tree/master/examples/training/vocab-data.jsonl
+https://github.com/explosion/spaCy/tree/v2.x/examples/training/vocab-data.jsonl
 ```
@@ -166,13 +166,13 @@ All output files generated by this command are compatible with

 ### Converter options

-| ID                             | Description |
-| ------------------------------ | ----------- |
-| `auto`                         | Automatically pick converter based on file extension and file content (default). |
-| `conll`, `conllu`, `conllubio` | Universal Dependencies `.conllu` or `.conll` format. |
-| `ner`                          | NER with IOB/IOB2 tags, one token per line with columns separated by whitespace. The first column is the token and the final column is the IOB tag. Sentences are separated by blank lines and documents are separated by the line `-DOCSTART- -X- O O`. Supports CoNLL 2003 NER format. See [sample data](https://github.com/explosion/spaCy/tree/master/examples/training/ner_example_data). |
-| `iob`                          | NER with IOB/IOB2 tags, one sentence per line with tokens separated by whitespace and annotation separated by `|`, either `word|B-ENT` or `word|POS|B-ENT`. See [sample data](https://github.com/explosion/spaCy/tree/master/examples/training/ner_example_data). |
-| `jsonl`                        | NER data formatted as JSONL with one dict per line and a `"text"` and `"spans"` key. This is also the format exported by the [Prodigy](https://prodi.gy) annotation tool. See [sample data](https://raw.githubusercontent.com/explosion/projects/master/ner-fashion-brands/fashion_brands_training.jsonl). |
+| ID                             | Description |
+| ------------------------------ | ----------- |
+| `auto`                         | Automatically pick converter based on file extension and file content (default). |
+| `conll`, `conllu`, `conllubio` | Universal Dependencies `.conllu` or `.conll` format. |
+| `ner`                          | NER with IOB/IOB2 tags, one token per line with columns separated by whitespace. The first column is the token and the final column is the IOB tag. Sentences are separated by blank lines and documents are separated by the line `-DOCSTART- -X- O O`. Supports CoNLL 2003 NER format. See [sample data](https://github.com/explosion/spaCy/tree/v2.x/examples/training/ner_example_data). |
+| `iob`                          | NER with IOB/IOB2 tags, one sentence per line with tokens separated by whitespace and annotation separated by `|`, either `word|B-ENT` or `word|POS|B-ENT`. See [sample data](https://github.com/explosion/spaCy/tree/v2.x/examples/training/ner_example_data). |
+| `jsonl`                        | NER data formatted as JSONL with one dict per line and a `"text"` and `"spans"` key. This is also the format exported by the [Prodigy](https://prodi.gy) annotation tool. See [sample data](https://raw.githubusercontent.com/explosion/projects/master/ner-fashion-brands/fashion_brands_training.jsonl). |

 ## Debug data {#debug-data new="2.2"}
@@ -473,7 +473,7 @@ $ python -m spacy pretrain [texts_loc] [vectors_model] [output_dir]
 | `--use-chars`, `-chr` <Tag variant="new">2.2.2</Tag> | flag | Whether to use character-based embedding. |
 | `--sa-depth`, `-sa` <Tag variant="new">2.2.2</Tag> | option | Depth of self-attention layers. |
 | `--embed-rows`, `-er` | option | Number of embedding rows. |
-| `--loss-func`, `-L` | option | Loss function to use for the objective. Either `"cosine"`, `"L2"` or `"characters"`. |
+| `--loss-func`, `-L` | option | Loss function to use for the objective. Either `"cosine"`, `"L2"` or `"characters"`. |
 | `--dropout`, `-d` | option | Dropout rate. |
 | `--batch-size`, `-bs` | option | Number of words per training batch. |
 | `--max-length`, `-xw` | option | Maximum words per example. Longer examples are discarded. |
@@ -23,12 +23,12 @@ abruptly.
 With Cython there are four ways of declaring complex data types. Unfortunately
 we use all four in different places, as they all have different utility:

-| Declaration | Description | Example |
-| ----------- | ----------- | ------- |
-| `class` | A normal Python class. | [`Language`](/api/language) |
-| `cdef class` | A Python extension type. Differs from a normal Python class in that its attributes can be defined on the underlying struct. Can have C-level objects as attributes (notably structs and pointers), and can have methods which have C-level objects as arguments or return types. | [`Lexeme`](/api/cython-classes#lexeme) |
-| `cdef struct` | A struct is just a collection of variables, sort of like a named tuple, except the memory is contiguous. Structs can't have methods, only attributes. | [`LexemeC`](/api/cython-structs#lexemec) |
-| `cdef cppclass` | A C++ class. Like a struct, this can be allocated on the stack, but can have methods, a constructor and a destructor. Differs from `cdef class` in that it can be created and destroyed without acquiring the Python global interpreter lock. This style is the most obscure. | [`StateC`](https://github.com/explosion/spaCy/tree/master/spacy/syntax/_state.pxd) |
+| Declaration | Description | Example |
+| ----------- | ----------- | ------- |
+| `class` | A normal Python class. | [`Language`](/api/language) |
+| `cdef class` | A Python extension type. Differs from a normal Python class in that its attributes can be defined on the underlying struct. Can have C-level objects as attributes (notably structs and pointers), and can have methods which have C-level objects as arguments or return types. | [`Lexeme`](/api/cython-classes#lexeme) |
+| `cdef struct` | A struct is just a collection of variables, sort of like a named tuple, except the memory is contiguous. Structs can't have methods, only attributes. | [`LexemeC`](/api/cython-structs#lexemec) |
+| `cdef cppclass` | A C++ class. Like a struct, this can be allocated on the stack, but can have methods, a constructor and a destructor. Differs from `cdef class` in that it can be created and destroyed without acquiring the Python global interpreter lock. This style is the most obscure. | [`StateC`](https://github.com/explosion/spaCy/tree/v2.x/spacy/syntax/_state.pxd) |

 The most important classes in spaCy are defined as `cdef class` objects. The
 underlying data for these objects is usually gathered into a struct, which is
@@ -199,15 +199,15 @@ Create a `Span` object from the slice `doc.text[start_idx:end_idx]`. Returns
 > assert span.text == "New York"
 > ```

-| Name | Type | Description |
-| ---- | ---- | ----------- |
-| `start_idx` | int | The index of the first character of the span. |
-| `end_idx` | int | The index of the last character after the span. |
-| `label` | uint64 / unicode | A label to attach to the span, e.g. for named entities. |
-| `kb_id` <Tag variant="new">2.2</Tag> | uint64 / unicode | An ID from a knowledge base to capture the meaning of a named entity. |
-| `vector` | `numpy.ndarray[ndim=1, dtype='float32']` | A meaning representation of the span. |
-| `alignment_mode` | `str` | How character indices snap to token boundaries. Options: "strict" (no snapping), "inside" (span of all tokens completely within the character span), "outside" (span of all tokens at least partially covered by the character span). Defaults to "strict". |
-| **RETURNS** | `Span` | The newly constructed object or `None`. |
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| `start_idx` | int | The index of the first character of the span. |
+| `end_idx` | int | The index of the last character after the span. |
+| `label` | uint64 / unicode | A label to attach to the span, e.g. for named entities. |
+| `kb_id` <Tag variant="new">2.2</Tag> | uint64 / unicode | An ID from a knowledge base to capture the meaning of a named entity. |
+| `vector` | `numpy.ndarray[ndim=1, dtype='float32']` | A meaning representation of the span. |
+| `alignment_mode` | `str` | How character indices snap to token boundaries. Options: "strict" (no snapping), "contract" (span of all tokens completely within the character span), "expand" (span of all tokens at least partially covered by the character span). Defaults to "strict". |
+| **RETURNS** | `Span` | The newly constructed object or `None`. |

 ## Doc.similarity {#similarity tag="method" model="vectors"}
@@ -14,7 +14,7 @@ Create a `GoldCorpus`. If the input data is an iterable, each item should be a
 `(text, paragraphs)` tuple, where each paragraph is a tuple
 `(sentences, brackets)`, and each sentence is a tuple
 `(ids, words, tags, heads, ner)`. See the implementation of
-[`gold.read_json_file`](https://github.com/explosion/spaCy/tree/master/spacy/gold.pyx)
+[`gold.read_json_file`](https://github.com/explosion/spaCy/tree/v2.x/spacy/gold.pyx)
 for further details.

 | Name | Type | Description |
@@ -156,7 +156,7 @@ The L2 norm of the lexeme's vector representation.
 | `like_url` | bool | Does the lexeme resemble a URL? |
 | `like_num` | bool | Does the lexeme represent a number? e.g. "10.9", "10", "ten", etc. |
 | `like_email` | bool | Does the lexeme resemble an email address? |
-| `is_oov` | bool | Does the lexeme have a word vector? |
+| `is_oov` | bool | Is the lexeme out-of-vocabulary (i.e. does it not have a word vector)? |
 | `is_stop` | bool | Is the lexeme part of a "stop list"? |
 | `lang` | int | Language of the parent vocabulary. |
 | `lang_` | unicode | Language of the parent vocabulary. |
@@ -459,7 +459,7 @@ The L2 norm of the token's vector representation.
 | `like_url` | bool | Does the token resemble a URL? |
 | `like_num` | bool | Does the token represent a number? e.g. "10.9", "10", "ten", etc. |
 | `like_email` | bool | Does the token resemble an email address? |
-| `is_oov` | bool | Does the token have a word vector? |
+| `is_oov` | bool | Is the token out-of-vocabulary (i.e. does it not have a word vector)? |
 | `is_stop` | bool | Is the token part of a "stop list"? |
 | `pos` | int | Coarse-grained part-of-speech from the [Universal POS tag set](https://universaldependencies.org/docs/u/pos/). |
 | `pos_` | unicode | Coarse-grained part-of-speech from the [Universal POS tag set](https://universaldependencies.org/docs/u/pos/). |
@@ -107,7 +107,7 @@ meta data as a dictionary instead, you can use the `meta` attribute on your

 Get a description for a given POS tag, dependency label or entity type. For a
 list of available terms, see
-[`glossary.py`](https://github.com/explosion/spaCy/tree/master/spacy/glossary.py).
+[`glossary.py`](https://github.com/explosion/spaCy/tree/v2.x/spacy/glossary.py).

 > #### Example
 >
@@ -279,7 +279,7 @@ to add custom labels and their colors automatically.
 ## Utility functions {#util source="spacy/util.py"}

 spaCy comes with a small collection of utility functions located in
-[`spacy/util.py`](https://github.com/explosion/spaCy/tree/master/spacy/util.py).
+[`spacy/util.py`](https://github.com/explosion/spaCy/tree/v2.x/spacy/util.py).
 Because utility functions are mostly intended for **internal use within spaCy**,
 their behavior may change with future releases. The functions documented on this
 page should be safe to use and we'll try to ensure backwards compatibility.
@@ -538,10 +538,10 @@ Compile a sequence of prefix rules into a regex object.
 > nlp.tokenizer.prefix_search = prefix_regex.search
 > ```

-| Name | Type | Description |
-| ---- | ---- | ----------- |
-| `entries` | tuple | The prefix rules, e.g. [`lang.punctuation.TOKENIZER_PREFIXES`](https://github.com/explosion/spaCy/tree/master/spacy/lang/punctuation.py). |
-| **RETURNS** | [regex](https://docs.python.org/3/library/re.html#re-objects) | The regex object to be used for [`Tokenizer.prefix_search`](/api/tokenizer#attributes). |
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| `entries` | tuple | The prefix rules, e.g. [`lang.punctuation.TOKENIZER_PREFIXES`](https://github.com/explosion/spaCy/tree/v2.x/spacy/lang/punctuation.py). |
+| **RETURNS** | [regex](https://docs.python.org/3/library/re.html#re-objects) | The regex object to be used for [`Tokenizer.prefix_search`](/api/tokenizer#attributes). |

 ### util.compile_suffix_regex {#util.compile_suffix_regex tag="function"}
@@ -555,10 +555,10 @@ Compile a sequence of suffix rules into a regex object.
 > nlp.tokenizer.suffix_search = suffix_regex.search
 > ```

-| Name | Type | Description |
-| ---- | ---- | ----------- |
-| `entries` | tuple | The suffix rules, e.g. [`lang.punctuation.TOKENIZER_SUFFIXES`](https://github.com/explosion/spaCy/tree/master/spacy/lang/punctuation.py). |
-| **RETURNS** | [regex](https://docs.python.org/3/library/re.html#re-objects) | The regex object to be used for [`Tokenizer.suffix_search`](/api/tokenizer#attributes). |
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| `entries` | tuple | The suffix rules, e.g. [`lang.punctuation.TOKENIZER_SUFFIXES`](https://github.com/explosion/spaCy/tree/v2.x/spacy/lang/punctuation.py). |
+| **RETURNS** | [regex](https://docs.python.org/3/library/re.html#re-objects) | The regex object to be used for [`Tokenizer.suffix_search`](/api/tokenizer#attributes). |

 ### util.compile_infix_regex {#util.compile_infix_regex tag="function"}
@ -572,10 +572,10 @@ Compile a sequence of infix rules into a regex object.
|
|||
> nlp.tokenizer.infix_finditer = infix_regex.finditer
|
||||
> ```
|
||||
|
||||
| Name | Type | Description |
|
||||
| ----------- | ------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| `entries` | tuple | The infix rules, e.g. [`lang.punctuation.TOKENIZER_INFIXES`](https://github.com/explosion/spaCy/tree/master/spacy/lang/punctuation.py). |
|
||||
| **RETURNS** | [regex](https://docs.python.org/3/library/re.html#re-objects) | The regex object. to be used for [`Tokenizer.infix_finditer`](/api/tokenizer#attributes). |
|
||||
| Name | Type | Description |
|
||||
| ----------- | ------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| `entries` | tuple | The infix rules, e.g. [`lang.punctuation.TOKENIZER_INFIXES`](https://github.com/explosion/spaCy/tree/v2.x/spacy/lang/punctuation.py). |
|
||||
| **RETURNS** | [regex](https://docs.python.org/3/library/re.html#re-objects) | The regex object. to be used for [`Tokenizer.infix_finditer`](/api/tokenizer#attributes). |
|
||||
|
||||
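All three helpers follow the same pattern: compile the rule sequence into one regex object and plug its `search` (or `finditer`) method into the tokenizer. A minimal sketch, assuming a blank English pipeline; the extra `\+$` suffix rule is purely illustrative, not one of spaCy's defaults:

```python
import spacy
from spacy.util import compile_suffix_regex, compile_infix_regex

nlp = spacy.blank("en")
# Extend the default suffix rules with a custom (illustrative) one
suffixes = tuple(nlp.Defaults.suffixes) + (r"\+$",)
suffix_regex = compile_suffix_regex(suffixes)
nlp.tokenizer.suffix_search = suffix_regex.search

# The infixes work the same way, via finditer instead of search
infix_regex = compile_infix_regex(nlp.Defaults.infixes)
nlp.tokenizer.infix_finditer = infix_regex.finditer
```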

### util.minibatch {#util.minibatch tag="function" new="2"}
@@ -2,7 +2,7 @@ Every language is different – and usually full of **exceptions and special
 cases**, especially amongst the most common words. Some of these exceptions are
 shared across languages, while others are **entirely specific** – usually so
 specific that they need to be hard-coded. The
-[`lang`](https://github.com/explosion/spaCy/tree/master/spacy/lang) module
+[`lang`](https://github.com/explosion/spacy/tree/v2.x/spacy/lang) module
 contains all language-specific data, organized in simple Python files. This
 makes the data easy to update and extend.
@@ -39,21 +39,21 @@ together all components and creating the `Language` subclass – for example,
 | **Lemmatizer**<br />[`spacy-lookups-data`][spacy-lookups-data] | Lemmatization rules or a lookup-based lemmatization table to assign base forms, for example "be" for "was". |

 [stop_words.py]:
-  https://github.com/explosion/spaCy/tree/master/spacy/lang/en/stop_words.py
+  https://github.com/explosion/spacy/tree/v2.x/spacy/lang/en/stop_words.py
 [tokenizer_exceptions.py]:
-  https://github.com/explosion/spaCy/tree/master/spacy/lang/de/tokenizer_exceptions.py
+  https://github.com/explosion/spacy/tree/v2.x/spacy/lang/de/tokenizer_exceptions.py
 [norm_exceptions.py]:
-  https://github.com/explosion/spaCy/tree/master/spacy/lang/norm_exceptions.py
+  https://github.com/explosion/spacy/tree/v2.x/spacy/lang/norm_exceptions.py
 [punctuation.py]:
-  https://github.com/explosion/spaCy/tree/master/spacy/lang/punctuation.py
+  https://github.com/explosion/spacy/tree/v2.x/spacy/lang/punctuation.py
 [char_classes.py]:
-  https://github.com/explosion/spaCy/tree/master/spacy/lang/char_classes.py
+  https://github.com/explosion/spacy/tree/v2.x/spacy/lang/char_classes.py
 [lex_attrs.py]:
-  https://github.com/explosion/spaCy/tree/master/spacy/lang/en/lex_attrs.py
+  https://github.com/explosion/spacy/tree/v2.x/spacy/lang/en/lex_attrs.py
 [syntax_iterators.py]:
-  https://github.com/explosion/spaCy/tree/master/spacy/lang/en/syntax_iterators.py
+  https://github.com/explosion/spacy/tree/v2.x/spacy/lang/en/syntax_iterators.py
 [tag_map.py]:
-  https://github.com/explosion/spaCy/tree/master/spacy/lang/en/tag_map.py
+  https://github.com/explosion/spacy/tree/v2.x/spacy/lang/en/tag_map.py
 [morph_rules.py]:
-  https://github.com/explosion/spaCy/tree/master/spacy/lang/en/morph_rules.py
+  https://github.com/explosion/spacy/tree/v2.x/spacy/lang/en/morph_rules.py
 [spacy-lookups-data]: https://github.com/explosion/spacy-lookups-data
@@ -15,8 +15,8 @@ the specific workflows for each component.
 >
 > To add a new language to spaCy, you'll need to **modify the library's code**.
 > The easiest way to do this is to clone the
-> [repository](https://github.com/explosion/spaCy/tree/master/) and **build
-> spaCy from source**. For more information on this, see the
+> [repository](https://github.com/explosion/spacy/tree/v2.x/) and **build spaCy
+> from source**. For more information on this, see the
 > [installation guide](/usage). Unlike spaCy's core, which is mostly written in
 > Cython, all language data is stored in regular Python files. This means that
 > you won't have to rebuild anything in between – you can simply make edits and
@@ -88,7 +88,7 @@ language and training a language model.
 > #### Should I ever update the global data?
 >
 > Reusable language data is collected as atomic pieces in the root of the
-> [`spacy.lang`](https://github.com/explosion/spaCy/tree/master/spacy/lang)
+> [`spacy.lang`](https://github.com/explosion/spacy/tree/v2.x/spacy/lang)
 > module. Often, when a new language is added, you'll find a pattern or symbol
 > that's missing. Even if it isn't common in other languages, it might be best
 > to add it to the shared language data, unless it has some conflicting
@@ -102,7 +102,7 @@ In order for the tokenizer to split suffixes, prefixes and infixes, spaCy needs
 to know the language's character set. If the language you're adding uses
 non-latin characters, you might need to define the required character classes in
 the global
-[`char_classes.py`](https://github.com/explosion/spaCy/tree/master/spacy/lang/char_classes.py).
+[`char_classes.py`](https://github.com/explosion/spacy/tree/v2.x/spacy/lang/char_classes.py).
 For efficiency, spaCy uses hard-coded unicode ranges to define character
 classes, the definitions of which can be found on
 [Wikipedia](https://en.wikipedia.org/wiki/Unicode_block). If the language
@@ -120,7 +120,7 @@ code and resources specific to Spanish are placed into a directory
 `spacy/lang/es`, which can be imported as `spacy.lang.es`.

 To get started, you can check out the
-[existing languages](https://github.com/explosion/spacy/tree/master/spacy/lang).
+[existing languages](https://github.com/explosion/spacy/tree/v2.x/spacy/lang).
 Here's what the class could look like:

 ```python
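# NOTE: the class example embedded here is cut off by the hunk boundary.
# What follows is a minimal, hedged sketch of the v2.x pattern the text
# describes, assuming Spanish ("es") is the language being added:
from spacy.language import Language
from spacy.attrs import LANG


class SpanishDefaults(Language.Defaults):
    # Copy the shared defaults and override the language ID getter
    lex_attr_getters = dict(Language.Defaults.lex_attr_getters)
    lex_attr_getters[LANG] = lambda text: "es"


class Spanish(Language):
    lang = "es"
    Defaults = SpanishDefaults
```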
@@ -291,14 +291,14 @@ weren't common in the training data, but are equivalent to other words – for
 example, "realise" and "realize", or "thx" and "thanks".

 Similarly, spaCy also includes
-[global base norms](https://github.com/explosion/spaCy/tree/master/spacy/lang/norm_exceptions.py)
+[global base norms](https://github.com/explosion/spacy/tree/v2.x/spacy/lang/norm_exceptions.py)
 for normalizing different styles of quotation marks and currency symbols. Even
 though `$` and `€` are very different, spaCy normalizes them both to `$`. This
 way, they'll always be seen as similar, no matter how common they were in the
 training data.

-As of spaCy v2.3, language-specific norm exceptions are provided as a
-JSON dictionary in the package
+As of spaCy v2.3, language-specific norm exceptions are provided as a JSON
+dictionary in the package
 [`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data) rather
 than in the main library. For a full example, see
 [`en_lexeme_norm.json`](https://github.com/explosion/spacy-lookups-data/blob/master/spacy_lookups_data/data/en_lexeme_norm.json).
@@ -378,7 +378,7 @@ number words), requires some customization.
 > of possible number words).

 Here's an example from the English
-[`lex_attrs.py`](https://github.com/explosion/spaCy/tree/master/spacy/lang/en/lex_attrs.py):
+[`lex_attrs.py`](https://github.com/explosion/spacy/tree/v2.x/spacy/lang/en/lex_attrs.py):

 ```python
 ### lex_attrs.py
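# NOTE: the repository example is truncated in this diff view. Below is an
# abridged, hedged sketch of the pattern lex_attrs.py illustrates: a table
# of number words plus a like_num getter registered under LIKE_NUM.
from spacy.attrs import LIKE_NUM

_num_words = ["zero", "one", "two", "three", "ten", "hundred", "thousand"]


def like_num(text):
    text = text.replace(",", "").replace(".", "")
    if text.isdigit():
        return True
    if text.count("/") == 1:
        num, denom = text.split("/")
        if num.isdigit() and denom.isdigit():
            return True
    return text.lower() in _num_words


LEX_ATTRS = {LIKE_NUM: like_num}
```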
@@ -430,17 +430,17 @@ iterators:
 > assert chunks[1].text == "another phrase"
 > ```

-| Language         | Code | Source |
-| ---------------- | ---- | ------ |
-| English          | `en` | [`lang/en/syntax_iterators.py`](https://github.com/explosion/spaCy/tree/master/spacy/lang/en/syntax_iterators.py) |
-| German           | `de` | [`lang/de/syntax_iterators.py`](https://github.com/explosion/spaCy/tree/master/spacy/lang/de/syntax_iterators.py) |
-| French           | `fr` | [`lang/fr/syntax_iterators.py`](https://github.com/explosion/spaCy/tree/master/spacy/lang/fr/syntax_iterators.py) |
-| Spanish          | `es` | [`lang/es/syntax_iterators.py`](https://github.com/explosion/spaCy/tree/master/spacy/lang/es/syntax_iterators.py) |
-| Greek            | `el` | [`lang/el/syntax_iterators.py`](https://github.com/explosion/spaCy/tree/master/spacy/lang/el/syntax_iterators.py) |
-| Norwegian Bokmål | `nb` | [`lang/nb/syntax_iterators.py`](https://github.com/explosion/spaCy/tree/master/spacy/lang/nb/syntax_iterators.py) |
-| Swedish          | `sv` | [`lang/sv/syntax_iterators.py`](https://github.com/explosion/spaCy/tree/master/spacy/lang/sv/syntax_iterators.py) |
-| Indonesian       | `id` | [`lang/id/syntax_iterators.py`](https://github.com/explosion/spaCy/tree/master/spacy/lang/id/syntax_iterators.py) |
-| Persian          | `fa` | [`lang/fa/syntax_iterators.py`](https://github.com/explosion/spaCy/tree/master/spacy/lang/fa/syntax_iterators.py) |
+| Language         | Code | Source |
+| ---------------- | ---- | ------ |
+| English          | `en` | [`lang/en/syntax_iterators.py`](https://github.com/explosion/spacy/tree/v2.x/spacy/lang/en/syntax_iterators.py) |
+| German           | `de` | [`lang/de/syntax_iterators.py`](https://github.com/explosion/spacy/tree/v2.x/spacy/lang/de/syntax_iterators.py) |
+| French           | `fr` | [`lang/fr/syntax_iterators.py`](https://github.com/explosion/spacy/tree/v2.x/spacy/lang/fr/syntax_iterators.py) |
+| Spanish          | `es` | [`lang/es/syntax_iterators.py`](https://github.com/explosion/spacy/tree/v2.x/spacy/lang/es/syntax_iterators.py) |
+| Greek            | `el` | [`lang/el/syntax_iterators.py`](https://github.com/explosion/spacy/tree/v2.x/spacy/lang/el/syntax_iterators.py) |
+| Norwegian Bokmål | `nb` | [`lang/nb/syntax_iterators.py`](https://github.com/explosion/spacy/tree/v2.x/spacy/lang/nb/syntax_iterators.py) |
+| Swedish          | `sv` | [`lang/sv/syntax_iterators.py`](https://github.com/explosion/spacy/tree/v2.x/spacy/lang/sv/syntax_iterators.py) |
+| Indonesian       | `id` | [`lang/id/syntax_iterators.py`](https://github.com/explosion/spacy/tree/v2.x/spacy/lang/id/syntax_iterators.py) |
+| Persian          | `fa` | [`lang/fa/syntax_iterators.py`](https://github.com/explosion/spacy/tree/v2.x/spacy/lang/fa/syntax_iterators.py) |

### Lemmatizer {#lemmatizer new="2"}
@@ -561,7 +561,7 @@ be causing regressions.

 spaCy uses the [pytest framework](https://docs.pytest.org/en/latest/) for
 testing. For more details on how the tests are structured and best practices for
 writing your own tests, see our
-[tests documentation](https://github.com/explosion/spaCy/tree/master/spacy/tests).
+[tests documentation](https://github.com/explosion/spacy/tree/v2.x/spacy/tests).

 </Infobox>
@@ -569,10 +569,10 @@ writing your own tests, see our

 It's recommended to always add at least some tests with examples specific to the
 language. Language tests should be located in
-[`tests/lang`](https://github.com/explosion/spaCy/tree/master/spacy/tests/lang)
-in a directory named after the language ID. You'll also need to create a fixture
+[`tests/lang`](https://github.com/explosion/spacy/tree/v2.x/spacy/tests/lang) in
+a directory named after the language ID. You'll also need to create a fixture
 for your tokenizer in the
-[`conftest.py`](https://github.com/explosion/spaCy/tree/master/spacy/tests/conftest.py).
+[`conftest.py`](https://github.com/explosion/spacy/tree/v2.x/spacy/tests/conftest.py).
 Always use the [`get_lang_class`](/api/top-level#util.get_lang_class) helper
 function within the fixture, instead of importing the class at the top of the
 file. This will load the language data only when it's needed. (Otherwise, _all
@@ -585,7 +585,7 @@ def en_tokenizer():
 ```

 When adding test cases, always
-[`parametrize`](https://github.com/explosion/spaCy/tree/master/spacy/tests#parameters)
+[`parametrize`](https://github.com/explosion/spacy/tree/v2.x/spacy/tests#parameters)
 them – this will make it easier for others to add more test cases without having
 to modify the test itself. You can also add parameter tuples, for example, a
 test sentence and its expected length, or a list of expected tokens. Here's an
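The example itself falls outside this hunk. A plausible sketch of such a parametrized test, assuming the `en_tokenizer` fixture defined in `conftest.py`:

```python
import pytest


@pytest.mark.parametrize("text,expected_len", [("hello world", 2), ("don't", 2)])
def test_en_tokenizer_length(en_tokenizer, text, expected_len):
    # en_tokenizer is created via get_lang_class in conftest.py
    tokens = en_tokenizer(text)
    assert len(tokens) == expected_len
```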
@@ -630,13 +630,13 @@ of using deep learning for NLP with limited labeled data. The vectors are also
 useful by themselves – they power the `.similarity` methods in spaCy. For best
 results, you should pre-process the text with spaCy before training the Word2vec
 model. This ensures your tokenization will match. You can use our
-[word vectors training script](https://github.com/explosion/spacy/tree/master/bin/train_word_vectors.py),
+[word vectors training script](https://github.com/explosion/spacy/tree/v2.x/bin/train_word_vectors.py),
 which pre-processes the text with your language-specific tokenizer and trains
 the model using [Gensim](https://radimrehurek.com/gensim/). The `vectors.bin`
 file should consist of one word and vector per line.

 ```python
-https://github.com/explosion/spacy/tree/master/bin/train_word_vectors.py
+https://github.com/explosion/spacy/tree/v2.x/bin/train_word_vectors.py
 ```
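A rough sketch of that flow, assuming a pre-4.0 Gensim (where the dimensionality argument is still `size`) and that `texts` is your own corpus of raw strings; all parameter values are illustrative:

```python
import spacy
from gensim.models import Word2Vec

nlp = spacy.blank("en")
texts = ["This is a sentence.", "This is another one."]  # your corpus here

# Pre-tokenize with spaCy so the vectors match spaCy's tokenization
sentences = [[token.text for token in doc] for doc in nlp.pipe(texts)]
model = Word2Vec(sentences, size=300, window=5, min_count=1, workers=4)
model.wv.save_word2vec_format("vectors.txt")  # one word and vector per line
```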

 If you don't have a large sample of text available, you can also convert word
@@ -17,7 +17,7 @@ This example shows how to use the new [`PhraseMatcher`](/api/phrasematcher) to
 efficiently find entities from a large terminology list.

 ```python
-https://github.com/explosion/spaCy/tree/master/examples/information_extraction/phrase_matcher.py
+https://github.com/explosion/spacy/tree/v2.x/examples/information_extraction/phrase_matcher.py
 ```
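In miniature, the v2.x `PhraseMatcher` usage looks like this (the terminology list here is a toy stand-in for a large one):

```python
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.load("en_core_web_sm")
matcher = PhraseMatcher(nlp.vocab)
# In v2.x, patterns are passed as positional Doc arguments after the callback
patterns = [nlp.make_doc(term) for term in ["Barack Obama", "Angela Merkel"]]
matcher.add("POLITICIAN", None, *patterns)

doc = nlp("Barack Obama met Angela Merkel in Berlin.")
for match_id, start, end in matcher(doc):
    print(doc[start:end].text)
```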
### Extracting entity relations {#entity-relations}

@@ -29,7 +29,7 @@ tree to find the noun phrase they are referring to – for example:
 `"$9.4 million"` → `"Net income"`.

 ```python
-https://github.com/explosion/spaCy/tree/master/examples/information_extraction/entity_relations.py
+https://github.com/explosion/spacy/tree/v2.x/examples/information_extraction/entity_relations.py
 ```

### Navigating the parse tree and subtrees {#subtrees}
@@ -38,7 +38,7 @@ This example shows how to navigate the parse tree including subtrees attached to
 a word.

 ```python
-https://github.com/explosion/spaCy/tree/master/examples/information_extraction/parse_subtrees.py
+https://github.com/explosion/spacy/tree/v2.x/examples/information_extraction/parse_subtrees.py
 ```

## Pipeline {#pipeline hidden="true"}
@@ -51,7 +51,7 @@ entities into one token and sets custom attributes on the `Doc`, `Span` and
 `Token`.

 ```python
-https://github.com/explosion/spaCy/tree/master/examples/pipeline/custom_component_entities.py
+https://github.com/explosion/spacy/tree/v2.x/examples/pipeline/custom_component_entities.py
 ```

### Custom pipeline components and attribute extensions via a REST API {#custom-components-api new="2"}
@@ -63,7 +63,7 @@ attributes on the `Doc`, `Span` and `Token` – for example, the capital,
 latitude/longitude coordinates and the country flag.

 ```python
-https://github.com/explosion/spaCy/tree/master/examples/pipeline/custom_component_countries_api.py
+https://github.com/explosion/spacy/tree/v2.x/examples/pipeline/custom_component_countries_api.py
 ```

### Custom method extensions {#custom-components-attr-methods new="2"}
@@ -72,7 +72,7 @@ A collection of snippets showing examples of extensions adding custom methods to
 the `Doc`, `Token` and `Span`.

 ```python
-https://github.com/explosion/spaCy/tree/master/examples/pipeline/custom_attr_methods.py
+https://github.com/explosion/spacy/tree/v2.x/examples/pipeline/custom_attr_methods.py
 ```
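As a taste of what those snippets do, here is a minimal v2.x method extension; the `overlap` name is arbitrary:

```python
import spacy
from spacy.tokens import Doc


def overlap_tokens(doc, other_doc):
    # Return tokens whose text also occurs in the other doc
    other_texts = {token.text for token in other_doc}
    return [token for token in doc if token.text in other_texts]


Doc.set_extension("overlap", method=overlap_tokens)

nlp = spacy.blank("en")
doc1 = nlp("Peter is a cat")
doc2 = nlp("Peter is a dog")
print(doc1._.overlap(doc2))
```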

### Multi-processing with Joblib {#multi-processing}
@@ -85,7 +85,7 @@ IMDB movie reviews dataset and will be loaded automatically via Thinc's built-in
 dataset loader.

 ```python
-https://github.com/explosion/spaCy/tree/master/examples/pipeline/multi_processing.py
+https://github.com/explosion/spacy/tree/v2.x/examples/pipeline/multi_processing.py
 ```

## Training {#training hidden="true"}
@@ -93,11 +93,11 @@ https://github.com/explosion/spaCy/tree/master/examples/pipeline/multi_processin
 ### Training spaCy's Named Entity Recognizer {#training-ner}

 This example shows how to update spaCy's entity recognizer with your own
-examples, starting off with an existing, pretrained model, or from scratch
-using a blank `Language` class.
+examples, starting off with an existing, pretrained model, or from scratch using
+a blank `Language` class.

 ```python
-https://github.com/explosion/spaCy/tree/master/examples/training/train_ner.py
+https://github.com/explosion/spacy/tree/v2.x/examples/training/train_ner.py
 ```
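The script follows the standard v2.x update loop. A trimmed-down sketch with toy data, not a substitute for the full example:

```python
import random
import spacy

# Toy training data in the v2.x (text, annotations) format
TRAIN_DATA = [
    ("Uber blew through $1 million", {"entities": [(0, 4, "ORG")]}),
    ("Google rebrands its business apps", {"entities": [(0, 6, "ORG")]}),
]

nlp = spacy.blank("en")
ner = nlp.create_pipe("ner")
nlp.add_pipe(ner)
ner.add_label("ORG")

optimizer = nlp.begin_training()
for i in range(20):
    random.shuffle(TRAIN_DATA)
    losses = {}
    for text, annotations in TRAIN_DATA:
        nlp.update([text], [annotations], sgd=optimizer, losses=losses)
```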

### Training an additional entity type {#new-entity-type}
@@ -108,28 +108,28 @@ examples. In practice, you'll need many more — a few hundred would be a good
 start.

 ```python
-https://github.com/explosion/spaCy/tree/master/examples/training/train_new_entity_type.py
+https://github.com/explosion/spacy/tree/v2.x/examples/training/train_new_entity_type.py
 ```
### Creating a Knowledge Base for Named Entity Linking {#kb}

-This example shows how to create a knowledge base in spaCy,
-which is needed to implement entity linking functionality.
-It requires as input a spaCy model with pretrained word vectors,
-and it stores the KB to file (if an `output_dir` is provided).
+This example shows how to create a knowledge base in spaCy, which is needed to
+implement entity linking functionality. It requires as input a spaCy model with
+pretrained word vectors, and it stores the KB to file (if an `output_dir` is
+provided).

 ```python
-https://github.com/explosion/spaCy/tree/master/examples/training/create_kb.py
+https://github.com/explosion/spacy/tree/v2.x/examples/training/create_kb.py
 ```
### Training spaCy's Named Entity Linker {#nel}

 This example shows how to train spaCy's entity linker with your own custom
-examples, starting off with a predefined knowledge base and its vocab,
-and using a blank `English` class.
+examples, starting off with a predefined knowledge base and its vocab, and using
+a blank `English` class.

 ```python
-https://github.com/explosion/spaCy/tree/master/examples/training/train_entity_linker.py
+https://github.com/explosion/spacy/tree/v2.x/examples/training/train_entity_linker.py
 ```
### Training spaCy's Dependency Parser {#parser}

@@ -138,7 +138,7 @@ This example shows how to update spaCy's dependency parser, starting off with an
 existing, pretrained model, or from scratch using a blank `Language` class.

 ```python
-https://github.com/explosion/spaCy/tree/master/examples/training/train_parser.py
+https://github.com/explosion/spacy/tree/v2.x/examples/training/train_parser.py
 ```
### Training spaCy's Part-of-speech Tagger {#tagger}

@@ -148,7 +148,7 @@ map, mapping those tags to the
 [Universal Dependencies scheme](http://universaldependencies.github.io/docs/u/pos/index.html).

 ```python
-https://github.com/explosion/spaCy/tree/master/examples/training/train_tagger.py
+https://github.com/explosion/spacy/tree/v2.x/examples/training/train_tagger.py
 ```
### Training a custom parser for chat intent semantics {#intent-parser}

@@ -162,7 +162,7 @@ following types of relations: `ROOT`, `PLACE`, `QUALITY`, `ATTRIBUTE`, `TIME`
 and `LOCATION`.

 ```python
-https://github.com/explosion/spaCy/tree/master/examples/training/train_intent_parser.py
+https://github.com/explosion/spacy/tree/v2.x/examples/training/train_intent_parser.py
 ```
### Training spaCy's text classifier {#textcat new="2"}

@@ -174,7 +174,7 @@ automatically via Thinc's built-in dataset loader. Predictions are available via
 [`Doc.cats`](/api/doc#attributes).

 ```python
-https://github.com/explosion/spaCy/tree/master/examples/training/train_textcat.py
+https://github.com/explosion/spacy/tree/v2.x/examples/training/train_textcat.py
 ```
## Vectors {#vectors hidden="true"}

@@ -186,7 +186,7 @@ This script lets you load any spaCy model containing word vectors into
 [embedding visualization](https://github.com/tensorflow/tensorboard/blob/master/docs/tensorboard_projector_plugin.ipynb).

 ```python
-https://github.com/explosion/spaCy/tree/master/examples/vectors_tensorboard.py
+https://github.com/explosion/spacy/tree/v2.x/examples/vectors_tensorboard.py
 ```
## Deep Learning {#deep-learning hidden="true"}

@@ -203,5 +203,5 @@ documents so that they're a fixed size. This hurts review accuracy a lot,
 because people often summarize their rating in the final sentence.

 ```python
-https://github.com/explosion/spaCy/tree/master/examples/deep_learning_keras.py
+https://github.com/explosion/spacy/tree/v2.x/examples/deep_learning_keras.py
 ```
@@ -13,13 +13,6 @@ spaCy is compatible with **64-bit CPython 2.7 / 3.5+** and runs on
 available over [pip](https://pypi.python.org/pypi/spacy) and
 [conda](https://anaconda.org/conda-forge/spacy).

-> #### 📖 Looking for the old docs?
->
-> To help you make the transition from v1.x to v2.0, we've uploaded the old
-> website to [**legacy.spacy.io**](https://legacy.spacy.io/docs). Wherever
-> possible, the new docs also include notes on features that have changed in
-> v2.0, and features that were introduced in the new version.
-
 ## Quickstart {hidden="true"}

 import QuickstartInstall from 'widgets/quickstart-install.js'
@@ -183,7 +176,7 @@ pip install -r requirements.txt
 ```

 Compared to regular install via pip, the
-[`requirements.txt`](https://github.com/explosion/spaCy/tree/master/requirements.txt)
+[`requirements.txt`](https://github.com/explosion/spacy/tree/v2.x/requirements.txt)
 additionally installs developer dependencies such as Cython. See the
 [quickstart widget](#quickstart) to get the right commands for your platform and
 Python version.
@@ -250,14 +243,14 @@ source code and recompiling frequently.
 ### Run tests {#run-tests}

 spaCy comes with an
-[extensive test suite](https://github.com/explosion/spaCy/tree/master/spacy/tests).
+[extensive test suite](https://github.com/explosion/spacy/tree/v2.x/spacy/tests).
 In order to run the tests, you'll usually want to clone the
-[repository](https://github.com/explosion/spaCy/tree/master/) and
+[repository](https://github.com/explosion/spacy/tree/v2.x/) and
 [build spaCy from source](#source). This will also install the required
 development dependencies and test utilities defined in the `requirements.txt`.

 Alternatively, you can run `pytest` on the tests packaged with the installed
-`spacy` package. Don't forget to also install the test utilities via spaCy's [`requirements.txt`](https://github.com/explosion/spaCy/tree/master/requirements.txt):
+`spacy` package. Don't forget to also install the test utilities via spaCy's [`requirements.txt`](https://github.com/explosion/spacy/tree/v2.x/requirements.txt):

 ```bash
 pip install -r requirements.txt
@@ -540,7 +540,7 @@ gold = GoldParse(doc, entities=["U-ANIMAL", "O", "O", "O"])

 For more details on **training and updating** the named entity recognizer, see
 the usage guides on [training](/usage/training) or check out the runnable
-[training script](https://github.com/explosion/spaCy/tree/master/examples/training/train_ner.py)
+[training script](https://github.com/explosion/spacy/tree/v2.x/examples/training/train_ner.py)
 on GitHub.

 </Infobox>
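To act on such an example, the gold annotation is passed to `nlp.update`. A compact, hedged sketch of that step in v2.x; the four-token text is invented here to match the entity tags in the `GoldParse` call above:

```python
import spacy
from spacy.gold import GoldParse
from spacy.tokens import Doc

nlp = spacy.load("en_core_web_sm")
ner = nlp.get_pipe("ner")
ner.add_label("ANIMAL")

# Hypothetical four-token example matching the GoldParse call above
doc = Doc(nlp.vocab, words=["Platypus", "live", "in", "Australia"])
gold = GoldParse(doc, entities=["U-ANIMAL", "O", "O", "O"])

optimizer = nlp.resume_training()
losses = {}
nlp.update([doc], [gold], sgd=optimizer, losses=losses)
```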
@@ -646,7 +646,7 @@ import Tokenization101 from 'usage/101/\_tokenization.md'

 **Global** and **language-specific** tokenizer data is supplied via the language
 data in
-[`spacy/lang`](https://github.com/explosion/spaCy/tree/master/spacy/lang). The
+[`spacy/lang`](https://github.com/explosion/spacy/tree/v2.x/spacy/lang). The
 tokenizer exceptions define special cases like "don't" in English, which needs
 to be split into two tokens: `{ORTH: "do"}` and `{ORTH: "n't", NORM: "not"}`.
 The prefixes, suffixes and infixes mostly define punctuation rules – for
@@ -666,7 +666,7 @@ For more details on the language-specific data, see the usage guide on

 Tokenization rules that are specific to one language, but can be **generalized
 across that language** should ideally live in the language data in
-[`spacy/lang`](https://github.com/explosion/spaCy/tree/master/spacy/lang) – we
+[`spacy/lang`](https://github.com/explosion/spacy/tree/v2.x/spacy/lang) – we
 always appreciate pull requests! Anything that's specific to a domain or text
 type – like financial trading abbreviations, or Bavarian youth slang – should be
 added as a special case rule to your tokenizer instance. If you're dealing with
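Adding such a special case in v2.x looks roughly like this; the word split here is the usual toy example, purely for illustration:

```python
import spacy
from spacy.attrs import ORTH, NORM

nlp = spacy.blank("en")
# Split "gimme" into two tokens, normalizing the first to "give"
nlp.tokenizer.add_special_case("gimme", [{ORTH: "gim", NORM: "give"}, {ORTH: "me"}])
print([t.text for t in nlp("gimme that")])  # ['gim', 'me', 'that']
```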
@@ -843,7 +843,7 @@ domain. There are six things you may need to define:
    be split, overriding the infix rules. Useful for things like numbers.
 6. An optional boolean function `url_match`, which is similar to `token_match`
    except that prefixes and suffixes are removed before applying the match.

 <Infobox title="Important note: token match in spaCy v2.2" variant="warning">

 In spaCy v2.2.2-v2.2.4, the `token_match` was equivalent to the `url_match`
@@ -78,7 +78,7 @@ As of v2.0, spaCy supports models trained on more than one language. This is
 especially useful for named entity recognition. The language ID used for
 multi-language or language-neutral models is `xx`. The language class, a generic
 subclass containing only the base language data, can be found in
-[`lang/xx`](https://github.com/explosion/spaCy/tree/master/spacy/lang/xx).
+[`lang/xx`](https://github.com/explosion/spacy/tree/v2.x/spacy/lang/xx).

 To load your model with the neutral, multi-language class, simply set
 `"language": "xx"` in your [model package](/usage/training#models-generating)'s
@@ -489,7 +489,7 @@ When you call `nlp` on a text, the custom pipeline component is applied to the
 `Doc`.

 ```python
-https://github.com/explosion/spaCy/tree/master/examples/pipeline/custom_component_entities.py
+https://github.com/explosion/spacy/tree/v2.x/examples/pipeline/custom_component_entities.py
 ```

 Wrapping this functionality in a pipeline component allows you to reuse the
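For orientation, the simplest possible v2.x component is just a function that receives and returns the `Doc`; the names here are arbitrary:

```python
import spacy


def print_length(doc):
    # A no-op component that just reports what it sees
    print("Processing doc with", len(doc), "tokens")
    return doc


nlp = spacy.blank("en")
nlp.add_pipe(print_length, name="print_length", first=True)
doc = nlp("Hello world")
```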
@@ -650,7 +650,7 @@ attributes on the `Doc`, `Span` and `Token` – for example, the capital,
 latitude/longitude coordinates and even the country flag.

 ```python
-https://github.com/explosion/spaCy/tree/master/examples/pipeline/custom_component_countries_api.py
+https://github.com/explosion/spacy/tree/v2.x/examples/pipeline/custom_component_countries_api.py
 ```

 In this case, all data can be fetched on initialization in one request. However,
@@ -193,7 +193,7 @@ computed properties can't be accessed.

 The uppercase attribute names like `LOWER` or `IS_PUNCT` refer to symbols from
 the
-[`spacy.attrs`](https://github.com/explosion/spaCy/tree/master/spacy/attrs.pyx)
+[`spacy.attrs`](https://github.com/explosion/spacy/tree/v2.x/spacy/attrs.pyx)
 enum table. They're passed into a function that essentially is a big case/switch
 statement, to figure out which struct field to return. The same attribute
 identifiers are used in [`Doc.to_array`](/api/doc#to_array), and a few other
@@ -194,7 +194,7 @@ add to that data and saves and loads the data to and from a JSON file.
 >
 > To see custom serialization methods in action, check out the new
 > [`EntityRuler`](/api/entityruler) component and its
-> [source](https://github.com/explosion/spaCy/tree/master/spacy/pipeline/entityruler.py).
+> [source](https://github.com/explosion/spacy/tree/v2.x/spacy/pipeline/entityruler.py).
 > Patterns added to the component will be saved to a `.jsonl` file if the
 > pipeline is serialized to disk, and to a bytestring if the pipeline is
 > serialized to bytes. This allows saving out a model with a rule-based entity
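A small sketch of that round trip in v2.x; the path and pattern are illustrative:

```python
import spacy
from spacy.pipeline import EntityRuler

nlp = spacy.blank("en")
ruler = EntityRuler(nlp)
ruler.add_patterns([{"label": "ORG", "pattern": "Explosion AI"}])
nlp.add_pipe(ruler)

# Serializing the pipeline to disk also writes the ruler's patterns
# out as a .jsonl file alongside the other pipeline data
nlp.to_disk("/tmp/rule_based_model")
nlp2 = spacy.load("/tmp/rule_based_model")
```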
@@ -915,9 +915,9 @@ via the following platforms:
   questions** and everything related to problems with your specific code. The
   Stack Overflow community is much larger than ours, so if your problem can be
   solved by others, you'll receive help much quicker.
-- [GitHub discussions](https://github.com/explosion/spaCy/discussions): **General
-  discussion**, **project ideas** and **usage questions**. Meet other community
-  members to get help with a specific code implementation, discuss ideas for new
+- [GitHub discussions](https://github.com/explosion/spaCy/discussions): **General
+  discussion**, **project ideas** and **usage questions**. Meet other community
+  members to get help with a specific code implementation, discuss ideas for new
   projects/plugins, support more languages, and share best practices.
 - [GitHub issue tracker](https://github.com/explosion/spaCy/issues): **Bug
   reports** and **improvement suggestions**, i.e. everything that's likely
@@ -959,7 +959,7 @@ regressions to the parts of the library that you care about the most.

 **For more details on the types of contributions we're looking for, the code
 conventions and other useful tips, make sure to check out the
-[contributing guidelines](https://github.com/explosion/spaCy/tree/master/CONTRIBUTING.md).**
+[contributing guidelines](https://github.com/explosion/spacy/tree/v2.x/CONTRIBUTING.md).**

 <Infobox title="Code of Conduct" variant="warning">
@@ -352,7 +352,7 @@ a blank `Language` class. To do this, you'll need **example texts** and the
 **character offsets** and **labels** of each entity contained in the texts.

 ```python
-https://github.com/explosion/spaCy/tree/master/examples/training/train_ner.py
+https://github.com/explosion/spacy/tree/v2.x/examples/training/train_ner.py
 ```

 #### Step by step guide {#step-by-step-ner}
@@ -384,7 +384,7 @@ entity recognizer over unlabelled sentences, and adding their annotations to the
 training set.

 ```python
-https://github.com/explosion/spaCy/tree/master/examples/training/train_new_entity_type.py
+https://github.com/explosion/spacy/tree/v2.x/examples/training/train_new_entity_type.py
 ```

 <Infobox title="Important note" variant="warning">
@@ -426,7 +426,7 @@ the respective **heads** and **dependency label** for each token of the example
 texts.

 ```python
-https://github.com/explosion/spaCy/tree/master/examples/training/train_parser.py
+https://github.com/explosion/spacy/tree/v2.x/examples/training/train_parser.py
 ```

 #### Step by step guide {#step-by-step-parser}
@@ -460,7 +460,7 @@ those tags to the
 [Universal Dependencies scheme](http://universaldependencies.github.io/docs/u/pos/index.html).

 ```python
-https://github.com/explosion/spaCy/tree/master/examples/training/train_tagger.py
+https://github.com/explosion/spacy/tree/v2.x/examples/training/train_tagger.py
 ```

 #### Step by step guide {#step-by-step-tagger}
@@ -528,7 +528,7 @@ message semantics will have the following types of relations: `ROOT`, `PLACE`,
 `QUALITY`, `ATTRIBUTE`, `TIME` and `LOCATION`.

 ```python
-https://github.com/explosion/spaCy/tree/master/examples/training/train_intent_parser.py
+https://github.com/explosion/spacy/tree/v2.x/examples/training/train_intent_parser.py
 ```

 #### Step by step guide {#step-by-step-parser-custom}
@@ -567,7 +567,7 @@ automatically via Thinc's built-in dataset loader. Predictions are available via
 [`Doc.cats`](/api/doc#attributes).

 ```python
-https://github.com/explosion/spaCy/tree/master/examples/training/train_textcat.py
+https://github.com/explosion/spacy/tree/v2.x/examples/training/train_textcat.py
 ```

 #### Step by step guide {#step-by-step-textcat}
@@ -614,7 +614,7 @@ pretrained word vectors to obtain an encoding of an entity's description as its
 vector.

 ```python
-https://github.com/explosion/spaCy/tree/master/examples/training/create_kb.py
+https://github.com/explosion/spacy/tree/v2.x/examples/training/create_kb.py
 ```

 #### Step by step guide {#step-by-step-kb}
@@ -639,7 +639,7 @@ offsets** and **knowledge base identifiers** of each entity contained in the
 texts.

 ```python
-https://github.com/explosion/spaCy/tree/master/examples/training/train_entity_linker.py
+https://github.com/explosion/spacy/tree/v2.x/examples/training/train_entity_linker.py
 ```

 #### Step by step guide {#step-by-step-entity-linker}
@@ -180,7 +180,7 @@ entirely **in Markdown**, without having to compromise on easy-to-use custom UI
 components. We're hoping that the Markdown source will make it even easier to
 contribute to the documentation. For more details, check out the
 [styleguide](/styleguide) and
-[source](https://github.com/explosion/spaCy/tree/master/website). While
+[source](https://github.com/explosion/spacy/tree/v2.x/website). While
 converting the pages to Markdown, we've also fixed a bunch of typos, improved
 the existing pages and added some new content:
@@ -161,8 +161,8 @@ debugging your tokenizer configuration.

 spaCy's custom warnings have been replaced with native Python
 [`warnings`](https://docs.python.org/3/library/warnings.html). Instead of
-setting `SPACY_WARNING_IGNORE`, use the [`warnings`
-filters](https://docs.python.org/3/library/warnings.html#the-warnings-filter)
+setting `SPACY_WARNING_IGNORE`, use the
+[`warnings` filters](https://docs.python.org/3/library/warnings.html#the-warnings-filter)
 to manage warnings.
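With the standard library, silencing a specific warning becomes, for example (the `W008` code is just an illustration):

```python
import warnings

# Ignore warnings whose message starts with the given code
warnings.filterwarnings("ignore", message=r"\[W008\]")
```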
 ```diff
@@ -176,7 +176,7 @@ import spacy

 #### Normalization tables

 The normalization tables have moved from the language data in
-[`spacy/lang`](https://github.com/explosion/spaCy/tree/master/spacy/lang) to the
+[`spacy/lang`](https://github.com/explosion/spacy/tree/v2.x/spacy/lang) to the
 package [`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data).
 If you're adding data for a new language, the normalization table should be
 added to `spacy-lookups-data`. See
@@ -190,8 +190,8 @@ lexemes will be added to the vocab automatically, just as in small models
 without vectors.

 To see the number of unique vectors and number of words with vectors, see
-`nlp.meta['vectors']`, for example for `en_core_web_md` there are `20000`
-unique vectors and `684830` words with vectors:
+`nlp.meta['vectors']`, for example for `en_core_web_md` there are `20000` unique
+vectors and `684830` words with vectors:

 ```python
 {
@@ -210,8 +210,8 @@ for orth in nlp.vocab.vectors:
     _ = nlp.vocab[orth]
 ```

-If your workflow previously iterated over `nlp.vocab`, a similar alternative
-is to iterate over words with vectors instead:
+If your workflow previously iterated over `nlp.vocab`, a similar alternative is
+to iterate over words with vectors instead:

 ```diff
 - lexemes = [w for w in nlp.vocab]
@@ -220,9 +220,9 @@ is to iterate over words with vectors instead:

 Be aware that the set of preloaded lexemes in a v2.2 model is not equivalent to
 the set of words with vectors. For English, v2.2 `md/lg` models have 1.3M
-provided lexemes but only 685K words with vectors. The vectors have been
-updated for most languages in v2.2, but the English models contain the same
-vectors for both v2.2 and v2.3.
+provided lexemes but only 685K words with vectors. The vectors have been updated
+for most languages in v2.2, but the English models contain the same vectors for
+both v2.2 and v2.3.

 #### Lexeme.is_oov and Token.is_oov
@@ -234,8 +234,7 @@ fixed in the next patch release v2.3.1.
 </Infobox>

 In v2.3, `Lexeme.is_oov` and `Token.is_oov` are `True` if the lexeme does not
-have a word vector. This is equivalent to `token.orth not in
-nlp.vocab.vectors`.
+have a word vector. This is equivalent to `token.orth not in nlp.vocab.vectors`.

 Previously in v2.2, `is_oov` corresponded to whether a lexeme had stored
 probability and cluster features. The probability and cluster features are no
@@ -270,8 +269,8 @@ as part of the model vocab.

 To load the probability table into a provided model, first make sure you have
 `spacy-lookups-data` installed. To load the table, remove the empty provided
-`lexeme_prob` table and then access `Lexeme.prob` for any word to load the
-table from `spacy-lookups-data`:
+`lexeme_prob` table and then access `Lexeme.prob` for any word to load the table
+from `spacy-lookups-data`:

 ```diff
 + # prerequisite: pip install spacy-lookups-data
@@ -321,9 +320,9 @@ the [train CLI](/api/cli#train), you can use the new `--tag-map-path` option to
 provide in the tag map as a JSON dict.

 If you want to export a tag map from a provided model for use with the train
-CLI, you can save it as a JSON dict. To only use string keys as required by
-JSON and to make it easier to read and edit, any internal integer IDs need to
-be converted back to strings:
+CLI, you can save it as a JSON dict. To only use string keys as required by JSON
+and to make it easier to read and edit, any internal integer IDs need to be
+converted back to strings:

 ```python
 import spacy
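# NOTE: the rest of the example lies outside this hunk. A hedged sketch of
# the idea, assuming the tag map is exposed as nlp.vocab.morphology.tag_map:
import json

nlp = spacy.load("en_core_web_sm")
tag_map = {}
for tag, morph in nlp.vocab.morphology.tag_map.items():
    # Convert internal integer feature IDs back to strings for JSON
    tag_map[tag] = {str(feature): str(value) for feature, value in morph.items()}
with open("tag_map.json", "w", encoding="utf8") as file_:
    json.dump(tag_map, file_, ensure_ascii=False)
```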

@@ -306,7 +306,7 @@ lookup-based lemmatization – and **many new languages**!

 <Infobox>

 **API:** [`Language`](/api/language) **Code:**
-[`spacy/lang`](https://github.com/explosion/spaCy/tree/master/spacy/lang)
+[`spacy/lang`](https://github.com/explosion/spacy/tree/v2.x/spacy/lang)
 **Usage:** [Adding languages](/usage/adding-languages)

 </Infobox>
@@ -14,10 +14,12 @@ const models = require('./meta/languages.json')
 const universe = require('./meta/universe.json')

 const DEFAULT_TEMPLATE = path.resolve('./src/templates/index.js')
+const legacy = site.legacy || !!+process.env.SPACY_LEGACY

 module.exports = {
     siteMetadata: {
         ...site,
+        legacy,
         ...logos,
         sidebars,
         ...models,
@@ -127,7 +129,7 @@ module.exports = {
                 background_color: site.theme,
                 theme_color: site.theme,
                 display: `minimal-ui`,
-                icon: `src/images/icon.png`,
+                icon: legacy ? `src/images/icon_legacy.png` : `src/images/icon.png`,
             },
         },
         {
@@ -136,6 +138,26 @@ module.exports = {
                 domain: site.domain,
             },
         },
+        {
+            resolve: 'gatsby-plugin-robots-txt',
+            options: {
+                host: site.siteUrl,
+                sitemap: `${site.siteUrl}/sitemap.xml`,
+                // If we're in a special state prevent indexing
+                resolveEnv: () => (legacy ? 'development' : 'production'),
+                env: {
+                    production: {
+                        policy: [{ userAgent: '*', allow: '/' }],
+                    },
+                    development: {
+                        policy: [
+                            { userAgent: '*', disallow: ['/'] },
+                            { userAgent: 'Twitterbot', allow: '/' },
+                        ],
+                    },
+                },
+            },
+        },
        `gatsby-plugin-offline`,
     ],
 }
@@ -154,6 +154,12 @@
     { "code": "fa", "name": "Persian", "has_examples": true },
     { "code": "ur", "name": "Urdu", "example": "یہ ایک جملہ ہے", "has_examples": true },
     { "code": "tt", "name": "Tatar", "has_examples": true },
+    {
+      "code": "ky",
+      "name": "Kyrgyz",
+      "example": "Адамга эң кыйыны — күн сайын адам болуу",
+      "has_examples": true
+    },
     { "code": "te", "name": "Telugu", "example": "ఇది ఒక వాక్యం.", "has_examples": true },
     { "code": "si", "name": "Sinhala", "example": "මෙය වාක්යයකි.", "has_examples": true },
     { "code": "ga", "name": "Irish" },
@@ -2,8 +2,10 @@
   "title": "spaCy",
   "description": "spaCy is a free open-source library for Natural Language Processing in Python. It features NER, POS tagging, dependency parsing, word vectors and more.",
   "slogan": "Industrial-strength Natural Language Processing in Python",
-  "siteUrl": "https://spacy.io",
-  "domain": "spacy.io",
+  "siteUrl": "https://v2.spacy.io",
+  "domain": "v2.spacy.io",
+  "legacy": false,
+  "codeBranch": "v2.x",
   "email": "contact@explosion.ai",
   "company": "Explosion AI",
   "companyUrl": "https://explosion.ai",

@@ -24,8 +26,8 @@
     "indexName": "spacy"
   },
   "binderUrl": "explosion/spacy-io-binder",
-  "binderBranch": "live",
-  "binderVersion": "2.3.0",
+  "binderBranch": "v2.spacy.io",
+  "binderVersion": "2.3.5",
   "sections": [
     { "id": "usage", "title": "Usage Documentation", "theme": "blue" },
     { "id": "models", "title": "Models Documentation", "theme": "blue" },

678 website/package-lock.json generated
@ -3437,6 +3437,11 @@
|
|||
"resolved": "https://registry.npmjs.org/@types/minimatch/-/minimatch-3.0.3.tgz",
|
||||
"integrity": "sha512-tHq6qdbT9U1IRSGf14CL0pUlULksvY9OZ+5eEgl1N7t+OA3tGvNpxJCzuKQlsNgCVwbAs670L1vcVQi8j9HjnA=="
|
||||
},
|
||||
"@types/minimist": {
|
||||
"version": "1.2.1",
|
||||
"resolved": "https://registry.npmjs.org/@types/minimist/-/minimist-1.2.1.tgz",
|
||||
"integrity": "sha512-fZQQafSREFyuZcdWFAExYjBiCL7AUCdgsk80iO0q4yihYYdcIiH28CcuPTGFgLOCC8RlW49GSQxdHwZP+I7CNg=="
|
||||
},
|
||||
"@types/mkdirp": {
|
||||
"version": "0.5.2",
|
||||
"resolved": "https://registry.npmjs.org/@types/mkdirp/-/mkdirp-0.5.2.tgz",
|
||||
|
@ -3479,6 +3484,11 @@
|
|||
}
|
||||
}
|
||||
},
|
||||
"@types/normalize-package-data": {
|
||||
"version": "2.4.0",
|
||||
"resolved": "https://registry.npmjs.org/@types/normalize-package-data/-/normalize-package-data-2.4.0.tgz",
|
||||
"integrity": "sha512-f5j5b/Gf71L+dbqxIpQ4Z2WlmI/mPJ0fOkGGmFgtb6sAu97EPczzbS3/tJKxmcYDj55OX6ssqwDAWOHIYDRDGA=="
|
||||
},
|
||||
"@types/parse-json": {
|
||||
"version": "4.0.0",
|
||||
"resolved": "https://registry.npmjs.org/@types/parse-json/-/parse-json-4.0.0.tgz",
|
||||
|
@ -4500,6 +4510,11 @@
|
|||
"resolved": "https://registry.npmjs.org/asynckit/-/asynckit-0.4.0.tgz",
|
||||
"integrity": "sha1-x57Zf380y48robyXkLzDZkdLS3k="
|
||||
},
|
||||
"at-least-node": {
|
||||
"version": "1.0.0",
|
||||
"resolved": "https://registry.npmjs.org/at-least-node/-/at-least-node-1.0.0.tgz",
|
||||
"integrity": "sha512-+q/t7Ekv1EDY2l6Gda6LLiX14rU9TV20Wa3ofeQmwPFZbOMo9DXrLbOjFaaclkXKWidIaopwAObQDqwWtGUjqg=="
|
||||
},
|
||||
"atob": {
|
||||
"version": "2.1.2",
|
||||
"resolved": "https://registry.npmjs.org/atob/-/atob-2.1.2.tgz",
|
||||
|
@ -8700,16 +8715,6 @@
|
|||
"logalot": "^2.1.0"
|
||||
}
|
||||
},
|
||||
"cyclist": {
|
||||
"version": "1.0.1",
|
||||
"resolved": "https://registry.npmjs.org/cyclist/-/cyclist-1.0.1.tgz",
|
||||
"integrity": "sha1-WW6WmP0MgOEgOMK4LW6xs1tiJNk="
|
||||
},
|
||||
"damerau-levenshtein": {
|
||||
"version": "1.0.6",
|
||||
"resolved": "https://registry.npmjs.org/damerau-levenshtein/-/damerau-levenshtein-1.0.6.tgz",
|
||||
"integrity": "sha512-JVrozIeElnj3QzfUIt8tB8YMluBJom4Vw9qTPpjGYQ9fYlB3D/rb6OordUxf3xeFB35LKWs0xqcO5U6ySvBtug=="
|
||||
},
|
||||
"dashdash": {
|
||||
"version": "1.14.1",
|
||||
"resolved": "https://registry.npmjs.org/dashdash/-/dashdash-1.14.1.tgz",
|
||||
|
@ -8729,9 +8734,9 @@
|
|||
"integrity": "sha512-sAJVKx/FqrLYHAQeN7VpJrPhagZc9R4ImZIWYRFZaaohR3KzmuK88touwsSwSVT8Qcbd4zoDsnGfX4GFB4imyQ=="
|
||||
},
|
||||
"debug": {
|
||||
"version": "3.2.7",
|
||||
"resolved": "https://registry.npmjs.org/debug/-/debug-3.2.7.tgz",
|
||||
"integrity": "sha512-CFjzYYAi4ThfiQvizrFQevTTXHtnCqWfe7x1AhgEscTz6ZbLbfoLRLPugTQyBth6f8ZERVUSyWHFD/7Wu4t1XQ==",
|
||||
"version": "3.2.6",
|
||||
"resolved": "https://registry.npmjs.org/debug/-/debug-3.2.6.tgz",
|
||||
"integrity": "sha512-mel+jf7nrtEl5Pn1Qx46zARXKDpBbvzezse7p7LqINmdoIk8PYP5SySaxEmYv6TZ0JyEKA1hsCId6DIhgITtWQ==",
|
||||
"requires": {
|
||||
"ms": "^2.1.1"
|
||||
}
|
||||
|
@ -8741,6 +8746,15 @@
|
|||
"resolved": "https://registry.npmjs.org/decamelize/-/decamelize-1.2.0.tgz",
|
||||
"integrity": "sha1-9lNNFRSCabIDUue+4m9QH5oZEpA="
|
||||
},
|
||||
"decamelize-keys": {
|
||||
"version": "1.1.0",
|
||||
"resolved": "https://registry.npmjs.org/decamelize-keys/-/decamelize-keys-1.1.0.tgz",
|
||||
"integrity": "sha1-0XGoeTMlKAfrPLYdwcFEXQeN8tk=",
|
||||
"requires": {
|
||||
"decamelize": "^1.1.0",
|
||||
"map-obj": "^1.0.0"
|
||||
}
|
||||
},
|
||||
"decode-uri-component": {
|
||||
"version": "0.2.0",
|
||||
"resolved": "https://registry.npmjs.org/decode-uri-component/-/decode-uri-component-0.2.0.tgz",
|
||||
|
@ -10178,6 +10192,11 @@
|
|||
"regenerator-runtime": "^0.13.4"
|
||||
}
|
||||
},
|
||||
"damerau-levenshtein": {
|
||||
"version": "1.0.6",
|
||||
"resolved": "https://registry.npmjs.org/damerau-levenshtein/-/damerau-levenshtein-1.0.6.tgz",
|
||||
"integrity": "sha512-JVrozIeElnj3QzfUIt8tB8YMluBJom4Vw9qTPpjGYQ9fYlB3D/rb6OordUxf3xeFB35LKWs0xqcO5U6ySvBtug=="
|
||||
},
|
||||
"emoji-regex": {
|
||||
"version": "9.2.0",
|
||||
"resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-9.2.0.tgz",
|
||||
|
@ -13091,6 +13110,30 @@
|
|||
"svg-react-loader": "^0.4.4"
|
||||
}
|
||||
},
|
||||
"gatsby-plugin-robots-txt": {
|
||||
"version": "1.5.5",
|
||||
"resolved": "https://registry.npmjs.org/gatsby-plugin-robots-txt/-/gatsby-plugin-robots-txt-1.5.5.tgz",
|
||||
"integrity": "sha512-wLIep04R0cnY+3t9uFVFitA/eLbI6o8xkrUPg6gVxnas/LtzMe5tUiMK5P+idC14B0ohY1y2zl2hP+Bu54/dHQ==",
|
||||
"requires": {
|
||||
"@babel/runtime": "^7.11.2",
|
||||
"generate-robotstxt": "^8.0.3"
|
||||
},
|
||||
"dependencies": {
|
||||
"@babel/runtime": {
|
||||
"version": "7.12.5",
|
||||
"resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.12.5.tgz",
|
||||
"integrity": "sha512-plcc+hbExy3McchJCEQG3knOsuh3HH+Prx1P6cLIkET/0dLuQDEnrT+s27Axgc9bqfsmNUNHfscgMUdBpC9xfg==",
|
||||
"requires": {
|
||||
"regenerator-runtime": "^0.13.4"
|
||||
}
|
||||
},
|
||||
"regenerator-runtime": {
|
||||
"version": "0.13.7",
|
||||
"resolved": "https://registry.npmjs.org/regenerator-runtime/-/regenerator-runtime-0.13.7.tgz",
|
||||
"integrity": "sha512-a54FxoJDIr27pgf7IgeQGxmqUNYrcV338lf/6gH456HZ/PhX+5BcwHXG9ajESmwe6WRO0tAzRUrRmNONWgkrew=="
|
||||
}
|
||||
}
|
||||
},
|
||||
"gatsby-plugin-sass": {
|
||||
"version": "2.0.10",
|
||||
"resolved": "https://registry.npmjs.org/gatsby-plugin-sass/-/gatsby-plugin-sass-2.0.10.tgz",
|
||||
|
@ -14758,6 +14801,234 @@
|
|||
"globule": "^1.0.0"
|
||||
}
|
||||
},
|
||||
"generate-robotstxt": {
|
||||
"version": "8.0.3",
|
||||
"resolved": "https://registry.npmjs.org/generate-robotstxt/-/generate-robotstxt-8.0.3.tgz",
|
||||
"integrity": "sha512-iD//oAVKcHOCz9M0IiT3pyUiF2uN1qvL3qaTA8RGLz7NU7l0XVwyzd3rN+tzhB657DNUgrygXt9w8+0zkTMFrg==",
|
||||
"requires": {
|
||||
"cosmiconfig": "^6.0.0",
|
||||
"fs-extra": "^9.0.0",
|
||||
"ip-regex": "^4.1.0",
|
||||
"is-absolute-url": "^3.0.3",
|
||||
"meow": "^7.0.1",
|
||||
"resolve-from": "^5.0.0"
|
||||
},
|
||||
"dependencies": {
|
||||
"camelcase-keys": {
|
||||
"version": "6.2.2",
|
||||
"resolved": "https://registry.npmjs.org/camelcase-keys/-/camelcase-keys-6.2.2.tgz",
|
||||
"integrity": "sha512-YrwaA0vEKazPBkn0ipTiMpSajYDSe+KjQfrjhcBMxJt/znbvlHd8Pw/Vamaz5EB4Wfhs3SUR3Z9mwRu/P3s3Yg==",
|
||||
"requires": {
|
||||
"camelcase": "^5.3.1",
|
||||
"map-obj": "^4.0.0",
|
||||
"quick-lru": "^4.0.1"
|
||||
}
|
||||
},
|
||||
"cosmiconfig": {
|
||||
"version": "6.0.0",
|
||||
"resolved": "https://registry.npmjs.org/cosmiconfig/-/cosmiconfig-6.0.0.tgz",
|
||||
"integrity": "sha512-xb3ZL6+L8b9JLLCx3ZdoZy4+2ECphCMo2PwqgP1tlfVq6M6YReyzBJtvWWtbDSpNr9hn96pkCiZqUcFEc+54Qg==",
|
||||
"requires": {
|
||||
"@types/parse-json": "^4.0.0",
|
||||
"import-fresh": "^3.1.0",
|
||||
"parse-json": "^5.0.0",
|
||||
"path-type": "^4.0.0",
|
||||
"yaml": "^1.7.2"
|
||||
}
|
||||
},
|
||||
"fs-extra": {
|
||||
"version": "9.1.0",
|
||||
"resolved": "https://registry.npmjs.org/fs-extra/-/fs-extra-9.1.0.tgz",
|
||||
"integrity": "sha512-hcg3ZmepS30/7BSFqRvoo3DOMQu7IjqxO5nCDt+zM9XWjb33Wg7ziNT+Qvqbuc3+gWpzO02JubVyk2G4Zvo1OQ==",
|
||||
"requires": {
|
||||
"at-least-node": "^1.0.0",
|
||||
"graceful-fs": "^4.2.0",
|
||||
"jsonfile": "^6.0.1",
|
||||
"universalify": "^2.0.0"
|
||||
}
|
||||
},
|
||||
"graceful-fs": {
|
||||
"version": "4.2.4",
|
||||
"resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.4.tgz",
|
||||
"integrity": "sha512-WjKPNJF79dtJAVniUlGGWHYGz2jWxT6VhN/4m1NdkbZ2nOsEF+cI1Edgql5zCRhs/VsQYRvrXctxktVXZUkixw=="
|
||||
},
|
||||
"import-fresh": {
|
||||
"version": "3.3.0",
|
||||
"resolved": "https://registry.npmjs.org/import-fresh/-/import-fresh-3.3.0.tgz",
|
||||
"integrity": "sha512-veYYhQa+D1QBKznvhUHxb8faxlrwUnxseDAbAp457E0wLNio2bOSKnjYDhMj+YiAq61xrMGhQk9iXVk5FzgQMw==",
|
||||
"requires": {
|
||||
"parent-module": "^1.0.0",
|
||||
"resolve-from": "^4.0.0"
|
||||
},
|
||||
"dependencies": {
|
||||
"resolve-from": {
|
||||
"version": "4.0.0",
|
||||
"resolved": "https://registry.npmjs.org/resolve-from/-/resolve-from-4.0.0.tgz",
|
||||
"integrity": "sha512-pb/MYmXstAkysRFx8piNI1tGFNQIFA3vkE3Gq4EuA1dF6gHp/+vgZqsCGJapvy8N3Q+4o7FwvquPJcnZ7RYy4g=="
|
||||
}
|
||||
}
|
||||
},
|
||||
"indent-string": {
|
||||
"version": "4.0.0",
|
||||
"resolved": "https://registry.npmjs.org/indent-string/-/indent-string-4.0.0.tgz",
|
||||
"integrity": "sha512-EdDDZu4A2OyIK7Lr/2zG+w5jmbuk1DVBnEwREQvBzspBJkCEbRa8GxU1lghYcaGJCnRWibjDXlq779X1/y5xwg=="
|
||||
},
|
||||
"ip-regex": {
|
||||
"version": "4.3.0",
|
||||
"resolved": "https://registry.npmjs.org/ip-regex/-/ip-regex-4.3.0.tgz",
|
||||
"integrity": "sha512-B9ZWJxHHOHUhUjCPrMpLD4xEq35bUTClHM1S6CBU5ixQnkZmwipwgc96vAd7AAGM9TGHvJR+Uss+/Ak6UphK+Q=="
|
||||
},
|
||||
"is-absolute-url": {
|
||||
"version": "3.0.3",
|
||||
"resolved": "https://registry.npmjs.org/is-absolute-url/-/is-absolute-url-3.0.3.tgz",
|
||||
"integrity": "sha512-opmNIX7uFnS96NtPmhWQgQx6/NYFgsUXYMllcfzwWKUMwfo8kku1TvE6hkNcH+Q1ts5cMVrsY7j0bxXQDciu9Q=="
|
||||
},
|
||||
"jsonfile": {
|
||||
"version": "6.1.0",
|
||||
"resolved": "https://registry.npmjs.org/jsonfile/-/jsonfile-6.1.0.tgz",
|
||||
"integrity": "sha512-5dgndWOriYSm5cnYaJNhalLNDKOqFwyDB/rr1E9ZsGciGvKPs8R2xYGCacuf3z6K1YKDz182fd+fY3cn3pMqXQ==",
|
||||
"requires": {
|
||||
"graceful-fs": "^4.1.6",
|
||||
"universalify": "^2.0.0"
|
||||
}
|
||||
},
|
||||
"map-obj": {
|
||||
"version": "4.1.0",
|
||||
"resolved": "https://registry.npmjs.org/map-obj/-/map-obj-4.1.0.tgz",
|
||||
"integrity": "sha512-glc9y00wgtwcDmp7GaE/0b0OnxpNJsVf3ael/An6Fe2Q51LLwN1er6sdomLRzz5h0+yMpiYLhWYF5R7HeqVd4g=="
|
||||
},
|
||||
"meow": {
|
||||
"version": "7.1.1",
|
||||
"resolved": "https://registry.npmjs.org/meow/-/meow-7.1.1.tgz",
|
||||
"integrity": "sha512-GWHvA5QOcS412WCo8vwKDlTelGLsCGBVevQB5Kva961rmNfun0PCbv5+xta2kUMFJyR8/oWnn7ddeKdosbAPbA==",
|
||||
"requires": {
|
||||
"@types/minimist": "^1.2.0",
|
||||
"camelcase-keys": "^6.2.2",
|
||||
"decamelize-keys": "^1.1.0",
|
||||
"hard-rejection": "^2.1.0",
|
||||
"minimist-options": "4.1.0",
|
||||
"normalize-package-data": "^2.5.0",
|
||||
"read-pkg-up": "^7.0.1",
|
||||
"redent": "^3.0.0",
|
||||
"trim-newlines": "^3.0.0",
|
||||
"type-fest": "^0.13.1",
|
||||
"yargs-parser": "^18.1.3"
|
||||
}
|
||||
},
|
||||
"normalize-package-data": {
|
||||
"version": "2.5.0",
|
||||
"resolved": "https://registry.npmjs.org/normalize-package-data/-/normalize-package-data-2.5.0.tgz",
|
||||
"integrity": "sha512-/5CMN3T0R4XTj4DcGaexo+roZSdSFW/0AOOTROrjxzCG1wrWXEsGbRKevjlIL+ZDE4sZlJr5ED4YW0yqmkK+eA==",
|
||||
"requires": {
|
||||
"hosted-git-info": "^2.1.4",
|
||||
"resolve": "^1.10.0",
|
||||
"semver": "2 || 3 || 4 || 5",
|
||||
"validate-npm-package-license": "^3.0.1"
|
||||
}
|
||||
},
|
||||
"parse-json": {
|
||||
"version": "5.2.0",
|
||||
"resolved": "https://registry.npmjs.org/parse-json/-/parse-json-5.2.0.tgz",
|
||||
"integrity": "sha512-ayCKvm/phCGxOkYRSCM82iDwct8/EonSEgCSxWxD7ve6jHggsFl4fZVQBPRNgQoKiuV/odhFrGzQXZwbifC8Rg==",
|
||||
"requires": {
|
||||
"@babel/code-frame": "^7.0.0",
|
||||
"error-ex": "^1.3.1",
|
||||
"json-parse-even-better-errors": "^2.3.0",
|
||||
"lines-and-columns": "^1.1.6"
|
||||
}
|
||||
},
|
||||
"read-pkg": {
|
||||
"version": "5.2.0",
|
||||
"resolved": "https://registry.npmjs.org/read-pkg/-/read-pkg-5.2.0.tgz",
|
||||
"integrity": "sha512-Ug69mNOpfvKDAc2Q8DRpMjjzdtrnv9HcSMX+4VsZxD1aZ6ZzrIE7rlzXBtWTyhULSMKg076AW6WR5iZpD0JiOg==",
|
||||
"requires": {
|
||||
"@types/normalize-package-data": "^2.4.0",
|
||||
"normalize-package-data": "^2.5.0",
|
||||
"parse-json": "^5.0.0",
|
||||
"type-fest": "^0.6.0"
|
||||
},
|
||||
"dependencies": {
|
||||
"type-fest": {
|
||||
"version": "0.6.0",
|
||||
"resolved": "https://registry.npmjs.org/type-fest/-/type-fest-0.6.0.tgz",
|
||||
"integrity": "sha512-q+MB8nYR1KDLrgr4G5yemftpMC7/QLqVndBmEEdqzmNj5dcFOO4Oo8qlwZE3ULT3+Zim1F8Kq4cBnikNhlCMlg=="
|
||||
}
|
||||
}
|
||||
},
|
||||
"read-pkg-up": {
|
||||
"version": "7.0.1",
|
||||
"resolved": "https://registry.npmjs.org/read-pkg-up/-/read-pkg-up-7.0.1.tgz",
|
||||
"integrity": "sha512-zK0TB7Xd6JpCLmlLmufqykGE+/TlOePD6qKClNW7hHDKFh/J7/7gCWGR7joEQEW1bKq3a3yUZSObOoWLFQ4ohg==",
|
||||
"requires": {
|
||||
"find-up": "^4.1.0",
|
||||
"read-pkg": "^5.2.0",
|
||||
"type-fest": "^0.8.1"
|
||||
},
|
||||
"dependencies": {
|
||||
"type-fest": {
|
||||
"version": "0.8.1",
|
||||
"resolved": "https://registry.npmjs.org/type-fest/-/type-fest-0.8.1.tgz",
|
||||
"integrity": "sha512-4dbzIzqvjtgiM5rw1k5rEHtBANKmdudhGyBEajN01fEyhaAIhsoKNy6y7+IN93IfpFtwY9iqi7kD+xwKhQsNJA=="
|
||||
}
|
||||
}
|
||||
},
|
||||
"redent": {
|
||||
"version": "3.0.0",
|
||||
"resolved": "https://registry.npmjs.org/redent/-/redent-3.0.0.tgz",
|
||||
"integrity": "sha512-6tDA8g98We0zd0GvVeMT9arEOnTw9qM03L9cJXaCjrip1OO764RDBLBfrB4cwzNGDj5OA5ioymC9GkizgWJDUg==",
|
||||
"requires": {
|
||||
"indent-string": "^4.0.0",
|
||||
"strip-indent": "^3.0.0"
|
||||
}
|
||||
},
|
||||
"resolve": {
|
||||
"version": "1.19.0",
|
||||
"resolved": "https://registry.npmjs.org/resolve/-/resolve-1.19.0.tgz",
|
||||
"integrity": "sha512-rArEXAgsBG4UgRGcynxWIWKFvh/XZCcS8UJdHhwy91zwAvCZIbcs+vAbflgBnNjYMs/i/i+/Ux6IZhML1yPvxg==",
|
||||
"requires": {
|
||||
"is-core-module": "^2.1.0",
|
||||
"path-parse": "^1.0.6"
|
||||
}
|
||||
},
|
||||
"resolve-from": {
|
||||
"version": "5.0.0",
|
||||
"resolved": "https://registry.npmjs.org/resolve-from/-/resolve-from-5.0.0.tgz",
|
||||
"integrity": "sha512-qYg9KP24dD5qka9J47d0aVky0N+b4fTU89LN9iDnjB5waksiC49rvMB0PrUJQGoTmH50XPiqOvAjDfaijGxYZw=="
|
||||
},
|
||||
"strip-indent": {
|
||||
"version": "3.0.0",
|
||||
"resolved": "https://registry.npmjs.org/strip-indent/-/strip-indent-3.0.0.tgz",
|
||||
"integrity": "sha512-laJTa3Jb+VQpaC6DseHhF7dXVqHTfJPCRDaEbid/drOhgitgYku/letMUqOXFoWV0zIIUbjpdH2t+tYj4bQMRQ==",
|
||||
"requires": {
|
||||
"min-indent": "^1.0.0"
|
||||
}
|
||||
},
|
||||
"trim-newlines": {
|
||||
"version": "3.0.0",
|
||||
"resolved": "https://registry.npmjs.org/trim-newlines/-/trim-newlines-3.0.0.tgz",
|
||||
"integrity": "sha512-C4+gOpvmxaSMKuEf9Qc134F1ZuOHVXKRbtEflf4NTtuuJDEIJ9p5PXsalL8SkeRw+qit1Mo+yuvMPAKwWg/1hA=="
|
||||
},
|
||||
"type-fest": {
|
||||
"version": "0.13.1",
|
||||
"resolved": "https://registry.npmjs.org/type-fest/-/type-fest-0.13.1.tgz",
|
||||
"integrity": "sha512-34R7HTnG0XIJcBSn5XhDd7nNFPRcXYRZrBB2O2jdKqYODldSzBAqzsWoZYYvduky73toYS/ESqxPvkDf/F0XMg=="
|
||||
},
|
||||
"universalify": {
|
||||
"version": "2.0.0",
|
||||
"resolved": "https://registry.npmjs.org/universalify/-/universalify-2.0.0.tgz",
|
||||
"integrity": "sha512-hAZsKq7Yy11Zu1DE0OzWjw7nnLZmJZYTDZZyEFHZdUhV8FkH5MCfoU1XMaxXovpyW5nq5scPqq0ZDP9Zyl04oQ=="
|
||||
},
|
||||
"yargs-parser": {
|
||||
"version": "18.1.3",
|
||||
"resolved": "https://registry.npmjs.org/yargs-parser/-/yargs-parser-18.1.3.tgz",
|
||||
"integrity": "sha512-o50j0JeToy/4K6OZcaQmW6lyXXKhq7csREXcDwk2omFPJEwUNOVtJKvmDr9EI1fAJZUyZcRF7kxGBWmRXudrCQ==",
|
||||
"requires": {
|
||||
"camelcase": "^5.0.0",
|
||||
"decamelize": "^1.2.0"
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"gensync": {
|
||||
"version": "1.0.0-beta.2",
|
||||
"resolved": "https://registry.npmjs.org/gensync/-/gensync-1.0.0-beta.2.tgz",
|
||||
|
@@ -15371,6 +15642,11 @@
        }
      }
    },
    "hard-rejection": {
      "version": "2.1.0",
      "resolved": "https://registry.npmjs.org/hard-rejection/-/hard-rejection-2.1.0.tgz",
      "integrity": "sha512-VIZB+ibDhx7ObhAe7OVtoEbuP4h/MuOTHJ+J8h/eBXotJYl0fBgR72xDFCKgIh22OJZIOVNxBMWuhAr10r8HdA=="
    },
    "has": {
      "version": "1.0.3",
      "resolved": "https://registry.npmjs.org/has/-/has-1.0.3.tgz",
@@ -18345,6 +18621,23 @@
      "resolved": "http://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz",
      "integrity": "sha1-o1AIsg9BOD7sH7kU9M1d95omQoQ="
    },
    "minimist-options": {
      "version": "4.1.0",
      "resolved": "https://registry.npmjs.org/minimist-options/-/minimist-options-4.1.0.tgz",
      "integrity": "sha512-Q4r8ghd80yhO/0j1O3B2BjweX3fiHg9cdOwjJd2J76Q135c+NDxGCqdYKQ1SKBuFfgWbAUzBfvYjPUEeNgqN1A==",
      "requires": {
        "arrify": "^1.0.1",
        "is-plain-obj": "^1.1.0",
        "kind-of": "^6.0.3"
      },
      "dependencies": {
        "kind-of": {
          "version": "6.0.3",
          "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-6.0.3.tgz",
          "integrity": "sha512-dcS1ul+9tmeD95T+x28/ehLgd9mENa3LsvDTtzm3vyBEO7RPptvAD+t44WVXaUjTBRcrpFeFlC8WCruUR456hw=="
        }
      }
    },
    "minipass": {
      "version": "2.3.5",
      "resolved": "https://registry.npmjs.org/minipass/-/minipass-2.3.5.tgz",
@@ -19611,6 +19904,13 @@
        "cyclist": "^1.0.1",
        "inherits": "^2.0.3",
        "readable-stream": "^2.1.5"
      },
      "dependencies": {
        "cyclist": {
          "version": "1.0.1",
          "resolved": "https://registry.npmjs.org/cyclist/-/cyclist-1.0.1.tgz",
          "integrity": "sha1-WW6WmP0MgOEgOMK4LW6xs1tiJNk="
        }
      }
    },
    "param-case": {
@@ -22129,9 +22429,14 @@
      "resolved": "https://registry.npmjs.org/querystringify/-/querystringify-2.1.0.tgz",
      "integrity": "sha512-sluvZZ1YiTLD5jsqZcDmFyV2EwToyXZBfpoVOmktMmW+VEnhgakFHnasVph65fOjGPTWN0Nw3+XQaSeMayr0kg=="
    },
    "quick-lru": {
      "version": "4.0.1",
      "resolved": "https://registry.npmjs.org/quick-lru/-/quick-lru-4.0.1.tgz",
      "integrity": "sha512-ARhCpm70fzdcvNQfPoy49IaanKkTlRWF2JMzqhcJbhSFRZv7nPTvZJdcY7301IPmvW+/p0RgIWnQDLJxifsQ7g=="
    },
    "ramda": {
      "version": "0.21.0",
      "resolved": "https://registry.npmjs.org/ramda/-/ramda-0.21.0.tgz",
      "resolved": "http://registry.npmjs.org/ramda/-/ramda-0.21.0.tgz",
      "integrity": "sha1-oAGr7bP/YQd9T/HVd9RN536NCjU="
    },
    "randombytes": {
@@ -24586,6 +24891,11 @@
        "kind-of": "^3.2.0"
      },
      "dependencies": {
        "import-lazy": {
          "version": "3.1.0",
          "resolved": "https://registry.npmjs.org/import-lazy/-/import-lazy-3.1.0.tgz",
          "integrity": "sha512-8/gvXvX2JMn0F+CDlSC4l6kOmVaLOO3XLkksI7CI3Ud95KDYJuYur2b9P/PUt/i/pDAMd/DulQsNbbbmRRsDIQ=="
        },
        "kind-of": {
          "version": "3.2.2",
          "resolved": "https://registry.npmjs.org/kind-of/-/kind-of-3.2.2.tgz",
@@ -24594,11 +24904,6 @@
            "is-buffer": "^1.1.5"
          }
        },
        "import-lazy": {
          "version": "3.1.0",
          "resolved": "https://registry.npmjs.org/import-lazy/-/import-lazy-3.1.0.tgz",
          "integrity": "sha512-8/gvXvX2JMn0F+CDlSC4l6kOmVaLOO3XLkksI7CI3Ud95KDYJuYur2b9P/PUt/i/pDAMd/DulQsNbbbmRRsDIQ=="
        },
        "p-cancelable": {
          "version": "0.4.1",
          "resolved": "http://registry.npmjs.org/p-cancelable/-/p-cancelable-0.4.1.tgz",
@@ -24659,6 +24964,13 @@
          "integrity": "sha512-pYAIzeRo8J6KPEaJ0VWOh5Pzkbw/RetuzehGM7QRRX5he4fPHx2rdKMB256ehJCkX+XRQm16eZLqLNS8RSZXZw==",
          "requires": {
            "ms": "^2.1.1"
          },
          "dependencies": {
            "ms": {
              "version": "2.1.3",
              "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz",
              "integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA=="
            }
          }
        },
        "iconv-lite": {
@@ -24715,11 +25027,24 @@
            "ms": "^2.1.1"
          }
        },
        "electron-to-chromium": {
          "version": "1.3.113",
          "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.3.113.tgz",
          "integrity": "sha512-De+lPAxEcpxvqPTyZAXELNpRZXABRxf+uL/rSykstQhzj/B0l1150G/ExIIxKc16lI89Hgz81J0BHAcbTqK49g=="
        },
        "isarray": {
          "version": "2.0.1",
          "resolved": "https://registry.npmjs.org/isarray/-/isarray-2.0.1.tgz",
          "integrity": "sha1-o32U7ZzaLVmGXJ92/llu4fM4dB4="
        },
        "node-releases": {
          "version": "1.1.8",
          "resolved": "https://registry.npmjs.org/node-releases/-/node-releases-1.1.8.tgz",
          "integrity": "sha512-gQm+K9mGCiT/NXHy+V/ZZS1N/LOaGGqRAAJJs3X9Ah1g+CIbRcBgNyoNYQ+SEtcyAtB9KqDruu+fF7nWjsqRaA==",
          "requires": {
            "semver": "^5.3.0"
          }
        },
        "socket.io-parser": {
          "version": "3.3.2",
          "resolved": "https://registry.npmjs.org/socket.io-parser/-/socket.io-parser-3.3.2.tgz",
@@ -24749,19 +25074,6 @@
              "integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g="
            }
          }
        },
        "electron-to-chromium": {
          "version": "1.3.113",
          "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.3.113.tgz",
          "integrity": "sha512-De+lPAxEcpxvqPTyZAXELNpRZXABRxf+uL/rSykstQhzj/B0l1150G/ExIIxKc16lI89Hgz81J0BHAcbTqK49g=="
        },
        "node-releases": {
          "version": "1.1.8",
          "resolved": "https://registry.npmjs.org/node-releases/-/node-releases-1.1.8.tgz",
          "integrity": "sha512-gQm+K9mGCiT/NXHy+V/ZZS1N/LOaGGqRAAJJs3X9Ah1g+CIbRcBgNyoNYQ+SEtcyAtB9KqDruu+fF7nWjsqRaA==",
          "requires": {
            "semver": "^5.3.0"
          }
        }
      }
    },
@@ -25352,6 +25664,11 @@
          "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz",
          "integrity": "sha1-7QMXwyIGT3lGbAKWa922Bas32Zg="
        },
        "normalize-path": {
          "version": "3.0.0",
          "resolved": "https://registry.npmjs.org/normalize-path/-/normalize-path-3.0.0.tgz",
          "integrity": "sha512-6eZs5Ls3WtCisHWp9S2GUy8dqkpGi4BVSz3GaqiE6ezub0512ESztXUwUB6C6IKbQkY2Pnb/mD4WYojCRwcwLA=="
        },
        "strip-ansi": {
          "version": "4.0.0",
          "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-4.0.0.tgz",
@@ -25359,11 +25676,6 @@
          "requires": {
            "ansi-regex": "^3.0.0"
          }
        },
        "normalize-path": {
          "version": "3.0.0",
          "resolved": "https://registry.npmjs.org/normalize-path/-/normalize-path-3.0.0.tgz",
          "integrity": "sha512-6eZs5Ls3WtCisHWp9S2GUy8dqkpGi4BVSz3GaqiE6ezub0512ESztXUwUB6C6IKbQkY2Pnb/mD4WYojCRwcwLA=="
        }
      }
    },
@@ -25704,7 +26016,7 @@
        },
        "json5": {
          "version": "0.5.1",
          "resolved": "https://registry.npmjs.org/json5/-/json5-0.5.1.tgz",
          "resolved": "http://registry.npmjs.org/json5/-/json5-0.5.1.tgz",
          "integrity": "sha1-Hq3nrMASA0rYTiOWdn6tn6VJWCE="
        },
        "loader-utils": {
@@ -25896,6 +26208,15 @@
        "tar-stream": "^1.1.2"
      },
      "dependencies": {
        "postcss": {
          "version": "7.0.14",
          "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
          "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
          "requires": {
            "source-map": "^0.6.1",
            "supports-color": "^6.1.0"
          }
        },
        "pump": {
          "version": "1.0.3",
          "resolved": "https://registry.npmjs.org/pump/-/pump-1.0.3.tgz",
@@ -25905,16 +26226,6 @@
            "once": "^1.3.1"
          }
        },
        "postcss": {
          "version": "7.0.14",
          "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
          "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
          "requires": {
            "chalk": "^2.4.2",
            "source-map": "^0.6.1",
            "supports-color": "^6.1.0"
          }
        },
        "source-map": {
          "version": "0.6.1",
          "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz",
@@ -25930,209 +26241,6 @@
        }
      }
    },
    "cssnano-util-same-parent": {
      "version": "4.0.1",
      "resolved": "https://registry.npmjs.org/cssnano-util-same-parent/-/cssnano-util-same-parent-4.0.1.tgz",
      "integrity": "sha512-WcKx5OY+KoSIAxBW6UBBRay1U6vkYheCdjyVNDm85zt5K9mHoGOfsOsqIszfAqrQQFIIKgjh2+FDgIj/zsl21Q=="
    },
    "csso": {
      "version": "3.5.1",
      "resolved": "https://registry.npmjs.org/csso/-/csso-3.5.1.tgz",
      "integrity": "sha512-vrqULLffYU1Q2tLdJvaCYbONStnfkfimRxXNaGjxMldI0C7JPBC4rB1RyjhfdZ4m1frm8pM9uRPKH3d2knZ8gg==",
      "requires": {
        "css-tree": "1.0.0-alpha.29"
      },
      "dependencies": {
        "css-tree": {
          "version": "1.0.0-alpha.29",
          "resolved": "https://registry.npmjs.org/css-tree/-/css-tree-1.0.0-alpha.29.tgz",
          "integrity": "sha512-sRNb1XydwkW9IOci6iB2xmy8IGCj6r/fr+JWitvJ2JxQRPzN3T4AGGVWCMlVmVwM1gtgALJRmGIlWv5ppnGGkg==",
          "requires": {
            "mdn-data": "~1.1.0",
            "source-map": "^0.5.3"
          }
        }
      }
    },
    "csstype": {
      "version": "2.6.0",
      "resolved": "https://registry.npmjs.org/csstype/-/csstype-2.6.0.tgz",
      "integrity": "sha512-by8hi8BlLbowQq0qtkx54d9aN73R9oUW20HISpka5kmgsR9F7nnxgfsemuR2sdCKZh+CDNf5egW9UZMm4mgJRg=="
    },
    "currently-unhandled": {
      "version": "0.4.1",
      "resolved": "https://registry.npmjs.org/currently-unhandled/-/currently-unhandled-0.4.1.tgz",
      "integrity": "sha1-mI3zP+qxke95mmE2nddsF635V+o=",
      "requires": {
        "array-find-index": "^1.0.1"
      }
    },
    "cwebp-bin": {
      "version": "5.0.0",
      "resolved": "https://registry.npmjs.org/cwebp-bin/-/cwebp-bin-5.0.0.tgz",
      "integrity": "sha512-7//DAQG0yFr+YGrQ0of50sPlPm+8mIRv1TGxXtlOeq1S0Y56iY2lHlX/aLz+AOTWH/2YVNthNtH97pxRl7q33A==",
      "requires": {
        "bin-build": "^3.0.0",
        "bin-wrapper": "^4.0.1",
        "logalot": "^2.1.0"
      }
    },
    "cyclist": {
      "version": "0.2.2",
      "resolved": "https://registry.npmjs.org/cyclist/-/cyclist-0.2.2.tgz",
      "integrity": "sha1-GzN5LhHpFKL9bW7WRHRkRE5fpkA="
    },
    "damerau-levenshtein": {
      "version": "1.0.4",
      "resolved": "https://registry.npmjs.org/damerau-levenshtein/-/damerau-levenshtein-1.0.4.tgz",
      "integrity": "sha1-AxkcQyy27qFou3fzpV/9zLiXhRQ="
    },
    "dashdash": {
      "version": "1.14.1",
      "resolved": "https://registry.npmjs.org/dashdash/-/dashdash-1.14.1.tgz",
      "integrity": "sha1-hTz6D3y+L+1d4gMmuN1YEDX24vA=",
      "requires": {
        "assert-plus": "^1.0.0"
      }
    },
    "date-now": {
      "version": "0.1.4",
      "resolved": "https://registry.npmjs.org/date-now/-/date-now-0.1.4.tgz",
      "integrity": "sha1-6vQ5/U1ISK105cx9vvIAZyueNFs="
    },
    "debug": {
      "version": "3.2.6",
      "resolved": "https://registry.npmjs.org/debug/-/debug-3.2.6.tgz",
      "integrity": "sha512-mel+jf7nrtEl5Pn1Qx46zARXKDpBbvzezse7p7LqINmdoIk8PYP5SySaxEmYv6TZ0JyEKA1hsCId6DIhgITtWQ==",
      "requires": {
        "ms": "^2.1.1"
      }
    },
    "decamelize": {
      "version": "1.2.0",
      "resolved": "https://registry.npmjs.org/decamelize/-/decamelize-1.2.0.tgz",
      "integrity": "sha1-9lNNFRSCabIDUue+4m9QH5oZEpA="
    },
    "decode-uri-component": {
      "version": "0.2.0",
      "resolved": "https://registry.npmjs.org/decode-uri-component/-/decode-uri-component-0.2.0.tgz",
      "integrity": "sha1-6zkTMzRYd1y4TNGh+uBiEGu4dUU="
    },
    "decompress": {
      "version": "4.2.0",
      "resolved": "https://registry.npmjs.org/decompress/-/decompress-4.2.0.tgz",
      "integrity": "sha1-eu3YVCflqS2s/lVnSnxQXpbQH50=",
      "requires": {
        "decompress-tar": "^4.0.0",
        "decompress-tarbz2": "^4.0.0",
        "decompress-targz": "^4.0.0",
        "decompress-unzip": "^4.0.1",
        "graceful-fs": "^4.1.10",
        "make-dir": "^1.0.0",
        "pify": "^2.3.0",
        "strip-dirs": "^2.0.0"
      },
      "dependencies": {
        "pify": {
          "version": "2.3.0",
          "resolved": "http://registry.npmjs.org/pify/-/pify-2.3.0.tgz",
          "integrity": "sha1-7RQaasBDqEnqWISY59yosVMw6Qw="
        }
      }
    },
    "decompress-response": {
      "version": "3.3.0",
      "resolved": "https://registry.npmjs.org/decompress-response/-/decompress-response-3.3.0.tgz",
      "integrity": "sha1-gKTdMjdIOEv6JICDYirt7Jgq3/M=",
      "requires": {
        "mimic-response": "^1.0.0"
      }
    },
    "decompress-tar": {
      "version": "4.1.1",
      "resolved": "https://registry.npmjs.org/decompress-tar/-/decompress-tar-4.1.1.tgz",
      "integrity": "sha512-JdJMaCrGpB5fESVyxwpCx4Jdj2AagLmv3y58Qy4GE6HMVjWz1FeVQk1Ct4Kye7PftcdOo/7U7UKzYBJgqnGeUQ==",
      "requires": {
        "file-type": "^5.2.0",
        "is-stream": "^1.1.0",
        "tar-stream": "^1.5.2"
      },
      "dependencies": {
        "file-type": {
          "version": "5.2.0",
          "resolved": "https://registry.npmjs.org/file-type/-/file-type-5.2.0.tgz",
          "integrity": "sha1-LdvqfHP/42No365J3DOMBYwritY="
        }
      }
    },
    "decompress-tarbz2": {
      "version": "4.1.1",
      "resolved": "https://registry.npmjs.org/decompress-tarbz2/-/decompress-tarbz2-4.1.1.tgz",
      "integrity": "sha512-s88xLzf1r81ICXLAVQVzaN6ZmX4A6U4z2nMbOwobxkLoIIfjVMBg7TeguTUXkKeXni795B6y5rnvDw7rxhAq9A==",
      "requires": {
        "decompress-tar": "^4.1.0",
        "file-type": "^6.1.0",
        "is-stream": "^1.1.0",
        "seek-bzip": "^1.0.5",
        "unbzip2-stream": "^1.0.9"
      },
      "dependencies": {
        "file-type": {
          "version": "6.2.0",
          "resolved": "https://registry.npmjs.org/file-type/-/file-type-6.2.0.tgz",
          "integrity": "sha512-YPcTBDV+2Tm0VqjybVd32MHdlEGAtuxS3VAYsumFokDSMG+ROT5wawGlnHDoz7bfMcMDt9hxuXvXwoKUx2fkOg=="
        }
      }
    },
    "decompress-targz": {
      "version": "4.1.1",
      "resolved": "https://registry.npmjs.org/decompress-targz/-/decompress-targz-4.1.1.tgz",
      "integrity": "sha512-4z81Znfr6chWnRDNfFNqLwPvm4db3WuZkqV+UgXQzSngG3CEKdBkw5jrv3axjjL96glyiiKjsxJG3X6WBZwX3w==",
      "requires": {
        "decompress-tar": "^4.1.1",
        "file-type": "^5.2.0",
        "is-stream": "^1.1.0"
      },
      "dependencies": {
        "file-type": {
          "version": "5.2.0",
          "resolved": "https://registry.npmjs.org/file-type/-/file-type-5.2.0.tgz",
          "integrity": "sha1-LdvqfHP/42No365J3DOMBYwritY="
        }
      }
    },
    "decompress-unzip": {
      "version": "4.0.1",
      "resolved": "https://registry.npmjs.org/decompress-unzip/-/decompress-unzip-4.0.1.tgz",
      "integrity": "sha1-3qrM39FK6vhVePczroIQ+bSEj2k=",
      "requires": {
        "file-type": "^3.8.0",
        "get-stream": "^2.2.0",
        "pify": "^2.3.0",
        "yauzl": "^2.4.2"
      },
      "dependencies": {
        "file-type": {
          "version": "3.9.0",
          "resolved": "http://registry.npmjs.org/file-type/-/file-type-3.9.0.tgz",
          "integrity": "sha1-JXoHg4TR24CHvESdEH1SpSZyuek="
        },
        "get-stream": {
          "version": "2.3.1",
          "resolved": "http://registry.npmjs.org/get-stream/-/get-stream-2.3.1.tgz",
          "integrity": "sha1-Xzj5PzRgCWZu4BUKBUFn+Rvdld4=",
          "requires": {
            "object-assign": "^4.0.1",
            "pinkie-promise": "^2.0.0"
          }
        },
        "pify": {
          "version": "2.3.0",
          "resolved": "http://registry.npmjs.org/pify/-/pify-2.3.0.tgz",
          "integrity": "sha1-7RQaasBDqEnqWISY59yosVMw6Qw="
        }
      }
    },
    "tar-stream": {
      "version": "1.6.2",
      "resolved": "https://registry.npmjs.org/tar-stream/-/tar-stream-1.6.2.tgz",
@@ -26322,6 +26430,11 @@
        "rimraf": "^3.0.0"
      },
      "dependencies": {
        "ms": {
          "version": "2.0.0",
          "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
          "integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g="
        },
        "rimraf": {
          "version": "3.0.2",
          "resolved": "https://registry.npmjs.org/rimraf/-/rimraf-3.0.2.tgz",
@@ -26329,11 +26442,6 @@
          "requires": {
            "glob": "^7.1.3"
          }
        },
        "ms": {
          "version": "2.0.0",
          "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
          "integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g="
        }
      }
    },
@@ -28273,6 +28381,20 @@
          "resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-3.0.0.tgz",
          "integrity": "sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg=="
        },
        "postcss": {
          "version": "7.0.14",
          "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
          "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
          "requires": {
            "source-map": "^0.6.1",
            "supports-color": "^6.1.0"
          }
        },
        "source-map": {
          "version": "0.6.1",
          "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz",
          "integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g=="
        },
        "string-width": {
          "version": "4.2.0",
          "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.0.tgz",
@@ -28291,21 +28413,6 @@
            "ansi-regex": "^5.0.0"
          }
        },
        "postcss": {
          "version": "7.0.14",
          "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
          "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
          "requires": {
            "chalk": "^2.4.2",
            "source-map": "^0.6.1",
            "supports-color": "^6.1.0"
          }
        },
        "source-map": {
          "version": "0.6.1",
          "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz",
          "integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g=="
        },
        "supports-color": {
          "version": "6.1.0",
          "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-6.1.0.tgz",
@@ -28508,6 +28615,20 @@
            "number-is-nan": "^1.0.0"
          }
        },
        "postcss": {
          "version": "7.0.14",
          "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
          "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
          "requires": {
            "source-map": "^0.6.1",
            "supports-color": "^6.1.0"
          }
        },
        "source-map": {
          "version": "0.6.1",
          "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz",
          "integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g=="
        },
        "string-width": {
          "version": "1.0.2",
          "resolved": "http://registry.npmjs.org/string-width/-/string-width-1.0.2.tgz",
@@ -28518,21 +28639,6 @@
            "strip-ansi": "^3.0.0"
          }
        },
        "postcss": {
          "version": "7.0.14",
          "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
          "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
          "requires": {
            "chalk": "^2.4.2",
            "source-map": "^0.6.1",
            "supports-color": "^6.1.0"
          }
        },
        "source-map": {
          "version": "0.6.1",
          "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz",
          "integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g=="
        },
        "supports-color": {
          "version": "6.1.0",
          "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-6.1.0.tgz",

@@ -25,6 +25,7 @@
        "gatsby-plugin-plausible": "0.0.6",
        "gatsby-plugin-react-helmet": "^3.0.6",
        "gatsby-plugin-react-svg": "^2.1.2",
        "gatsby-plugin-robots-txt": "^1.5.5",
        "gatsby-plugin-sass": "^2.0.10",
        "gatsby-plugin-sharp": "^2.0.20",
        "gatsby-plugin-sitemap": "^2.0.5",
@@ -52,6 +53,7 @@
    "scripts": {
        "build": "gatsby build",
        "dev": "gatsby develop",
        "dev:legacy": "SPACY_LEGACY=1 npm run dev",
        "lint": "eslint **",
        "clear": "rm -rf .cache",
        "test": "echo \"Write tests! -> https://gatsby.app/unit-testing\""

@@ -1,8 +1,10 @@
import React, { Fragment } from 'react'
import classNames from 'classnames'

import pattern from '../images/pattern_blue.jpg'
import patternOverlay from '../images/pattern_landing.jpg'
import patternDefault from '../images/pattern_blue.jpg'
import overlayDefault from '../images/pattern_landing.jpg'
import patternLegacy from '../images/pattern_legacy.jpg'
import overlayLegacy from '../images/pattern_landing_legacy.jpg'
import logoSvgs from '../images/logos'

import Grid from './grid'
@@ -14,9 +16,11 @@ import Link from './link'
import { chunkArray } from './util'
import classes from '../styles/landing.module.sass'

export const LandingHeader = ({ style = {}, children }) => {
export const LandingHeader = ({ style = {}, children, legacy }) => {
    const pattern = legacy ? patternLegacy : patternDefault
    const overlay = legacy ? overlayLegacy : overlayDefault
    const wrapperStyle = { backgroundImage: `url(${pattern})` }
    const contentStyle = { backgroundImage: `url(${patternOverlay})`, ...style }
    const contentStyle = { backgroundImage: `url(${overlay})`, ...style }
    return (
        <header className={classes.header}>
            <div className={classes.headerWrapper} style={wrapperStyle}>

@@ -5,9 +5,15 @@ import classNames from 'classnames'
import patternBlue from '../images/pattern_blue.jpg'
import patternGreen from '../images/pattern_green.jpg'
import patternPurple from '../images/pattern_purple.jpg'
import patternLegacy from '../images/pattern_legacy.jpg'
import classes from '../styles/main.module.sass'

const patterns = { blue: patternBlue, green: patternGreen, purple: patternPurple }
const patterns = {
    blue: patternBlue,
    green: patternGreen,
    purple: patternPurple,
    legacy: patternLegacy,
}

export const Content = ({ Component = 'div', className, children }) => (
    <Component className={classNames(classes.content, className)}>{children}</Component>

@@ -6,10 +6,12 @@ import { StaticQuery, graphql } from 'gatsby'
import socialImageDefault from '../images/social_default.jpg'
import socialImageApi from '../images/social_api.jpg'
import socialImageUniverse from '../images/social_universe.jpg'
import socialImageLegacy from '../images/social_legacy.jpg'

function getPageTitle(title, sitename, slogan, sectionTitle) {
function getPageTitle(title, sitename, slogan, sectionTitle, legacy) {
    if (sectionTitle && title) {
        return `${title} · ${sitename} ${sectionTitle}`
        const suffix = legacy ? ' (legacy)' : ''
        return `${title} · ${sitename} ${sectionTitle}${suffix}`
    }
    if (title) {
        return `${title} · ${sitename}`
@@ -17,7 +19,8 @@ function getPageTitle(title, sitename, slogan, sectionTitle) {
    return `${sitename} · ${slogan}`
}

function getImage(section) {
function getImage(section, legacy) {
    if (legacy) return socialImageLegacy
    if (section === 'api') return socialImageApi
    if (section === 'universe') return socialImageUniverse
    return socialImageDefault
@@ -29,13 +32,15 @@ const SEO = ({ description, lang, title, section, sectionTitle, bodyClass }) =>
        render={data => {
            const siteMetadata = data.site.siteMetadata
            const metaDescription = description || siteMetadata.description
            const legacy = siteMetadata.legacy
            const pageTitle = getPageTitle(
                title,
                siteMetadata.title,
                siteMetadata.slogan,
                sectionTitle
                sectionTitle,
                legacy
            )
            const socialImage = siteMetadata.siteUrl + getImage(section)
            const socialImage = siteMetadata.siteUrl + getImage(section, legacy)
            const meta = [
                {
                    name: 'description',
@@ -125,6 +130,7 @@ const query = graphql`
        site {
            siteMetadata {
                title
                legacy
                description
                slogan
                siteUrl

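The pair of helpers above route the new legacy flag into both the page title and the social-sharing image. A minimal sketch of the resulting behaviour, reusing the helper names from the diff with invented sample inputs:

    // Illustrative only: the inputs are hypothetical; the helpers are the ones patched above.
    getPageTitle('Doc', 'spaCy', 'Industrial-strength NLP', 'API', true)
    // -> 'Doc · spaCy API (legacy)'
    getImage('usage', true)
    // -> socialImageLegacy (the legacy flag wins over the per-section images)
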
@@ -6,6 +6,7 @@ import siteMetadata from '../../meta/site.json'

const htmlToReactParser = new HtmlToReactParser()

export const defaultBranch = siteMetadata.codeBranch
export const repo = siteMetadata.repo
export const modelsRepo = siteMetadata.modelsRepo

@@ -18,11 +19,11 @@ export const headingTextClassName = 'heading-text'
/**
 * Create a link to the spaCy repository on GitHub
 * @param {string} filepath - The file path relative to the root of the repo.
 * @param {string} [branch] - Optional branch. Defaults to master.
 * @param {string} [branch] - Optional branch.
 * @returns {string} - URL to the file on GitHub.
 */
export function github(filepath, branch = 'master') {
    const path = filepath ? '/tree/' + (branch || 'master') + '/' + filepath : ''
export function github(filepath, branch = defaultBranch) {
    const path = filepath ? '/tree/' + (branch || defaultBranch) + '/' + filepath : ''
    return `https://github.com/${repo}${path}`
}

@@ -30,9 +31,9 @@ export function github(filepath, branch = 'master') {
 * Get the source of a file in the documentation based on its slug
 * @param {string} slug - The slug, e.g. /api/doc.
 * @param {boolean} [isIndex] - Whether the page is an index, e.g. /api/index.md
 * @param {string} [branch] - Optional branch on GitHub. Defaults to master.
 * @param {string} [branch] - Optional branch on GitHub.
 */
export function getCurrentSource(slug, isIndex = false, branch = 'master') {
export function getCurrentSource(slug, isIndex = false, branch = defaultBranch) {
    const ext = isIndex ? '/index.md' : '.md'
    return github(`website/docs${slug}${ext}`, branch)
}

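With defaultBranch read from meta/site.json, source links track whatever branch the site was built from instead of hard-coding master. A hedged sketch of how the helper now resolves, assuming codeBranch were set to 'v2.x' (that value is an assumption for illustration, not taken from this commit):

    // Illustrative only: assumes siteMetadata.codeBranch === 'v2.x' and repo === 'explosion/spaCy'.
    github('website/docs/api/doc.md')
    // -> 'https://github.com/explosion/spaCy/tree/v2.x/website/docs/api/doc.md'
    github()
    // -> 'https://github.com/explosion/spaCy' (no filepath, so no /tree/ segment is added)
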
BIN website/src/images/icon_legacy.png Normal file
Binary file not shown. After Width: | Height: | Size: 12 KiB

BIN website/src/images/pattern_landing_legacy.jpg Normal file
Binary file not shown. After Width: | Height: | Size: 86 KiB

BIN website/src/images/pattern_legacy.jpg Normal file
Binary file not shown. After Width: | Height: | Size: 106 KiB

BIN website/src/images/social_legacy.jpg Normal file
Binary file not shown. After Width: | Height: | Size: 199 KiB

@@ -8,6 +8,9 @@
    font: var(--font-size-sm)/var(--line-height-md) var(--font-primary)
    text-align: center
    padding: 1rem
    box-shadow: var(--box-shadow)
    border-top: 2px solid
    color: var(--color-theme)

.warning
    --alert-bg: var(--color-yellow-light)

@@ -47,6 +47,11 @@
    --color-theme-purple-light: hsla(255, 61%, 54%, 0.06)
    --color-theme-purple-opaque: hsla(255, 61%, 54%, 0.11)

    --color-theme-legacy: #6f6f6f
    --color-theme-legacy-dark: hsl(257, 0%, 35%)
    --color-theme-legacy-light: hsla(257, 0%, 67%, 0.06)
    --color-theme-legacy-opaque: hsla(257, 0%, 67%, 0.11)

    // Regular colors
    --color-back: hsl(0, 0%, 100%)
    --color-front: hsl(213, 15%, 12%)
@@ -106,6 +111,12 @@
    --color-theme-light: var(--color-theme-purple-light)
    --color-theme-opaque: var(--color-theme-purple-opaque)

.theme-legacy
    --color-theme: var(--color-theme-legacy)
    --color-theme-dark: var(--color-theme-legacy-dark)
    --color-theme-light: var(--color-theme-legacy-light)
    --color-theme-opaque: var(--color-theme-legacy-opaque)


/* Fonts */

@@ -31,7 +31,7 @@ const Docs = ({ pageContext, children }) => (
        theme,
        version,
    } = pageContext
    const { sidebars = [], modelsRepo, languages } = site.siteMetadata
    const { sidebars = [], modelsRepo, languages, legacy } = site.siteMetadata
    const isModels = section === 'models'
    const sidebar = pageContext.sidebar
        ? { items: pageContext.sidebar }
@@ -83,7 +83,7 @@ const Docs = ({ pageContext, children }) => (
    {sidebar && <Sidebar items={sidebar.items} pageMenu={pageMenu} slug={slug} />}
    <Main
        section={section}
        theme={theme}
        theme={legacy ? 'legacy' : theme}
        sidebar
        asides
        wrapContent
@@ -140,6 +140,7 @@ const query = graphql`
    siteMetadata {
        repo
        modelsRepo
        legacy
        languages {
            code
            name

@@ -75,7 +75,7 @@ const scopeComponents = {
    InlineCode,
}

const AlertSpace = () => {
const AlertSpace = ({ legacy }) => {
    const isOnline = useOnlineStatus()
    return (
        <>
@@ -84,6 +84,16 @@ const AlertSpace = () => {
                    But don't worry, your visited pages should be saved for you.
                </Alert>
            )}
            {legacy && (
                <Alert
                    title="You're viewing the legacy documentation."
                    icon="warning"
                    closeOnClick={false}
                >
                    This page reflects an older version of spaCy, not the latest{' '}
                    <Link to="https://spacy.io">stable release</Link>.
                </Alert>
            )}
        </>
    )
}
@@ -131,8 +141,9 @@ class Layout extends React.Component {
        const { file, site = {} } = data || {}
        const mdx = file ? file.childMdx : null
        const { title, section, sectionTitle, teaser, theme = 'blue', searchExclude } = pageContext
        const bodyClass = classNames(`theme-${theme}`, { 'search-exclude': !!searchExclude })
        const meta = site.siteMetadata || {}
        const uiTheme = meta.legacy ? 'legacy' : theme
        const bodyClass = classNames(`theme-${uiTheme}`, { 'search-exclude': !!searchExclude })
        const isDocs = ['usage', 'models', 'api', 'styleguide'].includes(section)
        const content = !mdx ? null : (
            <MDXProvider components={mdxComponents}>
@@ -149,12 +160,12 @@ class Layout extends React.Component {
                    sectionTitle={sectionTitle}
                    bodyClass={bodyClass}
                />
                <AlertSpace />
                <AlertSpace legacy={meta.legacy} />
                <Navigation
                    title={meta.title}
                    items={meta.navigation}
                    section={section}
                    search={<Search settings={meta.docSearch} />}
                    search={meta.legacy ? null : <Search settings={meta.docSearch} />}
                >
                    <Progress key={location.href} />
                </Navigation>
@@ -186,6 +197,7 @@ export const pageQuery = graphql`
        siteMetadata {
            title
            description
            legacy
            navigation {
                text
                url

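Across AlertSpace, Layout and the page query above, a single siteMetadata.legacy flag drives every UI change: the body theme class, the persistent warning alert and the removal of search. A condensed sketch of that resolution as a standalone function; it is not part of the commit, just a summary of the logic:

    // Illustrative only: condenses the legacy handling spread across the diff above.
    function resolveUi({ legacy, theme = 'blue' }) {
        return {
            bodyClass: `theme-${legacy ? 'legacy' : theme}`, // grey legacy palette overrides page themes
            search: !legacy, // the docs search box is dropped on the legacy build
            alert: legacy ? "You're viewing the legacy documentation." : null,
        }
    }
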
@@ -30,8 +30,8 @@ function filterResources(resources, data) {
    return sorted.filter(res => (res.category || []).includes(data.id))
}

const UniverseContent = ({ content = [], categories, pageContext, location, mdxComponents }) => {
    const { theme, data = {} } = pageContext
const UniverseContent = ({ content = [], categories, pageContext, mdxComponents, theme }) => {
    const { data = {} } = pageContext
    const filteredResources = filterResources(content, data)
    const activeData = data ? content.find(({ id }) => id === data.id) : null
    const markdownComponents = { ...mdxComponents, code: InlineCode }
@@ -304,6 +304,7 @@ const Universe = ({ pageContext, location, mdxComponents }) => (
        render={data => {
            const content = data.site.siteMetadata.universe.resources
            const categories = data.site.siteMetadata.universe.categories
            const theme = data.site.siteMetadata.legacy ? 'legacy' : pageContext.theme
            return (
                <UniverseContent
                    content={content}
@@ -311,6 +312,7 @@ const Universe = ({ pageContext, location, mdxComponents }) => (
                    pageContext={pageContext}
                    location={location}
                    mdxComponents={mdxComponents}
                    theme={theme}
                />
            )
        }}
@@ -323,6 +325,7 @@ const query = graphql`
    query UniverseQuery {
        site {
            siteMetadata {
                legacy
                universe {
                    resources {
                        type

@@ -69,7 +69,7 @@ const Landing = ({ data }) => {
    const counts = getCounts(data.languages)
    return (
        <>
            <LandingHeader>
            <LandingHeader legacy={data.legacy}>
                <LandingTitle>
                    Industrial-Strength
                    <br />
@@ -150,12 +150,10 @@ const Landing = ({ data }) => {

            <LandingBannerGrid>
                <LandingBanner
                    title="spaCy v3.0 nightly: Transformer-based pipelines, new training system, project templates & more"
                    label="Try the pre-release"
                    to="https://nightly.spacy.io"
                    title="spaCy v3.0: Transformer-based pipelines, new training system, project templates & more"
                    label="Out now"
                    to="https://spacy.io"
                    button="See what's new"
                    background="#8758fe"
                    color="#ffffff"
                    small
                >
                    spaCy v3.0 features all new <strong>transformer-based pipelines</strong> that
@@ -300,6 +298,7 @@ const landingQuery = graphql`
    query LandingQuery {
        site {
            siteMetadata {
                legacy
                repo
                languages {
                    models