From bf92625edead3707038fbd00e0b249ebe1f04855 Mon Sep 17 00:00:00 2001
From: Ines Montani
Date: Fri, 26 Apr 2019 13:19:50 +0200
Subject: [PATCH 01/41] Update from master
---
.github/contributors/bjascob.md | 106 ++++++++++++++++++++++++++++
spacy/cli/evaluate.py | 2 +-
spacy/cli/pretrain.py | 43 +++++++-----
spacy/cli/train.py | 45 +++++++-----
spacy/displacy/__init__.py | 7 +-
spacy/lang/th/__init__.py | 10 ++-
spacy/lang/th/norm_exceptions.py | 114 +++++++++++++++++++++++++++++++
website/meta/universe.json | 22 ++++++
8 files changed, 310 insertions(+), 39 deletions(-)
create mode 100644 .github/contributors/bjascob.md
create mode 100644 spacy/lang/th/norm_exceptions.py
diff --git a/.github/contributors/bjascob.md b/.github/contributors/bjascob.md
new file mode 100644
index 000000000..4870c494a
--- /dev/null
+++ b/.github/contributors/bjascob.md
@@ -0,0 +1,106 @@
+# spaCy contributor agreement
+
+This spaCy Contributor Agreement (**"SCA"**) is based on the
+[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
+The SCA applies to any contribution that you make to any product or project
+managed by us (the **"project"**), and sets out the intellectual property rights
+you grant to us in the contributed materials. The term **"us"** shall mean
+[ExplosionAI GmbH](https://explosion.ai/legal). The term
+**"you"** shall mean the person or entity identified below.
+
+If you agree to be bound by these terms, fill in the information requested
+below and include the filled-in version with your first pull request, under the
+folder [`.github/contributors/`](/.github/contributors/). The name of the file
+should be your GitHub username, with the extension `.md`. For example, the user
+example_user would create the file `.github/contributors/example_user.md`.
+
+Read this agreement carefully before signing. These terms and conditions
+constitute a binding legal agreement.
+
+## Contributor Agreement
+
+1. The term "contribution" or "contributed materials" means any source code,
+object code, patch, tool, sample, graphic, specification, manual,
+documentation, or any other material posted or submitted by you to the project.
+
+2. With respect to any worldwide copyrights, or copyright applications and
+registrations, in your contribution:
+
+ * you hereby assign to us joint ownership, and to the extent that such
+ assignment is or becomes invalid, ineffective or unenforceable, you hereby
+ grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
+ royalty-free, unrestricted license to exercise all rights under those
+ copyrights. This includes, at our option, the right to sublicense these same
+ rights to third parties through multiple levels of sublicensees or other
+ licensing arrangements;
+
+ * you agree that each of us can do all things in relation to your
+ contribution as if each of us were the sole owners, and if one of us makes
+ a derivative work of your contribution, the one who makes the derivative
+    work (or has it made) will be the sole owner of that derivative work;
+
+ * you agree that you will not assert any moral rights in your contribution
+ against us, our licensees or transferees;
+
+ * you agree that we may register a copyright in your contribution and
+ exercise all ownership rights associated with it; and
+
+ * you agree that neither of us has any duty to consult with, obtain the
+ consent of, pay or render an accounting to the other for any use or
+ distribution of your contribution.
+
+3. With respect to any patents you own, or that you can license without payment
+to any third party, you hereby grant to us a perpetual, irrevocable,
+non-exclusive, worldwide, no-charge, royalty-free license to:
+
+ * make, have made, use, sell, offer to sell, import, and otherwise transfer
+ your contribution in whole or in part, alone or in combination with or
+ included in any product, work or materials arising out of the project to
+ which your contribution was submitted, and
+
+ * at our option, to sublicense these same rights to third parties through
+ multiple levels of sublicensees or other licensing arrangements.
+
+4. Except as set out above, you keep all right, title, and interest in your
+contribution. The rights that you grant to us under these terms are effective
+on the date you first submitted a contribution to us, even if your submission
+took place before the date you sign these terms.
+
+5. You covenant, represent, warrant and agree that:
+
+ * Each contribution that you submit is and shall be an original work of
+ authorship and you can legally grant the rights set out in this SCA;
+
+ * to the best of your knowledge, each contribution will not violate any
+ third party's copyrights, trademarks, patents, or other intellectual
+ property rights; and
+
+ * each contribution shall be in compliance with U.S. export control laws and
+ other applicable export and import laws. You agree to notify us if you
+ become aware of any circumstance which would make any of the foregoing
+ representations inaccurate in any respect. We may publicly disclose your
+ participation in the project, including the fact that you have signed the SCA.
+
+6. This SCA is governed by the laws of the State of California and applicable
+U.S. Federal law. Any choice of law rules will not apply.
+
+7. Please place an “x” on one of the applicable statements below. Please do NOT
+mark both statements:
+
+ * [x] I am signing on behalf of myself as an individual and no other person
+ or entity, including my employer, has or will have rights with respect to my
+ contributions.
+
+ * [ ] I am signing on behalf of my employer or a legal entity and I have the
+ actual authority to contractually bind that entity.
+
+## Contributor Details
+
+| Field | Entry |
+|------------------------------- | -------------------- |
+| Name | Brad Jascob |
+| Company name (if applicable) | n/a |
+| Title or role (if applicable) | Software Engineer |
+| Date | 04/25/2019 |
+| GitHub username | bjascob |
+| Website (optional) | n/a |
diff --git a/spacy/cli/evaluate.py b/spacy/cli/evaluate.py
index df391d730..468698e2f 100644
--- a/spacy/cli/evaluate.py
+++ b/spacy/cli/evaluate.py
@@ -17,7 +17,7 @@ from .. import displacy
gpu_id=("Use GPU", "option", "g", int),
displacy_path=("Directory to output rendered parses as HTML", "option", "dp", str),
displacy_limit=("Limit of parses to render as HTML", "option", "dl", int),
- return_scores=("Return dict containing model scores", "flag", "r", bool),
+ return_scores=("Return dict containing model scores", "flag", "R", bool),
)
def evaluate(
model,
diff --git a/spacy/cli/pretrain.py b/spacy/cli/pretrain.py
index 0b316b47c..ef91937a6 100644
--- a/spacy/cli/pretrain.py
+++ b/spacy/cli/pretrain.py
@@ -34,7 +34,8 @@ from .. import util
max_length=("Max words per example.", "option", "xw", int),
min_length=("Min words per example.", "option", "nw", int),
seed=("Seed for random number generators", "option", "s", float),
- nr_iter=("Number of iterations to pretrain", "option", "i", int),
+ n_iter=("Number of iterations to pretrain", "option", "i", int),
+ n_save_every=("Save model every X batches.", "option", "se", int),
)
def pretrain(
texts_loc,
@@ -46,11 +47,12 @@ def pretrain(
loss_func="cosine",
use_vectors=False,
dropout=0.2,
- nr_iter=1000,
+ n_iter=1000,
batch_size=3000,
max_length=500,
min_length=5,
seed=0,
+ n_save_every=None,
):
"""
Pre-train the 'token-to-vector' (tok2vec) layer of pipeline components,
@@ -115,9 +117,26 @@ def pretrain(
msg.divider("Pre-training tok2vec layer")
row_settings = {"widths": (3, 10, 10, 6, 4), "aligns": ("r", "r", "r", "r", "r")}
msg.row(("#", "# Words", "Total Loss", "Loss", "w/s"), **row_settings)
- for epoch in range(nr_iter):
- for batch in util.minibatch_by_words(
- ((text, None) for text in texts), size=batch_size
+
+ def _save_model(epoch, is_temp=False):
+ is_temp_str = ".temp" if is_temp else ""
+ with model.use_params(optimizer.averages):
+ with (output_dir / ("model%d%s.bin" % (epoch, is_temp_str))).open(
+ "wb"
+ ) as file_:
+ file_.write(model.tok2vec.to_bytes())
+ log = {
+ "nr_word": tracker.nr_word,
+ "loss": tracker.loss,
+ "epoch_loss": tracker.epoch_loss,
+ "epoch": epoch,
+ }
+ with (output_dir / "log.jsonl").open("a") as file_:
+ file_.write(srsly.json_dumps(log) + "\n")
+
+ for epoch in range(n_iter):
+ for batch_id, batch in enumerate(
+ util.minibatch_by_words(((text, None) for text in texts), size=batch_size)
):
docs = make_docs(
nlp,
@@ -133,17 +152,9 @@ def pretrain(
msg.row(progress, **row_settings)
if texts_loc == "-" and tracker.words_per_epoch[epoch] >= 10 ** 7:
break
- with model.use_params(optimizer.averages):
- with (output_dir / ("model%d.bin" % epoch)).open("wb") as file_:
- file_.write(model.tok2vec.to_bytes())
- log = {
- "nr_word": tracker.nr_word,
- "loss": tracker.loss,
- "epoch_loss": tracker.epoch_loss,
- "epoch": epoch,
- }
- with (output_dir / "log.jsonl").open("a") as file_:
- file_.write(srsly.json_dumps(log) + "\n")
+ if n_save_every and (batch_id % n_save_every == 0):
+ _save_model(epoch, is_temp=True)
+ _save_model(epoch)
tracker.epoch_loss = 0.0
if texts_loc != "-":
# Reshuffle the texts if texts were loaded from a file
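The refactor above hoists model saving into a `_save_model` helper so a checkpoint can be written both periodically (every `n_save_every` batches, with a `.temp` suffix) and once at the end of each epoch. The save schedule in isolation looks like this (an illustrative sketch; the names mirror the patch, but this generator is not part of spaCy):

```python
def checkpoint_schedule(n_batches, n_save_every):
    """Yield (batch_id, is_temp) pairs at which the loop above saves a model."""
    for batch_id in range(n_batches):
        # Periodic in-epoch checkpoint, written as model{epoch}.temp.bin
        if n_save_every and batch_id % n_save_every == 0:
            yield batch_id, True
    # Final save at the end of the epoch, written as model{epoch}.bin
    yield n_batches - 1, False
```

Note that batch 0 also triggers a temporary save, since `0 % n_save_every == 0`; passing `n_save_every=None` (the default) disables the periodic checkpoints entirely.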
diff --git a/spacy/cli/train.py b/spacy/cli/train.py
index 5cf0f5f6f..63c6242de 100644
--- a/spacy/cli/train.py
+++ b/spacy/cli/train.py
@@ -35,7 +35,12 @@ from .. import about
pipeline=("Comma-separated names of pipeline components", "option", "p", str),
vectors=("Model to load vectors from", "option", "v", str),
n_iter=("Number of iterations", "option", "n", int),
- early_stopping_iter=("Maximum number of training epochs without dev accuracy improvement", "option", "e", int),
+ n_early_stopping=(
+ "Maximum number of training epochs without dev accuracy improvement",
+ "option",
+ "ne",
+ int,
+ ),
n_examples=("Number of examples", "option", "ns", int),
use_gpu=("Use GPU", "option", "g", int),
version=("Model version", "option", "V", str),
@@ -75,7 +80,7 @@ def train(
pipeline="tagger,parser,ner",
vectors=None,
n_iter=30,
- early_stopping_iter=None,
+ n_early_stopping=None,
n_examples=0,
use_gpu=-1,
version="0.0.0",
@@ -226,7 +231,7 @@ def train(
msg.row(["-" * width for width in row_settings["widths"]], **row_settings)
try:
iter_since_best = 0
- best_score = 0.
+ best_score = 0.0
for i in range(n_iter):
train_docs = corpus.train_docs(
nlp, noise_level=noise_level, gold_preproc=gold_preproc, max_length=0
@@ -335,17 +340,23 @@ def train(
gpu_wps=gpu_wps,
)
msg.row(progress, **row_settings)
- # early stopping
- if early_stopping_iter is not None:
+ # Early stopping
+ if n_early_stopping is not None:
current_score = _score_for_model(meta)
if current_score < best_score:
iter_since_best += 1
else:
iter_since_best = 0
best_score = current_score
- if iter_since_best >= early_stopping_iter:
- msg.text("Early stopping, best iteration is: {}".format(i-iter_since_best))
- msg.text("Best score = {}; Final iteration score = {}".format(best_score, current_score))
+ if iter_since_best >= n_early_stopping:
+ msg.text(
+ "Early stopping, best iteration "
+ "is: {}".format(i - iter_since_best)
+ )
+ msg.text(
+ "Best score = {}; Final iteration "
+ "score = {}".format(best_score, current_score)
+ )
break
finally:
with nlp.use_params(optimizer.averages):
@@ -356,19 +367,21 @@ def train(
best_model_path = _collate_best_model(meta, output_path, nlp.pipe_names)
msg.good("Created best model", best_model_path)
+
def _score_for_model(meta):
""" Returns mean score between tasks in pipeline that can be used for early stopping. """
mean_acc = list()
- pipes = meta['pipeline']
- acc = meta['accuracy']
- if 'tagger' in pipes:
- mean_acc.append(acc['tags_acc'])
- if 'parser' in pipes:
- mean_acc.append((acc['uas']+acc['las']) / 2)
- if 'ner' in pipes:
- mean_acc.append((acc['ents_p']+acc['ents_r']+acc['ents_f']) / 3)
+ pipes = meta["pipeline"]
+ acc = meta["accuracy"]
+ if "tagger" in pipes:
+ mean_acc.append(acc["tags_acc"])
+ if "parser" in pipes:
+ mean_acc.append((acc["uas"] + acc["las"]) / 2)
+ if "ner" in pipes:
+ mean_acc.append((acc["ents_p"] + acc["ents_r"] + acc["ents_f"]) / 3)
return sum(mean_acc) / len(mean_acc)
+
@contextlib.contextmanager
def _create_progress_bar(total):
if int(os.environ.get("LOG_FRIENDLY", 0)):
diff --git a/spacy/displacy/__init__.py b/spacy/displacy/__init__.py
index fadbaaa7e..b651c0996 100644
--- a/spacy/displacy/__init__.py
+++ b/spacy/displacy/__init__.py
@@ -19,7 +19,7 @@ RENDER_WRAPPER = None
def render(
- docs, style="dep", page=False, minify=False, jupyter=False, options={}, manual=False
+ docs, style="dep", page=False, minify=False, jupyter=None, options={}, manual=False
):
"""Render displaCy visualisation.
@@ -27,7 +27,7 @@ def render(
style (unicode): Visualisation style, 'dep' or 'ent'.
page (bool): Render markup as full HTML page.
minify (bool): Minify HTML markup.
- jupyter (bool): Experimental, use Jupyter's `display()` to output markup.
+ jupyter (bool): Override Jupyter auto-detection.
options (dict): Visualiser-specific options, e.g. colors.
manual (bool): Don't parse `Doc` and instead expect a dict/list of dicts.
RETURNS (unicode): Rendered HTML markup.
@@ -53,7 +53,8 @@ def render(
html = _html["parsed"]
if RENDER_WRAPPER is not None:
html = RENDER_WRAPPER(html)
- if jupyter or is_in_jupyter(): # return HTML rendered by IPython display()
+ if jupyter or (jupyter is None and is_in_jupyter()):
+ # return HTML rendered by IPython display()
from IPython.core.display import display, HTML
return display(HTML(html))
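Changing the default from `jupyter=False` to `jupyter=None` turns the flag into a three-way override: `True` forces notebook output, `False` suppresses it even inside a notebook, and `None` defers to auto-detection. A standalone sketch of the condition, with the notebook check passed in as a value instead of calling `is_in_jupyter()`:

```python
def should_render_jupyter(jupyter, in_jupyter):
    """Mirror the patched condition: jupyter or (jupyter is None and is_in_jupyter())."""
    return bool(jupyter or (jupyter is None and in_jupyter))

# True forces notebook rendering, False suppresses it, None defers to detection:
assert should_render_jupyter(True, False) is True
assert should_render_jupyter(False, True) is False
assert should_render_jupyter(None, True) is True
assert should_render_jupyter(None, False) is False
```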
diff --git a/spacy/lang/th/__init__.py b/spacy/lang/th/__init__.py
index 0bd8333db..ba5b86d77 100644
--- a/spacy/lang/th/__init__.py
+++ b/spacy/lang/th/__init__.py
@@ -4,11 +4,13 @@ from __future__ import unicode_literals
from .tokenizer_exceptions import TOKENIZER_EXCEPTIONS
from .tag_map import TAG_MAP
from .stop_words import STOP_WORDS
+from .norm_exceptions import NORM_EXCEPTIONS
-from ...attrs import LANG
+from ..norm_exceptions import BASE_NORMS
+from ...attrs import LANG, NORM
from ...language import Language
from ...tokens import Doc
-from ...util import DummyTokenizer
+from ...util import DummyTokenizer, add_lookups
class ThaiTokenizer(DummyTokenizer):
@@ -33,7 +35,9 @@ class ThaiTokenizer(DummyTokenizer):
class ThaiDefaults(Language.Defaults):
lex_attr_getters = dict(Language.Defaults.lex_attr_getters)
lex_attr_getters[LANG] = lambda _text: "th"
-
+ lex_attr_getters[NORM] = add_lookups(
+ Language.Defaults.lex_attr_getters[NORM], BASE_NORMS, NORM_EXCEPTIONS
+ )
tokenizer_exceptions = dict(TOKENIZER_EXCEPTIONS)
tag_map = TAG_MAP
stop_words = STOP_WORDS
diff --git a/spacy/lang/th/norm_exceptions.py b/spacy/lang/th/norm_exceptions.py
new file mode 100644
index 000000000..497779cf9
--- /dev/null
+++ b/spacy/lang/th/norm_exceptions.py
@@ -0,0 +1,114 @@
+# coding: utf8
+from __future__ import unicode_literals
+
+
+_exc = {
+    # Tone-mark misspellings: the written tone mark doesn't match the pronounced tone (ผันอักษรและเสียงไม่ตรงกับรูปวรรณยุกต์)
+ "สนุ๊กเกอร์": "สนุกเกอร์",
+    "โน๊ต": "โน้ต",
+    # Misspellings from lazy or hurried typing (สะกดผิดเพราะขี้เกียจพิมพ์ หรือเร่งรีบ)
+ "โทสับ": "โทรศัพท์",
+ "พุ่งนี้": "พรุ่งนี้",
+ # Strange (ให้ดูแปลกตา)
+ "ชะมะ": "ใช่ไหม",
+ "ชิมิ": "ใช่ไหม",
+ "ชะ": "ใช่ไหม",
+ "ช่ายมะ": "ใช่ไหม",
+ "ป่าว": "เปล่า",
+ "ป่ะ": "เปล่า",
+ "ปล่าว": "เปล่า",
+ "คัย": "ใคร",
+ "ไค": "ใคร",
+ "คราย": "ใคร",
+ "เตง": "ตัวเอง",
+ "ตะเอง": "ตัวเอง",
+ "รึ": "หรือ",
+ "เหรอ": "หรือ",
+ "หรา": "หรือ",
+ "หรอ": "หรือ",
+ "ชั้น": "ฉัน",
+ "ชั้ล": "ฉัน",
+ "ช้าน": "ฉัน",
+ "เทอ": "เธอ",
+ "เทอร์": "เธอ",
+ "เทอว์": "เธอ",
+ "แกร": "แก",
+ "ป๋ม": "ผม",
+ "บ่องตง": "บอกตรงๆ",
+ "ถ่ามตง": "ถามตรงๆ",
+ "ต่อมตง": "ตอบตรงๆ",
+ "เพิ่ล": "เพื่อน",
+ "จอบอ": "จอบอ",
+ "ดั้ย": "ได้",
+ "ขอบคุง": "ขอบคุณ",
+ "ยังงัย": "ยังไง",
+ "Inw": "เทพ",
+ "uou": "นอน",
+ "Lกรีeu": "เกรียน",
+ # Misspelled to express emotions (คำที่สะกดผิดเพื่อแสดงอารมณ์)
+ "เปงราย": "เป็นอะไร",
+ "เปนรัย": "เป็นอะไร",
+ "เปงรัย": "เป็นอะไร",
+ "เป็นอัลไล": "เป็นอะไร",
+ "ทามมาย": "ทำไม",
+ "ทามมัย": "ทำไม",
+ "จังรุย": "จังเลย",
+ "จังเยย": "จังเลย",
+ "จุงเบย": "จังเลย",
+    "มะรุ": "ไม่รู้",
+ "เฮ่ย": "เฮ้ย",
+ "เห้ย": "เฮ้ย",
+ "น่าร็อค": "น่ารัก",
+ "น่าร๊าก": "น่ารัก",
+ "ตั้ลล๊าก": "น่ารัก",
+ "คือร๊ะ": "คืออะไร",
+ "โอป่ะ": "โอเคหรือเปล่า",
+ "น่ามคาน": "น่ารำคาญ",
+ "น่ามสาร": "น่าสงสาร",
+ "วงวาร": "สงสาร",
+ "บับว่า": "แบบว่า",
+ "อัลไล": "อะไร",
+ "อิจ": "อิจฉา",
+    # Softened rude words, often spelled to evade profanity filters (คำที่สะกดผิดเพื่อลดความหยาบของคำ หรืออาจใช้หลีกเลี่ยงการกรองคำหยาบของซอฟต์แวร์)
+ "กรู": "กู",
+ "กุ": "กู",
+ "กรุ": "กู",
+ "ตู": "กู",
+ "ตรู": "กู",
+ "มรึง": "มึง",
+ "เมิง": "มึง",
+ "มืง": "มึง",
+ "มุง": "มึง",
+ "สาด": "สัตว์",
+ "สัส": "สัตว์",
+ "สัก": "สัตว์",
+ "แสรด": "สัตว์",
+ "โคโตะ": "โคตร",
+ "โคด": "โคตร",
+ "โครต": "โคตร",
+ "โคตะระ": "โคตร",
+ "พ่อง": "พ่อมึง",
+ "แม่เมิง": "แม่มึง",
+ "เชี่ย": "เหี้ย",
+    # Imitative or emphatic spellings, usually adding a thanthakhat or repeating letters (คำเลียนเสียง โดยส่วนใหญ่จะเพิ่มทัณฑฆาต หรือซ้ำตัวอักษร)
+ "แอร๊ยย": "อ๊าย",
+ "อร๊ายยย": "อ๊าย",
+ "มันส์": "มัน",
+ "วู๊วววววววว์": "วู้",
+ # Acronym (แบบคำย่อ)
+ "หมาลัย": "มหาวิทยาลัย",
+ "วิดวะ": "วิศวะ",
+    "สินสาด": "ศิลปศาสตร์",
+    "สินกำ": "ศิลปกรรมศาสตร์",
+    "เสารีย์": "อนุสาวรีย์ชัยสมรภูมิ",
+    "เมกา": "อเมริกา",
+    "มอไซค์": "มอเตอร์ไซค์",
+}
+
+
+NORM_EXCEPTIONS = {}
+
+for string, norm in _exc.items():
+ NORM_EXCEPTIONS[string] = norm
+ NORM_EXCEPTIONS[string.title()] = norm
+
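In the `th/__init__.py` diff above, `add_lookups` layers `BASE_NORMS` and the new `NORM_EXCEPTIONS` on top of the default `NORM` getter. A simplified sketch of how such a chained lookup behaves (spaCy's real `util.add_lookups` is built on `functools.partial`, but the order — tables checked first-to-last, then the default getter — is the same):

```python
def add_lookups(default, *tables):
    """Sketch of spaCy's util.add_lookups: try each lookup table in order,
    falling back to the default attribute getter if no table has the string."""
    def get_attr(string):
        for table in tables:
            if string in table:
                return table[string]
        return default(string)
    return get_attr

# BASE_NORMS-style table first, language-specific exceptions second:
base_norms = {"’": "'"}
thai_norms = {"โทสับ": "โทรศัพท์"}  # entry from the table above
get_norm = add_lookups(lambda s: s, base_norms, thai_norms)
assert get_norm("โทสับ") == "โทรศัพท์"
assert get_norm("’") == "'"
assert get_norm("cat") == "cat"  # falls through to the default getter
```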
diff --git a/website/meta/universe.json b/website/meta/universe.json
index 29e050964..a6a8bf247 100644
--- a/website/meta/universe.json
+++ b/website/meta/universe.json
@@ -1316,6 +1316,28 @@
"author_links": {
"github": "oterrier"
}
+ },
+ {
+ "id": "pyInflect",
+ "slogan": "A python module for word inflections",
+ "description": "This package uses the [spaCy 2.0 extensions](https://spacy.io/usage/processing-pipelines#extensions) to add word inflections to the system.",
+ "github": "bjascob/pyInflect",
+ "pip": "pyinflect",
+ "code_example": [
+ "import spacy",
+ "import pyinflect",
+ "",
+ "nlp = spacy.load('en_core_web_sm')",
+ "doc = nlp('This is an example.')",
+ "doc[3].tag_ # NN",
+ "doc[3]._.inflect('NNS') # examples"
+ ],
+ "author": "Brad Jascob",
+ "author_links": {
+ "github": "bjascob"
+ },
+ "category": ["pipeline"],
+ "tags": ["inflection"]
}
],
"categories": [
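The early-stopping support added to `train.py` combines two pieces: a mean score across the trained pipes and a patience counter that resets whenever the score matches or beats the best so far. Extracted into a standalone sketch (the scoring mirrors `_score_for_model` above; the `should_stop` harness is illustrative, not part of spaCy):

```python
def score_for_model(meta):
    """Mean accuracy across trained pipes, mirroring _score_for_model above."""
    mean_acc = []
    pipes = meta["pipeline"]
    acc = meta["accuracy"]
    if "tagger" in pipes:
        mean_acc.append(acc["tags_acc"])
    if "parser" in pipes:
        mean_acc.append((acc["uas"] + acc["las"]) / 2)
    if "ner" in pipes:
        mean_acc.append((acc["ents_p"] + acc["ents_r"] + acc["ents_f"]) / 3)
    return sum(mean_acc) / len(mean_acc)


def should_stop(scores, n_early_stopping):
    """True once the score fails to improve for n_early_stopping iterations."""
    best_score = 0.0
    iter_since_best = 0
    for current_score in scores:
        if current_score < best_score:
            iter_since_best += 1
        else:
            iter_since_best = 0
            best_score = current_score
        if iter_since_best >= n_early_stopping:
            return True
    return False
```

For example, `should_stop([0.5, 0.6, 0.59, 0.58, 0.57], 3)` stops after three consecutive iterations below the best score of 0.6.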
From 4762f5606276f77b6cd2c11d4279eaaf5b7bc463 Mon Sep 17 00:00:00 2001
From: Bram Vanroy
Date: Mon, 6 May 2019 21:08:01 +0200
Subject: [PATCH 02/41] Re-added Universe readme (#3688) (closes #3680)
---
website/UNIVERSE.md | 95 +++++++++++++++++++++++++++++++
website/src/templates/universe.js | 2 +-
2 files changed, 96 insertions(+), 1 deletion(-)
create mode 100644 website/UNIVERSE.md
diff --git a/website/UNIVERSE.md b/website/UNIVERSE.md
new file mode 100644
index 000000000..c26c0fce4
--- /dev/null
+++ b/website/UNIVERSE.md
@@ -0,0 +1,95 @@
+
+
+# spaCy Universe
+
+The [spaCy Universe](https://spacy.io/universe) collects the many great resources developed with or for spaCy. It
+includes standalone packages, plugins, extensions, educational materials,
+operational utilities and bindings for other languages.
+
+If you have a project that you want the spaCy community to make use of, you can
+suggest it by submitting a pull request to this repository. The Universe
+database is open-source and collected in a simple JSON file.
+
+Looking for inspiration for your own spaCy plugin or extension? Check out the
+[`project idea`](https://github.com/explosion/spaCy/labels/project%20idea) label
+on the issue tracker.
+
+## Checklist
+
+### Projects
+
+✅ Libraries and packages should be **open-source** (with a user-friendly license) and at least somewhat **documented** (e.g. a simple `README` with usage instructions).
+
+✅ We're happy to include work in progress and prereleases, but we'd like to keep the emphasis on projects that should be useful to the community **right away**.
+
+✅ Demos and visualizers should be available via a **public URL**.
+
+### Educational Materials
+
+✅ Books should be **available for purchase or download** (not just pre-order). Ebooks and self-published books are fine, too, if they include enough substantial content.
+
+✅ The `"url"` of book entries should either point to the publisher's website or a reseller of your choice (ideally one that ships worldwide or as close as possible).
+
+✅ If an online course is only available behind a paywall, it should at least have a **free excerpt** or chapter available, so users know what to expect.
+
+## JSON format
+
+To add a project, fork this repository, edit the [`universe.json`](universe.json)
+and add an object of the following format to the list of `"resources"`. Before
+you submit your pull request, make sure to use a linter to verify that your
+markup is correct. We'll also be adding linting for the `universe.json` to our
+automated GitHub checks soon.
+
+```json
+{
+ "id": "unique-project-id",
+ "title": "Project title",
+ "slogan": "A short summary",
+  "description": "A longer description – *Markdown allowed!*",
+ "github": "user/repo",
+ "pip": "package-name",
+ "code_example": [
+ "import spacy",
+ "import package_name",
+ "",
+ "nlp = spacy.load('en')",
+ "nlp.add_pipe(package_name)"
+ ],
+ "code_language": "python",
+ "url": "https://example.com",
+ "thumb": "https://example.com/thumb.jpg",
+ "image": "https://example.com/image.jpg",
+ "author": "Your Name",
+ "author_links": {
+ "twitter": "username",
+ "github": "username",
+ "website": "https://example.com"
+ },
+ "category": ["pipeline", "standalone"],
+ "tags": ["some-tag", "etc"]
+}
+```
+
+| Field | Type | Description |
+| --- | --- | --- |
+| `id` | string | Unique ID of the project. |
+| `title` | string | Project title. If not set, the `id` will be used as the display title. |
+| `slogan` | string | A short description of the project. Displayed in the overview and under the title. |
+| `description` | string | A longer description of the project. Markdown is allowed, but should be limited to basic formatting like bold, italics, code or links. |
+| `github` | string | Associated GitHub repo in the format `user/repo`. Will be displayed as a link and used for release, license and star badges. |
+| `pip` | string | Package name on pip. If available, the installation command will be displayed. |
+| `cran` | string | For R packages: package name on CRAN. If available, the installation command will be displayed. |
+| `code_example` | array | Short example that shows how to use the project. Formatted as an array with one string per line. |
+| `code_language` | string | Defaults to `'python'`. Optional code language used for syntax highlighting with [Prism](http://prismjs.com/). |
+| `url` | string | Optional project link to display as button. |
+| `thumb` | string | Optional URL to project thumbnail to display in overview and project header. Recommended size is 100x100px. |
+| `image` | string | Optional URL to project image to display with description. |
+| `author` | string | Name(s) of project author(s). |
+| `author_links` | object | Usernames and links to display as icons to author info. Currently supports `twitter` and `github` usernames, as well as `website` link. |
+| `category` | list | One or more categories to assign to project. Must be one of the available options. |
+| `tags` | list | Still experimental and not used for filtering: one or more tags to assign to project. |
+
+To separate them from the projects, educational materials also specify
+`"type": "education"`. Books can also set a `"cover"` field containing a URL
+to a cover image. If available, it's used in the overview and displayed on
+the individual book page.
\ No newline at end of file
diff --git a/website/src/templates/universe.js b/website/src/templates/universe.js
index 644a2de17..379b1a541 100644
--- a/website/src/templates/universe.js
+++ b/website/src/templates/universe.js
@@ -125,7 +125,7 @@ const UniverseContent = ({ content = [], categories, pageContext, location, mdxC
-
+
Read the docs
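The UNIVERSE.md re-added above asks contributors to lint their `universe.json` entry before submitting a pull request. A minimal self-check along those lines might look like this (the specific checks are hypothetical illustrations, not spaCy's actual validation tooling):

```python
import json

def validate_entry(entry):
    """Flag a few common mistakes in a universe.json resource entry."""
    problems = []
    for field in ("id", "slogan"):  # a minimal required-field set for illustration
        if field not in entry:
            problems.append("missing field: %s" % field)
    if not isinstance(entry.get("code_example", []), list):
        problems.append("code_example must be a list of lines")
    unknown = set(entry.get("author_links", {})) - {"twitter", "github", "website"}
    if unknown:
        problems.append("unknown author_links keys: %s" % ", ".join(sorted(unknown)))
    return problems

# Check an entry shaped like the pyInflect one added in the first patch:
entry = json.loads("""
{
  "id": "pyInflect",
  "slogan": "A python module for word inflections",
  "code_example": ["import spacy", "import pyinflect"],
  "author_links": {"github": "bjascob"}
}
""")
assert validate_entry(entry) == []
```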
From 61829f1e79b92890c2cfa0590b0ad667bb831e25 Mon Sep 17 00:00:00 2001
From: Ines Montani
Date: Thu, 9 May 2019 15:36:29 +0200
Subject: [PATCH 03/41] Fix typo
---
website/docs/api/top-level.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/api/top-level.md b/website/docs/api/top-level.md
index 57af729f0..924aca283 100644
--- a/website/docs/api/top-level.md
+++ b/website/docs/api/top-level.md
@@ -351,7 +351,7 @@ the two-letter language code.
| `name` | unicode | Two-letter language code, e.g. `'en'`. |
| `cls` | `Language` | The language class, e.g. `English`. |
-### util.lang_class_is_loaded (#util.lang_class_is_loaded tag="function" new="2.1")
+### util.lang_class_is_loaded {#util.lang_class_is_loaded tag="function" new="2.1"}
Check whether a `Language` class is already loaded. `Language` classes are
loaded lazily, to avoid expensive setup code associated with the language data.
From f256bfbcc407a565897ed5483a552707386aeae1 Mon Sep 17 00:00:00 2001
From: Ines Montani
Date: Fri, 10 May 2019 14:06:06 +0200
Subject: [PATCH 04/41] Add version tag to `--base-model` argument (closes
#3720)
---
website/docs/api/cli.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/api/cli.md b/website/docs/api/cli.md
index 6d3a33c49..d9886004a 100644
--- a/website/docs/api/cli.md
+++ b/website/docs/api/cli.md
@@ -210,7 +210,7 @@ $ python -m spacy train [lang] [output_path] [train_path] [dev_path]
| `output_path` | positional | Directory to store model in. Will be created if it doesn't exist. |
| `train_path` | positional | Location of JSON-formatted training data. Can be a file or a directory of files. |
| `dev_path` | positional | Location of JSON-formatted development data for evaluation. Can be a file or a directory of files. |
-| `--base-model`, `-b` | option | Optional name of base model to update. Can be any loadable spaCy model. |
+| `--base-model`, `-b` 2.1 | option | Optional name of base model to update. Can be any loadable spaCy model. |
| `--pipeline`, `-p` 2.1 | option | Comma-separated names of pipeline components to train. Defaults to `'tagger,parser,ner'`. |
| `--vectors`, `-v` | option | Model to load vectors from. |
| `--n-iter`, `-n` | option | Number of iterations (default: `30`). |
From 914f4b2938be4eb0a7dd4d148854617325fd3f48 Mon Sep 17 00:00:00 2001
From: Aaron Kub
Date: Fri, 10 May 2019 08:23:52 -0400
Subject: [PATCH 05/41] fixing regex matcher examples (#3708) (#3719)
---
.github/contributors/aaronkub.md | 106 ++++++++++++++++++++++
website/docs/usage/rule-based-matching.md | 5 +-
2 files changed, 109 insertions(+), 2 deletions(-)
create mode 100644 .github/contributors/aaronkub.md
diff --git a/.github/contributors/aaronkub.md b/.github/contributors/aaronkub.md
new file mode 100644
index 000000000..c2a7f494e
--- /dev/null
+++ b/.github/contributors/aaronkub.md
@@ -0,0 +1,106 @@
+# spaCy contributor agreement
+
+This spaCy Contributor Agreement (**"SCA"**) is based on the
+[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
+The SCA applies to any contribution that you make to any product or project
+managed by us (the **"project"**), and sets out the intellectual property rights
+you grant to us in the contributed materials. The term **"us"** shall mean
+[ExplosionAI GmbH](https://explosion.ai/legal). The term
+**"you"** shall mean the person or entity identified below.
+
+If you agree to be bound by these terms, fill in the information requested
+below and include the filled-in version with your first pull request, under the
+folder [`.github/contributors/`](/.github/contributors/). The name of the file
+should be your GitHub username, with the extension `.md`. For example, the user
+example_user would create the file `.github/contributors/example_user.md`.
+
+Read this agreement carefully before signing. These terms and conditions
+constitute a binding legal agreement.
+
+## Contributor Agreement
+
+1. The term "contribution" or "contributed materials" means any source code,
+object code, patch, tool, sample, graphic, specification, manual,
+documentation, or any other material posted or submitted by you to the project.
+
+2. With respect to any worldwide copyrights, or copyright applications and
+registrations, in your contribution:
+
+ * you hereby assign to us joint ownership, and to the extent that such
+ assignment is or becomes invalid, ineffective or unenforceable, you hereby
+ grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
+ royalty-free, unrestricted license to exercise all rights under those
+ copyrights. This includes, at our option, the right to sublicense these same
+ rights to third parties through multiple levels of sublicensees or other
+ licensing arrangements;
+
+ * you agree that each of us can do all things in relation to your
+ contribution as if each of us were the sole owners, and if one of us makes
+ a derivative work of your contribution, the one who makes the derivative
+    work (or has it made) will be the sole owner of that derivative work;
+
+ * you agree that you will not assert any moral rights in your contribution
+ against us, our licensees or transferees;
+
+ * you agree that we may register a copyright in your contribution and
+ exercise all ownership rights associated with it; and
+
+ * you agree that neither of us has any duty to consult with, obtain the
+ consent of, pay or render an accounting to the other for any use or
+ distribution of your contribution.
+
+3. With respect to any patents you own, or that you can license without payment
+to any third party, you hereby grant to us a perpetual, irrevocable,
+non-exclusive, worldwide, no-charge, royalty-free license to:
+
+ * make, have made, use, sell, offer to sell, import, and otherwise transfer
+ your contribution in whole or in part, alone or in combination with or
+ included in any product, work or materials arising out of the project to
+ which your contribution was submitted, and
+
+ * at our option, to sublicense these same rights to third parties through
+ multiple levels of sublicensees or other licensing arrangements.
+
+4. Except as set out above, you keep all right, title, and interest in your
+contribution. The rights that you grant to us under these terms are effective
+on the date you first submitted a contribution to us, even if your submission
+took place before the date you sign these terms.
+
+5. You covenant, represent, warrant and agree that:
+
+ * Each contribution that you submit is and shall be an original work of
+ authorship and you can legally grant the rights set out in this SCA;
+
+ * to the best of your knowledge, each contribution will not violate any
+ third party's copyrights, trademarks, patents, or other intellectual
+ property rights; and
+
+ * each contribution shall be in compliance with U.S. export control laws and
+ other applicable export and import laws. You agree to notify us if you
+ become aware of any circumstance which would make any of the foregoing
+ representations inaccurate in any respect. We may publicly disclose your
+ participation in the project, including the fact that you have signed the SCA.
+
+6. This SCA is governed by the laws of the State of California and applicable
+U.S. Federal law. Any choice of law rules will not apply.
+
+7. Please place an “x” on one of the applicable statements below. Please do NOT
+mark both statements:
+
+ * [x] I am signing on behalf of myself as an individual and no other person
+ or entity, including my employer, has or will have rights with respect to my
+ contributions.
+
+ * [ ] I am signing on behalf of my employer or a legal entity and I have the
+ actual authority to contractually bind that entity.
+
+## Contributor Details
+
+| Field | Entry |
+|------------------------------- | -------------------- |
+| Name | Aaron Kub |
+| Company name (if applicable) | |
+| Title or role (if applicable) | |
+| Date | 2019-05-09 |
+| GitHub username | aaronkub |
+| Website (optional) | |
diff --git a/website/docs/usage/rule-based-matching.md b/website/docs/usage/rule-based-matching.md
index 37626f6a4..a0959bfbc 100644
--- a/website/docs/usage/rule-based-matching.md
+++ b/website/docs/usage/rule-based-matching.md
@@ -214,7 +214,8 @@ example, you might want to match different spellings of a word, without having
to add a new pattern for each spelling.
```python
-pattern = [{"TEXT": {"REGEX": "^([Uu](\\.?|nited) ?[Ss](\\.?|tates)"}},
+pattern = [{"TEXT": {"REGEX": "^[Uu](\\.?|nited)$"}},
+ {"TEXT": {"REGEX": "^[Ss](\\.?|tates)$"}},
{"LOWER": "president"}]
```
@@ -227,7 +228,7 @@ attributes:
pattern = [{"TAG": {"REGEX": "^V"}}]
# Match custom attribute values with regular expressions
-pattern = [{"_": {"country": {"REGEX": "^([Uu](\\.?|nited) ?[Ss](\\.?|tates)"}}}]
+pattern = [{"_": {"country": {"REGEX": "^[Uu](\\.?|nited) ?[Ss](\\.?|tates)$"}}}]
```
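The fix above splits the original single-token regex (which had an unbalanced parenthesis and assumed "United States" survives tokenization as one token) into one fully anchored expression per token, and anchors the custom-attribute variant with `$`. A quick spaCy-independent sketch with Python's `re` module shows what each corrected expression accepts:

```python
import re

# Per-token expressions from the corrected TEXT patterns: each one is
# anchored with ^...$ so it must match a whole token, not a substring.
us = re.compile(r"^[Uu](\.?|nited)$")
states = re.compile(r"^[Ss](\.?|tates)$")

# Abbreviated and spelled-out forms both match at the token level.
for tok in ["United", "U.", "U", "u"]:
    assert us.match(tok)
for tok in ["States", "S.", "S"]:
    assert states.match(tok)

# The custom-attribute pattern keeps a single expression, since an
# extension value like doc._.country is a plain string, not tokenized.
country = re.compile(r"^[Uu](\.?|nited) ?[Ss](\.?|tates)$")
for value in ["United States", "U.S.", "US"]:
    assert country.match(value)
```

Note this only checks the regular expressions themselves; in the Matcher, each dict in the pattern list is applied to one token, which is why the split into two `TEXT` entries is needed.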
From 377ab1cffb9074e173a77e6163948fb8a5aa7b89 Mon Sep 17 00:00:00 2001
From: Ines Montani
Date: Sat, 11 May 2019 15:22:34 +0200
Subject: [PATCH 06/41] Improve Token.prob and Lexeme.prob docs (resolves
#3701)
---
website/docs/api/cython-structs.md | 2 +-
website/docs/api/lexeme.md | 2 +-
website/docs/api/token.md | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/website/docs/api/cython-structs.md b/website/docs/api/cython-structs.md
index 1d3139a96..0e427a8d5 100644
--- a/website/docs/api/cython-structs.md
+++ b/website/docs/api/cython-structs.md
@@ -172,7 +172,7 @@ struct.
| `prefix` | `attr_t` | Length-N substring from the start of the lexeme. Defaults to `N=1`. |
| `suffix` | `attr_t` | Length-N substring from the end of the lexeme. Defaults to `N=3`. |
| `cluster` | `attr_t` | Brown cluster ID. |
-| `prob` | `float` | Smoothed log probability estimate of the lexeme's type. |
+| `prob` | `float` | Smoothed log probability estimate of the lexeme's word type (context-independent entry in the vocabulary). |
| `sentiment` | `float` | A scalar value indicating positivity or negativity. |
### Lexeme.get_struct_attr {#lexeme_get_struct_attr tag="staticmethod, nogil" source="spacy/lexeme.pxd"}
diff --git a/website/docs/api/lexeme.md b/website/docs/api/lexeme.md
index d5e5c54b8..5ec2aaf0c 100644
--- a/website/docs/api/lexeme.md
+++ b/website/docs/api/lexeme.md
@@ -161,6 +161,6 @@ The L2 norm of the lexeme's vector representation.
| `is_stop` | bool | Is the lexeme part of a "stop list"? |
| `lang` | int | Language of the parent vocabulary. |
| `lang_` | unicode | Language of the parent vocabulary. |
-| `prob` | float | Smoothed log probability estimate of the lexeme's type. |
+| `prob` | float | Smoothed log probability estimate of the lexeme's word type (context-independent entry in the vocabulary). |
| `cluster` | int | Brown cluster ID. |
| `sentiment` | float | A scalar value indicating the positivity or negativity of the lexeme. |
diff --git a/website/docs/api/token.md b/website/docs/api/token.md
index a4607b186..2085a02c6 100644
--- a/website/docs/api/token.md
+++ b/website/docs/api/token.md
@@ -465,7 +465,7 @@ The L2 norm of the token's vector representation.
| `dep_` | unicode | Syntactic dependency relation. |
| `lang` | int | Language of the parent document's vocabulary. |
| `lang_` | unicode | Language of the parent document's vocabulary. |
-| `prob` | float | Smoothed log probability estimate of token's type. |
+| `prob` | float | Smoothed log probability estimate of token's word type (context-independent entry in the vocabulary). |
| `idx` | int | The character offset of the token within the parent document. |
| `sentiment` | float | A scalar value indicating the positivity or negativity of the token. |
| `lex_id` | int | Sequential ID of the token's lexical type. |
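The clarified wording in this patch stresses that `prob` belongs to the context-independent word type (the vocabulary entry), so every occurrence of a word shares one estimate. A toy sketch of such a smoothed log-probability over word types (add-alpha smoothing over assumed counts, not spaCy's actual estimation method):

```python
import math
from collections import Counter

# Toy corpus: the estimate attaches to the word *type*, so all three
# occurrences of "the" share a single log-probability.
counts = Counter("the cat sat on the mat the end".split())
total = sum(counts.values())
vocab_size = len(counts)

def smoothed_logprob(word, alpha=1.0):
    # Add-alpha (Laplace) smoothing: unseen types still get a finite value.
    return math.log((counts[word] + alpha) / (total + alpha * vocab_size))

assert smoothed_logprob("the") > smoothed_logprob("cat")     # frequent > rare
assert smoothed_logprob("unseen") < smoothed_logprob("cat")  # unseen is lowest
```

In spaCy itself these values are precomputed per lexeme and exposed unchanged through `Token.prob`, which is why the token and its lexeme report the same number.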
From 7819404127804db2d76ac2eb8e1569f22f2b1d2f Mon Sep 17 00:00:00 2001
From: Ines Montani
Date: Sat, 11 May 2019 15:37:30 +0200
Subject: [PATCH 07/41] Fix DependencyParser.predict docs (resolves #3561)
---
website/docs/api/dependencyparser.md | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/website/docs/api/dependencyparser.md b/website/docs/api/dependencyparser.md
index 329f96ead..58acc4425 100644
--- a/website/docs/api/dependencyparser.md
+++ b/website/docs/api/dependencyparser.md
@@ -102,10 +102,10 @@ Apply the pipeline's model to a batch of docs, without modifying them.
> scores = parser.predict([doc1, doc2])
> ```
-| Name | Type | Description |
-| ----------- | -------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `docs` | iterable | The documents to predict. |
-| **RETURNS** | tuple | A `(scores, tensors)` tuple where `scores` is the model's prediction for each document and `tensors` is the token representations used to predict the scores. Each tensor is an array with one row for each token in the document. |
+| Name | Type | Description |
+| ----------- | ------------------- | ---------------------------------------------- |
+| `docs` | iterable | The documents to predict. |
+| **RETURNS** | `syntax.StateClass` | A helper class for the parse state (internal). |
## DependencyParser.set_annotations {#set_annotations tag="method"}
From 0e680046ac3ced5ed1aeb065a11ef16ada1e57b3 Mon Sep 17 00:00:00 2001
From: Ines Montani
Date: Fri, 2 Aug 2019 21:44:26 +0200
Subject: [PATCH 08/41] Update languages.json
---
website/meta/languages.json | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/website/meta/languages.json b/website/meta/languages.json
index ef336ef5f..549bd058b 100644
--- a/website/meta/languages.json
+++ b/website/meta/languages.json
@@ -3,14 +3,21 @@
{
"code": "en",
"name": "English",
- "models": ["en_core_web_sm", "en_core_web_md", "en_core_web_lg", "en_vectors_web_lg"],
+ "models": [
+ "en_core_web_sm",
+ "en_core_web_md",
+ "en_core_web_lg",
+ "en_vectors_web_lg",
+ "en_pytt_bertbaseuncased_lg",
+ "en_pytt_xlnetbasecased_lg"
+ ],
"example": "This is a sentence.",
"has_examples": true
},
{
"code": "de",
"name": "German",
- "models": ["de_core_news_sm", "de_core_news_md"],
+ "models": ["de_core_news_sm", "de_core_news_md", "de_pytt_bertbasecased_lg"],
"example": "Dies ist ein Satz.",
"has_examples": true
},
From 95d63c74b4c007bf77fcfba84de6e6fbf7eeb98c Mon Sep 17 00:00:00 2001
From: Ines Montani
Date: Wed, 7 Aug 2019 00:47:40 +0200
Subject: [PATCH 09/41] Update site.json
---
website/meta/site.json | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/meta/site.json b/website/meta/site.json
index 1820ff5df..7ec146cf5 100644
--- a/website/meta/site.json
+++ b/website/meta/site.json
@@ -29,7 +29,7 @@
"spacyVersion": "2.1",
"binderUrl": "ines/spacy-io-binder",
"binderBranch": "live",
- "binderVersion": "2.1.3",
+ "binderVersion": "2.1.7",
"sections": [
{ "id": "usage", "title": "Usage Documentation", "theme": "blue" },
{ "id": "models", "title": "Models Documentation", "theme": "blue" },
From b6a509a8d159fd8ec8a4f2fca234b0545095eb27 Mon Sep 17 00:00:00 2001
From: Ines Montani
Date: Thu, 26 Sep 2019 16:23:02 +0200
Subject: [PATCH 10/41] Fix tag
---
website/docs/api/vectors.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/api/vectors.md b/website/docs/api/vectors.md
index ffc1fc083..c04085091 100644
--- a/website/docs/api/vectors.md
+++ b/website/docs/api/vectors.md
@@ -211,7 +211,7 @@ Iterate over `(key, vector)` pairs, in order.
| ---------- | ----- | -------------------------------- |
| **YIELDS** | tuple | `(key, vector)` pairs, in order. |
-## Vectors.find (#find tag="method")
+## Vectors.find {#find tag="method"}
Look up one or more keys by row, or vice versa.
From 06d8c3a20f8fd198239052efc3bc314f03f45c0f Mon Sep 17 00:00:00 2001
From: Ines Montani
Date: Mon, 30 Sep 2019 13:14:48 +0200
Subject: [PATCH 11/41] Revert "Merge branch 'master' into spacy.io"
This reverts commit c8bb08b5453da865dda4c54ad3c03ae8817d2c6a, reversing
changes made to b6a509a8d159fd8ec8a4f2fca234b0545095eb27.
---
.github/contributors/EARL_GREYT.md | 106 -
.github/contributors/Hazoom.md | 106 -
.github/contributors/jaydeepborkar.md | 106 -
.github/contributors/seanBE.md | 106 -
.github/contributors/zqianem.md | 106 -
CONTRIBUTING.md | 5 +-
Makefile | 16 +-
README.md | 13 +-
azure-pipelines.yml | 18 +-
bin/get-version.sh | 12 -
bin/ud/run_eval.py | 36 +-
bin/ud/ud_run_test.py | 8 +-
bin/ud/ud_train.py | 105 +-
examples/pipeline/dummy_entity_linking.py | 0
examples/pipeline/wikidata_entity_linking.py | 0
examples/training/pretrain_kb.py | 5 +-
.../training/textcat_example_data/CC0.txt | 121 -
.../textcat_example_data/CC_BY-SA-3.0.txt | 359 --
.../textcat_example_data/CC_BY-SA-4.0.txt | 428 --
.../training/textcat_example_data/README.md | 34 -
.../textcat_example_data/cooking.json | 3487 -----------------
.../textcat_example_data/cooking.jsonl | 10 -
.../jigsaw-toxic-comment.json | 2987 --------------
.../jigsaw-toxic-comment.jsonl | 10 -
.../textcatjsonl_to_trainjson.py | 53 -
examples/training/train_entity_linker.py | 4 +-
examples/training/training-data.json | 2 +-
fabfile.py | 122 +-
requirements.txt | 6 +-
setup.py | 11 +-
spacy/_ml.py | 161 +-
spacy/about.py | 12 +-
spacy/attrs.pyx | 6 +-
spacy/cli/debug_data.py | 66 +-
spacy/cli/download.py | 27 +-
spacy/cli/evaluate.py | 1 -
spacy/cli/init_model.py | 24 +-
spacy/cli/train.py | 259 +-
spacy/errors.py | 25 +-
spacy/glossary.py | 6 -
spacy/gold.pxd | 1 -
spacy/gold.pyx | 168 +-
spacy/kb.pyx | 2 +-
spacy/lang/char_classes.py | 4 +-
spacy/lang/de/__init__.py | 14 -
spacy/lang/de/tag_map.py | 6 +-
spacy/lang/el/lemmatizer/__init__.py | 7 +-
spacy/lang/en/__init__.py | 8 -
spacy/lang/en/lemmatizer/lemma_lookup.json | 2 +-
spacy/lang/en/morph_rules.py | 53 +-
spacy/lang/en/tag_map.py | 20 +-
spacy/lang/en/tokenizer_exceptions.py | 9 +-
spacy/lang/fr/lemmatizer/__init__.py | 6 +-
spacy/lang/hi/stop_words.py | 8 +-
spacy/lang/ja/__init__.py | 23 +-
spacy/lang/ja/tag_map.py | 4 +-
spacy/lang/ko/__init__.py | 39 +-
spacy/lang/lt/tag_map.py | 236 +-
spacy/lang/nl/lemmatizer/__init__.py | 9 +-
spacy/lang/ru/lemmatizer.py | 4 +-
spacy/lang/uk/lemmatizer.py | 4 +-
spacy/language.py | 48 +-
spacy/lemmatizer.py | 36 +-
spacy/lookups.py | 159 +-
spacy/matcher/matcher.pyx | 29 +-
spacy/matcher/phrasematcher.pxd | 26 +-
spacy/matcher/phrasematcher.pyx | 348 +-
spacy/morphology.pxd | 300 +-
spacy/morphology.pyx | 1376 ++-----
spacy/pipeline/__init__.py | 2 -
spacy/pipeline/entityruler.py | 37 +-
spacy/pipeline/morphologizer.pyx | 164 -
spacy/pipeline/pipes.pyx | 30 +-
spacy/scorer.py | 385 +-
spacy/strings.pyx | 16 +-
spacy/structs.pxd | 48 -
spacy/syntax/arc_eager.pyx | 1 -
spacy/syntax/ner.pyx | 53 +-
spacy/syntax/nn_parser.pyx | 26 +-
spacy/syntax/transition_system.pyx | 10 +-
spacy/tests/conftest.py | 6 -
spacy/tests/doc/test_add_entities.py | 25 +-
spacy/tests/doc/test_creation.py | 4 +-
spacy/tests/doc/test_morphanalysis.py | 33 -
spacy/tests/lang/ja/test_tokenizer.py | 7 -
spacy/tests/lang/lt/test_lemmatizer.py | 2 +-
spacy/tests/lang/ru/test_lemmatizer.py | 7 +
...{test_exceptions.py => test_еxceptions.py} | 0
spacy/tests/matcher/test_matcher_api.py | 8 -
spacy/tests/matcher/test_phrase_matcher.py | 89 +-
spacy/tests/morphology/__init__.py | 0
spacy/tests/morphology/test_morph_features.py | 48 -
spacy/tests/parser/test_ner.py | 202 +-
spacy/tests/regression/test_issue1-1000.py | 2 +-
spacy/tests/regression/test_issue1501-2000.py | 2 +-
spacy/tests/regression/test_issue2501-3000.py | 2 +-
spacy/tests/regression/test_issue3001-3500.py | 6 +-
spacy/tests/regression/test_issue4042.py | 82 -
spacy/tests/regression/test_issue4054.py | 4 +-
spacy/tests/regression/test_issue4267.py | 42 -
spacy/tests/regression/test_issue4278.py | 2 +-
spacy/tests/regression/test_issue4313.py | 39 -
spacy/tests/serialize/test_serialize_kb.py | 6 +-
spacy/tests/test_displacy.py | 2 +-
spacy/tests/test_gold.py | 29 -
spacy/tests/test_scorer.py | 75 +-
spacy/tests/vocab_vectors/test_lookups.py | 70 +-
spacy/tests/vocab_vectors/test_vectors.py | 4 +-
spacy/tokens/__init__.py | 3 +-
spacy/tokens/_retokenize.pyx | 3 +-
spacy/tokens/_serialize.py | 124 +-
spacy/tokens/doc.pyx | 87 +-
spacy/tokens/morphanalysis.pxd | 9 -
spacy/tokens/morphanalysis.pyx | 423 --
spacy/tokens/token.pyx | 12 +-
spacy/util.py | 2 +-
spacy/vectors.pyx | 2 +-
spacy/vocab.pyx | 10 +-
website/README.md | 13 +-
website/docs/api/annotation.md | 10 +-
website/docs/api/cli.md | 249 +-
website/docs/api/cython-classes.md | 6 +-
website/docs/api/cython-structs.md | 6 +-
website/docs/api/dependencyparser.md | 2 +-
website/docs/api/doc.md | 79 +-
website/docs/api/docbin.md | 149 -
website/docs/api/entitylinker.md | 300 --
website/docs/api/entityrecognizer.md | 21 +-
website/docs/api/entityruler.md | 4 +-
website/docs/api/goldparse.md | 30 +-
website/docs/api/kb.md | 268 --
website/docs/api/language.md | 28 +-
website/docs/api/lemmatizer.md | 25 +-
website/docs/api/lexeme.md | 18 +-
website/docs/api/lookups.md | 318 --
website/docs/api/matcher.md | 4 +-
website/docs/api/phrasematcher.md | 35 +-
website/docs/api/pipeline-functions.md | 8 +-
website/docs/api/scorer.md | 24 +-
website/docs/api/sentencizer.md | 2 +-
website/docs/api/span.md | 93 +-
website/docs/api/stringstore.md | 30 +-
website/docs/api/tagger.md | 17 +-
website/docs/api/textcategorizer.md | 21 +-
website/docs/api/token.md | 62 +-
website/docs/api/tokenizer.md | 17 +-
website/docs/api/top-level.md | 41 +-
website/docs/api/vectors.md | 23 +-
website/docs/api/vocab.md | 45 +-
website/docs/images/displacy-ent-snek.html | 18 -
website/docs/usage/101/_named-entities.md | 4 +-
website/docs/usage/101/_pipelines.md | 20 +-
website/docs/usage/101/_pos-deps.md | 6 +-
website/docs/usage/101/_serialization.md | 6 +-
website/docs/usage/101/_tokenization.md | 2 +-
website/docs/usage/101/_training.md | 2 +-
website/docs/usage/101/_vectors-similarity.md | 8 +-
website/docs/usage/adding-languages.md | 151 +-
website/docs/usage/facts-figures.md | 2 +-
website/docs/usage/index.md | 2 +-
website/docs/usage/linguistic-features.md | 149 +-
website/docs/usage/models.md | 20 +-
website/docs/usage/processing-pipelines.md | 31 +-
website/docs/usage/rule-based-matching.md | 42 +-
website/docs/usage/saving-loading.md | 137 +-
website/docs/usage/spacy-101.md | 140 +-
website/docs/usage/training.md | 105 +-
website/docs/usage/v2-1.md | 8 +-
website/docs/usage/v2-2.md | 351 --
website/docs/usage/v2.md | 42 +-
website/docs/usage/vectors-similarity.md | 15 +-
website/docs/usage/visualizers.md | 17 +-
website/meta/languages.json | 18 +-
website/meta/sidebars.json | 7 +-
website/meta/site.json | 1 +
website/meta/universe.json | 34 +-
website/src/components/table.js | 13 +-
website/src/styles/accordion.module.sass | 1 -
website/src/styles/code.module.sass | 1 -
website/src/styles/grid.module.sass | 2 +-
website/src/styles/table.module.sass | 3 -
website/src/templates/models.js | 69 +-
website/src/widgets/landing.js | 36 +-
website/src/widgets/quickstart-models.js | 2 +-
184 files changed, 2266 insertions(+), 15310 deletions(-)
delete mode 100644 .github/contributors/EARL_GREYT.md
delete mode 100644 .github/contributors/Hazoom.md
delete mode 100644 .github/contributors/jaydeepborkar.md
delete mode 100644 .github/contributors/seanBE.md
delete mode 100644 .github/contributors/zqianem.md
delete mode 100755 bin/get-version.sh
delete mode 100644 examples/pipeline/dummy_entity_linking.py
delete mode 100644 examples/pipeline/wikidata_entity_linking.py
delete mode 100644 examples/training/textcat_example_data/CC0.txt
delete mode 100644 examples/training/textcat_example_data/CC_BY-SA-3.0.txt
delete mode 100644 examples/training/textcat_example_data/CC_BY-SA-4.0.txt
delete mode 100644 examples/training/textcat_example_data/README.md
delete mode 100644 examples/training/textcat_example_data/cooking.json
delete mode 100644 examples/training/textcat_example_data/cooking.jsonl
delete mode 100644 examples/training/textcat_example_data/jigsaw-toxic-comment.json
delete mode 100644 examples/training/textcat_example_data/jigsaw-toxic-comment.jsonl
delete mode 100644 examples/training/textcat_example_data/textcatjsonl_to_trainjson.py
delete mode 100644 spacy/pipeline/morphologizer.pyx
delete mode 100644 spacy/tests/doc/test_morphanalysis.py
rename spacy/tests/lang/sr/{test_exceptions.py => test_еxceptions.py} (100%)
delete mode 100644 spacy/tests/morphology/__init__.py
delete mode 100644 spacy/tests/morphology/test_morph_features.py
delete mode 100644 spacy/tests/regression/test_issue4042.py
delete mode 100644 spacy/tests/regression/test_issue4267.py
delete mode 100644 spacy/tests/regression/test_issue4313.py
delete mode 100644 spacy/tokens/morphanalysis.pxd
delete mode 100644 spacy/tokens/morphanalysis.pyx
delete mode 100644 website/docs/api/docbin.md
delete mode 100644 website/docs/api/entitylinker.md
delete mode 100644 website/docs/api/kb.md
delete mode 100644 website/docs/api/lookups.md
delete mode 100644 website/docs/images/displacy-ent-snek.html
delete mode 100644 website/docs/usage/v2-2.md
diff --git a/.github/contributors/EARL_GREYT.md b/.github/contributors/EARL_GREYT.md
deleted file mode 100644
index 3ee7d4f41..000000000
--- a/.github/contributors/EARL_GREYT.md
+++ /dev/null
@@ -1,106 +0,0 @@
-# spaCy contributor agreement
-
-This spaCy Contributor Agreement (**"SCA"**) is based on the
-[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
-The SCA applies to any contribution that you make to any product or project
-managed by us (the **"project"**), and sets out the intellectual property rights
-you grant to us in the contributed materials. The term **"us"** shall mean
-[ExplosionAI GmbH](https://explosion.ai/legal). The term
-**"you"** shall mean the person or entity identified below.
-
-If you agree to be bound by these terms, fill in the information requested
-below and include the filled-in version with your first pull request, under the
-folder [`.github/contributors/`](/.github/contributors/). The name of the file
-should be your GitHub username, with the extension `.md`. For example, the user
-example_user would create the file `.github/contributors/example_user.md`.
-
-Read this agreement carefully before signing. These terms and conditions
-constitute a binding legal agreement.
-
-## Contributor Agreement
-
-1. The term "contribution" or "contributed materials" means any source code,
-object code, patch, tool, sample, graphic, specification, manual,
-documentation, or any other material posted or submitted by you to the project.
-
-2. With respect to any worldwide copyrights, or copyright applications and
-registrations, in your contribution:
-
- * you hereby assign to us joint ownership, and to the extent that such
- assignment is or becomes invalid, ineffective or unenforceable, you hereby
- grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
- royalty-free, unrestricted license to exercise all rights under those
- copyrights. This includes, at our option, the right to sublicense these same
- rights to third parties through multiple levels of sublicensees or other
- licensing arrangements;
-
- * you agree that each of us can do all things in relation to your
- contribution as if each of us were the sole owners, and if one of us makes
- a derivative work of your contribution, the one who makes the derivative
- work (or has it made will be the sole owner of that derivative work;
-
- * you agree that you will not assert any moral rights in your contribution
- against us, our licensees or transferees;
-
- * you agree that we may register a copyright in your contribution and
- exercise all ownership rights associated with it; and
-
- * you agree that neither of us has any duty to consult with, obtain the
- consent of, pay or render an accounting to the other for any use or
- distribution of your contribution.
-
-3. With respect to any patents you own, or that you can license without payment
-to any third party, you hereby grant to us a perpetual, irrevocable,
-non-exclusive, worldwide, no-charge, royalty-free license to:
-
- * make, have made, use, sell, offer to sell, import, and otherwise transfer
- your contribution in whole or in part, alone or in combination with or
- included in any product, work or materials arising out of the project to
- which your contribution was submitted, and
-
- * at our option, to sublicense these same rights to third parties through
- multiple levels of sublicensees or other licensing arrangements.
-
-4. Except as set out above, you keep all right, title, and interest in your
-contribution. The rights that you grant to us under these terms are effective
-on the date you first submitted a contribution to us, even if your submission
-took place before the date you sign these terms.
-
-5. You covenant, represent, warrant and agree that:
-
- * Each contribution that you submit is and shall be an original work of
- authorship and you can legally grant the rights set out in this SCA;
-
- * to the best of your knowledge, each contribution will not violate any
- third party's copyrights, trademarks, patents, or other intellectual
- property rights; and
-
- * each contribution shall be in compliance with U.S. export control laws and
- other applicable export and import laws. You agree to notify us if you
- become aware of any circumstance which would make any of the foregoing
- representations inaccurate in any respect. We may publicly disclose your
- participation in the project, including the fact that you have signed the SCA.
-
-6. This SCA is governed by the laws of the State of California and applicable
-U.S. Federal law. Any choice of law rules will not apply.
-
-7. Please place an “x” on one of the applicable statement below. Please do NOT
-mark both statements:
-
- * [x] I am signing on behalf of myself as an individual and no other person
- or entity, including my employer, has or will have rights with respect to my
- contributions.
-
- * [ ] I am signing on behalf of my employer or a legal entity and I have the
- actual authority to contractually bind that entity.
-
-## Contributor Details
-
-| Field | Entry |
-|------------------------------- | -------------------- |
-| Name | David Weßling |
-| Company name (if applicable) | |
-| Title or role (if applicable) | |
-| Date | 27.09.19 |
-| GitHub username | EarlGreyT |
-| Website (optional) | |
diff --git a/.github/contributors/Hazoom.md b/.github/contributors/Hazoom.md
deleted file mode 100644
index 762cb5bef..000000000
--- a/.github/contributors/Hazoom.md
+++ /dev/null
@@ -1,106 +0,0 @@
-# spaCy contributor agreement
-
-This spaCy Contributor Agreement (**"SCA"**) is based on the
-[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
-The SCA applies to any contribution that you make to any product or project
-managed by us (the **"project"**), and sets out the intellectual property rights
-you grant to us in the contributed materials. The term **"us"** shall mean
-[ExplosionAI UG (haftungsbeschränkt)](https://explosion.ai/legal). The term
-**"you"** shall mean the person or entity identified below.
-
-If you agree to be bound by these terms, fill in the information requested
-below and include the filled-in version with your first pull request, under the
-folder [`.github/contributors/`](/.github/contributors/). The name of the file
-should be your GitHub username, with the extension `.md`. For example, the user
-example_user would create the file `.github/contributors/example_user.md`.
-
-Read this agreement carefully before signing. These terms and conditions
-constitute a binding legal agreement.
-
-## Contributor Agreement
-
-1. The term "contribution" or "contributed materials" means any source code,
-object code, patch, tool, sample, graphic, specification, manual,
-documentation, or any other material posted or submitted by you to the project.
-
-2. With respect to any worldwide copyrights, or copyright applications and
-registrations, in your contribution:
-
- * you hereby assign to us joint ownership, and to the extent that such
- assignment is or becomes invalid, ineffective or unenforceable, you hereby
- grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
- royalty-free, unrestricted license to exercise all rights under those
- copyrights. This includes, at our option, the right to sublicense these same
- rights to third parties through multiple levels of sublicensees or other
- licensing arrangements;
-
- * you agree that each of us can do all things in relation to your
- contribution as if each of us were the sole owners, and if one of us makes
- a derivative work of your contribution, the one who makes the derivative
- work (or has it made will be the sole owner of that derivative work;
-
- * you agree that you will not assert any moral rights in your contribution
- against us, our licensees or transferees;
-
- * you agree that we may register a copyright in your contribution and
- exercise all ownership rights associated with it; and
-
- * you agree that neither of us has any duty to consult with, obtain the
- consent of, pay or render an accounting to the other for any use or
- distribution of your contribution.
-
-3. With respect to any patents you own, or that you can license without payment
-to any third party, you hereby grant to us a perpetual, irrevocable,
-non-exclusive, worldwide, no-charge, royalty-free license to:
-
- * make, have made, use, sell, offer to sell, import, and otherwise transfer
- your contribution in whole or in part, alone or in combination with or
- included in any product, work or materials arising out of the project to
- which your contribution was submitted, and
-
- * at our option, to sublicense these same rights to third parties through
- multiple levels of sublicensees or other licensing arrangements.
-
-4. Except as set out above, you keep all right, title, and interest in your
-contribution. The rights that you grant to us under these terms are effective
-on the date you first submitted a contribution to us, even if your submission
-took place before the date you sign these terms.
-
-5. You covenant, represent, warrant and agree that:
-
- * Each contribution that you submit is and shall be an original work of
- authorship and you can legally grant the rights set out in this SCA;
-
- * to the best of your knowledge, each contribution will not violate any
- third party's copyrights, trademarks, patents, or other intellectual
- property rights; and
-
- * each contribution shall be in compliance with U.S. export control laws and
- other applicable export and import laws. You agree to notify us if you
- become aware of any circumstance which would make any of the foregoing
- representations inaccurate in any respect. We may publicly disclose your
- participation in the project, including the fact that you have signed the SCA.
-
-6. This SCA is governed by the laws of the State of California and applicable
-U.S. Federal law. Any choice of law rules will not apply.
-
-7. Please place an “x” on one of the applicable statement below. Please do NOT
-mark both statements:
-
- * [x] I am signing on behalf of myself as an individual and no other person
- or entity, including my employer, has or will have rights with respect to my
- contributions.
-
- * [ ] I am signing on behalf of my employer or a legal entity and I have the
- actual authority to contractually bind that entity.
-
-## Contributor Details
-
-| Field | Entry |
-|------------------------------- | -------------------- |
-| Name | Moshe Hazoom |
-| Company name (if applicable) | Amenity Analytics |
-| Title or role (if applicable) | NLP Engineer |
-| Date | 2019-09-15 |
-| GitHub username | Hazoom |
-| Website (optional) | |
diff --git a/.github/contributors/jaydeepborkar.md b/.github/contributors/jaydeepborkar.md
deleted file mode 100644
index 32199d596..000000000
--- a/.github/contributors/jaydeepborkar.md
+++ /dev/null
@@ -1,106 +0,0 @@
-# spaCy contributor agreement
-
-This spaCy Contributor Agreement (**"SCA"**) is based on the
-[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
-The SCA applies to any contribution that you make to any product or project
-managed by us (the **"project"**), and sets out the intellectual property rights
-you grant to us in the contributed materials. The term **"us"** shall mean
-[ExplosionAI GmbH](https://explosion.ai/legal). The term
-**"you"** shall mean the person or entity identified below.
-
-If you agree to be bound by these terms, fill in the information requested
-below and include the filled-in version with your first pull request, under the
-folder [`.github/contributors/`](/.github/contributors/). The name of the file
-should be your GitHub username, with the extension `.md`. For example, the user
-example_user would create the file `.github/contributors/example_user.md`.
-
-Read this agreement carefully before signing. These terms and conditions
-constitute a binding legal agreement.
-
-## Contributor Agreement
-
-1. The term "contribution" or "contributed materials" means any source code,
-object code, patch, tool, sample, graphic, specification, manual,
-documentation, or any other material posted or submitted by you to the project.
-
-2. With respect to any worldwide copyrights, or copyright applications and
-registrations, in your contribution:
-
- * you hereby assign to us joint ownership, and to the extent that such
- assignment is or becomes invalid, ineffective or unenforceable, you hereby
- grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
- royalty-free, unrestricted license to exercise all rights under those
- copyrights. This includes, at our option, the right to sublicense these same
- rights to third parties through multiple levels of sublicensees or other
- licensing arrangements;
-
- * you agree that each of us can do all things in relation to your
- contribution as if each of us were the sole owners, and if one of us makes
- a derivative work of your contribution, the one who makes the derivative
- work (or has it made will be the sole owner of that derivative work;
-
- * you agree that you will not assert any moral rights in your contribution
- against us, our licensees or transferees;
-
- * you agree that we may register a copyright in your contribution and
- exercise all ownership rights associated with it; and
-
- * you agree that neither of us has any duty to consult with, obtain the
- consent of, pay or render an accounting to the other for any use or
- distribution of your contribution.
-
-3. With respect to any patents you own, or that you can license without payment
-to any third party, you hereby grant to us a perpetual, irrevocable,
-non-exclusive, worldwide, no-charge, royalty-free license to:
-
- * make, have made, use, sell, offer to sell, import, and otherwise transfer
- your contribution in whole or in part, alone or in combination with or
- included in any product, work or materials arising out of the project to
- which your contribution was submitted, and
-
- * at our option, to sublicense these same rights to third parties through
- multiple levels of sublicensees or other licensing arrangements.
-
-4. Except as set out above, you keep all right, title, and interest in your
-contribution. The rights that you grant to us under these terms are effective
-on the date you first submitted a contribution to us, even if your submission
-took place before the date you sign these terms.
-
-5. You covenant, represent, warrant and agree that:
-
- * Each contribution that you submit is and shall be an original work of
- authorship and you can legally grant the rights set out in this SCA;
-
- * to the best of your knowledge, each contribution will not violate any
- third party's copyrights, trademarks, patents, or other intellectual
- property rights; and
-
- * each contribution shall be in compliance with U.S. export control laws and
- other applicable export and import laws. You agree to notify us if you
- become aware of any circumstance which would make any of the foregoing
- representations inaccurate in any respect. We may publicly disclose your
- participation in the project, including the fact that you have signed the SCA.
-
-6. This SCA is governed by the laws of the State of California and applicable
-U.S. Federal law. Any choice of law rules will not apply.
-
-7. Please place an “x” on one of the applicable statements below. Please do NOT
-mark both statements:
-
- * [ ] I am signing on behalf of myself as an individual and no other person
- or entity, including my employer, has or will have rights with respect to my
- contributions.
-
- * [ ] I am signing on behalf of my employer or a legal entity and I have the
- actual authority to contractually bind that entity.
-
-## Contributor Details
-
-| Field | Entry |
-|------------------------------- | -------------------- |
-| Name | Jaydeep Borkar |
-| Company name (if applicable) | Pune University, India |
-| Title or role (if applicable) | CS Undergrad |
-| Date | 9/26/2019 |
-| GitHub username | jaydeepborkar |
-| Website (optional) | http://jaydeepborkar.github.io |
diff --git a/.github/contributors/seanBE.md b/.github/contributors/seanBE.md
deleted file mode 100644
index 5e4b4de0a..000000000
--- a/.github/contributors/seanBE.md
+++ /dev/null
@@ -1,106 +0,0 @@
-# spaCy contributor agreement
-
-This spaCy Contributor Agreement (**"SCA"**) is based on the
-[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
-The SCA applies to any contribution that you make to any product or project
-managed by us (the **"project"**), and sets out the intellectual property rights
-you grant to us in the contributed materials. The term **"us"** shall mean
-[ExplosionAI GmbH](https://explosion.ai/legal). The term
-**"you"** shall mean the person or entity identified below.
-
-If you agree to be bound by these terms, fill in the information requested
-below and include the filled-in version with your first pull request, under the
-folder [`.github/contributors/`](/.github/contributors/). The name of the file
-should be your GitHub username, with the extension `.md`. For example, the user
-example_user would create the file `.github/contributors/example_user.md`.
-
-Read this agreement carefully before signing. These terms and conditions
-constitute a binding legal agreement.
-
-## Contributor Agreement
-
-1. The term "contribution" or "contributed materials" means any source code,
-object code, patch, tool, sample, graphic, specification, manual,
-documentation, or any other material posted or submitted by you to the project.
-
-2. With respect to any worldwide copyrights, or copyright applications and
-registrations, in your contribution:
-
- * you hereby assign to us joint ownership, and to the extent that such
- assignment is or becomes invalid, ineffective or unenforceable, you hereby
- grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
- royalty-free, unrestricted license to exercise all rights under those
- copyrights. This includes, at our option, the right to sublicense these same
- rights to third parties through multiple levels of sublicensees or other
- licensing arrangements;
-
- * you agree that each of us can do all things in relation to your
- contribution as if each of us were the sole owners, and if one of us makes
- a derivative work of your contribution, the one who makes the derivative
- work (or has it made) will be the sole owner of that derivative work;
-
- * you agree that you will not assert any moral rights in your contribution
- against us, our licensees or transferees;
-
- * you agree that we may register a copyright in your contribution and
- exercise all ownership rights associated with it; and
-
- * you agree that neither of us has any duty to consult with, obtain the
- consent of, pay or render an accounting to the other for any use or
- distribution of your contribution.
-
-3. With respect to any patents you own, or that you can license without payment
-to any third party, you hereby grant to us a perpetual, irrevocable,
-non-exclusive, worldwide, no-charge, royalty-free license to:
-
- * make, have made, use, sell, offer to sell, import, and otherwise transfer
- your contribution in whole or in part, alone or in combination with or
- included in any product, work or materials arising out of the project to
- which your contribution was submitted, and
-
- * at our option, to sublicense these same rights to third parties through
- multiple levels of sublicensees or other licensing arrangements.
-
-4. Except as set out above, you keep all right, title, and interest in your
-contribution. The rights that you grant to us under these terms are effective
-on the date you first submitted a contribution to us, even if your submission
-took place before the date you sign these terms.
-
-5. You covenant, represent, warrant and agree that:
-
- * Each contribution that you submit is and shall be an original work of
- authorship and you can legally grant the rights set out in this SCA;
-
- * to the best of your knowledge, each contribution will not violate any
- third party's copyrights, trademarks, patents, or other intellectual
- property rights; and
-
- * each contribution shall be in compliance with U.S. export control laws and
- other applicable export and import laws. You agree to notify us if you
- become aware of any circumstance which would make any of the foregoing
- representations inaccurate in any respect. We may publicly disclose your
- participation in the project, including the fact that you have signed the SCA.
-
-6. This SCA is governed by the laws of the State of California and applicable
-U.S. Federal law. Any choice of law rules will not apply.
-
-7. Please place an “x” on one of the applicable statements below. Please do NOT
-mark both statements:
-
- * [x] I am signing on behalf of myself as an individual and no other person
- or entity, including my employer, has or will have rights with respect to my
- contributions.
-
- * [ ] I am signing on behalf of my employer or a legal entity and I have the
- actual authority to contractually bind that entity.
-
-## Contributor Details
-
-| Field | Entry |
-|------------------------------- | ------------------------- |
-| Name | Sean Löfgren |
-| Company name (if applicable) | |
-| Title or role (if applicable) | |
-| Date | 2019-09-17 |
-| GitHub username | seanBE |
-| Website (optional) | http://seanbe.github.io |
diff --git a/.github/contributors/zqianem.md b/.github/contributors/zqianem.md
deleted file mode 100644
index 13f6ab214..000000000
--- a/.github/contributors/zqianem.md
+++ /dev/null
@@ -1,106 +0,0 @@
-# spaCy contributor agreement
-
-This spaCy Contributor Agreement (**"SCA"**) is based on the
-[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
-The SCA applies to any contribution that you make to any product or project
-managed by us (the **"project"**), and sets out the intellectual property rights
-you grant to us in the contributed materials. The term **"us"** shall mean
-[ExplosionAI GmbH](https://explosion.ai/legal). The term
-**"you"** shall mean the person or entity identified below.
-
-If you agree to be bound by these terms, fill in the information requested
-below and include the filled-in version with your first pull request, under the
-folder [`.github/contributors/`](/.github/contributors/). The name of the file
-should be your GitHub username, with the extension `.md`. For example, the user
-example_user would create the file `.github/contributors/example_user.md`.
-
-Read this agreement carefully before signing. These terms and conditions
-constitute a binding legal agreement.
-
-## Contributor Agreement
-
-1. The term "contribution" or "contributed materials" means any source code,
-object code, patch, tool, sample, graphic, specification, manual,
-documentation, or any other material posted or submitted by you to the project.
-
-2. With respect to any worldwide copyrights, or copyright applications and
-registrations, in your contribution:
-
- * you hereby assign to us joint ownership, and to the extent that such
- assignment is or becomes invalid, ineffective or unenforceable, you hereby
- grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
- royalty-free, unrestricted license to exercise all rights under those
- copyrights. This includes, at our option, the right to sublicense these same
- rights to third parties through multiple levels of sublicensees or other
- licensing arrangements;
-
- * you agree that each of us can do all things in relation to your
- contribution as if each of us were the sole owners, and if one of us makes
- a derivative work of your contribution, the one who makes the derivative
- work (or has it made) will be the sole owner of that derivative work;
-
- * you agree that you will not assert any moral rights in your contribution
- against us, our licensees or transferees;
-
- * you agree that we may register a copyright in your contribution and
- exercise all ownership rights associated with it; and
-
- * you agree that neither of us has any duty to consult with, obtain the
- consent of, pay or render an accounting to the other for any use or
- distribution of your contribution.
-
-3. With respect to any patents you own, or that you can license without payment
-to any third party, you hereby grant to us a perpetual, irrevocable,
-non-exclusive, worldwide, no-charge, royalty-free license to:
-
- * make, have made, use, sell, offer to sell, import, and otherwise transfer
- your contribution in whole or in part, alone or in combination with or
- included in any product, work or materials arising out of the project to
- which your contribution was submitted, and
-
- * at our option, to sublicense these same rights to third parties through
- multiple levels of sublicensees or other licensing arrangements.
-
-4. Except as set out above, you keep all right, title, and interest in your
-contribution. The rights that you grant to us under these terms are effective
-on the date you first submitted a contribution to us, even if your submission
-took place before the date you sign these terms.
-
-5. You covenant, represent, warrant and agree that:
-
- * Each contribution that you submit is and shall be an original work of
- authorship and you can legally grant the rights set out in this SCA;
-
- * to the best of your knowledge, each contribution will not violate any
- third party's copyrights, trademarks, patents, or other intellectual
- property rights; and
-
- * each contribution shall be in compliance with U.S. export control laws and
- other applicable export and import laws. You agree to notify us if you
- become aware of any circumstance which would make any of the foregoing
- representations inaccurate in any respect. We may publicly disclose your
- participation in the project, including the fact that you have signed the SCA.
-
-6. This SCA is governed by the laws of the State of California and applicable
-U.S. Federal law. Any choice of law rules will not apply.
-
-7. Please place an “x” on one of the applicable statements below. Please do NOT
-mark both statements:
-
- * [x] I am signing on behalf of myself as an individual and no other person
- or entity, including my employer, has or will have rights with respect to my
- contributions.
-
- * [ ] I am signing on behalf of my employer or a legal entity and I have the
- actual authority to contractually bind that entity.
-
-## Contributor Details
-
-| Field | Entry |
-|------------------------------- | -------------------- |
-| Name | Em Zhan |
-| Company name (if applicable) | |
-| Title or role (if applicable) | |
-| Date | 2019-09-25 |
-| GitHub username | zqianem |
-| Website (optional) | |
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 3c2b56cd3..8b02b7055 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -73,8 +73,9 @@ issue body. A few more tips:
### Issue labels
-[See this page](https://github.com/explosion/spaCy/labels) for an overview of
-the system we use to tag our issues and pull requests.
+To distinguish issues that are opened by us, the maintainers, we usually add a
+💫 to the title. [See this page](https://github.com/explosion/spaCy/labels)
+for an overview of the system we use to tag our issues and pull requests.
## Contributing to the code base
diff --git a/Makefile b/Makefile
index 0f5c31ca6..2834096b7 100644
--- a/Makefile
+++ b/Makefile
@@ -1,17 +1,7 @@
SHELL := /bin/bash
sha = $(shell "git" "rev-parse" "--short" "HEAD")
-version = $(shell "bin/get-version.sh")
-wheel = spacy-$(version)-cp36-cp36m-linux_x86_64.whl
-dist/spacy.pex : dist/spacy-$(sha).pex
- cp dist/spacy-$(sha).pex dist/spacy.pex
- chmod a+rx dist/spacy.pex
-
-dist/spacy-$(sha).pex : dist/$(wheel)
- env3.6/bin/python -m pip install pex==1.5.3
- env3.6/bin/pex pytest dist/$(wheel) -e spacy -o dist/spacy-$(sha).pex
-
-dist/$(wheel) : setup.py spacy/*.py* spacy/*/*.py*
+dist/spacy.pex : spacy/*.py* spacy/*/*.py*
python3.6 -m venv env3.6
source env3.6/bin/activate
env3.6/bin/pip install wheel
@@ -19,6 +9,10 @@ dist/$(wheel) : setup.py spacy/*.py* spacy/*/*.py*
env3.6/bin/python setup.py build_ext --inplace
env3.6/bin/python setup.py sdist
env3.6/bin/python setup.py bdist_wheel
+ env3.6/bin/python -m pip install pex==1.5.3
+ env3.6/bin/pex pytest dist/*.whl -e spacy -o dist/spacy-$(sha).pex
+ cp dist/spacy-$(sha).pex dist/spacy.pex
+ chmod a+rx dist/spacy.pex
.PHONY : clean
diff --git a/README.md b/README.md
index 6bdbc7e46..27a49f465 100644
--- a/README.md
+++ b/README.md
@@ -49,12 +49,9 @@ It's commercial open-source software, released under the MIT license.
## 💬 Where to ask questions
The spaCy project is maintained by [@honnibal](https://github.com/honnibal)
-and [@ines](https://github.com/ines), along with core contributors
-[@svlandeg](https://github.com/svlandeg) and
-[@adrianeboyd](https://github.com/adrianeboyd). Please understand that we won't
-be able to provide individual support via email. We also believe that help is
-much more valuable if it's shared publicly, so that more people can benefit
-from it.
+and [@ines](https://github.com/ines). Please understand that we won't be able
+to provide individual support via email. We also believe that help is much more
+valuable if it's shared publicly, so that more people can benefit from it.
| Type | Platforms |
| ------------------------ | ------------------------------------------------------ |
@@ -175,8 +172,8 @@ python -m spacy download en_core_web_sm
python -m spacy download en
# pip install .tar.gz archive from path or URL
-pip install /Users/you/en_core_web_sm-2.2.0.tar.gz
-pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz
+pip install /Users/you/en_core_web_sm-2.1.0.tar.gz
+pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.1.0/en_core_web_sm-2.1.0.tar.gz
```
### Loading and using models
diff --git a/azure-pipelines.yml b/azure-pipelines.yml
index c23995de6..c5fa563be 100644
--- a/azure-pipelines.yml
+++ b/azure-pipelines.yml
@@ -79,24 +79,14 @@ jobs:
# Downgrading pip is necessary to prevent a wheel version incompatibility.
# Might be fixed in the future or some other way, so investigate again.
- script: |
- python -m pip install -U pip==18.1 setuptools
+ python -m pip install --upgrade pip==18.1
pip install -r requirements.txt
displayName: 'Install dependencies'
- script: |
python setup.py build_ext --inplace
- python setup.py sdist --formats=gztar
- displayName: 'Compile and build sdist'
+ pip install -e .
+ displayName: 'Build and install'
- - task: DeleteFiles@1
- inputs:
- contents: 'spacy'
- displayName: 'Delete source directory'
-
- - bash: |
- SDIST=$(python -c "import os;print(os.listdir('./dist')[-1])" 2>&1)
- pip install dist/$SDIST
- displayName: 'Install from sdist'
-
- - script: python -m pytest --pyargs spacy
+ - script: python -m pytest --tb=native spacy
displayName: 'Run tests'
diff --git a/bin/get-version.sh b/bin/get-version.sh
deleted file mode 100755
index 5a12ddd7a..000000000
--- a/bin/get-version.sh
+++ /dev/null
@@ -1,12 +0,0 @@
-#!/usr/bin/env bash
-
-set -e
-
-version=$(grep "__version__ = " spacy/about.py)
-version=${version/__version__ = }
-version=${version/\'/}
-version=${version/\'/}
-version=${version/\"/}
-version=${version/\"/}
-
-echo $version
diff --git a/bin/ud/run_eval.py b/bin/ud/run_eval.py
index 2da476721..171687980 100644
--- a/bin/ud/run_eval.py
+++ b/bin/ud/run_eval.py
@@ -7,16 +7,14 @@ import datetime
from pathlib import Path
import xml.etree.ElementTree as ET
-import conll17_ud_eval
-from ud_train import write_conllu
+from spacy.cli.ud import conll17_ud_eval
+from spacy.cli.ud.ud_train import write_conllu
from spacy.lang.lex_attrs import word_shape
from spacy.util import get_lang_class
# All languages in spaCy - in UD format (note that Norwegian is 'no' instead of 'nb')
-ALL_LANGUAGES = ("af, ar, bg, bn, ca, cs, da, de, el, en, es, et, fa, fi, fr,"
- "ga, he, hi, hr, hu, id, is, it, ja, kn, ko, lt, lv, mr, no,"
- "nl, pl, pt, ro, ru, si, sk, sl, sq, sr, sv, ta, te, th, tl,"
- "tr, tt, uk, ur, vi, zh")
+ALL_LANGUAGES = "ar, ca, da, de, el, en, es, fa, fi, fr, ga, he, hi, hr, hu, id, " \
+ "it, ja, no, nl, pl, pt, ro, ru, sv, tr, ur, vi, zh"
# Non-parsing tasks that will be evaluated (works for default models)
EVAL_NO_PARSE = ['Tokens', 'Words', 'Lemmas', 'Sentences', 'Feats']
@@ -75,10 +73,10 @@ def _contains_blinded_text(stats_xml):
tree = ET.parse(stats_xml)
root = tree.getroot()
total_tokens = int(root.find('size/total/tokens').text)
- unique_forms = int(root.find('forms').get('unique'))
+ unique_lemmas = int(root.find('lemmas').get('unique'))
# assume the corpus is largely blinded when fewer than 1% of tokens are unique
- return (unique_forms / total_tokens) < 0.01
+ return (unique_lemmas / total_tokens) < 0.01
def fetch_all_treebanks(ud_dir, languages, corpus, best_per_language):
@@ -264,26 +262,22 @@ def main(out_path, ud_dir, check_parse=False, langs=ALL_LANGUAGES, exclude_train
if not exclude_trained_models:
if 'de' in models:
models['de'].append(load_model('de_core_news_sm'))
- models['de'].append(load_model('de_core_news_md'))
- if 'el' in models:
- models['el'].append(load_model('el_core_news_sm'))
- models['el'].append(load_model('el_core_news_md'))
- if 'en' in models:
- models['en'].append(load_model('en_core_web_sm'))
- models['en'].append(load_model('en_core_web_md'))
- models['en'].append(load_model('en_core_web_lg'))
if 'es' in models:
models['es'].append(load_model('es_core_news_sm'))
models['es'].append(load_model('es_core_news_md'))
- if 'fr' in models:
- models['fr'].append(load_model('fr_core_news_sm'))
- models['fr'].append(load_model('fr_core_news_md'))
+ if 'pt' in models:
+ models['pt'].append(load_model('pt_core_news_sm'))
if 'it' in models:
models['it'].append(load_model('it_core_news_sm'))
if 'nl' in models:
models['nl'].append(load_model('nl_core_news_sm'))
- if 'pt' in models:
- models['pt'].append(load_model('pt_core_news_sm'))
+ if 'en' in models:
+ models['en'].append(load_model('en_core_web_sm'))
+ models['en'].append(load_model('en_core_web_md'))
+ models['en'].append(load_model('en_core_web_lg'))
+ if 'fr' in models:
+ models['fr'].append(load_model('fr_core_news_sm'))
+ models['fr'].append(load_model('fr_core_news_md'))
with out_path.open(mode='w', encoding='utf-8') as out_file:
run_all_evals(models, treebanks, out_file, check_parse, print_freq_tasks)
diff --git a/bin/ud/ud_run_test.py b/bin/ud/ud_run_test.py
index de01cf350..1c529c831 100644
--- a/bin/ud/ud_run_test.py
+++ b/bin/ud/ud_run_test.py
@@ -109,13 +109,15 @@ def write_conllu(docs, file_):
merger = Matcher(docs[0].vocab)
merger.add("SUBTOK", None, [{"DEP": "subtok", "op": "+"}])
for i, doc in enumerate(docs):
- matches = []
- if doc.is_parsed:
- matches = merger(doc)
+ matches = merger(doc)
spans = [doc[start : end + 1] for _, start, end in matches]
with doc.retokenize() as retokenizer:
for span in spans:
retokenizer.merge(span)
+ # TODO: This shouldn't be necessary? Should be handled in merge
+ for word in doc:
+ if word.i == word.head.i:
+ word.dep_ = "ROOT"
file_.write("# newdoc id = {i}\n".format(i=i))
for j, sent in enumerate(doc.sents):
file_.write("# sent_id = {i}.{j}\n".format(i=i, j=j))
diff --git a/bin/ud/ud_train.py b/bin/ud/ud_train.py
index c1a1501d9..8f699db4f 100644
--- a/bin/ud/ud_train.py
+++ b/bin/ud/ud_train.py
@@ -25,7 +25,7 @@ import itertools
import random
import numpy.random
-import conll17_ud_eval
+from . import conll17_ud_eval
from spacy import lang
from spacy.lang import zh
@@ -82,8 +82,6 @@ def read_data(
head = int(head) - 1 if head != "0" else id_
sent["words"].append(word)
sent["tags"].append(tag)
- sent["morphology"].append(_parse_morph_string(morph))
- sent["morphology"][-1].add("POS_%s" % pos)
sent["heads"].append(head)
sent["deps"].append("ROOT" if dep == "root" else dep)
sent["spaces"].append(space_after == "_")
@@ -92,12 +90,10 @@ def read_data(
if oracle_segments:
docs.append(Doc(nlp.vocab, words=sent["words"], spaces=sent["spaces"]))
golds.append(GoldParse(docs[-1], **sent))
- assert golds[-1].morphology is not None
sent_annots.append(sent)
if raw_text and max_doc_length and len(sent_annots) >= max_doc_length:
doc, gold = _make_gold(nlp, None, sent_annots)
- assert gold.morphology is not None
sent_annots = []
docs.append(doc)
golds.append(gold)
@@ -112,17 +108,6 @@ def read_data(
return docs, golds
return docs, golds
-def _parse_morph_string(morph_string):
- if morph_string == '_':
- return set()
- output = []
- replacements = {'1': 'one', '2': 'two', '3': 'three'}
- for feature in morph_string.split('|'):
- key, value = feature.split('=')
- value = replacements.get(value, value)
- value = value.split(',')[0]
- output.append('%s_%s' % (key, value.lower()))
- return set(output)
def read_conllu(file_):
docs = []
@@ -156,8 +141,8 @@ def _make_gold(nlp, text, sent_annots, drop_deps=0.0):
flat = defaultdict(list)
sent_starts = []
for sent in sent_annots:
- flat["heads"].extend(len(flat["words"])+head for head in sent["heads"])
- for field in ["words", "tags", "deps", "morphology", "entities", "spaces"]:
+ flat["heads"].extend(len(flat["words"]) + head for head in sent["heads"])
+ for field in ["words", "tags", "deps", "entities", "spaces"]:
flat[field].extend(sent[field])
sent_starts.append(True)
sent_starts.extend([False] * (len(sent["words"]) - 1))
@@ -229,18 +214,11 @@ def write_conllu(docs, file_):
merger = Matcher(docs[0].vocab)
merger.add("SUBTOK", None, [{"DEP": "subtok", "op": "+"}])
for i, doc in enumerate(docs):
- matches = []
- if doc.is_parsed:
- matches = merger(doc)
+ matches = merger(doc)
spans = [doc[start : end + 1] for _, start, end in matches]
- seen_tokens = set()
with doc.retokenize() as retokenizer:
for span in spans:
- span_tokens = set(range(span.start, span.end))
- if not span_tokens.intersection(seen_tokens):
- retokenizer.merge(span)
- seen_tokens.update(span_tokens)
-
+ retokenizer.merge(span)
file_.write("# newdoc id = {i}\n".format(i=i))
for j, sent in enumerate(doc.sents):
file_.write("# sent_id = {i}.{j}\n".format(i=i, j=j))
@@ -263,29 +241,27 @@ def write_conllu(docs, file_):
def print_progress(itn, losses, ud_scores):
fields = {
"dep_loss": losses.get("parser", 0.0),
- "morph_loss": losses.get("morphologizer", 0.0),
"tag_loss": losses.get("tagger", 0.0),
"words": ud_scores["Words"].f1 * 100,
"sents": ud_scores["Sentences"].f1 * 100,
"tags": ud_scores["XPOS"].f1 * 100,
"uas": ud_scores["UAS"].f1 * 100,
"las": ud_scores["LAS"].f1 * 100,
- "morph": ud_scores["Feats"].f1 * 100,
}
- header = ["Epoch", "P.Loss", "M.Loss", "LAS", "UAS", "TAG", "MORPH", "SENT", "WORD"]
+ header = ["Epoch", "Loss", "LAS", "UAS", "TAG", "SENT", "WORD"]
if itn == 0:
print("\t".join(header))
- tpl = "\t".join((
- "{:d}",
- "{dep_loss:.1f}",
- "{morph_loss:.1f}",
- "{las:.1f}",
- "{uas:.1f}",
- "{tags:.1f}",
- "{morph:.1f}",
- "{sents:.1f}",
- "{words:.1f}",
- ))
+ tpl = "\t".join(
+ (
+ "{:d}",
+ "{dep_loss:.1f}",
+ "{las:.1f}",
+ "{uas:.1f}",
+ "{tags:.1f}",
+ "{sents:.1f}",
+ "{words:.1f}",
+ )
+ )
print(tpl.format(itn, **fields))
@@ -306,27 +282,25 @@ def get_token_conllu(token, i):
head = 0
else:
head = i + (token.head.i - token.i) + 1
- features = list(token.morph)
- feat_str = []
- replacements = {"one": "1", "two": "2", "three": "3"}
- for feat in features:
- if not feat.startswith("begin") and not feat.startswith("end"):
- key, value = feat.split("_", 1)
- value = replacements.get(value, value)
- feat_str.append("%s=%s" % (key, value.title()))
- if not feat_str:
- feat_str = "_"
- else:
- feat_str = "|".join(feat_str)
- fields = [str(i+1), token.text, token.lemma_, token.pos_, token.tag_, feat_str,
- str(head), token.dep_.lower(), "_", "_"]
+ fields = [
+ str(i + 1),
+ token.text,
+ token.lemma_,
+ token.pos_,
+ token.tag_,
+ "_",
+ str(head),
+ token.dep_.lower(),
+ "_",
+ "_",
+ ]
lines.append("\t".join(fields))
return "\n".join(lines)
-Token.set_extension("get_conllu_lines", method=get_token_conllu, force=True)
-Token.set_extension("begins_fused", default=False, force=True)
-Token.set_extension("inside_fused", default=False, force=True)
+Token.set_extension("get_conllu_lines", method=get_token_conllu)
+Token.set_extension("begins_fused", default=False)
+Token.set_extension("inside_fused", default=False)
##################
@@ -350,8 +324,7 @@ def load_nlp(corpus, config, vectors=None):
def initialize_pipeline(nlp, docs, golds, config, device):
- nlp.add_pipe(nlp.create_pipe("tagger", config={"set_morphology": False}))
- nlp.add_pipe(nlp.create_pipe("morphologizer"))
+ nlp.add_pipe(nlp.create_pipe("tagger"))
nlp.add_pipe(nlp.create_pipe("parser"))
if config.multitask_tag:
nlp.parser.add_multitask_objective("tag")
@@ -551,12 +524,14 @@ def main(
out_path = parses_dir / corpus / "epoch-{i}.conllu".format(i=i)
with nlp.use_params(optimizer.averages):
if use_oracle_segments:
- parsed_docs, scores = evaluate(nlp, paths.dev.conllu,
- paths.dev.conllu, out_path)
+ parsed_docs, scores = evaluate(
+ nlp, paths.dev.conllu, paths.dev.conllu, out_path
+ )
else:
- parsed_docs, scores = evaluate(nlp, paths.dev.text,
- paths.dev.conllu, out_path)
- print_progress(i, losses, scores)
+ parsed_docs, scores = evaluate(
+ nlp, paths.dev.text, paths.dev.conllu, out_path
+ )
+ print_progress(i, losses, scores)
def _render_parses(i, to_render):
diff --git a/examples/pipeline/dummy_entity_linking.py b/examples/pipeline/dummy_entity_linking.py
deleted file mode 100644
index e69de29bb..000000000
diff --git a/examples/pipeline/wikidata_entity_linking.py b/examples/pipeline/wikidata_entity_linking.py
deleted file mode 100644
index e69de29bb..000000000
diff --git a/examples/training/pretrain_kb.py b/examples/training/pretrain_kb.py
index 2c494d5c4..d5281ad42 100644
--- a/examples/training/pretrain_kb.py
+++ b/examples/training/pretrain_kb.py
@@ -8,8 +8,8 @@ For more details, see the documentation:
* Knowledge base: https://spacy.io/api/kb
* Entity Linking: https://spacy.io/usage/linguistic-features#entity-linking
-Compatible with: spaCy v2.2
-Last tested with: v2.2
+Compatible with: spaCy vX.X
+Last tested with: vX.X
"""
from __future__ import unicode_literals, print_function
@@ -73,6 +73,7 @@ def main(vocab_path=None, model=None, output_dir=None, n_iter=50):
input_dim=INPUT_DIM,
desc_width=DESC_WIDTH,
epochs=n_iter,
+ threshold=0.001,
)
encoder.train(description_list=descriptions, to_print=True)
diff --git a/examples/training/textcat_example_data/CC0.txt b/examples/training/textcat_example_data/CC0.txt
deleted file mode 100644
index 0e259d42c..000000000
--- a/examples/training/textcat_example_data/CC0.txt
+++ /dev/null
@@ -1,121 +0,0 @@
-Creative Commons Legal Code
-
-CC0 1.0 Universal
-
- CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE
- LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN
- ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS
- INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES
- REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS
- PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM
- THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED
- HEREUNDER.
-
-Statement of Purpose
-
-The laws of most jurisdictions throughout the world automatically confer
-exclusive Copyright and Related Rights (defined below) upon the creator
-and subsequent owner(s) (each and all, an "owner") of an original work of
-authorship and/or a database (each, a "Work").
-
-Certain owners wish to permanently relinquish those rights to a Work for
-the purpose of contributing to a commons of creative, cultural and
-scientific works ("Commons") that the public can reliably and without fear
-of later claims of infringement build upon, modify, incorporate in other
-works, reuse and redistribute as freely as possible in any form whatsoever
-and for any purposes, including without limitation commercial purposes.
-These owners may contribute to the Commons to promote the ideal of a free
-culture and the further production of creative, cultural and scientific
-works, or to gain reputation or greater distribution for their Work in
-part through the use and efforts of others.
-
-For these and/or other purposes and motivations, and without any
-expectation of additional consideration or compensation, the person
-associating CC0 with a Work (the "Affirmer"), to the extent that he or she
-is an owner of Copyright and Related Rights in the Work, voluntarily
-elects to apply CC0 to the Work and publicly distribute the Work under its
-terms, with knowledge of his or her Copyright and Related Rights in the
-Work and the meaning and intended legal effect of CC0 on those rights.
-
-1. Copyright and Related Rights. A Work made available under CC0 may be
-protected by copyright and related or neighboring rights ("Copyright and
-Related Rights"). Copyright and Related Rights include, but are not
-limited to, the following:
-
- i. the right to reproduce, adapt, distribute, perform, display,
- communicate, and translate a Work;
- ii. moral rights retained by the original author(s) and/or performer(s);
-iii. publicity and privacy rights pertaining to a person's image or
- likeness depicted in a Work;
- iv. rights protecting against unfair competition in regards to a Work,
- subject to the limitations in paragraph 4(a), below;
- v. rights protecting the extraction, dissemination, use and reuse of data
- in a Work;
- vi. database rights (such as those arising under Directive 96/9/EC of the
- European Parliament and of the Council of 11 March 1996 on the legal
- protection of databases, and under any national implementation
- thereof, including any amended or successor version of such
- directive); and
-vii. other similar, equivalent or corresponding rights throughout the
- world based on applicable law or treaty, and any national
- implementations thereof.
-
-2. Waiver. To the greatest extent permitted by, but not in contravention
-of, applicable law, Affirmer hereby overtly, fully, permanently,
-irrevocably and unconditionally waives, abandons, and surrenders all of
-Affirmer's Copyright and Related Rights and associated claims and causes
-of action, whether now known or unknown (including existing as well as
-future claims and causes of action), in the Work (i) in all territories
-worldwide, (ii) for the maximum duration provided by applicable law or
-treaty (including future time extensions), (iii) in any current or future
-medium and for any number of copies, and (iv) for any purpose whatsoever,
-including without limitation commercial, advertising or promotional
-purposes (the "Waiver"). Affirmer makes the Waiver for the benefit of each
-member of the public at large and to the detriment of Affirmer's heirs and
-successors, fully intending that such Waiver shall not be subject to
-revocation, rescission, cancellation, termination, or any other legal or
-equitable action to disrupt the quiet enjoyment of the Work by the public
-as contemplated by Affirmer's express Statement of Purpose.
-
-3. Public License Fallback. Should any part of the Waiver for any reason
-be judged legally invalid or ineffective under applicable law, then the
-Waiver shall be preserved to the maximum extent permitted taking into
-account Affirmer's express Statement of Purpose. In addition, to the
-extent the Waiver is so judged Affirmer hereby grants to each affected
-person a royalty-free, non transferable, non sublicensable, non exclusive,
-irrevocable and unconditional license to exercise Affirmer's Copyright and
-Related Rights in the Work (i) in all territories worldwide, (ii) for the
-maximum duration provided by applicable law or treaty (including future
-time extensions), (iii) in any current or future medium and for any number
-of copies, and (iv) for any purpose whatsoever, including without
-limitation commercial, advertising or promotional purposes (the
-"License"). The License shall be deemed effective as of the date CC0 was
-applied by Affirmer to the Work. Should any part of the License for any
-reason be judged legally invalid or ineffective under applicable law, such
-partial invalidity or ineffectiveness shall not invalidate the remainder
-of the License, and in such case Affirmer hereby affirms that he or she
-will not (i) exercise any of his or her remaining Copyright and Related
-Rights in the Work or (ii) assert any associated claims and causes of
-action with respect to the Work, in either case contrary to Affirmer's
-express Statement of Purpose.
-
-4. Limitations and Disclaimers.
-
- a. No trademark or patent rights held by Affirmer are waived, abandoned,
- surrendered, licensed or otherwise affected by this document.
- b. Affirmer offers the Work as-is and makes no representations or
- warranties of any kind concerning the Work, express, implied,
- statutory or otherwise, including without limitation warranties of
- title, merchantability, fitness for a particular purpose, non
- infringement, or the absence of latent or other defects, accuracy, or
- the present or absence of errors, whether or not discoverable, all to
- the greatest extent permissible under applicable law.
- c. Affirmer disclaims responsibility for clearing rights of other persons
- that may apply to the Work or any use thereof, including without
- limitation any person's Copyright and Related Rights in the Work.
- Further, Affirmer disclaims responsibility for obtaining any necessary
- consents, permissions or other rights required for any use of the
- Work.
- d. Affirmer understands and acknowledges that Creative Commons is not a
- party to this document and has no duty or obligation with respect to
- this CC0 or use of the Work.
diff --git a/examples/training/textcat_example_data/CC_BY-SA-3.0.txt b/examples/training/textcat_example_data/CC_BY-SA-3.0.txt
deleted file mode 100644
index 604209a80..000000000
--- a/examples/training/textcat_example_data/CC_BY-SA-3.0.txt
+++ /dev/null
@@ -1,359 +0,0 @@
-Creative Commons Legal Code
-
-Attribution-ShareAlike 3.0 Unported
-
- CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE
- LEGAL SERVICES. DISTRIBUTION OF THIS LICENSE DOES NOT CREATE AN
- ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS
- INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES
- REGARDING THE INFORMATION PROVIDED, AND DISCLAIMS LIABILITY FOR
- DAMAGES RESULTING FROM ITS USE.
-
-License
-
-THE WORK (AS DEFINED BELOW) IS PROVIDED UNDER THE TERMS OF THIS CREATIVE
-COMMONS PUBLIC LICENSE ("CCPL" OR "LICENSE"). THE WORK IS PROTECTED BY
-COPYRIGHT AND/OR OTHER APPLICABLE LAW. ANY USE OF THE WORK OTHER THAN AS
-AUTHORIZED UNDER THIS LICENSE OR COPYRIGHT LAW IS PROHIBITED.
-
-BY EXERCISING ANY RIGHTS TO THE WORK PROVIDED HERE, YOU ACCEPT AND AGREE
-TO BE BOUND BY THE TERMS OF THIS LICENSE. TO THE EXTENT THIS LICENSE MAY
-BE CONSIDERED TO BE A CONTRACT, THE LICENSOR GRANTS YOU THE RIGHTS
-CONTAINED HERE IN CONSIDERATION OF YOUR ACCEPTANCE OF SUCH TERMS AND
-CONDITIONS.
-
-1. Definitions
-
- a. "Adaptation" means a work based upon the Work, or upon the Work and
- other pre-existing works, such as a translation, adaptation,
- derivative work, arrangement of music or other alterations of a
- literary or artistic work, or phonogram or performance and includes
- cinematographic adaptations or any other form in which the Work may be
- recast, transformed, or adapted including in any form recognizably
- derived from the original, except that a work that constitutes a
- Collection will not be considered an Adaptation for the purpose of
- this License. For the avoidance of doubt, where the Work is a musical
- work, performance or phonogram, the synchronization of the Work in
- timed-relation with a moving image ("synching") will be considered an
- Adaptation for the purpose of this License.
- b. "Collection" means a collection of literary or artistic works, such as
- encyclopedias and anthologies, or performances, phonograms or
- broadcasts, or other works or subject matter other than works listed
- in Section 1(f) below, which, by reason of the selection and
- arrangement of their contents, constitute intellectual creations, in
- which the Work is included in its entirety in unmodified form along
- with one or more other contributions, each constituting separate and
- independent works in themselves, which together are assembled into a
- collective whole. A work that constitutes a Collection will not be
- considered an Adaptation (as defined below) for the purposes of this
- License.
- c. "Creative Commons Compatible License" means a license that is listed
- at https://creativecommons.org/compatiblelicenses that has been
- approved by Creative Commons as being essentially equivalent to this
- License, including, at a minimum, because that license: (i) contains
- terms that have the same purpose, meaning and effect as the License
- Elements of this License; and, (ii) explicitly permits the relicensing
- of adaptations of works made available under that license under this
- License or a Creative Commons jurisdiction license with the same
- License Elements as this License.
- d. "Distribute" means to make available to the public the original and
- copies of the Work or Adaptation, as appropriate, through sale or
- other transfer of ownership.
- e. "License Elements" means the following high-level license attributes
- as selected by Licensor and indicated in the title of this License:
- Attribution, ShareAlike.
- f. "Licensor" means the individual, individuals, entity or entities that
- offer(s) the Work under the terms of this License.
- g. "Original Author" means, in the case of a literary or artistic work,
- the individual, individuals, entity or entities who created the Work
- or if no individual or entity can be identified, the publisher; and in
- addition (i) in the case of a performance the actors, singers,
- musicians, dancers, and other persons who act, sing, deliver, declaim,
- play in, interpret or otherwise perform literary or artistic works or
- expressions of folklore; (ii) in the case of a phonogram the producer
- being the person or legal entity who first fixes the sounds of a
- performance or other sounds; and, (iii) in the case of broadcasts, the
- organization that transmits the broadcast.
- h. "Work" means the literary and/or artistic work offered under the terms
- of this License including without limitation any production in the
- literary, scientific and artistic domain, whatever may be the mode or
- form of its expression including digital form, such as a book,
- pamphlet and other writing; a lecture, address, sermon or other work
- of the same nature; a dramatic or dramatico-musical work; a
- choreographic work or entertainment in dumb show; a musical
- composition with or without words; a cinematographic work to which are
- assimilated works expressed by a process analogous to cinematography;
- a work of drawing, painting, architecture, sculpture, engraving or
- lithography; a photographic work to which are assimilated works
- expressed by a process analogous to photography; a work of applied
- art; an illustration, map, plan, sketch or three-dimensional work
- relative to geography, topography, architecture or science; a
- performance; a broadcast; a phonogram; a compilation of data to the
- extent it is protected as a copyrightable work; or a work performed by
- a variety or circus performer to the extent it is not otherwise
- considered a literary or artistic work.
- i. "You" means an individual or entity exercising rights under this
- License who has not previously violated the terms of this License with
- respect to the Work, or who has received express permission from the
- Licensor to exercise rights under this License despite a previous
- violation.
- j. "Publicly Perform" means to perform public recitations of the Work and
- to communicate to the public those public recitations, by any means or
- process, including by wire or wireless means or public digital
- performances; to make available to the public Works in such a way that
- members of the public may access these Works from a place and at a
- place individually chosen by them; to perform the Work to the public
- by any means or process and the communication to the public of the
- performances of the Work, including by public digital performance; to
- broadcast and rebroadcast the Work by any means including signs,
- sounds or images.
- k. "Reproduce" means to make copies of the Work by any means including
- without limitation by sound or visual recordings and the right of
- fixation and reproducing fixations of the Work, including storage of a
- protected performance or phonogram in digital form or other electronic
- medium.
-
-2. Fair Dealing Rights. Nothing in this License is intended to reduce,
-limit, or restrict any uses free from copyright or rights arising from
-limitations or exceptions that are provided for in connection with the
-copyright protection under copyright law or other applicable laws.
-
-3. License Grant. Subject to the terms and conditions of this License,
-Licensor hereby grants You a worldwide, royalty-free, non-exclusive,
-perpetual (for the duration of the applicable copyright) license to
-exercise the rights in the Work as stated below:
-
- a. to Reproduce the Work, to incorporate the Work into one or more
- Collections, and to Reproduce the Work as incorporated in the
- Collections;
- b. to create and Reproduce Adaptations provided that any such Adaptation,
- including any translation in any medium, takes reasonable steps to
- clearly label, demarcate or otherwise identify that changes were made
- to the original Work. For example, a translation could be marked "The
- original work was translated from English to Spanish," or a
- modification could indicate "The original work has been modified.";
- c. to Distribute and Publicly Perform the Work including as incorporated
- in Collections; and,
- d. to Distribute and Publicly Perform Adaptations.
- e. For the avoidance of doubt:
-
- i. Non-waivable Compulsory License Schemes. In those jurisdictions in
- which the right to collect royalties through any statutory or
- compulsory licensing scheme cannot be waived, the Licensor
- reserves the exclusive right to collect such royalties for any
- exercise by You of the rights granted under this License;
- ii. Waivable Compulsory License Schemes. In those jurisdictions in
- which the right to collect royalties through any statutory or
- compulsory licensing scheme can be waived, the Licensor waives the
- exclusive right to collect such royalties for any exercise by You
- of the rights granted under this License; and,
- iii. Voluntary License Schemes. The Licensor waives the right to
- collect royalties, whether individually or, in the event that the
- Licensor is a member of a collecting society that administers
- voluntary licensing schemes, via that society, from any exercise
- by You of the rights granted under this License.
-
-The above rights may be exercised in all media and formats whether now
-known or hereafter devised. The above rights include the right to make
-such modifications as are technically necessary to exercise the rights in
-other media and formats. Subject to Section 8(f), all rights not expressly
-granted by Licensor are hereby reserved.
-
-4. Restrictions. The license granted in Section 3 above is expressly made
-subject to and limited by the following restrictions:
-
- a. You may Distribute or Publicly Perform the Work only under the terms
- of this License. You must include a copy of, or the Uniform Resource
- Identifier (URI) for, this License with every copy of the Work You
- Distribute or Publicly Perform. You may not offer or impose any terms
- on the Work that restrict the terms of this License or the ability of
- the recipient of the Work to exercise the rights granted to that
- recipient under the terms of the License. You may not sublicense the
- Work. You must keep intact all notices that refer to this License and
- to the disclaimer of warranties with every copy of the Work You
- Distribute or Publicly Perform. When You Distribute or Publicly
- Perform the Work, You may not impose any effective technological
- measures on the Work that restrict the ability of a recipient of the
- Work from You to exercise the rights granted to that recipient under
- the terms of the License. This Section 4(a) applies to the Work as
- incorporated in a Collection, but this does not require the Collection
- apart from the Work itself to be made subject to the terms of this
- License. If You create a Collection, upon notice from any Licensor You
- must, to the extent practicable, remove from the Collection any credit
- as required by Section 4(c), as requested. If You create an
- Adaptation, upon notice from any Licensor You must, to the extent
- practicable, remove from the Adaptation any credit as required by
- Section 4(c), as requested.
- b. You may Distribute or Publicly Perform an Adaptation only under the
- terms of: (i) this License; (ii) a later version of this License with
- the same License Elements as this License; (iii) a Creative Commons
- jurisdiction license (either this or a later license version) that
- contains the same License Elements as this License (e.g.,
- Attribution-ShareAlike 3.0 US)); (iv) a Creative Commons Compatible
- License. If you license the Adaptation under one of the licenses
- mentioned in (iv), you must comply with the terms of that license. If
- you license the Adaptation under the terms of any of the licenses
- mentioned in (i), (ii) or (iii) (the "Applicable License"), you must
- comply with the terms of the Applicable License generally and the
- following provisions: (I) You must include a copy of, or the URI for,
- the Applicable License with every copy of each Adaptation You
- Distribute or Publicly Perform; (II) You may not offer or impose any
- terms on the Adaptation that restrict the terms of the Applicable
- License or the ability of the recipient of the Adaptation to exercise
- the rights granted to that recipient under the terms of the Applicable
- License; (III) You must keep intact all notices that refer to the
- Applicable License and to the disclaimer of warranties with every copy
- of the Work as included in the Adaptation You Distribute or Publicly
- Perform; (IV) when You Distribute or Publicly Perform the Adaptation,
- You may not impose any effective technological measures on the
- Adaptation that restrict the ability of a recipient of the Adaptation
- from You to exercise the rights granted to that recipient under the
- terms of the Applicable License. This Section 4(b) applies to the
- Adaptation as incorporated in a Collection, but this does not require
- the Collection apart from the Adaptation itself to be made subject to
- the terms of the Applicable License.
- c. If You Distribute, or Publicly Perform the Work or any Adaptations or
- Collections, You must, unless a request has been made pursuant to
- Section 4(a), keep intact all copyright notices for the Work and
- provide, reasonable to the medium or means You are utilizing: (i) the
- name of the Original Author (or pseudonym, if applicable) if supplied,
- and/or if the Original Author and/or Licensor designate another party
- or parties (e.g., a sponsor institute, publishing entity, journal) for
- attribution ("Attribution Parties") in Licensor's copyright notice,
- terms of service or by other reasonable means, the name of such party
- or parties; (ii) the title of the Work if supplied; (iii) to the
- extent reasonably practicable, the URI, if any, that Licensor
- specifies to be associated with the Work, unless such URI does not
- refer to the copyright notice or licensing information for the Work;
- and (iv) , consistent with Ssection 3(b), in the case of an
- Adaptation, a credit identifying the use of the Work in the Adaptation
- (e.g., "French translation of the Work by Original Author," or
- "Screenplay based on original Work by Original Author"). The credit
- required by this Section 4(c) may be implemented in any reasonable
- manner; provided, however, that in the case of a Adaptation or
- Collection, at a minimum such credit will appear, if a credit for all
- contributing authors of the Adaptation or Collection appears, then as
- part of these credits and in a manner at least as prominent as the
- credits for the other contributing authors. For the avoidance of
- doubt, You may only use the credit required by this Section for the
- purpose of attribution in the manner set out above and, by exercising
- Your rights under this License, You may not implicitly or explicitly
- assert or imply any connection with, sponsorship or endorsement by the
- Original Author, Licensor and/or Attribution Parties, as appropriate,
- of You or Your use of the Work, without the separate, express prior
- written permission of the Original Author, Licensor and/or Attribution
- Parties.
- d. Except as otherwise agreed in writing by the Licensor or as may be
- otherwise permitted by applicable law, if You Reproduce, Distribute or
- Publicly Perform the Work either by itself or as part of any
- Adaptations or Collections, You must not distort, mutilate, modify or
- take other derogatory action in relation to the Work which would be
- prejudicial to the Original Author's honor or reputation. Licensor
- agrees that in those jurisdictions (e.g. Japan), in which any exercise
- of the right granted in Section 3(b) of this License (the right to
- make Adaptations) would be deemed to be a distortion, mutilation,
- modification or other derogatory action prejudicial to the Original
- Author's honor and reputation, the Licensor will waive or not assert,
- as appropriate, this Section, to the fullest extent permitted by the
- applicable national law, to enable You to reasonably exercise Your
- right under Section 3(b) of this License (right to make Adaptations)
- but not otherwise.
-
-5. Representations, Warranties and Disclaimer
-
-UNLESS OTHERWISE MUTUALLY AGREED TO BY THE PARTIES IN WRITING, LICENSOR
-OFFERS THE WORK AS-IS AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY
-KIND CONCERNING THE WORK, EXPRESS, IMPLIED, STATUTORY OR OTHERWISE,
-INCLUDING, WITHOUT LIMITATION, WARRANTIES OF TITLE, MERCHANTIBILITY,
-FITNESS FOR A PARTICULAR PURPOSE, NONINFRINGEMENT, OR THE ABSENCE OF
-LATENT OR OTHER DEFECTS, ACCURACY, OR THE PRESENCE OF ABSENCE OF ERRORS,
-WHETHER OR NOT DISCOVERABLE. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION
-OF IMPLIED WARRANTIES, SO SUCH EXCLUSION MAY NOT APPLY TO YOU.
-
-6. Limitation on Liability. EXCEPT TO THE EXTENT REQUIRED BY APPLICABLE
-LAW, IN NO EVENT WILL LICENSOR BE LIABLE TO YOU ON ANY LEGAL THEORY FOR
-ANY SPECIAL, INCIDENTAL, CONSEQUENTIAL, PUNITIVE OR EXEMPLARY DAMAGES
-ARISING OUT OF THIS LICENSE OR THE USE OF THE WORK, EVEN IF LICENSOR HAS
-BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
-
-7. Termination
-
- a. This License and the rights granted hereunder will terminate
- automatically upon any breach by You of the terms of this License.
- Individuals or entities who have received Adaptations or Collections
- from You under this License, however, will not have their licenses
- terminated provided such individuals or entities remain in full
- compliance with those licenses. Sections 1, 2, 5, 6, 7, and 8 will
- survive any termination of this License.
- b. Subject to the above terms and conditions, the license granted here is
- perpetual (for the duration of the applicable copyright in the Work).
- Notwithstanding the above, Licensor reserves the right to release the
- Work under different license terms or to stop distributing the Work at
- any time; provided, however that any such election will not serve to
- withdraw this License (or any other license that has been, or is
- required to be, granted under the terms of this License), and this
- License will continue in full force and effect unless terminated as
- stated above.
-
-8. Miscellaneous
-
- a. Each time You Distribute or Publicly Perform the Work or a Collection,
- the Licensor offers to the recipient a license to the Work on the same
- terms and conditions as the license granted to You under this License.
- b. Each time You Distribute or Publicly Perform an Adaptation, Licensor
- offers to the recipient a license to the original Work on the same
- terms and conditions as the license granted to You under this License.
- c. If any provision of this License is invalid or unenforceable under
- applicable law, it shall not affect the validity or enforceability of
- the remainder of the terms of this License, and without further action
- by the parties to this agreement, such provision shall be reformed to
- the minimum extent necessary to make such provision valid and
- enforceable.
- d. No term or provision of this License shall be deemed waived and no
- breach consented to unless such waiver or consent shall be in writing
- and signed by the party to be charged with such waiver or consent.
- e. This License constitutes the entire agreement between the parties with
- respect to the Work licensed here. There are no understandings,
- agreements or representations with respect to the Work not specified
- here. Licensor shall not be bound by any additional provisions that
- may appear in any communication from You. This License may not be
- modified without the mutual written agreement of the Licensor and You.
- f. The rights granted under, and the subject matter referenced, in this
- License were drafted utilizing the terminology of the Berne Convention
- for the Protection of Literary and Artistic Works (as amended on
- September 28, 1979), the Rome Convention of 1961, the WIPO Copyright
- Treaty of 1996, the WIPO Performances and Phonograms Treaty of 1996
- and the Universal Copyright Convention (as revised on July 24, 1971).
- These rights and subject matter take effect in the relevant
- jurisdiction in which the License terms are sought to be enforced
- according to the corresponding provisions of the implementation of
- those treaty provisions in the applicable national law. If the
- standard suite of rights granted under applicable copyright law
- includes additional rights not granted under this License, such
- additional rights are deemed to be included in the License; this
- License is not intended to restrict the license of any rights under
- applicable law.
-
-
-Creative Commons Notice
-
- Creative Commons is not a party to this License, and makes no warranty
- whatsoever in connection with the Work. Creative Commons will not be
- liable to You or any party on any legal theory for any damages
- whatsoever, including without limitation any general, special,
- incidental or consequential damages arising in connection to this
- license. Notwithstanding the foregoing two (2) sentences, if Creative
- Commons has expressly identified itself as the Licensor hereunder, it
- shall have all rights and obligations of Licensor.
-
- Except for the limited purpose of indicating to the public that the
- Work is licensed under the CCPL, Creative Commons does not authorize
- the use by either party of the trademark "Creative Commons" or any
- related trademark or logo of Creative Commons without the prior
- written consent of Creative Commons. Any permitted use will be in
- compliance with Creative Commons' then-current trademark usage
- guidelines, as may be published on its website or otherwise made
- available upon request from time to time. For the avoidance of doubt,
- this trademark restriction does not form part of the License.
-
- Creative Commons may be contacted at https://creativecommons.org/.
diff --git a/examples/training/textcat_example_data/CC_BY-SA-4.0.txt b/examples/training/textcat_example_data/CC_BY-SA-4.0.txt
deleted file mode 100644
index a73481c4b..000000000
--- a/examples/training/textcat_example_data/CC_BY-SA-4.0.txt
+++ /dev/null
@@ -1,428 +0,0 @@
-Attribution-ShareAlike 4.0 International
-
-=======================================================================
-
-Creative Commons Corporation ("Creative Commons") is not a law firm and
-does not provide legal services or legal advice. Distribution of
-Creative Commons public licenses does not create a lawyer-client or
-other relationship. Creative Commons makes its licenses and related
-information available on an "as-is" basis. Creative Commons gives no
-warranties regarding its licenses, any material licensed under their
-terms and conditions, or any related information. Creative Commons
-disclaims all liability for damages resulting from their use to the
-fullest extent possible.
-
-Using Creative Commons Public Licenses
-
-Creative Commons public licenses provide a standard set of terms and
-conditions that creators and other rights holders may use to share
-original works of authorship and other material subject to copyright
-and certain other rights specified in the public license below. The
-following considerations are for informational purposes only, are not
-exhaustive, and do not form part of our licenses.
-
- Considerations for licensors: Our public licenses are
- intended for use by those authorized to give the public
- permission to use material in ways otherwise restricted by
- copyright and certain other rights. Our licenses are
- irrevocable. Licensors should read and understand the terms
- and conditions of the license they choose before applying it.
- Licensors should also secure all rights necessary before
- applying our licenses so that the public can reuse the
- material as expected. Licensors should clearly mark any
- material not subject to the license. This includes other CC-
- licensed material, or material used under an exception or
- limitation to copyright. More considerations for licensors:
- wiki.creativecommons.org/Considerations_for_licensors
-
- Considerations for the public: By using one of our public
- licenses, a licensor grants the public permission to use the
- licensed material under specified terms and conditions. If
- the licensor's permission is not necessary for any reason--for
- example, because of any applicable exception or limitation to
- copyright--then that use is not regulated by the license. Our
- licenses grant only permissions under copyright and certain
- other rights that a licensor has authority to grant. Use of
- the licensed material may still be restricted for other
- reasons, including because others have copyright or other
- rights in the material. A licensor may make special requests,
- such as asking that all changes be marked or described.
- Although not required by our licenses, you are encouraged to
- respect those requests where reasonable. More considerations
- for the public:
- wiki.creativecommons.org/Considerations_for_licensees
-
-=======================================================================
-
-Creative Commons Attribution-ShareAlike 4.0 International Public
-License
-
-By exercising the Licensed Rights (defined below), You accept and agree
-to be bound by the terms and conditions of this Creative Commons
-Attribution-ShareAlike 4.0 International Public License ("Public
-License"). To the extent this Public License may be interpreted as a
-contract, You are granted the Licensed Rights in consideration of Your
-acceptance of these terms and conditions, and the Licensor grants You
-such rights in consideration of benefits the Licensor receives from
-making the Licensed Material available under these terms and
-conditions.
-
-
-Section 1 -- Definitions.
-
- a. Adapted Material means material subject to Copyright and Similar
- Rights that is derived from or based upon the Licensed Material
- and in which the Licensed Material is translated, altered,
- arranged, transformed, or otherwise modified in a manner requiring
- permission under the Copyright and Similar Rights held by the
- Licensor. For purposes of this Public License, where the Licensed
- Material is a musical work, performance, or sound recording,
- Adapted Material is always produced where the Licensed Material is
- synched in timed relation with a moving image.
-
- b. Adapter's License means the license You apply to Your Copyright
- and Similar Rights in Your contributions to Adapted Material in
- accordance with the terms and conditions of this Public License.
-
- c. BY-SA Compatible License means a license listed at
- creativecommons.org/compatiblelicenses, approved by Creative
- Commons as essentially the equivalent of this Public License.
-
- d. Copyright and Similar Rights means copyright and/or similar rights
- closely related to copyright including, without limitation,
- performance, broadcast, sound recording, and Sui Generis Database
- Rights, without regard to how the rights are labeled or
- categorized. For purposes of this Public License, the rights
- specified in Section 2(b)(1)-(2) are not Copyright and Similar
- Rights.
-
- e. Effective Technological Measures means those measures that, in the
- absence of proper authority, may not be circumvented under laws
- fulfilling obligations under Article 11 of the WIPO Copyright
- Treaty adopted on December 20, 1996, and/or similar international
- agreements.
-
- f. Exceptions and Limitations means fair use, fair dealing, and/or
- any other exception or limitation to Copyright and Similar Rights
- that applies to Your use of the Licensed Material.
-
- g. License Elements means the license attributes listed in the name
- of a Creative Commons Public License. The License Elements of this
- Public License are Attribution and ShareAlike.
-
- h. Licensed Material means the artistic or literary work, database,
- or other material to which the Licensor applied this Public
- License.
-
- i. Licensed Rights means the rights granted to You subject to the
- terms and conditions of this Public License, which are limited to
- all Copyright and Similar Rights that apply to Your use of the
- Licensed Material and that the Licensor has authority to license.
-
- j. Licensor means the individual(s) or entity(ies) granting rights
- under this Public License.
-
- k. Share means to provide material to the public by any means or
- process that requires permission under the Licensed Rights, such
- as reproduction, public display, public performance, distribution,
- dissemination, communication, or importation, and to make material
- available to the public including in ways that members of the
- public may access the material from a place and at a time
- individually chosen by them.
-
- l. Sui Generis Database Rights means rights other than copyright
- resulting from Directive 96/9/EC of the European Parliament and of
- the Council of 11 March 1996 on the legal protection of databases,
- as amended and/or succeeded, as well as other essentially
- equivalent rights anywhere in the world.
-
- m. You means the individual or entity exercising the Licensed Rights
- under this Public License. Your has a corresponding meaning.
-
-
-Section 2 -- Scope.
-
- a. License grant.
-
- 1. Subject to the terms and conditions of this Public License,
- the Licensor hereby grants You a worldwide, royalty-free,
- non-sublicensable, non-exclusive, irrevocable license to
- exercise the Licensed Rights in the Licensed Material to:
-
- a. reproduce and Share the Licensed Material, in whole or
- in part; and
-
- b. produce, reproduce, and Share Adapted Material.
-
- 2. Exceptions and Limitations. For the avoidance of doubt, where
- Exceptions and Limitations apply to Your use, this Public
- License does not apply, and You do not need to comply with
- its terms and conditions.
-
- 3. Term. The term of this Public License is specified in Section
- 6(a).
-
- 4. Media and formats; technical modifications allowed. The
- Licensor authorizes You to exercise the Licensed Rights in
- all media and formats whether now known or hereafter created,
- and to make technical modifications necessary to do so. The
- Licensor waives and/or agrees not to assert any right or
- authority to forbid You from making technical modifications
- necessary to exercise the Licensed Rights, including
- technical modifications necessary to circumvent Effective
- Technological Measures. For purposes of this Public License,
- simply making modifications authorized by this Section 2(a)
- (4) never produces Adapted Material.
-
- 5. Downstream recipients.
-
- a. Offer from the Licensor -- Licensed Material. Every
- recipient of the Licensed Material automatically
- receives an offer from the Licensor to exercise the
- Licensed Rights under the terms and conditions of this
- Public License.
-
- b. Additional offer from the Licensor -- Adapted Material.
- Every recipient of Adapted Material from You
- automatically receives an offer from the Licensor to
- exercise the Licensed Rights in the Adapted Material
- under the conditions of the Adapter's License You apply.
-
- c. No downstream restrictions. You may not offer or impose
- any additional or different terms or conditions on, or
- apply any Effective Technological Measures to, the
- Licensed Material if doing so restricts exercise of the
- Licensed Rights by any recipient of the Licensed
- Material.
-
- 6. No endorsement. Nothing in this Public License constitutes or
- may be construed as permission to assert or imply that You
- are, or that Your use of the Licensed Material is, connected
- with, or sponsored, endorsed, or granted official status by,
- the Licensor or others designated to receive attribution as
- provided in Section 3(a)(1)(A)(i).
-
- b. Other rights.
-
- 1. Moral rights, such as the right of integrity, are not
- licensed under this Public License, nor are publicity,
- privacy, and/or other similar personality rights; however, to
- the extent possible, the Licensor waives and/or agrees not to
- assert any such rights held by the Licensor to the limited
- extent necessary to allow You to exercise the Licensed
- Rights, but not otherwise.
-
- 2. Patent and trademark rights are not licensed under this
- Public License.
-
- 3. To the extent possible, the Licensor waives any right to
- collect royalties from You for the exercise of the Licensed
- Rights, whether directly or through a collecting society
- under any voluntary or waivable statutory or compulsory
- licensing scheme. In all other cases the Licensor expressly
- reserves any right to collect such royalties.
-
-
-Section 3 -- License Conditions.
-
-Your exercise of the Licensed Rights is expressly made subject to the
-following conditions.
-
- a. Attribution.
-
- 1. If You Share the Licensed Material (including in modified
- form), You must:
-
- a. retain the following if it is supplied by the Licensor
- with the Licensed Material:
-
- i. identification of the creator(s) of the Licensed
- Material and any others designated to receive
- attribution, in any reasonable manner requested by
- the Licensor (including by pseudonym if
- designated);
-
- ii. a copyright notice;
-
- iii. a notice that refers to this Public License;
-
- iv. a notice that refers to the disclaimer of
- warranties;
-
- v. a URI or hyperlink to the Licensed Material to the
- extent reasonably practicable;
-
- b. indicate if You modified the Licensed Material and
- retain an indication of any previous modifications; and
-
- c. indicate the Licensed Material is licensed under this
- Public License, and include the text of, or the URI or
- hyperlink to, this Public License.
-
- 2. You may satisfy the conditions in Section 3(a)(1) in any
- reasonable manner based on the medium, means, and context in
- which You Share the Licensed Material. For example, it may be
- reasonable to satisfy the conditions by providing a URI or
- hyperlink to a resource that includes the required
- information.
-
- 3. If requested by the Licensor, You must remove any of the
- information required by Section 3(a)(1)(A) to the extent
- reasonably practicable.
-
- b. ShareAlike.
-
- In addition to the conditions in Section 3(a), if You Share
- Adapted Material You produce, the following conditions also apply.
-
- 1. The Adapter's License You apply must be a Creative Commons
- license with the same License Elements, this version or
- later, or a BY-SA Compatible License.
-
- 2. You must include the text of, or the URI or hyperlink to, the
- Adapter's License You apply. You may satisfy this condition
- in any reasonable manner based on the medium, means, and
- context in which You Share Adapted Material.
-
- 3. You may not offer or impose any additional or different terms
- or conditions on, or apply any Effective Technological
- Measures to, Adapted Material that restrict exercise of the
- rights granted under the Adapter's License You apply.
-
-
-Section 4 -- Sui Generis Database Rights.
-
-Where the Licensed Rights include Sui Generis Database Rights that
-apply to Your use of the Licensed Material:
-
- a. for the avoidance of doubt, Section 2(a)(1) grants You the right
- to extract, reuse, reproduce, and Share all or a substantial
- portion of the contents of the database;
-
- b. if You include all or a substantial portion of the database
- contents in a database in which You have Sui Generis Database
- Rights, then the database in which You have Sui Generis Database
-     Rights (but not its individual contents) is Adapted Material,
-     including for purposes of Section 3(b); and
-
-  c. You must comply with the conditions in Section 3(a) if You Share
- all or a substantial portion of the contents of the database.
-
-For the avoidance of doubt, this Section 4 supplements and does not
-replace Your obligations under this Public License where the Licensed
-Rights include other Copyright and Similar Rights.
-
-
-Section 5 -- Disclaimer of Warranties and Limitation of Liability.
-
- a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
- EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
- AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
- ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
- IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
- WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
- PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
- ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
- KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
- ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.
-
- b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
- TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
- NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
- INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
- COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
- USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
- ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
- DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
- IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.
-
- c. The disclaimer of warranties and limitation of liability provided
- above shall be interpreted in a manner that, to the extent
- possible, most closely approximates an absolute disclaimer and
- waiver of all liability.
-
-
-Section 6 -- Term and Termination.
-
- a. This Public License applies for the term of the Copyright and
- Similar Rights licensed here. However, if You fail to comply with
- this Public License, then Your rights under this Public License
- terminate automatically.
-
- b. Where Your right to use the Licensed Material has terminated under
- Section 6(a), it reinstates:
-
- 1. automatically as of the date the violation is cured, provided
- it is cured within 30 days of Your discovery of the
- violation; or
-
- 2. upon express reinstatement by the Licensor.
-
- For the avoidance of doubt, this Section 6(b) does not affect any
- right the Licensor may have to seek remedies for Your violations
- of this Public License.
-
- c. For the avoidance of doubt, the Licensor may also offer the
- Licensed Material under separate terms or conditions or stop
- distributing the Licensed Material at any time; however, doing so
- will not terminate this Public License.
-
- d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
- License.
-
-
-Section 7 -- Other Terms and Conditions.
-
- a. The Licensor shall not be bound by any additional or different
- terms or conditions communicated by You unless expressly agreed.
-
- b. Any arrangements, understandings, or agreements regarding the
- Licensed Material not stated herein are separate from and
- independent of the terms and conditions of this Public License.
-
-
-Section 8 -- Interpretation.
-
- a. For the avoidance of doubt, this Public License does not, and
- shall not be interpreted to, reduce, limit, restrict, or impose
- conditions on any use of the Licensed Material that could lawfully
- be made without permission under this Public License.
-
- b. To the extent possible, if any provision of this Public License is
- deemed unenforceable, it shall be automatically reformed to the
- minimum extent necessary to make it enforceable. If the provision
- cannot be reformed, it shall be severed from this Public License
- without affecting the enforceability of the remaining terms and
- conditions.
-
- c. No term or condition of this Public License will be waived and no
- failure to comply consented to unless expressly agreed to by the
- Licensor.
-
- d. Nothing in this Public License constitutes or may be interpreted
- as a limitation upon, or waiver of, any privileges and immunities
- that apply to the Licensor or You, including from the legal
- processes of any jurisdiction or authority.
-
-
-=======================================================================
-
-Creative Commons is not a party to its public
-licenses. Notwithstanding, Creative Commons may elect to apply one of
-its public licenses to material it publishes and in those instances
-will be considered the "Licensor." The text of the Creative Commons
-public licenses is dedicated to the public domain under the CC0 Public
-Domain Dedication. Except for the limited purpose of indicating that
-material is shared under a Creative Commons public license or as
-otherwise permitted by the Creative Commons policies published at
-creativecommons.org/policies, Creative Commons does not authorize the
-use of the trademark "Creative Commons" or any other trademark or logo
-of Creative Commons without its prior written consent including,
-without limitation, in connection with any unauthorized modifications
-to any of its public licenses or any other arrangements,
-understandings, or agreements concerning use of licensed material. For
-the avoidance of doubt, this paragraph does not form part of the
-public licenses.
-
-Creative Commons may be contacted at creativecommons.org.
-
diff --git a/examples/training/textcat_example_data/README.md b/examples/training/textcat_example_data/README.md
deleted file mode 100644
index 1165f0293..000000000
--- a/examples/training/textcat_example_data/README.md
+++ /dev/null
@@ -1,34 +0,0 @@
-## Examples of textcat training data
-
-The spaCy JSON training files were generated from the JSONL files with:
-
-```
-python textcatjsonl_to_trainjson.py -m en file.jsonl .
-```
-
-`cooking.json` is an example with two mutually exclusive labels:
-
-* `baking`
-* `not_baking`
-
-`jigsaw-toxic-comment.json` is an example with multiple labels per instance:
-
-* `insult`
-* `obscene`
-* `severe_toxic`
-* `toxic`
-
-### Data Sources
-
-* `cooking.jsonl`: https://cooking.stackexchange.com. The meta IDs link to the
- original question as `https://cooking.stackexchange.com/questions/ID`, e.g.,
- `https://cooking.stackexchange.com/questions/2` for the first instance.
-* `jigsaw-toxic-comment.jsonl`: [Jigsaw Toxic Comments Classification
- Challenge](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge)
-
-### Data Licenses
-
-* `cooking.jsonl`: CC BY-SA 4.0 ([`CC_BY-SA-4.0.txt`](CC_BY-SA-4.0.txt))
-* `jigsaw-toxic-comment.jsonl`:
- * text: CC BY-SA 3.0 ([`CC_BY-SA-3.0.txt`](CC_BY-SA-3.0.txt))
- * annotation: CC0 ([`CC0.txt`](CC0.txt))
diff --git a/examples/training/textcat_example_data/cooking.json b/examples/training/textcat_example_data/cooking.json
deleted file mode 100644
index 4bad4db79..000000000
--- a/examples/training/textcat_example_data/cooking.json
+++ /dev/null
@@ -1,3487 +0,0 @@
-[
- {
- "id":0,
- "paragraphs":[
- {
- "raw":"How should I cook bacon in an oven?\nI've heard of people cooking bacon in an oven by laying the strips out on a cookie sheet. When using this method, how long should I cook the bacon for, and at what temperature?\n",
- "sentences":[
- {
- "tokens":[
- {
- "id":0,
- "orth":"How",
- "ner":"O"
- },
- {
- "id":1,
- "orth":"should",
- "ner":"O"
- },
- {
- "id":2,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":3,
- "orth":"cook",
- "ner":"O"
- },
- {
- "id":4,
- "orth":"bacon",
- "ner":"O"
- },
- {
- "id":5,
- "orth":"in",
- "ner":"O"
- },
- {
- "id":6,
- "orth":"an",
- "ner":"O"
- },
- {
- "id":7,
- "orth":"oven",
- "ner":"O"
- },
- {
- "id":8,
- "orth":"?",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":9,
- "orth":"\n",
- "ner":"O"
- },
- {
- "id":10,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":11,
- "orth":"'ve",
- "ner":"O"
- },
- {
- "id":12,
- "orth":"heard",
- "ner":"O"
- },
- {
- "id":13,
- "orth":"of",
- "ner":"O"
- },
- {
- "id":14,
- "orth":"people",
- "ner":"O"
- },
- {
- "id":15,
- "orth":"cooking",
- "ner":"O"
- },
- {
- "id":16,
- "orth":"bacon",
- "ner":"O"
- },
- {
- "id":17,
- "orth":"in",
- "ner":"O"
- },
- {
- "id":18,
- "orth":"an",
- "ner":"O"
- },
- {
- "id":19,
- "orth":"oven",
- "ner":"O"
- },
- {
- "id":20,
- "orth":"by",
- "ner":"O"
- },
- {
- "id":21,
- "orth":"laying",
- "ner":"O"
- },
- {
- "id":22,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":23,
- "orth":"strips",
- "ner":"O"
- },
- {
- "id":24,
- "orth":"out",
- "ner":"O"
- },
- {
- "id":25,
- "orth":"on",
- "ner":"O"
- },
- {
- "id":26,
- "orth":"a",
- "ner":"O"
- },
- {
- "id":27,
- "orth":"cookie",
- "ner":"O"
- },
- {
- "id":28,
- "orth":"sheet",
- "ner":"O"
- },
- {
- "id":29,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":30,
- "orth":"When",
- "ner":"O"
- },
- {
- "id":31,
- "orth":"using",
- "ner":"O"
- },
- {
- "id":32,
- "orth":"this",
- "ner":"O"
- },
- {
- "id":33,
- "orth":"method",
- "ner":"O"
- },
- {
- "id":34,
- "orth":",",
- "ner":"O"
- },
- {
- "id":35,
- "orth":"how",
- "ner":"O"
- },
- {
- "id":36,
- "orth":"long",
- "ner":"O"
- },
- {
- "id":37,
- "orth":"should",
- "ner":"O"
- },
- {
- "id":38,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":39,
- "orth":"cook",
- "ner":"O"
- },
- {
- "id":40,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":41,
- "orth":"bacon",
- "ner":"O"
- },
- {
- "id":42,
- "orth":"for",
- "ner":"O"
- },
- {
- "id":43,
- "orth":",",
- "ner":"O"
- },
- {
- "id":44,
- "orth":"and",
- "ner":"O"
- },
- {
- "id":45,
- "orth":"at",
- "ner":"O"
- },
- {
- "id":46,
- "orth":"what",
- "ner":"O"
- },
- {
- "id":47,
- "orth":"temperature",
- "ner":"O"
- },
- {
- "id":48,
- "orth":"?",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":49,
- "orth":"\n",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- }
- ],
- "cats":[
- {
- "label":"baking",
- "value":0.0
- },
- {
- "label":"not_baking",
- "value":1.0
- }
- ]
- },
- {
- "raw":"What is the difference between white and brown eggs?\nI always use brown extra large eggs, but I can't honestly say why I do this other than habit at this point. Are there any distinct advantages or disadvantages like flavor, shelf life, etc?\n",
- "sentences":[
- {
- "tokens":[
- {
- "id":0,
- "orth":"What",
- "ner":"O"
- },
- {
- "id":1,
- "orth":"is",
- "ner":"O"
- },
- {
- "id":2,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":3,
- "orth":"difference",
- "ner":"O"
- },
- {
- "id":4,
- "orth":"between",
- "ner":"O"
- },
- {
- "id":5,
- "orth":"white",
- "ner":"O"
- },
- {
- "id":6,
- "orth":"and",
- "ner":"O"
- },
- {
- "id":7,
- "orth":"brown",
- "ner":"O"
- },
- {
- "id":8,
- "orth":"eggs",
- "ner":"O"
- },
- {
- "id":9,
- "orth":"?",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":10,
- "orth":"\n",
- "ner":"O"
- },
- {
- "id":11,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":12,
- "orth":"always",
- "ner":"O"
- },
- {
- "id":13,
- "orth":"use",
- "ner":"O"
- },
- {
- "id":14,
- "orth":"brown",
- "ner":"O"
- },
- {
- "id":15,
- "orth":"extra",
- "ner":"O"
- },
- {
- "id":16,
- "orth":"large",
- "ner":"O"
- },
- {
- "id":17,
- "orth":"eggs",
- "ner":"O"
- },
- {
- "id":18,
- "orth":",",
- "ner":"O"
- },
- {
- "id":19,
- "orth":"but",
- "ner":"O"
- },
- {
- "id":20,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":21,
- "orth":"ca",
- "ner":"O"
- },
- {
- "id":22,
- "orth":"n't",
- "ner":"O"
- },
- {
- "id":23,
- "orth":"honestly",
- "ner":"O"
- },
- {
- "id":24,
- "orth":"say",
- "ner":"O"
- },
- {
- "id":25,
- "orth":"why",
- "ner":"O"
- },
- {
- "id":26,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":27,
- "orth":"do",
- "ner":"O"
- },
- {
- "id":28,
- "orth":"this",
- "ner":"O"
- },
- {
- "id":29,
- "orth":"other",
- "ner":"O"
- },
- {
- "id":30,
- "orth":"than",
- "ner":"O"
- },
- {
- "id":31,
- "orth":"habit",
- "ner":"O"
- },
- {
- "id":32,
- "orth":"at",
- "ner":"O"
- },
- {
- "id":33,
- "orth":"this",
- "ner":"O"
- },
- {
- "id":34,
- "orth":"point",
- "ner":"O"
- },
- {
- "id":35,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":36,
- "orth":"Are",
- "ner":"O"
- },
- {
- "id":37,
- "orth":"there",
- "ner":"O"
- },
- {
- "id":38,
- "orth":"any",
- "ner":"O"
- },
- {
- "id":39,
- "orth":"distinct",
- "ner":"O"
- },
- {
- "id":40,
- "orth":"advantages",
- "ner":"O"
- },
- {
- "id":41,
- "orth":"or",
- "ner":"O"
- },
- {
- "id":42,
- "orth":"disadvantages",
- "ner":"O"
- },
- {
- "id":43,
- "orth":"like",
- "ner":"O"
- },
- {
- "id":44,
- "orth":"flavor",
- "ner":"O"
- },
- {
- "id":45,
- "orth":",",
- "ner":"O"
- },
- {
- "id":46,
- "orth":"shelf",
- "ner":"O"
- },
- {
- "id":47,
- "orth":"life",
- "ner":"O"
- },
- {
- "id":48,
- "orth":",",
- "ner":"O"
- },
- {
- "id":49,
- "orth":"etc",
- "ner":"O"
- },
- {
- "id":50,
- "orth":"?",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":51,
- "orth":"\n",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- }
- ],
- "cats":[
- {
- "label":"baking",
- "value":0.0
- },
- {
- "label":"not_baking",
- "value":1.0
- }
- ]
- },
- {
- "raw":"What is the difference between baking soda and baking powder?\nAnd can I use one in place of the other in certain recipes?\n",
- "sentences":[
- {
- "tokens":[
- {
- "id":0,
- "orth":"What",
- "ner":"O"
- },
- {
- "id":1,
- "orth":"is",
- "ner":"O"
- },
- {
- "id":2,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":3,
- "orth":"difference",
- "ner":"O"
- },
- {
- "id":4,
- "orth":"between",
- "ner":"O"
- },
- {
- "id":5,
- "orth":"baking",
- "ner":"O"
- },
- {
- "id":6,
- "orth":"soda",
- "ner":"O"
- },
- {
- "id":7,
- "orth":"and",
- "ner":"O"
- },
- {
- "id":8,
- "orth":"baking",
- "ner":"O"
- },
- {
- "id":9,
- "orth":"powder",
- "ner":"O"
- },
- {
- "id":10,
- "orth":"?",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":11,
- "orth":"\n",
- "ner":"O"
- },
- {
- "id":12,
- "orth":"And",
- "ner":"O"
- },
- {
- "id":13,
- "orth":"can",
- "ner":"O"
- },
- {
- "id":14,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":15,
- "orth":"use",
- "ner":"O"
- },
- {
- "id":16,
- "orth":"one",
- "ner":"O"
- },
- {
- "id":17,
- "orth":"in",
- "ner":"O"
- },
- {
- "id":18,
- "orth":"place",
- "ner":"O"
- },
- {
- "id":19,
- "orth":"of",
- "ner":"O"
- },
- {
- "id":20,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":21,
- "orth":"other",
- "ner":"O"
- },
- {
- "id":22,
- "orth":"in",
- "ner":"O"
- },
- {
- "id":23,
- "orth":"certain",
- "ner":"O"
- },
- {
- "id":24,
- "orth":"recipes",
- "ner":"O"
- },
- {
- "id":25,
- "orth":"?",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":26,
- "orth":"\n",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- }
- ],
- "cats":[
- {
- "label":"baking",
- "value":0.0
- },
- {
- "label":"not_baking",
- "value":1.0
- }
- ]
- },
- {
- "raw":"In a tomato sauce recipe, how can I cut the acidity?\nIt seems that every time I make a tomato sauce for pasta, the sauce is a little bit too acid for my taste. I've tried using sugar or sodium bicarbonate, but I'm not satisfied with the results.\n",
- "sentences":[
- {
- "tokens":[
- {
- "id":0,
- "orth":"In",
- "ner":"O"
- },
- {
- "id":1,
- "orth":"a",
- "ner":"O"
- },
- {
- "id":2,
- "orth":"tomato",
- "ner":"O"
- },
- {
- "id":3,
- "orth":"sauce",
- "ner":"O"
- },
- {
- "id":4,
- "orth":"recipe",
- "ner":"O"
- },
- {
- "id":5,
- "orth":",",
- "ner":"O"
- },
- {
- "id":6,
- "orth":"how",
- "ner":"O"
- },
- {
- "id":7,
- "orth":"can",
- "ner":"O"
- },
- {
- "id":8,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":9,
- "orth":"cut",
- "ner":"O"
- },
- {
- "id":10,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":11,
- "orth":"acidity",
- "ner":"O"
- },
- {
- "id":12,
- "orth":"?",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":13,
- "orth":"\n",
- "ner":"O"
- },
- {
- "id":14,
- "orth":"It",
- "ner":"O"
- },
- {
- "id":15,
- "orth":"seems",
- "ner":"O"
- },
- {
- "id":16,
- "orth":"that",
- "ner":"O"
- },
- {
- "id":17,
- "orth":"every",
- "ner":"O"
- },
- {
- "id":18,
- "orth":"time",
- "ner":"O"
- },
- {
- "id":19,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":20,
- "orth":"make",
- "ner":"O"
- },
- {
- "id":21,
- "orth":"a",
- "ner":"O"
- },
- {
- "id":22,
- "orth":"tomato",
- "ner":"O"
- },
- {
- "id":23,
- "orth":"sauce",
- "ner":"O"
- },
- {
- "id":24,
- "orth":"for",
- "ner":"O"
- },
- {
- "id":25,
- "orth":"pasta",
- "ner":"O"
- },
- {
- "id":26,
- "orth":",",
- "ner":"O"
- },
- {
- "id":27,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":28,
- "orth":"sauce",
- "ner":"O"
- },
- {
- "id":29,
- "orth":"is",
- "ner":"O"
- },
- {
- "id":30,
- "orth":"a",
- "ner":"O"
- },
- {
- "id":31,
- "orth":"little",
- "ner":"O"
- },
- {
- "id":32,
- "orth":"bit",
- "ner":"O"
- },
- {
- "id":33,
- "orth":"too",
- "ner":"O"
- },
- {
- "id":34,
- "orth":"acid",
- "ner":"O"
- },
- {
- "id":35,
- "orth":"for",
- "ner":"O"
- },
- {
- "id":36,
- "orth":"my",
- "ner":"O"
- },
- {
- "id":37,
- "orth":"taste",
- "ner":"O"
- },
- {
- "id":38,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":39,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":40,
- "orth":"'ve",
- "ner":"O"
- },
- {
- "id":41,
- "orth":"tried",
- "ner":"O"
- },
- {
- "id":42,
- "orth":"using",
- "ner":"O"
- },
- {
- "id":43,
- "orth":"sugar",
- "ner":"O"
- },
- {
- "id":44,
- "orth":"or",
- "ner":"O"
- },
- {
- "id":45,
- "orth":"sodium",
- "ner":"O"
- },
- {
- "id":46,
- "orth":"bicarbonate",
- "ner":"O"
- },
- {
- "id":47,
- "orth":",",
- "ner":"O"
- },
- {
- "id":48,
- "orth":"but",
- "ner":"O"
- },
- {
- "id":49,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":50,
- "orth":"'m",
- "ner":"O"
- },
- {
- "id":51,
- "orth":"not",
- "ner":"O"
- },
- {
- "id":52,
- "orth":"satisfied",
- "ner":"O"
- },
- {
- "id":53,
- "orth":"with",
- "ner":"O"
- },
- {
- "id":54,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":55,
- "orth":"results",
- "ner":"O"
- },
- {
- "id":56,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":57,
- "orth":"\n",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- }
- ],
- "cats":[
- {
- "label":"baking",
- "value":0.0
- },
- {
- "label":"not_baking",
- "value":1.0
- }
- ]
- },
- {
- "raw":"What ingredients (available in specific regions) can I substitute for parsley?\nI have a recipe that calls for fresh parsley. I have substituted other fresh herbs for their dried equivalents but I don't have fresh or dried parsley. Is there something else (ex another dried herb) that I can use instead of parsley?\nI know it is used mainly for looks rather than taste but I have a pasta recipe that calls for 2 tablespoons of parsley in the sauce and then another 2 tablespoons on top when it is done. I know the parsley on top is more for looks but there must be something about the taste otherwise it would call for parsley within the sauce as well.\nI would especially like to hear about substitutes available in Southeast Asia and other parts of the world where the obvious answers (such as cilantro) are not widely available.\n",
- "sentences":[
- {
- "tokens":[
- {
- "id":0,
- "orth":"What",
- "ner":"O"
- },
- {
- "id":1,
- "orth":"ingredients",
- "ner":"O"
- },
- {
- "id":2,
- "orth":"(",
- "ner":"O"
- },
- {
- "id":3,
- "orth":"available",
- "ner":"O"
- },
- {
- "id":4,
- "orth":"in",
- "ner":"O"
- },
- {
- "id":5,
- "orth":"specific",
- "ner":"O"
- },
- {
- "id":6,
- "orth":"regions",
- "ner":"O"
- },
- {
- "id":7,
- "orth":")",
- "ner":"O"
- },
- {
- "id":8,
- "orth":"can",
- "ner":"O"
- },
- {
- "id":9,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":10,
- "orth":"substitute",
- "ner":"O"
- },
- {
- "id":11,
- "orth":"for",
- "ner":"O"
- },
- {
- "id":12,
- "orth":"parsley",
- "ner":"O"
- },
- {
- "id":13,
- "orth":"?",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":14,
- "orth":"\n",
- "ner":"O"
- },
- {
- "id":15,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":16,
- "orth":"have",
- "ner":"O"
- },
- {
- "id":17,
- "orth":"a",
- "ner":"O"
- },
- {
- "id":18,
- "orth":"recipe",
- "ner":"O"
- },
- {
- "id":19,
- "orth":"that",
- "ner":"O"
- },
- {
- "id":20,
- "orth":"calls",
- "ner":"O"
- },
- {
- "id":21,
- "orth":"for",
- "ner":"O"
- },
- {
- "id":22,
- "orth":"fresh",
- "ner":"O"
- },
- {
- "id":23,
- "orth":"parsley",
- "ner":"O"
- },
- {
- "id":24,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":25,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":26,
- "orth":"have",
- "ner":"O"
- },
- {
- "id":27,
- "orth":"substituted",
- "ner":"O"
- },
- {
- "id":28,
- "orth":"other",
- "ner":"O"
- },
- {
- "id":29,
- "orth":"fresh",
- "ner":"O"
- },
- {
- "id":30,
- "orth":"herbs",
- "ner":"O"
- },
- {
- "id":31,
- "orth":"for",
- "ner":"O"
- },
- {
- "id":32,
- "orth":"their",
- "ner":"O"
- },
- {
- "id":33,
- "orth":"dried",
- "ner":"O"
- },
- {
- "id":34,
- "orth":"equivalents",
- "ner":"O"
- },
- {
- "id":35,
- "orth":"but",
- "ner":"O"
- },
- {
- "id":36,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":37,
- "orth":"do",
- "ner":"O"
- },
- {
- "id":38,
- "orth":"n't",
- "ner":"O"
- },
- {
- "id":39,
- "orth":"have",
- "ner":"O"
- },
- {
- "id":40,
- "orth":"fresh",
- "ner":"O"
- },
- {
- "id":41,
- "orth":"or",
- "ner":"O"
- },
- {
- "id":42,
- "orth":"dried",
- "ner":"O"
- },
- {
- "id":43,
- "orth":"parsley",
- "ner":"O"
- },
- {
- "id":44,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":45,
- "orth":"Is",
- "ner":"O"
- },
- {
- "id":46,
- "orth":"there",
- "ner":"O"
- },
- {
- "id":47,
- "orth":"something",
- "ner":"O"
- },
- {
- "id":48,
- "orth":"else",
- "ner":"O"
- },
- {
- "id":49,
- "orth":"(",
- "ner":"O"
- },
- {
- "id":50,
- "orth":"ex",
- "ner":"O"
- },
- {
- "id":51,
- "orth":"another",
- "ner":"O"
- },
- {
- "id":52,
- "orth":"dried",
- "ner":"O"
- },
- {
- "id":53,
- "orth":"herb",
- "ner":"O"
- },
- {
- "id":54,
- "orth":")",
- "ner":"O"
- },
- {
- "id":55,
- "orth":"that",
- "ner":"O"
- },
- {
- "id":56,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":57,
- "orth":"can",
- "ner":"O"
- },
- {
- "id":58,
- "orth":"use",
- "ner":"O"
- },
- {
- "id":59,
- "orth":"instead",
- "ner":"O"
- },
- {
- "id":60,
- "orth":"of",
- "ner":"O"
- },
- {
- "id":61,
- "orth":"parsley",
- "ner":"O"
- },
- {
- "id":62,
- "orth":"?",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":63,
- "orth":"\n",
- "ner":"O"
- },
- {
- "id":64,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":65,
- "orth":"know",
- "ner":"O"
- },
- {
- "id":66,
- "orth":"it",
- "ner":"O"
- },
- {
- "id":67,
- "orth":"is",
- "ner":"O"
- },
- {
- "id":68,
- "orth":"used",
- "ner":"O"
- },
- {
- "id":69,
- "orth":"mainly",
- "ner":"O"
- },
- {
- "id":70,
- "orth":"for",
- "ner":"O"
- },
- {
- "id":71,
- "orth":"looks",
- "ner":"O"
- },
- {
- "id":72,
- "orth":"rather",
- "ner":"O"
- },
- {
- "id":73,
- "orth":"than",
- "ner":"O"
- },
- {
- "id":74,
- "orth":"taste",
- "ner":"O"
- },
- {
- "id":75,
- "orth":"but",
- "ner":"O"
- },
- {
- "id":76,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":77,
- "orth":"have",
- "ner":"O"
- },
- {
- "id":78,
- "orth":"a",
- "ner":"O"
- },
- {
- "id":79,
- "orth":"pasta",
- "ner":"O"
- },
- {
- "id":80,
- "orth":"recipe",
- "ner":"O"
- },
- {
- "id":81,
- "orth":"that",
- "ner":"O"
- },
- {
- "id":82,
- "orth":"calls",
- "ner":"O"
- },
- {
- "id":83,
- "orth":"for",
- "ner":"O"
- },
- {
- "id":84,
- "orth":"2",
- "ner":"O"
- },
- {
- "id":85,
- "orth":"tablespoons",
- "ner":"O"
- },
- {
- "id":86,
- "orth":"of",
- "ner":"O"
- },
- {
- "id":87,
- "orth":"parsley",
- "ner":"O"
- },
- {
- "id":88,
- "orth":"in",
- "ner":"O"
- },
- {
- "id":89,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":90,
- "orth":"sauce",
- "ner":"O"
- },
- {
- "id":91,
- "orth":"and",
- "ner":"O"
- },
- {
- "id":92,
- "orth":"then",
- "ner":"O"
- },
- {
- "id":93,
- "orth":"another",
- "ner":"O"
- },
- {
- "id":94,
- "orth":"2",
- "ner":"O"
- },
- {
- "id":95,
- "orth":"tablespoons",
- "ner":"O"
- },
- {
- "id":96,
- "orth":"on",
- "ner":"O"
- },
- {
- "id":97,
- "orth":"top",
- "ner":"O"
- },
- {
- "id":98,
- "orth":"when",
- "ner":"O"
- },
- {
- "id":99,
- "orth":"it",
- "ner":"O"
- },
- {
- "id":100,
- "orth":"is",
- "ner":"O"
- },
- {
- "id":101,
- "orth":"done",
- "ner":"O"
- },
- {
- "id":102,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":103,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":104,
- "orth":"know",
- "ner":"O"
- },
- {
- "id":105,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":106,
- "orth":"parsley",
- "ner":"O"
- },
- {
- "id":107,
- "orth":"on",
- "ner":"O"
- },
- {
- "id":108,
- "orth":"top",
- "ner":"O"
- },
- {
- "id":109,
- "orth":"is",
- "ner":"O"
- },
- {
- "id":110,
- "orth":"more",
- "ner":"O"
- },
- {
- "id":111,
- "orth":"for",
- "ner":"O"
- },
- {
- "id":112,
- "orth":"looks",
- "ner":"O"
- },
- {
- "id":113,
- "orth":"but",
- "ner":"O"
- },
- {
- "id":114,
- "orth":"there",
- "ner":"O"
- },
- {
- "id":115,
- "orth":"must",
- "ner":"O"
- },
- {
- "id":116,
- "orth":"be",
- "ner":"O"
- },
- {
- "id":117,
- "orth":"something",
- "ner":"O"
- },
- {
- "id":118,
- "orth":"about",
- "ner":"O"
- },
- {
- "id":119,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":120,
- "orth":"taste",
- "ner":"O"
- },
- {
- "id":121,
- "orth":"otherwise",
- "ner":"O"
- },
- {
- "id":122,
- "orth":"it",
- "ner":"O"
- },
- {
- "id":123,
- "orth":"would",
- "ner":"O"
- },
- {
- "id":124,
- "orth":"call",
- "ner":"O"
- },
- {
- "id":125,
- "orth":"for",
- "ner":"O"
- },
- {
- "id":126,
- "orth":"parsley",
- "ner":"O"
- },
- {
- "id":127,
- "orth":"within",
- "ner":"O"
- },
- {
- "id":128,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":129,
- "orth":"sauce",
- "ner":"O"
- },
- {
- "id":130,
- "orth":"as",
- "ner":"O"
- },
- {
- "id":131,
- "orth":"well",
- "ner":"O"
- },
- {
- "id":132,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":133,
- "orth":"\n",
- "ner":"O"
- },
- {
- "id":134,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":135,
- "orth":"would",
- "ner":"O"
- },
- {
- "id":136,
- "orth":"especially",
- "ner":"O"
- },
- {
- "id":137,
- "orth":"like",
- "ner":"O"
- },
- {
- "id":138,
- "orth":"to",
- "ner":"O"
- },
- {
- "id":139,
- "orth":"hear",
- "ner":"O"
- },
- {
- "id":140,
- "orth":"about",
- "ner":"O"
- },
- {
- "id":141,
- "orth":"substitutes",
- "ner":"O"
- },
- {
- "id":142,
- "orth":"available",
- "ner":"O"
- },
- {
- "id":143,
- "orth":"in",
- "ner":"O"
- },
- {
- "id":144,
- "orth":"Southeast",
- "ner":"O"
- },
- {
- "id":145,
- "orth":"Asia",
- "ner":"O"
- },
- {
- "id":146,
- "orth":"and",
- "ner":"O"
- },
- {
- "id":147,
- "orth":"other",
- "ner":"O"
- },
- {
- "id":148,
- "orth":"parts",
- "ner":"O"
- },
- {
- "id":149,
- "orth":"of",
- "ner":"O"
- },
- {
- "id":150,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":151,
- "orth":"world",
- "ner":"O"
- },
- {
- "id":152,
- "orth":"where",
- "ner":"O"
- },
- {
- "id":153,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":154,
- "orth":"obvious",
- "ner":"O"
- },
- {
- "id":155,
- "orth":"answers",
- "ner":"O"
- },
- {
- "id":156,
- "orth":"(",
- "ner":"O"
- },
- {
- "id":157,
- "orth":"such",
- "ner":"O"
- },
- {
- "id":158,
- "orth":"as",
- "ner":"O"
- },
- {
- "id":159,
- "orth":"cilantro",
- "ner":"O"
- },
- {
- "id":160,
- "orth":")",
- "ner":"O"
- },
- {
- "id":161,
- "orth":"are",
- "ner":"O"
- },
- {
- "id":162,
- "orth":"not",
- "ner":"O"
- },
- {
- "id":163,
- "orth":"widely",
- "ner":"O"
- },
- {
- "id":164,
- "orth":"available",
- "ner":"O"
- },
- {
- "id":165,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":166,
- "orth":"\n",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- }
- ],
- "cats":[
- {
- "label":"baking",
- "value":0.0
- },
- {
- "label":"not_baking",
- "value":1.0
- }
- ]
- },
- {
- "raw":"What is the internal temperature a steak should be cooked to for Rare/Medium Rare/Medium/Well?\nI'd like to know when to take my steaks off the grill and please everybody.\n",
- "sentences":[
- {
- "tokens":[
- {
- "id":0,
- "orth":"What",
- "ner":"O"
- },
- {
- "id":1,
- "orth":"is",
- "ner":"O"
- },
- {
- "id":2,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":3,
- "orth":"internal",
- "ner":"O"
- },
- {
- "id":4,
- "orth":"temperature",
- "ner":"O"
- },
- {
- "id":5,
- "orth":"a",
- "ner":"O"
- },
- {
- "id":6,
- "orth":"steak",
- "ner":"O"
- },
- {
- "id":7,
- "orth":"should",
- "ner":"O"
- },
- {
- "id":8,
- "orth":"be",
- "ner":"O"
- },
- {
- "id":9,
- "orth":"cooked",
- "ner":"O"
- },
- {
- "id":10,
- "orth":"to",
- "ner":"O"
- },
- {
- "id":11,
- "orth":"for",
- "ner":"O"
- },
- {
- "id":12,
- "orth":"Rare",
- "ner":"O"
- },
- {
- "id":13,
- "orth":"/",
- "ner":"O"
- },
- {
- "id":14,
- "orth":"Medium",
- "ner":"O"
- },
- {
- "id":15,
- "orth":"Rare",
- "ner":"O"
- },
- {
- "id":16,
- "orth":"/",
- "ner":"O"
- },
- {
- "id":17,
- "orth":"Medium",
- "ner":"O"
- },
- {
- "id":18,
- "orth":"/",
- "ner":"O"
- },
- {
- "id":19,
- "orth":"Well",
- "ner":"O"
- },
- {
- "id":20,
- "orth":"?",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":21,
- "orth":"\n",
- "ner":"O"
- },
- {
- "id":22,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":23,
- "orth":"'d",
- "ner":"O"
- },
- {
- "id":24,
- "orth":"like",
- "ner":"O"
- },
- {
- "id":25,
- "orth":"to",
- "ner":"O"
- },
- {
- "id":26,
- "orth":"know",
- "ner":"O"
- },
- {
- "id":27,
- "orth":"when",
- "ner":"O"
- },
- {
- "id":28,
- "orth":"to",
- "ner":"O"
- },
- {
- "id":29,
- "orth":"take",
- "ner":"O"
- },
- {
- "id":30,
- "orth":"my",
- "ner":"O"
- },
- {
- "id":31,
- "orth":"steaks",
- "ner":"O"
- },
- {
- "id":32,
- "orth":"off",
- "ner":"O"
- },
- {
- "id":33,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":34,
- "orth":"grill",
- "ner":"O"
- },
- {
- "id":35,
- "orth":"and",
- "ner":"O"
- },
- {
- "id":36,
- "orth":"please",
- "ner":"O"
- },
- {
- "id":37,
- "orth":"everybody",
- "ner":"O"
- },
- {
- "id":38,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":39,
- "orth":"\n",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- }
- ],
- "cats":[
- {
- "label":"baking",
- "value":0.0
- },
- {
- "label":"not_baking",
- "value":1.0
- }
- ]
- },
- {
- "raw":"How should I poach an egg?\nWhat's the best method to poach an egg without it turning into an eggy soupy mess?\n",
- "sentences":[
- {
- "tokens":[
- {
- "id":0,
- "orth":"How",
- "ner":"O"
- },
- {
- "id":1,
- "orth":"should",
- "ner":"O"
- },
- {
- "id":2,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":3,
- "orth":"poach",
- "ner":"O"
- },
- {
- "id":4,
- "orth":"an",
- "ner":"O"
- },
- {
- "id":5,
- "orth":"egg",
- "ner":"O"
- },
- {
- "id":6,
- "orth":"?",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":7,
- "orth":"\n",
- "ner":"O"
- },
- {
- "id":8,
- "orth":"What",
- "ner":"O"
- },
- {
- "id":9,
- "orth":"'s",
- "ner":"O"
- },
- {
- "id":10,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":11,
- "orth":"best",
- "ner":"O"
- },
- {
- "id":12,
- "orth":"method",
- "ner":"O"
- },
- {
- "id":13,
- "orth":"to",
- "ner":"O"
- },
- {
- "id":14,
- "orth":"poach",
- "ner":"O"
- },
- {
- "id":15,
- "orth":"an",
- "ner":"O"
- },
- {
- "id":16,
- "orth":"egg",
- "ner":"O"
- },
- {
- "id":17,
- "orth":"without",
- "ner":"O"
- },
- {
- "id":18,
- "orth":"it",
- "ner":"O"
- },
- {
- "id":19,
- "orth":"turning",
- "ner":"O"
- },
- {
- "id":20,
- "orth":"into",
- "ner":"O"
- },
- {
- "id":21,
- "orth":"an",
- "ner":"O"
- },
- {
- "id":22,
- "orth":"eggy",
- "ner":"O"
- },
- {
- "id":23,
- "orth":"soupy",
- "ner":"O"
- },
- {
- "id":24,
- "orth":"mess",
- "ner":"O"
- },
- {
- "id":25,
- "orth":"?",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":26,
- "orth":"\n",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- }
- ],
- "cats":[
- {
- "label":"baking",
- "value":0.0
- },
- {
- "label":"not_baking",
- "value":1.0
- }
- ]
- },
- {
- "raw":"How can I make my Ice Cream \"creamier\"\nMy ice cream doesn't feel creamy enough. I got the recipe from Good Eats, and I can't tell if it's just the recipe or maybe that I'm just not getting my \"batter\" cold enough before I try to make it (I let it chill overnight in the refrigerator, but it doesn't always come out of the machine looking like \"soft serve\" as he said on the show - it's usually a little thinner).\nRecipe: http://www.foodnetwork.com/recipes/alton-brown/serious-vanilla-ice-cream-recipe/index.html\nThanks!\n",
- "sentences":[
- {
- "tokens":[
- {
- "id":0,
- "orth":"How",
- "ner":"O"
- },
- {
- "id":1,
- "orth":"can",
- "ner":"O"
- },
- {
- "id":2,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":3,
- "orth":"make",
- "ner":"O"
- },
- {
- "id":4,
- "orth":"my",
- "ner":"O"
- },
- {
- "id":5,
- "orth":"Ice",
- "ner":"O"
- },
- {
- "id":6,
- "orth":"Cream",
- "ner":"O"
- },
- {
- "id":7,
- "orth":"\"",
- "ner":"O"
- },
- {
- "id":8,
- "orth":"creamier",
- "ner":"O"
- },
- {
- "id":9,
- "orth":"\"",
- "ner":"O"
- },
- {
- "id":10,
- "orth":"\n",
- "ner":"O"
- },
- {
- "id":11,
- "orth":"My",
- "ner":"O"
- },
- {
- "id":12,
- "orth":"ice",
- "ner":"O"
- },
- {
- "id":13,
- "orth":"cream",
- "ner":"O"
- },
- {
- "id":14,
- "orth":"does",
- "ner":"O"
- },
- {
- "id":15,
- "orth":"n't",
- "ner":"O"
- },
- {
- "id":16,
- "orth":"feel",
- "ner":"O"
- },
- {
- "id":17,
- "orth":"creamy",
- "ner":"O"
- },
- {
- "id":18,
- "orth":"enough",
- "ner":"O"
- },
- {
- "id":19,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":20,
- "orth":" ",
- "ner":"O"
- },
- {
- "id":21,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":22,
- "orth":"got",
- "ner":"O"
- },
- {
- "id":23,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":24,
- "orth":"recipe",
- "ner":"O"
- },
- {
- "id":25,
- "orth":"from",
- "ner":"O"
- },
- {
- "id":26,
- "orth":"Good",
- "ner":"O"
- },
- {
- "id":27,
- "orth":"Eats",
- "ner":"O"
- },
- {
- "id":28,
- "orth":",",
- "ner":"O"
- },
- {
- "id":29,
- "orth":"and",
- "ner":"O"
- },
- {
- "id":30,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":31,
- "orth":"ca",
- "ner":"O"
- },
- {
- "id":32,
- "orth":"n't",
- "ner":"O"
- },
- {
- "id":33,
- "orth":"tell",
- "ner":"O"
- },
- {
- "id":34,
- "orth":"if",
- "ner":"O"
- },
- {
- "id":35,
- "orth":"it",
- "ner":"O"
- },
- {
- "id":36,
- "orth":"'s",
- "ner":"O"
- },
- {
- "id":37,
- "orth":"just",
- "ner":"O"
- },
- {
- "id":38,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":39,
- "orth":"recipe",
- "ner":"O"
- },
- {
- "id":40,
- "orth":"or",
- "ner":"O"
- },
- {
- "id":41,
- "orth":"maybe",
- "ner":"O"
- },
- {
- "id":42,
- "orth":"that",
- "ner":"O"
- },
- {
- "id":43,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":44,
- "orth":"'m",
- "ner":"O"
- },
- {
- "id":45,
- "orth":"just",
- "ner":"O"
- },
- {
- "id":46,
- "orth":"not",
- "ner":"O"
- },
- {
- "id":47,
- "orth":"getting",
- "ner":"O"
- },
- {
- "id":48,
- "orth":"my",
- "ner":"O"
- },
- {
- "id":49,
- "orth":"\"",
- "ner":"O"
- },
- {
- "id":50,
- "orth":"batter",
- "ner":"O"
- },
- {
- "id":51,
- "orth":"\"",
- "ner":"O"
- },
- {
- "id":52,
- "orth":"cold",
- "ner":"O"
- },
- {
- "id":53,
- "orth":"enough",
- "ner":"O"
- },
- {
- "id":54,
- "orth":"before",
- "ner":"O"
- },
- {
- "id":55,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":56,
- "orth":"try",
- "ner":"O"
- },
- {
- "id":57,
- "orth":"to",
- "ner":"O"
- },
- {
- "id":58,
- "orth":"make",
- "ner":"O"
- },
- {
- "id":59,
- "orth":"it",
- "ner":"O"
- },
- {
- "id":60,
- "orth":"(",
- "ner":"O"
- },
- {
- "id":61,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":62,
- "orth":"let",
- "ner":"O"
- },
- {
- "id":63,
- "orth":"it",
- "ner":"O"
- },
- {
- "id":64,
- "orth":"chill",
- "ner":"O"
- },
- {
- "id":65,
- "orth":"overnight",
- "ner":"O"
- },
- {
- "id":66,
- "orth":"in",
- "ner":"O"
- },
- {
- "id":67,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":68,
- "orth":"refrigerator",
- "ner":"O"
- },
- {
- "id":69,
- "orth":",",
- "ner":"O"
- },
- {
- "id":70,
- "orth":"but",
- "ner":"O"
- },
- {
- "id":71,
- "orth":"it",
- "ner":"O"
- },
- {
- "id":72,
- "orth":"does",
- "ner":"O"
- },
- {
- "id":73,
- "orth":"n't",
- "ner":"O"
- },
- {
- "id":74,
- "orth":"always",
- "ner":"O"
- },
- {
- "id":75,
- "orth":"come",
- "ner":"O"
- },
- {
- "id":76,
- "orth":"out",
- "ner":"O"
- },
- {
- "id":77,
- "orth":"of",
- "ner":"O"
- },
- {
- "id":78,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":79,
- "orth":"machine",
- "ner":"O"
- },
- {
- "id":80,
- "orth":"looking",
- "ner":"O"
- },
- {
- "id":81,
- "orth":"like",
- "ner":"O"
- },
- {
- "id":82,
- "orth":"\"",
- "ner":"O"
- },
- {
- "id":83,
- "orth":"soft",
- "ner":"O"
- },
- {
- "id":84,
- "orth":"serve",
- "ner":"O"
- },
- {
- "id":85,
- "orth":"\"",
- "ner":"O"
- },
- {
- "id":86,
- "orth":"as",
- "ner":"O"
- },
- {
- "id":87,
- "orth":"he",
- "ner":"O"
- },
- {
- "id":88,
- "orth":"said",
- "ner":"O"
- },
- {
- "id":89,
- "orth":"on",
- "ner":"O"
- },
- {
- "id":90,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":91,
- "orth":"show",
- "ner":"O"
- },
- {
- "id":92,
- "orth":"-",
- "ner":"O"
- },
- {
- "id":93,
- "orth":"it",
- "ner":"O"
- },
- {
- "id":94,
- "orth":"'s",
- "ner":"O"
- },
- {
- "id":95,
- "orth":"usually",
- "ner":"O"
- },
- {
- "id":96,
- "orth":"a",
- "ner":"O"
- },
- {
- "id":97,
- "orth":"little",
- "ner":"O"
- },
- {
- "id":98,
- "orth":"thinner",
- "ner":"O"
- },
- {
- "id":99,
- "orth":")",
- "ner":"O"
- },
- {
- "id":100,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":101,
- "orth":"\n",
- "ner":"O"
- },
- {
- "id":102,
- "orth":"Recipe",
- "ner":"O"
- },
- {
- "id":103,
- "orth":":",
- "ner":"O"
- },
- {
- "id":104,
- "orth":"http://www.foodnetwork.com/recipes/alton-brown/serious-vanilla-ice-cream-recipe/index.html",
- "ner":"O"
- },
- {
- "id":105,
- "orth":"\n",
- "ner":"O"
- },
- {
- "id":106,
- "orth":"Thanks",
- "ner":"O"
- },
- {
- "id":107,
- "orth":"!",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":108,
- "orth":"\n",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- }
- ],
- "cats":[
- {
- "label":"baking",
- "value":0.0
- },
- {
- "label":"not_baking",
- "value":1.0
- }
- ]
- },
- {
- "raw":"How long and at what temperature do the various parts of a chicken need to be cooked?\nI'm interested in baking thighs, legs, breasts and wings. How long do each of these items need to bake and at what temperature?\n",
- "sentences":[
- {
- "tokens":[
- {
- "id":0,
- "orth":"How",
- "ner":"O"
- },
- {
- "id":1,
- "orth":"long",
- "ner":"O"
- },
- {
- "id":2,
- "orth":"and",
- "ner":"O"
- },
- {
- "id":3,
- "orth":"at",
- "ner":"O"
- },
- {
- "id":4,
- "orth":"what",
- "ner":"O"
- },
- {
- "id":5,
- "orth":"temperature",
- "ner":"O"
- },
- {
- "id":6,
- "orth":"do",
- "ner":"O"
- },
- {
- "id":7,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":8,
- "orth":"various",
- "ner":"O"
- },
- {
- "id":9,
- "orth":"parts",
- "ner":"O"
- },
- {
- "id":10,
- "orth":"of",
- "ner":"O"
- },
- {
- "id":11,
- "orth":"a",
- "ner":"O"
- },
- {
- "id":12,
- "orth":"chicken",
- "ner":"O"
- },
- {
- "id":13,
- "orth":"need",
- "ner":"O"
- },
- {
- "id":14,
- "orth":"to",
- "ner":"O"
- },
- {
- "id":15,
- "orth":"be",
- "ner":"O"
- },
- {
- "id":16,
- "orth":"cooked",
- "ner":"O"
- },
- {
- "id":17,
- "orth":"?",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":18,
- "orth":"\n",
- "ner":"O"
- },
- {
- "id":19,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":20,
- "orth":"'m",
- "ner":"O"
- },
- {
- "id":21,
- "orth":"interested",
- "ner":"O"
- },
- {
- "id":22,
- "orth":"in",
- "ner":"O"
- },
- {
- "id":23,
- "orth":"baking",
- "ner":"O"
- },
- {
- "id":24,
- "orth":"thighs",
- "ner":"O"
- },
- {
- "id":25,
- "orth":",",
- "ner":"O"
- },
- {
- "id":26,
- "orth":"legs",
- "ner":"O"
- },
- {
- "id":27,
- "orth":",",
- "ner":"O"
- },
- {
- "id":28,
- "orth":"breasts",
- "ner":"O"
- },
- {
- "id":29,
- "orth":"and",
- "ner":"O"
- },
- {
- "id":30,
- "orth":"wings",
- "ner":"O"
- },
- {
- "id":31,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":32,
- "orth":" ",
- "ner":"O"
- },
- {
- "id":33,
- "orth":"How",
- "ner":"O"
- },
- {
- "id":34,
- "orth":"long",
- "ner":"O"
- },
- {
- "id":35,
- "orth":"do",
- "ner":"O"
- },
- {
- "id":36,
- "orth":"each",
- "ner":"O"
- },
- {
- "id":37,
- "orth":"of",
- "ner":"O"
- },
- {
- "id":38,
- "orth":"these",
- "ner":"O"
- },
- {
- "id":39,
- "orth":"items",
- "ner":"O"
- },
- {
- "id":40,
- "orth":"need",
- "ner":"O"
- },
- {
- "id":41,
- "orth":"to",
- "ner":"O"
- },
- {
- "id":42,
- "orth":"bake",
- "ner":"O"
- },
- {
- "id":43,
- "orth":"and",
- "ner":"O"
- },
- {
- "id":44,
- "orth":"at",
- "ner":"O"
- },
- {
- "id":45,
- "orth":"what",
- "ner":"O"
- },
- {
- "id":46,
- "orth":"temperature",
- "ner":"O"
- },
- {
- "id":47,
- "orth":"?",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":48,
- "orth":"\n",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- }
- ],
- "cats":[
- {
- "label":"baking",
- "value":1.0
- },
- {
- "label":"not_baking",
- "value":0.0
- }
- ]
- },
- {
- "raw":"Do I need to sift flour that is labeled sifted?\nIs there really an advantage to sifting flour that I bought that was labeled 'sifted'?\n",
- "sentences":[
- {
- "tokens":[
- {
- "id":0,
- "orth":"Do",
- "ner":"O"
- },
- {
- "id":1,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":2,
- "orth":"need",
- "ner":"O"
- },
- {
- "id":3,
- "orth":"to",
- "ner":"O"
- },
- {
- "id":4,
- "orth":"sift",
- "ner":"O"
- },
- {
- "id":5,
- "orth":"flour",
- "ner":"O"
- },
- {
- "id":6,
- "orth":"that",
- "ner":"O"
- },
- {
- "id":7,
- "orth":"is",
- "ner":"O"
- },
- {
- "id":8,
- "orth":"labeled",
- "ner":"O"
- },
- {
- "id":9,
- "orth":"sifted",
- "ner":"O"
- },
- {
- "id":10,
- "orth":"?",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":11,
- "orth":"\n",
- "ner":"O"
- },
- {
- "id":12,
- "orth":"Is",
- "ner":"O"
- },
- {
- "id":13,
- "orth":"there",
- "ner":"O"
- },
- {
- "id":14,
- "orth":"really",
- "ner":"O"
- },
- {
- "id":15,
- "orth":"an",
- "ner":"O"
- },
- {
- "id":16,
- "orth":"advantage",
- "ner":"O"
- },
- {
- "id":17,
- "orth":"to",
- "ner":"O"
- },
- {
- "id":18,
- "orth":"sifting",
- "ner":"O"
- },
- {
- "id":19,
- "orth":"flour",
- "ner":"O"
- },
- {
- "id":20,
- "orth":"that",
- "ner":"O"
- },
- {
- "id":21,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":22,
- "orth":"bought",
- "ner":"O"
- },
- {
- "id":23,
- "orth":"that",
- "ner":"O"
- },
- {
- "id":24,
- "orth":"was",
- "ner":"O"
- },
- {
- "id":25,
- "orth":"labeled",
- "ner":"O"
- },
- {
- "id":26,
- "orth":"'",
- "ner":"O"
- },
- {
- "id":27,
- "orth":"sifted",
- "ner":"O"
- },
- {
- "id":28,
- "orth":"'",
- "ner":"O"
- },
- {
- "id":29,
- "orth":"?",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":30,
- "orth":"\n",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- }
- ],
- "cats":[
- {
- "label":"baking",
- "value":1.0
- },
- {
- "label":"not_baking",
- "value":0.0
- }
- ]
- }
- ]
- }
-]
\ No newline at end of file
diff --git a/examples/training/textcat_example_data/cooking.jsonl b/examples/training/textcat_example_data/cooking.jsonl
deleted file mode 100644
index cfdc9be87..000000000
--- a/examples/training/textcat_example_data/cooking.jsonl
+++ /dev/null
@@ -1,10 +0,0 @@
-{"cats": {"baking": 0.0, "not_baking": 1.0}, "meta": {"id": "2"}, "text": "How should I cook bacon in an oven?\nI've heard of people cooking bacon in an oven by laying the strips out on a cookie sheet. When using this method, how long should I cook the bacon for, and at what temperature?\n"}
-{"cats": {"baking": 0.0, "not_baking": 1.0}, "meta": {"id": "3"}, "text": "What is the difference between white and brown eggs?\nI always use brown extra large eggs, but I can't honestly say why I do this other than habit at this point. Are there any distinct advantages or disadvantages like flavor, shelf life, etc?\n"}
-{"cats": {"baking": 0.0, "not_baking": 1.0}, "meta": {"id": "4"}, "text": "What is the difference between baking soda and baking powder?\nAnd can I use one in place of the other in certain recipes?\n"}
-{"cats": {"baking": 0.0, "not_baking": 1.0}, "meta": {"id": "5"}, "text": "In a tomato sauce recipe, how can I cut the acidity?\nIt seems that every time I make a tomato sauce for pasta, the sauce is a little bit too acid for my taste. I've tried using sugar or sodium bicarbonate, but I'm not satisfied with the results.\n"}
-{"cats": {"baking": 0.0, "not_baking": 1.0}, "meta": {"id": "6"}, "text": "What ingredients (available in specific regions) can I substitute for parsley?\nI have a recipe that calls for fresh parsley. I have substituted other fresh herbs for their dried equivalents but I don't have fresh or dried parsley. Is there something else (ex another dried herb) that I can use instead of parsley?\nI know it is used mainly for looks rather than taste but I have a pasta recipe that calls for 2 tablespoons of parsley in the sauce and then another 2 tablespoons on top when it is done. I know the parsley on top is more for looks but there must be something about the taste otherwise it would call for parsley within the sauce as well.\nI would especially like to hear about substitutes available in Southeast Asia and other parts of the world where the obvious answers (such as cilantro) are not widely available.\n"}
-{"cats": {"baking": 0.0, "not_baking": 1.0}, "meta": {"id": "9"}, "text": "What is the internal temperature a steak should be cooked to for Rare/Medium Rare/Medium/Well?\nI'd like to know when to take my steaks off the grill and please everybody.\n"}
-{"cats": {"baking": 0.0, "not_baking": 1.0}, "meta": {"id": "11"}, "text": "How should I poach an egg?\nWhat's the best method to poach an egg without it turning into an eggy soupy mess?\n"}
-{"cats": {"baking": 0.0, "not_baking": 1.0}, "meta": {"id": "12"}, "text": "How can I make my Ice Cream \"creamier\"\nMy ice cream doesn't feel creamy enough. I got the recipe from Good Eats, and I can't tell if it's just the recipe or maybe that I'm just not getting my \"batter\" cold enough before I try to make it (I let it chill overnight in the refrigerator, but it doesn't always come out of the machine looking like \"soft serve\" as he said on the show - it's usually a little thinner).\nRecipe: http://www.foodnetwork.com/recipes/alton-brown/serious-vanilla-ice-cream-recipe/index.html\nThanks!\n"}
-{"cats": {"baking": 1.0, "not_baking": 0.0}, "meta": {"id": "17"}, "text": "How long and at what temperature do the various parts of a chicken need to be cooked?\nI'm interested in baking thighs, legs, breasts and wings. How long do each of these items need to bake and at what temperature?\n"}
-{"cats": {"baking": 1.0, "not_baking": 0.0}, "meta": {"id": "27"}, "text": "Do I need to sift flour that is labeled sifted?\nIs there really an advantage to sifting flour that I bought that was labeled 'sifted'?\n"}
diff --git a/examples/training/textcat_example_data/jigsaw-toxic-comment.json b/examples/training/textcat_example_data/jigsaw-toxic-comment.json
deleted file mode 100644
index 0c8d8f8e0..000000000
--- a/examples/training/textcat_example_data/jigsaw-toxic-comment.json
+++ /dev/null
@@ -1,2987 +0,0 @@
-[
- {
- "id":0,
- "paragraphs":[
- {
- "raw":"Explanation\nWhy the edits made under my username Hardcore Metallica Fan were reverted? They weren't vandalisms, just closure on some GAs after I voted at New York Dolls FAC. And please don't remove the template from the talk page since I'm retired now.89.205.38.27",
- "sentences":[
- {
- "tokens":[
- {
- "id":0,
- "orth":"Explanation",
- "ner":"O"
- },
- {
- "id":1,
- "orth":"\n",
- "ner":"O"
- },
- {
- "id":2,
- "orth":"Why",
- "ner":"O"
- },
- {
- "id":3,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":4,
- "orth":"edits",
- "ner":"O"
- },
- {
- "id":5,
- "orth":"made",
- "ner":"O"
- },
- {
- "id":6,
- "orth":"under",
- "ner":"O"
- },
- {
- "id":7,
- "orth":"my",
- "ner":"O"
- },
- {
- "id":8,
- "orth":"username",
- "ner":"O"
- },
- {
- "id":9,
- "orth":"Hardcore",
- "ner":"O"
- },
- {
- "id":10,
- "orth":"Metallica",
- "ner":"O"
- },
- {
- "id":11,
- "orth":"Fan",
- "ner":"O"
- },
- {
- "id":12,
- "orth":"were",
- "ner":"O"
- },
- {
- "id":13,
- "orth":"reverted",
- "ner":"O"
- },
- {
- "id":14,
- "orth":"?",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":15,
- "orth":"They",
- "ner":"O"
- },
- {
- "id":16,
- "orth":"were",
- "ner":"O"
- },
- {
- "id":17,
- "orth":"n't",
- "ner":"O"
- },
- {
- "id":18,
- "orth":"vandalisms",
- "ner":"O"
- },
- {
- "id":19,
- "orth":",",
- "ner":"O"
- },
- {
- "id":20,
- "orth":"just",
- "ner":"O"
- },
- {
- "id":21,
- "orth":"closure",
- "ner":"O"
- },
- {
- "id":22,
- "orth":"on",
- "ner":"O"
- },
- {
- "id":23,
- "orth":"some",
- "ner":"O"
- },
- {
- "id":24,
- "orth":"GAs",
- "ner":"O"
- },
- {
- "id":25,
- "orth":"after",
- "ner":"O"
- },
- {
- "id":26,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":27,
- "orth":"voted",
- "ner":"O"
- },
- {
- "id":28,
- "orth":"at",
- "ner":"O"
- },
- {
- "id":29,
- "orth":"New",
- "ner":"O"
- },
- {
- "id":30,
- "orth":"York",
- "ner":"O"
- },
- {
- "id":31,
- "orth":"Dolls",
- "ner":"O"
- },
- {
- "id":32,
- "orth":"FAC",
- "ner":"O"
- },
- {
- "id":33,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":34,
- "orth":"And",
- "ner":"O"
- },
- {
- "id":35,
- "orth":"please",
- "ner":"O"
- },
- {
- "id":36,
- "orth":"do",
- "ner":"O"
- },
- {
- "id":37,
- "orth":"n't",
- "ner":"O"
- },
- {
- "id":38,
- "orth":"remove",
- "ner":"O"
- },
- {
- "id":39,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":40,
- "orth":"template",
- "ner":"O"
- },
- {
- "id":41,
- "orth":"from",
- "ner":"O"
- },
- {
- "id":42,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":43,
- "orth":"talk",
- "ner":"O"
- },
- {
- "id":44,
- "orth":"page",
- "ner":"O"
- },
- {
- "id":45,
- "orth":"since",
- "ner":"O"
- },
- {
- "id":46,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":47,
- "orth":"'m",
- "ner":"O"
- },
- {
- "id":48,
- "orth":"retired",
- "ner":"O"
- },
- {
- "id":49,
- "orth":"now.89.205.38.27",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- }
- ],
- "cats":[
- {
- "label":"insult",
- "value":0
- },
- {
- "label":"obscene",
- "value":0
- },
- {
- "label":"severe_toxic",
- "value":0
- },
- {
- "label":"toxic",
- "value":0
- }
- ]
- },
- {
- "raw":"I'm Sorry \n\nI'm sorry I screwed around with someones talk page. It was very bad to do. I know how having the templates on their talk page helps you assert your dominance over them. I know I should bow down to the almighty administrators. But then again, I'm going to go play outside....with your mom. 76.122.79.82",
- "sentences":[
- {
- "tokens":[
- {
- "id":0,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":1,
- "orth":"'m",
- "ner":"O"
- },
- {
- "id":2,
- "orth":"Sorry",
- "ner":"O"
- },
- {
- "id":3,
- "orth":"\n\n",
- "ner":"O"
- },
- {
- "id":4,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":5,
- "orth":"'m",
- "ner":"O"
- },
- {
- "id":6,
- "orth":"sorry",
- "ner":"O"
- },
- {
- "id":7,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":8,
- "orth":"screwed",
- "ner":"O"
- },
- {
- "id":9,
- "orth":"around",
- "ner":"O"
- },
- {
- "id":10,
- "orth":"with",
- "ner":"O"
- },
- {
- "id":11,
- "orth":"someones",
- "ner":"O"
- },
- {
- "id":12,
- "orth":"talk",
- "ner":"O"
- },
- {
- "id":13,
- "orth":"page",
- "ner":"O"
- },
- {
- "id":14,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":15,
- "orth":" ",
- "ner":"O"
- },
- {
- "id":16,
- "orth":"It",
- "ner":"O"
- },
- {
- "id":17,
- "orth":"was",
- "ner":"O"
- },
- {
- "id":18,
- "orth":"very",
- "ner":"O"
- },
- {
- "id":19,
- "orth":"bad",
- "ner":"O"
- },
- {
- "id":20,
- "orth":"to",
- "ner":"O"
- },
- {
- "id":21,
- "orth":"do",
- "ner":"O"
- },
- {
- "id":22,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":23,
- "orth":" ",
- "ner":"O"
- },
- {
- "id":24,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":25,
- "orth":"know",
- "ner":"O"
- },
- {
- "id":26,
- "orth":"how",
- "ner":"O"
- },
- {
- "id":27,
- "orth":"having",
- "ner":"O"
- },
- {
- "id":28,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":29,
- "orth":"templates",
- "ner":"O"
- },
- {
- "id":30,
- "orth":"on",
- "ner":"O"
- },
- {
- "id":31,
- "orth":"their",
- "ner":"O"
- },
- {
- "id":32,
- "orth":"talk",
- "ner":"O"
- },
- {
- "id":33,
- "orth":"page",
- "ner":"O"
- },
- {
- "id":34,
- "orth":"helps",
- "ner":"O"
- },
- {
- "id":35,
- "orth":"you",
- "ner":"O"
- },
- {
- "id":36,
- "orth":"assert",
- "ner":"O"
- },
- {
- "id":37,
- "orth":"your",
- "ner":"O"
- },
- {
- "id":38,
- "orth":"dominance",
- "ner":"O"
- },
- {
- "id":39,
- "orth":"over",
- "ner":"O"
- },
- {
- "id":40,
- "orth":"them",
- "ner":"O"
- },
- {
- "id":41,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":42,
- "orth":" ",
- "ner":"O"
- },
- {
- "id":43,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":44,
- "orth":"know",
- "ner":"O"
- },
- {
- "id":45,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":46,
- "orth":"should",
- "ner":"O"
- },
- {
- "id":47,
- "orth":"bow",
- "ner":"O"
- },
- {
- "id":48,
- "orth":"down",
- "ner":"O"
- },
- {
- "id":49,
- "orth":"to",
- "ner":"O"
- },
- {
- "id":50,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":51,
- "orth":"almighty",
- "ner":"O"
- },
- {
- "id":52,
- "orth":"administrators",
- "ner":"O"
- },
- {
- "id":53,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":54,
- "orth":" ",
- "ner":"O"
- },
- {
- "id":55,
- "orth":"But",
- "ner":"O"
- },
- {
- "id":56,
- "orth":"then",
- "ner":"O"
- },
- {
- "id":57,
- "orth":"again",
- "ner":"O"
- },
- {
- "id":58,
- "orth":",",
- "ner":"O"
- },
- {
- "id":59,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":60,
- "orth":"'m",
- "ner":"O"
- },
- {
- "id":61,
- "orth":"going",
- "ner":"O"
- },
- {
- "id":62,
- "orth":"to",
- "ner":"O"
- },
- {
- "id":63,
- "orth":"go",
- "ner":"O"
- },
- {
- "id":64,
- "orth":"play",
- "ner":"O"
- },
- {
- "id":65,
- "orth":"outside",
- "ner":"O"
- },
- {
- "id":66,
- "orth":"....",
- "ner":"O"
- },
- {
- "id":67,
- "orth":"with",
- "ner":"O"
- },
- {
- "id":68,
- "orth":"your",
- "ner":"O"
- },
- {
- "id":69,
- "orth":"mom",
- "ner":"O"
- },
- {
- "id":70,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":71,
- "orth":" ",
- "ner":"O"
- },
- {
- "id":72,
- "orth":"76.122.79.82",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- }
- ],
- "cats":[
- {
- "label":"insult",
- "value":0
- },
- {
- "label":"obscene",
- "value":0
- },
- {
- "label":"severe_toxic",
- "value":0
- },
- {
- "label":"toxic",
- "value":1
- }
- ]
- },
- {
- "raw":"Stupid peace of shit stop deleting my stuff asshole go die and fall in a hole go to hell!",
- "sentences":[
- {
- "tokens":[
- {
- "id":0,
- "orth":"Stupid",
- "ner":"O"
- },
- {
- "id":1,
- "orth":"peace",
- "ner":"O"
- },
- {
- "id":2,
- "orth":"of",
- "ner":"O"
- },
- {
- "id":3,
- "orth":"shit",
- "ner":"O"
- },
- {
- "id":4,
- "orth":"stop",
- "ner":"O"
- },
- {
- "id":5,
- "orth":"deleting",
- "ner":"O"
- },
- {
- "id":6,
- "orth":"my",
- "ner":"O"
- },
- {
- "id":7,
- "orth":"stuff",
- "ner":"O"
- },
- {
- "id":8,
- "orth":"asshole",
- "ner":"O"
- },
- {
- "id":9,
- "orth":"go",
- "ner":"O"
- },
- {
- "id":10,
- "orth":"die",
- "ner":"O"
- },
- {
- "id":11,
- "orth":"and",
- "ner":"O"
- },
- {
- "id":12,
- "orth":"fall",
- "ner":"O"
- },
- {
- "id":13,
- "orth":"in",
- "ner":"O"
- },
- {
- "id":14,
- "orth":"a",
- "ner":"O"
- },
- {
- "id":15,
- "orth":"hole",
- "ner":"O"
- },
- {
- "id":16,
- "orth":"go",
- "ner":"O"
- },
- {
- "id":17,
- "orth":"to",
- "ner":"O"
- },
- {
- "id":18,
- "orth":"hell",
- "ner":"O"
- },
- {
- "id":19,
- "orth":"!",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- }
- ],
- "cats":[
- {
- "label":"insult",
- "value":1
- },
- {
- "label":"obscene",
- "value":1
- },
- {
- "label":"severe_toxic",
- "value":1
- },
- {
- "label":"toxic",
- "value":1
- }
- ]
- },
- {
- "raw":"D'aww! He matches this background colour I'm seemingly stuck with. Thanks. (talk) 21:51, January 11, 2016 (UTC)",
- "sentences":[
- {
- "tokens":[
- {
- "id":0,
- "orth":"D'aww",
- "ner":"O"
- },
- {
- "id":1,
- "orth":"!",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":2,
- "orth":"He",
- "ner":"O"
- },
- {
- "id":3,
- "orth":"matches",
- "ner":"O"
- },
- {
- "id":4,
- "orth":"this",
- "ner":"O"
- },
- {
- "id":5,
- "orth":"background",
- "ner":"O"
- },
- {
- "id":6,
- "orth":"colour",
- "ner":"O"
- },
- {
- "id":7,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":8,
- "orth":"'m",
- "ner":"O"
- },
- {
- "id":9,
- "orth":"seemingly",
- "ner":"O"
- },
- {
- "id":10,
- "orth":"stuck",
- "ner":"O"
- },
- {
- "id":11,
- "orth":"with",
- "ner":"O"
- },
- {
- "id":12,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":13,
- "orth":"Thanks",
- "ner":"O"
- },
- {
- "id":14,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":15,
- "orth":" ",
- "ner":"O"
- },
- {
- "id":16,
- "orth":"(",
- "ner":"O"
- },
- {
- "id":17,
- "orth":"talk",
- "ner":"O"
- },
- {
- "id":18,
- "orth":")",
- "ner":"O"
- },
- {
- "id":19,
- "orth":"21:51",
- "ner":"O"
- },
- {
- "id":20,
- "orth":",",
- "ner":"O"
- },
- {
- "id":21,
- "orth":"January",
- "ner":"O"
- },
- {
- "id":22,
- "orth":"11",
- "ner":"O"
- },
- {
- "id":23,
- "orth":",",
- "ner":"O"
- },
- {
- "id":24,
- "orth":"2016",
- "ner":"O"
- },
- {
- "id":25,
- "orth":"(",
- "ner":"O"
- },
- {
- "id":26,
- "orth":"UTC",
- "ner":"O"
- },
- {
- "id":27,
- "orth":")",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- }
- ],
- "cats":[
- {
- "label":"insult",
- "value":0
- },
- {
- "label":"obscene",
- "value":0
- },
- {
- "label":"severe_toxic",
- "value":0
- },
- {
- "label":"toxic",
- "value":0
- }
- ]
- },
- {
- "raw":"GET FUCKED UP. GET FUCKEEED UP. GOT A DRINK THAT YOU CANT PUT DOWN???/ GET FUCK UP GET FUCKED UP. I'M FUCKED UP RIGHT NOW!",
- "sentences":[
- {
- "tokens":[
- {
- "id":0,
- "orth":"GET",
- "ner":"O"
- },
- {
- "id":1,
- "orth":"FUCKED",
- "ner":"O"
- },
- {
- "id":2,
- "orth":"UP",
- "ner":"O"
- },
- {
- "id":3,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":4,
- "orth":"GET",
- "ner":"O"
- },
- {
- "id":5,
- "orth":"FUCKEEED",
- "ner":"O"
- },
- {
- "id":6,
- "orth":"UP",
- "ner":"O"
- },
- {
- "id":7,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":8,
- "orth":" ",
- "ner":"O"
- },
- {
- "id":9,
- "orth":"GOT",
- "ner":"O"
- },
- {
- "id":10,
- "orth":"A",
- "ner":"O"
- },
- {
- "id":11,
- "orth":"DRINK",
- "ner":"O"
- },
- {
- "id":12,
- "orth":"THAT",
- "ner":"O"
- },
- {
- "id":13,
- "orth":"YOU",
- "ner":"O"
- },
- {
- "id":14,
- "orth":"CANT",
- "ner":"O"
- },
- {
- "id":15,
- "orth":"PUT",
- "ner":"O"
- },
- {
- "id":16,
- "orth":"DOWN???/",
- "ner":"O"
- },
- {
- "id":17,
- "orth":"GET",
- "ner":"O"
- },
- {
- "id":18,
- "orth":"FUCK",
- "ner":"O"
- },
- {
- "id":19,
- "orth":"UP",
- "ner":"O"
- },
- {
- "id":20,
- "orth":"GET",
- "ner":"O"
- },
- {
- "id":21,
- "orth":"FUCKED",
- "ner":"O"
- },
- {
- "id":22,
- "orth":"UP",
- "ner":"O"
- },
- {
- "id":23,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":24,
- "orth":" ",
- "ner":"O"
- },
- {
- "id":25,
- "orth":"I'M",
- "ner":"O"
- },
- {
- "id":26,
- "orth":"FUCKED",
- "ner":"O"
- },
- {
- "id":27,
- "orth":"UP",
- "ner":"O"
- },
- {
- "id":28,
- "orth":"RIGHT",
- "ner":"O"
- },
- {
- "id":29,
- "orth":"NOW",
- "ner":"O"
- },
- {
- "id":30,
- "orth":"!",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- }
- ],
- "cats":[
- {
- "label":"insult",
- "value":0
- },
- {
- "label":"obscene",
- "value":1
- },
- {
- "label":"severe_toxic",
- "value":0
- },
- {
- "label":"toxic",
- "value":1
- }
- ]
- },
- {
- "raw":"Hey man, I'm really not trying to edit war. It's just that this guy is constantly removing relevant information and talking to me through edits instead of my talk page. He seems to care more about the formatting than the actual info.",
- "sentences":[
- {
- "tokens":[
- {
- "id":0,
- "orth":"Hey",
- "ner":"O"
- },
- {
- "id":1,
- "orth":"man",
- "ner":"O"
- },
- {
- "id":2,
- "orth":",",
- "ner":"O"
- },
- {
- "id":3,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":4,
- "orth":"'m",
- "ner":"O"
- },
- {
- "id":5,
- "orth":"really",
- "ner":"O"
- },
- {
- "id":6,
- "orth":"not",
- "ner":"O"
- },
- {
- "id":7,
- "orth":"trying",
- "ner":"O"
- },
- {
- "id":8,
- "orth":"to",
- "ner":"O"
- },
- {
- "id":9,
- "orth":"edit",
- "ner":"O"
- },
- {
- "id":10,
- "orth":"war",
- "ner":"O"
- },
- {
- "id":11,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":12,
- "orth":"It",
- "ner":"O"
- },
- {
- "id":13,
- "orth":"'s",
- "ner":"O"
- },
- {
- "id":14,
- "orth":"just",
- "ner":"O"
- },
- {
- "id":15,
- "orth":"that",
- "ner":"O"
- },
- {
- "id":16,
- "orth":"this",
- "ner":"O"
- },
- {
- "id":17,
- "orth":"guy",
- "ner":"O"
- },
- {
- "id":18,
- "orth":"is",
- "ner":"O"
- },
- {
- "id":19,
- "orth":"constantly",
- "ner":"O"
- },
- {
- "id":20,
- "orth":"removing",
- "ner":"O"
- },
- {
- "id":21,
- "orth":"relevant",
- "ner":"O"
- },
- {
- "id":22,
- "orth":"information",
- "ner":"O"
- },
- {
- "id":23,
- "orth":"and",
- "ner":"O"
- },
- {
- "id":24,
- "orth":"talking",
- "ner":"O"
- },
- {
- "id":25,
- "orth":"to",
- "ner":"O"
- },
- {
- "id":26,
- "orth":"me",
- "ner":"O"
- },
- {
- "id":27,
- "orth":"through",
- "ner":"O"
- },
- {
- "id":28,
- "orth":"edits",
- "ner":"O"
- },
- {
- "id":29,
- "orth":"instead",
- "ner":"O"
- },
- {
- "id":30,
- "orth":"of",
- "ner":"O"
- },
- {
- "id":31,
- "orth":"my",
- "ner":"O"
- },
- {
- "id":32,
- "orth":"talk",
- "ner":"O"
- },
- {
- "id":33,
- "orth":"page",
- "ner":"O"
- },
- {
- "id":34,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":35,
- "orth":"He",
- "ner":"O"
- },
- {
- "id":36,
- "orth":"seems",
- "ner":"O"
- },
- {
- "id":37,
- "orth":"to",
- "ner":"O"
- },
- {
- "id":38,
- "orth":"care",
- "ner":"O"
- },
- {
- "id":39,
- "orth":"more",
- "ner":"O"
- },
- {
- "id":40,
- "orth":"about",
- "ner":"O"
- },
- {
- "id":41,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":42,
- "orth":"formatting",
- "ner":"O"
- },
- {
- "id":43,
- "orth":"than",
- "ner":"O"
- },
- {
- "id":44,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":45,
- "orth":"actual",
- "ner":"O"
- },
- {
- "id":46,
- "orth":"info",
- "ner":"O"
- },
- {
- "id":47,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- }
- ],
- "cats":[
- {
- "label":"insult",
- "value":0
- },
- {
- "label":"obscene",
- "value":0
- },
- {
- "label":"severe_toxic",
- "value":0
- },
- {
- "label":"toxic",
- "value":0
- }
- ]
- },
- {
- "raw":"\"\nMore\nI can't make any real suggestions on improvement - I wondered if the section statistics should be later on, or a subsection of \"\"types of accidents\"\" -I think the references may need tidying so that they are all in the exact same format ie date format etc. I can do that later on, if no-one else does first - if you have any preferences for formatting style on references or want to do it yourself please let me know.\n\nThere appears to be a backlog on articles for review so I guess there may be a delay until a reviewer turns up. It's listed in the relevant form eg Wikipedia:Good_article_nominations#Transport \"",
- "sentences":[
- {
- "tokens":[
- {
- "id":0,
- "orth":"\"",
- "ner":"O"
- },
- {
- "id":1,
- "orth":"\n",
- "ner":"O"
- },
- {
- "id":2,
- "orth":"More",
- "ner":"O"
- },
- {
- "id":3,
- "orth":"\n",
- "ner":"O"
- },
- {
- "id":4,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":5,
- "orth":"ca",
- "ner":"O"
- },
- {
- "id":6,
- "orth":"n't",
- "ner":"O"
- },
- {
- "id":7,
- "orth":"make",
- "ner":"O"
- },
- {
- "id":8,
- "orth":"any",
- "ner":"O"
- },
- {
- "id":9,
- "orth":"real",
- "ner":"O"
- },
- {
- "id":10,
- "orth":"suggestions",
- "ner":"O"
- },
- {
- "id":11,
- "orth":"on",
- "ner":"O"
- },
- {
- "id":12,
- "orth":"improvement",
- "ner":"O"
- },
- {
- "id":13,
- "orth":"-",
- "ner":"O"
- },
- {
- "id":14,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":15,
- "orth":"wondered",
- "ner":"O"
- },
- {
- "id":16,
- "orth":"if",
- "ner":"O"
- },
- {
- "id":17,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":18,
- "orth":"section",
- "ner":"O"
- },
- {
- "id":19,
- "orth":"statistics",
- "ner":"O"
- },
- {
- "id":20,
- "orth":"should",
- "ner":"O"
- },
- {
- "id":21,
- "orth":"be",
- "ner":"O"
- },
- {
- "id":22,
- "orth":"later",
- "ner":"O"
- },
- {
- "id":23,
- "orth":"on",
- "ner":"O"
- },
- {
- "id":24,
- "orth":",",
- "ner":"O"
- },
- {
- "id":25,
- "orth":"or",
- "ner":"O"
- },
- {
- "id":26,
- "orth":"a",
- "ner":"O"
- },
- {
- "id":27,
- "orth":"subsection",
- "ner":"O"
- },
- {
- "id":28,
- "orth":"of",
- "ner":"O"
- },
- {
- "id":29,
- "orth":"\"",
- "ner":"O"
- },
- {
- "id":30,
- "orth":"\"",
- "ner":"O"
- },
- {
- "id":31,
- "orth":"types",
- "ner":"O"
- },
- {
- "id":32,
- "orth":"of",
- "ner":"O"
- },
- {
- "id":33,
- "orth":"accidents",
- "ner":"O"
- },
- {
- "id":34,
- "orth":"\"",
- "ner":"O"
- },
- {
- "id":35,
- "orth":"\"",
- "ner":"O"
- },
- {
- "id":36,
- "orth":" ",
- "ner":"O"
- },
- {
- "id":37,
- "orth":"-I",
- "ner":"O"
- },
- {
- "id":38,
- "orth":"think",
- "ner":"O"
- },
- {
- "id":39,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":40,
- "orth":"references",
- "ner":"O"
- },
- {
- "id":41,
- "orth":"may",
- "ner":"O"
- },
- {
- "id":42,
- "orth":"need",
- "ner":"O"
- },
- {
- "id":43,
- "orth":"tidying",
- "ner":"O"
- },
- {
- "id":44,
- "orth":"so",
- "ner":"O"
- },
- {
- "id":45,
- "orth":"that",
- "ner":"O"
- },
- {
- "id":46,
- "orth":"they",
- "ner":"O"
- },
- {
- "id":47,
- "orth":"are",
- "ner":"O"
- },
- {
- "id":48,
- "orth":"all",
- "ner":"O"
- },
- {
- "id":49,
- "orth":"in",
- "ner":"O"
- },
- {
- "id":50,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":51,
- "orth":"exact",
- "ner":"O"
- },
- {
- "id":52,
- "orth":"same",
- "ner":"O"
- },
- {
- "id":53,
- "orth":"format",
- "ner":"O"
- },
- {
- "id":54,
- "orth":"ie",
- "ner":"O"
- },
- {
- "id":55,
- "orth":"date",
- "ner":"O"
- },
- {
- "id":56,
- "orth":"format",
- "ner":"O"
- },
- {
- "id":57,
- "orth":"etc",
- "ner":"O"
- },
- {
- "id":58,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":59,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":60,
- "orth":"can",
- "ner":"O"
- },
- {
- "id":61,
- "orth":"do",
- "ner":"O"
- },
- {
- "id":62,
- "orth":"that",
- "ner":"O"
- },
- {
- "id":63,
- "orth":"later",
- "ner":"O"
- },
- {
- "id":64,
- "orth":"on",
- "ner":"O"
- },
- {
- "id":65,
- "orth":",",
- "ner":"O"
- },
- {
- "id":66,
- "orth":"if",
- "ner":"O"
- },
- {
- "id":67,
- "orth":"no",
- "ner":"O"
- },
- {
- "id":68,
- "orth":"-",
- "ner":"O"
- },
- {
- "id":69,
- "orth":"one",
- "ner":"O"
- },
- {
- "id":70,
- "orth":"else",
- "ner":"O"
- },
- {
- "id":71,
- "orth":"does",
- "ner":"O"
- },
- {
- "id":72,
- "orth":"first",
- "ner":"O"
- },
- {
- "id":73,
- "orth":"-",
- "ner":"O"
- },
- {
- "id":74,
- "orth":"if",
- "ner":"O"
- },
- {
- "id":75,
- "orth":"you",
- "ner":"O"
- },
- {
- "id":76,
- "orth":"have",
- "ner":"O"
- },
- {
- "id":77,
- "orth":"any",
- "ner":"O"
- },
- {
- "id":78,
- "orth":"preferences",
- "ner":"O"
- },
- {
- "id":79,
- "orth":"for",
- "ner":"O"
- },
- {
- "id":80,
- "orth":"formatting",
- "ner":"O"
- },
- {
- "id":81,
- "orth":"style",
- "ner":"O"
- },
- {
- "id":82,
- "orth":"on",
- "ner":"O"
- },
- {
- "id":83,
- "orth":"references",
- "ner":"O"
- },
- {
- "id":84,
- "orth":"or",
- "ner":"O"
- },
- {
- "id":85,
- "orth":"want",
- "ner":"O"
- },
- {
- "id":86,
- "orth":"to",
- "ner":"O"
- },
- {
- "id":87,
- "orth":"do",
- "ner":"O"
- },
- {
- "id":88,
- "orth":"it",
- "ner":"O"
- },
- {
- "id":89,
- "orth":"yourself",
- "ner":"O"
- },
- {
- "id":90,
- "orth":"please",
- "ner":"O"
- },
- {
- "id":91,
- "orth":"let",
- "ner":"O"
- },
- {
- "id":92,
- "orth":"me",
- "ner":"O"
- },
- {
- "id":93,
- "orth":"know",
- "ner":"O"
- },
- {
- "id":94,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":95,
- "orth":"\n\n",
- "ner":"O"
- },
- {
- "id":96,
- "orth":"There",
- "ner":"O"
- },
- {
- "id":97,
- "orth":"appears",
- "ner":"O"
- },
- {
- "id":98,
- "orth":"to",
- "ner":"O"
- },
- {
- "id":99,
- "orth":"be",
- "ner":"O"
- },
- {
- "id":100,
- "orth":"a",
- "ner":"O"
- },
- {
- "id":101,
- "orth":"backlog",
- "ner":"O"
- },
- {
- "id":102,
- "orth":"on",
- "ner":"O"
- },
- {
- "id":103,
- "orth":"articles",
- "ner":"O"
- },
- {
- "id":104,
- "orth":"for",
- "ner":"O"
- },
- {
- "id":105,
- "orth":"review",
- "ner":"O"
- },
- {
- "id":106,
- "orth":"so",
- "ner":"O"
- },
- {
- "id":107,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":108,
- "orth":"guess",
- "ner":"O"
- },
- {
- "id":109,
- "orth":"there",
- "ner":"O"
- },
- {
- "id":110,
- "orth":"may",
- "ner":"O"
- },
- {
- "id":111,
- "orth":"be",
- "ner":"O"
- },
- {
- "id":112,
- "orth":"a",
- "ner":"O"
- },
- {
- "id":113,
- "orth":"delay",
- "ner":"O"
- },
- {
- "id":114,
- "orth":"until",
- "ner":"O"
- },
- {
- "id":115,
- "orth":"a",
- "ner":"O"
- },
- {
- "id":116,
- "orth":"reviewer",
- "ner":"O"
- },
- {
- "id":117,
- "orth":"turns",
- "ner":"O"
- },
- {
- "id":118,
- "orth":"up",
- "ner":"O"
- },
- {
- "id":119,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":120,
- "orth":"It",
- "ner":"O"
- },
- {
- "id":121,
- "orth":"'s",
- "ner":"O"
- },
- {
- "id":122,
- "orth":"listed",
- "ner":"O"
- },
- {
- "id":123,
- "orth":"in",
- "ner":"O"
- },
- {
- "id":124,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":125,
- "orth":"relevant",
- "ner":"O"
- },
- {
- "id":126,
- "orth":"form",
- "ner":"O"
- },
- {
- "id":127,
- "orth":"eg",
- "ner":"O"
- },
- {
- "id":128,
- "orth":"Wikipedia",
- "ner":"O"
- },
- {
- "id":129,
- "orth":":",
- "ner":"O"
- },
- {
- "id":130,
- "orth":"Good_article_nominations#Transport",
- "ner":"O"
- },
- {
- "id":131,
- "orth":" ",
- "ner":"O"
- },
- {
- "id":132,
- "orth":"\"",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- }
- ],
- "cats":[
- {
- "label":"insult",
- "value":0
- },
- {
- "label":"obscene",
- "value":0
- },
- {
- "label":"severe_toxic",
- "value":0
- },
- {
- "label":"toxic",
- "value":0
- }
- ]
- },
- {
- "raw":"You, sir, are my hero. Any chance you remember what page that's on?",
- "sentences":[
- {
- "tokens":[
- {
- "id":0,
- "orth":"You",
- "ner":"O"
- },
- {
- "id":1,
- "orth":",",
- "ner":"O"
- },
- {
- "id":2,
- "orth":"sir",
- "ner":"O"
- },
- {
- "id":3,
- "orth":",",
- "ner":"O"
- },
- {
- "id":4,
- "orth":"are",
- "ner":"O"
- },
- {
- "id":5,
- "orth":"my",
- "ner":"O"
- },
- {
- "id":6,
- "orth":"hero",
- "ner":"O"
- },
- {
- "id":7,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":8,
- "orth":"Any",
- "ner":"O"
- },
- {
- "id":9,
- "orth":"chance",
- "ner":"O"
- },
- {
- "id":10,
- "orth":"you",
- "ner":"O"
- },
- {
- "id":11,
- "orth":"remember",
- "ner":"O"
- },
- {
- "id":12,
- "orth":"what",
- "ner":"O"
- },
- {
- "id":13,
- "orth":"page",
- "ner":"O"
- },
- {
- "id":14,
- "orth":"that",
- "ner":"O"
- },
- {
- "id":15,
- "orth":"'s",
- "ner":"O"
- },
- {
- "id":16,
- "orth":"on",
- "ner":"O"
- },
- {
- "id":17,
- "orth":"?",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- }
- ],
- "cats":[
- {
- "label":"insult",
- "value":0
- },
- {
- "label":"obscene",
- "value":0
- },
- {
- "label":"severe_toxic",
- "value":0
- },
- {
- "label":"toxic",
- "value":0
- }
- ]
- },
- {
- "raw":"\"\n\nCongratulations from me as well, use the tools well. \u00a0\u00b7 talk \"",
- "sentences":[
- {
- "tokens":[
- {
- "id":0,
- "orth":"\"",
- "ner":"O"
- },
- {
- "id":1,
- "orth":"\n\n",
- "ner":"O"
- },
- {
- "id":2,
- "orth":"Congratulations",
- "ner":"O"
- },
- {
- "id":3,
- "orth":"from",
- "ner":"O"
- },
- {
- "id":4,
- "orth":"me",
- "ner":"O"
- },
- {
- "id":5,
- "orth":"as",
- "ner":"O"
- },
- {
- "id":6,
- "orth":"well",
- "ner":"O"
- },
- {
- "id":7,
- "orth":",",
- "ner":"O"
- },
- {
- "id":8,
- "orth":"use",
- "ner":"O"
- },
- {
- "id":9,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":10,
- "orth":"tools",
- "ner":"O"
- },
- {
- "id":11,
- "orth":"well",
- "ner":"O"
- },
- {
- "id":12,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":13,
- "orth":"\u00a0",
- "ner":"O"
- },
- {
- "id":14,
- "orth":"\u00b7",
- "ner":"O"
- },
- {
- "id":15,
- "orth":"talk",
- "ner":"O"
- },
- {
- "id":16,
- "orth":"\"",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- }
- ],
- "cats":[
- {
- "label":"insult",
- "value":0
- },
- {
- "label":"obscene",
- "value":0
- },
- {
- "label":"severe_toxic",
- "value":0
- },
- {
- "label":"toxic",
- "value":0
- }
- ]
- },
- {
- "raw":"Why can't you believe how fat Artie is? Did you see him on his recent appearence on the Tonight Show with Jay Leno? He looks absolutely AWFUL! If I had to put money on it, I'd say that Artie Lange is a can't miss candidate for the 2007 Dead pool! \n\n \nKindly keep your malicious fingers off of my above comment, . Everytime you remove it, I will repost it!!!",
- "sentences":[
- {
- "tokens":[
- {
- "id":0,
- "orth":"Why",
- "ner":"O"
- },
- {
- "id":1,
- "orth":"ca",
- "ner":"O"
- },
- {
- "id":2,
- "orth":"n't",
- "ner":"O"
- },
- {
- "id":3,
- "orth":"you",
- "ner":"O"
- },
- {
- "id":4,
- "orth":"believe",
- "ner":"O"
- },
- {
- "id":5,
- "orth":"how",
- "ner":"O"
- },
- {
- "id":6,
- "orth":"fat",
- "ner":"O"
- },
- {
- "id":7,
- "orth":"Artie",
- "ner":"O"
- },
- {
- "id":8,
- "orth":"is",
- "ner":"O"
- },
- {
- "id":9,
- "orth":"?",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":10,
- "orth":"Did",
- "ner":"O"
- },
- {
- "id":11,
- "orth":"you",
- "ner":"O"
- },
- {
- "id":12,
- "orth":"see",
- "ner":"O"
- },
- {
- "id":13,
- "orth":"him",
- "ner":"O"
- },
- {
- "id":14,
- "orth":"on",
- "ner":"O"
- },
- {
- "id":15,
- "orth":"his",
- "ner":"O"
- },
- {
- "id":16,
- "orth":"recent",
- "ner":"O"
- },
- {
- "id":17,
- "orth":"appearence",
- "ner":"O"
- },
- {
- "id":18,
- "orth":"on",
- "ner":"O"
- },
- {
- "id":19,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":20,
- "orth":"Tonight",
- "ner":"O"
- },
- {
- "id":21,
- "orth":"Show",
- "ner":"O"
- },
- {
- "id":22,
- "orth":"with",
- "ner":"O"
- },
- {
- "id":23,
- "orth":"Jay",
- "ner":"O"
- },
- {
- "id":24,
- "orth":"Leno",
- "ner":"O"
- },
- {
- "id":25,
- "orth":"?",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":26,
- "orth":"He",
- "ner":"O"
- },
- {
- "id":27,
- "orth":"looks",
- "ner":"O"
- },
- {
- "id":28,
- "orth":"absolutely",
- "ner":"O"
- },
- {
- "id":29,
- "orth":"AWFUL",
- "ner":"O"
- },
- {
- "id":30,
- "orth":"!",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":31,
- "orth":"If",
- "ner":"O"
- },
- {
- "id":32,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":33,
- "orth":"had",
- "ner":"O"
- },
- {
- "id":34,
- "orth":"to",
- "ner":"O"
- },
- {
- "id":35,
- "orth":"put",
- "ner":"O"
- },
- {
- "id":36,
- "orth":"money",
- "ner":"O"
- },
- {
- "id":37,
- "orth":"on",
- "ner":"O"
- },
- {
- "id":38,
- "orth":"it",
- "ner":"O"
- },
- {
- "id":39,
- "orth":",",
- "ner":"O"
- },
- {
- "id":40,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":41,
- "orth":"'d",
- "ner":"O"
- },
- {
- "id":42,
- "orth":"say",
- "ner":"O"
- },
- {
- "id":43,
- "orth":"that",
- "ner":"O"
- },
- {
- "id":44,
- "orth":"Artie",
- "ner":"O"
- },
- {
- "id":45,
- "orth":"Lange",
- "ner":"O"
- },
- {
- "id":46,
- "orth":"is",
- "ner":"O"
- },
- {
- "id":47,
- "orth":"a",
- "ner":"O"
- },
- {
- "id":48,
- "orth":"ca",
- "ner":"O"
- },
- {
- "id":49,
- "orth":"n't",
- "ner":"O"
- },
- {
- "id":50,
- "orth":"miss",
- "ner":"O"
- },
- {
- "id":51,
- "orth":"candidate",
- "ner":"O"
- },
- {
- "id":52,
- "orth":"for",
- "ner":"O"
- },
- {
- "id":53,
- "orth":"the",
- "ner":"O"
- },
- {
- "id":54,
- "orth":"2007",
- "ner":"O"
- },
- {
- "id":55,
- "orth":"Dead",
- "ner":"O"
- },
- {
- "id":56,
- "orth":"pool",
- "ner":"O"
- },
- {
- "id":57,
- "orth":"!",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":58,
- "orth":" \n\n \n",
- "ner":"O"
- },
- {
- "id":59,
- "orth":"Kindly",
- "ner":"O"
- },
- {
- "id":60,
- "orth":"keep",
- "ner":"O"
- },
- {
- "id":61,
- "orth":"your",
- "ner":"O"
- },
- {
- "id":62,
- "orth":"malicious",
- "ner":"O"
- },
- {
- "id":63,
- "orth":"fingers",
- "ner":"O"
- },
- {
- "id":64,
- "orth":"off",
- "ner":"O"
- },
- {
- "id":65,
- "orth":"of",
- "ner":"O"
- },
- {
- "id":66,
- "orth":"my",
- "ner":"O"
- },
- {
- "id":67,
- "orth":"above",
- "ner":"O"
- },
- {
- "id":68,
- "orth":"comment",
- "ner":"O"
- },
- {
- "id":69,
- "orth":",",
- "ner":"O"
- },
- {
- "id":70,
- "orth":".",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- },
- {
- "tokens":[
- {
- "id":71,
- "orth":"Everytime",
- "ner":"O"
- },
- {
- "id":72,
- "orth":"you",
- "ner":"O"
- },
- {
- "id":73,
- "orth":"remove",
- "ner":"O"
- },
- {
- "id":74,
- "orth":"it",
- "ner":"O"
- },
- {
- "id":75,
- "orth":",",
- "ner":"O"
- },
- {
- "id":76,
- "orth":"I",
- "ner":"O"
- },
- {
- "id":77,
- "orth":"will",
- "ner":"O"
- },
- {
- "id":78,
- "orth":"repost",
- "ner":"O"
- },
- {
- "id":79,
- "orth":"it",
- "ner":"O"
- },
- {
- "id":80,
- "orth":"!",
- "ner":"O"
- },
- {
- "id":81,
- "orth":"!",
- "ner":"O"
- },
- {
- "id":82,
- "orth":"!",
- "ner":"O"
- }
- ],
- "brackets":[
-
- ]
- }
- ],
- "cats":[
- {
- "label":"insult",
- "value":0
- },
- {
- "label":"obscene",
- "value":0
- },
- {
- "label":"severe_toxic",
- "value":0
- },
- {
- "label":"toxic",
- "value":1
- }
- ]
- }
- ]
- }
-]
\ No newline at end of file
diff --git a/examples/training/textcat_example_data/jigsaw-toxic-comment.jsonl b/examples/training/textcat_example_data/jigsaw-toxic-comment.jsonl
deleted file mode 100644
index ac31b6255..000000000
--- a/examples/training/textcat_example_data/jigsaw-toxic-comment.jsonl
+++ /dev/null
@@ -1,10 +0,0 @@
-{"meta": {"id": "0000997932d777bf"}, "text": "Explanation\nWhy the edits made under my username Hardcore Metallica Fan were reverted? They weren't vandalisms, just closure on some GAs after I voted at New York Dolls FAC. And please don't remove the template from the talk page since I'm retired now.89.205.38.27", "cats": {"insult": 0, "obscene": 0, "severe_toxic": 0, "toxic": 0}}
-{"meta": {"id": "001956c382006abd"}, "text": "I'm Sorry \n\nI'm sorry I screwed around with someones talk page. It was very bad to do. I know how having the templates on their talk page helps you assert your dominance over them. I know I should bow down to the almighty administrators. But then again, I'm going to go play outside....with your mom. 76.122.79.82", "cats": {"insult": 0, "obscene": 0, "severe_toxic": 0, "toxic": 1}}
-{"meta": {"id": "0020e7119b96eeeb"}, "text": "Stupid peace of shit stop deleting my stuff asshole go die and fall in a hole go to hell!", "cats": {"insult": 1, "obscene": 1, "severe_toxic": 1, "toxic": 1}}
-{"meta": {"id": "000103f0d9cfb60f"}, "text": "D'aww! He matches this background colour I'm seemingly stuck with. Thanks. (talk) 21:51, January 11, 2016 (UTC)", "cats": {"insult": 0, "obscene": 0, "severe_toxic": 0, "toxic": 0}}
-{"meta": {"id": "001dc38a83d420cf"}, "text": "GET FUCKED UP. GET FUCKEEED UP. GOT A DRINK THAT YOU CANT PUT DOWN???/ GET FUCK UP GET FUCKED UP. I'M FUCKED UP RIGHT NOW!", "cats": {"insult": 0, "obscene": 1, "severe_toxic": 0, "toxic": 1}}
-{"meta": {"id": "000113f07ec002fd"}, "text": "Hey man, I'm really not trying to edit war. It's just that this guy is constantly removing relevant information and talking to me through edits instead of my talk page. He seems to care more about the formatting than the actual info.", "cats": {"insult": 0, "obscene": 0, "severe_toxic": 0, "toxic": 0}}
-{"meta": {"id": "0001b41b1c6bb37e"}, "text": "\"\nMore\nI can't make any real suggestions on improvement - I wondered if the section statistics should be later on, or a subsection of \"\"types of accidents\"\" -I think the references may need tidying so that they are all in the exact same format ie date format etc. I can do that later on, if no-one else does first - if you have any preferences for formatting style on references or want to do it yourself please let me know.\n\nThere appears to be a backlog on articles for review so I guess there may be a delay until a reviewer turns up. It's listed in the relevant form eg Wikipedia:Good_article_nominations#Transport \"", "cats": {"insult": 0, "obscene": 0, "severe_toxic": 0, "toxic": 0}}
-{"meta": {"id": "0001d958c54c6e35"}, "text": "You, sir, are my hero. Any chance you remember what page that's on?", "cats": {"insult": 0, "obscene": 0, "severe_toxic": 0, "toxic": 0}}
-{"meta": {"id": "00025465d4725e87"}, "text": "\"\n\nCongratulations from me as well, use the tools well. · talk \"", "cats": {"insult": 0, "obscene": 0, "severe_toxic": 0, "toxic": 0}}
-{"meta": {"id": "002264ea4d5f2887"}, "text": "Why can't you believe how fat Artie is? Did you see him on his recent appearence on the Tonight Show with Jay Leno? He looks absolutely AWFUL! If I had to put money on it, I'd say that Artie Lange is a can't miss candidate for the 2007 Dead pool! \n\n \nKindly keep your malicious fingers off of my above comment, . Everytime you remove it, I will repost it!!!", "cats": {"insult": 0, "obscene": 0, "severe_toxic": 0, "toxic": 1}}
diff --git a/examples/training/textcat_example_data/textcatjsonl_to_trainjson.py b/examples/training/textcat_example_data/textcatjsonl_to_trainjson.py
deleted file mode 100644
index 339ce39be..000000000
--- a/examples/training/textcat_example_data/textcatjsonl_to_trainjson.py
+++ /dev/null
@@ -1,53 +0,0 @@
-from pathlib import Path
-import plac
-import spacy
-from spacy.gold import docs_to_json
-import srsly
-import sys
-
-@plac.annotations(
- model=("Model name. Defaults to 'en'.", "option", "m", str),
- input_file=("Input file (jsonl)", "positional", None, Path),
- output_dir=("Output directory", "positional", None, Path),
- n_texts=("Number of texts to convert", "option", "t", int),
-)
-def convert(model='en', input_file=None, output_dir=None, n_texts=0):
- # Load model with tokenizer + sentencizer only
- nlp = spacy.load(model)
- nlp.disable_pipes(*nlp.pipe_names)
- sentencizer = nlp.create_pipe("sentencizer")
- nlp.add_pipe(sentencizer, first=True)
-
- texts = []
- cats = []
- count = 0
-
- if not input_file.exists():
- print("Input file not found:", input_file)
- sys.exit(1)
- else:
- with open(input_file) as fileh:
- for line in fileh:
- data = srsly.json_loads(line)
- texts.append(data["text"])
- cats.append(data["cats"])
-
- if output_dir is not None:
- output_dir = Path(output_dir)
- if not output_dir.exists():
- output_dir.mkdir()
- else:
- output_dir = Path(".")
-
- docs = []
- for i, doc in enumerate(nlp.pipe(texts)):
- doc.cats = cats[i]
- docs.append(doc)
- if n_texts > 0 and count == n_texts:
- break
- count += 1
-
- srsly.write_json(output_dir / input_file.with_suffix(".json"), [docs_to_json(docs)])
-
-if __name__ == "__main__":
- plac.call(convert)
diff --git a/examples/training/train_entity_linker.py b/examples/training/train_entity_linker.py
index d2b2c2417..12ed531a6 100644
--- a/examples/training/train_entity_linker.py
+++ b/examples/training/train_entity_linker.py
@@ -8,8 +8,8 @@ For more details, see the documentation:
* Training: https://spacy.io/usage/training
* Entity Linking: https://spacy.io/usage/linguistic-features#entity-linking
-Compatible with: spaCy v2.2
-Last tested with: v2.2
+Compatible with: spaCy vX.X
+Last tested with: vX.X
"""
from __future__ import unicode_literals, print_function
diff --git a/examples/training/training-data.json b/examples/training/training-data.json
index 1f57e1fd9..2565ce149 100644
--- a/examples/training/training-data.json
+++ b/examples/training/training-data.json
@@ -8,7 +8,7 @@
{
"tokens": [
{
- "head": 4,
+ "head": 44,
"dep": "prep",
"tag": "IN",
"orth": "In",
diff --git a/fabfile.py b/fabfile.py
index 56570e8e0..0e69551c3 100644
--- a/fabfile.py
+++ b/fabfile.py
@@ -10,145 +10,113 @@ import sys
PWD = path.dirname(__file__)
-ENV = environ["VENV_DIR"] if "VENV_DIR" in environ else ".env"
+ENV = environ['VENV_DIR'] if 'VENV_DIR' in environ else '.env'
VENV_DIR = Path(PWD) / ENV
@contextlib.contextmanager
-def virtualenv(name, create=False, python="/usr/bin/python3.6"):
+def virtualenv(name, create=False, python='/usr/bin/python3.6'):
python = Path(python).resolve()
env_path = VENV_DIR
if create:
if env_path.exists():
shutil.rmtree(str(env_path))
- local("{python} -m venv {env_path}".format(python=python, env_path=VENV_DIR))
-
+ local('{python} -m venv {env_path}'.format(python=python, env_path=VENV_DIR))
def wrapped_local(cmd, env_vars=[], capture=False, direct=False):
- return local(
- "source {}/bin/activate && {}".format(env_path, cmd),
- shell="/bin/bash",
- capture=False,
- )
-
+ return local('source {}/bin/activate && {}'.format(env_path, cmd),
+ shell='/bin/bash', capture=False)
yield wrapped_local
-def env(lang="python3.6"):
+def env(lang='python3.6'):
if VENV_DIR.exists():
- local("rm -rf {env}".format(env=VENV_DIR))
- if lang.startswith("python3"):
- local("{lang} -m venv {env}".format(lang=lang, env=VENV_DIR))
+ local('rm -rf {env}'.format(env=VENV_DIR))
+ if lang.startswith('python3'):
+ local('{lang} -m venv {env}'.format(lang=lang, env=VENV_DIR))
else:
- local("{lang} -m pip install virtualenv --no-cache-dir".format(lang=lang))
- local(
- "{lang} -m virtualenv {env} --no-cache-dir".format(lang=lang, env=VENV_DIR)
- )
+ local('{lang} -m pip install virtualenv --no-cache-dir'.format(lang=lang))
+ local('{lang} -m virtualenv {env} --no-cache-dir'.format(lang=lang, env=VENV_DIR))
with virtualenv(VENV_DIR) as venv_local:
- print(venv_local("python --version", capture=True))
- venv_local("pip install --upgrade setuptools --no-cache-dir")
- venv_local("pip install pytest --no-cache-dir")
- venv_local("pip install wheel --no-cache-dir")
- venv_local("pip install -r requirements.txt --no-cache-dir")
- venv_local("pip install pex --no-cache-dir")
+ print(venv_local('python --version', capture=True))
+ venv_local('pip install --upgrade setuptools --no-cache-dir')
+ venv_local('pip install pytest --no-cache-dir')
+ venv_local('pip install wheel --no-cache-dir')
+ venv_local('pip install -r requirements.txt --no-cache-dir')
+ venv_local('pip install pex --no-cache-dir')
+
def install():
with virtualenv(VENV_DIR) as venv_local:
- venv_local("pip install dist/*.tar.gz")
+ venv_local('pip install dist/*.tar.gz')
def make():
with lcd(path.dirname(__file__)):
- local(
- "export PYTHONPATH=`pwd` && source .env/bin/activate && python setup.py build_ext --inplace",
- shell="/bin/bash",
- )
-
+ local('export PYTHONPATH=`pwd` && source .env/bin/activate && python setup.py build_ext --inplace',
+ shell='/bin/bash')
def sdist():
with virtualenv(VENV_DIR) as venv_local:
with lcd(path.dirname(__file__)):
- local("python -m pip install -U setuptools srsly")
- local("python setup.py sdist")
-
+ local('python -m pip install -U setuptools')
+ local('python setup.py sdist')
def wheel():
with virtualenv(VENV_DIR) as venv_local:
with lcd(path.dirname(__file__)):
- venv_local("python setup.py bdist_wheel")
-
+ venv_local('python setup.py bdist_wheel')
def pex():
with virtualenv(VENV_DIR) as venv_local:
with lcd(path.dirname(__file__)):
- sha = local("git rev-parse --short HEAD", capture=True)
- venv_local(
- "pex dist/*.whl -e spacy -o dist/spacy-%s.pex" % sha, direct=True
- )
+ sha = local('git rev-parse --short HEAD', capture=True)
+ venv_local('pex dist/*.whl -e spacy -o dist/spacy-%s.pex' % sha,
+ direct=True)
def clean():
with lcd(path.dirname(__file__)):
- local("rm -f dist/*.whl")
- local("rm -f dist/*.pex")
+ local('rm -f dist/*.whl')
+ local('rm -f dist/*.pex')
with virtualenv(VENV_DIR) as venv_local:
- venv_local("python setup.py clean --all")
+ venv_local('python setup.py clean --all')
def test():
with virtualenv(VENV_DIR) as venv_local:
with lcd(path.dirname(__file__)):
- venv_local("pytest -x spacy/tests")
-
+ venv_local('pytest -x spacy/tests')
def train():
- args = environ.get("SPACY_TRAIN_ARGS", "")
+ args = environ.get('SPACY_TRAIN_ARGS', '')
with virtualenv(VENV_DIR) as venv_local:
- venv_local("spacy train {args}".format(args=args))
+ venv_local('spacy train {args}'.format(args=args))
-def conll17(treebank_dir, experiment_dir, vectors_dir, config, corpus=""):
- is_not_clean = local("git status --porcelain", capture=True)
+def conll17(treebank_dir, experiment_dir, vectors_dir, config, corpus=''):
+ is_not_clean = local('git status --porcelain', capture=True)
if is_not_clean:
print("Repository is not clean")
print(is_not_clean)
sys.exit(1)
- git_sha = local("git rev-parse --short HEAD", capture=True)
- config_checksum = local("sha256sum {config}".format(config=config), capture=True)
- experiment_dir = Path(experiment_dir) / "{}--{}".format(
- config_checksum[:6], git_sha
- )
+ git_sha = local('git rev-parse --short HEAD', capture=True)
+ config_checksum = local('sha256sum {config}'.format(config=config), capture=True)
+ experiment_dir = Path(experiment_dir) / '{}--{}'.format(config_checksum[:6], git_sha)
if not experiment_dir.exists():
experiment_dir.mkdir()
- test_data_dir = Path(treebank_dir) / "ud-test-v2.0-conll2017"
+ test_data_dir = Path(treebank_dir) / 'ud-test-v2.0-conll2017'
assert test_data_dir.exists()
assert test_data_dir.is_dir()
if corpus:
corpora = [corpus]
else:
- corpora = ["UD_English", "UD_Chinese", "UD_Japanese", "UD_Vietnamese"]
+ corpora = ['UD_English', 'UD_Chinese', 'UD_Japanese', 'UD_Vietnamese']
- local(
- "cp {config} {experiment_dir}/config.json".format(
- config=config, experiment_dir=experiment_dir
- )
- )
+ local('cp {config} {experiment_dir}/config.json'.format(config=config, experiment_dir=experiment_dir))
with virtualenv(VENV_DIR) as venv_local:
for corpus in corpora:
- venv_local(
- "spacy ud-train {treebank_dir} {experiment_dir} {config} {corpus} -v {vectors_dir}".format(
- treebank_dir=treebank_dir,
- experiment_dir=experiment_dir,
- config=config,
- corpus=corpus,
- vectors_dir=vectors_dir,
- )
- )
- venv_local(
- "spacy ud-run-test {test_data_dir} {experiment_dir} {corpus}".format(
- test_data_dir=test_data_dir,
- experiment_dir=experiment_dir,
- config=config,
- corpus=corpus,
- )
- )
+ venv_local('spacy ud-train {treebank_dir} {experiment_dir} {config} {corpus} -v {vectors_dir}'.format(
+ treebank_dir=treebank_dir, experiment_dir=experiment_dir, config=config, corpus=corpus, vectors_dir=vectors_dir))
+ venv_local('spacy ud-run-test {test_data_dir} {experiment_dir} {corpus}'.format(
+ test_data_dir=test_data_dir, experiment_dir=experiment_dir, config=config, corpus=corpus))
diff --git a/requirements.txt b/requirements.txt
index ebe660b97..a6d721e96 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,8 +1,8 @@
# Our libraries
cymem>=2.0.2,<2.1.0
-preshed>=3.0.2,<3.1.0
-thinc>=7.1.1,<7.2.0
-blis>=0.4.0,<0.5.0
+preshed>=2.0.1,<2.1.0
+thinc>=7.0.8,<7.1.0
+blis>=0.2.2,<0.3.0
murmurhash>=0.28.0,<1.1.0
wasabi>=0.2.0,<1.1.0
srsly>=0.1.0,<1.1.0
diff --git a/setup.py b/setup.py
index abe3fb509..984de2250 100755
--- a/setup.py
+++ b/setup.py
@@ -27,7 +27,7 @@ def is_new_osx():
return False
-PACKAGE_DATA = {"": ["*.pyx", "*.pxd", "*.txt", "*.tokens", "*.json", "*.json.gz"]}
+PACKAGE_DATA = {"": ["*.pyx", "*.pxd", "*.txt", "*.tokens", "*.json"]}
PACKAGES = find_packages()
@@ -43,7 +43,6 @@ MOD_NAMES = [
"spacy.kb",
"spacy.morphology",
"spacy.pipeline.pipes",
- "spacy.pipeline.morphologizer",
"spacy.syntax.stateclass",
"spacy.syntax._state",
"spacy.tokenizer",
@@ -57,7 +56,6 @@ MOD_NAMES = [
"spacy.tokens.doc",
"spacy.tokens.span",
"spacy.tokens.token",
- "spacy.tokens.morphanalysis",
"spacy.tokens._retokenize",
"spacy.matcher.matcher",
"spacy.matcher.phrasematcher",
@@ -247,9 +245,9 @@ def setup_package():
"numpy>=1.15.0",
"murmurhash>=0.28.0,<1.1.0",
"cymem>=2.0.2,<2.1.0",
- "preshed>=3.0.2,<3.1.0",
- "thinc>=7.1.1,<7.2.0",
- "blis>=0.4.0,<0.5.0",
+ "preshed>=2.0.1,<2.1.0",
+ "thinc>=7.0.8,<7.1.0",
+ "blis>=0.2.2,<0.3.0",
"plac<1.0.0,>=0.9.6",
"requests>=2.13.0,<3.0.0",
"wasabi>=0.2.0,<1.1.0",
@@ -283,6 +281,7 @@ def setup_package():
"Programming Language :: Python :: 2",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.4",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
diff --git a/spacy/_ml.py b/spacy/_ml.py
index 6104324ab..660d20c46 100644
--- a/spacy/_ml.py
+++ b/spacy/_ml.py
@@ -15,7 +15,7 @@ from thinc.api import uniqued, wrap, noop
from thinc.api import with_square_sequences
from thinc.linear.linear import LinearModel
from thinc.neural.ops import NumpyOps, CupyOps
-from thinc.neural.util import get_array_module, copy_array
+from thinc.neural.util import get_array_module
from thinc.neural.optimizers import Adam
from thinc import describe
@@ -286,7 +286,10 @@ def link_vectors_to_models(vocab):
if vectors.name is None:
vectors.name = VECTORS_KEY
if vectors.data.size != 0:
- user_warning(Warnings.W020.format(shape=vectors.data.shape))
+ print(
+ "Warning: Unnamed vectors -- this won't allow multiple vectors "
+ "models to be loaded. (Shape: (%d, %d))" % vectors.data.shape
+ )
ops = Model.ops
for word in vocab:
if word.orth in vectors.key2row:
@@ -320,9 +323,6 @@ def Tok2Vec(width, embed_size, **kwargs):
pretrained_vectors = kwargs.get("pretrained_vectors", None)
cnn_maxout_pieces = kwargs.get("cnn_maxout_pieces", 3)
subword_features = kwargs.get("subword_features", True)
- char_embed = kwargs.get("char_embed", False)
- if char_embed:
- subword_features = False
conv_depth = kwargs.get("conv_depth", 4)
bilstm_depth = kwargs.get("bilstm_depth", 0)
cols = [ID, NORM, PREFIX, SUFFIX, SHAPE, ORTH]
@@ -362,14 +362,6 @@ def Tok2Vec(width, embed_size, **kwargs):
>> LN(Maxout(width, width * 4, pieces=3)),
column=cols.index(ORTH),
)
- elif char_embed:
- embed = concatenate_lists(
- CharacterEmbed(nM=64, nC=8),
- FeatureExtracter(cols) >> with_flatten(norm),
- )
- reduce_dimensions = LN(
- Maxout(width, 64 * 8 + width, pieces=cnn_maxout_pieces)
- )
else:
embed = norm
@@ -377,15 +369,9 @@ def Tok2Vec(width, embed_size, **kwargs):
ExtractWindow(nW=1)
>> LN(Maxout(width, width * 3, pieces=cnn_maxout_pieces))
)
- if char_embed:
- tok2vec = embed >> with_flatten(
- reduce_dimensions >> convolution ** conv_depth, pad=conv_depth
- )
- else:
- tok2vec = FeatureExtracter(cols) >> with_flatten(
- embed >> convolution ** conv_depth, pad=conv_depth
- )
-
+ tok2vec = FeatureExtracter(cols) >> with_flatten(
+ embed >> convolution ** conv_depth, pad=conv_depth
+ )
if bilstm_depth >= 1:
tok2vec = tok2vec >> PyTorchBiLSTM(width, width, bilstm_depth)
# Work around thinc API limitations :(. TODO: Revise in Thinc 7
@@ -518,46 +504,6 @@ def getitem(i):
return layerize(getitem_fwd)
-@describe.attributes(
- W=Synapses("Weights matrix", lambda obj: (obj.nO, obj.nI), lambda W, ops: None)
-)
-class MultiSoftmax(Affine):
- """Neural network layer that predicts several multi-class attributes at once.
- For instance, we might predict one class with 6 variables, and another with 5.
- We predict the 11 neurons required for this, and then softmax them such
- that columns 0-6 make a probability distribution and coumns 6-11 make another.
- """
-
- name = "multisoftmax"
-
- def __init__(self, out_sizes, nI=None, **kwargs):
- Model.__init__(self, **kwargs)
- self.out_sizes = out_sizes
- self.nO = sum(out_sizes)
- self.nI = nI
-
- def predict(self, input__BI):
- output__BO = self.ops.affine(self.W, self.b, input__BI)
- i = 0
- for out_size in self.out_sizes:
- self.ops.softmax(output__BO[:, i : i + out_size], inplace=True)
- i += out_size
- return output__BO
-
- def begin_update(self, input__BI, drop=0.0):
- output__BO = self.predict(input__BI)
-
- def finish_update(grad__BO, sgd=None):
- self.d_W += self.ops.gemm(grad__BO, input__BI, trans1=True)
- self.d_b += grad__BO.sum(axis=0)
- grad__BI = self.ops.gemm(grad__BO, self.W)
- if sgd is not None:
- sgd(self._mem.weights, self._mem.gradient, key=self.id)
- return grad__BI
-
- return output__BO, finish_update
-
-
def build_tagger_model(nr_class, **cfg):
embed_size = util.env_opt("embed_size", 2000)
if "token_vector_width" in cfg:
@@ -584,33 +530,6 @@ def build_tagger_model(nr_class, **cfg):
return model
-def build_morphologizer_model(class_nums, **cfg):
- embed_size = util.env_opt("embed_size", 7000)
- if "token_vector_width" in cfg:
- token_vector_width = cfg["token_vector_width"]
- else:
- token_vector_width = util.env_opt("token_vector_width", 128)
- pretrained_vectors = cfg.get("pretrained_vectors")
- char_embed = cfg.get("char_embed", True)
- with Model.define_operators({">>": chain, "+": add, "**": clone}):
- if "tok2vec" in cfg:
- tok2vec = cfg["tok2vec"]
- else:
- tok2vec = Tok2Vec(
- token_vector_width,
- embed_size,
- char_embed=char_embed,
- pretrained_vectors=pretrained_vectors,
- )
- softmax = with_flatten(MultiSoftmax(class_nums, token_vector_width))
- softmax.out_sizes = class_nums
- model = tok2vec >> softmax
- model.nI = None
- model.tok2vec = tok2vec
- model.softmax = softmax
- return model
-
-
@layerize
def SpacyVectors(docs, drop=0.0):
batch = []
@@ -801,8 +720,7 @@ def concatenate_lists(*layers, **kwargs): # pragma: no cover
concat = concatenate(*layers)
def concatenate_lists_fwd(Xs, drop=0.0):
- if drop is not None:
- drop *= drop_factor
+ drop *= drop_factor
lengths = ops.asarray([len(X) for X in Xs], dtype="i")
flat_y, bp_flat_y = concat.begin_update(Xs, drop=drop)
ys = ops.unflatten(flat_y, lengths)
@@ -892,67 +810,6 @@ def _replace_word(word, random_words, mask="[MASK]"):
return word
-def _uniform_init(lo, hi):
- def wrapped(W, ops):
- copy_array(W, ops.xp.random.uniform(lo, hi, W.shape))
-
- return wrapped
-
-
-@describe.attributes(
- nM=Dimension("Vector dimensions"),
- nC=Dimension("Number of characters per word"),
- vectors=Synapses(
- "Embed matrix", lambda obj: (obj.nC, obj.nV, obj.nM), _uniform_init(-0.1, 0.1)
- ),
- d_vectors=Gradient("vectors"),
-)
-class CharacterEmbed(Model):
- def __init__(self, nM=None, nC=None, **kwargs):
- Model.__init__(self, **kwargs)
- self.nM = nM
- self.nC = nC
-
- @property
- def nO(self):
- return self.nM * self.nC
-
- @property
- def nV(self):
- return 256
-
- def begin_update(self, docs, drop=0.0):
- if not docs:
- return []
- ids = []
- output = []
- weights = self.vectors
- # This assists in indexing; it's like looping over this dimension.
- # Still consider this weird witch craft...But thanks to Mark Neumann
- # for the tip.
- nCv = self.ops.xp.arange(self.nC)
- for doc in docs:
- doc_ids = doc.to_utf8_array(nr_char=self.nC)
- doc_vectors = self.ops.allocate((len(doc), self.nC, self.nM))
- # Let's say I have a 2d array of indices, and a 3d table of data. What numpy
- # incantation do I chant to get
- # output[i, j, k] == data[j, ids[i, j], k]?
- doc_vectors[:, nCv] = weights[nCv, doc_ids[:, nCv]]
- output.append(doc_vectors.reshape((len(doc), self.nO)))
- ids.append(doc_ids)
-
- def backprop_character_embed(d_vectors, sgd=None):
- gradient = self.d_vectors
- for doc_ids, d_doc_vectors in zip(ids, d_vectors):
- d_doc_vectors = d_doc_vectors.reshape((len(doc_ids), self.nC, self.nM))
- gradient[nCv, doc_ids[:, nCv]] += d_doc_vectors[:, nCv]
- if sgd is not None:
- sgd(self._mem.weights, self._mem.gradient, key=self.id)
- return None
-
- return output, backprop_character_embed
-
-
def get_cossim_loss(yh, y):
# Add a small constant to avoid 0 vectors
yh = yh + 1e-8
diff --git a/spacy/about.py b/spacy/about.py
index 7bb8e7ead..9587c9071 100644
--- a/spacy/about.py
+++ b/spacy/about.py
@@ -1,12 +1,16 @@
+# inspired by:
+# https://python-packaging-user-guide.readthedocs.org/en/latest/single_source_version/
+# https://github.com/pypa/warehouse/blob/master/warehouse/__about__.py
# fmt: off
+
__title__ = "spacy"
-__version__ = "2.2.0.dev15"
-__summary__ = "Industrial-strength Natural Language Processing (NLP) in Python"
+__version__ = "2.1.8"
+__summary__ = "Industrial-strength Natural Language Processing (NLP) with Python and Cython"
__uri__ = "https://spacy.io"
-__author__ = "Explosion"
+__author__ = "Explosion AI"
__email__ = "contact@explosion.ai"
__license__ = "MIT"
-__release__ = False
+__release__ = True
__download_url__ = "https://github.com/explosion/spacy-models/releases/download"
__compatibility__ = "https://raw.githubusercontent.com/explosion/spacy-models/master/compatibility.json"
diff --git a/spacy/attrs.pyx b/spacy/attrs.pyx
index 40236630a..8eeea363f 100644
--- a/spacy/attrs.pyx
+++ b/spacy/attrs.pyx
@@ -144,12 +144,8 @@ def intify_attrs(stringy_attrs, strings_map=None, _do_deprecated=False):
for name, value in stringy_attrs.items():
if isinstance(name, int):
int_key = name
- elif name in IDS:
- int_key = IDS[name]
- elif name.upper() in IDS:
- int_key = IDS[name.upper()]
else:
- continue
+ int_key = IDS[name.upper()]
if strings_map is not None and isinstance(value, basestring):
if hasattr(strings_map, 'add'):
value = strings_map.add(value)
diff --git a/spacy/cli/debug_data.py b/spacy/cli/debug_data.py
index b649e6666..0a9a0f7ef 100644
--- a/spacy/cli/debug_data.py
+++ b/spacy/cli/debug_data.py
@@ -34,6 +34,12 @@ BLANK_MODEL_THRESHOLD = 2000
str,
),
ignore_warnings=("Ignore warnings, only show stats and errors", "flag", "IW", bool),
+ ignore_validation=(
+ "Don't exit if JSON format validation fails",
+ "flag",
+ "IV",
+ bool,
+ ),
verbose=("Print additional information and explanations", "flag", "V", bool),
no_format=("Don't pretty-print the results", "flag", "NF", bool),
)
@@ -44,14 +50,10 @@ def debug_data(
base_model=None,
pipeline="tagger,parser,ner",
ignore_warnings=False,
+ ignore_validation=False,
verbose=False,
no_format=False,
):
- """
- Analyze, debug and validate your training and development data, get useful
- stats, and find problems like invalid entity annotations, cyclic
- dependencies, low data labels and more.
- """
msg = Printer(pretty=not no_format, ignore_warnings=ignore_warnings)
# Make sure all files and paths exists if they are needed
@@ -70,9 +72,21 @@ def debug_data(
msg.divider("Data format validation")
- # TODO: Validate data format using the JSON schema
+ # Validate data format using the JSON schema
# TODO: update once the new format is ready
# TODO: move validation to GoldCorpus in order to be able to load from dir
+ train_data_errors = [] # TODO: validate_json
+ dev_data_errors = [] # TODO: validate_json
+ if not train_data_errors:
+ msg.good("Training data JSON format is valid")
+ if not dev_data_errors:
+ msg.good("Development data JSON format is valid")
+ for error in train_data_errors:
+ msg.fail("Training data: {}".format(error))
+ for error in dev_data_errors:
+ msg.fail("Development data: {}".format(error))
+ if (train_data_errors or dev_data_errors) and not ignore_validation:
+ sys.exit(1)
# Create the gold corpus to be able to better analyze data
loading_train_error_message = ""
@@ -270,7 +284,7 @@ def debug_data(
if "textcat" in pipeline:
msg.divider("Text Classification")
- labels = [label for label in gold_train_data["cats"]]
+ labels = [label for label in gold_train_data["textcat"]]
model_labels = _get_labels_from_model(nlp, "textcat")
new_labels = [l for l in labels if l not in model_labels]
existing_labels = [l for l in labels if l in model_labels]
@@ -281,45 +295,13 @@ def debug_data(
)
if new_labels:
labels_with_counts = _format_labels(
- gold_train_data["cats"].most_common(), counts=True
+ gold_train_data["textcat"].most_common(), counts=True
)
msg.text("New: {}".format(labels_with_counts), show=verbose)
if existing_labels:
msg.text(
"Existing: {}".format(_format_labels(existing_labels)), show=verbose
)
- if set(gold_train_data["cats"]) != set(gold_dev_data["cats"]):
- msg.fail(
- "The train and dev labels are not the same. "
- "Train labels: {}. "
- "Dev labels: {}.".format(
- _format_labels(gold_train_data["cats"]),
- _format_labels(gold_dev_data["cats"]),
- )
- )
- if gold_train_data["n_cats_multilabel"] > 0:
- msg.info(
- "The train data contains instances without "
- "mutually-exclusive classes. Use '--textcat-multilabel' "
- "when training."
- )
- if gold_dev_data["n_cats_multilabel"] == 0:
- msg.warn(
- "Potential train/dev mismatch: the train data contains "
- "instances without mutually-exclusive classes while the "
- "dev data does not."
- )
- else:
- msg.info(
- "The train data contains only instances with "
- "mutually-exclusive classes."
- )
- if gold_dev_data["n_cats_multilabel"] > 0:
- msg.fail(
- "Train/dev mismatch: the dev data contains instances "
- "without mutually-exclusive classes while the train data "
- "contains only instances with mutually-exclusive classes."
- )
if "tagger" in pipeline:
msg.divider("Part-of-speech Tagging")
@@ -348,7 +330,6 @@ def debug_data(
)
if "parser" in pipeline:
- has_low_data_warning = False
msg.divider("Dependency Parsing")
# profile sentence length
@@ -537,7 +518,6 @@ def _compile_gold(train_docs, pipeline):
"n_sents": 0,
"n_nonproj": 0,
"n_cycles": 0,
- "n_cats_multilabel": 0,
"texts": set(),
}
for doc, gold in train_docs:
@@ -560,8 +540,6 @@ def _compile_gold(train_docs, pipeline):
data["ner"]["-"] += 1
if "textcat" in pipeline:
data["cats"].update(gold.cats)
- if list(gold.cats.values()).count(1.0) != 1:
- data["n_cats_multilabel"] += 1
if "tagger" in pipeline:
data["tags"].update([x for x in gold.tags if x is not None])
if "parser" in pipeline:
diff --git a/spacy/cli/download.py b/spacy/cli/download.py
index 64ab03a75..8a993178a 100644
--- a/spacy/cli/download.py
+++ b/spacy/cli/download.py
@@ -28,16 +28,6 @@ def download(model, direct=False, *pip_args):
can be shortcut, model name or, if --direct flag is set, full model name
with version. For direct downloads, the compatibility check will be skipped.
"""
- if not require_package("spacy") and "--no-deps" not in pip_args:
- msg.warn(
- "Skipping model package dependencies and setting `--no-deps`. "
- "You don't seem to have the spaCy package itself installed "
- "(maybe because you've built from source?), so installing the "
- "model dependencies would cause spaCy to be downloaded, which "
- "probably isn't what you want. If the model package has other "
- "dependencies, you'll have to install them manually."
- )
- pip_args = pip_args + ("--no-deps",)
dl_tpl = "{m}-{v}/{m}-{v}.tar.gz#egg={m}=={v}"
if direct:
components = model.split("-")
@@ -82,15 +72,12 @@ def download(model, direct=False, *pip_args):
# is_package check currently fails, because pkg_resources.working_set
# is not refreshed automatically (see #3923). We're trying to work
# around this here be requiring the package explicitly.
- require_package(model_name)
-
-
-def require_package(name):
- try:
- pkg_resources.working_set.require(name)
- return True
- except: # noqa: E722
- return False
+ try:
+ pkg_resources.working_set.require(model_name)
+ except: # noqa: E722
+ # Maybe it's possible to remove this – mostly worried about cross-
+ # platform and cross-Python compatibility here
+ pass
def get_json(url, desc):
@@ -130,7 +117,7 @@ def get_version(model, comp):
def download_model(filename, user_pip_args=None):
download_url = about.__download_url__ + "/" + filename
- pip_args = ["--no-cache-dir"]
+ pip_args = ["--no-cache-dir", "--no-deps"]
if user_pip_args:
pip_args.extend(user_pip_args)
cmd = [sys.executable, "-m", "pip", "install"] + pip_args + [download_url]
diff --git a/spacy/cli/evaluate.py b/spacy/cli/evaluate.py
index 1114ada08..0a57ef2da 100644
--- a/spacy/cli/evaluate.py
+++ b/spacy/cli/evaluate.py
@@ -61,7 +61,6 @@ def evaluate(
"NER P": "%.2f" % scorer.ents_p,
"NER R": "%.2f" % scorer.ents_r,
"NER F": "%.2f" % scorer.ents_f,
- "Textcat": "%.2f" % scorer.textcat_score,
}
msg.table(results, title="Results")
diff --git a/spacy/cli/init_model.py b/spacy/cli/init_model.py
index c285a12a6..955b420aa 100644
--- a/spacy/cli/init_model.py
+++ b/spacy/cli/init_model.py
@@ -35,13 +35,6 @@ msg = Printer()
clusters_loc=("Optional location of brown clusters data", "option", "c", str),
vectors_loc=("Optional vectors file in Word2Vec format", "option", "v", str),
prune_vectors=("Optional number of vectors to prune to", "option", "V", int),
- vectors_name=(
- "Optional name for the word vectors, e.g. en_core_web_lg.vectors",
- "option",
- "vn",
- str,
- ),
- model_name=("Optional name for the model meta", "option", "mn", str),
)
def init_model(
lang,
@@ -51,8 +44,6 @@ def init_model(
jsonl_loc=None,
vectors_loc=None,
prune_vectors=-1,
- vectors_name=None,
- model_name=None,
):
"""
Create a new model from raw data, like word frequencies, Brown clusters
@@ -84,10 +75,10 @@ def init_model(
lex_attrs = read_attrs_from_deprecated(freqs_loc, clusters_loc)
with msg.loading("Creating model..."):
- nlp = create_model(lang, lex_attrs, name=model_name)
+ nlp = create_model(lang, lex_attrs)
msg.good("Successfully created model")
if vectors_loc is not None:
- add_vectors(nlp, vectors_loc, prune_vectors, vectors_name)
+ add_vectors(nlp, vectors_loc, prune_vectors)
vec_added = len(nlp.vocab.vectors)
lex_added = len(nlp.vocab)
msg.good(
@@ -147,7 +138,7 @@ def read_attrs_from_deprecated(freqs_loc, clusters_loc):
return lex_attrs
-def create_model(lang, lex_attrs, name=None):
+def create_model(lang, lex_attrs):
lang_class = get_lang_class(lang)
nlp = lang_class()
for lexeme in nlp.vocab:
@@ -166,12 +157,10 @@ def create_model(lang, lex_attrs, name=None):
else:
oov_prob = DEFAULT_OOV_PROB
nlp.vocab.cfg.update({"oov_prob": oov_prob})
- if name:
- nlp.meta["name"] = name
return nlp
-def add_vectors(nlp, vectors_loc, prune_vectors, name=None):
+def add_vectors(nlp, vectors_loc, prune_vectors):
vectors_loc = ensure_path(vectors_loc)
if vectors_loc and vectors_loc.parts[-1].endswith(".npz"):
nlp.vocab.vectors = Vectors(data=numpy.load(vectors_loc.open("rb")))
@@ -192,10 +181,7 @@ def add_vectors(nlp, vectors_loc, prune_vectors, name=None):
lexeme.is_oov = False
if vectors_data is not None:
nlp.vocab.vectors = Vectors(data=vectors_data, keys=vector_keys)
- if name is None:
- nlp.vocab.vectors.name = "%s_model.vectors" % nlp.meta["lang"]
- else:
- nlp.vocab.vectors.name = name
+ nlp.vocab.vectors.name = "%s_model.vectors" % nlp.meta["lang"]
nlp.meta["vectors"]["name"] = nlp.vocab.vectors.name
if prune_vectors >= 1:
nlp.vocab.prune_vectors(prune_vectors)
diff --git a/spacy/cli/train.py b/spacy/cli/train.py
index 2588a81a2..fe30e1a3c 100644
--- a/spacy/cli/train.py
+++ b/spacy/cli/train.py
@@ -21,35 +21,54 @@ from .. import about
@plac.annotations(
- # fmt: off
lang=("Model language", "positional", None, str),
output_path=("Output directory to store model in", "positional", None, Path),
train_path=("Location of JSON-formatted training data", "positional", None, Path),
dev_path=("Location of JSON-formatted development data", "positional", None, Path),
- raw_text=("Path to jsonl file with unlabelled text documents.", "option", "rt", Path),
+ raw_text=(
+ "Path to jsonl file with unlabelled text documents.",
+ "option",
+ "rt",
+ Path,
+ ),
base_model=("Name of model to update (optional)", "option", "b", str),
pipeline=("Comma-separated names of pipeline components", "option", "p", str),
vectors=("Model to load vectors from", "option", "v", str),
n_iter=("Number of iterations", "option", "n", int),
- n_early_stopping=("Maximum number of training epochs without dev accuracy improvement", "option", "ne", int),
+ n_early_stopping=(
+ "Maximum number of training epochs without dev accuracy improvement",
+ "option",
+ "ne",
+ int,
+ ),
n_examples=("Number of examples", "option", "ns", int),
use_gpu=("Use GPU", "option", "g", int),
version=("Model version", "option", "V", str),
meta_path=("Optional path to meta.json to use as base.", "option", "m", Path),
- init_tok2vec=("Path to pretrained weights for the token-to-vector parts of the models. See 'spacy pretrain'. Experimental.", "option", "t2v", Path),
- parser_multitasks=("Side objectives for parser CNN, e.g. 'dep' or 'dep,tag'", "option", "pt", str),
- entity_multitasks=("Side objectives for NER CNN, e.g. 'dep' or 'dep,tag'", "option", "et", str),
+ init_tok2vec=(
+ "Path to pretrained weights for the token-to-vector parts of the models. See 'spacy pretrain'. Experimental.",
+ "option",
+ "t2v",
+ Path,
+ ),
+ parser_multitasks=(
+ "Side objectives for parser CNN, e.g. 'dep' or 'dep,tag'",
+ "option",
+ "pt",
+ str,
+ ),
+ entity_multitasks=(
+ "Side objectives for NER CNN, e.g. 'dep' or 'dep,tag'",
+ "option",
+ "et",
+ str,
+ ),
noise_level=("Amount of corruption for data augmentation", "option", "nl", float),
- orth_variant_level=("Amount of orthography variation for data augmentation", "option", "ovl", float),
eval_beam_widths=("Beam widths to evaluate, e.g. 4,8", "option", "bw", str),
gold_preproc=("Use gold preprocessing", "flag", "G", bool),
learn_tokens=("Make parser learn gold-standard tokenization", "flag", "T", bool),
- textcat_multilabel=("Textcat classes aren't mutually exclusive (multilabel)", "flag", "TML", bool),
- textcat_arch=("Textcat model architecture", "option", "ta", str),
- textcat_positive_label=("Textcat positive label for binary classes with two labels", "option", "tpl", str),
verbose=("Display more information for debug", "flag", "VV", bool),
debug=("Run data diagnostics before training", "flag", "D", bool),
- # fmt: on
)
def train(
lang,
@@ -70,13 +89,9 @@ def train(
parser_multitasks="",
entity_multitasks="",
noise_level=0.0,
- orth_variant_level=0.0,
eval_beam_widths="",
gold_preproc=False,
learn_tokens=False,
- textcat_multilabel=False,
- textcat_arch="bow",
- textcat_positive_label=None,
verbose=False,
debug=False,
):
@@ -162,37 +177,9 @@ def train(
if pipe not in nlp.pipe_names:
if pipe == "parser":
pipe_cfg = {"learn_tokens": learn_tokens}
- elif pipe == "textcat":
- pipe_cfg = {
- "exclusive_classes": not textcat_multilabel,
- "architecture": textcat_arch,
- "positive_label": textcat_positive_label,
- }
else:
pipe_cfg = {}
nlp.add_pipe(nlp.create_pipe(pipe, config=pipe_cfg))
- else:
- if pipe == "textcat":
- textcat_cfg = nlp.get_pipe("textcat").cfg
- base_cfg = {
- "exclusive_classes": textcat_cfg["exclusive_classes"],
- "architecture": textcat_cfg["architecture"],
- "positive_label": textcat_cfg["positive_label"],
- }
- pipe_cfg = {
- "exclusive_classes": not textcat_multilabel,
- "architecture": textcat_arch,
- "positive_label": textcat_positive_label,
- }
- if base_cfg != pipe_cfg:
- msg.fail(
- "The base textcat model configuration does"
- "not match the provided training options. "
- "Existing cfg: {}, provided cfg: {}".format(
- base_cfg, pipe_cfg
- ),
- exits=1,
- )
else:
msg.text("Starting with blank model '{}'".format(lang))
lang_cls = util.get_lang_class(lang)
@@ -200,12 +187,6 @@ def train(
for pipe in pipeline:
if pipe == "parser":
pipe_cfg = {"learn_tokens": learn_tokens}
- elif pipe == "textcat":
- pipe_cfg = {
- "exclusive_classes": not textcat_multilabel,
- "architecture": textcat_arch,
- "positive_label": textcat_positive_label,
- }
else:
pipe_cfg = {}
nlp.add_pipe(nlp.create_pipe(pipe, config=pipe_cfg))
@@ -246,89 +227,12 @@ def train(
components = _load_pretrained_tok2vec(nlp, init_tok2vec)
msg.text("Loaded pretrained tok2vec for: {}".format(components))
- # Verify textcat config
- if "textcat" in pipeline:
- textcat_labels = nlp.get_pipe("textcat").cfg["labels"]
- if textcat_positive_label and textcat_positive_label not in textcat_labels:
- msg.fail(
- "The textcat_positive_label (tpl) '{}' does not match any "
- "label in the training data.".format(textcat_positive_label),
- exits=1,
- )
- if textcat_positive_label and len(textcat_labels) != 2:
- msg.fail(
- "A textcat_positive_label (tpl) '{}' was provided for training "
- "data that does not appear to be a binary classification "
- "problem with two labels.".format(textcat_positive_label),
- exits=1,
- )
- train_docs = corpus.train_docs(
- nlp, noise_level=noise_level, gold_preproc=gold_preproc, max_length=0
- )
- train_labels = set()
- if textcat_multilabel:
- multilabel_found = False
- for text, gold in train_docs:
- train_labels.update(gold.cats.keys())
- if list(gold.cats.values()).count(1.0) != 1:
- multilabel_found = True
- if not multilabel_found and not base_model:
- msg.warn(
- "The textcat training instances look like they have "
- "mutually-exclusive classes. Remove the flag "
- "'--textcat-multilabel' to train a classifier with "
- "mutually-exclusive classes."
- )
- if not textcat_multilabel:
- for text, gold in train_docs:
- train_labels.update(gold.cats.keys())
- if list(gold.cats.values()).count(1.0) != 1 and not base_model:
- msg.warn(
- "Some textcat training instances do not have exactly "
- "one positive label. Modifying training options to "
- "include the flag '--textcat-multilabel' for classes "
- "that are not mutually exclusive."
- )
- nlp.get_pipe("textcat").cfg["exclusive_classes"] = False
- textcat_multilabel = True
- break
- if base_model and set(textcat_labels) != train_labels:
- msg.fail(
- "Cannot extend textcat model using data with different "
- "labels. Base model labels: {}, training data labels: "
- "{}.".format(textcat_labels, list(train_labels)),
- exits=1,
- )
- if textcat_multilabel:
- msg.text(
- "Textcat evaluation score: ROC AUC score macro-averaged across "
- "the labels '{}'".format(", ".join(textcat_labels))
- )
- elif textcat_positive_label and len(textcat_labels) == 2:
- msg.text(
- "Textcat evaluation score: F1-score for the "
- "label '{}'".format(textcat_positive_label)
- )
- elif len(textcat_labels) > 1:
- if len(textcat_labels) == 2:
- msg.warn(
- "If the textcat component is a binary classifier with "
- "exclusive classes, provide '--textcat_positive_label' for "
- "an evaluation on the positive class."
- )
- msg.text(
- "Textcat evaluation score: F1-score macro-averaged across "
- "the labels '{}'".format(", ".join(textcat_labels))
- )
- else:
- msg.fail(
- "Unsupported textcat configuration. Use `spacy debug-data` "
- "for more information."
- )
-
# fmt: off
- row_head, output_stats = _configure_training_output(pipeline, use_gpu, has_beam_widths)
- row_widths = [len(w) for w in row_head]
+ row_head = ["Itn", "Dep Loss", "NER Loss", "UAS", "NER P", "NER R", "NER F", "Tag %", "Token %", "CPU WPS", "GPU WPS"]
+ row_widths = [3, 10, 10, 7, 7, 7, 7, 7, 7, 7, 7]
+ if has_beam_widths:
+ row_head.insert(1, "Beam W.")
+ row_widths.insert(1, 7)
row_settings = {"widths": row_widths, "aligns": tuple(["r" for i in row_head]), "spacing": 2}
# fmt: on
print("")
@@ -339,11 +243,7 @@ def train(
best_score = 0.0
for i in range(n_iter):
train_docs = corpus.train_docs(
- nlp,
- noise_level=noise_level,
- orth_variant_level=orth_variant_level,
- gold_preproc=gold_preproc,
- max_length=0,
+ nlp, noise_level=noise_level, gold_preproc=gold_preproc, max_length=0
)
if raw_text:
random.shuffle(raw_text)
@@ -386,7 +286,7 @@ def train(
)
nwords = sum(len(doc_gold[0]) for doc_gold in dev_docs)
start_time = timer()
- scorer = nlp_loaded.evaluate(dev_docs, verbose=verbose)
+ scorer = nlp_loaded.evaluate(dev_docs, debug)
end_time = timer()
if use_gpu < 0:
gpu_wps = None
@@ -402,7 +302,7 @@ def train(
corpus.dev_docs(nlp_loaded, gold_preproc=gold_preproc)
)
start_time = timer()
- scorer = nlp_loaded.evaluate(dev_docs, verbose=verbose)
+ scorer = nlp_loaded.evaluate(dev_docs)
end_time = timer()
cpu_wps = nwords / (end_time - start_time)
acc_loc = output_path / ("model%d" % i) / "accuracy.json"
@@ -436,7 +336,6 @@ def train(
}
meta.setdefault("name", "model%d" % i)
meta.setdefault("version", version)
- meta["labels"] = nlp.meta["labels"]
meta_loc = output_path / ("model%d" % i) / "meta.json"
srsly.write_json(meta_loc, meta)
util.set_env_log(verbose)
@@ -445,19 +344,10 @@ def train(
i,
losses,
scorer.scores,
- output_stats,
beam_width=beam_width if has_beam_widths else None,
cpu_wps=cpu_wps,
gpu_wps=gpu_wps,
)
- if i == 0 and "textcat" in pipeline:
- textcats_per_cat = scorer.scores.get("textcats_per_cat", {})
- for cat, cat_score in textcats_per_cat.items():
- if cat_score.get("roc_auc_score", 0) < 0:
- msg.warn(
- "Textcat ROC AUC score is undefined due to "
- "only one value in label '{}'.".format(cat)
- )
msg.row(progress, **row_settings)
# Early stopping
if n_early_stopping is not None:
@@ -498,8 +388,6 @@ def _score_for_model(meta):
mean_acc.append((acc["uas"] + acc["las"]) / 2)
if "ner" in pipes:
mean_acc.append((acc["ents_p"] + acc["ents_r"] + acc["ents_f"]) / 3)
- if "textcat" in pipes:
- mean_acc.append(acc["textcat_score"])
return sum(mean_acc) / len(mean_acc)
@@ -583,55 +471,40 @@ def _get_metrics(component):
return ("token_acc",)
-def _configure_training_output(pipeline, use_gpu, has_beam_widths):
- row_head = ["Itn"]
- output_stats = []
- for pipe in pipeline:
- if pipe == "tagger":
- row_head.extend(["Tag Loss ", " Tag % "])
- output_stats.extend(["tag_loss", "tags_acc"])
- elif pipe == "parser":
- row_head.extend(["Dep Loss ", " UAS ", " LAS "])
- output_stats.extend(["dep_loss", "uas", "las"])
- elif pipe == "ner":
- row_head.extend(["NER Loss ", "NER P ", "NER R ", "NER F "])
- output_stats.extend(["ner_loss", "ents_p", "ents_r", "ents_f"])
- elif pipe == "textcat":
- row_head.extend(["Textcat Loss", "Textcat"])
- output_stats.extend(["textcat_loss", "textcat_score"])
- row_head.extend(["Token %", "CPU WPS"])
- output_stats.extend(["token_acc", "cpu_wps"])
-
- if use_gpu >= 0:
- row_head.extend(["GPU WPS"])
- output_stats.extend(["gpu_wps"])
-
- if has_beam_widths:
- row_head.insert(1, "Beam W.")
- return row_head, output_stats
-
-
-def _get_progress(
- itn, losses, dev_scores, output_stats, beam_width=None, cpu_wps=0.0, gpu_wps=0.0
-):
+def _get_progress(itn, losses, dev_scores, beam_width=None, cpu_wps=0.0, gpu_wps=0.0):
scores = {}
- for stat in output_stats:
- scores[stat] = 0.0
+ for col in [
+ "dep_loss",
+ "tag_loss",
+ "uas",
+ "tags_acc",
+ "token_acc",
+ "ents_p",
+ "ents_r",
+ "ents_f",
+ "cpu_wps",
+ "gpu_wps",
+ ]:
+ scores[col] = 0.0
scores["dep_loss"] = losses.get("parser", 0.0)
scores["ner_loss"] = losses.get("ner", 0.0)
scores["tag_loss"] = losses.get("tagger", 0.0)
- scores["textcat_loss"] = losses.get("textcat", 0.0)
+ scores.update(dev_scores)
scores["cpu_wps"] = cpu_wps
scores["gpu_wps"] = gpu_wps or 0.0
- scores.update(dev_scores)
- formatted_scores = []
- for stat in output_stats:
- format_spec = "{:.3f}"
- if stat.endswith("_wps"):
- format_spec = "{:.0f}"
- formatted_scores.append(format_spec.format(scores[stat]))
- result = [itn + 1]
- result.extend(formatted_scores)
+ result = [
+ itn,
+ "{:.3f}".format(scores["dep_loss"]),
+ "{:.3f}".format(scores["ner_loss"]),
+ "{:.3f}".format(scores["uas"]),
+ "{:.3f}".format(scores["ents_p"]),
+ "{:.3f}".format(scores["ents_r"]),
+ "{:.3f}".format(scores["ents_f"]),
+ "{:.3f}".format(scores["tags_acc"]),
+ "{:.3f}".format(scores["token_acc"]),
+ "{:.0f}".format(scores["cpu_wps"]),
+ "{:.0f}".format(scores["gpu_wps"]),
+ ]
if beam_width is not None:
result.insert(1, beam_width)
return result
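The `_get_progress` rewrite above replaces the dynamic `output_stats` column layout with a fixed column list. A self-contained sketch of that row-formatting scheme (names and column order taken from the hunk; this is an illustration, not spaCy's API):

```python
# Sketch: build one training-progress row with fixed columns, defaulting
# every score to 0.0, overriding with losses and dev scores, then formatting
# words-per-second columns as integers and everything else to 3 decimals.
def get_progress(itn, losses, dev_scores, cpu_wps=0.0, gpu_wps=0.0):
    cols = ["dep_loss", "ner_loss", "uas", "ents_p", "ents_r", "ents_f",
            "tags_acc", "token_acc", "cpu_wps", "gpu_wps"]
    scores = {col: 0.0 for col in cols}
    scores["dep_loss"] = losses.get("parser", 0.0)
    scores["ner_loss"] = losses.get("ner", 0.0)
    scores.update(dev_scores)          # dev metrics override the defaults
    scores["cpu_wps"] = cpu_wps
    scores["gpu_wps"] = gpu_wps or 0.0
    return [itn] + ["{:.0f}".format(scores[c]) if c.endswith("_wps")
                    else "{:.3f}".format(scores[c]) for c in cols]

row = get_progress(1, {"parser": 12.5}, {"uas": 0.91, "ents_f": 0.8},
                   cpu_wps=14000)
```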
diff --git a/spacy/errors.py b/spacy/errors.py
index 30c7a5f48..587a6e700 100644
--- a/spacy/errors.py
+++ b/spacy/errors.py
@@ -84,10 +84,6 @@ class Warnings(object):
W018 = ("Entity '{entity}' already exists in the Knowledge base.")
W019 = ("Changing vectors name from {old} to {new}, to avoid clash with "
"previously loaded vectors. See Issue #3853.")
- W020 = ("Unnamed vectors. This won't allow multiple vectors models to be "
- "loaded. (Shape: {shape})")
- W021 = ("Unexpected hash collision in PhraseMatcher. Matches may be "
- "incorrect. Modify PhraseMatcher._terminal_hash to fix.")
@add_codes
@@ -122,7 +118,7 @@ class Errors(object):
E011 = ("Unknown operator: '{op}'. Options: {opts}")
E012 = ("Cannot add pattern for zero tokens to matcher.\nKey: {key}")
E013 = ("Error selecting action in matcher")
- E014 = ("Uknown tag ID: {tag}")
+ E014 = ("Unknown tag ID: {tag}")
E015 = ("Conflicting morphology exception for ({tag}, {orth}). Use "
"`force=True` to overwrite.")
E016 = ("MultitaskObjective target should be function or one of: dep, "
@@ -461,25 +457,6 @@ class Errors(object):
E160 = ("Can't find language data file: {path}")
E161 = ("Found an internal inconsistency when predicting entity links. "
"This is likely a bug in spaCy, so feel free to open an issue.")
- E162 = ("Cannot evaluate textcat model on data with different labels.\n"
- "Labels in model: {model_labels}\nLabels in evaluation "
- "data: {eval_labels}")
- E163 = ("cumsum was found to be unstable: its last element does not "
- "correspond to sum")
- E164 = ("x is neither increasing nor decreasing: {}.")
- E165 = ("Only one class present in y_true. ROC AUC score is not defined in "
- "that case.")
- E166 = ("Can only merge DocBins with the same pre-defined attributes.\n"
- "Current DocBin: {current}\nOther DocBin: {other}")
- E167 = ("Unknown morphological feature: '{feat}' ({feat_id}). This can "
- "happen if the tagger was trained with a different set of "
- "morphological features. If you're using a pre-trained model, make "
- "sure that your models are up to date:\npython -m spacy validate")
- E168 = ("Unknown field: {field}")
- E169 = ("Can't find module: {module}")
- E170 = ("Cannot apply transition {name}: invalid for the current state.")
- E171 = ("Matcher.add received invalid on_match callback argument: expected "
- "callable or None, but got: {arg_type}")
@add_codes
diff --git a/spacy/glossary.py b/spacy/glossary.py
index 52abc7bb5..ff38e7138 100644
--- a/spacy/glossary.py
+++ b/spacy/glossary.py
@@ -307,10 +307,4 @@ GLOSSARY = {
# https://pdfs.semanticscholar.org/5744/578cc243d92287f47448870bb426c66cc941.pdf
"PER": "Named person or family.",
"MISC": "Miscellaneous entities, e.g. events, nationalities, products or works of art",
- # https://github.com/ltgoslo/norne
- "EVT": "Festivals, cultural events, sports events, weather phenomena, wars, etc.",
- "PROD": "Product, i.e. artificially produced entities including speeches, radio shows, programming languages, contracts, laws and ideas",
- "DRV": "Words (and phrases?) that are dervied from a name, but not a name in themselves, e.g. 'Oslo-mannen' ('the man from Oslo')",
- "GPE_LOC": "Geo-political entity, with a locative sense, e.g. 'John lives in Spain'",
- "GPE_ORG": "Geo-political entity, with an organisation sense, e.g. 'Spain declined to meet with Belgium'",
}
diff --git a/spacy/gold.pxd b/spacy/gold.pxd
index 20a25a939..a3123f7fa 100644
--- a/spacy/gold.pxd
+++ b/spacy/gold.pxd
@@ -24,7 +24,6 @@ cdef class GoldParse:
cdef public int loss
cdef public list words
cdef public list tags
- cdef public list morphology
cdef public list heads
cdef public list labels
cdef public dict orths
diff --git a/spacy/gold.pyx b/spacy/gold.pyx
index 4cc44f757..f6ec8d3fa 100644
--- a/spacy/gold.pyx
+++ b/spacy/gold.pyx
@@ -7,7 +7,6 @@ import random
import numpy
import tempfile
import shutil
-import itertools
from pathlib import Path
import srsly
@@ -57,7 +56,6 @@ def tags_to_entities(tags):
def merge_sents(sents):
m_deps = [[], [], [], [], [], []]
m_brackets = []
- m_cats = sents.pop()
i = 0
for (ids, words, tags, heads, labels, ner), brackets in sents:
m_deps[0].extend(id_ + i for id_ in ids)
@@ -69,7 +67,6 @@ def merge_sents(sents):
m_brackets.extend((b["first"] + i, b["last"] + i, b["label"])
for b in brackets)
i += len(ids)
- m_deps.append(m_cats)
return [(m_deps, m_brackets)]
@@ -201,7 +198,6 @@ class GoldCorpus(object):
n = 0
i = 0
for raw_text, paragraph_tuples in self.train_tuples:
- cats = paragraph_tuples.pop()
for sent_tuples, brackets in paragraph_tuples:
n += len(sent_tuples[1])
if self.limit and i >= self.limit:
@@ -210,14 +206,13 @@ class GoldCorpus(object):
return n
def train_docs(self, nlp, gold_preproc=False, max_length=None,
- noise_level=0.0, orth_variant_level=0.0):
+ noise_level=0.0):
locs = list((self.tmp_dir / 'train').iterdir())
random.shuffle(locs)
train_tuples = self.read_tuples(locs, limit=self.limit)
gold_docs = self.iter_gold_docs(nlp, train_tuples, gold_preproc,
max_length=max_length,
noise_level=noise_level,
- orth_variant_level=orth_variant_level,
make_projective=True)
yield from gold_docs
@@ -231,132 +226,43 @@ class GoldCorpus(object):
@classmethod
def iter_gold_docs(cls, nlp, tuples, gold_preproc, max_length=None,
- noise_level=0.0, orth_variant_level=0.0, make_projective=False):
+ noise_level=0.0, make_projective=False):
for raw_text, paragraph_tuples in tuples:
if gold_preproc:
raw_text = None
else:
paragraph_tuples = merge_sents(paragraph_tuples)
- docs, paragraph_tuples = cls._make_docs(nlp, raw_text,
- paragraph_tuples, gold_preproc, noise_level=noise_level,
- orth_variant_level=orth_variant_level)
+ docs = cls._make_docs(nlp, raw_text, paragraph_tuples, gold_preproc,
+ noise_level=noise_level)
golds = cls._make_golds(docs, paragraph_tuples, make_projective)
for doc, gold in zip(docs, golds):
if (not max_length) or len(doc) < max_length:
yield doc, gold
@classmethod
- def _make_docs(cls, nlp, raw_text, paragraph_tuples, gold_preproc, noise_level=0.0, orth_variant_level=0.0):
+ def _make_docs(cls, nlp, raw_text, paragraph_tuples, gold_preproc, noise_level=0.0):
if raw_text is not None:
- raw_text, paragraph_tuples = make_orth_variants(nlp, raw_text, paragraph_tuples, orth_variant_level=orth_variant_level)
raw_text = add_noise(raw_text, noise_level)
- return [nlp.make_doc(raw_text)], paragraph_tuples
+ return [nlp.make_doc(raw_text)]
else:
- docs = []
- raw_text, paragraph_tuples = make_orth_variants(nlp, None, paragraph_tuples, orth_variant_level=orth_variant_level)
return [Doc(nlp.vocab, words=add_noise(sent_tuples[1], noise_level))
- for (sent_tuples, brackets) in paragraph_tuples], paragraph_tuples
-
+ for (sent_tuples, brackets) in paragraph_tuples]
@classmethod
def _make_golds(cls, docs, paragraph_tuples, make_projective):
if len(docs) != len(paragraph_tuples):
n_annots = len(paragraph_tuples)
raise ValueError(Errors.E070.format(n_docs=len(docs), n_annots=n_annots))
- return [GoldParse.from_annot_tuples(doc, sent_tuples,
+ if len(docs) == 1:
+ return [GoldParse.from_annot_tuples(docs[0], paragraph_tuples[0][0],
+ make_projective=make_projective)]
+ else:
+ return [GoldParse.from_annot_tuples(doc, sent_tuples,
make_projective=make_projective)
for doc, (sent_tuples, brackets)
in zip(docs, paragraph_tuples)]
-def make_orth_variants(nlp, raw, paragraph_tuples, orth_variant_level=0.0):
- if random.random() >= orth_variant_level:
- return raw, paragraph_tuples
- if random.random() >= 0.5:
- lower = True
- if raw is not None:
- raw = raw.lower()
- ndsv = nlp.Defaults.single_orth_variants
- ndpv = nlp.Defaults.paired_orth_variants
- # modify words in paragraph_tuples
- variant_paragraph_tuples = []
- for sent_tuples, brackets in paragraph_tuples:
- ids, words, tags, heads, labels, ner, cats = sent_tuples
- if lower:
- words = [w.lower() for w in words]
- # single variants
- punct_choices = [random.choice(x["variants"]) for x in ndsv]
- for word_idx in range(len(words)):
- for punct_idx in range(len(ndsv)):
- if tags[word_idx] in ndsv[punct_idx]["tags"] \
- and words[word_idx] in ndsv[punct_idx]["variants"]:
- words[word_idx] = punct_choices[punct_idx]
- # paired variants
- punct_choices = [random.choice(x["variants"]) for x in ndpv]
- for word_idx in range(len(words)):
- for punct_idx in range(len(ndpv)):
- if tags[word_idx] in ndpv[punct_idx]["tags"] \
- and words[word_idx] in itertools.chain.from_iterable(ndpv[punct_idx]["variants"]):
- # backup option: random left vs. right from pair
- pair_idx = random.choice([0, 1])
- # best option: rely on paired POS tags like `` / ''
- if len(ndpv[punct_idx]["tags"]) == 2:
- pair_idx = ndpv[punct_idx]["tags"].index(tags[word_idx])
- # next best option: rely on position in variants
- # (may not be unambiguous, so order of variants matters)
- else:
- for pair in ndpv[punct_idx]["variants"]:
- if words[word_idx] in pair:
- pair_idx = pair.index(words[word_idx])
- words[word_idx] = punct_choices[punct_idx][pair_idx]
-
- variant_paragraph_tuples.append(((ids, words, tags, heads, labels, ner, cats), brackets))
- # modify raw to match variant_paragraph_tuples
- if raw is not None:
- variants = []
- for single_variants in ndsv:
- variants.extend(single_variants["variants"])
- for paired_variants in ndpv:
- variants.extend(list(itertools.chain.from_iterable(paired_variants["variants"])))
- # store variants in reverse length order to be able to prioritize
- # longer matches (e.g., "---" before "--")
- variants = sorted(variants, key=lambda x: len(x))
- variants.reverse()
- variant_raw = ""
- raw_idx = 0
- # add initial whitespace
- while raw_idx < len(raw) and re.match("\s", raw[raw_idx]):
- variant_raw += raw[raw_idx]
- raw_idx += 1
- for sent_tuples, brackets in variant_paragraph_tuples:
- ids, words, tags, heads, labels, ner, cats = sent_tuples
- for word in words:
- match_found = False
- # add identical word
- if word not in variants and raw[raw_idx:].startswith(word):
- variant_raw += word
- raw_idx += len(word)
- match_found = True
- # add variant word
- else:
- for variant in variants:
- if not match_found and \
- raw[raw_idx:].startswith(variant):
- raw_idx += len(variant)
- variant_raw += word
- match_found = True
- # something went wrong, abort
- # (add a warning message?)
- if not match_found:
- return raw, paragraph_tuples
- # add following whitespace
- while raw_idx < len(raw) and re.match("\s", raw[raw_idx]):
- variant_raw += raw[raw_idx]
- raw_idx += 1
- return variant_raw, variant_paragraph_tuples
- return raw, variant_paragraph_tuples
-
-
def add_noise(orig, noise_level):
if random.random() >= noise_level:
return orig
@@ -371,8 +277,12 @@ def add_noise(orig, noise_level):
def _corrupt(c, noise_level):
if random.random() >= noise_level:
return c
- elif c in [".", "'", "!", "?", ","]:
+ elif c == " ":
return "\n"
+ elif c == "\n":
+ return " "
+ elif c in [".", "'", "!", "?", ","]:
+ return ""
else:
return c.lower()
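The `_corrupt` hunk above changes the character-level noise scheme: spaces and newlines swap, listed punctuation is dropped, and anything else selected for corruption is lowercased. A runnable sketch of that scheme with a seeded RNG for reproducibility (hypothetical helper, not spaCy's code):

```python
import random

# Illustrative re-implementation of the patched noise scheme: spaces become
# newlines, newlines become spaces, some punctuation is dropped, and other
# selected characters are lowercased.
def corrupt(ch, noise_level, rng):
    if rng.random() >= noise_level:
        return ch                      # character left untouched
    if ch == " ":
        return "\n"
    if ch == "\n":
        return " "
    if ch in {".", "'", "!", "?", ","}:
        return ""
    return ch.lower()

def add_noise(text, noise_level, seed=0):
    rng = random.Random(seed)
    return "".join(corrupt(ch, noise_level, rng) for ch in text)
```

With `noise_level=1.0` every character is corrupted, so `add_noise("Hello, World!", 1.0)` drops the punctuation, lowercases the letters, and turns the space into a newline.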
@@ -420,10 +330,6 @@ def json_to_tuple(doc):
sents.append([
[ids, words, tags, heads, labels, ner],
sent.get("brackets", [])])
- cats = {}
- for cat in paragraph.get("cats", {}):
- cats[cat["label"]] = cat["value"]
- sents.append(cats)
if sents:
yield [paragraph.get("raw", None), sents]
@@ -537,12 +443,11 @@ cdef class GoldParse:
"""
@classmethod
def from_annot_tuples(cls, doc, annot_tuples, make_projective=False):
- _, words, tags, heads, deps, entities, cats = annot_tuples
+ _, words, tags, heads, deps, entities = annot_tuples
return cls(doc, words=words, tags=tags, heads=heads, deps=deps,
- entities=entities, cats=cats,
- make_projective=make_projective)
+ entities=entities, make_projective=make_projective)
- def __init__(self, doc, annot_tuples=None, words=None, tags=None, morphology=None,
+ def __init__(self, doc, annot_tuples=None, words=None, tags=None,
heads=None, deps=None, entities=None, make_projective=False,
cats=None, links=None, **_):
"""Create a GoldParse.
@@ -577,13 +482,11 @@ cdef class GoldParse:
if words is None:
words = [token.text for token in doc]
if tags is None:
- tags = [None for _ in words]
+ tags = [None for _ in doc]
if heads is None:
- heads = [None for _ in words]
+ heads = [None for token in doc]
if deps is None:
- deps = [None for _ in words]
- if morphology is None:
- morphology = [None for _ in words]
+ deps = [None for _ in doc]
if entities is None:
entities = ["-" for _ in doc]
elif len(entities) == 0:
@@ -595,6 +498,7 @@ cdef class GoldParse:
if not isinstance(entities[0], basestring):
# Assume we have entities specified by character offset.
entities = biluo_tags_from_offsets(doc, entities)
+
self.mem = Pool()
self.loss = 0
self.length = len(doc)
@@ -614,7 +518,6 @@ cdef class GoldParse:
self.heads = [None] * len(doc)
self.labels = [None] * len(doc)
self.ner = [None] * len(doc)
- self.morphology = [None] * len(doc)
# This needs to be done before we align the words
if make_projective and heads is not None and deps is not None:
@@ -641,13 +544,11 @@ cdef class GoldParse:
self.tags[i] = "_SP"
self.heads[i] = None
self.labels[i] = None
- self.ner[i] = None
- self.morphology[i] = set()
+ self.ner[i] = "O"
if gold_i is None:
if i in i2j_multi:
self.words[i] = words[i2j_multi[i]]
self.tags[i] = tags[i2j_multi[i]]
- self.morphology[i] = morphology[i2j_multi[i]]
is_last = i2j_multi[i] != i2j_multi.get(i+1)
is_first = i2j_multi[i] != i2j_multi.get(i-1)
# Set next word in multi-token span as head, until last
@@ -684,7 +585,6 @@ cdef class GoldParse:
else:
self.words[i] = words[gold_i]
self.tags[i] = tags[gold_i]
- self.morphology[i] = morphology[gold_i]
if heads[gold_i] is None:
self.heads[i] = None
else:
@@ -692,20 +592,9 @@ cdef class GoldParse:
self.labels[i] = deps[gold_i]
self.ner[i] = entities[gold_i]
- # Prevent whitespace that isn't within entities from being tagged as
- # an entity.
- for i in range(len(self.ner)):
- if self.tags[i] == "_SP":
- prev_ner = self.ner[i-1] if i >= 1 else None
- next_ner = self.ner[i+1] if (i+1) < len(self.ner) else None
- if prev_ner == "O" or next_ner == "O":
- self.ner[i] = "O"
-
cycle = nonproj.contains_cycle(self.heads)
if cycle is not None:
- raise ValueError(Errors.E069.format(cycle=cycle,
- cycle_tokens=" ".join(["'{}'".format(self.words[tok_id]) for tok_id in cycle]),
- doc_tokens=" ".join(words[:50])))
+ raise ValueError(Errors.E069.format(cycle=cycle, cycle_tokens=" ".join(["'{}'".format(self.words[tok_id]) for tok_id in cycle]), doc_tokens=" ".join(words[:50])))
def __len__(self):
"""Get the number of gold-standard tokens.
@@ -749,10 +638,7 @@ def docs_to_json(docs, id=0):
docs = [docs]
json_doc = {"id": id, "paragraphs": []}
for i, doc in enumerate(docs):
- json_para = {'raw': doc.text, "sentences": [], "cats": []}
- for cat, val in doc.cats.items():
- json_cat = {"label": cat, "value": val}
- json_para["cats"].append(json_cat)
+ json_para = {'raw': doc.text, "sentences": []}
ent_offsets = [(e.start_char, e.end_char, e.label_) for e in doc.ents]
biluo_tags = biluo_tags_from_offsets(doc, ent_offsets)
for j, sent in enumerate(doc.sents):
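The `docs_to_json` hunk above toggles between two paragraph layouts: one that serializes `doc.cats` as a list of label/value dicts and one that omits the `"cats"` key. A small sketch of both layouts (hypothetical stand-in for the Doc fields; not spaCy's function):

```python
# Sketch of the two paragraph layouts the docs_to_json hunk switches between.
def make_para(text, cats=None):
    para = {"raw": text, "sentences": []}
    if cats is not None:               # the removed branch kept the cats
        para["cats"] = [{"label": k, "value": v} for k, v in cats.items()]
    return para

p = make_para("A text.", {"POSITIVE": 1.0})
```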
diff --git a/spacy/kb.pyx b/spacy/kb.pyx
index 6cbc06e2c..176ac17de 100644
--- a/spacy/kb.pyx
+++ b/spacy/kb.pyx
@@ -24,7 +24,7 @@ cdef class Candidate:
algorithm which will disambiguate the various candidates to the correct one.
Each candidate (alias, entity) pair is assigned to a certain prior probability.
- DOCS: https://spacy.io/api/kb/#candidate_init
+ DOCS: https://spacy.io/api/candidate
"""
def __init__(self, KnowledgeBase kb, entity_hash, entity_freq, entity_vector, alias_hash, prior_prob):
diff --git a/spacy/lang/char_classes.py b/spacy/lang/char_classes.py
index cb5b50ffc..131bdcd51 100644
--- a/spacy/lang/char_classes.py
+++ b/spacy/lang/char_classes.py
@@ -201,9 +201,7 @@ _ukrainian = r"а-щюяіїєґА-ЩЮЯІЇЄҐ"
_upper = LATIN_UPPER + _russian_upper + _tatar_upper + _greek_upper + _ukrainian_upper
_lower = LATIN_LOWER + _russian_lower + _tatar_lower + _greek_lower + _ukrainian_lower
-_uncased = (
- _bengali + _hebrew + _persian + _sinhala + _hindi + _kannada + _tamil + _telugu
-)
+_uncased = _bengali + _hebrew + _persian + _sinhala + _hindi + _kannada + _tamil + _telugu
ALPHA = group_chars(LATIN + _russian + _tatar + _greek + _ukrainian + _uncased)
ALPHA_LOWER = group_chars(_lower + _uncased)
diff --git a/spacy/lang/de/__init__.py b/spacy/lang/de/__init__.py
index b96069235..1b5aee6a8 100644
--- a/spacy/lang/de/__init__.py
+++ b/spacy/lang/de/__init__.py
@@ -27,20 +27,6 @@ class GermanDefaults(Language.Defaults):
stop_words = STOP_WORDS
syntax_iterators = SYNTAX_ITERATORS
resources = {"lemma_lookup": "lemma_lookup.json"}
- single_orth_variants = [
- {"tags": ["$("], "variants": ["…", "..."]},
- {"tags": ["$("], "variants": ["-", "—", "–", "--", "---", "——"]},
- ]
- paired_orth_variants = [
- {
- "tags": ["$("],
- "variants": [("'", "'"), (",", "'"), ("‚", "‘"), ("›", "‹"), ("‹", "›")],
- },
- {
- "tags": ["$("],
- "variants": [("``", "''"), ('"', '"'), ("„", "“"), ("»", "«"), ("«", "»")],
- },
- ]
class German(Language):
diff --git a/spacy/lang/de/tag_map.py b/spacy/lang/de/tag_map.py
index 394478145..3bb6247c4 100644
--- a/spacy/lang/de/tag_map.py
+++ b/spacy/lang/de/tag_map.py
@@ -10,7 +10,7 @@ TAG_MAP = {
"$,": {POS: PUNCT, "PunctType": "comm"},
"$.": {POS: PUNCT, "PunctType": "peri"},
"ADJA": {POS: ADJ},
- "ADJD": {POS: ADJ},
+ "ADJD": {POS: ADJ, "Variant": "short"},
"ADV": {POS: ADV},
"APPO": {POS: ADP, "AdpType": "post"},
"APPR": {POS: ADP, "AdpType": "prep"},
@@ -32,7 +32,7 @@ TAG_MAP = {
"PDAT": {POS: DET, "PronType": "dem"},
"PDS": {POS: PRON, "PronType": "dem"},
"PIAT": {POS: DET, "PronType": "ind|neg|tot"},
- "PIDAT": {POS: DET, "PronType": "ind|neg|tot"},
+ "PIDAT": {POS: DET, "AdjType": "pdt", "PronType": "ind|neg|tot"},
"PIS": {POS: PRON, "PronType": "ind|neg|tot"},
"PPER": {POS: PRON, "PronType": "prs"},
"PPOSAT": {POS: DET, "Poss": "yes", "PronType": "prs"},
@@ -42,7 +42,7 @@ TAG_MAP = {
"PRF": {POS: PRON, "PronType": "prs", "Reflex": "yes"},
"PTKA": {POS: PART},
"PTKANT": {POS: PART, "PartType": "res"},
- "PTKNEG": {POS: PART, "Polarity": "neg"},
+ "PTKNEG": {POS: PART, "Polarity": "Neg"},
"PTKVZ": {POS: PART, "PartType": "vbp"},
"PTKZU": {POS: PART, "PartType": "inf"},
"PWAT": {POS: DET, "PronType": "int"},
diff --git a/spacy/lang/el/lemmatizer/__init__.py b/spacy/lang/el/lemmatizer/__init__.py
index 994bf9c16..c0ce5c2ad 100644
--- a/spacy/lang/el/lemmatizer/__init__.py
+++ b/spacy/lang/el/lemmatizer/__init__.py
@@ -46,10 +46,9 @@ class GreekLemmatizer(object):
)
return lemmas
- def lookup(self, string, orth=None):
- key = orth if orth is not None else string
- if key in self.lookup_table:
- return self.lookup_table[key]
+ def lookup(self, string):
+ if string in self.lookup_table:
+ return self.lookup_table[string]
return string
diff --git a/spacy/lang/en/__init__.py b/spacy/lang/en/__init__.py
index e4c745c83..7d00c749c 100644
--- a/spacy/lang/en/__init__.py
+++ b/spacy/lang/en/__init__.py
@@ -38,14 +38,6 @@ class EnglishDefaults(Language.Defaults):
"lemma_index": "lemmatizer/lemma_index.json",
"lemma_exc": "lemmatizer/lemma_exc.json",
}
- single_orth_variants = [
- {"tags": ["NFP"], "variants": ["…", "..."]},
- {"tags": [":"], "variants": ["-", "—", "–", "--", "---", "——"]},
- ]
- paired_orth_variants = [
- {"tags": ["``", "''"], "variants": [("'", "'"), ("‘", "’")]},
- {"tags": ["``", "''"], "variants": [('"', '"'), ("“", "”")]},
- ]
class English(Language):
diff --git a/spacy/lang/en/lemmatizer/lemma_lookup.json b/spacy/lang/en/lemmatizer/lemma_lookup.json
index 15d41e4ba..d0f92c37c 100644
--- a/spacy/lang/en/lemmatizer/lemma_lookup.json
+++ b/spacy/lang/en/lemmatizer/lemma_lookup.json
@@ -20574,7 +20574,7 @@
"lengthier": "lengthy",
"lengthiest": "lengthy",
"lengths": "length",
- "lenses": "lense",
+ "lenses": "lens",
"lent": "lend",
"lenticels": "lenticel",
"lentils": "lentil",
diff --git a/spacy/lang/en/morph_rules.py b/spacy/lang/en/morph_rules.py
index 5ed4eac59..198182ff0 100644
--- a/spacy/lang/en/morph_rules.py
+++ b/spacy/lang/en/morph_rules.py
@@ -3,59 +3,55 @@ from __future__ import unicode_literals
from ...symbols import LEMMA, PRON_LEMMA
-# Several entries here look pretty suspicious. These will get the POS SCONJ
-# given the tag IN, when an adpositional reading seems much more likely for
-# a lot of these prepositions. I'm not sure what I was running in 04395ffa4
-# when I did this? It doesn't seem right.
_subordinating_conjunctions = [
"that",
"if",
"as",
"because",
- # "of",
- # "for",
- # "before",
- # "in",
+ "of",
+ "for",
+ "before",
+ "in",
"while",
- # "after",
+ "after",
"since",
"like",
- # "with",
+ "with",
"so",
- # "to",
- # "by",
- # "on",
- # "about",
+ "to",
+ "by",
+ "on",
+ "about",
"than",
"whether",
"although",
- # "from",
+ "from",
"though",
- # "until",
+ "until",
"unless",
"once",
- # "without",
- # "at",
- # "into",
+ "without",
+ "at",
+ "into",
"cause",
- # "over",
+ "over",
"upon",
"till",
"whereas",
- # "beyond",
+ "beyond",
"whilst",
"except",
"despite",
"wether",
- # "then",
+ "then",
"but",
"becuse",
"whie",
- # "below",
- # "against",
+ "below",
+ "against",
"it",
"w/out",
- # "toward",
+ "toward",
"albeit",
"save",
"besides",
@@ -67,17 +63,16 @@ _subordinating_conjunctions = [
"out",
"near",
"seince",
- # "towards",
+ "towards",
"tho",
"sice",
"will",
]
-# This seems kind of wrong too?
-# _relative_pronouns = ["this", "that", "those", "these"]
+_relative_pronouns = ["this", "that", "those", "these"]
MORPH_RULES = {
- # "DT": {word: {"POS": "PRON"} for word in _relative_pronouns},
+ "DT": {word: {"POS": "PRON"} for word in _relative_pronouns},
"IN": {word: {"POS": "SCONJ"} for word in _subordinating_conjunctions},
"NN": {
"something": {"POS": "PRON"},
diff --git a/spacy/lang/en/tag_map.py b/spacy/lang/en/tag_map.py
index 9bd884a3a..246258f57 100644
--- a/spacy/lang/en/tag_map.py
+++ b/spacy/lang/en/tag_map.py
@@ -14,10 +14,10 @@ TAG_MAP = {
'""': {POS: PUNCT, "PunctType": "quot", "PunctSide": "fin"},
"''": {POS: PUNCT, "PunctType": "quot", "PunctSide": "fin"},
":": {POS: PUNCT},
- "$": {POS: SYM},
- "#": {POS: SYM},
- "AFX": {POS: ADJ, "Hyph": "yes"},
- "CC": {POS: CCONJ, "ConjType": "comp"},
+ "$": {POS: SYM, "Other": {"SymType": "currency"}},
+ "#": {POS: SYM, "Other": {"SymType": "numbersign"}},
+ "AFX": {POS: X, "Hyph": "yes"},
+ "CC": {POS: CCONJ, "ConjType": "coor"},
"CD": {POS: NUM, "NumType": "card"},
"DT": {POS: DET},
"EX": {POS: PRON, "AdvType": "ex"},
@@ -34,7 +34,7 @@ TAG_MAP = {
"NNP": {POS: PROPN, "NounType": "prop", "Number": "sing"},
"NNPS": {POS: PROPN, "NounType": "prop", "Number": "plur"},
"NNS": {POS: NOUN, "Number": "plur"},
- "PDT": {POS: DET},
+ "PDT": {POS: DET, "AdjType": "pdt", "PronType": "prn"},
"POS": {POS: PART, "Poss": "yes"},
"PRP": {POS: PRON, "PronType": "prs"},
"PRP$": {POS: PRON, "PronType": "prs", "Poss": "yes"},
@@ -56,12 +56,12 @@ TAG_MAP = {
"VerbForm": "fin",
"Tense": "pres",
"Number": "sing",
- "Person": "three",
+ "Person": 3,
},
- "WDT": {POS: PRON},
- "WP": {POS: PRON},
- "WP$": {POS: PRON, "Poss": "yes"},
- "WRB": {POS: ADV},
+ "WDT": {POS: PRON, "PronType": "int|rel"},
+ "WP": {POS: PRON, "PronType": "int|rel"},
+ "WP$": {POS: PRON, "Poss": "yes", "PronType": "int|rel"},
+ "WRB": {POS: ADV, "PronType": "int|rel"},
"ADD": {POS: X},
"NFP": {POS: PUNCT},
"GW": {POS: X},
diff --git a/spacy/lang/en/tokenizer_exceptions.py b/spacy/lang/en/tokenizer_exceptions.py
index c45197771..21e664f7f 100644
--- a/spacy/lang/en/tokenizer_exceptions.py
+++ b/spacy/lang/en/tokenizer_exceptions.py
@@ -30,7 +30,14 @@ for pron in ["i"]:
for orth in [pron, pron.title()]:
_exc[orth + "'m"] = [
{ORTH: orth, LEMMA: PRON_LEMMA, NORM: pron, TAG: "PRP"},
- {ORTH: "'m", LEMMA: "be", NORM: "am", TAG: "VBP"},
+ {
+ ORTH: "'m",
+ LEMMA: "be",
+ NORM: "am",
+ TAG: "VBP",
+ "tenspect": 1,
+ "number": 1,
+ },
]
_exc[orth + "m"] = [
diff --git a/spacy/lang/fr/lemmatizer/__init__.py b/spacy/lang/fr/lemmatizer/__init__.py
index dfd822188..a0a0d2021 100644
--- a/spacy/lang/fr/lemmatizer/__init__.py
+++ b/spacy/lang/fr/lemmatizer/__init__.py
@@ -114,9 +114,9 @@ class FrenchLemmatizer(object):
def punct(self, string, morphology=None):
return self(string, "punct", morphology)
- def lookup(self, string, orth=None):
- if orth is not None and orth in self.lookup_table:
- return self.lookup_table[orth][0]
+ def lookup(self, string):
+ if string in self.lookup_table:
+ return self.lookup_table[string][0]
return string
diff --git a/spacy/lang/hi/stop_words.py b/spacy/lang/hi/stop_words.py
index efad18c84..430a18a22 100644
--- a/spacy/lang/hi/stop_words.py
+++ b/spacy/lang/hi/stop_words.py
@@ -2,8 +2,7 @@
from __future__ import unicode_literals
-# Source: https://github.com/taranjeet/hindi-tokenizer/blob/master/stopwords.txt, https://data.mendeley.com/datasets/bsr3frvvjc/1#file-a21d5092-99d7-45d8-b044-3ae9edd391c6
-
+# Source: https://github.com/taranjeet/hindi-tokenizer/blob/master/stopwords.txt
STOP_WORDS = set(
"""
अंदर
@@ -19,7 +18,6 @@ STOP_WORDS = set(
अंदर
आदि
आप
-अगर
इंहिं
इंहें
इंहों
@@ -173,9 +171,6 @@ STOP_WORDS = set(
मानो
मे
में
-मैं
-मुझको
-मेरा
यदि
यह
यहाँ
@@ -232,7 +227,6 @@ STOP_WORDS = set(
है
हैं
हो
-हूँ
होता
होति
होती
diff --git a/spacy/lang/ja/__init__.py b/spacy/lang/ja/__init__.py
index 056a6893b..3a6074bba 100644
--- a/spacy/lang/ja/__init__.py
+++ b/spacy/lang/ja/__init__.py
@@ -37,11 +37,6 @@ def resolve_pos(token):
in the sentence. This function adds information to the POS tag to
resolve ambiguous mappings.
"""
-
- # this is only used for consecutive ascii spaces
- if token.pos == "空白":
- return "空白"
-
# TODO: This is a first take. The rules here are crude approximations.
# For many of these, full dependencies are needed to properly resolve
# PoS mappings.
@@ -59,7 +54,6 @@ def detailed_tokens(tokenizer, text):
node = tokenizer.parseToNode(text)
node = node.next # first node is beginning of sentence and empty, skip it
words = []
- spaces = []
while node.posid != 0:
surface = node.surface
base = surface # a default value. Updated if available later.
@@ -70,20 +64,8 @@ def detailed_tokens(tokenizer, text):
# dictionary
base = parts[7]
words.append(ShortUnitWord(surface, base, pos))
-
- # The way MeCab stores spaces is that the rlength of the next token is
- # the length of that token plus any preceding whitespace, **in bytes**.
- # also note that this is only for half-width / ascii spaces. Full width
- # spaces just become tokens.
- scount = node.next.rlength - node.next.length
- spaces.append(bool(scount))
- while scount > 1:
- words.append(ShortUnitWord(" ", " ", "空白"))
- spaces.append(False)
- scount -= 1
-
node = node.next
- return words, spaces
+ return words
class JapaneseTokenizer(DummyTokenizer):
@@ -93,8 +75,9 @@ class JapaneseTokenizer(DummyTokenizer):
self.tokenizer.parseToNode("") # see #2901
def __call__(self, text):
- dtokens, spaces = detailed_tokens(self.tokenizer, text)
+ dtokens = detailed_tokens(self.tokenizer, text)
words = [x.surface for x in dtokens]
+ spaces = [False] * len(words)
doc = Doc(self.vocab, words=words, spaces=spaces)
mecab_tags = []
for token, dtoken in zip(doc, dtokens):
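The Japanese tokenizer hunk above drops the MeCab `rlength` bookkeeping and builds the `Doc` with all-`False` space flags. When the original text is available, per-token space flags can instead be recovered by alignment; a sketch of that idea (assumed to mirror the `check_spaces` helper used for Korean, not the removed MeCab logic):

```python
# Sketch: recover whether each token is followed by whitespace by aligning
# token surfaces against the original text.
def check_spaces(text, tokens):
    prev_end = -1
    start = 0
    for token in tokens:
        idx = text.index(token, start)
        if prev_end > 0:
            yield prev_end != idx      # gap before this token => space flag
        prev_end = idx + len(token)
        start = prev_end
    if start > 0:
        yield False                    # no trailing-space flag for last token

flags = list(check_spaces("Tokyo is big", ["Tokyo", "is", "big"]))
```

Joining each token with a space when its flag is `True` reconstructs the original text exactly.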
diff --git a/spacy/lang/ja/tag_map.py b/spacy/lang/ja/tag_map.py
index 4ff0a35ee..6b114eb10 100644
--- a/spacy/lang/ja/tag_map.py
+++ b/spacy/lang/ja/tag_map.py
@@ -2,7 +2,7 @@
from __future__ import unicode_literals
from ...symbols import POS, PUNCT, INTJ, X, ADJ, AUX, ADP, PART, SCONJ, NOUN
-from ...symbols import SYM, PRON, VERB, ADV, PROPN, NUM, DET, SPACE
+from ...symbols import SYM, PRON, VERB, ADV, PROPN, NUM, DET
TAG_MAP = {
@@ -21,8 +21,6 @@ TAG_MAP = {
"感動詞,一般,*,*": {POS: INTJ},
# this is specifically for unicode full-width space
"空白,*,*,*": {POS: X},
- # This is used when sequential half-width spaces are present
- "空白": {POS: SPACE},
"形状詞,一般,*,*": {POS: ADJ},
"形状詞,タリ,*,*": {POS: ADJ},
"形状詞,助動詞語幹,*,*": {POS: ADJ},
diff --git a/spacy/lang/ko/__init__.py b/spacy/lang/ko/__init__.py
index ec79a95ab..c8cd9c3fd 100644
--- a/spacy/lang/ko/__init__.py
+++ b/spacy/lang/ko/__init__.py
@@ -1,6 +1,8 @@
# encoding: utf8
from __future__ import unicode_literals, print_function
+import sys
+
from .stop_words import STOP_WORDS
from .tag_map import TAG_MAP
from ...attrs import LANG
@@ -8,12 +10,35 @@ from ...language import Language
from ...tokens import Doc
from ...compat import copy_reg
from ...util import DummyTokenizer
+from ...compat import is_python3, is_python_pre_3_5
+
+is_python_post_3_7 = is_python3 and sys.version_info[1] >= 7
+
+# fmt: off
+if is_python_pre_3_5:
+ from collections import namedtuple
+ Morpheme = namedtuple("Morpheme", "surface lemma tag")
+elif is_python_post_3_7:
+ from dataclasses import dataclass
+
+ @dataclass(frozen=True)
+ class Morpheme:
+ surface: str
+ lemma: str
+ tag: str
+else:
+ from typing import NamedTuple
+
+ class Morpheme(NamedTuple):
+
+ surface = str("")
+ lemma = str("")
+ tag = str("")
def try_mecab_import():
try:
from natto import MeCab
-
return MeCab
except ImportError:
raise ImportError(
@@ -21,8 +46,6 @@ def try_mecab_import():
"[mecab-ko-dic](https://bitbucket.org/eunjeon/mecab-ko-dic), "
"and [natto-py](https://github.com/buruzaemon/natto-py)"
)
-
-
# fmt: on
@@ -46,13 +69,13 @@ class KoreanTokenizer(DummyTokenizer):
def __call__(self, text):
dtokens = list(self.detailed_tokens(text))
- surfaces = [dt["surface"] for dt in dtokens]
+ surfaces = [dt.surface for dt in dtokens]
doc = Doc(self.vocab, words=surfaces, spaces=list(check_spaces(text, surfaces)))
for token, dtoken in zip(doc, dtokens):
- first_tag, sep, eomi_tags = dtoken["tag"].partition("+")
+ first_tag, sep, eomi_tags = dtoken.tag.partition("+")
token.tag_ = first_tag # stem(어간) or pre-final(선어말 어미)
- token.lemma_ = dtoken["lemma"]
- doc.user_data["full_tags"] = [dt["tag"] for dt in dtokens]
+ token.lemma_ = dtoken.lemma
+ doc.user_data["full_tags"] = [dt.tag for dt in dtokens]
return doc
def detailed_tokens(self, text):
@@ -68,7 +91,7 @@ class KoreanTokenizer(DummyTokenizer):
lemma, _, remainder = expr.partition("/")
if lemma == "*":
lemma = surface
- yield {"surface": surface, "lemma": lemma, "tag": tag}
+ yield Morpheme(surface, lemma, tag)
class KoreanDefaults(Language.Defaults):
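The Korean hunk above introduces an immutable `Morpheme` record with three version-dependent definitions (namedtuple, frozen dataclass, or `typing.NamedTuple`). On Python 3.6+ a single typed `NamedTuple` covers the same use; a minimal sketch:

```python
from typing import NamedTuple

# Sketch of the immutable Morpheme record: one surface form, its lemma,
# and the MeCab-ko tag string, unpackable like a plain tuple.
class Morpheme(NamedTuple):
    surface: str
    lemma: str
    tag: str

m = Morpheme("먹었다", "먹다", "VV+EP+EF")
```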
diff --git a/spacy/lang/lt/tag_map.py b/spacy/lang/lt/tag_map.py
index 6ea4f8ae0..eab231b2c 100644
--- a/spacy/lang/lt/tag_map.py
+++ b/spacy/lang/lt/tag_map.py
@@ -1605,7 +1605,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Imp",
"Number": "Plur",
- "Person": "one",
+ "Person": "1",
"Polarity": "Pos",
"VerbForm": "Fin",
},
@@ -1613,7 +1613,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Cnd",
"Number": "Plur",
- "Person": "one",
+ "Person": "1",
"Polarity": "Pos",
"VerbForm": "Fin",
},
@@ -1621,7 +1621,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Imp",
"Number": "Plur",
- "Person": "one",
+ "Person": "1",
"Polarity": "Pos",
"Reflex": "Yes",
"VerbForm": "Fin",
@@ -1630,7 +1630,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Imp",
"Number": "Plur",
- "Person": "one",
+ "Person": "1",
"Polarity": "Neg",
"VerbForm": "Fin",
},
@@ -1638,7 +1638,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Cnd",
"Number": "Plur",
- "Person": "one",
+ "Person": "1",
"Polarity": "Neg",
"Reflex": "Yes",
"VerbForm": "Fin",
@@ -1647,7 +1647,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Cnd",
"Number": "Sing",
- "Person": "one",
+ "Person": "1",
"Polarity": "Pos",
"VerbForm": "Fin",
},
@@ -1655,7 +1655,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Cnd",
"Number": "Sing",
- "Person": "one",
+ "Person": "1",
"Polarity": "Pos",
"Reflex": "Yes",
"VerbForm": "Fin",
@@ -1664,7 +1664,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Cnd",
"Number": "Sing",
- "Person": "one",
+ "Person": "1",
"Polarity": "Neg",
"VerbForm": "Fin",
},
@@ -1672,7 +1672,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Cnd",
"Number": "Sing",
- "Person": "one",
+ "Person": "1",
"Polarity": "Neg",
"Reflex": "Yes",
"VerbForm": "Fin",
@@ -1681,7 +1681,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Imp",
"Number": "Plur",
- "Person": "two",
+ "Person": "2",
"Polarity": "Pos",
"VerbForm": "Fin",
},
@@ -1689,7 +1689,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Cnd",
"Number": "Plur",
- "Person": "two",
+ "Person": "2",
"Polarity": "Pos",
"VerbForm": "Fin",
},
@@ -1697,7 +1697,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Imp",
"Number": "Plur",
- "Person": "two",
+ "Person": "2",
"Polarity": "Pos",
"Reflex": "Yes",
"VerbForm": "Fin",
@@ -1706,7 +1706,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Imp",
"Number": "Plur",
- "Person": "two",
+ "Person": "2",
"Polarity": "Neg",
"VerbForm": "Fin",
},
@@ -1714,7 +1714,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Imp",
"Number": "Plur",
- "Person": "two",
+ "Person": "2",
"Polarity": "Neg",
"Reflex": "Yes",
"VerbForm": "Fin",
@@ -1723,7 +1723,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Imp",
"Number": "Sing",
- "Person": "two",
+ "Person": "2",
"Polarity": "Pos",
"VerbForm": "Fin",
},
@@ -1731,7 +1731,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Cnd",
"Number": "Sing",
- "Person": "two",
+ "Person": "2",
"Polarity": "Pos",
"VerbForm": "Fin",
},
@@ -1739,7 +1739,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Imp",
"Number": "Sing",
- "Person": "two",
+ "Person": "2",
"Polarity": "Pos",
"Reflex": "Yes",
"VerbForm": "Fin",
@@ -1748,7 +1748,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Imp",
"Number": "Sing",
- "Person": "two",
+ "Person": "2",
"Polarity": "Neg",
"VerbForm": "Fin",
},
@@ -1756,21 +1756,21 @@ TAG_MAP = {
POS: VERB,
"Mood": "Cnd",
"Number": "Sing",
- "Person": "two",
+ "Person": "2",
"Polarity": "Neg",
"VerbForm": "Fin",
},
"Vgm-3---n--ns-": {
POS: VERB,
"Mood": "Cnd",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"VerbForm": "Fin",
},
"Vgm-3---n--ys-": {
POS: VERB,
"Mood": "Cnd",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Reflex": "Yes",
"VerbForm": "Fin",
@@ -1778,14 +1778,14 @@ TAG_MAP = {
"Vgm-3---y--ns-": {
POS: VERB,
"Mood": "Cnd",
- "Person": "three",
+ "Person": "3",
"Polarity": "Neg",
"VerbForm": "Fin",
},
"Vgm-3---y--ys-": {
POS: VERB,
"Mood": "Cnd",
- "Person": "three",
+ "Person": "3",
"Polarity": "Neg",
"Reflex": "Yes",
"VerbForm": "Fin",
@@ -1794,7 +1794,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Cnd",
"Number": "Plur",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"VerbForm": "Fin",
},
@@ -1802,7 +1802,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Cnd",
"Number": "Plur",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Reflex": "Yes",
"VerbForm": "Fin",
@@ -1811,7 +1811,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Cnd",
"Number": "Plur",
- "Person": "three",
+ "Person": "3",
"Polarity": "Neg",
"VerbForm": "Fin",
},
@@ -1819,7 +1819,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Cnd",
"Number": "Sing",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"VerbForm": "Fin",
},
@@ -1827,7 +1827,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Cnd",
"Number": "Sing",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Reflex": "Yes",
"VerbForm": "Fin",
@@ -1836,7 +1836,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Cnd",
"Number": "Sing",
- "Person": "three",
+ "Person": "3",
"Polarity": "Neg",
"VerbForm": "Fin",
},
@@ -1844,7 +1844,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Cnd",
"Number": "Sing",
- "Person": "three",
+ "Person": "3",
"Polarity": "Neg",
"Reflex": "Yes",
"VerbForm": "Fin",
@@ -1853,7 +1853,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "one",
+ "Person": "1",
"Polarity": "Pos",
"Tense": "Past",
"VerbForm": "Fin",
@@ -1862,7 +1862,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "one",
+ "Person": "1",
"Polarity": "Pos",
"Reflex": "Yes",
"Tense": "Past",
@@ -1872,7 +1872,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "one",
+ "Person": "1",
"Polarity": "Neg",
"Tense": "Past",
"VerbForm": "Fin",
@@ -1881,7 +1881,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "one",
+ "Person": "1",
"Polarity": "Neg",
"Reflex": "Yes",
"Tense": "Past",
@@ -1891,7 +1891,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "one",
+ "Person": "1",
"Polarity": "Pos",
"Tense": "Past",
"VerbForm": "Fin",
@@ -1900,7 +1900,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "one",
+ "Person": "1",
"Polarity": "Pos",
"Reflex": "Yes",
"Tense": "Past",
@@ -1910,7 +1910,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "one",
+ "Person": "1",
"Polarity": "Neg",
"Tense": "Past",
"VerbForm": "Fin",
@@ -1919,7 +1919,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "one",
+ "Person": "1",
"Polarity": "Neg",
"Reflex": "Yes",
"Tense": "Past",
@@ -1929,7 +1929,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "two",
+ "Person": "2",
"Polarity": "Pos",
"Tense": "Past",
"VerbForm": "Fin",
@@ -1938,7 +1938,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "two",
+ "Person": "2",
"Polarity": "Pos",
"Reflex": "Yes",
"Tense": "Past",
@@ -1948,7 +1948,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "two",
+ "Person": "2",
"Polarity": "Neg",
"Tense": "Past",
"VerbForm": "Fin",
@@ -1957,7 +1957,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "two",
+ "Person": "2",
"Polarity": "Pos",
"Tense": "Past",
"VerbForm": "Fin",
@@ -1966,7 +1966,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "two",
+ "Person": "2",
"Polarity": "Neg",
"Tense": "Past",
"VerbForm": "Fin",
@@ -1974,7 +1974,7 @@ TAG_MAP = {
"Vgma3---n--ni-": {
POS: VERB,
"Mood": "Ind",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Tense": "Past",
"VerbForm": "Fin",
@@ -1982,7 +1982,7 @@ TAG_MAP = {
"Vgma3---n--yi-": {
POS: VERB,
"Mood": "Ind",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Reflex": "Yes",
"Tense": "Past",
@@ -1991,7 +1991,7 @@ TAG_MAP = {
"Vgma3---y--ni-": {
POS: VERB,
"Mood": "Ind",
- "Person": "three",
+ "Person": "3",
"Polarity": "Neg",
"Tense": "Past",
"VerbForm": "Fin",
@@ -1999,7 +1999,7 @@ TAG_MAP = {
"Vgma3--y--ni-": {
POS: VERB,
"Case": "Nom",
- "Person": "three",
+ "Person": "3",
"Tense": "Past",
"VerbForm": "Fin",
},
@@ -2007,7 +2007,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Tense": "Past",
"VerbForm": "Fin",
@@ -2016,7 +2016,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Reflex": "Yes",
"Tense": "Past",
@@ -2026,7 +2026,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "three",
+ "Person": "3",
"Polarity": "Neg",
"Tense": "Past",
"VerbForm": "Fin",
@@ -2035,7 +2035,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "three",
+ "Person": "3",
"Polarity": "Neg",
"Reflex": "Yes",
"Tense": "Past",
@@ -2045,7 +2045,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Tense": "Past",
"VerbForm": "Fin",
@@ -2054,7 +2054,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Reflex": "Yes",
"Tense": "Past",
@@ -2064,7 +2064,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Reflex": "Yes",
"Tense": "Past",
@@ -2074,7 +2074,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "three",
+ "Person": "3",
"Polarity": "Neg",
"Tense": "Past",
"VerbForm": "Fin",
@@ -2083,7 +2083,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "three",
+ "Person": "3",
"Polarity": "Neg",
"Reflex": "Yes",
"Tense": "Past",
@@ -2093,7 +2093,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "one",
+ "Person": "1",
"Polarity": "Pos",
"Tense": "Fut",
"VerbForm": "Fin",
@@ -2102,7 +2102,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "one",
+ "Person": "1",
"Polarity": "Pos",
"Reflex": "Yes",
"Tense": "Fut",
@@ -2112,7 +2112,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "one",
+ "Person": "1",
"Polarity": "Neg",
"Tense": "Fut",
"VerbForm": "Fin",
@@ -2121,7 +2121,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "one",
+ "Person": "1",
"Polarity": "Pos",
"Tense": "Fut",
"VerbForm": "Fin",
@@ -2130,7 +2130,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "one",
+ "Person": "1",
"Polarity": "Pos",
"Reflex": "Yes",
"Tense": "Fut",
@@ -2140,7 +2140,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "one",
+ "Person": "1",
"Polarity": "Neg",
"Tense": "Fut",
"VerbForm": "Fin",
@@ -2149,7 +2149,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "two",
+ "Person": "2",
"Polarity": "Pos",
"Tense": "Fut",
"VerbForm": "Fin",
@@ -2158,7 +2158,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "two",
+ "Person": "2",
"Polarity": "Pos",
"Reflex": "Yes",
"Tense": "Fut",
@@ -2168,7 +2168,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "two",
+ "Person": "2",
"Polarity": "Pos",
"Tense": "Fut",
"VerbForm": "Fin",
@@ -2177,7 +2177,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "two",
+ "Person": "2",
"Polarity": "Pos",
"Reflex": "Yes",
"Tense": "Fut",
@@ -2187,7 +2187,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "two",
+ "Person": "2",
"Polarity": "Neg",
"Tense": "Fut",
"VerbForm": "Fin",
@@ -2196,7 +2196,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "two",
+ "Person": "2",
"Polarity": "Neg",
"Reflex": "Yes",
"Tense": "Fut",
@@ -2205,7 +2205,7 @@ TAG_MAP = {
"Vgmf3---n--ni-": {
POS: VERB,
"Mood": "Ind",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Tense": "Fut",
"VerbForm": "Fin",
@@ -2213,7 +2213,7 @@ TAG_MAP = {
"Vgmf3---y--ni-": {
POS: VERB,
"Mood": "Ind",
- "Person": "three",
+ "Person": "3",
"Polarity": "Neg",
"Tense": "Fut",
"VerbForm": "Fin",
@@ -2222,7 +2222,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Tense": "Fut",
"VerbForm": "Fin",
@@ -2231,7 +2231,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Reflex": "Yes",
"Tense": "Fut",
@@ -2241,7 +2241,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "three",
+ "Person": "3",
"Polarity": "Neg",
"Tense": "Fut",
"VerbForm": "Fin",
@@ -2250,7 +2250,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Tense": "Fut",
"VerbForm": "Fin",
@@ -2259,7 +2259,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Reflex": "Yes",
"Tense": "Fut",
@@ -2269,7 +2269,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "three",
+ "Person": "3",
"Polarity": "Neg",
"Tense": "Fut",
"VerbForm": "Fin",
@@ -2278,7 +2278,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "three",
+ "Person": "3",
"Polarity": "Neg",
"Reflex": "Yes",
"Tense": "Fut",
@@ -2288,7 +2288,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "one",
+ "Person": "1",
"Polarity": "Pos",
"Tense": "Pres",
"VerbForm": "Fin",
@@ -2297,7 +2297,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "one",
+ "Person": "1",
"Polarity": "Pos",
"Reflex": "Yes",
"Tense": "Pres",
@@ -2307,7 +2307,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "one",
+ "Person": "1",
"Polarity": "Neg",
"Tense": "Pres",
"VerbForm": "Fin",
@@ -2316,7 +2316,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "one",
+ "Person": "1",
"Polarity": "Neg",
"Reflex": "Yes",
"Tense": "Pres",
@@ -2326,7 +2326,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "one",
+ "Person": "1",
"Polarity": "Pos",
"Tense": "Pres",
"VerbForm": "Fin",
@@ -2335,7 +2335,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "one",
+ "Person": "1",
"Polarity": "Pos",
"Tense": "Pres",
"VerbForm": "Fin",
@@ -2344,7 +2344,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "one",
+ "Person": "1",
"Polarity": "Pos",
"Reflex": "Yes",
"Tense": "Pres",
@@ -2354,7 +2354,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "one",
+ "Person": "1",
"Polarity": "Neg",
"Tense": "Pres",
"VerbForm": "Fin",
@@ -2363,7 +2363,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "one",
+ "Person": "1",
"Polarity": "Neg",
"Reflex": "Yes",
"Tense": "Pres",
@@ -2373,7 +2373,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "two",
+ "Person": "2",
"Polarity": "Pos",
"Tense": "Pres",
"VerbForm": "Fin",
@@ -2382,7 +2382,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "two",
+ "Person": "2",
"Polarity": "Pos",
"Reflex": "Yes",
"Tense": "Pres",
@@ -2392,7 +2392,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "two",
+ "Person": "2",
"Polarity": "Neg",
"Tense": "Pres",
"VerbForm": "Fin",
@@ -2401,7 +2401,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "two",
+ "Person": "2",
"Polarity": "Neg",
"Reflex": "Yes",
"Tense": "Pres",
@@ -2411,7 +2411,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "two",
+ "Person": "2",
"Polarity": "Pos",
"Tense": "Pres",
"VerbForm": "Fin",
@@ -2420,7 +2420,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "two",
+ "Person": "2",
"Polarity": "Pos",
"Reflex": "Yes",
"Tense": "Pres",
@@ -2430,7 +2430,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "two",
+ "Person": "2",
"Polarity": "Neg",
"Tense": "Pres",
"VerbForm": "Fin",
@@ -2438,7 +2438,7 @@ TAG_MAP = {
"Vgmp3---n--ni-": {
POS: VERB,
"Mood": "Ind",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Tense": "Pres",
"VerbForm": "Fin",
@@ -2446,7 +2446,7 @@ TAG_MAP = {
"Vgmp3---n--yi-": {
POS: VERB,
"Mood": "Ind",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Reflex": "Yes",
"Tense": "Pres",
@@ -2455,7 +2455,7 @@ TAG_MAP = {
"Vgmp3---y--ni-": {
POS: VERB,
"Mood": "Ind",
- "Person": "three",
+ "Person": "3",
"Polarity": "Neg",
"Tense": "Pres",
"VerbForm": "Fin",
@@ -2463,7 +2463,7 @@ TAG_MAP = {
"Vgmp3---y--yi-": {
POS: VERB,
"Mood": "Ind",
- "Person": "three",
+ "Person": "3",
"Polarity": "Neg",
"Reflex": "Yes",
"Tense": "Pres",
@@ -2473,7 +2473,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Tense": "Pres",
"VerbForm": "Fin",
@@ -2482,7 +2482,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Reflex": "Yes",
"Tense": "Pres",
@@ -2492,7 +2492,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "three",
+ "Person": "3",
"Polarity": "Neg",
"Tense": "Pres",
"VerbForm": "Fin",
@@ -2501,7 +2501,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Plur",
- "Person": "three",
+ "Person": "3",
"Polarity": "Neg",
"Reflex": "Yes",
"Tense": "Pres",
@@ -2511,7 +2511,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Tense": "Pres",
"VerbForm": "Fin",
@@ -2520,7 +2520,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Tense": "Pres",
"VerbForm": "Fin",
@@ -2529,7 +2529,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Tense": "Pres",
"VerbForm": "Fin",
@@ -2538,7 +2538,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Reflex": "Yes",
"Tense": "Pres",
@@ -2548,7 +2548,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "three",
+ "Person": "3",
"Polarity": "Neg",
"Tense": "Pres",
"VerbForm": "Fin",
@@ -2557,7 +2557,7 @@ TAG_MAP = {
POS: VERB,
"Mood": "Ind",
"Number": "Sing",
- "Person": "three",
+ "Person": "3",
"Polarity": "Neg",
"Reflex": "Yes",
"Tense": "Pres",
@@ -2568,7 +2568,7 @@ TAG_MAP = {
"Aspect": "Hab",
"Mood": "Ind",
"Number": "Sing",
- "Person": "one",
+ "Person": "1",
"Polarity": "Pos",
"Tense": "Past",
"VerbForm": "Fin",
@@ -2578,7 +2578,7 @@ TAG_MAP = {
"Aspect": "Hab",
"Mood": "Ind",
"Number": "Sing",
- "Person": "one",
+ "Person": "1",
"Polarity": "Pos",
"Reflex": "Yes",
"Tense": "Past",
@@ -2589,7 +2589,7 @@ TAG_MAP = {
"Aspect": "Hab",
"Mood": "Ind",
"Number": "Sing",
- "Person": "one",
+ "Person": "1",
"Polarity": "Neg",
"Tense": "Past",
"VerbForm": "Fin",
@@ -2599,7 +2599,7 @@ TAG_MAP = {
"Aspect": "Hab",
"Mood": "Ind",
"Number": "Sing",
- "Person": "two",
+ "Person": "2",
"Polarity": "Pos",
"Tense": "Past",
"VerbForm": "Fin",
@@ -2608,7 +2608,7 @@ TAG_MAP = {
POS: VERB,
"Aspect": "Hab",
"Mood": "Ind",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Tense": "Past",
"VerbForm": "Fin",
@@ -2618,7 +2618,7 @@ TAG_MAP = {
"Aspect": "Hab",
"Mood": "Ind",
"Number": "Plur",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Tense": "Past",
"VerbForm": "Fin",
@@ -2628,7 +2628,7 @@ TAG_MAP = {
"Aspect": "Hab",
"Mood": "Ind",
"Number": "Plur",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Reflex": "Yes",
"Tense": "Past",
@@ -2639,7 +2639,7 @@ TAG_MAP = {
"Aspect": "Hab",
"Mood": "Ind",
"Number": "Sing",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Tense": "Past",
"VerbForm": "Fin",
@@ -2649,7 +2649,7 @@ TAG_MAP = {
"Aspect": "Hab",
"Mood": "Ind",
"Number": "Sing",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Reflex": "Yes",
"Tense": "Past",
@@ -2660,7 +2660,7 @@ TAG_MAP = {
"Aspect": "Hab",
"Mood": "Ind",
"Number": "Sing",
- "Person": "three",
+ "Person": "3",
"Polarity": "Neg",
"Tense": "Past",
"VerbForm": "Fin",
@@ -2670,7 +2670,7 @@ TAG_MAP = {
"Aspect": "Perf",
"Mood": "Ind",
"Number": "Sing",
- "Person": "three",
+ "Person": "3",
"Polarity": "Pos",
"Tense": "Past",
"VerbForm": "Fin",
diff --git a/spacy/lang/nl/lemmatizer/__init__.py b/spacy/lang/nl/lemmatizer/__init__.py
index ee4eaabb3..1e5d9aa1f 100644
--- a/spacy/lang/nl/lemmatizer/__init__.py
+++ b/spacy/lang/nl/lemmatizer/__init__.py
@@ -73,7 +73,7 @@ class DutchLemmatizer(object):
return [lemma[0]]
except KeyError:
pass
- # string corresponds to key in lookup table
+ # string corresponds to key in lookup table
lookup_table = self.lookup_table
looked_up_lemma = lookup_table.get(string)
if looked_up_lemma and looked_up_lemma in lemma_index:
@@ -103,12 +103,9 @@ class DutchLemmatizer(object):
# Overrides parent method so that a lowercased version of the string is
# used to search the lookup table. This is necessary because our lookup
# table consists entirely of lowercase keys.
- def lookup(self, string, orth=None):
+ def lookup(self, string):
string = string.lower()
- if orth is not None:
- return self.lookup_table.get(orth, string)
- else:
- return self.lookup_table.get(string, string)
+ return self.lookup_table.get(string, string)
def noun(self, string, morphology=None):
return self(string, "noun", morphology)
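The simplified Dutch `lookup` drops the `orth` fast path entirely: the query is lowercased, looked up directly, and falls back to the input string. A standalone sketch of that behaviour (the class name and sample table entries are made up for illustration):

```python
class LowercaseLookupLemmatizer:
    """Minimal stand-in for the lowercase-keyed lookup behaviour."""

    def __init__(self, lookup_table):
        self.lookup_table = lookup_table

    def lookup(self, string):
        # The table consists entirely of lowercase keys, so lowercase first.
        string = string.lower()
        return self.lookup_table.get(string, string)

lemmatizer = LowercaseLookupLemmatizer({"katten": "kat"})
```

Unknown words simply come back unchanged, which is what callers relied on before.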
diff --git a/spacy/lang/ru/lemmatizer.py b/spacy/lang/ru/lemmatizer.py
index 70120566b..300d61c52 100644
--- a/spacy/lang/ru/lemmatizer.py
+++ b/spacy/lang/ru/lemmatizer.py
@@ -73,7 +73,7 @@ class RussianLemmatizer(Lemmatizer):
if (
feature in morphology
and feature in analysis_morph
- and morphology[feature].lower() != analysis_morph[feature].lower()
+ and morphology[feature] != analysis_morph[feature]
):
break
else:
@@ -115,7 +115,7 @@ class RussianLemmatizer(Lemmatizer):
def pron(self, string, morphology=None):
return self(string, "pron", morphology)
- def lookup(self, string, orth=None):
+ def lookup(self, string):
analyses = self._morph.parse(string)
if len(analyses) == 1:
return analyses[0].normal_form
diff --git a/spacy/lang/uk/lemmatizer.py b/spacy/lang/uk/lemmatizer.py
index d40bdf2df..ab56c824d 100644
--- a/spacy/lang/uk/lemmatizer.py
+++ b/spacy/lang/uk/lemmatizer.py
@@ -70,7 +70,7 @@ class UkrainianLemmatizer(Lemmatizer):
if (
feature in morphology
and feature in analysis_morph
- and morphology[feature].lower() != analysis_morph[feature].lower()
+ and morphology[feature] != analysis_morph[feature]
):
break
else:
@@ -112,7 +112,7 @@ class UkrainianLemmatizer(Lemmatizer):
def pron(self, string, morphology=None):
return self(string, "pron", morphology)
- def lookup(self, string, orth=None):
+ def lookup(self, string):
analyses = self._morph.parse(string)
if len(analyses) == 1:
return analyses[0].normal_form
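Both the Russian and Ukrainian lemmatizers above now compare morphological feature values exactly instead of case-insensitively. The filtering condition in isolation reduces to something like this (feature names and analyses are mocked; the real code iterates the analyzer's parses):

```python
def matches_morphology(morphology, analysis_morph):
    """Reject an analysis when any shared feature disagrees (exact match)."""
    for feature in ("Case", "Number", "Gender"):
        if (
            feature in morphology
            and feature in analysis_morph
            # No .lower() any more: "Nom" and "nom" are now different values.
            and morphology[feature] != analysis_morph[feature]
        ):
            return False
    return True
```

With the `.lower()` calls removed, feature values must already be normalized consistently on both sides.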
diff --git a/spacy/language.py b/spacy/language.py
index a28f2a84e..09dd22cf2 100644
--- a/spacy/language.py
+++ b/spacy/language.py
@@ -20,7 +20,6 @@ from .pipeline import Tensorizer, EntityRecognizer, EntityLinker
from .pipeline import SimilarityHook, TextCategorizer, Sentencizer
from .pipeline import merge_noun_chunks, merge_entities, merge_subtokens
from .pipeline import EntityRuler
-from .pipeline import Morphologizer
from .compat import izip, basestring_
from .gold import GoldParse
from .scorer import Scorer
@@ -39,8 +38,6 @@ from . import about
class BaseDefaults(object):
@classmethod
def create_lemmatizer(cls, nlp=None, lookups=None):
- if lookups is None:
- lookups = cls.create_lookups(nlp=nlp)
rules, index, exc, lookup = util.get_lemma_tables(lookups)
return Lemmatizer(index, exc, rules, lookup)
@@ -111,8 +108,6 @@ class BaseDefaults(object):
syntax_iterators = {}
resources = {}
writing_system = {"direction": "ltr", "has_case": True, "has_letters": True}
- single_orth_variants = []
- paired_orth_variants = []
class Language(object):
@@ -133,7 +128,6 @@ class Language(object):
"tokenizer": lambda nlp: nlp.Defaults.create_tokenizer(nlp),
"tensorizer": lambda nlp, **cfg: Tensorizer(nlp.vocab, **cfg),
"tagger": lambda nlp, **cfg: Tagger(nlp.vocab, **cfg),
- "morphologizer": lambda nlp, **cfg: Morphologizer(nlp.vocab, **cfg),
"parser": lambda nlp, **cfg: DependencyParser(nlp.vocab, **cfg),
"ner": lambda nlp, **cfg: EntityRecognizer(nlp.vocab, **cfg),
"entity_linker": lambda nlp, **cfg: EntityLinker(nlp.vocab, **cfg),
@@ -257,8 +251,7 @@ class Language(object):
@property
def pipe_labels(self):
- """Get the labels set by the pipeline components, if available (if
- the component exposes a labels property).
+ """Get the labels set by the pipeline components, if available.
RETURNS (dict): Labels keyed by component name.
"""
@@ -449,25 +442,6 @@ class Language(object):
def make_doc(self, text):
return self.tokenizer(text)
- def _format_docs_and_golds(self, docs, golds):
- """Format golds and docs before update models."""
- expected_keys = ("words", "tags", "heads", "deps", "entities", "cats", "links")
- gold_objs = []
- doc_objs = []
- for doc, gold in zip(docs, golds):
- if isinstance(doc, basestring_):
- doc = self.make_doc(doc)
- if not isinstance(gold, GoldParse):
- unexpected = [k for k in gold if k not in expected_keys]
- if unexpected:
- err = Errors.E151.format(unexp=unexpected, exp=expected_keys)
- raise ValueError(err)
- gold = GoldParse(doc, **gold)
- doc_objs.append(doc)
- gold_objs.append(gold)
-
- return doc_objs, gold_objs
-
def update(self, docs, golds, drop=0.0, sgd=None, losses=None, component_cfg=None):
"""Update the models in the pipeline.
@@ -481,6 +455,7 @@ class Language(object):
DOCS: https://spacy.io/api/language#update
"""
+ expected_keys = ("words", "tags", "heads", "deps", "entities", "cats", "links")
if len(docs) != len(golds):
raise IndexError(Errors.E009.format(n_docs=len(docs), n_golds=len(golds)))
if len(docs) == 0:
@@ -490,7 +465,21 @@ class Language(object):
self._optimizer = create_default_optimizer(Model.ops)
sgd = self._optimizer
# Allow dict of args to GoldParse, instead of GoldParse objects.
- docs, golds = self._format_docs_and_golds(docs, golds)
+ gold_objs = []
+ doc_objs = []
+ for doc, gold in zip(docs, golds):
+ if isinstance(doc, basestring_):
+ doc = self.make_doc(doc)
+ if not isinstance(gold, GoldParse):
+ unexpected = [k for k in gold if k not in expected_keys]
+ if unexpected:
+ err = Errors.E151.format(unexp=unexpected, exp=expected_keys)
+ raise ValueError(err)
+ gold = GoldParse(doc, **gold)
+ doc_objs.append(doc)
+ gold_objs.append(gold)
+ golds = gold_objs
+ docs = doc_objs
grads = {}
def get_grads(W, dW, key=None):
@@ -594,7 +583,6 @@ class Language(object):
# Populate vocab
else:
for _, annots_brackets in get_gold_tuples():
- _ = annots_brackets.pop()
for annots, _ in annots_brackets:
for word in annots[1]:
_ = self.vocab[word] # noqa: F841
@@ -663,7 +651,7 @@ class Language(object):
DOCS: https://spacy.io/api/language#evaluate
"""
if scorer is None:
- scorer = Scorer(pipeline=self.pipeline)
+ scorer = Scorer()
if component_cfg is None:
component_cfg = {}
docs, golds = zip(*docs_golds)
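The `language.py` hunk above inlines the former `_format_docs_and_golds` helper back into `update`: raw strings become `Doc`s and plain dicts are validated against the expected annotation keys before being wrapped in `GoldParse`. The validation step in isolation looks like this (with a generic error message standing in for spaCy's `Errors.E151`):

```python
EXPECTED_KEYS = ("words", "tags", "heads", "deps", "entities", "cats", "links")

def validate_gold_dict(gold):
    """Raise if a gold-annotation dict carries keys GoldParse would reject."""
    unexpected = [key for key in gold if key not in EXPECTED_KEYS]
    if unexpected:
        raise ValueError("Unexpected gold keys: %s" % ", ".join(unexpected))
    return gold
```

Failing fast here surfaces typos in annotation dicts before they silently disappear inside `GoldParse(doc, **gold)`.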
diff --git a/spacy/lemmatizer.py b/spacy/lemmatizer.py
index 26c2227a0..f9e35f44a 100644
--- a/spacy/lemmatizer.py
+++ b/spacy/lemmatizer.py
@@ -2,7 +2,8 @@
from __future__ import unicode_literals
from collections import OrderedDict
-from .symbols import NOUN, VERB, ADJ, PUNCT, PROPN
+from .symbols import POS, NOUN, VERB, ADJ, PUNCT, PROPN
+from .symbols import VerbForm_inf, VerbForm_none, Number_sing, Degree_pos
class Lemmatizer(object):
@@ -54,8 +55,12 @@ class Lemmatizer(object):
Check whether we're dealing with an uninflected paradigm, so we can
avoid lemmatization entirely.
"""
- if morphology is None:
- morphology = {}
+ morphology = {} if morphology is None else morphology
+ others = [
+ key
+ for key in morphology
+ if key not in (POS, "Number", "POS", "VerbForm", "Tense")
+ ]
if univ_pos == "noun" and morphology.get("Number") == "sing":
return True
elif univ_pos == "verb" and morphology.get("VerbForm") == "inf":
@@ -66,17 +71,18 @@ class Lemmatizer(object):
morphology.get("VerbForm") == "fin"
and morphology.get("Tense") == "pres"
and morphology.get("Number") is None
+ and not others
):
return True
elif univ_pos == "adj" and morphology.get("Degree") == "pos":
return True
- elif morphology.get("VerbForm") == "inf":
+ elif VerbForm_inf in morphology:
return True
- elif morphology.get("VerbForm") == "none":
+ elif VerbForm_none in morphology:
return True
- elif morphology.get("VerbForm") == "inf":
+ elif Number_sing in morphology:
return True
- elif morphology.get("Degree") == "pos":
+ elif Degree_pos in morphology:
return True
else:
return False
@@ -93,19 +99,9 @@ class Lemmatizer(object):
def punct(self, string, morphology=None):
return self(string, "punct", morphology)
- def lookup(self, string, orth=None):
- """Look up a lemma in the table, if available. If no lemma is found,
- the original string is returned.
-
- string (unicode): The original string.
- orth (int): Optional hash of the string to look up. If not set, the
- string will be used and hashed.
- RETURNS (unicode): The lemma if the string was found, otherwise the
- original string.
- """
- key = orth if orth is not None else string
- if key in self.lookup_table:
- return self.lookup_table[key]
+ def lookup(self, string):
+ if string in self.lookup_table:
+ return self.lookup_table[string]
return string
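The reworked base-form check above adds an `others` guard so that a present-tense finite verb only counts as a base form when no unexpected features are set. A simplified sketch of that logic (string keys only; the `POS` and `VerbForm_inf`-style symbol constants are omitted):

```python
def is_base_form(univ_pos, morphology=None):
    """Return True when the analysis already describes an uninflected form."""
    morphology = {} if morphology is None else morphology
    others = [
        key for key in morphology
        if key not in ("Number", "POS", "VerbForm", "Tense")
    ]
    if univ_pos == "noun" and morphology.get("Number") == "sing":
        return True
    if univ_pos == "verb" and morphology.get("VerbForm") == "inf":
        return True
    if univ_pos == "verb" and (
        morphology.get("VerbForm") == "fin"
        and morphology.get("Tense") == "pres"
        and morphology.get("Number") is None
        and not others  # any extra feature means it may be inflected
    ):
        return True
    if univ_pos == "adj" and morphology.get("Degree") == "pos":
        return True
    return False
```

Without the `others` guard, a finite present-tense verb carrying, say, an extra `Case` feature would wrongly skip lemmatization.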
diff --git a/spacy/lookups.py b/spacy/lookups.py
index 05a60f289..801b4d00d 100644
--- a/spacy/lookups.py
+++ b/spacy/lookups.py
@@ -1,13 +1,11 @@
-# coding: utf-8
+# coding: utf8
from __future__ import unicode_literals
import srsly
from collections import OrderedDict
-from preshed.bloom import BloomFilter
from .errors import Errors
from .util import SimpleFrozenDict, ensure_path
-from .strings import get_string_id
class Lookups(object):
@@ -16,14 +14,16 @@ class Lookups(object):
so they can be accessed before the pipeline components are applied (e.g.
in the tokenizer and lemmatizer), as well as within the pipeline components
via doc.vocab.lookups.
+
+ Important note: At the moment, this class only performs a very basic
+ dictionary lookup. We're planning to replace this with a more efficient
+ implementation. See #3971 for details.
"""
def __init__(self):
"""Initialize the Lookups object.
RETURNS (Lookups): The newly created object.
-
- DOCS: https://spacy.io/api/lookups#init
"""
self._tables = OrderedDict()
@@ -32,7 +32,7 @@ class Lookups(object):
Lookups.has_table.
name (unicode): Name of the table.
- RETURNS (bool): Whether a table of that name is in the lookups.
+ RETURNS (bool): Whether a table of that name exists.
"""
return self.has_table(name)
@@ -51,12 +51,11 @@ class Lookups(object):
name (unicode): Unique name of table.
data (dict): Optional data to add to the table.
RETURNS (Table): The newly added table.
-
- DOCS: https://spacy.io/api/lookups#add_table
"""
if name in self.tables:
raise ValueError(Errors.E158.format(name=name))
- table = Table(name=name, data=data)
+ table = Table(name=name)
+ table.update(data)
self._tables[name] = table
return table
@@ -65,8 +64,6 @@ class Lookups(object):
name (unicode): Name of the table.
RETURNS (Table): The table.
-
- DOCS: https://spacy.io/api/lookups#get_table
"""
if name not in self._tables:
raise KeyError(Errors.E159.format(name=name, tables=self.tables))
@@ -75,10 +72,8 @@ class Lookups(object):
def remove_table(self, name):
"""Remove a table. Raises an error if the table doesn't exist.
- name (unicode): Name of the table to remove.
+ name (unicode): The name to remove.
RETURNS (Table): The removed table.
-
- DOCS: https://spacy.io/api/lookups#remove_table
"""
if name not in self._tables:
raise KeyError(Errors.E159.format(name=name, tables=self.tables))
@@ -89,57 +84,45 @@ class Lookups(object):
name (unicode): Name of the table.
RETURNS (bool): Whether a table of that name exists.
-
- DOCS: https://spacy.io/api/lookups#has_table
"""
return name in self._tables
- def to_bytes(self, **kwargs):
+ def to_bytes(self, exclude=tuple(), **kwargs):
"""Serialize the lookups to a bytestring.
+ exclude (list): String names of serialization fields to exclude.
RETURNS (bytes): The serialized Lookups.
-
- DOCS: https://spacy.io/api/lookups#to_bytes
"""
return srsly.msgpack_dumps(self._tables)
- def from_bytes(self, bytes_data, **kwargs):
+ def from_bytes(self, bytes_data, exclude=tuple(), **kwargs):
"""Load the lookups from a bytestring.
- bytes_data (bytes): The data to load.
- RETURNS (Lookups): The loaded Lookups.
-
- DOCS: https://spacy.io/api/lookups#from_bytes
+ exclude (list): String names of serialization fields to exclude.
+ RETURNS (Lookups): The loaded Lookups.
"""
- for key, value in srsly.msgpack_loads(bytes_data).items():
- self._tables[key] = Table(key)
- self._tables[key].update(value)
+ self._tables = OrderedDict()
+ msg = srsly.msgpack_loads(bytes_data)
+ for key, value in msg.items():
+ self._tables[key] = Table.from_dict(value)
return self
def to_disk(self, path, **kwargs):
- """Save the lookups to a directory as lookups.bin. Expects a path to a
- directory, which will be created if it doesn't exist.
+ """Save the lookups to a directory as lookups.bin.
path (unicode / Path): The file path.
-
- DOCS: https://spacy.io/api/lookups#to_disk
"""
if len(self._tables):
path = ensure_path(path)
- if not path.exists():
- path.mkdir()
filepath = path / "lookups.bin"
with filepath.open("wb") as file_:
file_.write(self.to_bytes())
def from_disk(self, path, **kwargs):
- """Load lookups from a directory containing a lookups.bin. Will skip
- loading if the file doesn't exist.
+ """Load lookups from a directory containing a lookups.bin.
- path (unicode / Path): The directory path.
+ path (unicode / Path): The directory path.
RETURNS (Lookups): The loaded lookups.
-
- DOCS: https://spacy.io/api/lookups#from_disk
"""
path = ensure_path(path)
filepath = path / "lookups.bin"
@@ -153,118 +136,22 @@ class Lookups(object):
class Table(OrderedDict):
"""A table in the lookups. Subclass of builtin dict that implements a
slightly more consistent and unified API.
-
- Includes a Bloom filter to speed up missed lookups.
"""
-
@classmethod
def from_dict(cls, data, name=None):
- """Initialize a new table from a dict.
-
- data (dict): The dictionary.
- name (unicode): Optional table name for reference.
- RETURNS (Table): The newly created object.
-
- DOCS: https://spacy.io/api/lookups#table.from_dict
- """
self = cls(name=name)
self.update(data)
return self
- def __init__(self, name=None, data=None):
+ def __init__(self, name=None):
"""Initialize a new table.
name (unicode): Optional table name for reference.
- data (dict): Initial data, used to hint Bloom Filter.
RETURNS (Table): The newly created object.
-
- DOCS: https://spacy.io/api/lookups#table.init
"""
OrderedDict.__init__(self)
self.name = name
- # Assume a default size of 1M items
- self.default_size = 1e6
- size = len(data) if data and len(data) > 0 else self.default_size
- self.bloom = BloomFilter.from_error_rate(size)
- if data:
- self.update(data)
-
- def __setitem__(self, key, value):
- """Set new key/value pair. String keys will be hashed.
-
- key (unicode / int): The key to set.
- value: The value to set.
- """
- key = get_string_id(key)
- OrderedDict.__setitem__(self, key, value)
- self.bloom.add(key)
def set(self, key, value):
- """Set new key/value pair. String keys will be hashed.
- Same as table[key] = value.
-
- key (unicode / int): The key to set.
- value: The value to set.
- """
+ """Set new key/value pair. Same as table[key] = value."""
self[key] = value
-
- def __getitem__(self, key):
- """Get the value for a given key. String keys will be hashed.
-
- key (unicode / int): The key to get.
- RETURNS: The value.
- """
- key = get_string_id(key)
- return OrderedDict.__getitem__(self, key)
-
- def get(self, key, default=None):
- """Get the value for a given key. String keys will be hashed.
-
- key (unicode / int): The key to get.
- default: The default value to return.
- RETURNS: The value.
- """
- key = get_string_id(key)
- return OrderedDict.get(self, key, default)
-
- def __contains__(self, key):
- """Check whether a key is in the table. String keys will be hashed.
-
- key (unicode / int): The key to check.
- RETURNS (bool): Whether the key is in the table.
- """
- key = get_string_id(key)
- # This can give a false positive, so we need to check it after
- if key not in self.bloom:
- return False
- return OrderedDict.__contains__(self, key)
-
- def to_bytes(self):
- """Serialize table to a bytestring.
-
- RETURNS (bytes): The serialized table.
-
- DOCS: https://spacy.io/api/lookups#table.to_bytes
- """
- data = [
- ("name", self.name),
- ("dict", dict(self.items())),
- ("bloom", self.bloom.to_bytes()),
- ]
- return srsly.msgpack_dumps(OrderedDict(data))
-
- def from_bytes(self, bytes_data):
- """Load a table from a bytestring.
-
- bytes_data (bytes): The data to load.
- RETURNS (Table): The loaded table.
-
- DOCS: https://spacy.io/api/lookups#table.from_bytes
- """
- loaded = srsly.msgpack_loads(bytes_data)
- data = loaded.get("dict", {})
- self.name = loaded["name"]
- self.bloom = BloomFilter().from_bytes(loaded["bloom"])
- self.clear()
- self.update(data)
- return self
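The hunk above reduces `Table` to a named `OrderedDict` and has `add_table` build an empty table and then `update` it with the data. A minimal pure-Python sketch of that simplified API (stand-in classes, not spaCy's actual implementation; the error message is hypothetical):

```python
from collections import OrderedDict


class Table(OrderedDict):
    """Minimal stand-in for the simplified Table: a named OrderedDict."""

    def __init__(self, name=None):
        OrderedDict.__init__(self)
        self.name = name


class Lookups(object):
    def __init__(self):
        self._tables = OrderedDict()

    def add_table(self, name, data=None):
        # Mirrors the patched add_table: create an empty Table, then update it
        if name in self._tables:
            raise ValueError("table '{}' already exists".format(name))
        table = Table(name=name)
        if data is not None:
            table.update(data)
        self._tables[name] = table
        return table


lookups = Lookups()
table = lookups.add_table("lemma_lookup", {"going": "go"})
```

Note that the patched spaCy code calls `table.update(data)` unconditionally; the sketch guards against `data=None`, since `dict.update(None)` raises a `TypeError`.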
diff --git a/spacy/matcher/matcher.pyx b/spacy/matcher/matcher.pyx
index 950a7b977..c698c8024 100644
--- a/spacy/matcher/matcher.pyx
+++ b/spacy/matcher/matcher.pyx
@@ -103,8 +103,6 @@ cdef class Matcher:
*patterns (list): List of token descriptions.
"""
errors = {}
- if on_match is not None and not hasattr(on_match, "__call__"):
- raise ValueError(Errors.E171.format(arg_type=type(on_match)))
for i, pattern in enumerate(patterns):
if len(pattern) == 0:
raise ValueError(Errors.E012.format(key=key))
@@ -164,37 +162,18 @@ cdef class Matcher:
return default
return (self._callbacks[key], self._patterns[key])
- def pipe(self, docs, batch_size=1000, n_threads=-1, return_matches=False,
- as_tuples=False):
+ def pipe(self, docs, batch_size=1000, n_threads=-1):
"""Match a stream of documents, yielding them in turn.
docs (iterable): A stream of documents.
batch_size (int): Number of documents to accumulate into a working set.
- return_matches (bool): Yield the match lists along with the docs, making
- results (doc, matches) tuples.
- as_tuples (bool): Interpret the input stream as (doc, context) tuples,
- and yield (result, context) tuples out.
- If both return_matches and as_tuples are True, the output will
- be a sequence of ((doc, matches), context) tuples.
YIELDS (Doc): Documents, in order.
"""
if n_threads != -1:
deprecation_warning(Warnings.W016)
-
- if as_tuples:
- for doc, context in docs:
- matches = self(doc)
- if return_matches:
- yield ((doc, matches), context)
- else:
- yield (doc, context)
- else:
- for doc in docs:
- matches = self(doc)
- if return_matches:
- yield (doc, matches)
- else:
- yield doc
+ for doc in docs:
+ self(doc)
+ yield doc
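After this change, `pipe` no longer yields matches alongside documents: it calls the matcher on each doc purely for its side effects (firing `on_match` callbacks) and yields the doc unchanged. A toy pure-Python matcher sketching that calling pattern (hypothetical class, using plain token lists in place of `Doc` objects):

```python
class TinyMatcher:
    """Pure-Python sketch of the simplified pipe(): match for side effects
    (on_match callbacks), then yield each doc unchanged."""

    def __init__(self):
        self._patterns = {}
        self._callbacks = {}

    def add(self, key, on_match, pattern):
        self._patterns[key] = pattern
        self._callbacks[key] = on_match

    def __call__(self, doc):
        # Exact-sequence matching over token lists, enough to drive callbacks
        matches = []
        for key, pattern in self._patterns.items():
            n = len(pattern)
            for start in range(len(doc) - n + 1):
                if doc[start:start + n] == pattern:
                    matches.append((key, start, start + n))
        for i, (key, start, end) in enumerate(matches):
            cb = self._callbacks.get(key)
            if cb is not None:
                cb(self, doc, i, matches)
        return matches

    def pipe(self, docs, batch_size=1000):
        # Matches are no longer yielded with the docs; collect them via
        # on_match callbacks instead
        for doc in docs:
            self(doc)
            yield doc


found = []
matcher = TinyMatcher()
matcher.add("HELLO", lambda m, d, i, ms: found.append(ms[i]), ["hello", "world"])
docs = [["hello", "world"], ["bye"]]
out = list(matcher.pipe(docs))
```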
def __call__(self, Doc doc):
"""Find all token sequences matching the supplied pattern.
diff --git a/spacy/matcher/phrasematcher.pxd b/spacy/matcher/phrasematcher.pxd
index 753b2da74..3aba1686f 100644
--- a/spacy/matcher/phrasematcher.pxd
+++ b/spacy/matcher/phrasematcher.pxd
@@ -1,27 +1,5 @@
from libcpp.vector cimport vector
-from cymem.cymem cimport Pool
-from preshed.maps cimport key_t, MapStruct
+from ..typedefs cimport hash_t
-from ..attrs cimport attr_id_t
-from ..tokens.doc cimport Doc
-from ..vocab cimport Vocab
-
-
-cdef class PhraseMatcher:
- cdef Vocab vocab
- cdef attr_id_t attr
- cdef object _callbacks
- cdef object _docs
- cdef bint _validate
- cdef MapStruct* c_map
- cdef Pool mem
- cdef key_t _terminal_hash
-
- cdef void find_matches(self, Doc doc, vector[MatchStruct] *matches) nogil
-
-
-cdef struct MatchStruct:
- key_t match_id
- int start
- int end
+ctypedef vector[hash_t] hash_vec
diff --git a/spacy/matcher/phrasematcher.pyx b/spacy/matcher/phrasematcher.pyx
index b6c9e01d2..9e8801cc1 100644
--- a/spacy/matcher/phrasematcher.pyx
+++ b/spacy/matcher/phrasematcher.pyx
@@ -2,16 +2,28 @@
# cython: profile=True
from __future__ import unicode_literals
-from libc.stdint cimport uintptr_t
+from libcpp.vector cimport vector
+from cymem.cymem cimport Pool
+from murmurhash.mrmr cimport hash64
+from preshed.maps cimport PreshMap
-from preshed.maps cimport map_init, map_set, map_get, map_clear, map_iter
-
-from ..attrs cimport ORTH, POS, TAG, DEP, LEMMA
-from ..structs cimport TokenC
-from ..tokens.token cimport Token
+from .matcher cimport Matcher
+from ..attrs cimport ORTH, POS, TAG, DEP, LEMMA, attr_id_t
+from ..vocab cimport Vocab
+from ..tokens.doc cimport Doc, get_token_attr
+from ..typedefs cimport attr_t, hash_t
from ._schemas import TOKEN_PATTERN_SCHEMA
from ..errors import Errors, Warnings, deprecation_warning, user_warning
+from ..attrs import FLAG61 as U_ENT
+from ..attrs import FLAG60 as B2_ENT
+from ..attrs import FLAG59 as B3_ENT
+from ..attrs import FLAG58 as B4_ENT
+from ..attrs import FLAG43 as L2_ENT
+from ..attrs import FLAG42 as L3_ENT
+from ..attrs import FLAG41 as L4_ENT
+from ..attrs import FLAG42 as I3_ENT
+from ..attrs import FLAG41 as I4_ENT
cdef class PhraseMatcher:
@@ -21,11 +33,18 @@ cdef class PhraseMatcher:
DOCS: https://spacy.io/api/phrasematcher
USAGE: https://spacy.io/usage/rule-based-matching#phrasematcher
-
- Adapted from FlashText: https://github.com/vi3k6i5/flashtext
- MIT License (see `LICENSE`)
- Copyright (c) 2017 Vikash Singh (vikash.duliajan@gmail.com)
"""
+ cdef Pool mem
+ cdef Vocab vocab
+ cdef Matcher matcher
+ cdef PreshMap phrase_ids
+ cdef vector[hash_vec] ent_id_matrix
+ cdef int max_length
+ cdef attr_id_t attr
+ cdef public object _callbacks
+ cdef public object _patterns
+ cdef public object _docs
+ cdef public object _validate
def __init__(self, Vocab vocab, max_length=0, attr="ORTH", validate=False):
"""Initialize the PhraseMatcher.
@@ -39,17 +58,11 @@ cdef class PhraseMatcher:
"""
if max_length != 0:
deprecation_warning(Warnings.W010)
- self.vocab = vocab
- self._callbacks = {}
- self._docs = {}
- self._validate = validate
-
self.mem = Pool()
- self.c_map = <MapStruct*>self.mem.alloc(1, sizeof(MapStruct))
- self._terminal_hash = 826361138722620965
- map_init(self.mem, self.c_map, 8)
-
- if isinstance(attr, (int, long)):
+ self.max_length = max_length
+ self.vocab = vocab
+ self.matcher = Matcher(self.vocab, validate=False)
+ if isinstance(attr, long):
self.attr = attr
else:
attr = attr.upper()
@@ -58,15 +71,28 @@ cdef class PhraseMatcher:
if attr not in TOKEN_PATTERN_SCHEMA["items"]["properties"]:
raise ValueError(Errors.E152.format(attr=attr))
self.attr = self.vocab.strings[attr]
+ self.phrase_ids = PreshMap()
+ abstract_patterns = [
+ [{U_ENT: True}],
+ [{B2_ENT: True}, {L2_ENT: True}],
+ [{B3_ENT: True}, {I3_ENT: True}, {L3_ENT: True}],
+ [{B4_ENT: True}, {I4_ENT: True}, {I4_ENT: True, "OP": "+"}, {L4_ENT: True}],
+ ]
+ self.matcher.add("Candidate", None, *abstract_patterns)
+ self._callbacks = {}
+ self._docs = {}
+ self._validate = validate
def __len__(self):
- """Get the number of match IDs added to the matcher.
+ """Get the number of rules added to the matcher. Note that this only
+ returns the number of rules (identical to the number of IDs), not the
+ number of individual patterns.
RETURNS (int): The number of rules.
DOCS: https://spacy.io/api/phrasematcher#len
"""
- return len(self._callbacks)
+ return len(self._docs)
def __contains__(self, key):
"""Check whether the matcher contains rules for a match ID.
@@ -76,79 +102,13 @@ cdef class PhraseMatcher:
DOCS: https://spacy.io/api/phrasematcher#contains
"""
- return key in self._callbacks
+ cdef hash_t ent_id = self.matcher._normalize_key(key)
+ return ent_id in self._callbacks
def __reduce__(self):
- data = (self.vocab, self._docs, self._callbacks, self.attr)
+ data = (self.vocab, self._docs, self._callbacks)
return (unpickle_matcher, data, None, None)
- def remove(self, key):
- """Remove a rule from the matcher by match ID. A KeyError is raised if
- the key does not exist.
-
- key (unicode): The match ID.
-
- DOCS: https://spacy.io/api/phrasematcher#remove
- """
- if key not in self._docs:
- raise KeyError(key)
- cdef MapStruct* current_node
- cdef MapStruct* terminal_map
- cdef MapStruct* node_pointer
- cdef void* result
- cdef key_t terminal_key
- cdef void* value
- cdef int c_i = 0
- cdef vector[MapStruct*] path_nodes
- cdef vector[key_t] path_keys
- cdef key_t key_to_remove
- for keyword in self._docs[key]:
- current_node = self.c_map
- for token in keyword:
- result = map_get(current_node, token)
- if result:
- path_nodes.push_back(current_node)
- path_keys.push_back(token)
- current_node = <MapStruct*>result
- else:
- # if token is not found, break out of the loop
- current_node = NULL
- break
- # remove the tokens from trie node if there are no other
- # keywords with them
- result = map_get(current_node, self._terminal_hash)
- if current_node != NULL and result:
- terminal_map = <MapStruct*>result
- terminal_keys = []
- c_i = 0
- while map_iter(terminal_map, &c_i, &terminal_key, &value):
- terminal_keys.append(self.vocab.strings[terminal_key])
- # if this is the only remaining key, remove unnecessary paths
- if terminal_keys == [key]:
- while not path_nodes.empty():
- node_pointer = path_nodes.back()
- path_nodes.pop_back()
- key_to_remove = path_keys.back()
- path_keys.pop_back()
- result = map_get(node_pointer, key_to_remove)
- if node_pointer.filled == 1:
- map_clear(node_pointer, key_to_remove)
- self.mem.free(result)
- else:
- # more than one key means more than 1 path,
- # delete not required path and keep the others
- map_clear(node_pointer, key_to_remove)
- self.mem.free(result)
- break
- # otherwise simply remove the key
- else:
- result = map_get(current_node, self._terminal_hash)
- if result:
- map_clear(result, self.vocab.strings[key])
-
- del self._callbacks[key]
- del self._docs[key]
-
def add(self, key, on_match, *docs):
"""Add a match-rule to the phrase-matcher. A match-rule consists of: an ID
key, an on_match callback, and one or more patterns.
@@ -159,53 +119,53 @@ cdef class PhraseMatcher:
DOCS: https://spacy.io/api/phrasematcher#add
"""
-
- _ = self.vocab[key]
- self._callbacks[key] = on_match
- self._docs.setdefault(key, set())
-
- cdef MapStruct* current_node
- cdef MapStruct* internal_node
- cdef void* result
-
+ cdef Doc doc
+ cdef hash_t ent_id = self.matcher._normalize_key(key)
+ self._callbacks[ent_id] = on_match
+ self._docs[ent_id] = docs
+ cdef int length
+ cdef int i
+ cdef hash_t phrase_hash
+ cdef Pool mem = Pool()
for doc in docs:
- if len(doc) == 0:
+ length = doc.length
+ if length == 0:
continue
- if isinstance(doc, Doc):
- if self.attr in (POS, TAG, LEMMA) and not doc.is_tagged:
- raise ValueError(Errors.E155.format())
- if self.attr == DEP and not doc.is_parsed:
- raise ValueError(Errors.E156.format())
- if self._validate and (doc.is_tagged or doc.is_parsed) \
- and self.attr not in (DEP, POS, TAG, LEMMA):
- string_attr = self.vocab.strings[self.attr]
- user_warning(Warnings.W012.format(key=key, attr=string_attr))
- keyword = self._convert_to_array(doc)
+ if self.attr in (POS, TAG, LEMMA) and not doc.is_tagged:
+ raise ValueError(Errors.E155.format())
+ if self.attr == DEP and not doc.is_parsed:
+ raise ValueError(Errors.E156.format())
+ if self._validate and (doc.is_tagged or doc.is_parsed) \
+ and self.attr not in (DEP, POS, TAG, LEMMA):
+ string_attr = self.vocab.strings[self.attr]
+ user_warning(Warnings.W012.format(key=key, attr=string_attr))
+ tags = get_biluo(length)
+ phrase_key = <attr_t*>mem.alloc(length, sizeof(attr_t))
+ for i, tag in enumerate(tags):
+ attr_value = self.get_lex_value(doc, i)
+ lexeme = self.vocab[attr_value]
+ lexeme.set_flag(tag, True)
+ phrase_key[i] = lexeme.orth
+ phrase_hash = hash64(phrase_key, length * sizeof(attr_t), 0)
+
+ if phrase_hash in self.phrase_ids:
+ phrase_index = self.phrase_ids[phrase_hash]
+ ent_id_list = self.ent_id_matrix[phrase_index]
+ ent_id_list.append(ent_id)
+ self.ent_id_matrix[phrase_index] = ent_id_list
+
else:
- keyword = doc
- self._docs[key].add(tuple(keyword))
+ ent_id_list = hash_vec(1)
+ ent_id_list[0] = ent_id
+ new_index = self.ent_id_matrix.size()
+ if new_index == 0:
+ # PreshMaps cannot contain 0 as a value, so store a dummy at index 0
+ self.ent_id_matrix.push_back(hash_vec(0))
+ new_index = 1
+ self.ent_id_matrix.push_back(ent_id_list)
+ self.phrase_ids.set(phrase_hash, new_index)
- current_node = self.c_map
- for token in keyword:
- if token == self._terminal_hash:
- user_warning(Warnings.W021)
- break
- result = map_get(current_node, token)
- if not result:
- internal_node = <MapStruct*>self.mem.alloc(1, sizeof(MapStruct))
- map_init(self.mem, internal_node, 8)
- map_set(self.mem, current_node, token, internal_node)
- result = internal_node
- current_node = result
- result = map_get(current_node, self._terminal_hash)
- if not result:
- internal_node = <MapStruct*>self.mem.alloc(1, sizeof(MapStruct))
- map_init(self.mem, internal_node, 8)
- map_set(self.mem, current_node, self._terminal_hash, internal_node)
- result = internal_node
- map_set(self.mem, result, self.vocab.strings[key], NULL)
-
- def __call__(self, doc):
+ def __call__(self, Doc doc):
"""Find all sequences matching the supplied patterns on the `Doc`.
doc (Doc): The document to match over.
@@ -216,63 +176,25 @@ cdef class PhraseMatcher:
DOCS: https://spacy.io/api/phrasematcher#call
"""
matches = []
- if doc is None or len(doc) == 0:
- # if doc is empty or None just return empty list
- return matches
-
- cdef vector[MatchStruct] c_matches
- self.find_matches(doc, &c_matches)
- for i in range(c_matches.size()):
- matches.append((c_matches[i].match_id, c_matches[i].start, c_matches[i].end))
+ if self.attr == ORTH:
+ match_doc = doc
+ else:
+ # If we're not matching on the ORTH, match_doc will be a Doc whose
+ # token.orth values are the attribute values we're matching on,
+ # e.g. Doc(nlp.vocab, words=[token.pos_ for token in doc])
+ words = [self.get_lex_value(doc, i) for i in range(len(doc))]
+ match_doc = Doc(self.vocab, words=words)
+ for _, start, end in self.matcher(match_doc):
+ ent_ids = self.accept_match(match_doc, start, end)
+ if ent_ids is not None:
+ for ent_id in ent_ids:
+ matches.append((ent_id, start, end))
for i, (ent_id, start, end) in enumerate(matches):
on_match = self._callbacks.get(ent_id)
if on_match is not None:
on_match(self, doc, i, matches)
return matches
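The restored `__call__` matches non-ORTH attributes by building a surrogate doc whose "words" are namespaced attribute values, as the inline comments explain. A hypothetical sketch of that word-construction step, using plain dicts in place of tokens:

```python
def build_match_words(tokens, attr):
    """Sketch of the surrogate-doc trick: for non-ORTH attributes, match on
    namespaced attribute values instead of surface text.

    tokens: list of dicts like {"text": "runs", "pos": "VERB"} (hypothetical
    stand-ins for spaCy Token objects).
    """
    if attr == "ORTH":
        # Match on the surface text itself
        return [t["text"] for t in tokens]
    # Namespace the value with the attribute name so e.g. the POS value
    # "VERB" can never collide with a real word "VERB" in the vocab
    return ["matcher:{}-{}".format(attr, t[attr.lower()]) for t in tokens]


words = build_match_words(
    [{"text": "She", "pos": "PRON"}, {"text": "runs", "pos": "VERB"}], "POS")
```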
- cdef void find_matches(self, Doc doc, vector[MatchStruct] *matches) nogil:
- cdef MapStruct* current_node = self.c_map
- cdef int start = 0
- cdef int idx = 0
- cdef int idy = 0
- cdef key_t key
- cdef void* value
- cdef int i = 0
- cdef MatchStruct ms
- cdef void* result
- while idx < doc.length:
- start = idx
- token = Token.get_struct_attr(&doc.c[idx], self.attr)
- # look for sequences from this position
- result = map_get(current_node, token)
- if result:
- current_node = <MapStruct*>result
- idy = idx + 1
- while idy < doc.length:
- result = map_get(current_node, self._terminal_hash)
- if result:
- i = 0
- while map_iter(result, &i, &key, &value):
- ms = make_matchstruct(key, start, idy)
- matches.push_back(ms)
- inner_token = Token.get_struct_attr(&doc.c[idy], self.attr)
- result = map_get(current_node, inner_token)
- if result:
- current_node = <MapStruct*>result
- idy += 1
- else:
- break
- else:
- # end of doc reached
- result = map_get(current_node, self._terminal_hash)
- if result:
- i = 0
- while map_iter(result, &i, &key, &value):
- ms = make_matchstruct(key, start, idy)
- matches.push_back(ms)
- current_node = self.c_map
- idx += 1
-
def pipe(self, stream, batch_size=1000, n_threads=-1, return_matches=False,
as_tuples=False):
"""Match a stream of documents, yielding them in turn.
@@ -306,21 +228,53 @@ cdef class PhraseMatcher:
else:
yield doc
- def _convert_to_array(self, Doc doc):
- return [Token.get_struct_attr(&doc.c[i], self.attr) for i in range(len(doc))]
+ def accept_match(self, Doc doc, int start, int end):
+ cdef int i, j
+ cdef Pool mem = Pool()
+ phrase_key = <attr_t*>mem.alloc(end-start, sizeof(attr_t))
+ for i, j in enumerate(range(start, end)):
+ phrase_key[i] = doc.c[j].lex.orth
+ cdef hash_t key = hash64(phrase_key, (end-start) * sizeof(attr_t), 0)
+
+ ent_index = <hash_t>self.phrase_ids.get(key)
+ if ent_index == 0:
+ return None
+ return self.ent_id_matrix[ent_index]
+
+ def get_lex_value(self, Doc doc, int i):
+ if self.attr == ORTH:
+ # Return the regular orth value of the lexeme
+ return doc.c[i].lex.orth
+ # Get the attribute value instead, e.g. token.pos
+ attr_value = get_token_attr(&doc.c[i], self.attr)
+ if attr_value in (0, 1):
+ # Value is boolean, convert to string
+ string_attr_value = str(attr_value)
+ else:
+ string_attr_value = self.vocab.strings[attr_value]
+ string_attr_name = self.vocab.strings[self.attr]
+ # Concatenate the attr name and value to not pollute lexeme space
+ # e.g. 'POS-VERB' instead of just 'VERB', which could otherwise
+ # create false positive matches
+ return "matcher:{}-{}".format(string_attr_name, string_attr_value)
-def unpickle_matcher(vocab, docs, callbacks, attr):
- matcher = PhraseMatcher(vocab, attr=attr)
+def get_biluo(length):
+ if length == 0:
+ raise ValueError(Errors.E127)
+ elif length == 1:
+ return [U_ENT]
+ elif length == 2:
+ return [B2_ENT, L2_ENT]
+ elif length == 3:
+ return [B3_ENT, I3_ENT, L3_ENT]
+ else:
+ return [B4_ENT, I4_ENT] + [I4_ENT] * (length-3) + [L4_ENT]
+
+
+def unpickle_matcher(vocab, docs, callbacks):
+ matcher = PhraseMatcher(vocab)
for key, specs in docs.items():
callback = callbacks.get(key, None)
matcher.add(key, callback, *specs)
return matcher
-
-
-cdef MatchStruct make_matchstruct(key_t match_id, int start, int end) nogil:
- cdef MatchStruct ms
- ms.match_id = match_id
- ms.start = start
- ms.end = end
- return ms
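The restored `get_biluo` picks a fixed set of BILUO-style flag tags based on phrase length, with the interior tag repeated for phrases longer than four tokens. The same dispatch, sketched with readable tag names instead of the `FLAG*` attribute aliases:

```python
def get_biluo_tags(length):
    """Mirror of get_biluo() from the patch, returning string names rather
    than the FLAG* aliases (U_ENT = FLAG61, etc.)."""
    if length == 0:
        raise ValueError("phrase length must be >= 1")
    elif length == 1:
        return ["U_ENT"]
    elif length == 2:
        return ["B2_ENT", "L2_ENT"]
    elif length == 3:
        return ["B3_ENT", "I3_ENT", "L3_ENT"]
    else:
        # Begin, inside..., last: I4_ENT repeats for every interior token,
        # so any phrase of length >= 4 uses the same four flags
        return ["B4_ENT", "I4_ENT"] + ["I4_ENT"] * (length - 3) + ["L4_ENT"]
```

This is why the abstract patterns registered in `__init__` only need four templates: one per length bucket, with `"OP": "+"` absorbing arbitrarily long interiors.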
diff --git a/spacy/morphology.pxd b/spacy/morphology.pxd
index 1a3cedf97..d0110b300 100644
--- a/spacy/morphology.pxd
+++ b/spacy/morphology.pxd
@@ -1,41 +1,301 @@
from cymem.cymem cimport Pool
-from preshed.maps cimport PreshMap, PreshMapArray
+from preshed.maps cimport PreshMapArray
from libc.stdint cimport uint64_t
-from murmurhash cimport mrmr
-from .structs cimport TokenC, MorphAnalysisC
+from .structs cimport TokenC
from .strings cimport StringStore
-from .typedefs cimport hash_t, attr_t, flags_t
+from .typedefs cimport attr_t, flags_t
from .parts_of_speech cimport univ_pos_t
from . cimport symbols
+
+cdef struct RichTagC:
+ uint64_t morph
+ int id
+ univ_pos_t pos
+ attr_t name
+
+
+cdef struct MorphAnalysisC:
+ RichTagC tag
+ attr_t lemma
+
+
cdef class Morphology:
cdef readonly Pool mem
cdef readonly StringStore strings
- cdef PreshMap tags # Keyed by hash, value is pointer to tag
-
cdef public object lemmatizer
cdef readonly object tag_map
- cdef readonly object tag_names
- cdef readonly object reverse_index
- cdef readonly object exc
- cdef readonly object _feat_map
- cdef readonly PreshMapArray _cache
- cdef readonly int n_tags
+ cdef public object n_tags
+ cdef public object reverse_index
+ cdef public object tag_names
+ cdef public object exc
+
+ cdef RichTagC* rich_tags
+ cdef PreshMapArray _cache
- cpdef update(self, hash_t morph, features)
- cdef hash_t insert(self, MorphAnalysisC tag) except 0
-
cdef int assign_untagged(self, TokenC* token) except -1
+
cdef int assign_tag(self, TokenC* token, tag) except -1
+
cdef int assign_tag_id(self, TokenC* token, int tag_id) except -1
- cdef int _assign_tag_from_exceptions(self, TokenC* token, int tag_id) except -1
+ cdef int assign_feature(self, uint64_t* morph, univ_morph_t feat_id, bint value) except -1
-cdef int check_feature(const MorphAnalysisC* tag, attr_t feature) nogil
-cdef attr_t get_field(const MorphAnalysisC* tag, int field) nogil
-cdef list list_features(const MorphAnalysisC* tag)
+cdef enum univ_morph_t:
+ NIL = 0
+ Animacy_anim = symbols.Animacy_anim
+ Animacy_inan
+ Animacy_hum
+ Animacy_nhum
+ Aspect_freq
+ Aspect_imp
+ Aspect_mod
+ Aspect_none
+ Aspect_perf
+ Case_abe
+ Case_abl
+ Case_abs
+ Case_acc
+ Case_ade
+ Case_all
+ Case_cau
+ Case_com
+ Case_dat
+ Case_del
+ Case_dis
+ Case_ela
+ Case_ess
+ Case_gen
+ Case_ill
+ Case_ine
+ Case_ins
+ Case_loc
+ Case_lat
+ Case_nom
+ Case_par
+ Case_sub
+ Case_sup
+ Case_tem
+ Case_ter
+ Case_tra
+ Case_voc
+ Definite_two
+ Definite_def
+ Definite_red
+ Definite_cons # U20
+ Definite_ind
+ Degree_cmp
+ Degree_comp
+ Degree_none
+ Degree_pos
+ Degree_sup
+ Degree_abs
+ Degree_com
+ Degree_dim # du
+ Gender_com
+ Gender_fem
+ Gender_masc
+ Gender_neut
+ Mood_cnd
+ Mood_imp
+ Mood_ind
+ Mood_n
+ Mood_pot
+ Mood_sub
+ Mood_opt
+ Negative_neg
+ Negative_pos
+ Negative_yes
+ Polarity_neg # U20
+ Polarity_pos # U20
+ Number_com
+ Number_dual
+ Number_none
+ Number_plur
+ Number_sing
+ Number_ptan # bg
+ Number_count # bg
+ NumType_card
+ NumType_dist
+ NumType_frac
+ NumType_gen
+ NumType_mult
+ NumType_none
+ NumType_ord
+ NumType_sets
+ Person_one
+ Person_two
+ Person_three
+ Person_none
+ Poss_yes
+ PronType_advPart
+ PronType_art
+ PronType_default
+ PronType_dem
+ PronType_ind
+ PronType_int
+ PronType_neg
+ PronType_prs
+ PronType_rcp
+ PronType_rel
+ PronType_tot
+ PronType_clit
+ PronType_exc # es, ca, it, fa
+ Reflex_yes
+ Tense_fut
+ Tense_imp
+ Tense_past
+ Tense_pres
+ VerbForm_fin
+ VerbForm_ger
+ VerbForm_inf
+ VerbForm_none
+ VerbForm_part
+ VerbForm_partFut
+ VerbForm_partPast
+ VerbForm_partPres
+ VerbForm_sup
+ VerbForm_trans
+ VerbForm_conv # U20
+ VerbForm_gdv # la
+ Voice_act
+ Voice_cau
+ Voice_pass
+ Voice_mid # gkc
+ Voice_int # hb
+ Abbr_yes # cz, fi, sl, U
+ AdpType_prep # cz, U
+ AdpType_post # U
+ AdpType_voc # cz
+ AdpType_comprep # cz
+ AdpType_circ # U
+ AdvType_man
+ AdvType_loc
+ AdvType_tim
+ AdvType_deg
+ AdvType_cau
+ AdvType_mod
+ AdvType_sta
+ AdvType_ex
+ AdvType_adadj
+ ConjType_oper # cz, U
+ ConjType_comp # cz, U
+ Connegative_yes # fi
+ Derivation_minen # fi
+ Derivation_sti # fi
+ Derivation_inen # fi
+ Derivation_lainen # fi
+ Derivation_ja # fi
+ Derivation_ton # fi
+ Derivation_vs # fi
+ Derivation_ttain # fi
+ Derivation_ttaa # fi
+ Echo_rdp # U
+ Echo_ech # U
+ Foreign_foreign # cz, fi, U
+ Foreign_fscript # cz, fi, U
+ Foreign_tscript # cz, U
+ Foreign_yes # sl
+ Gender_dat_masc # bq, U
+ Gender_dat_fem # bq, U
+ Gender_erg_masc # bq
+ Gender_erg_fem # bq
+ Gender_psor_masc # cz, sl, U
+ Gender_psor_fem # cz, sl, U
+ Gender_psor_neut # sl
+ Hyph_yes # cz, U
+ InfForm_one # fi
+ InfForm_two # fi
+ InfForm_three # fi
+ NameType_geo # U, cz
+ NameType_prs # U, cz
+ NameType_giv # U, cz
+ NameType_sur # U, cz
+ NameType_nat # U, cz
+ NameType_com # U, cz
+ NameType_pro # U, cz
+ NameType_oth # U, cz
+ NounType_com # U
+ NounType_prop # U
+ NounType_class # U
+ Number_abs_sing # bq, U
+ Number_abs_plur # bq, U
+ Number_dat_sing # bq, U
+ Number_dat_plur # bq, U
+ Number_erg_sing # bq, U
+ Number_erg_plur # bq, U
+ Number_psee_sing # U
+ Number_psee_plur # U
+ Number_psor_sing # cz, fi, sl, U
+ Number_psor_plur # cz, fi, sl, U
+ NumForm_digit # cz, sl, U
+ NumForm_roman # cz, sl, U
+ NumForm_word # cz, sl, U
+ NumValue_one # cz, U
+ NumValue_two # cz, U
+ NumValue_three # cz, U
+ PartForm_pres # fi
+ PartForm_past # fi
+ PartForm_agt # fi
+ PartForm_neg # fi
+ PartType_mod # U
+ PartType_emp # U
+ PartType_res # U
+ PartType_inf # U
+ PartType_vbp # U
+ Person_abs_one # bq, U
+ Person_abs_two # bq, U
+ Person_abs_three # bq, U
+ Person_dat_one # bq, U
+ Person_dat_two # bq, U
+ Person_dat_three # bq, U
+ Person_erg_one # bq, U
+ Person_erg_two # bq, U
+ Person_erg_three # bq, U
+ Person_psor_one # fi, U
+ Person_psor_two # fi, U
+ Person_psor_three # fi, U
+ Polite_inf # bq, U
+ Polite_pol # bq, U
+ Polite_abs_inf # bq, U
+ Polite_abs_pol # bq, U
+ Polite_erg_inf # bq, U
+ Polite_erg_pol # bq, U
+ Polite_dat_inf # bq, U
+ Polite_dat_pol # bq, U
+ Prefix_yes # U
+ PrepCase_npr # cz
+ PrepCase_pre # U
+ PunctSide_ini # U
+ PunctSide_fin # U
+ PunctType_peri # U
+ PunctType_qest # U
+ PunctType_excl # U
+ PunctType_quot # U
+ PunctType_brck # U
+ PunctType_comm # U
+ PunctType_colo # U
+ PunctType_semi # U
+ PunctType_dash # U
+ Style_arch # cz, fi, U
+ Style_rare # cz, fi, U
+ Style_poet # cz, U
+ Style_norm # cz, U
+ Style_coll # cz, U
+ Style_vrnc # cz, U
+ Style_sing # cz, U
+ Style_expr # cz, U
+ Style_derg # cz, U
+ Style_vulg # cz, U
+ Style_yes # fi, U
+ StyleVariant_styleShort # cz
+ StyleVariant_styleBound # cz, sl
+ VerbType_aux # U
+ VerbType_cop # U
+ VerbType_mod # U
+ VerbType_light # U
+
-cdef tag_to_json(const MorphAnalysisC* tag)
diff --git a/spacy/morphology.pyx b/spacy/morphology.pyx
index c146094a9..e9de621c8 100644
--- a/spacy/morphology.pyx
+++ b/spacy/morphology.pyx
@@ -3,83 +3,18 @@
from __future__ import unicode_literals
from libc.string cimport memset
-import srsly
-from collections import Counter
-from .compat import basestring_
-from .strings import get_string_id
-from . import symbols
from .attrs cimport POS, IS_SPACE
from .attrs import LEMMA, intify_attrs
from .parts_of_speech cimport SPACE
from .parts_of_speech import IDS as POS_IDS
from .lexeme cimport Lexeme
from .errors import Errors
-from .util import ensure_path
-
-
-cdef enum univ_field_t:
- Field_POS
- Field_Abbr
- Field_AdpType
- Field_AdvType
- Field_Animacy
- Field_Aspect
- Field_Case
- Field_ConjType
- Field_Connegative
- Field_Definite
- Field_Degree
- Field_Derivation
- Field_Echo
- Field_Foreign
- Field_Gender
- Field_Hyph
- Field_InfForm
- Field_Mood
- Field_NameType
- Field_Negative
- Field_NounType
- Field_Number
- Field_NumForm
- Field_NumType
- Field_NumValue
- Field_PartForm
- Field_PartType
- Field_Person
- Field_Polarity
- Field_Polite
- Field_Poss
- Field_Prefix
- Field_PrepCase
- Field_PronType
- Field_PunctSide
- Field_PunctType
- Field_Reflex
- Field_Style
- Field_StyleVariant
- Field_Tense
- Field_Typo
- Field_VerbForm
- Field_VerbType
- Field_Voice
def _normalize_props(props):
"""Transform deprecated string keys to correct names."""
out = {}
- props = dict(props)
- for key in FIELDS:
- if key in props:
- value = str(props[key]).lower()
- # We don't have support for disjunctive int|rel features, so
- # just take the first one :(
- if "|" in value:
- value = value.split("|")[0]
- attr = '%s_%s' % (key, value)
- if attr in FEATURES:
- props.pop(key)
- props[attr] = True
for key, value in props.items():
if key == POS:
if hasattr(value, 'upper'):
@@ -89,67 +24,17 @@ def _normalize_props(props):
out[key] = value
elif isinstance(key, int):
out[key] = value
- elif value is True:
- out[key] = value
elif key.lower() == 'pos':
out[POS] = POS_IDS[value.upper()]
- elif key.lower() != 'morph':
+ else:
out[key] = value
return out
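The simplified `_normalize_props` now only uppercases POS values and maps a lowercase `'pos'` key onto the POS attribute ID. A runnable sketch of that logic (`POS` and `POS_IDS` here are hypothetical stand-ins for spaCy's integer attribute ID and its POS-name table):

```python
POS = "POS"  # stand-in for spaCy's integer POS attribute ID
POS_IDS = {"NOUN": 92, "VERB": 100}  # hypothetical subset of POS name -> ID


def normalize_props(props):
    """Sketch of the simplified _normalize_props: uppercase POS values and
    map a lowercase 'pos' key onto the POS attribute."""
    out = {}
    for key, value in props.items():
        if key == POS:
            # Normalize string POS values to uppercase
            if hasattr(value, "upper"):
                value = value.upper()
            out[key] = value
        elif isinstance(key, int):
            # Already-intified attribute IDs pass through unchanged
            out[key] = value
        elif key.lower() == "pos":
            out[POS] = POS_IDS[value.upper()]
        else:
            out[key] = value
    return out
```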
-class MorphologyClassMap(object):
- def __init__(self, features):
- self.features = tuple(features)
- self.fields = []
- self.feat2field = {}
- seen_fields = set()
- for feature in features:
- field = feature.split("_", 1)[0]
- if field not in seen_fields:
- self.fields.append(field)
- seen_fields.add(field)
- self.feat2field[feature] = FIELDS[field]
- self.id2feat = {get_string_id(name): name for name in features}
- self.field2feats = {"POS": []}
- self.col2info = []
- self.attr2field = dict(LOWER_FIELDS.items())
- self.feat2offset = {}
- self.field2col = {}
- self.field2id = dict(FIELDS.items())
- self.fieldid2field = {field_id: field for field, field_id in FIELDS.items()}
- for feature in features:
- field = self.fields[self.feat2field[feature]]
- if field not in self.field2col:
- self.field2col[field] = len(self.col2info)
- if field != "POS" and field not in self.field2feats:
- self.col2info.append((field, 0, "NIL"))
- self.field2feats.setdefault(field, ["NIL"])
- offset = len(self.field2feats[field])
- self.field2feats[field].append(feature)
- self.col2info.append((field, offset, feature))
- self.feat2offset[feature] = offset
-
- @property
- def field_sizes(self):
- return [len(self.field2feats[field]) for field in self.fields]
-
- def get_field_offset(self, field):
- return self.field2col[field]
-
-
cdef class Morphology:
- '''Store the possible morphological analyses for a language, and index them
- by hash.
-
- To save space on each token, tokens only know the hash of their morphological
- analysis, so queries of morphological attributes are delegated
- to this class.
- '''
def __init__(self, StringStore string_store, tag_map, lemmatizer, exc=None):
self.mem = Pool()
self.strings = string_store
- self.tags = PreshMap()
# Add special space symbol. We prefix with underscore, to make sure it
# always sorts to the end.
space_attrs = tag_map.get('SP', {POS: SPACE})
@@ -162,109 +47,31 @@ cdef class Morphology:
self.lemmatizer = lemmatizer
self.n_tags = len(tag_map)
self.reverse_index = {}
- self._feat_map = MorphologyClassMap(FEATURES)
- self._load_from_tag_map(tag_map)
+
+ self.rich_tags = self.mem.alloc(self.n_tags+1, sizeof(RichTagC))
+ for i, (tag_str, attrs) in enumerate(sorted(tag_map.items())):
+ self.strings.add(tag_str)
+ self.tag_map[tag_str] = dict(attrs)
+ attrs = _normalize_props(attrs)
+ attrs = intify_attrs(attrs, self.strings, _do_deprecated=True)
+ self.rich_tags[i].id = i
+ self.rich_tags[i].name = self.strings.add(tag_str)
+ self.rich_tags[i].morph = 0
+ self.rich_tags[i].pos = attrs[POS]
+ self.reverse_index[self.rich_tags[i].name] = i
+        # Add a 'null' tag, which we can reference when assigning morphology
+        # to untagged tokens.
+ self.rich_tags[self.n_tags].id = self.n_tags
self._cache = PreshMapArray(self.n_tags)
self.exc = {}
if exc is not None:
- for (tag, orth), attrs in exc.items():
- attrs = _normalize_props(attrs)
- self.add_special_case(
- self.strings.as_string(tag), self.strings.as_string(orth), attrs)
-
- def _load_from_tag_map(self, tag_map):
- for i, (tag_str, attrs) in enumerate(sorted(tag_map.items())):
- attrs = _normalize_props(attrs)
- self.add({self._feat_map.id2feat[feat] for feat in attrs
- if feat in self._feat_map.id2feat})
- self.tag_map[tag_str] = dict(attrs)
- self.reverse_index[self.strings.add(tag_str)] = i
+ for (tag_str, orth_str), attrs in exc.items():
+ self.add_special_case(tag_str, orth_str, attrs)
def __reduce__(self):
return (Morphology, (self.strings, self.tag_map, self.lemmatizer,
- self.exc), None, None)
-
- def add(self, features):
- """Insert a morphological analysis in the morphology table, if not already
- present. Returns the hash of the new analysis.
- """
- for f in features:
- if isinstance(f, basestring_):
- self.strings.add(f)
- string_features = features
- features = intify_features(features)
- cdef attr_t feature
- for feature in features:
- if feature != 0 and feature not in self._feat_map.id2feat:
- raise ValueError(Errors.E167.format(feat=self.strings[feature], feat_id=feature))
- cdef MorphAnalysisC tag
- tag = create_rich_tag(features)
- cdef hash_t key = self.insert(tag)
- return key
-
- def get(self, hash_t morph):
- tag = self.tags.get(morph)
- if tag == NULL:
- return []
- else:
- return tag_to_json(tag)
-
- cpdef update(self, hash_t morph, features):
- """Update a morphological analysis with new feature values."""
- tag = (self.tags.get(morph))[0]
- features = intify_features(features)
- cdef attr_t feature
- for feature in features:
- field = FEATURE_FIELDS[FEATURE_NAMES[feature]]
- set_feature(&tag, field, feature, 1)
- morph = self.insert(tag)
- return morph
-
- def lemmatize(self, const univ_pos_t univ_pos, attr_t orth, morphology):
- if orth not in self.strings:
- return orth
- cdef unicode py_string = self.strings[orth]
- if self.lemmatizer is None:
- return self.strings.add(py_string.lower())
- cdef list lemma_strings
- cdef unicode lemma_string
- # Normalize features into a dict keyed by the field, to make life easier
- # for the lemmatizer. Handles string-to-int conversion too.
- string_feats = {}
- for key, value in morphology.items():
- if value is True:
- name, value = self.strings.as_string(key).split('_', 1)
- string_feats[name] = value
- else:
- string_feats[self.strings.as_string(key)] = self.strings.as_string(value)
- lemma_strings = self.lemmatizer(py_string, univ_pos, string_feats)
- lemma_string = lemma_strings[0]
- lemma = self.strings.add(lemma_string)
- return lemma
-
- def add_special_case(self, unicode tag_str, unicode orth_str, attrs,
- force=False):
- """Add a special-case rule to the morphological analyser. Tokens whose
- tag and orth match the rule will receive the specified properties.
-
- tag (unicode): The part-of-speech tag to key the exception.
- orth (unicode): The word-form to key the exception.
- """
- attrs = dict(attrs)
- attrs = _normalize_props(attrs)
- self.add({self._feat_map.id2feat[feat] for feat in attrs
- if feat in self._feat_map.id2feat})
- attrs = intify_attrs(attrs, self.strings, _do_deprecated=True)
- self.exc[(tag_str, self.strings.add(orth_str))] = attrs
-
- cdef hash_t insert(self, MorphAnalysisC tag) except 0:
- cdef hash_t key = hash_tag(tag)
- if self.tags.get(key) == NULL:
- tag_ptr = self.mem.alloc(1, sizeof(MorphAnalysisC))
- tag_ptr[0] = tag
- self.tags.set(key, tag_ptr)
- return key
+ self.exc), None, None)
cdef int assign_untagged(self, TokenC* token) except -1:
"""Set morphological attributes on a token without a POS tag. Uses
@@ -273,11 +80,12 @@ cdef class Morphology:
"""
if token.lemma == 0:
orth_str = self.strings[token.lex.orth]
- lemma = self.lemmatizer.lookup(orth_str, orth=token.lex.orth)
+ lemma = self.lemmatizer.lookup(orth_str)
token.lemma = self.strings.add(lemma)
- cdef int assign_tag(self, TokenC* token, tag_str) except -1:
- cdef attr_t tag = self.strings.as_int(tag_str)
+ cdef int assign_tag(self, TokenC* token, tag) except -1:
+ if isinstance(tag, basestring):
+ tag = self.strings.add(tag)
if tag in self.reverse_index:
tag_id = self.reverse_index[tag]
self.assign_tag_id(token, tag_id)
@@ -287,821 +95,351 @@ cdef class Morphology:
cdef int assign_tag_id(self, TokenC* token, int tag_id) except -1:
if tag_id > self.n_tags:
raise ValueError(Errors.E014.format(tag=tag_id))
- # Ensure spaces get tagged as space.
- # It seems pretty arbitrary to put this logic here, but there's really
- # nowhere better. I guess the justification is that this is where the
- # specific word and the tag interact. Still, we should have a better
- # way to enforce this rule, or figure out why the statistical model fails.
- # Related to Issue #220
+ # TODO: It's pretty arbitrary to put this logic here. I guess the
+ # justification is that this is where the specific word and the tag
+ # interact. Still, we should have a better way to enforce this rule, or
+ # figure out why the statistical model fails. Related to Issue #220
if Lexeme.c_check_flag(token.lex, IS_SPACE):
tag_id = self.reverse_index[self.strings.add('_SP')]
- tag_str = self.tag_names[tag_id]
- features = dict(self.tag_map.get(tag_str, {}))
- if features:
- pos = self.strings.as_int(features.pop(POS))
- else:
- pos = 0
- cdef attr_t lemma = self._cache.get(tag_id, token.lex.orth)
- if lemma == 0:
- # Ugh, self.lemmatize has opposite arg order from self.lemmatizer :(
- lemma = self.lemmatize(pos, token.lex.orth, features)
- self._cache.set(tag_id, token.lex.orth, lemma)
- token.lemma = lemma
- token.pos = pos
- token.tag = self.strings[tag_str]
- token.morph = self.add(features)
- if (self.tag_names[tag_id], token.lex.orth) in self.exc:
- self._assign_tag_from_exceptions(token, tag_id)
+ rich_tag = self.rich_tags[tag_id]
+ analysis = self._cache.get(tag_id, token.lex.orth)
+ if analysis is NULL:
+ analysis = self.mem.alloc(1, sizeof(MorphAnalysisC))
+ tag_str = self.strings[self.rich_tags[tag_id].name]
+ analysis.tag = rich_tag
+ analysis.lemma = self.lemmatize(analysis.tag.pos, token.lex.orth,
+ self.tag_map.get(tag_str, {}))
- cdef int _assign_tag_from_exceptions(self, TokenC* token, int tag_id) except -1:
- key = (self.tag_names[tag_id], token.lex.orth)
- cdef dict attrs
- attrs = self.exc[key]
- token.pos = attrs.get(POS, token.pos)
- token.lemma = attrs.get(LEMMA, token.lemma)
+ self._cache.set(tag_id, token.lex.orth, analysis)
+ if token.lemma == 0:
+ token.lemma = analysis.lemma
+ token.pos = analysis.tag.pos
+ token.tag = analysis.tag.name
+ token.morph = analysis.tag.morph
+
+ cdef int assign_feature(self, uint64_t* flags, univ_morph_t flag_id, bint value) except -1:
+ cdef flags_t one = 1
+ if value:
+ flags[0] |= one << flag_id
+ else:
+ flags[0] &= ~(one << flag_id)
+
+ def add_special_case(self, unicode tag_str, unicode orth_str, attrs,
+ force=False):
+ """Add a special-case rule to the morphological analyser. Tokens whose
+ tag and orth match the rule will receive the specified properties.
+
+ tag (unicode): The part-of-speech tag to key the exception.
+ orth (unicode): The word-form to key the exception.
+ """
+ # TODO: Currently we've assumed that we know the number of tags --
+ # RichTagC is an array, and _cache is a PreshMapArray
+ # This is really bad: it makes the morphology typed to the tagger
+ # classes, which is all wrong.
+ self.exc[(tag_str, orth_str)] = dict(attrs)
+ tag = self.strings.add(tag_str)
+ if tag not in self.reverse_index:
+ return
+ tag_id = self.reverse_index[tag]
+ orth = self.strings.add(orth_str)
+ cdef RichTagC rich_tag = self.rich_tags[tag_id]
+ attrs = intify_attrs(attrs, self.strings, _do_deprecated=True)
+ cached = self._cache.get(tag_id, orth)
+ if cached is NULL:
+ cached = self.mem.alloc(1, sizeof(MorphAnalysisC))
+ elif force:
+ memset(cached, 0, sizeof(cached[0]))
+ else:
+ raise ValueError(Errors.E015.format(tag=tag_str, orth=orth_str))
+
+ cached.tag = rich_tag
+ # TODO: Refactor this to take arbitrary attributes.
+ for name_id, value_id in attrs.items():
+ if name_id == LEMMA:
+ cached.lemma = value_id
+ else:
+ self.assign_feature(&cached.tag.morph, name_id, value_id)
+ if cached.lemma == 0:
+ cached.lemma = self.lemmatize(rich_tag.pos, orth, attrs)
+ self._cache.set(tag_id, orth, cached)
def load_morph_exceptions(self, dict exc):
- # Map (form, pos) to attributes
+ # Map (form, pos) to (lemma, rich tag)
for tag_str, entries in exc.items():
for form_str, attrs in entries.items():
self.add_special_case(tag_str, form_str, attrs)
- @classmethod
- def create_class_map(cls):
- return MorphologyClassMap(FEATURES)
+ def lemmatize(self, const univ_pos_t univ_pos, attr_t orth, morphology):
+ if orth not in self.strings:
+ return orth
+ cdef unicode py_string = self.strings[orth]
+ if self.lemmatizer is None:
+ return self.strings.add(py_string.lower())
+ cdef list lemma_strings
+ cdef unicode lemma_string
+ lemma_strings = self.lemmatizer(py_string, univ_pos, morphology)
+ lemma_string = lemma_strings[0]
+ lemma = self.strings.add(lemma_string)
+ return lemma
-cpdef univ_pos_t get_int_tag(pos_):
- return 0
-
-cpdef intify_features(features):
- return {get_string_id(feature) for feature in features}
-
-cdef hash_t hash_tag(MorphAnalysisC tag) nogil:
- return mrmr.hash64(&tag, sizeof(tag), 0)
-
-
-cdef MorphAnalysisC create_rich_tag(features) except *:
- cdef MorphAnalysisC tag
- cdef attr_t feature
- memset(&tag, 0, sizeof(tag))
- for feature in features:
- field = FEATURE_FIELDS[FEATURE_NAMES[feature]]
- set_feature(&tag, field, feature, 1)
- return tag
-
-
-cdef tag_to_json(const MorphAnalysisC* tag):
- return [FEATURE_NAMES[f] for f in list_features(tag)]
-
-
-cdef MorphAnalysisC tag_from_json(json_tag):
- raise NotImplementedError
-
-
-cdef list list_features(const MorphAnalysisC* tag):
- output = []
- if tag.abbr != 0:
- output.append(tag.abbr)
- if tag.adp_type != 0:
- output.append(tag.adp_type)
- if tag.adv_type != 0:
- output.append(tag.adv_type)
- if tag.animacy != 0:
- output.append(tag.animacy)
- if tag.aspect != 0:
- output.append(tag.aspect)
- if tag.case != 0:
- output.append(tag.case)
- if tag.conj_type != 0:
- output.append(tag.conj_type)
- if tag.connegative != 0:
- output.append(tag.connegative)
- if tag.definite != 0:
- output.append(tag.definite)
- if tag.degree != 0:
- output.append(tag.degree)
- if tag.derivation != 0:
- output.append(tag.derivation)
- if tag.echo != 0:
- output.append(tag.echo)
- if tag.foreign != 0:
- output.append(tag.foreign)
- if tag.gender != 0:
- output.append(tag.gender)
- if tag.hyph != 0:
- output.append(tag.hyph)
- if tag.inf_form != 0:
- output.append(tag.inf_form)
- if tag.mood != 0:
- output.append(tag.mood)
- if tag.negative != 0:
- output.append(tag.negative)
- if tag.number != 0:
- output.append(tag.number)
- if tag.name_type != 0:
- output.append(tag.name_type)
- if tag.noun_type != 0:
- output.append(tag.noun_type)
- if tag.part_form != 0:
- output.append(tag.part_form)
- if tag.part_type != 0:
- output.append(tag.part_type)
- if tag.person != 0:
- output.append(tag.person)
- if tag.polite != 0:
- output.append(tag.polite)
- if tag.polarity != 0:
- output.append(tag.polarity)
- if tag.poss != 0:
- output.append(tag.poss)
- if tag.prefix != 0:
- output.append(tag.prefix)
- if tag.prep_case != 0:
- output.append(tag.prep_case)
- if tag.pron_type != 0:
- output.append(tag.pron_type)
- if tag.punct_type != 0:
- output.append(tag.punct_type)
- if tag.reflex != 0:
- output.append(tag.reflex)
- if tag.style != 0:
- output.append(tag.style)
- if tag.style_variant != 0:
- output.append(tag.style_variant)
- if tag.typo != 0:
- output.append(tag.typo)
- if tag.verb_form != 0:
- output.append(tag.verb_form)
- if tag.voice != 0:
- output.append(tag.voice)
- if tag.verb_type != 0:
- output.append(tag.verb_type)
- return output
-
-
-cdef attr_t get_field(const MorphAnalysisC* tag, int field_id) nogil:
- field = field_id
- if field == Field_POS:
- return tag.pos
- if field == Field_Abbr:
- return tag.abbr
- elif field == Field_AdpType:
- return tag.adp_type
- elif field == Field_AdvType:
- return tag.adv_type
- elif field == Field_Animacy:
- return tag.animacy
- elif field == Field_Aspect:
- return tag.aspect
- elif field == Field_Case:
- return tag.case
- elif field == Field_ConjType:
- return tag.conj_type
- elif field == Field_Connegative:
- return tag.connegative
- elif field == Field_Definite:
- return tag.definite
- elif field == Field_Degree:
- return tag.degree
- elif field == Field_Derivation:
- return tag.derivation
- elif field == Field_Echo:
- return tag.echo
- elif field == Field_Foreign:
- return tag.foreign
- elif field == Field_Gender:
- return tag.gender
- elif field == Field_Hyph:
- return tag.hyph
- elif field == Field_InfForm:
- return tag.inf_form
- elif field == Field_Mood:
- return tag.mood
- elif field == Field_Negative:
- return tag.negative
- elif field == Field_Number:
- return tag.number
- elif field == Field_NameType:
- return tag.name_type
- elif field == Field_NounType:
- return tag.noun_type
- elif field == Field_NumForm:
- return tag.num_form
- elif field == Field_NumType:
- return tag.num_type
- elif field == Field_NumValue:
- return tag.num_value
- elif field == Field_PartForm:
- return tag.part_form
- elif field == Field_PartType:
- return tag.part_type
- elif field == Field_Person:
- return tag.person
- elif field == Field_Polite:
- return tag.polite
- elif field == Field_Polarity:
- return tag.polarity
- elif field == Field_Poss:
- return tag.poss
- elif field == Field_Prefix:
- return tag.prefix
- elif field == Field_PrepCase:
- return tag.prep_case
- elif field == Field_PronType:
- return tag.pron_type
- elif field == Field_PunctSide:
- return tag.punct_side
- elif field == Field_PunctType:
- return tag.punct_type
- elif field == Field_Reflex:
- return tag.reflex
- elif field == Field_Style:
- return tag.style
- elif field == Field_StyleVariant:
- return tag.style_variant
- elif field == Field_Tense:
- return tag.tense
- elif field == Field_Typo:
- return tag.typo
- elif field == Field_VerbForm:
- return tag.verb_form
- elif field == Field_Voice:
- return tag.voice
- elif field == Field_VerbType:
- return tag.verb_type
- else:
- raise ValueError(Errors.E168.format(field=field_id))
-
-
-cdef int check_feature(const MorphAnalysisC* tag, attr_t feature) nogil:
- if tag.abbr == feature:
- return 1
- elif tag.adp_type == feature:
- return 1
- elif tag.adv_type == feature:
- return 1
- elif tag.animacy == feature:
- return 1
- elif tag.aspect == feature:
- return 1
- elif tag.case == feature:
- return 1
- elif tag.conj_type == feature:
- return 1
- elif tag.connegative == feature:
- return 1
- elif tag.definite == feature:
- return 1
- elif tag.degree == feature:
- return 1
- elif tag.derivation == feature:
- return 1
- elif tag.echo == feature:
- return 1
- elif tag.foreign == feature:
- return 1
- elif tag.gender == feature:
- return 1
- elif tag.hyph == feature:
- return 1
- elif tag.inf_form == feature:
- return 1
- elif tag.mood == feature:
- return 1
- elif tag.negative == feature:
- return 1
- elif tag.number == feature:
- return 1
- elif tag.name_type == feature:
- return 1
- elif tag.noun_type == feature:
- return 1
- elif tag.num_form == feature:
- return 1
- elif tag.num_type == feature:
- return 1
- elif tag.num_value == feature:
- return 1
- elif tag.part_form == feature:
- return 1
- elif tag.part_type == feature:
- return 1
- elif tag.person == feature:
- return 1
- elif tag.polite == feature:
- return 1
- elif tag.polarity == feature:
- return 1
- elif tag.poss == feature:
- return 1
- elif tag.prefix == feature:
- return 1
- elif tag.prep_case == feature:
- return 1
- elif tag.pron_type == feature:
- return 1
- elif tag.punct_side == feature:
- return 1
- elif tag.punct_type == feature:
- return 1
- elif tag.reflex == feature:
- return 1
- elif tag.style == feature:
- return 1
- elif tag.style_variant == feature:
- return 1
- elif tag.tense == feature:
- return 1
- elif tag.typo == feature:
- return 1
- elif tag.verb_form == feature:
- return 1
- elif tag.voice == feature:
- return 1
- elif tag.verb_type == feature:
- return 1
- else:
- return 0
-
-cdef int set_feature(MorphAnalysisC* tag,
- univ_field_t field, attr_t feature, int value) except -1:
- if value == True:
- value_ = feature
- else:
- value_ = 0
- prev_value = get_field(tag, field)
- if prev_value != 0 and value_ == 0 and field != Field_POS:
- tag.length -= 1
- elif prev_value == 0 and value_ != 0 and field != Field_POS:
- tag.length += 1
- if feature == 0:
- pass
- elif field == Field_POS:
- tag.pos = get_string_id(FEATURE_NAMES[value_].split('_')[1])
- elif field == Field_Abbr:
- tag.abbr = value_
- elif field == Field_AdpType:
- tag.adp_type = value_
- elif field == Field_AdvType:
- tag.adv_type = value_
- elif field == Field_Animacy:
- tag.animacy = value_
- elif field == Field_Aspect:
- tag.aspect = value_
- elif field == Field_Case:
- tag.case = value_
- elif field == Field_ConjType:
- tag.conj_type = value_
- elif field == Field_Connegative:
- tag.connegative = value_
- elif field == Field_Definite:
- tag.definite = value_
- elif field == Field_Degree:
- tag.degree = value_
- elif field == Field_Derivation:
- tag.derivation = value_
- elif field == Field_Echo:
- tag.echo = value_
- elif field == Field_Foreign:
- tag.foreign = value_
- elif field == Field_Gender:
- tag.gender = value_
- elif field == Field_Hyph:
- tag.hyph = value_
- elif field == Field_InfForm:
- tag.inf_form = value_
- elif field == Field_Mood:
- tag.mood = value_
- elif field == Field_Negative:
- tag.negative = value_
- elif field == Field_Number:
- tag.number = value_
- elif field == Field_NameType:
- tag.name_type = value_
- elif field == Field_NounType:
- tag.noun_type = value_
- elif field == Field_NumForm:
- tag.num_form = value_
- elif field == Field_NumType:
- tag.num_type = value_
- elif field == Field_NumValue:
- tag.num_value = value_
- elif field == Field_PartForm:
- tag.part_form = value_
- elif field == Field_PartType:
- tag.part_type = value_
- elif field == Field_Person:
- tag.person = value_
- elif field == Field_Polite:
- tag.polite = value_
- elif field == Field_Polarity:
- tag.polarity = value_
- elif field == Field_Poss:
- tag.poss = value_
- elif field == Field_Prefix:
- tag.prefix = value_
- elif field == Field_PrepCase:
- tag.prep_case = value_
- elif field == Field_PronType:
- tag.pron_type = value_
- elif field == Field_PunctSide:
- tag.punct_side = value_
- elif field == Field_PunctType:
- tag.punct_type = value_
- elif field == Field_Reflex:
- tag.reflex = value_
- elif field == Field_Style:
- tag.style = value_
- elif field == Field_StyleVariant:
- tag.style_variant = value_
- elif field == Field_Tense:
- tag.tense = value_
- elif field == Field_Typo:
- tag.typo = value_
- elif field == Field_VerbForm:
- tag.verb_form = value_
- elif field == Field_Voice:
- tag.voice = value_
- elif field == Field_VerbType:
- tag.verb_type = value_
- else:
- raise ValueError(Errors.E167.format(field=FEATURE_NAMES.get(feature), field_id=feature))
-
-
-FIELDS = {
- 'POS': Field_POS,
- 'Abbr': Field_Abbr,
- 'AdpType': Field_AdpType,
- 'AdvType': Field_AdvType,
- 'Animacy': Field_Animacy,
- 'Aspect': Field_Aspect,
- 'Case': Field_Case,
- 'ConjType': Field_ConjType,
- 'Connegative': Field_Connegative,
- 'Definite': Field_Definite,
- 'Degree': Field_Degree,
- 'Derivation': Field_Derivation,
- 'Echo': Field_Echo,
- 'Foreign': Field_Foreign,
- 'Gender': Field_Gender,
- 'Hyph': Field_Hyph,
- 'InfForm': Field_InfForm,
- 'Mood': Field_Mood,
- 'NameType': Field_NameType,
- 'Negative': Field_Negative,
- 'NounType': Field_NounType,
- 'Number': Field_Number,
- 'NumForm': Field_NumForm,
- 'NumType': Field_NumType,
- 'NumValue': Field_NumValue,
- 'PartForm': Field_PartForm,
- 'PartType': Field_PartType,
- 'Person': Field_Person,
- 'Polite': Field_Polite,
- 'Polarity': Field_Polarity,
- 'Poss': Field_Poss,
- 'Prefix': Field_Prefix,
- 'PrepCase': Field_PrepCase,
- 'PronType': Field_PronType,
- 'PunctSide': Field_PunctSide,
- 'PunctType': Field_PunctType,
- 'Reflex': Field_Reflex,
- 'Style': Field_Style,
- 'StyleVariant': Field_StyleVariant,
- 'Tense': Field_Tense,
- 'Typo': Field_Typo,
- 'VerbForm': Field_VerbForm,
- 'VerbType': Field_VerbType,
- 'Voice': Field_Voice,
-}
-
-LOWER_FIELDS = {
- 'pos': Field_POS,
- 'abbr': Field_Abbr,
- 'adp_type': Field_AdpType,
- 'adv_type': Field_AdvType,
- 'animacy': Field_Animacy,
- 'aspect': Field_Aspect,
- 'case': Field_Case,
- 'conj_type': Field_ConjType,
- 'connegative': Field_Connegative,
- 'definite': Field_Definite,
- 'degree': Field_Degree,
- 'derivation': Field_Derivation,
- 'echo': Field_Echo,
- 'foreign': Field_Foreign,
- 'gender': Field_Gender,
- 'hyph': Field_Hyph,
- 'inf_form': Field_InfForm,
- 'mood': Field_Mood,
- 'name_type': Field_NameType,
- 'negative': Field_Negative,
- 'noun_type': Field_NounType,
- 'number': Field_Number,
- 'num_form': Field_NumForm,
- 'num_type': Field_NumType,
- 'num_value': Field_NumValue,
- 'part_form': Field_PartForm,
- 'part_type': Field_PartType,
- 'person': Field_Person,
- 'polarity': Field_Polarity,
- 'polite': Field_Polite,
- 'poss': Field_Poss,
- 'prefix': Field_Prefix,
- 'prep_case': Field_PrepCase,
- 'pron_type': Field_PronType,
- 'punct_side': Field_PunctSide,
- 'punct_type': Field_PunctType,
- 'reflex': Field_Reflex,
- 'style': Field_Style,
- 'style_variant': Field_StyleVariant,
- 'tense': Field_Tense,
- 'typo': Field_Typo,
- 'verb_form': Field_VerbForm,
- 'verb_type': Field_VerbType,
- 'voice': Field_Voice,
+IDS = {
+ "Animacy_anim": Animacy_anim,
+ "Animacy_inan": Animacy_inan,
+ "Animacy_hum": Animacy_hum, # U20
+ "Animacy_nhum": Animacy_nhum,
+ "Aspect_freq": Aspect_freq,
+ "Aspect_imp": Aspect_imp,
+ "Aspect_mod": Aspect_mod,
+ "Aspect_none": Aspect_none,
+ "Aspect_perf": Aspect_perf,
+ "Case_abe": Case_abe,
+ "Case_abl": Case_abl,
+ "Case_abs": Case_abs,
+ "Case_acc": Case_acc,
+ "Case_ade": Case_ade,
+ "Case_all": Case_all,
+ "Case_cau": Case_cau,
+ "Case_com": Case_com,
+ "Case_dat": Case_dat,
+ "Case_del": Case_del,
+ "Case_dis": Case_dis,
+ "Case_ela": Case_ela,
+ "Case_ess": Case_ess,
+ "Case_gen": Case_gen,
+ "Case_ill": Case_ill,
+ "Case_ine": Case_ine,
+ "Case_ins": Case_ins,
+ "Case_loc": Case_loc,
+ "Case_lat": Case_lat,
+ "Case_nom": Case_nom,
+ "Case_par": Case_par,
+ "Case_sub": Case_sub,
+ "Case_sup": Case_sup,
+ "Case_tem": Case_tem,
+ "Case_ter": Case_ter,
+ "Case_tra": Case_tra,
+ "Case_voc": Case_voc,
+ "Definite_two": Definite_two,
+ "Definite_def": Definite_def,
+ "Definite_red": Definite_red,
+ "Definite_cons": Definite_cons, # U20
+ "Definite_ind": Definite_ind,
+ "Degree_cmp": Degree_cmp,
+ "Degree_comp": Degree_comp,
+ "Degree_none": Degree_none,
+ "Degree_pos": Degree_pos,
+ "Degree_sup": Degree_sup,
+ "Degree_abs": Degree_abs,
+ "Degree_com": Degree_com,
+ "Degree_dim ": Degree_dim, # du
+ "Gender_com": Gender_com,
+ "Gender_fem": Gender_fem,
+ "Gender_masc": Gender_masc,
+ "Gender_neut": Gender_neut,
+ "Mood_cnd": Mood_cnd,
+ "Mood_imp": Mood_imp,
+ "Mood_ind": Mood_ind,
+ "Mood_n": Mood_n,
+ "Mood_pot": Mood_pot,
+ "Mood_sub": Mood_sub,
+ "Mood_opt": Mood_opt,
+ "Negative_neg": Negative_neg,
+ "Negative_pos": Negative_pos,
+ "Negative_yes": Negative_yes,
+ "Polarity_neg": Polarity_neg, # U20
+ "Polarity_pos": Polarity_pos, # U20
+ "Number_com": Number_com,
+ "Number_dual": Number_dual,
+ "Number_none": Number_none,
+ "Number_plur": Number_plur,
+ "Number_sing": Number_sing,
+ "Number_ptan ": Number_ptan, # bg
+ "Number_count ": Number_count, # bg
+ "NumType_card": NumType_card,
+ "NumType_dist": NumType_dist,
+ "NumType_frac": NumType_frac,
+ "NumType_gen": NumType_gen,
+ "NumType_mult": NumType_mult,
+ "NumType_none": NumType_none,
+ "NumType_ord": NumType_ord,
+ "NumType_sets": NumType_sets,
+ "Person_one": Person_one,
+ "Person_two": Person_two,
+ "Person_three": Person_three,
+ "Person_none": Person_none,
+ "Poss_yes": Poss_yes,
+ "PronType_advPart": PronType_advPart,
+ "PronType_art": PronType_art,
+ "PronType_default": PronType_default,
+ "PronType_dem": PronType_dem,
+ "PronType_ind": PronType_ind,
+ "PronType_int": PronType_int,
+ "PronType_neg": PronType_neg,
+ "PronType_prs": PronType_prs,
+ "PronType_rcp": PronType_rcp,
+ "PronType_rel": PronType_rel,
+ "PronType_tot": PronType_tot,
+ "PronType_clit": PronType_clit,
+ "PronType_exc ": PronType_exc, # es, ca, it, fa,
+ "Reflex_yes": Reflex_yes,
+ "Tense_fut": Tense_fut,
+ "Tense_imp": Tense_imp,
+ "Tense_past": Tense_past,
+ "Tense_pres": Tense_pres,
+ "VerbForm_fin": VerbForm_fin,
+ "VerbForm_ger": VerbForm_ger,
+ "VerbForm_inf": VerbForm_inf,
+ "VerbForm_none": VerbForm_none,
+ "VerbForm_part": VerbForm_part,
+ "VerbForm_partFut": VerbForm_partFut,
+ "VerbForm_partPast": VerbForm_partPast,
+ "VerbForm_partPres": VerbForm_partPres,
+ "VerbForm_sup": VerbForm_sup,
+ "VerbForm_trans": VerbForm_trans,
+ "VerbForm_conv": VerbForm_conv, # U20
+ "VerbForm_gdv ": VerbForm_gdv, # la,
+ "Voice_act": Voice_act,
+ "Voice_cau": Voice_cau,
+ "Voice_pass": Voice_pass,
+ "Voice_mid ": Voice_mid, # gkc,
+ "Voice_int ": Voice_int, # hb,
+ "Abbr_yes ": Abbr_yes, # cz, fi, sl, U,
+ "AdpType_prep ": AdpType_prep, # cz, U,
+ "AdpType_post ": AdpType_post, # U,
+ "AdpType_voc ": AdpType_voc, # cz,
+ "AdpType_comprep ": AdpType_comprep, # cz,
+ "AdpType_circ ": AdpType_circ, # U,
+ "AdvType_man": AdvType_man,
+ "AdvType_loc": AdvType_loc,
+ "AdvType_tim": AdvType_tim,
+ "AdvType_deg": AdvType_deg,
+ "AdvType_cau": AdvType_cau,
+ "AdvType_mod": AdvType_mod,
+ "AdvType_sta": AdvType_sta,
+ "AdvType_ex": AdvType_ex,
+ "AdvType_adadj": AdvType_adadj,
+ "ConjType_oper ": ConjType_oper, # cz, U,
+ "ConjType_comp ": ConjType_comp, # cz, U,
+ "Connegative_yes ": Connegative_yes, # fi,
+ "Derivation_minen ": Derivation_minen, # fi,
+ "Derivation_sti ": Derivation_sti, # fi,
+ "Derivation_inen ": Derivation_inen, # fi,
+ "Derivation_lainen ": Derivation_lainen, # fi,
+ "Derivation_ja ": Derivation_ja, # fi,
+ "Derivation_ton ": Derivation_ton, # fi,
+ "Derivation_vs ": Derivation_vs, # fi,
+ "Derivation_ttain ": Derivation_ttain, # fi,
+ "Derivation_ttaa ": Derivation_ttaa, # fi,
+ "Echo_rdp ": Echo_rdp, # U,
+ "Echo_ech ": Echo_ech, # U,
+ "Foreign_foreign ": Foreign_foreign, # cz, fi, U,
+ "Foreign_fscript ": Foreign_fscript, # cz, fi, U,
+ "Foreign_tscript ": Foreign_tscript, # cz, U,
+ "Foreign_yes ": Foreign_yes, # sl,
+ "Gender_dat_masc ": Gender_dat_masc, # bq, U,
+ "Gender_dat_fem ": Gender_dat_fem, # bq, U,
+ "Gender_erg_masc ": Gender_erg_masc, # bq,
+ "Gender_erg_fem ": Gender_erg_fem, # bq,
+ "Gender_psor_masc ": Gender_psor_masc, # cz, sl, U,
+ "Gender_psor_fem ": Gender_psor_fem, # cz, sl, U,
+ "Gender_psor_neut ": Gender_psor_neut, # sl,
+ "Hyph_yes ": Hyph_yes, # cz, U,
+ "InfForm_one ": InfForm_one, # fi,
+ "InfForm_two ": InfForm_two, # fi,
+ "InfForm_three ": InfForm_three, # fi,
+ "NameType_geo ": NameType_geo, # U, cz,
+ "NameType_prs ": NameType_prs, # U, cz,
+ "NameType_giv ": NameType_giv, # U, cz,
+ "NameType_sur ": NameType_sur, # U, cz,
+ "NameType_nat ": NameType_nat, # U, cz,
+ "NameType_com ": NameType_com, # U, cz,
+ "NameType_pro ": NameType_pro, # U, cz,
+ "NameType_oth ": NameType_oth, # U, cz,
+ "NounType_com ": NounType_com, # U,
+ "NounType_prop ": NounType_prop, # U,
+ "NounType_class ": NounType_class, # U,
+ "Number_abs_sing ": Number_abs_sing, # bq, U,
+ "Number_abs_plur ": Number_abs_plur, # bq, U,
+ "Number_dat_sing ": Number_dat_sing, # bq, U,
+ "Number_dat_plur ": Number_dat_plur, # bq, U,
+ "Number_erg_sing ": Number_erg_sing, # bq, U,
+ "Number_erg_plur ": Number_erg_plur, # bq, U,
+ "Number_psee_sing ": Number_psee_sing, # U,
+ "Number_psee_plur ": Number_psee_plur, # U,
+ "Number_psor_sing ": Number_psor_sing, # cz, fi, sl, U,
+ "Number_psor_plur ": Number_psor_plur, # cz, fi, sl, U,
+ "NumForm_digit ": NumForm_digit, # cz, sl, U,
+ "NumForm_roman ": NumForm_roman, # cz, sl, U,
+ "NumForm_word ": NumForm_word, # cz, sl, U,
+ "NumValue_one ": NumValue_one, # cz, U,
+ "NumValue_two ": NumValue_two, # cz, U,
+ "NumValue_three ": NumValue_three, # cz, U,
+ "PartForm_pres ": PartForm_pres, # fi,
+ "PartForm_past ": PartForm_past, # fi,
+ "PartForm_agt ": PartForm_agt, # fi,
+ "PartForm_neg ": PartForm_neg, # fi,
+ "PartType_mod ": PartType_mod, # U,
+ "PartType_emp ": PartType_emp, # U,
+ "PartType_res ": PartType_res, # U,
+ "PartType_inf ": PartType_inf, # U,
+ "PartType_vbp ": PartType_vbp, # U,
+ "Person_abs_one ": Person_abs_one, # bq, U,
+ "Person_abs_two ": Person_abs_two, # bq, U,
+ "Person_abs_three ": Person_abs_three, # bq, U,
+ "Person_dat_one ": Person_dat_one, # bq, U,
+ "Person_dat_two ": Person_dat_two, # bq, U,
+ "Person_dat_three ": Person_dat_three, # bq, U,
+ "Person_erg_one ": Person_erg_one, # bq, U,
+ "Person_erg_two ": Person_erg_two, # bq, U,
+ "Person_erg_three ": Person_erg_three, # bq, U,
+ "Person_psor_one ": Person_psor_one, # fi, U,
+ "Person_psor_two ": Person_psor_two, # fi, U,
+ "Person_psor_three ": Person_psor_three, # fi, U,
+ "Polite_inf ": Polite_inf, # bq, U,
+ "Polite_pol ": Polite_pol, # bq, U,
+ "Polite_abs_inf ": Polite_abs_inf, # bq, U,
+ "Polite_abs_pol ": Polite_abs_pol, # bq, U,
+ "Polite_erg_inf ": Polite_erg_inf, # bq, U,
+ "Polite_erg_pol ": Polite_erg_pol, # bq, U,
+ "Polite_dat_inf ": Polite_dat_inf, # bq, U,
+ "Polite_dat_pol ": Polite_dat_pol, # bq, U,
+ "Prefix_yes ": Prefix_yes, # U,
+ "PrepCase_npr ": PrepCase_npr, # cz,
+ "PrepCase_pre ": PrepCase_pre, # U,
+ "PunctSide_ini ": PunctSide_ini, # U,
+ "PunctSide_fin ": PunctSide_fin, # U,
+ "PunctType_peri ": PunctType_peri, # U,
+ "PunctType_qest ": PunctType_qest, # U,
+ "PunctType_excl ": PunctType_excl, # U,
+ "PunctType_quot ": PunctType_quot, # U,
+ "PunctType_brck ": PunctType_brck, # U,
+ "PunctType_comm ": PunctType_comm, # U,
+ "PunctType_colo ": PunctType_colo, # U,
+ "PunctType_semi ": PunctType_semi, # U,
+ "PunctType_dash ": PunctType_dash, # U,
+ "Style_arch ": Style_arch, # cz, fi, U,
+ "Style_rare ": Style_rare, # cz, fi, U,
+ "Style_poet ": Style_poet, # cz, U,
+ "Style_norm ": Style_norm, # cz, U,
+ "Style_coll ": Style_coll, # cz, U,
+ "Style_vrnc ": Style_vrnc, # cz, U,
+ "Style_sing ": Style_sing, # cz, U,
+ "Style_expr ": Style_expr, # cz, U,
+ "Style_derg ": Style_derg, # cz, U,
+ "Style_vulg ": Style_vulg, # cz, U,
+ "Style_yes ": Style_yes, # fi, U,
+ "StyleVariant_styleShort ": StyleVariant_styleShort, # cz,
+ "StyleVariant_styleBound ": StyleVariant_styleBound, # cz, sl,
+ "VerbType_aux ": VerbType_aux, # U,
+ "VerbType_cop ": VerbType_cop, # U,
+ "VerbType_mod ": VerbType_mod, # U,
+ "VerbType_light ": VerbType_light, # U,
}
-FEATURES = [
- "POS_ADJ",
- "POS_ADP",
- "POS_ADV",
- "POS_AUX",
- "POS_CONJ",
- "POS_CCONJ",
- "POS_DET",
- "POS_INTJ",
- "POS_NOUN",
- "POS_NUM",
- "POS_PART",
- "POS_PRON",
- "POS_PROPN",
- "POS_PUNCT",
- "POS_SCONJ",
- "POS_SYM",
- "POS_VERB",
- "POS_X",
- "POS_EOL",
- "POS_SPACE",
- "Abbr_yes",
- "AdpType_circ",
- "AdpType_comprep",
- "AdpType_prep",
- "AdpType_post",
- "AdpType_voc",
- "AdvType_adadj",
- "AdvType_cau",
- "AdvType_deg",
- "AdvType_ex",
- "AdvType_loc",
- "AdvType_man",
- "AdvType_mod",
- "AdvType_sta",
- "AdvType_tim",
- "Animacy_anim",
- "Animacy_hum",
- "Animacy_inan",
- "Animacy_nhum",
- "Aspect_hab",
- "Aspect_imp",
- "Aspect_iter",
- "Aspect_perf",
- "Aspect_prog",
- "Aspect_prosp",
- "Aspect_none",
- "Case_abe",
- "Case_abl",
- "Case_abs",
- "Case_acc",
- "Case_ade",
- "Case_all",
- "Case_cau",
- "Case_com",
- "Case_dat",
- "Case_del",
- "Case_dis",
- "Case_ela",
- "Case_ess",
- "Case_gen",
- "Case_ill",
- "Case_ine",
- "Case_ins",
- "Case_loc",
- "Case_lat",
- "Case_nom",
- "Case_par",
- "Case_sub",
- "Case_sup",
- "Case_tem",
- "Case_ter",
- "Case_tra",
- "Case_voc",
- "ConjType_comp",
- "ConjType_oper",
- "Connegative_yes",
- "Definite_cons",
- "Definite_def",
- "Definite_ind",
- "Definite_red",
- "Definite_two",
- "Degree_abs",
- "Degree_cmp",
- "Degree_comp",
- "Degree_none",
- "Degree_pos",
- "Degree_sup",
- "Degree_com",
- "Degree_dim",
- "Derivation_minen",
- "Derivation_sti",
- "Derivation_inen",
- "Derivation_lainen",
- "Derivation_ja",
- "Derivation_ton",
- "Derivation_vs",
- "Derivation_ttain",
- "Derivation_ttaa",
- "Echo_rdp",
- "Echo_ech",
- "Foreign_foreign",
- "Foreign_fscript",
- "Foreign_tscript",
- "Foreign_yes",
- "Gender_com",
- "Gender_fem",
- "Gender_masc",
- "Gender_neut",
- "Gender_dat_masc",
- "Gender_dat_fem",
- "Gender_erg_masc",
- "Gender_erg_fem",
- "Gender_psor_masc",
- "Gender_psor_fem",
- "Gender_psor_neut",
- "Hyph_yes",
- "InfForm_one",
- "InfForm_two",
- "InfForm_three",
- "Mood_cnd",
- "Mood_imp",
- "Mood_ind",
- "Mood_n",
- "Mood_pot",
- "Mood_sub",
- "Mood_opt",
- "NameType_geo",
- "NameType_prs",
- "NameType_giv",
- "NameType_sur",
- "NameType_nat",
- "NameType_com",
- "NameType_pro",
- "NameType_oth",
- "Negative_neg",
- "Negative_pos",
- "Negative_yes",
- "NounType_com",
- "NounType_prop",
- "NounType_class",
- "Number_com",
- "Number_dual",
- "Number_none",
- "Number_plur",
- "Number_sing",
- "Number_ptan",
- "Number_count",
- "Number_abs_sing",
- "Number_abs_plur",
- "Number_dat_sing",
- "Number_dat_plur",
- "Number_erg_sing",
- "Number_erg_plur",
- "Number_psee_sing",
- "Number_psee_plur",
- "Number_psor_sing",
- "Number_psor_plur",
- "NumForm_digit",
- "NumForm_roman",
- "NumForm_word",
- "NumForm_combi",
- "NumType_card",
- "NumType_dist",
- "NumType_frac",
- "NumType_gen",
- "NumType_mult",
- "NumType_none",
- "NumType_ord",
- "NumType_sets",
- "NumType_dual",
- "NumValue_one",
- "NumValue_two",
- "NumValue_three",
- "PartForm_pres",
- "PartForm_past",
- "PartForm_agt",
- "PartForm_neg",
- "PartType_mod",
- "PartType_emp",
- "PartType_res",
- "PartType_inf",
- "PartType_vbp",
- "Person_one",
- "Person_two",
- "Person_three",
- "Person_none",
- "Person_abs_one",
- "Person_abs_two",
- "Person_abs_three",
- "Person_dat_one",
- "Person_dat_two",
- "Person_dat_three",
- "Person_erg_one",
- "Person_erg_two",
- "Person_erg_three",
- "Person_psor_one",
- "Person_psor_two",
- "Person_psor_three",
- "Polarity_neg",
- "Polarity_pos",
- "Polite_inf",
- "Polite_pol",
- "Polite_abs_inf",
- "Polite_abs_pol",
- "Polite_erg_inf",
- "Polite_erg_pol",
- "Polite_dat_inf",
- "Polite_dat_pol",
- "Poss_yes",
- "Prefix_yes",
- "PrepCase_npr",
- "PrepCase_pre",
- "PronType_advPart",
- "PronType_art",
- "PronType_default",
- "PronType_dem",
- "PronType_ind",
- "PronType_int",
- "PronType_neg",
- "PronType_prs",
- "PronType_rcp",
- "PronType_rel",
- "PronType_tot",
- "PronType_clit",
- "PronType_exc",
- "PunctSide_ini",
- "PunctSide_fin",
- "PunctType_peri",
- "PunctType_qest",
- "PunctType_excl",
- "PunctType_quot",
- "PunctType_brck",
- "PunctType_comm",
- "PunctType_colo",
- "PunctType_semi",
- "PunctType_dash",
- "Reflex_yes",
- "Style_arch",
- "Style_rare",
- "Style_poet",
- "Style_norm",
- "Style_coll",
- "Style_vrnc",
- "Style_sing",
- "Style_expr",
- "Style_derg",
- "Style_vulg",
- "Style_yes",
- "StyleVariant_styleShort",
- "StyleVariant_styleBound",
- "Tense_fut",
- "Tense_imp",
- "Tense_past",
- "Tense_pres",
- "Typo_yes",
- "VerbForm_fin",
- "VerbForm_ger",
- "VerbForm_inf",
- "VerbForm_none",
- "VerbForm_part",
- "VerbForm_partFut",
- "VerbForm_partPast",
- "VerbForm_partPres",
- "VerbForm_sup",
- "VerbForm_trans",
- "VerbForm_conv",
- "VerbForm_gdv",
- "VerbType_aux",
- "VerbType_cop",
- "VerbType_mod",
- "VerbType_light",
- "Voice_act",
- "Voice_cau",
- "Voice_pass",
- "Voice_mid",
- "Voice_int",
-]
-
-FEATURE_NAMES = {get_string_id(f): f for f in FEATURES}
-FEATURE_FIELDS = {f: FIELDS[f.split('_', 1)[0]] for f in FEATURES}
+NAMES = [key for key, value in sorted(IDS.items(), key=lambda item: item[1])]
+# Unfortunate hack here, to work around problem with long cpdef enum
+# (which is generating an enormous amount of C++ in Cython 0.24+)
+# We keep the enum cdef, and just make sure the names are available to Python
+locals().update(IDS)
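The added `NAMES` line inverts `IDS` by sorting its keys on their integer values, so that an integer feature ID can be mapped back to its string name by list index. In miniature (with a hypothetical three-entry `IDS`):

```python
# Toy stand-in for the real IDS mapping, just to show the inversion.
IDS = {"POS_ADJ": 7, "Abbr_yes": 3, "Style_arch": 5}

# Same expression as in the patch: keys ordered by their ID value.
NAMES = [key for key, value in sorted(IDS.items(), key=lambda item: item[1])]
# NAMES[i] is the name of the feature with the i-th smallest ID, so
# integer IDs round-trip back to strings via position.
```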
diff --git a/spacy/pipeline/__init__.py b/spacy/pipeline/__init__.py
index 2f30fbbee..5d7b079d9 100644
--- a/spacy/pipeline/__init__.py
+++ b/spacy/pipeline/__init__.py
@@ -3,7 +3,6 @@ from __future__ import unicode_literals
from .pipes import Tagger, DependencyParser, EntityRecognizer, EntityLinker
from .pipes import TextCategorizer, Tensorizer, Pipe, Sentencizer
-from .morphologizer import Morphologizer
from .entityruler import EntityRuler
from .hooks import SentenceSegmenter, SimilarityHook
from .functions import merge_entities, merge_noun_chunks, merge_subtokens
@@ -16,7 +15,6 @@ __all__ = [
"TextCategorizer",
"Tensorizer",
"Pipe",
- "Morphologizer",
"EntityRuler",
"Sentencizer",
"SentenceSegmenter",
diff --git a/spacy/pipeline/entityruler.py b/spacy/pipeline/entityruler.py
index 956d67291..a1d3f922e 100644
--- a/spacy/pipeline/entityruler.py
+++ b/spacy/pipeline/entityruler.py
@@ -180,28 +180,21 @@ class EntityRuler(object):
DOCS: https://spacy.io/api/entityruler#add_patterns
"""
- # disable the nlp components after this one in case they hadn't been initialized / deserialised yet
- try:
- current_index = self.nlp.pipe_names.index(self.name)
- subsequent_pipes = [pipe for pipe in self.nlp.pipe_names[current_index + 1:]]
- except ValueError:
- subsequent_pipes = []
- with self.nlp.disable_pipes(*subsequent_pipes):
- for entry in patterns:
- label = entry["label"]
- if "id" in entry:
- label = self._create_label(label, entry["id"])
- pattern = entry["pattern"]
- if isinstance(pattern, basestring_):
- self.phrase_patterns[label].append(self.nlp(pattern))
- elif isinstance(pattern, list):
- self.token_patterns[label].append(pattern)
- else:
- raise ValueError(Errors.E097.format(pattern=pattern))
- for label, patterns in self.token_patterns.items():
- self.matcher.add(label, None, *patterns)
- for label, patterns in self.phrase_patterns.items():
- self.phrase_matcher.add(label, None, *patterns)
+ for entry in patterns:
+ label = entry["label"]
+ if "id" in entry:
+ label = self._create_label(label, entry["id"])
+ pattern = entry["pattern"]
+ if isinstance(pattern, basestring_):
+ self.phrase_patterns[label].append(self.nlp(pattern))
+ elif isinstance(pattern, list):
+ self.token_patterns[label].append(pattern)
+ else:
+ raise ValueError(Errors.E097.format(pattern=pattern))
+ for label, patterns in self.token_patterns.items():
+ self.matcher.add(label, None, *patterns)
+ for label, patterns in self.phrase_patterns.items():
+ self.phrase_matcher.add(label, None, *patterns)
def _split_label(self, label):
"""Split Entity label into ent_label and ent_id if it contains self.ent_id_sep
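The simplified `add_patterns` above dispatches on pattern type: a string becomes a phrase pattern, a list becomes a token pattern, and anything else raises. A minimal standalone sketch of that dispatch, without the spaCy matcher machinery (the `"||"` separator mimics the default `ent_id_sep`; the helper name is ours):

```python
from collections import defaultdict

def sort_patterns(patterns):
    # Mirrors the dispatch in EntityRuler.add_patterns: strings become
    # phrase patterns, lists become token patterns, else ValueError.
    phrase_patterns = defaultdict(list)
    token_patterns = defaultdict(list)
    for entry in patterns:
        label = entry["label"]
        if "id" in entry:
            # Analogue of _create_label with the default "||" separator.
            label = "{}||{}".format(label, entry["id"])
        pattern = entry["pattern"]
        if isinstance(pattern, str):
            phrase_patterns[label].append(pattern)
        elif isinstance(pattern, list):
            token_patterns[label].append(pattern)
        else:
            raise ValueError("unsupported pattern type: %r" % (pattern,))
    return phrase_patterns, token_patterns
```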
diff --git a/spacy/pipeline/morphologizer.pyx b/spacy/pipeline/morphologizer.pyx
deleted file mode 100644
index b14e2bec7..000000000
--- a/spacy/pipeline/morphologizer.pyx
+++ /dev/null
@@ -1,164 +0,0 @@
-from __future__ import unicode_literals
-from collections import OrderedDict, defaultdict
-
-import numpy
-cimport numpy as np
-
-from thinc.api import chain
-from thinc.neural.util import to_categorical, copy_array, get_array_module
-from .. import util
-from .pipes import Pipe
-from .._ml import Tok2Vec, build_morphologizer_model
-from .._ml import link_vectors_to_models, zero_init, flatten
-from .._ml import create_default_optimizer
-from ..errors import Errors, TempErrors
-from ..compat import basestring_
-from ..tokens.doc cimport Doc
-from ..vocab cimport Vocab
-from ..morphology cimport Morphology
-
-
-class Morphologizer(Pipe):
- name = 'morphologizer'
-
- @classmethod
- def Model(cls, **cfg):
- if cfg.get('pretrained_dims') and not cfg.get('pretrained_vectors'):
- raise ValueError(TempErrors.T008)
- class_map = Morphology.create_class_map()
- return build_morphologizer_model(class_map.field_sizes, **cfg)
-
- def __init__(self, vocab, model=True, **cfg):
- self.vocab = vocab
- self.model = model
- self.cfg = OrderedDict(sorted(cfg.items()))
- self.cfg.setdefault('cnn_maxout_pieces', 2)
- self._class_map = self.vocab.morphology.create_class_map()
-
- @property
- def labels(self):
- return self.vocab.morphology.tag_names
-
- @property
- def tok2vec(self):
- if self.model in (None, True, False):
- return None
- else:
- return chain(self.model.tok2vec, flatten)
-
- def __call__(self, doc):
- features, tokvecs = self.predict([doc])
- self.set_annotations([doc], features, tensors=tokvecs)
- return doc
-
- def pipe(self, stream, batch_size=128, n_threads=-1):
- for docs in util.minibatch(stream, size=batch_size):
- docs = list(docs)
- features, tokvecs = self.predict(docs)
- self.set_annotations(docs, features, tensors=tokvecs)
- yield from docs
-
- def predict(self, docs):
- if not any(len(doc) for doc in docs):
- # Handle case where there are no tokens in any docs.
- n_labels = self.model.nO
- guesses = [self.model.ops.allocate((0, n_labels)) for doc in docs]
- tokvecs = self.model.ops.allocate((0, self.model.tok2vec.nO))
- return guesses, tokvecs
- tokvecs = self.model.tok2vec(docs)
- scores = self.model.softmax(tokvecs)
- return scores, tokvecs
-
- def set_annotations(self, docs, batch_scores, tensors=None):
- if isinstance(docs, Doc):
- docs = [docs]
- cdef Doc doc
- cdef Vocab vocab = self.vocab
- offsets = [self._class_map.get_field_offset(field)
- for field in self._class_map.fields]
- for i, doc in enumerate(docs):
- doc_scores = batch_scores[i]
- doc_guesses = scores_to_guesses(doc_scores, self.model.softmax.out_sizes)
- # Convert the neuron indices into feature IDs.
- doc_feat_ids = numpy.zeros((len(doc), len(self._class_map.fields)), dtype='i')
- for j in range(len(doc)):
- for k, offset in enumerate(offsets):
- if doc_guesses[j, k] == 0:
- doc_feat_ids[j, k] = 0
- else:
- doc_feat_ids[j, k] = offset + doc_guesses[j, k]
- # Get the set of feature names.
- feats = {self._class_map.col2info[f][2] for f in doc_feat_ids[j]}
- if "NIL" in feats:
- feats.remove("NIL")
- # Now add the analysis, and set the hash.
- doc.c[j].morph = self.vocab.morphology.add(feats)
- if doc[j].morph.pos != 0:
- doc.c[j].pos = doc[j].morph.pos
-
- def update(self, docs, golds, drop=0., sgd=None, losses=None):
- if losses is not None and self.name not in losses:
- losses[self.name] = 0.
-
- tag_scores, bp_tag_scores = self.model.begin_update(docs, drop=drop)
- loss, d_tag_scores = self.get_loss(docs, golds, tag_scores)
- bp_tag_scores(d_tag_scores, sgd=sgd)
-
- if losses is not None:
- losses[self.name] += loss
-
- def get_loss(self, docs, golds, scores):
- guesses = []
- for doc_scores in scores:
- guesses.append(scores_to_guesses(doc_scores, self.model.softmax.out_sizes))
- guesses = self.model.ops.xp.vstack(guesses)
- scores = self.model.ops.xp.vstack(scores)
- if not isinstance(scores, numpy.ndarray):
- scores = scores.get()
- if not isinstance(guesses, numpy.ndarray):
- guesses = guesses.get()
- cdef int idx = 0
- # Do this on CPU, as we can't vectorize easily.
- target = numpy.zeros(scores.shape, dtype='f')
- field_sizes = self.model.softmax.out_sizes
- for doc, gold in zip(docs, golds):
- for t, features in enumerate(gold.morphology):
- if features is None:
- target[idx] = scores[idx]
- else:
- gold_fields = {}
- for feature in features:
- field = self._class_map.feat2field[feature]
- gold_fields[field] = self._class_map.feat2offset[feature]
- for field in self._class_map.fields:
- field_id = self._class_map.field2id[field]
- col_offset = self._class_map.field2col[field]
- if field_id in gold_fields:
- target[idx, col_offset + gold_fields[field_id]] = 1.
- else:
- target[idx, col_offset] = 1.
- #print(doc[t])
- #for col, info in enumerate(self._class_map.col2info):
- # print(col, info, scores[idx, col], target[idx, col])
- idx += 1
- target = self.model.ops.asarray(target, dtype='f')
- scores = self.model.ops.asarray(scores, dtype='f')
- d_scores = scores - target
- loss = (d_scores**2).sum()
- d_scores = self.model.ops.unflatten(d_scores, [len(d) for d in docs])
- return float(loss), d_scores
-
- def use_params(self, params):
- with self.model.use_params(params):
- yield
-
-def scores_to_guesses(scores, out_sizes):
- xp = get_array_module(scores)
- guesses = xp.zeros((scores.shape[0], len(out_sizes)), dtype='i')
- offset = 0
- for i, size in enumerate(out_sizes):
- slice_ = scores[:, offset : offset + size]
- col_guesses = slice_.argmax(axis=1)
- guesses[:, i] = col_guesses
- offset += size
- return guesses
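The deleted `scores_to_guesses` helper takes one concatenated score row per token and picks an argmax within each field's slice. The same logic in plain Python, without the numpy/thinc dependencies (a sketch, list-based rather than array-based):

```python
def scores_to_guesses(scores, out_sizes):
    # scores: list of rows, each a flat list of per-field scores
    # concatenated in the order given by out_sizes.
    guesses = []
    for row in scores:
        row_guess = []
        offset = 0
        for size in out_sizes:
            slice_ = row[offset:offset + size]
            # Argmax within this field's slice of the row.
            row_guess.append(max(range(size), key=lambda i: slice_[i]))
            offset += size
        guesses.append(row_guess)
    return guesses
```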
diff --git a/spacy/pipeline/pipes.pyx b/spacy/pipeline/pipes.pyx
index 9ac3affc9..190116a2e 100644
--- a/spacy/pipeline/pipes.pyx
+++ b/spacy/pipeline/pipes.pyx
@@ -69,7 +69,7 @@ class Pipe(object):
predictions = self.predict([doc])
if isinstance(predictions, tuple) and len(predictions) == 2:
scores, tensors = predictions
- self.set_annotations([doc], scores, tensors=tensors)
+ self.set_annotations([doc], scores, tensor=tensors)
else:
self.set_annotations([doc], predictions)
return doc
@@ -90,7 +90,7 @@ class Pipe(object):
predictions = self.predict(docs)
if isinstance(predictions, tuple) and len(predictions) == 2:
scores, tensors = predictions
- self.set_annotations(docs, scores, tensors=tensors)
+ self.set_annotations(docs, scores, tensor=tensors)
else:
self.set_annotations(docs, predictions)
yield from docs
@@ -424,22 +424,18 @@ class Tagger(Pipe):
cdef Doc doc
cdef int idx = 0
cdef Vocab vocab = self.vocab
- assign_morphology = self.cfg.get("set_morphology", True)
for i, doc in enumerate(docs):
doc_tag_ids = batch_tag_ids[i]
if hasattr(doc_tag_ids, "get"):
doc_tag_ids = doc_tag_ids.get()
for j, tag_id in enumerate(doc_tag_ids):
# Don't clobber preset POS tags
- if doc.c[j].tag == 0:
- if doc.c[j].pos == 0 and assign_morphology:
- # Don't clobber preset lemmas
- lemma = doc.c[j].lemma
- vocab.morphology.assign_tag_id(&doc.c[j], tag_id)
- if lemma != 0 and lemma != doc.c[j].lex.orth:
- doc.c[j].lemma = lemma
- else:
- doc.c[j].tag = self.vocab.strings[self.labels[tag_id]]
+ if doc.c[j].tag == 0 and doc.c[j].pos == 0:
+ # Don't clobber preset lemmas
+ lemma = doc.c[j].lemma
+ vocab.morphology.assign_tag_id(&doc.c[j], tag_id)
+ if lemma != 0 and lemma != doc.c[j].lex.orth:
+ doc.c[j].lemma = lemma
idx += 1
if tensors is not None and len(tensors):
if isinstance(doc.tensor, numpy.ndarray) \
@@ -504,7 +500,6 @@ class Tagger(Pipe):
orig_tag_map = dict(self.vocab.morphology.tag_map)
new_tag_map = OrderedDict()
for raw_text, annots_brackets in get_gold_tuples():
- _ = annots_brackets.pop()
for annots, brackets in annots_brackets:
ids, words, tags, heads, deps, ents = annots
for tag in tags:
@@ -937,6 +932,11 @@ class TextCategorizer(Pipe):
def labels(self, value):
self.cfg["labels"] = tuple(value)
+ def __call__(self, doc):
+ scores, tensors = self.predict([doc])
+ self.set_annotations([doc], scores, tensors=tensors)
+ return doc
+
def pipe(self, stream, batch_size=128, n_threads=-1):
for docs in util.minibatch(stream, size=batch_size):
docs = list(docs)
@@ -1017,10 +1017,6 @@ class TextCategorizer(Pipe):
return 1
def begin_training(self, get_gold_tuples=lambda: [], pipeline=None, sgd=None, **kwargs):
- for raw_text, annots_brackets in get_gold_tuples():
- cats = annots_brackets.pop()
- for cat in cats:
- self.add_label(cat)
if self.model is True:
self.cfg["pretrained_vectors"] = kwargs.get("pretrained_vectors")
self.require_labels()
diff --git a/spacy/scorer.py b/spacy/scorer.py
index 9c057d0a3..4032cc4dd 100644
--- a/spacy/scorer.py
+++ b/spacy/scorer.py
@@ -1,10 +1,7 @@
# coding: utf8
from __future__ import division, print_function, unicode_literals
-import numpy as np
-
from .gold import tags_to_entities, GoldParse
-from .errors import Errors
class PRFScore(object):
@@ -37,39 +34,10 @@ class PRFScore(object):
return 2 * ((p * r) / (p + r + 1e-100))
-class ROCAUCScore(object):
- """
- An AUC ROC score.
- """
-
- def __init__(self):
- self.golds = []
- self.cands = []
- self.saved_score = 0.0
- self.saved_score_at_len = 0
-
- def score_set(self, cand, gold):
- self.cands.append(cand)
- self.golds.append(gold)
-
- @property
- def score(self):
- if len(self.golds) == self.saved_score_at_len:
- return self.saved_score
- try:
- self.saved_score = _roc_auc_score(self.golds, self.cands)
- # catch ValueError: Only one class present in y_true.
- # ROC AUC score is not defined in that case.
- except ValueError:
- self.saved_score = -float("inf")
- self.saved_score_at_len = len(self.golds)
- return self.saved_score
-
-
class Scorer(object):
"""Compute evaluation scores."""
- def __init__(self, eval_punct=False, pipeline=None):
+ def __init__(self, eval_punct=False):
"""Initialize the Scorer.
eval_punct (bool): Evaluate the dependency attachments to and from
@@ -86,24 +54,6 @@ class Scorer(object):
self.ner = PRFScore()
self.ner_per_ents = dict()
self.eval_punct = eval_punct
- self.textcat = None
- self.textcat_per_cat = dict()
- self.textcat_positive_label = None
- self.textcat_multilabel = False
-
- if pipeline:
- for name, model in pipeline:
- if name == "textcat":
- self.textcat_positive_label = model.cfg.get("positive_label", None)
- if self.textcat_positive_label:
- self.textcat = PRFScore()
- if not model.cfg.get("exclusive_classes", False):
- self.textcat_multilabel = True
- for label in model.cfg.get("labels", []):
- self.textcat_per_cat[label] = ROCAUCScore()
- else:
- for label in model.cfg.get("labels", []):
- self.textcat_per_cat[label] = PRFScore()
@property
def tags_acc(self):
@@ -151,47 +101,10 @@ class Scorer(object):
for k, v in self.ner_per_ents.items()
}
- @property
- def textcat_score(self):
- """RETURNS (float): f-score on positive label for binary exclusive,
- macro-averaged f-score for 3+ exclusive,
- macro-averaged AUC ROC score for multilabel (-1 if undefined)
- """
- if not self.textcat_multilabel:
- # binary multiclass
- if self.textcat_positive_label:
- return self.textcat.fscore * 100
- # other multiclass
- return (
- sum([score.fscore for label, score in self.textcat_per_cat.items()])
- / (len(self.textcat_per_cat) + 1e-100)
- * 100
- )
- # multilabel
- return max(
- sum([score.score for label, score in self.textcat_per_cat.items()])
- / (len(self.textcat_per_cat) + 1e-100),
- -1,
- )
-
- @property
- def textcats_per_cat(self):
- """RETURNS (dict): Scores per textcat label.
- """
- if not self.textcat_multilabel:
- return {
- k: {"p": v.precision * 100, "r": v.recall * 100, "f": v.fscore * 100}
- for k, v in self.textcat_per_cat.items()
- }
- return {
- k: {"roc_auc_score": max(v.score, -1)}
- for k, v in self.textcat_per_cat.items()
- }
-
@property
def scores(self):
"""RETURNS (dict): All scores with keys `uas`, `las`, `ents_p`,
- `ents_r`, `ents_f`, `tags_acc`, `token_acc`, and `textcat_score`.
+ `ents_r`, `ents_f`, `tags_acc` and `token_acc`.
"""
return {
"uas": self.uas,
@@ -202,8 +115,6 @@ class Scorer(object):
"ents_per_type": self.ents_per_type,
"tags_acc": self.tags_acc,
"token_acc": self.token_acc,
- "textcat_score": self.textcat_score,
- "textcats_per_cat": self.textcats_per_cat,
}
def score(self, doc, gold, verbose=False, punct_labels=("p", "punct")):
@@ -281,301 +192,9 @@ class Scorer(object):
self.unlabelled.score_set(
set(item[:2] for item in cand_deps), set(item[:2] for item in gold_deps)
)
- if (
- len(gold.cats) > 0
- and set(self.textcat_per_cat) == set(gold.cats)
- and set(gold.cats) == set(doc.cats)
- ):
- goldcat = max(gold.cats, key=gold.cats.get)
- candcat = max(doc.cats, key=doc.cats.get)
- if self.textcat_positive_label:
- self.textcat.score_set(
- set([self.textcat_positive_label]) & set([candcat]),
- set([self.textcat_positive_label]) & set([goldcat]),
- )
- for label in self.textcat_per_cat:
- if self.textcat_multilabel:
- self.textcat_per_cat[label].score_set(
- doc.cats[label], gold.cats[label]
- )
- else:
- self.textcat_per_cat[label].score_set(
- set([label]) & set([candcat]), set([label]) & set([goldcat])
- )
- elif len(self.textcat_per_cat) > 0:
- model_labels = set(self.textcat_per_cat)
- eval_labels = set(gold.cats)
- raise ValueError(
- Errors.E162.format(model_labels=model_labels, eval_labels=eval_labels)
- )
if verbose:
gold_words = [item[1] for item in gold.orig_annot]
for w_id, h_id, dep in cand_deps - gold_deps:
print("F", gold_words[w_id], dep, gold_words[h_id])
for w_id, h_id, dep in gold_deps - cand_deps:
print("M", gold_words[w_id], dep, gold_words[h_id])
-
-
-#############################################################################
-#
-# The following implementation of roc_auc_score() is adapted from
-# scikit-learn, which is distributed under the following license:
-#
-# New BSD License
-#
-# Copyright (c) 2007–2019 The scikit-learn developers.
-# All rights reserved.
-#
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are met:
-#
-# a. Redistributions of source code must retain the above copyright notice,
-# this list of conditions and the following disclaimer.
-# b. Redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution.
-# c. Neither the name of the Scikit-learn Developers nor the names of
-# its contributors may be used to endorse or promote products
-# derived from this software without specific prior written
-# permission.
-#
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-# ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR
-# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
-# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
-# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
-# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
-# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
-# DAMAGE.
-
-
-def _roc_auc_score(y_true, y_score):
- """Compute Area Under the Receiver Operating Characteristic Curve (ROC AUC)
- from prediction scores.
-
- Note: this implementation is restricted to the binary classification task
-
- Parameters
- ----------
- y_true : array, shape = [n_samples] or [n_samples, n_classes]
- True binary labels or binary label indicators.
- The multiclass case expects shape = [n_samples] and labels
- with values in ``range(n_classes)``.
-
- y_score : array, shape = [n_samples] or [n_samples, n_classes]
- Target scores, can either be probability estimates of the positive
- class, confidence values, or non-thresholded measure of decisions
- (as returned by "decision_function" on some classifiers). For binary
- y_true, y_score is supposed to be the score of the class with greater
- label. The multiclass case expects shape = [n_samples, n_classes]
- where the scores correspond to probability estimates.
-
- Returns
- -------
- auc : float
-
- References
- ----------
- .. [1] `Wikipedia entry for the Receiver operating characteristic
- `_
-
- .. [2] Fawcett T. An introduction to ROC analysis[J]. Pattern Recognition
- Letters, 2006, 27(8):861-874.
-
- .. [3] `Analyzing a portion of the ROC curve. McClish, 1989
- `_
- """
- if len(np.unique(y_true)) != 2:
- raise ValueError(Errors.E165)
- fpr, tpr, _ = _roc_curve(y_true, y_score)
- return _auc(fpr, tpr)
-
-
-def _roc_curve(y_true, y_score):
- """Compute Receiver operating characteristic (ROC)
-
- Note: this implementation is restricted to the binary classification task.
-
- Parameters
- ----------
-
- y_true : array, shape = [n_samples]
- True binary labels. If labels are not either {-1, 1} or {0, 1}, then
- pos_label should be explicitly given.
-
- y_score : array, shape = [n_samples]
- Target scores, can either be probability estimates of the positive
- class, confidence values, or non-thresholded measure of decisions
- (as returned by "decision_function" on some classifiers).
-
- Returns
- -------
- fpr : array, shape = [>2]
- Increasing false positive rates such that element i is the false
- positive rate of predictions with score >= thresholds[i].
-
- tpr : array, shape = [>2]
- Increasing true positive rates such that element i is the true
- positive rate of predictions with score >= thresholds[i].
-
- thresholds : array, shape = [n_thresholds]
- Decreasing thresholds on the decision function used to compute
- fpr and tpr. `thresholds[0]` represents no instances being predicted
- and is arbitrarily set to `max(y_score) + 1`.
-
- Notes
- -----
- Since the thresholds are sorted from low to high values, they
- are reversed upon returning them to ensure they correspond to both ``fpr``
- and ``tpr``, which are sorted in reversed order during their calculation.
-
- References
- ----------
- .. [1] `Wikipedia entry for the Receiver operating characteristic
- `_
-
- .. [2] Fawcett T. An introduction to ROC analysis[J]. Pattern Recognition
- Letters, 2006, 27(8):861-874.
- """
- fps, tps, thresholds = _binary_clf_curve(y_true, y_score)
-
- # Add an extra threshold position
- # to make sure that the curve starts at (0, 0)
- tps = np.r_[0, tps]
- fps = np.r_[0, fps]
- thresholds = np.r_[thresholds[0] + 1, thresholds]
-
- if fps[-1] <= 0:
- fpr = np.repeat(np.nan, fps.shape)
- else:
- fpr = fps / fps[-1]
-
- if tps[-1] <= 0:
- tpr = np.repeat(np.nan, tps.shape)
- else:
- tpr = tps / tps[-1]
-
- return fpr, tpr, thresholds
-
-
-def _binary_clf_curve(y_true, y_score):
- """Calculate true and false positives per binary classification threshold.
-
- Parameters
- ----------
- y_true : array, shape = [n_samples]
- True targets of binary classification
-
- y_score : array, shape = [n_samples]
- Estimated probabilities or decision function
-
- Returns
- -------
- fps : array, shape = [n_thresholds]
- A count of false positives, at index i being the number of negative
- samples assigned a score >= thresholds[i]. The total number of
- negative samples is equal to fps[-1] (thus true negatives are given by
- fps[-1] - fps).
-
- tps : array, shape = [n_thresholds <= len(np.unique(y_score))]
- An increasing count of true positives, at index i being the number
- of positive samples assigned a score >= thresholds[i]. The total
- number of positive samples is equal to tps[-1] (thus false negatives
- are given by tps[-1] - tps).
-
- thresholds : array, shape = [n_thresholds]
- Decreasing score values.
- """
- pos_label = 1.0
-
- y_true = np.ravel(y_true)
- y_score = np.ravel(y_score)
-
- # make y_true a boolean vector
- y_true = y_true == pos_label
-
- # sort scores and corresponding truth values
- desc_score_indices = np.argsort(y_score, kind="mergesort")[::-1]
- y_score = y_score[desc_score_indices]
- y_true = y_true[desc_score_indices]
- weight = 1.0
-
- # y_score typically has many tied values. Here we extract
- # the indices associated with the distinct values. We also
- # concatenate a value for the end of the curve.
- distinct_value_indices = np.where(np.diff(y_score))[0]
- threshold_idxs = np.r_[distinct_value_indices, y_true.size - 1]
-
- # accumulate the true positives with decreasing threshold
- tps = _stable_cumsum(y_true * weight)[threshold_idxs]
- fps = 1 + threshold_idxs - tps
- return fps, tps, y_score[threshold_idxs]
-
-
-def _stable_cumsum(arr, axis=None, rtol=1e-05, atol=1e-08):
- """Use high precision for cumsum and check that final value matches sum
-
- Parameters
- ----------
- arr : array-like
- To be cumulatively summed as flat
- axis : int, optional
- Axis along which the cumulative sum is computed.
- The default (None) is to compute the cumsum over the flattened array.
- rtol : float
- Relative tolerance, see ``np.allclose``
- atol : float
- Absolute tolerance, see ``np.allclose``
- """
- out = np.cumsum(arr, axis=axis, dtype=np.float64)
- expected = np.sum(arr, axis=axis, dtype=np.float64)
- if not np.all(
- np.isclose(
- out.take(-1, axis=axis), expected, rtol=rtol, atol=atol, equal_nan=True
- )
- ):
- raise ValueError(Errors.E163)
- return out
-
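`_stable_cumsum` guards against floating-point drift by checking that the final element of the running sum matches a high-precision total. A stdlib-only sketch of the same check, using `math.fsum` as the compensated reference instead of numpy's float64 sum:

```python
import math

def stable_cumsum(values, rtol=1e-05, atol=1e-08):
    # Cumulative sum with a consistency check: the last cumulative value
    # must agree with a compensated (fsum) total within tolerance.
    out, running = [], 0.0
    for v in values:
        running += v
        out.append(running)
    expected = math.fsum(values)
    if abs(out[-1] - expected) > atol + rtol * abs(expected):
        raise ValueError("cumsum was found to be unstable")
    return out
```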
-
-def _auc(x, y):
- """Compute Area Under the Curve (AUC) using the trapezoidal rule
-
- This is a general function, given points on a curve. For computing the
- area under the ROC-curve, see :func:`roc_auc_score`.
-
- Parameters
- ----------
- x : array, shape = [n]
- x coordinates. These must be either monotonic increasing or monotonic
- decreasing.
- y : array, shape = [n]
- y coordinates.
-
- Returns
- -------
- auc : float
- """
- x = np.ravel(x)
- y = np.ravel(y)
-
- direction = 1
- dx = np.diff(x)
- if np.any(dx < 0):
- if np.all(dx <= 0):
- direction = -1
- else:
- raise ValueError(Errors.E164.format(x))
-
- area = direction * np.trapz(y, x)
- if isinstance(area, np.memmap):
- # Reductions such as .sum used internally in np.trapz do not return a
- # scalar by default for numpy.memmap instances contrary to
- # regular numpy.ndarray instances.
- area = area.dtype.type(area)
- return area
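The deleted ROC-AUC code above builds a curve and integrates it; the same quantity can be computed directly, since binary ROC AUC equals the probability that a randomly drawn positive outscores a randomly drawn negative, with ties counting half. A pairwise (Mann-Whitney style) sketch, independent of the curve-based implementation:

```python
def roc_auc(y_true, y_score):
    # Fraction of (positive, negative) pairs ranked correctly by score;
    # ties contribute 0.5. Binary labels only, as in _roc_auc_score.
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    if not pos or not neg:
        # Mirrors the "only one class present" ValueError case.
        raise ValueError("ROC AUC undefined with a single class")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Note this quadratic formulation is for intuition; the curve-based version scales better on large inputs.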
diff --git a/spacy/strings.pyx b/spacy/strings.pyx
index f3457e1a5..df86f8ac7 100644
--- a/spacy/strings.pyx
+++ b/spacy/strings.pyx
@@ -119,7 +119,9 @@ cdef class StringStore:
return ""
elif string_or_id in SYMBOLS_BY_STR:
return SYMBOLS_BY_STR[string_or_id]
+
cdef hash_t key
+
if isinstance(string_or_id, unicode):
key = hash_string(string_or_id)
return key
@@ -137,20 +139,6 @@ cdef class StringStore:
else:
return decode_Utf8Str(utf8str)
- def as_int(self, key):
- """If key is an int, return it; otherwise, get the int value."""
- if not isinstance(key, basestring):
- return key
- else:
- return self[key]
-
- def as_string(self, key):
- """If key is a string, return it; otherwise, get the string value."""
- if isinstance(key, basestring):
- return key
- else:
- return self[key]
-
def add(self, string):
"""Add a string to the StringStore.
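The removed `as_int`/`as_string` methods were thin type-dispatch wrappers over `__getitem__`. Their behavior can be sketched over a plain dict-backed store (a hypothetical toy class, not the real hash-based `StringStore`):

```python
class ToyStringStore:
    # Bidirectional string<->id map mimicking the removed helpers.
    def __init__(self):
        self._by_string = {}
        self._by_id = {}

    def add(self, string):
        key = self._by_string.setdefault(string, len(self._by_string) + 1)
        self._by_id[key] = string
        return key

    def __getitem__(self, string_or_id):
        if isinstance(string_or_id, str):
            return self._by_string[string_or_id]
        return self._by_id[string_or_id]

    def as_int(self, key):
        # If key is already an int, return it; otherwise look it up.
        return key if not isinstance(key, str) else self[key]

    def as_string(self, key):
        # If key is already a string, return it; otherwise look it up.
        return key if isinstance(key, str) else self[key]
```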
diff --git a/spacy/structs.pxd b/spacy/structs.pxd
index 468277f6b..6c643b4cd 100644
--- a/spacy/structs.pxd
+++ b/spacy/structs.pxd
@@ -78,54 +78,6 @@ cdef struct TokenC:
hash_t ent_id
-cdef struct MorphAnalysisC:
- univ_pos_t pos
- int length
-
- attr_t abbr
- attr_t adp_type
- attr_t adv_type
- attr_t animacy
- attr_t aspect
- attr_t case
- attr_t conj_type
- attr_t connegative
- attr_t definite
- attr_t degree
- attr_t derivation
- attr_t echo
- attr_t foreign
- attr_t gender
- attr_t hyph
- attr_t inf_form
- attr_t mood
- attr_t negative
- attr_t number
- attr_t name_type
- attr_t noun_type
- attr_t num_form
- attr_t num_type
- attr_t num_value
- attr_t part_form
- attr_t part_type
- attr_t person
- attr_t polite
- attr_t polarity
- attr_t poss
- attr_t prefix
- attr_t prep_case
- attr_t pron_type
- attr_t punct_side
- attr_t punct_type
- attr_t reflex
- attr_t style
- attr_t style_variant
- attr_t tense
- attr_t typo
- attr_t verb_form
- attr_t voice
- attr_t verb_type
-
# Internal struct, for storage and disambiguation of entities.
cdef struct KBEntryC:
diff --git a/spacy/syntax/arc_eager.pyx b/spacy/syntax/arc_eager.pyx
index 5a7355061..eb39124ce 100644
--- a/spacy/syntax/arc_eager.pyx
+++ b/spacy/syntax/arc_eager.pyx
@@ -342,7 +342,6 @@ cdef class ArcEager(TransitionSystem):
actions[RIGHT][label] = 1
actions[REDUCE][label] = 1
for raw_text, sents in kwargs.get('gold_parses', []):
- _ = sents.pop()
for (ids, words, tags, heads, labels, iob), ctnts in sents:
heads, labels = nonproj.projectivize(heads, labels)
for child, head, label in zip(ids, heads, labels):
diff --git a/spacy/syntax/ner.pyx b/spacy/syntax/ner.pyx
index 3bd096463..767e4c2e0 100644
--- a/spacy/syntax/ner.pyx
+++ b/spacy/syntax/ner.pyx
@@ -66,14 +66,12 @@ cdef class BiluoPushDown(TransitionSystem):
UNIT: Counter(),
OUT: Counter()
}
- actions[OUT][''] = 1 # Represents a token predicted to be outside of any entity
- actions[UNIT][''] = 1 # Represents a token prohibited to be in an entity
+ actions[OUT][''] = 1
for entity_type in kwargs.get('entity_types', []):
for action in (BEGIN, IN, LAST, UNIT):
actions[action][entity_type] = 1
moves = ('M', 'B', 'I', 'L', 'U')
for raw_text, sents in kwargs.get('gold_parses', []):
- _ = sents.pop()
for (ids, words, tags, heads, labels, biluo), _ in sents:
for i, ner_tag in enumerate(biluo):
if ner_tag != 'O' and ner_tag != '-':
@@ -163,7 +161,8 @@ cdef class BiluoPushDown(TransitionSystem):
for i in range(self.n_moves):
if self.c[i].move == move and self.c[i].label == label:
return self.c[i]
- raise KeyError(Errors.E022.format(name=name))
+ else:
+ raise KeyError(Errors.E022.format(name=name))
cdef Transition init_transition(self, int clas, int move, attr_t label) except *:
# TODO: Apparent Cython bug here when we try to use the Transition()
@@ -267,7 +266,7 @@ cdef class Begin:
return False
elif label == 0:
return False
- elif preset_ent_iob == 1:
+ elif preset_ent_iob == 1 or preset_ent_iob == 2:
# Ensure we don't clobber preset entities. If no entity preset,
# ent_iob is 0
return False
@@ -283,8 +282,8 @@ cdef class Begin:
# Otherwise, force acceptance, even if we're across a sentence
# boundary or the token is whitespace.
return True
- elif st.B_(1).ent_iob == 3:
- # If the next word is B, we can't B now
+ elif st.B_(1).ent_iob == 2 or st.B_(1).ent_iob == 3:
+ # If the next word is B or O, we can't B now
return False
elif st.B_(1).sent_start == 1:
# Don't allow entities to extend across sentence boundaries
@@ -327,7 +326,6 @@ cdef class In:
@staticmethod
cdef bint is_valid(const StateC* st, attr_t label) nogil:
cdef int preset_ent_iob = st.B_(0).ent_iob
- cdef attr_t preset_ent_label = st.B_(0).ent_type
if label == 0:
return False
elif st.E_(0).ent_type != label:
@@ -337,22 +335,13 @@ cdef class In:
elif st.B(1) == -1:
# If we're at the end, we can't I.
return False
+ elif preset_ent_iob == 2:
+ return False
elif preset_ent_iob == 3:
return False
- elif st.B_(1).ent_iob == 3:
- # If we know the next word is B, we can't be I (must be L)
+ elif st.B_(1).ent_iob == 2 or st.B_(1).ent_iob == 3:
+ # If we know the next word is B or O, we can't be I (must be L)
return False
- elif preset_ent_iob == 1:
- if st.B_(1).ent_iob in (0, 2):
- # if next preset is missing or O, this can't be I (must be L)
- return False
- elif label != preset_ent_label:
- # If label isn't right, reject
- return False
- else:
- # Otherwise, force acceptance, even if we're across a sentence
- # boundary or the token is whitespace.
- return True
elif st.B(1) != -1 and st.B_(1).sent_start == 1:
# Don't allow entities to extend across sentence boundaries
return False
@@ -398,24 +387,17 @@ cdef class In:
else:
return 1
+
cdef class Last:
@staticmethod
cdef bint is_valid(const StateC* st, attr_t label) nogil:
- cdef int preset_ent_iob = st.B_(0).ent_iob
- cdef attr_t preset_ent_label = st.B_(0).ent_type
if label == 0:
return False
elif not st.entity_is_open():
return False
- elif preset_ent_iob == 1 and st.B_(1).ent_iob != 1:
+ elif st.B_(0).ent_iob == 1 and st.B_(1).ent_iob != 1:
# If a preset entity has I followed by not-I, is L
- if label != preset_ent_label:
- # If label isn't right, reject
- return False
- else:
- # Otherwise, force acceptance, even if we're across a sentence
- # boundary or the token is whitespace.
- return True
+ return True
elif st.E_(0).ent_type != label:
return False
elif st.B_(1).ent_iob == 1:
@@ -468,13 +450,12 @@ cdef class Unit:
cdef int preset_ent_iob = st.B_(0).ent_iob
cdef attr_t preset_ent_label = st.B_(0).ent_type
if label == 0:
- # this is only allowed if it's a preset blocked annotation
- if preset_ent_label == 0 and preset_ent_iob == 3:
- return True
- else:
- return False
+ return False
elif st.entity_is_open():
return False
+ elif preset_ent_iob == 2:
+ # Don't clobber preset O
+ return False
elif st.B_(1).ent_iob == 1:
# If next token is In, we can't be Unit -- must be Begin
return False
diff --git a/spacy/syntax/nn_parser.pyx b/spacy/syntax/nn_parser.pyx
index aeb4a5306..85f7b5bb9 100644
--- a/spacy/syntax/nn_parser.pyx
+++ b/spacy/syntax/nn_parser.pyx
@@ -135,9 +135,7 @@ cdef class Parser:
names = []
for i in range(self.moves.n_moves):
name = self.moves.move_name(self.moves.c[i].move, self.moves.c[i].label)
- # Explicitly removing the internal "U-" token used for blocking entities
- if name != "U-":
- names.append(name)
+ names.append(name)
return names
nr_feature = 8
@@ -163,16 +161,10 @@ cdef class Parser:
added = self.moves.add_action(action, label)
if added:
resized = True
- if resized:
- self._resize()
-
- def _resize(self):
- if "nr_class" in self.cfg:
+ if resized and "nr_class" in self.cfg:
self.cfg["nr_class"] = self.moves.n_moves
- if self.model not in (True, False, None):
+ if self.model not in (True, False, None) and resized:
self.model.resize_output(self.moves.n_moves)
- if self._rehearsal_model not in (True, False, None):
- self._rehearsal_model.resize_output(self.moves.n_moves)
def add_multitask_objective(self, target):
# Defined in subclasses, to avoid circular import
@@ -243,9 +235,7 @@ cdef class Parser:
if isinstance(docs, Doc):
docs = [docs]
if not any(len(doc) for doc in docs):
- result = self.moves.init_batch(docs)
- self._resize()
- return result
+ return self.moves.init_batch(docs)
if beam_width < 2:
return self.greedy_parse(docs, drop=drop)
else:
@@ -259,7 +249,7 @@ cdef class Parser:
# This is pretty dirty, but the NER can resize itself in init_batch,
# if labels are missing. We therefore have to check whether we need to
# expand our model output.
- self._resize()
+ self.model.resize_output(self.moves.n_moves)
model = self.model(docs)
weights = get_c_weights(model)
for state in batch:
@@ -279,7 +269,7 @@ cdef class Parser:
# This is pretty dirty, but the NER can resize itself in init_batch,
# if labels are missing. We therefore have to check whether we need to
# expand our model output.
- self._resize()
+ self.model.resize_output(self.moves.n_moves)
model = self.model(docs)
token_ids = numpy.zeros((len(docs) * beam_width, self.nr_feature),
dtype='i', order='C')
@@ -453,7 +443,8 @@ cdef class Parser:
# This is pretty dirty, but the NER can resize itself in init_batch,
# if labels are missing. We therefore have to check whether we need to
# expand our model output.
- self._resize()
+ self.model.resize_output(self.moves.n_moves)
+ self._rehearsal_model.resize_output(self.moves.n_moves)
# Prepare the stepwise model, and get the callback for finishing the batch
tutor, _ = self._rehearsal_model.begin_update(docs, drop=0.0)
model, finish_update = self.model.begin_update(docs, drop=0.0)
@@ -594,7 +585,6 @@ cdef class Parser:
doc_sample = []
gold_sample = []
for raw_text, annots_brackets in islice(get_gold_tuples(), 1000):
- _ = annots_brackets.pop()
for annots, brackets in annots_brackets:
ids, words, tags, heads, deps, ents = annots
doc_sample.append(Doc(self.vocab, words=words))
diff --git a/spacy/syntax/transition_system.pyx b/spacy/syntax/transition_system.pyx
index 58b3a6993..523cd6699 100644
--- a/spacy/syntax/transition_system.pyx
+++ b/spacy/syntax/transition_system.pyx
@@ -63,13 +63,6 @@ cdef class TransitionSystem:
cdef Doc doc
beams = []
cdef int offset = 0
-
- # Doc objects might contain labels that we need to register actions for. We need to check for that
- # *before* we create any Beam objects, because the Beam object needs the correct number of
- # actions. It's sort of dumb, but the best way is to just call init_batch() -- that triggers the additions,
- # and it doesn't matter that we create and discard the state objects.
- self.init_batch(docs)
-
for doc in docs:
beam = Beam(self.n_moves, beam_width, min_density=beam_density)
beam.initialize(self.init_beam_state, doc.length, doc.c)
@@ -103,7 +96,8 @@ cdef class TransitionSystem:
def apply_transition(self, StateClass state, name):
if not self.is_valid(state, name):
- raise ValueError(Errors.E170.format(name=name))
+ raise ValueError(
+ "Cannot apply transition {name}: invalid for the current state.".format(name=name))
action = self.lookup_transition(name)
action.do(state.c, action.label)
diff --git a/spacy/tests/conftest.py b/spacy/tests/conftest.py
index 0763af32b..c88f3314e 100644
--- a/spacy/tests/conftest.py
+++ b/spacy/tests/conftest.py
@@ -185,12 +185,6 @@ def ru_tokenizer():
return get_lang_class("ru").Defaults.create_tokenizer()
-@pytest.fixture
-def ru_lemmatizer():
- pytest.importorskip("pymorphy2")
- return get_lang_class("ru").Defaults.create_lemmatizer()
-
-
@pytest.fixture(scope="session")
def sr_tokenizer():
return get_lang_class("sr").Defaults.create_tokenizer()
diff --git a/spacy/tests/doc/test_add_entities.py b/spacy/tests/doc/test_add_entities.py
index 6c69e699a..433541c48 100644
--- a/spacy/tests/doc/test_add_entities.py
+++ b/spacy/tests/doc/test_add_entities.py
@@ -1,11 +1,11 @@
# coding: utf-8
from __future__ import unicode_literals
-from spacy.pipeline import EntityRecognizer
-from spacy.tokens import Span
-import pytest
-
+from ...pipeline import EntityRecognizer
from ..util import get_doc
+from ...tokens import Span
+
+import pytest
def test_doc_add_entities_set_ents_iob(en_vocab):
@@ -16,23 +16,10 @@ def test_doc_add_entities_set_ents_iob(en_vocab):
ner(doc)
assert len(list(doc.ents)) == 0
assert [w.ent_iob_ for w in doc] == (["O"] * len(doc))
-
doc.ents = [(doc.vocab.strings["ANIMAL"], 3, 4)]
- assert [w.ent_iob_ for w in doc] == ["O", "O", "O", "B"]
-
+ assert [w.ent_iob_ for w in doc] == ["", "", "", "B"]
doc.ents = [(doc.vocab.strings["WORD"], 0, 2)]
- assert [w.ent_iob_ for w in doc] == ["B", "I", "O", "O"]
-
-
-def test_ents_reset(en_vocab):
- text = ["This", "is", "a", "lion"]
- doc = get_doc(en_vocab, text)
- ner = EntityRecognizer(en_vocab)
- ner.begin_training([])
- ner(doc)
- assert [t.ent_iob_ for t in doc] == (["O"] * len(doc))
- doc.ents = list(doc.ents)
- assert [t.ent_iob_ for t in doc] == (["O"] * len(doc))
+ assert [w.ent_iob_ for w in doc] == ["B", "I", "", ""]
def test_add_overlapping_entities(en_vocab):
diff --git a/spacy/tests/doc/test_creation.py b/spacy/tests/doc/test_creation.py
index b222f6bf0..ce42b39b9 100644
--- a/spacy/tests/doc/test_creation.py
+++ b/spacy/tests/doc/test_creation.py
@@ -5,13 +5,11 @@ import pytest
from spacy.vocab import Vocab
from spacy.tokens import Doc
from spacy.lemmatizer import Lemmatizer
-from spacy.lookups import Table
@pytest.fixture
def lemmatizer():
- lookup = Table(data={"dogs": "dog", "boxen": "box", "mice": "mouse"})
- return Lemmatizer(lookup=lookup)
+ return Lemmatizer(lookup={"dogs": "dog", "boxen": "box", "mice": "mouse"})
@pytest.fixture
diff --git a/spacy/tests/doc/test_morphanalysis.py b/spacy/tests/doc/test_morphanalysis.py
deleted file mode 100644
index 5d570af53..000000000
--- a/spacy/tests/doc/test_morphanalysis.py
+++ /dev/null
@@ -1,33 +0,0 @@
-# coding: utf-8
-from __future__ import unicode_literals
-
-import pytest
-
-
-@pytest.fixture
-def i_has(en_tokenizer):
- doc = en_tokenizer("I has")
- doc[0].tag_ = "PRP"
- doc[1].tag_ = "VBZ"
- return doc
-
-
-def test_token_morph_id(i_has):
- assert i_has[0].morph.id
- assert i_has[1].morph.id != 0
- assert i_has[0].morph.id != i_has[1].morph.id
-
-
-def test_morph_props(i_has):
- assert i_has[0].morph.pron_type == i_has.vocab.strings["PronType_prs"]
- assert i_has[0].morph.pron_type_ == "PronType_prs"
- assert i_has[1].morph.pron_type == 0
-
-
-def test_morph_iter(i_has):
- assert list(i_has[0].morph) == ["PronType_prs"]
- assert list(i_has[1].morph) == ["Number_sing", "Person_three", "VerbForm_fin"]
-
-
-def test_morph_get(i_has):
- assert i_has[0].morph.get("pron_type") == "PronType_prs"
diff --git a/spacy/tests/lang/ja/test_tokenizer.py b/spacy/tests/lang/ja/test_tokenizer.py
index ad8bfaa00..c95e7bc40 100644
--- a/spacy/tests/lang/ja/test_tokenizer.py
+++ b/spacy/tests/lang/ja/test_tokenizer.py
@@ -47,10 +47,3 @@ def test_ja_tokenizer_tags(ja_tokenizer, text, expected_tags):
def test_ja_tokenizer_pos(ja_tokenizer, text, expected_pos):
pos = [token.pos_ for token in ja_tokenizer(text)]
assert pos == expected_pos
-
-
-def test_extra_spaces(ja_tokenizer):
- # note: three spaces after "I"
- tokens = ja_tokenizer("I like cheese.")
- assert tokens[1].orth_ == " "
- assert tokens[2].orth_ == " "
diff --git a/spacy/tests/lang/lt/test_lemmatizer.py b/spacy/tests/lang/lt/test_lemmatizer.py
index f7408fc16..9b2969849 100644
--- a/spacy/tests/lang/lt/test_lemmatizer.py
+++ b/spacy/tests/lang/lt/test_lemmatizer.py
@@ -17,4 +17,4 @@ TEST_CASES = [
@pytest.mark.parametrize("tokens,lemmas", TEST_CASES)
def test_lt_lemmatizer(lt_lemmatizer, tokens, lemmas):
- assert lemmas == [lt_lemmatizer.lookup_table.get(token, token) for token in tokens]
+ assert lemmas == [lt_lemmatizer.lookup(token) for token in tokens]
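Both lemmatizer test changes above reduce lookup lemmatization to a plain dict: return the table entry if present, otherwise fall back to the token itself. A sketch with a hypothetical helper name:

```python
def lookup_lemmatize(table, tokens):
    """Dict-based lookup lemmatization: table entry if present, else the token."""
    return [table.get(token, token) for token in tokens]
```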
diff --git a/spacy/tests/lang/ru/test_lemmatizer.py b/spacy/tests/lang/ru/test_lemmatizer.py
index b228fded8..b92dfa29c 100644
--- a/spacy/tests/lang/ru/test_lemmatizer.py
+++ b/spacy/tests/lang/ru/test_lemmatizer.py
@@ -2,10 +2,17 @@
from __future__ import unicode_literals
import pytest
+from spacy.lang.ru import Russian
from ...util import get_doc
+@pytest.fixture
+def ru_lemmatizer():
+ pytest.importorskip("pymorphy2")
+ return Russian.Defaults.create_lemmatizer()
+
+
def test_ru_doc_lemmatization(ru_tokenizer):
words = ["мама", "мыла", "раму"]
tags = [
diff --git a/spacy/tests/lang/sr/test_exceptions.py b/spacy/tests/lang/sr/test_еxceptions.py
similarity index 100%
rename from spacy/tests/lang/sr/test_exceptions.py
rename to spacy/tests/lang/sr/test_еxceptions.py
diff --git a/spacy/tests/matcher/test_matcher_api.py b/spacy/tests/matcher/test_matcher_api.py
index 0d640e1a2..df35a1be2 100644
--- a/spacy/tests/matcher/test_matcher_api.py
+++ b/spacy/tests/matcher/test_matcher_api.py
@@ -410,11 +410,3 @@ def test_matcher_schema_token_attributes(en_vocab, pattern, text):
assert len(matcher) == 1
matches = matcher(doc)
assert len(matches) == 1
-
-
-def test_matcher_valid_callback(en_vocab):
- """Test that on_match can only be None or callable."""
- matcher = Matcher(en_vocab)
- with pytest.raises(ValueError):
- matcher.add("TEST", [], [{"TEXT": "test"}])
- matcher(Doc(en_vocab, words=["test"]))
diff --git a/spacy/tests/matcher/test_phrase_matcher.py b/spacy/tests/matcher/test_phrase_matcher.py
index 486cbb984..b82f9a058 100644
--- a/spacy/tests/matcher/test_phrase_matcher.py
+++ b/spacy/tests/matcher/test_phrase_matcher.py
@@ -8,31 +8,10 @@ from ..util import get_doc
def test_matcher_phrase_matcher(en_vocab):
+ doc = Doc(en_vocab, words=["Google", "Now"])
+ matcher = PhraseMatcher(en_vocab)
+ matcher.add("COMPANY", None, doc)
doc = Doc(en_vocab, words=["I", "like", "Google", "Now", "best"])
- # intermediate phrase
- pattern = Doc(en_vocab, words=["Google", "Now"])
- matcher = PhraseMatcher(en_vocab)
- matcher.add("COMPANY", None, pattern)
- assert len(matcher(doc)) == 1
- # initial token
- pattern = Doc(en_vocab, words=["I"])
- matcher = PhraseMatcher(en_vocab)
- matcher.add("I", None, pattern)
- assert len(matcher(doc)) == 1
- # initial phrase
- pattern = Doc(en_vocab, words=["I", "like"])
- matcher = PhraseMatcher(en_vocab)
- matcher.add("ILIKE", None, pattern)
- assert len(matcher(doc)) == 1
- # final token
- pattern = Doc(en_vocab, words=["best"])
- matcher = PhraseMatcher(en_vocab)
- matcher.add("BEST", None, pattern)
- assert len(matcher(doc)) == 1
- # final phrase
- pattern = Doc(en_vocab, words=["Now", "best"])
- matcher = PhraseMatcher(en_vocab)
- matcher.add("NOWBEST", None, pattern)
assert len(matcher(doc)) == 1
@@ -52,68 +31,6 @@ def test_phrase_matcher_contains(en_vocab):
assert "TEST2" not in matcher
-def test_phrase_matcher_repeated_add(en_vocab):
- matcher = PhraseMatcher(en_vocab)
- # match ID only gets added once
- matcher.add("TEST", None, Doc(en_vocab, words=["like"]))
- matcher.add("TEST", None, Doc(en_vocab, words=["like"]))
- matcher.add("TEST", None, Doc(en_vocab, words=["like"]))
- matcher.add("TEST", None, Doc(en_vocab, words=["like"]))
- doc = Doc(en_vocab, words=["I", "like", "Google", "Now", "best"])
- assert "TEST" in matcher
- assert "TEST2" not in matcher
- assert len(matcher(doc)) == 1
-
-
-def test_phrase_matcher_remove(en_vocab):
- matcher = PhraseMatcher(en_vocab)
- matcher.add("TEST1", None, Doc(en_vocab, words=["like"]))
- matcher.add("TEST2", None, Doc(en_vocab, words=["best"]))
- doc = Doc(en_vocab, words=["I", "like", "Google", "Now", "best"])
- assert "TEST1" in matcher
- assert "TEST2" in matcher
- assert "TEST3" not in matcher
- assert len(matcher(doc)) == 2
- matcher.remove("TEST1")
- assert "TEST1" not in matcher
- assert "TEST2" in matcher
- assert "TEST3" not in matcher
- assert len(matcher(doc)) == 1
- matcher.remove("TEST2")
- assert "TEST1" not in matcher
- assert "TEST2" not in matcher
- assert "TEST3" not in matcher
- assert len(matcher(doc)) == 0
- with pytest.raises(KeyError):
- matcher.remove("TEST3")
- assert "TEST1" not in matcher
- assert "TEST2" not in matcher
- assert "TEST3" not in matcher
- assert len(matcher(doc)) == 0
-
-
-def test_phrase_matcher_overlapping_with_remove(en_vocab):
- matcher = PhraseMatcher(en_vocab)
- matcher.add("TEST", None, Doc(en_vocab, words=["like"]))
- # TEST2 is added alongside TEST
- matcher.add("TEST2", None, Doc(en_vocab, words=["like"]))
- doc = Doc(en_vocab, words=["I", "like", "Google", "Now", "best"])
- assert "TEST" in matcher
- assert len(matcher) == 2
- assert len(matcher(doc)) == 2
- # removing TEST does not remove the entry for TEST2
- matcher.remove("TEST")
- assert "TEST" not in matcher
- assert len(matcher) == 1
- assert len(matcher(doc)) == 1
- assert matcher(doc)[0][0] == en_vocab.strings["TEST2"]
- # removing TEST2 removes all
- matcher.remove("TEST2")
- assert "TEST2" not in matcher
- assert len(matcher) == 0
- assert len(matcher(doc)) == 0
-
-
def test_phrase_matcher_string_attrs(en_vocab):
words1 = ["I", "like", "cats"]
pos1 = ["PRON", "VERB", "NOUN"]
diff --git a/spacy/tests/morphology/__init__.py b/spacy/tests/morphology/__init__.py
deleted file mode 100644
index e69de29bb..000000000
diff --git a/spacy/tests/morphology/test_morph_features.py b/spacy/tests/morphology/test_morph_features.py
deleted file mode 100644
index 4b8f0d754..000000000
--- a/spacy/tests/morphology/test_morph_features.py
+++ /dev/null
@@ -1,48 +0,0 @@
-# coding: utf-8
-from __future__ import unicode_literals
-
-import pytest
-from spacy.morphology import Morphology
-from spacy.strings import StringStore, get_string_id
-from spacy.lemmatizer import Lemmatizer
-
-
-@pytest.fixture
-def morphology():
- return Morphology(StringStore(), {}, Lemmatizer())
-
-
-def test_init(morphology):
- pass
-
-
-def test_add_morphology_with_string_names(morphology):
- morphology.add({"Case_gen", "Number_sing"})
-
-
-def test_add_morphology_with_int_ids(morphology):
- morphology.add({get_string_id("Case_gen"), get_string_id("Number_sing")})
-
-
-def test_add_morphology_with_mix_strings_and_ints(morphology):
- morphology.add({get_string_id("PunctSide_ini"), "VerbType_aux"})
-
-
-def test_morphology_tags_hash_distinctly(morphology):
- tag1 = morphology.add({"PunctSide_ini", "VerbType_aux"})
- tag2 = morphology.add({"Case_gen", "Number_sing"})
- assert tag1 != tag2
-
-
-def test_morphology_tags_hash_independent_of_order(morphology):
- tag1 = morphology.add({"Case_gen", "Number_sing"})
- tag2 = morphology.add({"Number_sing", "Case_gen"})
- assert tag1 == tag2
-
-
-def test_update_morphology_tag(morphology):
- tag1 = morphology.add({"Case_gen"})
- tag2 = morphology.update(tag1, {"Number_sing"})
- assert tag1 != tag2
- tag3 = morphology.add({"Number_sing", "Case_gen"})
- assert tag2 == tag3
diff --git a/spacy/tests/parser/test_ner.py b/spacy/tests/parser/test_ner.py
index 4dc7542ed..43c00a963 100644
--- a/spacy/tests/parser/test_ner.py
+++ b/spacy/tests/parser/test_ner.py
@@ -2,9 +2,7 @@
from __future__ import unicode_literals
import pytest
-from spacy.lang.en import English
-
-from spacy.pipeline import EntityRecognizer, EntityRuler
+from spacy.pipeline import EntityRecognizer
from spacy.vocab import Vocab
from spacy.syntax.ner import BiluoPushDown
from spacy.gold import GoldParse
@@ -82,190 +80,14 @@ def test_get_oracle_moves_negative_O(tsys, vocab):
assert names
-def test_oracle_moves_missing_B(en_vocab):
- words = ["B", "52", "Bomber"]
- biluo_tags = [None, None, "L-PRODUCT"]
-
- doc = Doc(en_vocab, words=words)
- gold = GoldParse(doc, words=words, entities=biluo_tags)
-
- moves = BiluoPushDown(en_vocab.strings)
- move_types = ("M", "B", "I", "L", "U", "O")
- for tag in biluo_tags:
- if tag is None:
- continue
- elif tag == "O":
- moves.add_action(move_types.index("O"), "")
- else:
- action, label = tag.split("-")
- moves.add_action(move_types.index("B"), label)
- moves.add_action(move_types.index("I"), label)
- moves.add_action(move_types.index("L"), label)
- moves.add_action(move_types.index("U"), label)
- moves.preprocess_gold(gold)
- seq = moves.get_oracle_sequence(doc, gold)
-
-
-def test_oracle_moves_whitespace(en_vocab):
- words = ["production", "\n", "of", "Northrop", "\n", "Corp.", "\n", "'s", "radar"]
- biluo_tags = ["O", "O", "O", "B-ORG", None, "I-ORG", "L-ORG", "O", "O"]
-
- doc = Doc(en_vocab, words=words)
- gold = GoldParse(doc, words=words, entities=biluo_tags)
-
- moves = BiluoPushDown(en_vocab.strings)
- move_types = ("M", "B", "I", "L", "U", "O")
- for tag in biluo_tags:
- if tag is None:
- continue
- elif tag == "O":
- moves.add_action(move_types.index("O"), "")
- else:
- action, label = tag.split("-")
- moves.add_action(move_types.index(action), label)
- moves.preprocess_gold(gold)
- moves.get_oracle_sequence(doc, gold)
-
-
-def test_accept_blocked_token():
- """Test succesful blocking of tokens to be in an entity."""
- # 1. test normal behaviour
- nlp1 = English()
- doc1 = nlp1("I live in New York")
- ner1 = EntityRecognizer(doc1.vocab)
- assert [token.ent_iob_ for token in doc1] == ["", "", "", "", ""]
- assert [token.ent_type_ for token in doc1] == ["", "", "", "", ""]
-
- # Add the OUT action
- ner1.moves.add_action(5, "")
- ner1.add_label("GPE")
- # Get into the state just before "New"
- state1 = ner1.moves.init_batch([doc1])[0]
- ner1.moves.apply_transition(state1, "O")
- ner1.moves.apply_transition(state1, "O")
- ner1.moves.apply_transition(state1, "O")
- # Check that B-GPE is valid.
- assert ner1.moves.is_valid(state1, "B-GPE")
-
- # 2. test blocking behaviour
- nlp2 = English()
- doc2 = nlp2("I live in New York")
- ner2 = EntityRecognizer(doc2.vocab)
-
- # set "New York" to a blocked entity
- doc2.ents = [(0, 3, 5)]
- assert [token.ent_iob_ for token in doc2] == ["", "", "", "B", "B"]
- assert [token.ent_type_ for token in doc2] == ["", "", "", "", ""]
-
- # Check that B-GPE is now invalid.
- ner2.moves.add_action(4, "")
- ner2.moves.add_action(5, "")
- ner2.add_label("GPE")
- state2 = ner2.moves.init_batch([doc2])[0]
- ner2.moves.apply_transition(state2, "O")
- ner2.moves.apply_transition(state2, "O")
- ner2.moves.apply_transition(state2, "O")
- # we can only use U- for "New"
- assert not ner2.moves.is_valid(state2, "B-GPE")
- assert ner2.moves.is_valid(state2, "U-")
- ner2.moves.apply_transition(state2, "U-")
- # we can only use U- for "York"
- assert not ner2.moves.is_valid(state2, "B-GPE")
- assert ner2.moves.is_valid(state2, "U-")
-
-
-def test_overwrite_token():
- nlp = English()
- ner1 = nlp.create_pipe("ner")
- nlp.add_pipe(ner1, name="ner")
- nlp.begin_training()
-
- # The untrained NER will predict O for each token
- doc = nlp("I live in New York")
- assert [token.ent_iob_ for token in doc] == ["O", "O", "O", "O", "O"]
- assert [token.ent_type_ for token in doc] == ["", "", "", "", ""]
-
- # Check that a new ner can overwrite O
- ner2 = EntityRecognizer(doc.vocab)
- ner2.moves.add_action(5, "")
- ner2.add_label("GPE")
- state = ner2.moves.init_batch([doc])[0]
- assert ner2.moves.is_valid(state, "B-GPE")
- assert ner2.moves.is_valid(state, "U-GPE")
- ner2.moves.apply_transition(state, "B-GPE")
- assert ner2.moves.is_valid(state, "I-GPE")
- assert ner2.moves.is_valid(state, "L-GPE")
-
-
-def test_ruler_before_ner():
- """ Test that an NER works after an entity_ruler: the second can add annotations """
- nlp = English()
-
- # 1 : Entity Ruler - should set "this" to B and everything else to empty
- ruler = EntityRuler(nlp)
- patterns = [{"label": "THING", "pattern": "This"}]
- ruler.add_patterns(patterns)
- nlp.add_pipe(ruler)
-
- # 2: untrained NER - should set everything else to O
- untrained_ner = nlp.create_pipe("ner")
- untrained_ner.add_label("MY_LABEL")
- nlp.add_pipe(untrained_ner)
- nlp.begin_training()
-
- doc = nlp("This is Antti Korhonen speaking in Finland")
- expected_iobs = ["B", "O", "O", "O", "O", "O", "O"]
- expected_types = ["THING", "", "", "", "", "", ""]
- assert [token.ent_iob_ for token in doc] == expected_iobs
- assert [token.ent_type_ for token in doc] == expected_types
-
-
-def test_ner_before_ruler():
- """ Test that an entity_ruler works after an NER: the second can overwrite O annotations """
- nlp = English()
-
- # 1: untrained NER - should set everything to O
- untrained_ner = nlp.create_pipe("ner")
- untrained_ner.add_label("MY_LABEL")
- nlp.add_pipe(untrained_ner, name="uner")
- nlp.begin_training()
-
- # 2 : Entity Ruler - should set "this" to B and keep everything else O
- ruler = EntityRuler(nlp)
- patterns = [{"label": "THING", "pattern": "This"}]
- ruler.add_patterns(patterns)
- nlp.add_pipe(ruler)
-
- doc = nlp("This is Antti Korhonen speaking in Finland")
- expected_iobs = ["B", "O", "O", "O", "O", "O", "O"]
- expected_types = ["THING", "", "", "", "", "", ""]
- assert [token.ent_iob_ for token in doc] == expected_iobs
- assert [token.ent_type_ for token in doc] == expected_types
-
-
-def test_block_ner():
- """ Test functionality for blocking tokens so they can't be in a named entity """
- # block "Antti L Korhonen" from being a named entity
- nlp = English()
- nlp.add_pipe(BlockerComponent1(2, 5))
- untrained_ner = nlp.create_pipe("ner")
- untrained_ner.add_label("MY_LABEL")
- nlp.add_pipe(untrained_ner, name="uner")
- nlp.begin_training()
- doc = nlp("This is Antti L Korhonen speaking in Finland")
- expected_iobs = ["O", "O", "B", "B", "B", "O", "O", "O"]
- expected_types = ["", "", "", "", "", "", "", ""]
- assert [token.ent_iob_ for token in doc] == expected_iobs
- assert [token.ent_type_ for token in doc] == expected_types
-
-
-class BlockerComponent1(object):
- name = "my_blocker"
-
- def __init__(self, start, end):
- self.start = start
- self.end = end
-
- def __call__(self, doc):
- doc.ents = [(0, self.start, self.end)]
- return doc
+def test_doc_add_entities_set_ents_iob(en_vocab):
+ doc = Doc(en_vocab, words=["This", "is", "a", "lion"])
+ ner = EntityRecognizer(en_vocab)
+ ner.begin_training([])
+ ner(doc)
+ assert len(list(doc.ents)) == 0
+ assert [w.ent_iob_ for w in doc] == (["O"] * len(doc))
+ doc.ents = [(doc.vocab.strings["ANIMAL"], 3, 4)]
+ assert [w.ent_iob_ for w in doc] == ["", "", "", "B"]
+ doc.ents = [(doc.vocab.strings["WORD"], 0, 2)]
+ assert [w.ent_iob_ for w in doc] == ["B", "I", "", ""]
diff --git a/spacy/tests/regression/test_issue1-1000.py b/spacy/tests/regression/test_issue1-1000.py
index b3f347765..febf2b5b3 100644
--- a/spacy/tests/regression/test_issue1-1000.py
+++ b/spacy/tests/regression/test_issue1-1000.py
@@ -426,7 +426,7 @@ def test_issue957(en_tokenizer):
def test_issue999(train_data):
"""Test that adding entities and resuming training works passably OK.
There are two issues here:
- 1) We have to read labels. This isn't very nice.
+    1) We have to re-add labels. This isn't very nice.
2) There's no way to set the learning rate for the weight update, so we
end up out-of-scale, causing it to learn too fast.
"""
diff --git a/spacy/tests/regression/test_issue1501-2000.py b/spacy/tests/regression/test_issue1501-2000.py
index 520090bb4..24f725ab8 100644
--- a/spacy/tests/regression/test_issue1501-2000.py
+++ b/spacy/tests/regression/test_issue1501-2000.py
@@ -187,7 +187,7 @@ def test_issue1799():
def test_issue1807():
"""Test vocab.set_vector also adds the word to the vocab."""
- vocab = Vocab(vectors_name="test_issue1807")
+ vocab = Vocab()
assert "hello" not in vocab
vocab.set_vector("hello", numpy.ones((50,), dtype="f"))
assert "hello" in vocab
diff --git a/spacy/tests/regression/test_issue2501-3000.py b/spacy/tests/regression/test_issue2501-3000.py
index a0b1e2aac..cf29c2535 100644
--- a/spacy/tests/regression/test_issue2501-3000.py
+++ b/spacy/tests/regression/test_issue2501-3000.py
@@ -184,7 +184,7 @@ def test_issue2833(en_vocab):
def test_issue2871():
"""Test that vectors recover the correct key for spaCy reserved words."""
words = ["dog", "cat", "SUFFIX"]
- vocab = Vocab(vectors_name="test_issue2871")
+ vocab = Vocab()
vocab.vectors.resize(shape=(3, 10))
vector_data = numpy.zeros((3, 10), dtype="f")
for word in words:
diff --git a/spacy/tests/regression/test_issue3001-3500.py b/spacy/tests/regression/test_issue3001-3500.py
index c430678d3..3b0c2f1ed 100644
--- a/spacy/tests/regression/test_issue3001-3500.py
+++ b/spacy/tests/regression/test_issue3001-3500.py
@@ -30,20 +30,20 @@ def test_issue3002():
def test_issue3009(en_vocab):
"""Test problem with matcher quantifiers"""
patterns = [
- [{"LEMMA": "have"}, {"LOWER": "to"}, {"LOWER": "do"}, {"TAG": "IN"}],
+ [{"LEMMA": "have"}, {"LOWER": "to"}, {"LOWER": "do"}, {"POS": "ADP"}],
[
{"LEMMA": "have"},
{"IS_ASCII": True, "IS_PUNCT": False, "OP": "*"},
{"LOWER": "to"},
{"LOWER": "do"},
- {"TAG": "IN"},
+ {"POS": "ADP"},
],
[
{"LEMMA": "have"},
{"IS_ASCII": True, "IS_PUNCT": False, "OP": "?"},
{"LOWER": "to"},
{"LOWER": "do"},
- {"TAG": "IN"},
+ {"POS": "ADP"},
],
]
words = ["also", "has", "to", "do", "with"]
diff --git a/spacy/tests/regression/test_issue4042.py b/spacy/tests/regression/test_issue4042.py
deleted file mode 100644
index 500be9f2a..000000000
--- a/spacy/tests/regression/test_issue4042.py
+++ /dev/null
@@ -1,82 +0,0 @@
-# coding: utf8
-from __future__ import unicode_literals
-
-import spacy
-from spacy.pipeline import EntityRecognizer, EntityRuler
-from spacy.lang.en import English
-from spacy.tokens import Span
-from spacy.util import ensure_path
-
-from ..util import make_tempdir
-
-
-def test_issue4042():
- """Test that serialization of an EntityRuler before NER works fine."""
- nlp = English()
-
- # add ner pipe
- ner = nlp.create_pipe("ner")
- ner.add_label("SOME_LABEL")
- nlp.add_pipe(ner)
- nlp.begin_training()
-
- # Add entity ruler
- ruler = EntityRuler(nlp)
- patterns = [
- {"label": "MY_ORG", "pattern": "Apple"},
- {"label": "MY_GPE", "pattern": [{"lower": "san"}, {"lower": "francisco"}]},
- ]
- ruler.add_patterns(patterns)
- nlp.add_pipe(ruler, before="ner") # works fine with "after"
- doc1 = nlp("What do you think about Apple ?")
- assert doc1.ents[0].label_ == "MY_ORG"
-
- with make_tempdir() as d:
- output_dir = ensure_path(d)
- if not output_dir.exists():
- output_dir.mkdir()
- nlp.to_disk(output_dir)
-
- nlp2 = spacy.load(output_dir)
- doc2 = nlp2("What do you think about Apple ?")
- assert doc2.ents[0].label_ == "MY_ORG"
-
-
-def test_issue4042_bug2():
- """
- Test that serialization of an NER works fine when new labels were added.
- This is the second bug of two bugs underlying the issue 4042.
- """
- nlp1 = English()
- vocab = nlp1.vocab
-
- # add ner pipe
- ner1 = nlp1.create_pipe("ner")
- ner1.add_label("SOME_LABEL")
- nlp1.add_pipe(ner1)
- nlp1.begin_training()
-
- # add a new label to the doc
- doc1 = nlp1("What do you think about Apple ?")
- assert len(ner1.labels) == 1
- assert "SOME_LABEL" in ner1.labels
- apple_ent = Span(doc1, 5, 6, label="MY_ORG")
- doc1.ents = list(doc1.ents) + [apple_ent]
-
- # reapply the NER - at this point it should resize itself
- ner1(doc1)
- assert len(ner1.labels) == 2
- assert "SOME_LABEL" in ner1.labels
- assert "MY_ORG" in ner1.labels
-
- with make_tempdir() as d:
- # assert IO goes fine
- output_dir = ensure_path(d)
- if not output_dir.exists():
- output_dir.mkdir()
- ner1.to_disk(output_dir)
-
- nlp2 = English(vocab)
- ner2 = EntityRecognizer(vocab)
- ner2.from_disk(output_dir)
- assert len(ner2.labels) == 2
diff --git a/spacy/tests/regression/test_issue4054.py b/spacy/tests/regression/test_issue4054.py
index cc84cebf8..2c9d73751 100644
--- a/spacy/tests/regression/test_issue4054.py
+++ b/spacy/tests/regression/test_issue4054.py
@@ -2,12 +2,12 @@
from __future__ import unicode_literals
from spacy.vocab import Vocab
+
import spacy
from spacy.lang.en import English
+from spacy.tests.util import make_tempdir
from spacy.util import ensure_path
-from ..util import make_tempdir
-
def test_issue4054(en_vocab):
"""Test that a new blank model can be made with a vocab from file,
diff --git a/spacy/tests/regression/test_issue4267.py b/spacy/tests/regression/test_issue4267.py
deleted file mode 100644
index 5fc61e142..000000000
--- a/spacy/tests/regression/test_issue4267.py
+++ /dev/null
@@ -1,42 +0,0 @@
-# coding: utf8
-from __future__ import unicode_literals
-
-import pytest
-
-import spacy
-
-from spacy.lang.en import English
-from spacy.pipeline import EntityRuler
-from spacy.tokens import Span
-
-
-def test_issue4267():
- """ Test that running an entity_ruler after ner gives consistent results"""
- nlp = English()
- ner = nlp.create_pipe("ner")
- ner.add_label("PEOPLE")
- nlp.add_pipe(ner)
- nlp.begin_training()
-
- assert "ner" in nlp.pipe_names
-
- # assert that we have correct IOB annotations
- doc1 = nlp("hi")
- assert doc1.is_nered
- for token in doc1:
- assert token.ent_iob == 2
-
- # add entity ruler and run again
- ruler = EntityRuler(nlp)
- patterns = [{"label": "SOFTWARE", "pattern": "spacy"}]
-
- ruler.add_patterns(patterns)
- nlp.add_pipe(ruler)
- assert "entity_ruler" in nlp.pipe_names
- assert "ner" in nlp.pipe_names
-
- # assert that we still have correct IOB annotations
- doc2 = nlp("hi")
- assert doc2.is_nered
- for token in doc2:
- assert token.ent_iob == 2
diff --git a/spacy/tests/regression/test_issue4278.py b/spacy/tests/regression/test_issue4278.py
index cb09340ff..4c85d15c4 100644
--- a/spacy/tests/regression/test_issue4278.py
+++ b/spacy/tests/regression/test_issue4278.py
@@ -13,7 +13,7 @@ class DummyPipe(Pipe):
def predict(self, docs):
return ([1, 2, 3], [4, 5, 6])
- def set_annotations(self, docs, scores, tensors=None):
+ def set_annotations(self, docs, scores, tensor=None):
return docs
diff --git a/spacy/tests/regression/test_issue4313.py b/spacy/tests/regression/test_issue4313.py
deleted file mode 100644
index c68f745a7..000000000
--- a/spacy/tests/regression/test_issue4313.py
+++ /dev/null
@@ -1,39 +0,0 @@
-# coding: utf8
-from __future__ import unicode_literals
-
-from collections import defaultdict
-
-from spacy.pipeline import EntityRecognizer
-
-from spacy.lang.en import English
-from spacy.tokens import Span
-
-
-def test_issue4313():
- """ This should not crash or exit with some strange error code """
- beam_width = 16
- beam_density = 0.0001
- nlp = English()
- ner = EntityRecognizer(nlp.vocab)
- ner.add_label("SOME_LABEL")
- ner.begin_training([])
- nlp.add_pipe(ner)
-
- # add a new label to the doc
- doc = nlp("What do you think about Apple ?")
- assert len(ner.labels) == 1
- assert "SOME_LABEL" in ner.labels
- apple_ent = Span(doc, 5, 6, label="MY_ORG")
- doc.ents = list(doc.ents) + [apple_ent]
-
- # ensure the beam_parse still works with the new label
- docs = [doc]
- beams = nlp.entity.beam_parse(
- docs, beam_width=beam_width, beam_density=beam_density
- )
-
- for doc, beam in zip(docs, beams):
- entity_scores = defaultdict(float)
- for score, ents in nlp.entity.moves.get_beam_parses(beam):
- for start, end, label in ents:
- entity_scores[(start, end, label)] += score
diff --git a/spacy/tests/serialize/test_serialize_kb.py b/spacy/tests/serialize/test_serialize_kb.py
index b19c11864..67fd9f0d4 100644
--- a/spacy/tests/serialize/test_serialize_kb.py
+++ b/spacy/tests/serialize/test_serialize_kb.py
@@ -1,10 +1,10 @@
# coding: utf-8
from __future__ import unicode_literals
-from spacy.util import ensure_path
-from spacy.kb import KnowledgeBase
-
from ..util import make_tempdir
+from ...util import ensure_path
+
+from spacy.kb import KnowledgeBase
def test_serialize_kb_disk(en_vocab):
diff --git a/spacy/tests/test_displacy.py b/spacy/tests/test_displacy.py
index 2d1f1bd8f..5e99d261a 100644
--- a/spacy/tests/test_displacy.py
+++ b/spacy/tests/test_displacy.py
@@ -32,7 +32,7 @@ def test_displacy_parse_deps(en_vocab):
assert isinstance(deps, dict)
assert deps["words"] == [
{"text": "This", "tag": "DET"},
- {"text": "is", "tag": "AUX"},
+ {"text": "is", "tag": "VERB"},
{"text": "a", "tag": "DET"},
{"text": "sentence", "tag": "NOUN"},
]
diff --git a/spacy/tests/test_gold.py b/spacy/tests/test_gold.py
index 4f79c4463..860540be2 100644
--- a/spacy/tests/test_gold.py
+++ b/spacy/tests/test_gold.py
@@ -3,12 +3,8 @@ from __future__ import unicode_literals
from spacy.gold import biluo_tags_from_offsets, offsets_from_biluo_tags
from spacy.gold import spans_from_biluo_tags, GoldParse
-from spacy.gold import GoldCorpus, docs_to_json
-from spacy.lang.en import English
from spacy.tokens import Doc
-from .util import make_tempdir
import pytest
-import srsly
def test_gold_biluo_U(en_vocab):
@@ -85,28 +81,3 @@ def test_gold_ner_missing_tags(en_tokenizer):
doc = en_tokenizer("I flew to Silicon Valley via London.")
biluo_tags = [None, "O", "O", "B-LOC", "L-LOC", "O", "U-GPE", "O"]
gold = GoldParse(doc, entities=biluo_tags) # noqa: F841
-
-
-def test_roundtrip_docs_to_json():
- text = "I flew to Silicon Valley via London."
- cats = {"TRAVEL": 1.0, "BAKING": 0.0}
- nlp = English()
- doc = nlp(text)
- doc.cats = cats
- doc[0].is_sent_start = True
- for i in range(1, len(doc)):
- doc[i].is_sent_start = False
-
- with make_tempdir() as tmpdir:
- json_file = tmpdir / "roundtrip.json"
- srsly.write_json(json_file, [docs_to_json(doc)])
- goldcorpus = GoldCorpus(str(json_file), str(json_file))
-
- reloaded_doc, goldparse = next(goldcorpus.train_docs(nlp))
-
- assert len(doc) == goldcorpus.count_train()
- assert text == reloaded_doc.text
- assert "TRAVEL" in goldparse.cats
- assert "BAKING" in goldparse.cats
- assert cats["TRAVEL"] == goldparse.cats["TRAVEL"]
- assert cats["BAKING"] == goldparse.cats["BAKING"]
diff --git a/spacy/tests/test_scorer.py b/spacy/tests/test_scorer.py
index 9cc4f75b2..a747d3adb 100644
--- a/spacy/tests/test_scorer.py
+++ b/spacy/tests/test_scorer.py
@@ -1,12 +1,9 @@
# coding: utf-8
from __future__ import unicode_literals
-from numpy.testing import assert_almost_equal, assert_array_almost_equal
-import pytest
from pytest import approx
from spacy.gold import GoldParse
-from spacy.scorer import Scorer, ROCAUCScore
-from spacy.scorer import _roc_auc_score, _roc_curve
+from spacy.scorer import Scorer
from .util import get_doc
test_ner_cardinal = [
@@ -69,73 +66,3 @@ def test_ner_per_type(en_vocab):
assert results["ents_per_type"]["ORG"]["p"] == 50
assert results["ents_per_type"]["ORG"]["r"] == 100
assert results["ents_per_type"]["ORG"]["f"] == approx(66.66666)
-
-
-def test_roc_auc_score():
- # Binary classification, toy tests from scikit-learn test suite
- y_true = [0, 1]
- y_score = [0, 1]
- tpr, fpr, _ = _roc_curve(y_true, y_score)
- roc_auc = _roc_auc_score(y_true, y_score)
- assert_array_almost_equal(tpr, [0, 0, 1])
- assert_array_almost_equal(fpr, [0, 1, 1])
- assert_almost_equal(roc_auc, 1.0)
-
- y_true = [0, 1]
- y_score = [1, 0]
- tpr, fpr, _ = _roc_curve(y_true, y_score)
- roc_auc = _roc_auc_score(y_true, y_score)
- assert_array_almost_equal(tpr, [0, 1, 1])
- assert_array_almost_equal(fpr, [0, 0, 1])
- assert_almost_equal(roc_auc, 0.0)
-
- y_true = [1, 0]
- y_score = [1, 1]
- tpr, fpr, _ = _roc_curve(y_true, y_score)
- roc_auc = _roc_auc_score(y_true, y_score)
- assert_array_almost_equal(tpr, [0, 1])
- assert_array_almost_equal(fpr, [0, 1])
- assert_almost_equal(roc_auc, 0.5)
-
- y_true = [1, 0]
- y_score = [1, 0]
- tpr, fpr, _ = _roc_curve(y_true, y_score)
- roc_auc = _roc_auc_score(y_true, y_score)
- assert_array_almost_equal(tpr, [0, 0, 1])
- assert_array_almost_equal(fpr, [0, 1, 1])
- assert_almost_equal(roc_auc, 1.0)
-
- y_true = [1, 0]
- y_score = [0.5, 0.5]
- tpr, fpr, _ = _roc_curve(y_true, y_score)
- roc_auc = _roc_auc_score(y_true, y_score)
- assert_array_almost_equal(tpr, [0, 1])
- assert_array_almost_equal(fpr, [0, 1])
- assert_almost_equal(roc_auc, 0.5)
-
- # same result as above with ROCAUCScore wrapper
- score = ROCAUCScore()
- score.score_set(0.5, 1)
- score.score_set(0.5, 0)
- assert_almost_equal(score.score, 0.5)
-
- # check that errors are raised in undefined cases and score is -inf
- y_true = [0, 0]
- y_score = [0.25, 0.75]
- with pytest.raises(ValueError):
- _roc_auc_score(y_true, y_score)
-
- score = ROCAUCScore()
- score.score_set(0.25, 0)
- score.score_set(0.75, 0)
- assert score.score == -float("inf")
-
- y_true = [1, 1]
- y_score = [0.25, 0.75]
- with pytest.raises(ValueError):
- _roc_auc_score(y_true, y_score)
-
- score = ROCAUCScore()
- score.score_set(0.25, 1)
- score.score_set(0.75, 1)
- assert score.score == -float("inf")
diff --git a/spacy/tests/vocab_vectors/test_lookups.py b/spacy/tests/vocab_vectors/test_lookups.py
index f78dd33c4..16ffe83fc 100644
--- a/spacy/tests/vocab_vectors/test_lookups.py
+++ b/spacy/tests/vocab_vectors/test_lookups.py
@@ -2,8 +2,7 @@
from __future__ import unicode_literals
import pytest
-from spacy.lookups import Lookups, Table
-from spacy.strings import get_string_id
+from spacy.lookups import Lookups
from spacy.vocab import Vocab
from ..util import make_tempdir
@@ -20,9 +19,9 @@ def test_lookups_api():
table = lookups.get_table(table_name)
assert table.name == table_name
assert len(table) == 2
- assert table["hello"] == "world"
- table["a"] = "b"
- assert table["a"] == "b"
+ assert table.get("hello") == "world"
+ table.set("a", "b")
+ assert table.get("a") == "b"
table = lookups.get_table(table_name)
assert len(table) == 3
with pytest.raises(KeyError):
@@ -37,44 +36,8 @@ def test_lookups_api():
lookups.get_table(table_name)
-def test_table_api():
- table = Table(name="table")
- assert table.name == "table"
- assert len(table) == 0
- assert "abc" not in table
- data = {"foo": "bar", "hello": "world"}
- table = Table(name="table", data=data)
- assert len(table) == len(data)
- assert "foo" in table
- assert get_string_id("foo") in table
- assert table["foo"] == "bar"
- assert table[get_string_id("foo")] == "bar"
- assert table.get("foo") == "bar"
- assert table.get("abc") is None
- table["abc"] = 123
- assert table["abc"] == 123
- assert table[get_string_id("abc")] == 123
- table.set("def", 456)
- assert table["def"] == 456
- assert table[get_string_id("def")] == 456
-
-
-def test_table_api_to_from_bytes():
- data = {"foo": "bar", "hello": "world", "abc": 123}
- table = Table(name="table", data=data)
- table_bytes = table.to_bytes()
- new_table = Table().from_bytes(table_bytes)
- assert new_table.name == "table"
- assert len(new_table) == 3
- assert new_table["foo"] == "bar"
- assert new_table[get_string_id("foo")] == "bar"
- new_table2 = Table(data={"def": 456})
- new_table2.from_bytes(table_bytes)
- assert len(new_table2) == 3
- assert "def" not in new_table2
-
-
-@pytest.mark.skip(reason="This fails on Python 3.5")
+# This fails on Python 3.5
+@pytest.mark.xfail
def test_lookups_to_from_bytes():
lookups = Lookups()
lookups.add_table("table1", {"foo": "bar", "hello": "world"})
@@ -87,14 +50,15 @@ def test_lookups_to_from_bytes():
assert "table2" in new_lookups
table1 = new_lookups.get_table("table1")
assert len(table1) == 2
- assert table1["foo"] == "bar"
+ assert table1.get("foo") == "bar"
table2 = new_lookups.get_table("table2")
assert len(table2) == 3
- assert table2["b"] == 2
+ assert table2.get("b") == 2
assert new_lookups.to_bytes() == lookups_bytes
-@pytest.mark.skip(reason="This fails on Python 3.5")
+# This fails on Python 3.5
+@pytest.mark.xfail
def test_lookups_to_from_disk():
lookups = Lookups()
lookups.add_table("table1", {"foo": "bar", "hello": "world"})
@@ -108,13 +72,14 @@ def test_lookups_to_from_disk():
assert "table2" in new_lookups
table1 = new_lookups.get_table("table1")
assert len(table1) == 2
- assert table1["foo"] == "bar"
+ assert table1.get("foo") == "bar"
table2 = new_lookups.get_table("table2")
assert len(table2) == 3
- assert table2["b"] == 2
+ assert table2.get("b") == 2
-@pytest.mark.skip(reason="This fails on Python 3.5")
+# This fails on Python 3.5
+@pytest.mark.xfail
def test_lookups_to_from_bytes_via_vocab():
table_name = "test"
vocab = Vocab()
@@ -128,11 +93,12 @@ def test_lookups_to_from_bytes_via_vocab():
assert table_name in new_vocab.lookups
table = new_vocab.lookups.get_table(table_name)
assert len(table) == 2
- assert table["hello"] == "world"
+ assert table.get("hello") == "world"
assert new_vocab.to_bytes() == vocab_bytes
-@pytest.mark.skip(reason="This fails on Python 3.5")
+# This fails on Python 3.5
+@pytest.mark.xfail
def test_lookups_to_from_disk_via_vocab():
table_name = "test"
vocab = Vocab()
@@ -147,4 +113,4 @@ def test_lookups_to_from_disk_via_vocab():
assert table_name in new_vocab.lookups
table = new_vocab.lookups.get_table(table_name)
assert len(table) == 2
- assert table["hello"] == "world"
+ assert table.get("hello") == "world"
diff --git a/spacy/tests/vocab_vectors/test_vectors.py b/spacy/tests/vocab_vectors/test_vectors.py
index 4226bca3b..2a828de9c 100644
--- a/spacy/tests/vocab_vectors/test_vectors.py
+++ b/spacy/tests/vocab_vectors/test_vectors.py
@@ -259,7 +259,7 @@ def test_vectors_doc_doc_similarity(vocab, text1, text2):
def test_vocab_add_vector():
- vocab = Vocab(vectors_name="test_vocab_add_vector")
+ vocab = Vocab()
data = numpy.ndarray((5, 3), dtype="f")
data[0] = 1.0
data[1] = 2.0
@@ -272,7 +272,7 @@ def test_vocab_add_vector():
def test_vocab_prune_vectors():
- vocab = Vocab(vectors_name="test_vocab_prune_vectors")
+ vocab = Vocab()
_ = vocab["cat"] # noqa: F841
_ = vocab["dog"] # noqa: F841
_ = vocab["kitten"] # noqa: F841
diff --git a/spacy/tokens/__init__.py b/spacy/tokens/__init__.py
index 536ec8349..5722d45bc 100644
--- a/spacy/tokens/__init__.py
+++ b/spacy/tokens/__init__.py
@@ -4,6 +4,5 @@ from __future__ import unicode_literals
from .doc import Doc
from .token import Token
from .span import Span
-from ._serialize import DocBin
-__all__ = ["Doc", "Token", "Span", "DocBin"]
+__all__ = ["Doc", "Token", "Span"]
diff --git a/spacy/tokens/_retokenize.pyx b/spacy/tokens/_retokenize.pyx
index 5b0747fa0..741be7e6a 100644
--- a/spacy/tokens/_retokenize.pyx
+++ b/spacy/tokens/_retokenize.pyx
@@ -146,12 +146,11 @@ def _merge(Doc doc, merges):
syntactic root of the span.
RETURNS (Token): The first newly merged token.
"""
- cdef int i, merge_index, start, end, token_index, current_span_index, current_offset, offset, span_index
+ cdef int i, merge_index, start, end, token_index
cdef Span span
cdef const LexemeC* lex
cdef TokenC* token
cdef Pool mem = Pool()
- cdef int merged_iob = 0
tokens = mem.alloc(len(merges), sizeof(TokenC))
spans = []
diff --git a/spacy/tokens/_serialize.py b/spacy/tokens/_serialize.py
index 634d7450a..41f524839 100644
--- a/spacy/tokens/_serialize.py
+++ b/spacy/tokens/_serialize.py
@@ -8,77 +8,36 @@ from thinc.neural.ops import NumpyOps
from ..compat import copy_reg
from ..tokens import Doc
-from ..attrs import SPACY, ORTH, intify_attrs
-from ..errors import Errors
+from ..attrs import SPACY, ORTH
-class DocBin(object):
- """Pack Doc objects for binary serialization.
-
- The DocBin class lets you efficiently serialize the information from a
- collection of Doc objects. You can control which information is serialized
- by passing a list of attribute IDs, and optionally also specify whether the
- user data is serialized. The DocBin is faster and produces smaller data
- sizes than pickle, and allows you to deserialize without executing arbitrary
- Python code.
-
- The serialization format is gzipped msgpack, where the msgpack object has
- the following structure:
-
- {
- "attrs": List[uint64], # e.g. [TAG, HEAD, ENT_IOB, ENT_TYPE]
- "tokens": bytes, # Serialized numpy uint64 array with the token data
- "spaces": bytes, # Serialized numpy boolean array with spaces data
- "lengths": bytes, # Serialized numpy int32 array with the doc lengths
- "strings": List[unicode] # List of unique strings in the token data
- }
-
- Strings for the words, tags, labels etc are represented by 64-bit hashes in
- the token data, and every string that occurs at least once is passed via the
- strings object. This means the storage is more efficient if you pack more
- documents together, because you have less duplication in the strings.
-
- A notable downside to this format is that you can't easily extract just one
- document from the DocBin.
- """
+class DocBox(object):
+ """Serialize analyses from a collection of doc objects."""
def __init__(self, attrs=None, store_user_data=False):
- """Create a DocBin object to hold serialized annotations.
+ """Create a DocBox object to hold serialized annotations.
attrs (list): List of attributes to serialize. 'orth' and 'spacy' are
always serialized, so they're not required. Defaults to None.
- store_user_data (bool): Whether to include the `Doc.user_data`.
- RETURNS (DocBin): The newly constructed object.
-
- DOCS: https://spacy.io/api/docbin#init
"""
attrs = attrs or []
- attrs = sorted(intify_attrs(attrs))
+ # Ensure ORTH is always attrs[0]
self.attrs = [attr for attr in attrs if attr != ORTH and attr != SPACY]
- self.attrs.insert(0, ORTH) # Ensure ORTH is always attrs[0]
+ self.attrs.insert(0, ORTH)
self.tokens = []
self.spaces = []
self.user_data = []
self.strings = set()
self.store_user_data = store_user_data
- def __len__(self):
- """RETURNS: The number of Doc objects added to the DocBin."""
- return len(self.tokens)
-
def add(self, doc):
- """Add a Doc's annotations to the DocBin for serialization.
-
- doc (Doc): The Doc object to add.
-
- DOCS: https://spacy.io/api/docbin#add
- """
+ """Add a doc's annotations to the DocBox for serialization."""
array = doc.to_array(self.attrs)
if len(array.shape) == 1:
array = array.reshape((array.shape[0], 1))
self.tokens.append(array)
spaces = doc.to_array(SPACY)
- assert array.shape[0] == spaces.shape[0] # this should never happen
+ assert array.shape[0] == spaces.shape[0]
spaces = spaces.reshape((spaces.shape[0], 1))
self.spaces.append(numpy.asarray(spaces, dtype=bool))
self.strings.update(w.text for w in doc)
@@ -86,13 +45,7 @@ class DocBin(object):
self.user_data.append(srsly.msgpack_dumps(doc.user_data))
def get_docs(self, vocab):
- """Recover Doc objects from the annotations, using the given vocab.
-
- vocab (Vocab): The shared vocab.
- YIELDS (Doc): The Doc objects.
-
- DOCS: https://spacy.io/api/docbin#get_docs
- """
+ """Recover Doc objects from the annotations, using the given vocab."""
for string in self.strings:
vocab[string]
orth_col = self.attrs.index(ORTH)
@@ -107,16 +60,8 @@ class DocBin(object):
yield doc
def merge(self, other):
- """Extend the annotations of this DocBin with the annotations from
- another. Will raise an error if the pre-defined attrs of the two
- DocBins don't match.
-
- other (DocBin): The DocBin to merge into the current bin.
-
- DOCS: https://spacy.io/api/docbin#merge
- """
- if self.attrs != other.attrs:
- raise ValueError(Errors.E166.format(current=self.attrs, other=other.attrs))
+ """Extend the annotations of this DocBox with the annotations from another."""
+ assert self.attrs == other.attrs
self.tokens.extend(other.tokens)
self.spaces.extend(other.spaces)
self.strings.update(other.strings)
@@ -124,14 +69,9 @@ class DocBin(object):
self.user_data.extend(other.user_data)
def to_bytes(self):
- """Serialize the DocBin's annotations to a bytestring.
-
- RETURNS (bytes): The serialized DocBin.
-
- DOCS: https://spacy.io/api/docbin#to_bytes
- """
+ """Serialize the DocBox's annotations into a byte string."""
for tokens in self.tokens:
- assert len(tokens.shape) == 2, tokens.shape # this should never happen
+ assert len(tokens.shape) == 2, tokens.shape
lengths = [len(tokens) for tokens in self.tokens]
msg = {
"attrs": self.attrs,
@@ -144,15 +84,9 @@ class DocBin(object):
msg["user_data"] = self.user_data
return gzip.compress(srsly.msgpack_dumps(msg))
- def from_bytes(self, bytes_data):
- """Deserialize the DocBin's annotations from a bytestring.
-
- bytes_data (bytes): The data to load from.
- RETURNS (DocBin): The loaded DocBin.
-
- DOCS: https://spacy.io/api/docbin#from_bytes
- """
- msg = srsly.msgpack_loads(gzip.decompress(bytes_data))
+ def from_bytes(self, string):
+ """Deserialize the DocBox's annotations from a byte string."""
+ msg = srsly.msgpack_loads(gzip.decompress(string))
self.attrs = msg["attrs"]
self.strings = set(msg["strings"])
lengths = numpy.fromstring(msg["lengths"], dtype="int32")
@@ -166,35 +100,35 @@ class DocBin(object):
if self.store_user_data and "user_data" in msg:
self.user_data = list(msg["user_data"])
for tokens in self.tokens:
- assert len(tokens.shape) == 2, tokens.shape # this should never happen
+ assert len(tokens.shape) == 2, tokens.shape
return self
-def merge_bins(bins):
+def merge_boxes(boxes):
merged = None
- for byte_string in bins:
+ for byte_string in boxes:
if byte_string is not None:
- doc_bin = DocBin(store_user_data=True).from_bytes(byte_string)
+ box = DocBox(store_user_data=True).from_bytes(byte_string)
if merged is None:
- merged = doc_bin
+ merged = box
else:
- merged.merge(doc_bin)
+ merged.merge(box)
if merged is not None:
return merged.to_bytes()
else:
return b""
-def pickle_bin(doc_bin):
- return (unpickle_bin, (doc_bin.to_bytes(),))
+def pickle_box(box):
+ return (unpickle_box, (box.to_bytes(),))
-def unpickle_bin(byte_string):
- return DocBin().from_bytes(byte_string)
+def unpickle_box(byte_string):
+ return DocBox().from_bytes(byte_string)
-copy_reg.pickle(DocBin, pickle_bin, unpickle_bin)
+copy_reg.pickle(DocBox, pickle_box, unpickle_box)
# Compatibility, as we had named it this previously.
-Binder = DocBin
+Binder = DocBox
-__all__ = ["DocBin"]
+__all__ = ["DocBox"]
diff --git a/spacy/tokens/doc.pyx b/spacy/tokens/doc.pyx
index 80a808bae..e863b0807 100644
--- a/spacy/tokens/doc.pyx
+++ b/spacy/tokens/doc.pyx
@@ -256,7 +256,7 @@ cdef class Doc:
def is_nered(self):
"""Check if the document has named entities set. Will return True if
*any* of the tokens has a named entity tag set (even if the others are
 unknown values).
"""
if len(self) == 0:
return True
@@ -525,11 +525,13 @@ cdef class Doc:
def __set__(self, ents):
# TODO:
- # 1. Test basic data-driven ORTH gazetteer
- # 2. Test more nuanced date and currency regex
+ # 1. Allow negative matches
+ # 2. Ensure pre-set NERs are not over-written during statistical
+ # prediction
+ # 3. Test basic data-driven ORTH gazetteer
+ # 4. Test more nuanced date and currency regex
tokens_in_ents = {}
cdef attr_t entity_type
- cdef attr_t kb_id
cdef int ent_start, ent_end
for ent_info in ents:
entity_type, kb_id, ent_start, ent_end = get_entity_info(ent_info)
@@ -543,31 +545,27 @@ cdef class Doc:
tokens_in_ents[token_index] = (ent_start, ent_end, entity_type, kb_id)
cdef int i
for i in range(self.length):
- # default values
- entity_type = 0
- kb_id = 0
-
- # Set ent_iob to Missing (0) bij default unless this token was nered before
- ent_iob = 0
- if self.c[i].ent_iob != 0:
- ent_iob = 2
-
- # overwrite if the token was part of a specified entity
- if i in tokens_in_ents.keys():
- ent_start, ent_end, entity_type, kb_id = tokens_in_ents[i]
- if entity_type is None or entity_type <= 0:
- # Blocking this token from being overwritten by downstream NER
- ent_iob = 3
- elif ent_start == i:
- # Marking the start of an entity
- ent_iob = 3
- else:
- # Marking the inside of an entity
- ent_iob = 1
-
- self.c[i].ent_type = entity_type
- self.c[i].ent_kb_id = kb_id
- self.c[i].ent_iob = ent_iob
+ self.c[i].ent_type = 0
+ self.c[i].ent_kb_id = 0
+ self.c[i].ent_iob = 0 # Means missing.
+ cdef attr_t ent_type
+ cdef int start, end
+ for ent_info in ents:
+ ent_type, ent_kb_id, start, end = get_entity_info(ent_info)
+ if ent_type is None or ent_type < 0:
+ # Mark as O
+ for i in range(start, end):
+ self.c[i].ent_type = 0
+ self.c[i].ent_kb_id = 0
+ self.c[i].ent_iob = 2
+ else:
+ # Mark (inside) as I
+ for i in range(start, end):
+ self.c[i].ent_type = ent_type
+ self.c[i].ent_kb_id = ent_kb_id
+ self.c[i].ent_iob = 1
+ # Set start as B
+ self.c[start].ent_iob = 3
@property
def noun_chunks(self):
@@ -1091,37 +1089,6 @@ cdef class Doc:
data["_"][attr] = value
return data
- def to_utf8_array(self, int nr_char=-1):
- """Encode word strings to utf8, and export to a fixed-width array
- of characters. Characters are placed into the array in the order:
- 0, -1, 1, -2, etc
- For example, if the array is sliced array[:, :8], the array will
- contain the first 4 characters and last 4 characters of each word ---
- with the middle characters clipped out. The value 255 is used as a pad
- value.
- """
- byte_strings = [token.orth_.encode('utf8') for token in self]
- if nr_char == -1:
- nr_char = max(len(bs) for bs in byte_strings)
- cdef np.ndarray output = numpy.zeros((len(byte_strings), nr_char), dtype='uint8')
- output.fill(255)
- cdef int i, j, start_idx, end_idx
- cdef bytes byte_string
- cdef unsigned char utf8_char
- for i, byte_string in enumerate(byte_strings):
- j = 0
- start_idx = 0
- end_idx = len(byte_string) - 1
- while j < nr_char and start_idx <= end_idx:
- output[i, j] = byte_string[start_idx]
- start_idx += 1
- j += 1
- if j < nr_char and start_idx <= end_idx:
- output[i, j] = byte_string[end_idx]
- end_idx -= 1
- j += 1
- return output
-
cdef int token_by_start(const TokenC* tokens, int length, int start_char) except -2:
cdef int i
diff --git a/spacy/tokens/morphanalysis.pxd b/spacy/tokens/morphanalysis.pxd
deleted file mode 100644
index 22844454a..000000000
--- a/spacy/tokens/morphanalysis.pxd
+++ /dev/null
@@ -1,9 +0,0 @@
-from ..vocab cimport Vocab
-from ..typedefs cimport hash_t
-from ..structs cimport MorphAnalysisC
-
-
-cdef class MorphAnalysis:
- cdef readonly Vocab vocab
- cdef hash_t key
- cdef MorphAnalysisC c
diff --git a/spacy/tokens/morphanalysis.pyx b/spacy/tokens/morphanalysis.pyx
deleted file mode 100644
index e09870741..000000000
--- a/spacy/tokens/morphanalysis.pyx
+++ /dev/null
@@ -1,423 +0,0 @@
-from libc.string cimport memset
-
-from ..vocab cimport Vocab
-from ..typedefs cimport hash_t, attr_t
-from ..morphology cimport list_features, check_feature, get_field, tag_to_json
-
-from ..strings import get_string_id
-
-
-cdef class MorphAnalysis:
- """Control access to morphological features for a token."""
- def __init__(self, Vocab vocab, features=tuple()):
- self.vocab = vocab
- self.key = self.vocab.morphology.add(features)
- analysis = self.vocab.morphology.tags.get(self.key)
- if analysis is not NULL:
- self.c = analysis[0]
- else:
- memset(&self.c, 0, sizeof(self.c))
-
- @classmethod
- def from_id(cls, Vocab vocab, hash_t key):
- """Create a morphological analysis from a given ID."""
- cdef MorphAnalysis morph = MorphAnalysis.__new__(MorphAnalysis, vocab)
- morph.vocab = vocab
- morph.key = key
- analysis = vocab.morphology.tags.get(key)
- if analysis is not NULL:
- morph.c = analysis[0]
- else:
- memset(&morph.c, 0, sizeof(morph.c))
- return morph
-
- def __contains__(self, feature):
- """Test whether the morphological analysis contains some feature."""
- cdef attr_t feat_id = get_string_id(feature)
- return check_feature(&self.c, feat_id)
-
- def __iter__(self):
- """Iterate over the features in the analysis."""
- cdef attr_t feature
- for feature in list_features(&self.c):
- yield self.vocab.strings[feature]
-
- def __len__(self):
- """The number of features in the analysis."""
- return self.c.length
-
- def __str__(self):
- return self.to_json()
-
- def __repr__(self):
- return self.to_json()
-
- def __hash__(self):
- return self.key
-
- def get(self, unicode field):
- """Retrieve a feature by field."""
- cdef int field_id = self.vocab.morphology._feat_map.attr2field[field]
- return self.vocab.strings[get_field(&self.c, field_id)]
-
- def to_json(self):
- """Produce a json serializable representation, which will be a list of
- strings.
- """
- return tag_to_json(&self.c)
-
- @property
- def is_base_form(self):
- raise NotImplementedError
-
- @property
- def pos(self):
- return self.c.pos
-
- @property
- def pos_(self):
- return self.vocab.strings[self.c.pos]
-
- property id:
- def __get__(self):
- return self.key
-
- property abbr:
- def __get__(self):
- return self.c.abbr
-
- property adp_type:
- def __get__(self):
- return self.c.adp_type
-
- property adv_type:
- def __get__(self):
- return self.c.adv_type
-
- property animacy:
- def __get__(self):
- return self.c.animacy
-
- property aspect:
- def __get__(self):
- return self.c.aspect
-
- property case:
- def __get__(self):
- return self.c.case
-
- property conj_type:
- def __get__(self):
- return self.c.conj_type
-
- property connegative:
- def __get__(self):
- return self.c.connegative
-
- property definite:
- def __get__(self):
- return self.c.definite
-
- property degree:
- def __get__(self):
- return self.c.degree
-
- property derivation:
- def __get__(self):
- return self.c.derivation
-
- property echo:
- def __get__(self):
- return self.c.echo
-
- property foreign:
- def __get__(self):
- return self.c.foreign
-
- property gender:
- def __get__(self):
- return self.c.gender
-
- property hyph:
- def __get__(self):
- return self.c.hyph
-
- property inf_form:
- def __get__(self):
- return self.c.inf_form
-
- property mood:
- def __get__(self):
- return self.c.mood
-
- property name_type:
- def __get__(self):
- return self.c.name_type
-
- property negative:
- def __get__(self):
- return self.c.negative
-
- property noun_type:
- def __get__(self):
- return self.c.noun_type
-
- property number:
- def __get__(self):
- return self.c.number
-
- property num_form:
- def __get__(self):
- return self.c.num_form
-
- property num_type:
- def __get__(self):
- return self.c.num_type
-
- property num_value:
- def __get__(self):
- return self.c.num_value
-
- property part_form:
- def __get__(self):
- return self.c.part_form
-
- property part_type:
- def __get__(self):
- return self.c.part_type
-
- property person:
- def __get__(self):
- return self.c.person
-
- property polite:
- def __get__(self):
- return self.c.polite
-
- property polarity:
- def __get__(self):
- return self.c.polarity
-
- property poss:
- def __get__(self):
- return self.c.poss
-
- property prefix:
- def __get__(self):
- return self.c.prefix
-
- property prep_case:
- def __get__(self):
- return self.c.prep_case
-
- property pron_type:
- def __get__(self):
- return self.c.pron_type
-
- property punct_side:
- def __get__(self):
- return self.c.punct_side
-
- property punct_type:
- def __get__(self):
- return self.c.punct_type
-
- property reflex:
- def __get__(self):
- return self.c.reflex
-
- property style:
- def __get__(self):
- return self.c.style
-
- property style_variant:
- def __get__(self):
- return self.c.style_variant
-
- property tense:
- def __get__(self):
- return self.c.tense
-
- property typo:
- def __get__(self):
- return self.c.typo
-
- property verb_form:
- def __get__(self):
- return self.c.verb_form
-
- property voice:
- def __get__(self):
- return self.c.voice
-
- property verb_type:
- def __get__(self):
- return self.c.verb_type
-
- property abbr_:
- def __get__(self):
- return self.vocab.strings[self.c.abbr]
-
- property adp_type_:
- def __get__(self):
- return self.vocab.strings[self.c.adp_type]
-
- property adv_type_:
- def __get__(self):
- return self.vocab.strings[self.c.adv_type]
-
- property animacy_:
- def __get__(self):
- return self.vocab.strings[self.c.animacy]
-
- property aspect_:
- def __get__(self):
- return self.vocab.strings[self.c.aspect]
-
- property case_:
- def __get__(self):
- return self.vocab.strings[self.c.case]
-
- property conj_type_:
- def __get__(self):
- return self.vocab.strings[self.c.conj_type]
-
- property connegative_:
- def __get__(self):
- return self.vocab.strings[self.c.connegative]
-
- property definite_:
- def __get__(self):
- return self.vocab.strings[self.c.definite]
-
- property degree_:
- def __get__(self):
- return self.vocab.strings[self.c.degree]
-
- property derivation_:
- def __get__(self):
- return self.vocab.strings[self.c.derivation]
-
- property echo_:
- def __get__(self):
- return self.vocab.strings[self.c.echo]
-
- property foreign_:
- def __get__(self):
- return self.vocab.strings[self.c.foreign]
-
- property gender_:
- def __get__(self):
- return self.vocab.strings[self.c.gender]
-
- property hyph_:
- def __get__(self):
- return self.vocab.strings[self.c.hyph]
-
- property inf_form_:
- def __get__(self):
- return self.vocab.strings[self.c.inf_form]
-
- property name_type_:
- def __get__(self):
- return self.vocab.strings[self.c.name_type]
-
- property negative_:
- def __get__(self):
- return self.vocab.strings[self.c.negative]
-
- property mood_:
- def __get__(self):
- return self.vocab.strings[self.c.mood]
-
- property number_:
- def __get__(self):
- return self.vocab.strings[self.c.number]
-
- property num_form_:
- def __get__(self):
- return self.vocab.strings[self.c.num_form]
-
- property num_type_:
- def __get__(self):
- return self.vocab.strings[self.c.num_type]
-
- property num_value_:
- def __get__(self):
- return self.vocab.strings[self.c.num_value]
-
- property part_form_:
- def __get__(self):
- return self.vocab.strings[self.c.part_form]
-
- property part_type_:
- def __get__(self):
- return self.vocab.strings[self.c.part_type]
-
- property person_:
- def __get__(self):
- return self.vocab.strings[self.c.person]
-
- property polite_:
- def __get__(self):
- return self.vocab.strings[self.c.polite]
-
- property polarity_:
- def __get__(self):
- return self.vocab.strings[self.c.polarity]
-
- property poss_:
- def __get__(self):
- return self.vocab.strings[self.c.poss]
-
- property prefix_:
- def __get__(self):
- return self.vocab.strings[self.c.prefix]
-
- property prep_case_:
- def __get__(self):
- return self.vocab.strings[self.c.prep_case]
-
- property pron_type_:
- def __get__(self):
- return self.vocab.strings[self.c.pron_type]
-
- property punct_side_:
- def __get__(self):
- return self.vocab.strings[self.c.punct_side]
-
- property punct_type_:
- def __get__(self):
- return self.vocab.strings[self.c.punct_type]
-
- property reflex_:
- def __get__(self):
- return self.vocab.strings[self.c.reflex]
-
- property style_:
- def __get__(self):
- return self.vocab.strings[self.c.style]
-
- property style_variant_:
- def __get__(self):
- return self.vocab.strings[self.c.style_variant]
-
- property tense_:
- def __get__(self):
- return self.vocab.strings[self.c.tense]
-
- property typo_:
- def __get__(self):
- return self.vocab.strings[self.c.typo]
-
- property verb_form_:
- def __get__(self):
- return self.vocab.strings[self.c.verb_form]
-
- property voice_:
- def __get__(self):
- return self.vocab.strings[self.c.voice]
-
- property verb_type_:
- def __get__(self):
- return self.vocab.strings[self.c.verb_type]
diff --git a/spacy/tokens/token.pyx b/spacy/tokens/token.pyx
index 8b15a4223..07c6f1c99 100644
--- a/spacy/tokens/token.pyx
+++ b/spacy/tokens/token.pyx
@@ -26,7 +26,6 @@ from .. import util
from ..compat import is_config
from ..errors import Errors, Warnings, user_warning, models_warning
from .underscore import Underscore, get_ext_args
-from .morphanalysis cimport MorphAnalysis
cdef class Token:
@@ -219,10 +218,6 @@ cdef class Token:
xp = get_array_module(vector)
return (xp.dot(vector, other.vector) / (self.vector_norm * other.vector_norm))
- @property
- def morph(self):
- return MorphAnalysis.from_id(self.vocab, self.c.morph)
-
@property
def lex_id(self):
"""RETURNS (int): Sequential ID of the token's lexical type."""
@@ -335,7 +330,7 @@ cdef class Token:
"""
def __get__(self):
if self.c.lemma == 0:
- lemma_ = self.vocab.morphology.lemmatizer.lookup(self.orth_, orth=self.orth)
+ lemma_ = self.vocab.morphology.lemmatizer.lookup(self.orth_)
return self.vocab.strings[lemma_]
else:
return self.c.lemma
@@ -754,8 +749,7 @@ cdef class Token:
def ent_iob_(self):
"""IOB code of named entity tag. "B" means the token begins an entity,
"I" means it is inside an entity, "O" means it is outside an entity,
- and "" means no entity tag is set. "B" with an empty ent_type
- means that the token is blocked from further processing by NER.
+ and "" means no entity tag is set.
RETURNS (unicode): IOB code of named entity tag.
"""
@@ -863,7 +857,7 @@ cdef class Token:
"""
def __get__(self):
if self.c.lemma == 0:
- return self.vocab.morphology.lemmatizer.lookup(self.orth_, orth=self.orth)
+ return self.vocab.morphology.lemmatizer.lookup(self.orth_)
else:
return self.vocab.strings[self.c.lemma]
diff --git a/spacy/util.py b/spacy/util.py
index dbe965392..e88d66452 100644
--- a/spacy/util.py
+++ b/spacy/util.py
@@ -136,7 +136,7 @@ def load_language_data(path):
def get_module_path(module):
if not hasattr(module, "__module__"):
- raise ValueError(Errors.E169.format(module=repr(module)))
+ raise ValueError("Can't find module {}".format(repr(module)))
return Path(sys.modules[module.__module__].__file__).parent
diff --git a/spacy/vectors.pyx b/spacy/vectors.pyx
index 3c238fe2d..2cb5b077f 100644
--- a/spacy/vectors.pyx
+++ b/spacy/vectors.pyx
@@ -63,7 +63,7 @@ cdef class Vectors:
shape (tuple): Size of the table, as (# entries, # columns)
data (numpy.ndarray): The vector data.
keys (iterable): A sequence of keys, aligned with the data.
- name (unicode): A name to identify the vectors table.
+ name (string): A name to identify the vectors table.
RETURNS (Vectors): The newly created object.
DOCS: https://spacy.io/api/vectors#init
diff --git a/spacy/vocab.pyx b/spacy/vocab.pyx
index 62c1791b9..7e360d409 100644
--- a/spacy/vocab.pyx
+++ b/spacy/vocab.pyx
@@ -18,10 +18,10 @@ from .structs cimport SerializedLexemeC
from .compat import copy_reg, basestring_
from .errors import Errors
from .lemmatizer import Lemmatizer
+from .lookups import Lookups
from .attrs import intify_attrs, NORM
from .vectors import Vectors
from ._ml import link_vectors_to_models
-from .lookups import Lookups
from . import util
@@ -33,8 +33,7 @@ cdef class Vocab:
DOCS: https://spacy.io/api/vocab
"""
def __init__(self, lex_attr_getters=None, tag_map=None, lemmatizer=None,
- strings=tuple(), lookups=None, oov_prob=-20., vectors_name=None,
- **deprecated_kwargs):
+ strings=tuple(), lookups=None, oov_prob=-20., **deprecated_kwargs):
"""Create the vocabulary.
lex_attr_getters (dict): A dictionary mapping attribute IDs to
@@ -45,7 +44,6 @@ cdef class Vocab:
strings (StringStore): StringStore that maps strings to integers, and
vice versa.
lookups (Lookups): Container for large lookup tables and dictionaries.
- name (unicode): Optional name to identify the vectors table.
RETURNS (Vocab): The newly constructed object.
"""
lex_attr_getters = lex_attr_getters if lex_attr_getters is not None else {}
@@ -64,7 +62,7 @@ cdef class Vocab:
_ = self[string]
self.lex_attr_getters = lex_attr_getters
self.morphology = Morphology(self.strings, tag_map, lemmatizer)
- self.vectors = Vectors(name=vectors_name)
+ self.vectors = Vectors()
self.lookups = lookups
@property
@@ -320,7 +318,7 @@ cdef class Vocab:
keys = xp.asarray([key for (prob, i, key) in priority], dtype="uint64")
keep = xp.ascontiguousarray(self.vectors.data[indices[:nr_row]])
toss = xp.ascontiguousarray(self.vectors.data[indices[nr_row:]])
- self.vectors = Vectors(data=keep, keys=keys, name=self.vectors.name)
+ self.vectors = Vectors(data=keep, keys=keys)
syn_keys, syn_rows, scores = self.vectors.most_similar(toss, batch_size=batch_size)
remap = {}
for i, key in enumerate(keys[nr_row:]):
diff --git a/website/README.md b/website/README.md
index a02d5a151..be817225d 100644
--- a/website/README.md
+++ b/website/README.md
@@ -309,7 +309,7 @@ indented block as plain text and preserve whitespace.
### Using spaCy
import spacy
nlp = spacy.load("en_core_web_sm")
-doc = nlp("This is a sentence.")
+doc = nlp(u"This is a sentence.")
for token in doc:
print(token.text, token.pos_)
```
@@ -335,9 +335,9 @@ from spacy.matcher import Matcher
nlp = spacy.load('en_core_web_sm')
matcher = Matcher(nlp.vocab)
-pattern = [{"LOWER": "hello"}, {"IS_PUNCT": True}, {"LOWER": "world"}]
-matcher.add("HelloWorld", None, pattern)
-doc = nlp("Hello, world! Hello world!")
+pattern = [{'LOWER': 'hello'}, {'IS_PUNCT': True}, {'LOWER': 'world'}]
+matcher.add('HelloWorld', None, pattern)
+doc = nlp(u'Hello, world! Hello world!')
matches = matcher(doc)
```
@@ -360,7 +360,7 @@ interactive widget defaults to a regular code block.
### {executable="true"}
import spacy
nlp = spacy.load("en_core_web_sm")
-doc = nlp("This is a sentence.")
+doc = nlp(u"This is a sentence.")
for token in doc:
print(token.text, token.pos_)
```
@@ -457,8 +457,7 @@ sit amet dignissim justo congue.
## Setup and installation {#setup}
Before running the setup, make sure your versions of
-[Node](https://nodejs.org/en/) and [npm](https://www.npmjs.com/) are up to date.
-Node v10.15 or later is required.
+[Node](https://nodejs.org/en/) and [npm](https://www.npmjs.com/) are up to date. Node v10.15 or later is required.
```bash
# Clone the repository
diff --git a/website/docs/api/annotation.md b/website/docs/api/annotation.md
index fb8b67c1e..7f7b46260 100644
--- a/website/docs/api/annotation.md
+++ b/website/docs/api/annotation.md
@@ -16,7 +16,7 @@ menu:
> ```python
> from spacy.lang.en import English
> nlp = English()
-> tokens = nlp("Some\\nspaces and\\ttab characters")
+> tokens = nlp(u"Some\\nspaces and\\ttab characters")
> tokens_text = [t.text for t in tokens]
> assert tokens_text == ["Some", "\\n", "spaces", " ", "and", "\\t", "tab", "characters"]
> ```
@@ -80,8 +80,8 @@ training corpus and can be defined in the respective language data's
-spaCy maps all language-specific part-of-speech tags to a small, fixed set of
-word type tags following the
+spaCy also maps all language-specific part-of-speech tags to a small, fixed set
+of word type tags following the
[Universal Dependencies scheme](http://universaldependencies.org/u/pos/). The
universal tags don't code for any morphological features and only cover the word
type. They're available as the [`Token.pos`](/api/token#attributes) and
@@ -552,10 +552,6 @@ spaCy's JSON format, you can use the
"last": int, # index of last token
"label": string # phrase label
}]
- }],
- "cats": [{ # new in v2.2: categories for text classifier
- "label": string, # text category label
- "value": float / bool # label applies (1.0/true) or not (0.0/false)
}]
}]
}]
diff --git a/website/docs/api/cli.md b/website/docs/api/cli.md
index 7b20b76de..c5e77dc0d 100644
--- a/website/docs/api/cli.md
+++ b/website/docs/api/cli.md
@@ -8,7 +8,6 @@ menu:
- ['Info', 'info']
- ['Validate', 'validate']
- ['Convert', 'convert']
- - ['Debug data', 'debug-data']
- ['Train', 'train']
- ['Pretrain', 'pretrain']
- ['Init Model', 'init-model']
@@ -23,11 +22,11 @@ type `spacy --help`.
## Download {#download}
Download [models](/usage/models) for spaCy. The downloader finds the
-best-matching compatible version, uses `pip install` to download the model as a
-package and creates a [shortcut link](/usage/models#usage) if the model was
-downloaded via a shortcut. Direct downloads don't perform any compatibility
-checks and require the model name to be specified with its version (e.g.
-`en_core_web_sm-2.2.0`).
+best-matching compatible version, uses pip to download the model as a package
+and automatically creates a [shortcut link](/usage/models#usage) to load the
+model by name. Direct downloads don't perform any compatibility checks and
+require the model name to be specified with its version (e.g.
+`en_core_web_sm-2.0.0`).
> #### Downloading best practices
>
@@ -40,16 +39,16 @@ checks and require the model name to be specified with its version (e.g.
> also allow you to add it as a versioned package dependency to your project.
```bash
-$ python -m spacy download [model] [--direct] [pip args]
+$ python -m spacy download [model] [--direct]
```
-| Argument | Type | Description |
-| ------------------------------------- | ------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `model` | positional | Model name or shortcut (`en`, `de`, `en_core_web_sm`). |
-| `--direct`, `-d` | flag | Force direct download of exact model version. |
-| pip args 2.1 | - | Additional installation options to be passed to `pip install` when installing the model package. For example, `--user` to install to the user home directory or `--no-deps` to not install model dependencies. |
-| `--help`, `-h` | flag | Show help message and available arguments. |
-| **CREATES** | directory, symlink | The installed model package in your `site-packages` directory and a shortcut link as a symlink in `spacy/data` if installed via shortcut. |
+| Argument | Type | Description |
+| ---------------------------------- | ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `model` | positional | Model name or shortcut (`en`, `de`, `en_core_web_sm`). |
+| `--direct`, `-d` | flag | Force direct download of exact model version. |
+| other 2.1 | - | Additional installation options to be passed to `pip install` when installing the model package. For example, `--user` to install to the user home directory. |
+| `--help`, `-h` | flag | Show help message and available arguments. |
+| **CREATES** | directory, symlink | The installed model package in your `site-packages` directory and a shortcut link as a symlink in `spacy/data`. |
## Link {#link}
@@ -182,166 +181,6 @@ All output files generated by this command are compatible with
| `ner` | NER with IOB/IOB2 tags, one token per line with columns separated by whitespace. The first column is the token and the final column is the IOB tag. Sentences are separated by blank lines and documents are separated by the line `-DOCSTART- -X- O O`. Supports CoNLL 2003 NER format. See [sample data](https://github.com/explosion/spaCy/tree/master/examples/training/ner_example_data). |
| `iob` | NER with IOB/IOB2 tags, one sentence per line with tokens separated by whitespace and annotation separated by `|`, either `word|B-ENT` or `word|POS|B-ENT`. See [sample data](https://github.com/explosion/spaCy/tree/master/examples/training/ner_example_data). |
-## Debug data {#debug-data new="2.2"}
-
-Analyze, debug and validate your training and development data, get useful
-stats, and find problems like invalid entity annotations, cyclic dependencies,
-low data labels and more.
-
-```bash
-$ python -m spacy debug-data [lang] [train_path] [dev_path] [--base-model] [--pipeline] [--ignore-warnings] [--verbose] [--no-format]
-```
-
-| Argument | Type | Description |
-| -------------------------- | ---------- | -------------------------------------------------------------------------------------------------- |
-| `lang` | positional | Model language. |
-| `train_path` | positional | Location of JSON-formatted training data. Can be a file or a directory of files. |
-| `dev_path` | positional | Location of JSON-formatted development data for evaluation. Can be a file or a directory of files. |
-| `--base-model`, `-b` | option | Optional name of base model to update. Can be any loadable spaCy model. |
-| `--pipeline`, `-p` | option | Comma-separated names of pipeline components to train. Defaults to `'tagger,parser,ner'`. |
-| `--ignore-warnings`, `-IW` | flag | Ignore warnings, only show stats and errors. |
-| `--verbose`, `-V` | flag | Print additional information and explanations. |
-| `--no-format`, `-NF`     | flag       | Don't pretty-print the results. Use this if you want to write to a file.                            |
-
-
-
-```
-=========================== Data format validation ===========================
-✔ Corpus is loadable
-
-=============================== Training stats ===============================
-Training pipeline: tagger, parser, ner
-Starting with blank model 'en'
-18127 training docs
-2939 evaluation docs
-⚠ 34 training examples also in evaluation data
-
-============================== Vocab & Vectors ==============================
-ℹ 2083156 total words in the data (56962 unique)
-⚠ 13020 misaligned tokens in the training data
-⚠ 2423 misaligned tokens in the dev data
-10 most common words: 'the' (98429), ',' (91756), '.' (87073), 'to' (50058),
-'of' (49559), 'and' (44416), 'a' (34010), 'in' (31424), 'that' (22792), 'is'
-(18952)
-ℹ No word vectors present in the model
-
-========================== Named Entity Recognition ==========================
-ℹ 18 new labels, 0 existing labels
-528978 missing values (tokens with '-' label)
-New: 'ORG' (23860), 'PERSON' (21395), 'GPE' (21193), 'DATE' (18080), 'CARDINAL'
-(10490), 'NORP' (9033), 'MONEY' (5164), 'PERCENT' (3761), 'ORDINAL' (2122),
-'LOC' (2113), 'TIME' (1616), 'WORK_OF_ART' (1229), 'QUANTITY' (1150), 'FAC'
-(1134), 'EVENT' (974), 'PRODUCT' (935), 'LAW' (444), 'LANGUAGE' (338)
-✔ Good amount of examples for all labels
-✔ Examples without occurences available for all labels
-✔ No entities consisting of or starting/ending with whitespace
-
-=========================== Part-of-speech Tagging ===========================
-ℹ 49 labels in data (57 labels in tag map)
-'NN' (266331), 'IN' (227365), 'DT' (185600), 'NNP' (164404), 'JJ' (119830),
-'NNS' (110957), '.' (101482), ',' (92476), 'RB' (90090), 'PRP' (90081), 'VB'
-(74538), 'VBD' (68199), 'CC' (62862), 'VBZ' (50712), 'VBP' (43420), 'VBN'
-(42193), 'CD' (40326), 'VBG' (34764), 'TO' (31085), 'MD' (25863), 'PRP$'
-(23335), 'HYPH' (13833), 'POS' (13427), 'UH' (13322), 'WP' (10423), 'WDT'
-(9850), 'RP' (8230), 'WRB' (8201), ':' (8168), '''' (7392), '``' (6984), 'NNPS'
-(5817), 'JJR' (5689), '$' (3710), 'EX' (3465), 'JJS' (3118), 'RBR' (2872),
-'-RRB-' (2825), '-LRB-' (2788), 'PDT' (2078), 'XX' (1316), 'RBS' (1142), 'FW'
-(794), 'NFP' (557), 'SYM' (440), 'WP$' (294), 'LS' (293), 'ADD' (191), 'AFX'
-(24)
-✔ All labels present in tag map for language 'en'
-
-============================= Dependency Parsing =============================
-ℹ Found 111703 sentences with an average length of 18.6 words.
-ℹ Found 2251 nonprojective train sentences
-ℹ Found 303 nonprojective dev sentences
-ℹ 47 labels in train data
-ℹ 211 labels in projectivized train data
-'punct' (236796), 'prep' (188853), 'pobj' (182533), 'det' (172674), 'nsubj'
-(169481), 'compound' (116142), 'ROOT' (111697), 'amod' (107945), 'dobj' (93540),
-'aux' (86802), 'advmod' (86197), 'cc' (62679), 'conj' (59575), 'poss' (36449),
-'ccomp' (36343), 'advcl' (29017), 'mark' (27990), 'nummod' (24582), 'relcl'
-(21359), 'xcomp' (21081), 'attr' (18347), 'npadvmod' (17740), 'acomp' (17204),
-'auxpass' (15639), 'appos' (15368), 'neg' (15266), 'nsubjpass' (13922), 'case'
-(13408), 'acl' (12574), 'pcomp' (10340), 'nmod' (9736), 'intj' (9285), 'prt'
-(8196), 'quantmod' (7403), 'dep' (4300), 'dative' (4091), 'agent' (3908), 'expl'
-(3456), 'parataxis' (3099), 'oprd' (2326), 'predet' (1946), 'csubj' (1494),
-'subtok' (1147), 'preconj' (692), 'meta' (469), 'csubjpass' (64), 'iobj' (1)
-⚠ Low number of examples for label 'iobj' (1)
-⚠ Low number of examples for 130 labels in the projectivized dependency
-trees used for training. You may want to projectivize labels such as punct
-before training in order to improve parser performance.
-⚠ Projectivized labels with low numbers of examples: appos||attr: 12
-advmod||dobj: 13 prep||ccomp: 12 nsubjpass||ccomp: 15 pcomp||prep: 14
-amod||dobj: 9 attr||xcomp: 14 nmod||nsubj: 17 prep||advcl: 2 prep||prep: 5
-nsubj||conj: 12 advcl||advmod: 18 ccomp||advmod: 11 ccomp||pcomp: 5 acl||pobj:
-10 npadvmod||acomp: 7 dobj||pcomp: 14 nsubjpass||pcomp: 1 nmod||pobj: 8
-amod||attr: 6 nmod||dobj: 12 aux||conj: 1 neg||conj: 1 dative||xcomp: 11
-pobj||dative: 3 xcomp||acomp: 19 advcl||pobj: 2 nsubj||advcl: 2 csubj||ccomp: 1
-advcl||acl: 1 relcl||nmod: 2 dobj||advcl: 10 advmod||advcl: 3 nmod||nsubjpass: 6
-amod||pobj: 5 cc||neg: 1 attr||ccomp: 16 advcl||xcomp: 3 nmod||attr: 4
-advcl||nsubjpass: 5 advcl||ccomp: 4 ccomp||conj: 1 punct||acl: 1 meta||acl: 1
-parataxis||acl: 1 prep||acl: 1 amod||nsubj: 7 ccomp||ccomp: 3 acomp||xcomp: 5
-dobj||acl: 5 prep||oprd: 6 advmod||acl: 2 dative||advcl: 1 pobj||agent: 5
-xcomp||amod: 1 dep||advcl: 1 prep||amod: 8 relcl||compound: 1 advcl||csubj: 3
-npadvmod||conj: 2 npadvmod||xcomp: 4 advmod||nsubj: 3 ccomp||amod: 7
-advcl||conj: 1 nmod||conj: 2 advmod||nsubjpass: 2 dep||xcomp: 2 appos||ccomp: 1
-advmod||dep: 1 advmod||advmod: 5 aux||xcomp: 8 dep||advmod: 1 dative||ccomp: 2
-prep||dep: 1 conj||conj: 1 dep||ccomp: 4 cc||ROOT: 1 prep||ROOT: 1 nsubj||pcomp:
-3 advmod||prep: 2 relcl||dative: 1 acl||conj: 1 advcl||attr: 4 prep||npadvmod: 1
-nsubjpass||xcomp: 1 neg||advmod: 1 xcomp||oprd: 1 advcl||advcl: 1 dobj||dep: 3
-nsubjpass||parataxis: 1 attr||pcomp: 1 ccomp||parataxis: 1 advmod||attr: 1
-nmod||oprd: 1 appos||nmod: 2 advmod||relcl: 1 appos||npadvmod: 1 appos||conj: 1
-prep||expl: 1 nsubjpass||conj: 1 punct||pobj: 1 cc||pobj: 1 conj||pobj: 1
-punct||conj: 1 ccomp||dep: 1 oprd||xcomp: 3 ccomp||xcomp: 1 ccomp||nsubj: 1
-nmod||dep: 1 xcomp||ccomp: 1 acomp||advcl: 1 intj||advmod: 1 advmod||acomp: 2
-relcl||oprd: 1 advmod||prt: 1 advmod||pobj: 1 appos||nummod: 1 relcl||npadvmod:
-3 mark||advcl: 1 aux||ccomp: 1 amod||nsubjpass: 1 npadvmod||advmod: 1 conj||dep:
-1 nummod||pobj: 1 amod||npadvmod: 1 intj||pobj: 1 nummod||npadvmod: 1
-xcomp||xcomp: 1 aux||dep: 1 advcl||relcl: 1
-⚠ The following labels were found only in the train data: xcomp||amod,
-advcl||relcl, prep||nsubjpass, acl||nsubj, nsubjpass||conj, xcomp||oprd,
-advmod||conj, advmod||advmod, iobj, advmod||nsubjpass, dobj||conj, ccomp||amod,
-meta||acl, xcomp||xcomp, prep||attr, prep||ccomp, advcl||acomp, acl||dobj,
-advcl||advcl, pobj||agent, prep||advcl, nsubjpass||xcomp, prep||dep,
-acomp||xcomp, aux||ccomp, ccomp||dep, conj||dep, relcl||compound,
-nsubjpass||ccomp, nmod||dobj, advmod||advcl, advmod||acl, dobj||advcl,
-dative||xcomp, prep||nsubj, ccomp||ccomp, nsubj||ccomp, xcomp||acomp,
-prep||acomp, dep||advmod, acl||pobj, appos||dobj, npadvmod||acomp, cc||ROOT,
-relcl||nsubj, nmod||pobj, acl||nsubjpass, ccomp||advmod, pcomp||prep,
-amod||dobj, advmod||attr, advcl||csubj, appos||attr, dobj||pcomp, prep||ROOT,
-relcl||pobj, advmod||pobj, amod||nsubj, ccomp||xcomp, prep||oprd,
-npadvmod||advmod, appos||nummod, advcl||pobj, neg||advmod, acl||attr,
-appos||nsubjpass, csubj||ccomp, amod||nsubjpass, intj||pobj, dep||advcl,
-cc||neg, xcomp||ccomp, dative||ccomp, nmod||oprd, pobj||dative, prep||dobj,
-dep||ccomp, relcl||attr, ccomp||nsubj, advcl||xcomp, nmod||dep, advcl||advmod,
-ccomp||conj, pobj||prep, advmod||acomp, advmod||relcl, attr||pcomp,
-ccomp||parataxis, oprd||xcomp, intj||advmod, nmod||nsubjpass, prep||npadvmod,
-parataxis||acl, prep||pobj, advcl||dobj, amod||pobj, prep||acl, conj||pobj,
-advmod||dep, punct||pobj, ccomp||acomp, acomp||advcl, nummod||npadvmod,
-dobj||dep, npadvmod||xcomp, advcl||conj, relcl||npadvmod, punct||acl,
-relcl||dobj, dobj||xcomp, nsubjpass||parataxis, dative||advcl, relcl||nmod,
-advcl||ccomp, appos||npadvmod, ccomp||pcomp, prep||amod, mark||advcl,
-prep||advmod, prep||xcomp, appos||nsubj, attr||ccomp, advmod||prt, dobj||ccomp,
-aux||conj, advcl||nsubj, conj||conj, advmod||ccomp, advcl||nsubjpass,
-attr||xcomp, nmod||conj, npadvmod||conj, relcl||dative, prep||expl,
-nsubjpass||pcomp, advmod||xcomp, advmod||dobj, appos||pobj, nsubj||conj,
-relcl||nsubjpass, advcl||attr, appos||ccomp, advmod||prep, prep||conj,
-nmod||attr, punct||conj, neg||conj, dep||xcomp, aux||xcomp, dobj||acl,
-nummod||pobj, amod||npadvmod, nsubj||pcomp, advcl||acl, appos||nmod,
-relcl||oprd, prep||prep, cc||pobj, nmod||nsubj, amod||attr, aux||dep,
-appos||conj, advmod||nsubj, nsubj||advcl, acl||conj
-To train a parser, your data should include at least 20 instances of each label.
-⚠ Multiple root labels (ROOT, nsubj, aux, npadvmod, prep) found in
-training data. spaCy's parser uses a single root label ROOT so this distinction
-will not be available.
-
-================================== Summary ==================================
-✔ 5 checks passed
-⚠ 8 warnings
-```
-
-
-
## Train {#train}
Train a model. Expects data in spaCy's
@@ -361,41 +200,36 @@ will only train the tagger and parser.
```bash
$ python -m spacy train [lang] [output_path] [train_path] [dev_path]
-[--base-model] [--pipeline] [--vectors] [--n-iter] [--n-early-stopping]
-[--n-examples] [--use-gpu] [--version] [--meta-path] [--init-tok2vec]
-[--parser-multitasks] [--entity-multitasks] [--gold-preproc] [--noise-level]
-[--orth-variant-level] [--learn-tokens] [--textcat-arch] [--textcat-multilabel]
-[--textcat-positive-label] [--verbose]
+[--base-model] [--pipeline] [--vectors] [--n-iter] [--n-early-stopping] [--n-examples] [--use-gpu]
+[--version] [--meta-path] [--init-tok2vec] [--parser-multitasks]
+[--entity-multitasks] [--gold-preproc] [--noise-level] [--learn-tokens]
+[--verbose]
```
-| Argument | Type | Description |
-| --------------------------------------------------------------- | ------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `lang` | positional | Model language. |
-| `output_path` | positional | Directory to store model in. Will be created if it doesn't exist. |
-| `train_path` | positional | Location of JSON-formatted training data. Can be a file or a directory of files. |
-| `dev_path` | positional | Location of JSON-formatted development data for evaluation. Can be a file or a directory of files. |
-| `--base-model`, `-b` 2.1 | option | Optional name of base model to update. Can be any loadable spaCy model. |
-| `--pipeline`, `-p` 2.1 | option | Comma-separated names of pipeline components to train. Defaults to `'tagger,parser,ner'`. |
-| `--vectors`, `-v` | option | Model to load vectors from. |
-| `--n-iter`, `-n` | option | Number of iterations (default: `30`). |
-| `--n-early-stopping`, `-ne` | option | Maximum number of training epochs without dev accuracy improvement. |
-| `--n-examples`, `-ns` | option | Number of examples to use (defaults to `0` for all examples). |
-| `--use-gpu`, `-g` | option | Whether to use GPU. Can be either `0`, `1` or `-1`. |
-| `--version`, `-V` | option | Model version. Will be written out to the model's `meta.json` after training. |
-| `--meta-path`, `-m` 2 | option | Optional path to model [`meta.json`](/usage/training#models-generating). All relevant properties like `lang`, `pipeline` and `spacy_version` will be overwritten. |
-| `--init-tok2vec`, `-t2v` 2.1 | option | Path to pretrained weights for the token-to-vector parts of the models. See `spacy pretrain`. Experimental. |
-| `--parser-multitasks`, `-pt` | option | Side objectives for parser CNN, e.g. `'dep'` or `'dep,tag'` |
-| `--entity-multitasks`, `-et` | option | Side objectives for NER CNN, e.g. `'dep'` or `'dep,tag'` |
-| `--noise-level`, `-nl` | option | Float indicating the amount of corruption for data augmentation. |
-| `--orth-variant-level`, `-ovl` 2.2 | option | Float indicating the orthography variation for data augmentation (e.g. `0.3` for making 30% of occurrences of some tokens subject to replacement). |
-| `--gold-preproc`, `-G` | flag | Use gold preprocessing. |
-| `--learn-tokens`, `-T`                                         | flag          | Make parser learn gold-standard tokenization by merging subtokens. Typically used for languages like Chinese.                                                       |
-| `--textcat-multilabel`, `-TML` 2.2 | flag | Text classification classes aren't mutually exclusive (multilabel). |
-| `--textcat-arch`, `-ta` 2.2 | option | Text classification model architecture. Defaults to `"bow"`. |
-| `--textcat-positive-label`, `-tpl` 2.2 | option | Text classification positive label for binary classes with two labels. |
-| `--verbose`, `-VV` 2.0.13 | flag | Show more detailed messages during training. |
-| `--help`, `-h` | flag | Show help message and available arguments. |
-| **CREATES** | model, pickle | A spaCy model on each epoch. |
+| Argument | Type | Description |
+| ----------------------------------------------------- | ------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `lang` | positional | Model language. |
+| `output_path` | positional | Directory to store model in. Will be created if it doesn't exist. |
+| `train_path` | positional | Location of JSON-formatted training data. Can be a file or a directory of files. |
+| `dev_path` | positional | Location of JSON-formatted development data for evaluation. Can be a file or a directory of files. |
+| `--base-model`, `-b` 2.1 | option | Optional name of base model to update. Can be any loadable spaCy model. |
+| `--pipeline`, `-p` 2.1 | option | Comma-separated names of pipeline components to train. Defaults to `'tagger,parser,ner'`. |
+| `--vectors`, `-v` | option | Model to load vectors from. |
+| `--n-iter`, `-n` | option | Number of iterations (default: `30`). |
+| `--n-early-stopping`, `-ne` | option | Maximum number of training epochs without dev accuracy improvement. |
+| `--n-examples`, `-ns` | option | Number of examples to use (defaults to `0` for all examples). |
+| `--use-gpu`, `-g` | option | Whether to use GPU. Can be either `0`, `1` or `-1`. |
+| `--version`, `-V` | option | Model version. Will be written out to the model's `meta.json` after training. |
+| `--meta-path`, `-m` 2 | option | Optional path to model [`meta.json`](/usage/training#models-generating). All relevant properties like `lang`, `pipeline` and `spacy_version` will be overwritten. |
+| `--init-tok2vec`, `-t2v` 2.1 | option | Path to pretrained weights for the token-to-vector parts of the models. See `spacy pretrain`. Experimental. |
+| `--parser-multitasks`, `-pt` | option | Side objectives for parser CNN, e.g. `'dep'` or `'dep,tag'` |
+| `--entity-multitasks`, `-et` | option | Side objectives for NER CNN, e.g. `'dep'` or `'dep,tag'` |
+| `--noise-level`, `-nl` | option | Float indicating the amount of corruption for data augmentation. |
+| `--gold-preproc`, `-G` | flag | Use gold preprocessing. |
+| `--learn-tokens`, `-T`                               | flag          | Make parser learn gold-standard tokenization by merging subtokens. Typically used for languages like Chinese.                                                       |
+| `--verbose`, `-VV` 2.0.13 | flag | Show more detailed messages during training. |
+| `--help`, `-h` | flag | Show help message and available arguments. |
+| **CREATES** | model, pickle | A spaCy model on each epoch. |
### Environment variables for hyperparameters {#train-hyperparams new="2"}
@@ -540,7 +374,6 @@ $ python -m spacy init-model [lang] [output_dir] [--jsonl-loc] [--vectors-loc]
| `--jsonl-loc`, `-j` | option | Optional location of JSONL-formatted [vocabulary file](/api/annotation#vocab-jsonl) with lexical attributes. |
| `--vectors-loc`, `-v` | option | Optional location of vectors. Should be a file where the first row contains the dimensions of the vectors, followed by a space-separated Word2Vec table. File can be provided in `.txt` format or as a zipped text file in `.zip` or `.tar.gz` format. |
+| `--prune-vectors`, `-V` | option | Number of vectors to prune the vocabulary to. Defaults to `-1` for no pruning. |
-| `--vectors-name`, `-vn` | option | Name to assign to the word vectors in the `meta.json`, e.g. `en_core_web_md.vectors`. |
| **CREATES** | model | A spaCy model containing the vocab and vectors. |
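The vectors file layout that `--vectors-loc` expects (a first row with the table dimensions, followed by a space-separated Word2Vec table) can be sketched in plain Python. This is only an illustration of the file format; the function name and sample values are hypothetical, not spaCy code:

```python
# Toy parser for the .txt vectors layout described above: the first row
# holds "<n_vectors> <n_dims>", and each following row is
# "<word> <v1> <v2> ...". Illustrative only, not spaCy's loader.

def parse_word2vec_text(lines):
    n_vectors, n_dims = (int(x) for x in lines[0].split())
    table = {}
    for line in lines[1:]:
        parts = line.rstrip().split(" ")
        word, values = parts[0], [float(v) for v in parts[1:]]
        assert len(values) == n_dims, "row width must match the header dims"
        table[word] = values
    assert len(table) == n_vectors, "row count must match the header"
    return n_dims, table

dims, vecs = parse_word2vec_text(["2 3", "apple 0.1 0.2 0.3", "orange 0.4 0.5 0.6"])
print(dims, vecs["apple"])  # 3 [0.1, 0.2, 0.3]
```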
## Evaluate {#evaluate new="2"}
diff --git a/website/docs/api/cython-classes.md b/website/docs/api/cython-classes.md
index 77d6fdd10..4d188d90f 100644
--- a/website/docs/api/cython-classes.md
+++ b/website/docs/api/cython-classes.md
@@ -45,9 +45,9 @@ Append a token to the `Doc`. The token can be provided as a
> from spacy.vocab cimport Vocab
>
> doc = Doc(Vocab())
-> lexeme = doc.vocab.get("hello")
+> lexeme = doc.vocab.get(u'hello')
> doc.push_back(lexeme, True)
-> assert doc.text == "hello "
+> assert doc.text == u'hello '
> ```
| Name | Type | Description |
@@ -164,7 +164,7 @@ vocabulary.
> #### Example
>
> ```python
-> lexeme = vocab.get(vocab.mem, "hello")
+> lexeme = vocab.get(vocab.mem, u'hello')
> ```
| Name | Type | Description |
diff --git a/website/docs/api/cython-structs.md b/website/docs/api/cython-structs.md
index 935bce25d..0e427a8d5 100644
--- a/website/docs/api/cython-structs.md
+++ b/website/docs/api/cython-structs.md
@@ -88,7 +88,7 @@ Find a token in a `TokenC*` array by the offset of its first character.
> from spacy.tokens.doc cimport Doc, token_by_start
> from spacy.vocab cimport Vocab
>
-> doc = Doc(Vocab(), words=["hello", "world"])
+> doc = Doc(Vocab(), words=[u'hello', u'world'])
> assert token_by_start(doc.c, doc.length, 6) == 1
> assert token_by_start(doc.c, doc.length, 4) == -1
> ```
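The offset lookup in the example above can be mimicked in pure Python without Cython. This is an illustrative sketch of the idea, not spaCy's implementation:

```python
# Toy version of token_by_start: given each token's text and whether it is
# followed by a space, return the index of the token whose first character
# sits at `start_char`, or -1 if no token starts there.

def toy_token_by_start(words, spaces, start_char):
    offset = 0
    for i, (word, space) in enumerate(zip(words, spaces)):
        if offset == start_char:
            return i
        offset += len(word) + (1 if space else 0)
    return -1

words, spaces = ["hello", "world"], [True, False]
print(toy_token_by_start(words, spaces, 6))   # 1 ("world" starts at offset 6)
print(toy_token_by_start(words, spaces, 4))   # -1 (offset 4 is mid-token)
```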
@@ -110,7 +110,7 @@ Find a token in a `TokenC*` array by the offset of its final character.
> from spacy.tokens.doc cimport Doc, token_by_end
> from spacy.vocab cimport Vocab
>
-> doc = Doc(Vocab(), words=["hello", "world"])
+> doc = Doc(Vocab(), words=[u'hello', u'world'])
> assert token_by_end(doc.c, doc.length, 5) == 0
> assert token_by_end(doc.c, doc.length, 1) == -1
> ```
@@ -134,7 +134,7 @@ attribute, in order to make the parse tree navigation consistent.
> from spacy.tokens.doc cimport Doc, set_children_from_heads
> from spacy.vocab cimport Vocab
>
-> doc = Doc(Vocab(), words=["Baileys", "from", "a", "shoe"])
+> doc = Doc(Vocab(), words=[u'Baileys', u'from', u'a', u'shoe'])
> doc.c[0].head = 0
> doc.c[1].head = 0
> doc.c[2].head = 3
diff --git a/website/docs/api/dependencyparser.md b/website/docs/api/dependencyparser.md
index df0df3e38..58acc4425 100644
--- a/website/docs/api/dependencyparser.md
+++ b/website/docs/api/dependencyparser.md
@@ -58,7 +58,7 @@ and all pipeline components are applied to the `Doc` in order. Both
>
> ```python
> parser = DependencyParser(nlp.vocab)
-> doc = nlp("This is a sentence.")
+> doc = nlp(u"This is a sentence.")
> # This usually happens under the hood
> processed = parser(doc)
> ```
diff --git a/website/docs/api/doc.md b/website/docs/api/doc.md
index ad684f51e..431d3a092 100644
--- a/website/docs/api/doc.md
+++ b/website/docs/api/doc.md
@@ -20,11 +20,11 @@ Construct a `Doc` object. The most common way to get a `Doc` object is via the
>
> ```python
> # Construction 1
-> doc = nlp("Some text")
+> doc = nlp(u"Some text")
>
> # Construction 2
> from spacy.tokens import Doc
-> words = ["hello", "world", "!"]
+> words = [u"hello", u"world", u"!"]
> spaces = [True, False, False]
> doc = Doc(nlp.vocab, words=words, spaces=spaces)
> ```
@@ -45,7 +45,7 @@ Negative indexing is supported, and follows the usual Python semantics, i.e.
> #### Example
>
> ```python
-> doc = nlp("Give it back! He pleaded.")
+> doc = nlp(u"Give it back! He pleaded.")
> assert doc[0].text == "Give"
> assert doc[-1].text == "."
> span = doc[1:3]
@@ -76,8 +76,8 @@ Iterate over `Token` objects, from which the annotations can be easily accessed.
> #### Example
>
> ```python
-> doc = nlp("Give it back")
-> assert [t.text for t in doc] == ["Give", "it", "back"]
+> doc = nlp(u'Give it back')
+> assert [t.text for t in doc] == [u'Give', u'it', u'back']
> ```
This is the main way of accessing [`Token`](/api/token) objects, which are the
@@ -96,7 +96,7 @@ Get the number of tokens in the document.
> #### Example
>
> ```python
-> doc = nlp("Give it back! He pleaded.")
+> doc = nlp(u"Give it back! He pleaded.")
> assert len(doc) == 7
> ```
@@ -114,9 +114,9 @@ details, see the documentation on
>
> ```python
> from spacy.tokens import Doc
-> city_getter = lambda doc: any(city in doc.text for city in ("New York", "Paris", "Berlin"))
-> Doc.set_extension("has_city", getter=city_getter)
-> doc = nlp("I like New York")
+> city_getter = lambda doc: any(city in doc.text for city in ('New York', 'Paris', 'Berlin'))
+> Doc.set_extension('has_city', getter=city_getter)
+> doc = nlp(u'I like New York')
> assert doc._.has_city
> ```
@@ -192,8 +192,8 @@ the character indices don't map to a valid span.
> #### Example
>
> ```python
-> doc = nlp("I like New York")
-> span = doc.char_span(7, 15, label="GPE")
+> doc = nlp(u"I like New York")
+> span = doc.char_span(7, 15, label=u"GPE")
> assert span.text == "New York"
> ```
@@ -213,8 +213,8 @@ using an average of word vectors.
> #### Example
>
> ```python
-> apples = nlp("I like apples")
-> oranges = nlp("I like oranges")
+> apples = nlp(u"I like apples")
+> oranges = nlp(u"I like oranges")
> apples_oranges = apples.similarity(oranges)
> oranges_apples = oranges.similarity(apples)
> assert apples_oranges == oranges_apples
@@ -235,7 +235,7 @@ attribute ID.
>
> ```python
> from spacy.attrs import ORTH
-> doc = nlp("apple apple orange banana")
+> doc = nlp(u"apple apple orange banana")
> assert doc.count_by(ORTH) == {7024L: 1, 119552L: 1, 2087L: 2}
> doc.to_array([ORTH])
> # array([[11880], [11880], [7561], [12800]])
@@ -255,7 +255,7 @@ ancestor is found, e.g. if span excludes a necessary ancestor.
> #### Example
>
> ```python
-> doc = nlp("This is a test")
+> doc = nlp(u"This is a test")
> matrix = doc.get_lca_matrix()
> # array([[0, 1, 1, 1], [1, 1, 1, 1], [1, 1, 2, 3], [1, 1, 3, 3]], dtype=int32)
> ```
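The LCA matrix can be reconstructed from head indices alone. A minimal pure-Python sketch, assuming the heads for "This is a test" are `[1, 1, 3, 1]` with the root pointing to itself (not spaCy's implementation):

```python
# Toy lowest-common-ancestor matrix from a list of head indices.
# heads[i] is the index of token i's syntactic head; the root's head
# is itself. Assumes a well-formed (cycle-free) tree.

def ancestor_path(heads, i):
    # Token followed by its ancestors, up to and including the root.
    path = [i]
    while heads[path[-1]] != path[-1]:
        path.append(heads[path[-1]])
    return path

def lca_matrix(heads):
    n = len(heads)
    paths = [ancestor_path(heads, i) for i in range(n)]
    matrix = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # Deepest node that is an ancestor (or self) of both tokens.
            matrix[i][j] = next(a for a in paths[i] if a in paths[j])
    return matrix

print(lca_matrix([1, 1, 3, 1]))
# [[0, 1, 1, 1], [1, 1, 1, 1], [1, 1, 2, 3], [1, 1, 3, 3]]
```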
@@ -274,7 +274,7 @@ They'll be added to an `"_"` key in the data, e.g. `"_": {"foo": "bar"}`.
> #### Example
>
> ```python
-> doc = nlp("Hello")
+> doc = nlp(u"Hello")
> json_doc = doc.to_json()
> ```
>
@@ -342,7 +342,7 @@ array of attributes.
> ```python
> from spacy.attrs import LOWER, POS, ENT_TYPE, IS_ALPHA
> from spacy.tokens import Doc
-> doc = nlp("Hello world!")
+> doc = nlp(u"Hello world!")
> np_array = doc.to_array([LOWER, POS, ENT_TYPE, IS_ALPHA])
> doc2 = Doc(doc.vocab, words=[t.text for t in doc])
> doc2.from_array([LOWER, POS, ENT_TYPE, IS_ALPHA], np_array)
@@ -396,7 +396,7 @@ Serialize, i.e. export the document contents to a binary string.
> #### Example
>
> ```python
-> doc = nlp("Give it back! He pleaded.")
+> doc = nlp(u"Give it back! He pleaded.")
> doc_bytes = doc.to_bytes()
> ```
@@ -413,9 +413,10 @@ Deserialize, i.e. import the document contents from a binary string.
>
> ```python
> from spacy.tokens import Doc
-> doc = nlp("Give it back! He pleaded.")
-> doc_bytes = doc.to_bytes()
-> doc2 = Doc(doc.vocab).from_bytes(doc_bytes)
+> text = u"Give it back! He pleaded."
+> doc = nlp(text)
+> bytes_data = doc.to_bytes()
+> doc2 = Doc(doc.vocab).from_bytes(bytes_data)
> assert doc.text == doc2.text
> ```
@@ -456,9 +457,9 @@ dictionary mapping attribute names to values as the `"_"` key.
> #### Example
>
> ```python
-> doc = nlp("I like David Bowie")
+> doc = nlp(u"I like David Bowie")
> with doc.retokenize() as retokenizer:
-> attrs = {"LEMMA": "David Bowie"}
+> attrs = {"LEMMA": u"David Bowie"}
> retokenizer.merge(doc[2:4], attrs=attrs)
> ```
@@ -488,7 +489,7 @@ underlying lexeme (if they're context-independent lexical attributes like
> #### Example
>
> ```python
-> doc = nlp("I live in NewYork")
+> doc = nlp(u"I live in NewYork")
> with doc.retokenize() as retokenizer:
> heads = [(doc[3], 1), doc[2]]
> attrs = {"POS": ["PROPN", "PROPN"],
@@ -520,9 +521,9 @@ and end token boundaries, the document remains unchanged.
> #### Example
>
> ```python
-> doc = nlp("Los Angeles start.")
+> doc = nlp(u"Los Angeles start.")
> doc.merge(0, len("Los Angeles"), "NNP", "Los Angeles", "GPE")
-> assert [t.text for t in doc] == ["Los Angeles", "start", "."]
+> assert [t.text for t in doc] == [u"Los Angeles", u"start", u"."]
> ```
| Name | Type | Description |
@@ -540,11 +541,11 @@ objects, if the entity recognizer has been applied.
> #### Example
>
> ```python
-> doc = nlp("Mr. Best flew to New York on Saturday morning.")
+> doc = nlp(u"Mr. Best flew to New York on Saturday morning.")
> ents = list(doc.ents)
> assert ents[0].label == 346
-> assert ents[0].label_ == "PERSON"
-> assert ents[0].text == "Mr. Best"
+> assert ents[0].label_ == u"PERSON"
+> assert ents[0].text == u"Mr. Best"
> ```
| Name | Type | Description |
@@ -562,10 +563,10 @@ relative clauses.
> #### Example
>
> ```python
-> doc = nlp("A phrase with another phrase occurs.")
+> doc = nlp(u"A phrase with another phrase occurs.")
> chunks = list(doc.noun_chunks)
-> assert chunks[0].text == "A phrase"
-> assert chunks[1].text == "another phrase"
+> assert chunks[0].text == u"A phrase"
+> assert chunks[1].text == u"another phrase"
> ```
| Name | Type | Description |
@@ -582,10 +583,10 @@ will be unavailable.
> #### Example
>
> ```python
-> doc = nlp("This is a sentence. Here's another...")
+> doc = nlp(u"This is a sentence. Here's another...")
> sents = list(doc.sents)
> assert len(sents) == 2
-> assert [s.root.text for s in sents] == ["is", "'s"]
+> assert [s.root.text for s in sents] == [u"is", u"'s"]
> ```
| Name | Type | Description |
@@ -599,7 +600,7 @@ A boolean value indicating whether a word vector is associated with the object.
> #### Example
>
> ```python
-> doc = nlp("I like apples")
+> doc = nlp(u"I like apples")
> assert doc.has_vector
> ```
@@ -615,8 +616,8 @@ vectors.
> #### Example
>
> ```python
-> doc = nlp("I like apples")
-> assert doc.vector.dtype == "float32"
+> doc = nlp(u"I like apples")
+> assert doc.vector.dtype == 'float32'
> assert doc.vector.shape == (300,)
> ```
@@ -631,8 +632,8 @@ The L2 norm of the document's vector representation.
> #### Example
>
> ```python
-> doc1 = nlp("I like apples")
-> doc2 = nlp("I like oranges")
+> doc1 = nlp(u"I like apples")
+> doc2 = nlp(u"I like oranges")
> doc1.vector_norm # 4.54232424414368
> doc2.vector_norm # 3.304373298575751
> assert doc1.vector_norm != doc2.vector_norm
diff --git a/website/docs/api/docbin.md b/website/docs/api/docbin.md
deleted file mode 100644
index a4525906e..000000000
--- a/website/docs/api/docbin.md
+++ /dev/null
@@ -1,149 +0,0 @@
----
-title: DocBin
-tag: class
-new: 2.2
-teaser: Pack Doc objects for binary serialization
-source: spacy/tokens/_serialize.py
----
-
-The `DocBin` class lets you efficiently serialize the information from a
-collection of `Doc` objects. You can control which information is serialized by
-passing a list of attribute IDs, and optionally also specify whether the user
-data is serialized. The `DocBin` is faster and produces smaller data sizes than
-pickle, and allows you to deserialize without executing arbitrary Python code. A
-notable downside to this format is that you can't easily extract just one
-document from the `DocBin`. The serialization format is gzipped msgpack, where
-the msgpack object has the following structure:
-
-```python
-### msgpack object structure
-{
- "attrs": List[uint64], # e.g. [TAG, HEAD, ENT_IOB, ENT_TYPE]
- "tokens": bytes, # Serialized numpy uint64 array with the token data
- "spaces": bytes, # Serialized numpy boolean array with spaces data
- "lengths": bytes, # Serialized numpy int32 array with the doc lengths
- "strings": List[unicode] # List of unique strings in the token data
-}
-```
-
-Strings for the words, tags, labels etc. are represented by 64-bit hashes in the
-token data, and every string that occurs at least once is passed via the strings
-object. This means the storage is more efficient if you pack more documents
-together, because you have less duplication in the strings. For usage examples,
-see the docs on [serializing `Doc` objects](/usage/saving-loading#docs).
-
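The hash-and-strings scheme described above can be illustrated with a toy string store. The class name and plain interning are illustrative assumptions; spaCy actually uses 64-bit hashes:

```python
# Toy illustration of string packing: token attributes are stored as
# integer IDs, and each unique string is written out exactly once, so
# packing more documents together means less duplication.

class ToyStringStore:
    def __init__(self):
        self.strings = []   # unique strings, stored once
        self._ids = {}      # string -> integer ID

    def add(self, string):
        if string not in self._ids:
            self._ids[string] = len(self.strings)
            self.strings.append(string)
        return self._ids[string]

store = ToyStringStore()
doc1 = [store.add(w) for w in "the cat sat".split()]
doc2 = [store.add(w) for w in "the cat ran".split()]
print(doc1, doc2)     # [0, 1, 2] [0, 1, 3]
print(store.strings)  # ['the', 'cat', 'sat', 'ran'] - shared words stored once
```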
-## DocBin.\_\_init\_\_ {#init tag="method"}
-
-Create a `DocBin` object to hold serialized annotations.
-
-> #### Example
->
-> ```python
-> from spacy.tokens import DocBin
-> doc_bin = DocBin(attrs=["ENT_IOB", "ENT_TYPE"])
-> ```
-
-| Argument | Type | Description |
-| ----------------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| `attrs` | list | List of attributes to serialize. `orth` (hash of token text) and `spacy` (whether the token is followed by whitespace) are always serialized, so they're not required. Defaults to `None`. |
-| `store_user_data` | bool | Whether to include the `Doc.user_data`. Defaults to `False`. |
-| **RETURNS** | `DocBin` | The newly constructed object. |
-
-## DocBin.\_\_len\_\_ {#len tag="method"}
-
-Get the number of `Doc` objects that were added to the `DocBin`.
-
-> #### Example
->
-> ```python
-> doc_bin = DocBin(attrs=["LEMMA"])
-> doc = nlp("This is a document to serialize.")
-> doc_bin.add(doc)
-> assert len(doc_bin) == 1
-> ```
-
-| Argument | Type | Description |
-| ----------- | ---- | ------------------------------------------- |
-| **RETURNS** | int | The number of `Doc`s added to the `DocBin`. |
-
-## DocBin.add {#add tag="method"}
-
-Add a `Doc`'s annotations to the `DocBin` for serialization.
-
-> #### Example
->
-> ```python
-> doc_bin = DocBin(attrs=["LEMMA"])
-> doc = nlp("This is a document to serialize.")
-> doc_bin.add(doc)
-> ```
-
-| Argument | Type | Description |
-| -------- | ----- | ------------------------ |
-| `doc` | `Doc` | The `Doc` object to add. |
-
-## DocBin.get_docs {#get_docs tag="method"}
-
-Recover `Doc` objects from the annotations, using the given vocab.
-
-> #### Example
->
-> ```python
-> docs = list(doc_bin.get_docs(nlp.vocab))
-> ```
-
-| Argument | Type | Description |
-| ---------- | ------- | ------------------ |
-| `vocab` | `Vocab` | The shared vocab. |
-| **YIELDS** | `Doc` | The `Doc` objects. |
-
-## DocBin.merge {#merge tag="method"}
-
-Extend the annotations of this `DocBin` with the annotations from another. Will
-raise an error if the pre-defined attrs of the two `DocBin`s don't match.
-
-> #### Example
->
-> ```python
-> doc_bin1 = DocBin(attrs=["LEMMA", "POS"])
-> doc_bin1.add(nlp("Hello world"))
-> doc_bin2 = DocBin(attrs=["LEMMA", "POS"])
-> doc_bin2.add(nlp("This is a sentence"))
-> doc_bin1.merge(doc_bin2)
-> assert len(doc_bin1) == 2
-> ```
-
-| Argument | Type | Description |
-| -------- | -------- | ------------------------------------------- |
-| `other` | `DocBin` | The `DocBin` to merge into the current bin. |
-
-## DocBin.to_bytes {#to_bytes tag="method"}
-
-Serialize the `DocBin`'s annotations to a bytestring.
-
-> #### Example
->
-> ```python
-> doc_bin = DocBin(attrs=["DEP", "HEAD"])
-> doc_bin_bytes = doc_bin.to_bytes()
-> ```
-
-| Argument | Type | Description |
-| ----------- | ----- | ------------------------ |
-| **RETURNS** | bytes | The serialized `DocBin`. |
-
-## DocBin.from_bytes {#from_bytes tag="method"}
-
-Deserialize the `DocBin`'s annotations from a bytestring.
-
-> #### Example
->
-> ```python
-> doc_bin_bytes = doc_bin.to_bytes()
-> new_doc_bin = DocBin().from_bytes(doc_bin_bytes)
-> ```
-
-| Argument | Type | Description |
-| ------------ | -------- | ---------------------- |
-| `bytes_data` | bytes | The data to load from. |
-| **RETURNS** | `DocBin` | The loaded `DocBin`. |
diff --git a/website/docs/api/entitylinker.md b/website/docs/api/entitylinker.md
deleted file mode 100644
index 88131761f..000000000
--- a/website/docs/api/entitylinker.md
+++ /dev/null
@@ -1,300 +0,0 @@
----
-title: EntityLinker
-teaser:
- Functionality to disambiguate a named entity in text to a unique knowledge
- base identifier.
-tag: class
-source: spacy/pipeline/pipes.pyx
-new: 2.2
----
-
-This class is a subclass of `Pipe` and follows the same API. The pipeline
-component is available in the [processing pipeline](/usage/processing-pipelines)
-via the ID `"entity_linker"`.
-
-## EntityLinker.Model {#model tag="classmethod"}
-
-Initialize a model for the pipe. The model should implement the
-`thinc.neural.Model` API, and should contain a field `tok2vec` that contains the
-context encoder. Wrappers are under development for most major machine learning
-libraries.
-
-| Name | Type | Description |
-| ----------- | ------ | ------------------------------------- |
-| `**kwargs` | - | Parameters for initializing the model |
-| **RETURNS** | object | The initialized model. |
-
-## EntityLinker.\_\_init\_\_ {#init tag="method"}
-
-Create a new pipeline instance. In your application, you would normally use a
-shortcut for this and instantiate the component using its string name and
-[`nlp.create_pipe`](/api/language#create_pipe).
-
-> #### Example
->
-> ```python
-> # Construction via create_pipe
-> entity_linker = nlp.create_pipe("entity_linker")
->
-> # Construction from class
-> from spacy.pipeline import EntityLinker
-> entity_linker = EntityLinker(nlp.vocab)
-> entity_linker.from_disk("/path/to/model")
-> ```
-
-| Name | Type | Description |
-| -------------- | ----------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `vocab` | `Vocab` | The shared vocabulary. |
-| `model` | `thinc.neural.Model` / `True` | The model powering the pipeline component. If no model is supplied, the model is created when you call `begin_training`, `from_disk` or `from_bytes`. |
-| `hidden_width` | int | Width of the hidden layer of the entity linking model, defaults to 128. |
-| `incl_prior` | bool | Whether or not to include prior probabilities in the model. Defaults to True. |
-| `incl_context` | bool | Whether or not to include the local context in the model (if not: only prior probabilities are used). Defaults to True. |
-| **RETURNS** | `EntityLinker` | The newly constructed object. |
-
-## EntityLinker.\_\_call\_\_ {#call tag="method"}
-
-Apply the pipe to one document. The document is modified in place, and returned.
-This usually happens under the hood when the `nlp` object is called on a text
-and all pipeline components are applied to the `Doc` in order. Both
-[`__call__`](/api/entitylinker#call) and [`pipe`](/api/entitylinker#pipe)
-delegate to the [`predict`](/api/entitylinker#predict) and
-[`set_annotations`](/api/entitylinker#set_annotations) methods.
-
-> #### Example
->
-> ```python
-> entity_linker = EntityLinker(nlp.vocab)
-> doc = nlp("This is a sentence.")
-> # This usually happens under the hood
-> processed = entity_linker(doc)
-> ```
-
-| Name | Type | Description |
-| ----------- | ----- | ------------------------ |
-| `doc` | `Doc` | The document to process. |
-| **RETURNS** | `Doc` | The processed document. |
-
-## EntityLinker.pipe {#pipe tag="method"}
-
-Apply the pipe to a stream of documents. This usually happens under the hood
-when the `nlp` object is called on a text and all pipeline components are
-applied to the `Doc` in order. Both [`__call__`](/api/entitylinker#call) and
-[`pipe`](/api/entitylinker#pipe) delegate to the
-[`predict`](/api/entitylinker#predict) and
-[`set_annotations`](/api/entitylinker#set_annotations) methods.
-
-> #### Example
->
-> ```python
-> entity_linker = EntityLinker(nlp.vocab)
-> for doc in entity_linker.pipe(docs, batch_size=50):
-> pass
-> ```
-
-| Name | Type | Description |
-| ------------ | -------- | ------------------------------------------------------ |
-| `stream` | iterable | A stream of documents. |
-| `batch_size` | int | The number of texts to buffer. Defaults to `128`. |
-| **YIELDS** | `Doc` | Processed documents in the order of the original text. |
-
-## EntityLinker.predict {#predict tag="method"}
-
-Apply the pipeline's model to a batch of docs, without modifying them.
-
-> #### Example
->
-> ```python
-> entity_linker = EntityLinker(nlp.vocab)
-> kb_ids, tensors = entity_linker.predict([doc1, doc2])
-> ```
-
-| Name | Type | Description |
-| ----------- | -------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `docs` | iterable | The documents to predict. |
-| **RETURNS** | tuple | A `(kb_ids, tensors)` tuple where `kb_ids` are the model's predicted KB identifiers for the entities in the `docs`, and `tensors` are the token representations used to predict these identifiers. |
-
-## EntityLinker.set_annotations {#set_annotations tag="method"}
-
-Modify a batch of documents, using pre-computed entity IDs for a list of named
-entities.
-
-> #### Example
->
-> ```python
-> entity_linker = EntityLinker(nlp.vocab)
-> kb_ids, tensors = entity_linker.predict([doc1, doc2])
-> entity_linker.set_annotations([doc1, doc2], kb_ids, tensors)
-> ```
-
-| Name | Type | Description |
-| --------- | -------- | ------------------------------------------------------------------------------------------------- |
-| `docs` | iterable | The documents to modify. |
-| `kb_ids` | iterable | The knowledge base identifiers for the entities in the docs, predicted by `EntityLinker.predict`. |
-| `tensors` | iterable | The token representations used to predict the identifiers. |
-
-## EntityLinker.update {#update tag="method"}
-
-Learn from a batch of documents and gold-standard information, updating both the
-pipe's entity linking model and context encoder. Delegates to
-[`predict`](/api/entitylinker#predict) and
-[`get_loss`](/api/entitylinker#get_loss).
-
-> #### Example
->
-> ```python
-> entity_linker = EntityLinker(nlp.vocab)
-> losses = {}
-> optimizer = nlp.begin_training()
-> entity_linker.update([doc1, doc2], [gold1, gold2], losses=losses, sgd=optimizer)
-> ```
-
-| Name | Type | Description |
-| -------- | -------- | ------------------------------------------------------------------------------------------------------- |
-| `docs` | iterable | A batch of documents to learn from. |
-| `golds` | iterable | The gold-standard data. Must have the same length as `docs`. |
-| `drop` | float | The dropout rate, used both for the EL model and the context encoder. |
-| `sgd` | callable | The optimizer for the EL model. Should take two arguments `weights` and `gradient`, and an optional ID. |
-| `losses` | dict | Optional record of the loss during training. The value keyed by the model's name is updated. |
-
-## EntityLinker.get_loss {#get_loss tag="method"}
-
-Find the loss and gradient of loss for the entities in a batch of documents and
-their predicted scores.
-
-> #### Example
->
-> ```python
-> entity_linker = EntityLinker(nlp.vocab)
-> kb_ids, tensors = entity_linker.predict(docs)
-> loss, d_loss = entity_linker.get_loss(docs, [gold1, gold2], kb_ids, tensors)
-> ```
-
-| Name | Type | Description |
-| ----------- | -------- | ------------------------------------------------------------ |
-| `docs` | iterable | The batch of documents. |
-| `golds` | iterable | The gold-standard data. Must have the same length as `docs`. |
-| `kb_ids` | iterable | KB identifiers representing the model's predictions. |
-| `tensors` | iterable | The token representations used to predict the identifiers. |
-| **RETURNS** | tuple | The loss and the gradient, i.e. `(loss, gradient)`. |
-
-## EntityLinker.set_kb {#set_kb tag="method"}
-
-Define the knowledge base (KB) used for disambiguating named entities to KB
-identifiers.
-
-> #### Example
->
-> ```python
-> entity_linker = EntityLinker(nlp.vocab)
-> entity_linker.set_kb(kb)
-> ```
-
-| Name | Type | Description |
-| ---- | --------------- | ------------------------------- |
-| `kb` | `KnowledgeBase` | The [`KnowledgeBase`](/api/kb). |
-
-## EntityLinker.begin_training {#begin_training tag="method"}
-
-Initialize the pipe for training, using data examples if available. If no model
-has been initialized yet, the model is added. Before calling this method, a
-knowledge base should have been defined with
-[`set_kb`](/api/entitylinker#set_kb).
-
-> #### Example
->
-> ```python
-> entity_linker = EntityLinker(nlp.vocab)
-> entity_linker.set_kb(kb)
-> nlp.add_pipe(entity_linker, last=True)
-> optimizer = entity_linker.begin_training(pipeline=nlp.pipeline)
-> ```
-
-| Name | Type | Description |
-| ------------- | -------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `gold_tuples` | iterable | Optional gold-standard annotations from which to construct [`GoldParse`](/api/goldparse) objects. |
-| `pipeline` | list | Optional list of pipeline components that this component is part of. |
-| `sgd` | callable | An optional optimizer. Should take two arguments `weights` and `gradient`, and an optional ID. Will be created via [`EntityLinker`](/api/entitylinker#create_optimizer) if not set. |
-| **RETURNS** | callable | An optimizer. |
-
-## EntityLinker.create_optimizer {#create_optimizer tag="method"}
-
-Create an optimizer for the pipeline component.
-
-> #### Example
->
-> ```python
-> entity_linker = EntityLinker(nlp.vocab)
-> optimizer = entity_linker.create_optimizer()
-> ```
-
-| Name | Type | Description |
-| ----------- | -------- | -------------- |
-| **RETURNS** | callable | The optimizer. |
-
-## EntityLinker.use_params {#use_params tag="method, contextmanager"}
-
-Modify the pipe's EL model, to use the given parameter values.
-
-> #### Example
->
-> ```python
-> entity_linker = EntityLinker(nlp.vocab)
-> with entity_linker.use_params(optimizer.averages):
-> entity_linker.to_disk("/best_model")
-> ```
-
-| Name | Type | Description |
-| -------- | ---- | ---------------------------------------------------------------------------------------------------------- |
-| `params` | dict | The parameter values to use in the model. At the end of the context, the original parameters are restored. |
-
-## EntityLinker.to_disk {#to_disk tag="method"}
-
-Serialize the pipe to disk.
-
-> #### Example
->
-> ```python
-> entity_linker = EntityLinker(nlp.vocab)
-> entity_linker.to_disk("/path/to/entity_linker")
-> ```
-
-| Name | Type | Description |
-| --------- | ---------------- | --------------------------------------------------------------------------------------------------------------------- |
-| `path` | unicode / `Path` | A path to a directory, which will be created if it doesn't exist. Paths may be either strings or `Path`-like objects. |
-| `exclude` | list | String names of [serialization fields](#serialization-fields) to exclude. |
-
-## EntityLinker.from_disk {#from_disk tag="method"}
-
-Load the pipe from disk. Modifies the object in place and returns it.
-
-> #### Example
->
-> ```python
-> entity_linker = EntityLinker(nlp.vocab)
-> entity_linker.from_disk("/path/to/entity_linker")
-> ```
-
-| Name | Type | Description |
-| ----------- | ---------------- | -------------------------------------------------------------------------- |
-| `path` | unicode / `Path` | A path to a directory. Paths may be either strings or `Path`-like objects. |
-| `exclude` | list | String names of [serialization fields](#serialization-fields) to exclude. |
-| **RETURNS** | `EntityLinker` | The modified `EntityLinker` object. |
-
-## Serialization fields {#serialization-fields}
-
-During serialization, spaCy will export several data fields used to restore
-different aspects of the object. If needed, you can exclude them from
-serialization by passing in the string names via the `exclude` argument.
-
-> #### Example
->
-> ```python
-> data = entity_linker.to_disk("/path", exclude=["vocab"])
-> ```
-
-| Name | Description |
-| ------- | -------------------------------------------------------------- |
-| `vocab` | The shared [`Vocab`](/api/vocab). |
-| `cfg` | The config file. You usually don't want to exclude this. |
-| `model` | The binary model data. You usually don't want to exclude this. |
-| `kb` | The knowledge base. You usually don't want to exclude this. |
diff --git a/website/docs/api/entityrecognizer.md b/website/docs/api/entityrecognizer.md
index 9a2766c07..7279a7f77 100644
--- a/website/docs/api/entityrecognizer.md
+++ b/website/docs/api/entityrecognizer.md
@@ -58,7 +58,7 @@ and all pipeline components are applied to the `Doc` in order. Both
>
> ```python
> ner = EntityRecognizer(nlp.vocab)
-> doc = nlp("This is a sentence.")
+> doc = nlp(u"This is a sentence.")
> # This usually happens under the hood
> processed = ner(doc)
> ```
@@ -99,7 +99,7 @@ Apply the pipeline's model to a batch of docs, without modifying them.
>
> ```python
> ner = EntityRecognizer(nlp.vocab)
-> scores, tensors = ner.predict([doc1, doc2])
+> scores = ner.predict([doc1, doc2])
> ```
| Name | Type | Description |
@@ -115,15 +115,14 @@ Modify a batch of documents, using pre-computed scores.
>
> ```python
> ner = EntityRecognizer(nlp.vocab)
-> scores, tensors = ner.predict([doc1, doc2])
-> ner.set_annotations([doc1, doc2], scores, tensors)
+> scores = ner.predict([doc1, doc2])
+> ner.set_annotations([doc1, doc2], scores)
> ```
-| Name | Type | Description |
-| --------- | -------- | ---------------------------------------------------------- |
-| `docs` | iterable | The documents to modify. |
-| `scores` | - | The scores to set, produced by `EntityRecognizer.predict`. |
-| `tensors` | iterable | The token representations used to predict the scores. |
+| Name | Type | Description |
+| -------- | -------- | ---------------------------------------------------------- |
+| `docs` | iterable | The documents to modify. |
+| `scores` | - | The scores to set, produced by `EntityRecognizer.predict`. |
## EntityRecognizer.update {#update tag="method"}
@@ -211,13 +210,13 @@ Modify the pipe's model, to use the given parameter values.
>
> ```python
> ner = EntityRecognizer(nlp.vocab)
-> with ner.use_params(optimizer.averages):
+> with ner.use_params():
> ner.to_disk("/best_model")
> ```
| Name | Type | Description |
| -------- | ---- | ---------------------------------------------------------------------------------------------------------- |
-| `params` | dict | The parameter values to use in the model. At the end of the context, the original parameters are restored. |
+| `params` | - | The parameter values to use in the model. At the end of the context, the original parameters are restored. |
## EntityRecognizer.add_label {#add_label tag="method"}
diff --git a/website/docs/api/entityruler.md b/website/docs/api/entityruler.md
index 5b93fceac..006ba90e6 100644
--- a/website/docs/api/entityruler.md
+++ b/website/docs/api/entityruler.md
@@ -10,9 +10,7 @@ token-based rules or exact phrase matches. It can be combined with the
statistical [`EntityRecognizer`](/api/entityrecognizer) to boost accuracy, or
used on its own to implement a purely rule-based entity recognition system.
After initialization, the component is typically added to the processing
-pipeline using [`nlp.add_pipe`](/api/language#add_pipe). For usage examples, see
-the docs on
-[rule-based entity recognition](/usage/rule-based-matching#entityruler).
+pipeline using [`nlp.add_pipe`](/api/language#add_pipe).
## EntityRuler.\_\_init\_\_ {#init tag="method"}
diff --git a/website/docs/api/goldparse.md b/website/docs/api/goldparse.md
index 2dd24316f..5a2d8a110 100644
--- a/website/docs/api/goldparse.md
+++ b/website/docs/api/goldparse.md
@@ -23,7 +23,6 @@ gradient for those labels will be zero.
| `deps` | iterable | A sequence of strings, representing the syntactic relation types. |
| `entities` | iterable | A sequence of named entity annotations, either as BILUO tag strings, or as `(start_char, end_char, label)` tuples, representing the entity positions. If BILUO tag strings, you can specify missing values by setting the tag to None. |
| `cats` | dict | Labels for text classification. Each key in the dictionary may be a string or an int, or a `(start_char, end_char, label)` tuple, indicating that the label is applied to only part of the document (usually a sentence). |
-| `links` | dict | Labels for entity linking. A dict with `(start_char, end_char)` keys, and the values being dicts with `kb_id:value` entries, representing external KB IDs mapped to either 1.0 (positive) or 0.0 (negative). |
| **RETURNS** | `GoldParse` | The newly constructed object. |
## GoldParse.\_\_len\_\_ {#len tag="method"}
@@ -44,17 +43,16 @@ Whether the provided syntactic annotations form a projective dependency tree.
## Attributes {#attributes}
-| Name | Type | Description |
-| ------------------------------------ | ---- | -------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `words` | list | The words. |
-| `tags` | list | The part-of-speech tag annotations. |
-| `heads` | list | The syntactic head annotations. |
-| `labels` | list | The syntactic relation-type annotations. |
-| `ner` | list | The named entity annotations as BILUO tags. |
-| `cand_to_gold` | list | The alignment from candidate tokenization to gold tokenization. |
-| `gold_to_cand` | list | The alignment from gold tokenization to candidate tokenization. |
-| `cats` 2 | list | Entries in the list should be either a label, or a `(start, end, label)` triple. The tuple form is used for categories applied to spans of the document. |
-| `links` 2.2 | dict | Keys in the dictionary are `(start_char, end_char)` triples, and the values are dictionaries with `kb_id:value` entries. |
+| Name | Type | Description |
+| --------------------------------- | ---- | -------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `words` | list | The words. |
+| `tags` | list | The part-of-speech tag annotations. |
+| `heads` | list | The syntactic head annotations. |
+| `labels` | list | The syntactic relation-type annotations. |
+| `ner` | list | The named entity annotations as BILUO tags. |
+| `cand_to_gold` | list | The alignment from candidate tokenization to gold tokenization. |
+| `gold_to_cand` | list | The alignment from gold tokenization to candidate tokenization. |
+| `cats` 2 | list | Entries in the list should be either a label, or a `(start, end, label)` triple. The tuple form is used for categories applied to spans of the document. |
## Utilities {#util}
@@ -69,7 +67,7 @@ Convert a list of Doc objects into the
> ```python
> from spacy.gold import docs_to_json
>
-> doc = nlp("I like London")
+> doc = nlp(u"I like London")
> json_data = docs_to_json([doc])
> ```
@@ -150,7 +148,7 @@ single-token entity.
> ```python
> from spacy.gold import biluo_tags_from_offsets
>
-> doc = nlp("I like London.")
+> doc = nlp(u"I like London.")
> entities = [(7, 13, "LOC")]
> tags = biluo_tags_from_offsets(doc, entities)
> assert tags == ["O", "O", "U-LOC", "O"]
@@ -172,7 +170,7 @@ entity offsets.
> ```python
> from spacy.gold import offsets_from_biluo_tags
>
-> doc = nlp("I like London.")
+> doc = nlp(u"I like London.")
> tags = ["O", "O", "U-LOC", "O"]
> entities = offsets_from_biluo_tags(doc, tags)
> assert entities == [(7, 13, "LOC")]
@@ -195,7 +193,7 @@ token-based tags, e.g. to overwrite the `doc.ents`.
> ```python
> from spacy.gold import spans_from_biluo_tags
>
-> doc = nlp("I like London.")
+> doc = nlp(u"I like London.")
> tags = ["O", "O", "U-LOC", "O"]
> doc.ents = spans_from_biluo_tags(doc, tags)
> ```
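The BILUO scheme used in the examples above can be sketched without spaCy, given each token's character offsets. This is a simplified version that ignores misaligned entity boundaries:

```python
def biluo_tags(token_spans, entities):
    """Assign BILUO tags to tokens, given (start_char, end_char, label)
    entity annotations. Tokens outside any entity stay "O"."""
    tags = ["O"] * len(token_spans)
    for start, end, label in entities:
        # Tokens fully covered by the entity's character span
        covered = [i for i, (ts, te) in enumerate(token_spans)
                   if ts >= start and te <= end]
        if not covered:
            continue
        if len(covered) == 1:
            tags[covered[0]] = f"U-{label}"  # single-token (Unit) entity
        else:
            tags[covered[0]] = f"B-{label}"
            for i in covered[1:-1]:
                tags[i] = f"I-{label}"
            tags[covered[-1]] = f"L-{label}"
    return tags

# "I like London." -> token character offsets
spans = [(0, 1), (2, 6), (7, 13), (13, 14)]
tags = biluo_tags(spans, [(7, 13, "LOC")])
```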
diff --git a/website/docs/api/kb.md b/website/docs/api/kb.md
deleted file mode 100644
index 639ababb6..000000000
--- a/website/docs/api/kb.md
+++ /dev/null
@@ -1,268 +0,0 @@
----
-title: KnowledgeBase
-teaser: A storage class for entities and aliases of a specific knowledge base (ontology)
-tag: class
-source: spacy/kb.pyx
-new: 2.2
----
-
-The `KnowledgeBase` object provides a method to generate [`Candidate`](/api/kb/#candidate_init)
-objects, which are plausible external identifiers given a certain textual mention.
-Each such `Candidate` holds information from the relevant KB entity,
-such as its frequency in text and possible aliases.
-Each entity in the knowledge base also has a pre-trained entity vector of a fixed size.
-
-## KnowledgeBase.\_\_init\_\_ {#init tag="method"}
-
-Create the knowledge base.
-
-> #### Example
->
-> ```python
-> from spacy.kb import KnowledgeBase
-> vocab = nlp.vocab
-> kb = KnowledgeBase(vocab=vocab, entity_vector_length=64)
-> ```
-
-| Name | Type | Description |
-| ----------------------- | ---------------- | ----------------------------------------- |
-| `vocab` | `Vocab` | A `Vocab` object. |
-| `entity_vector_length` | int | Length of the fixed-size entity vectors. |
-| **RETURNS** | `KnowledgeBase` | The newly constructed object. |
-
-
-## KnowledgeBase.entity_vector_length {#entity_vector_length tag="property"}
-
-The length of the fixed-size entity vectors in the knowledge base.
-
-| Name | Type | Description |
-| ----------- | ---- | ----------------------------------------- |
-| **RETURNS** | int | Length of the fixed-size entity vectors. |
-
-## KnowledgeBase.add_entity {#add_entity tag="method"}
-
-Add an entity to the knowledge base, specifying its corpus frequency
-and entity vector, which should be of length [`entity_vector_length`](/api/kb#entity_vector_length).
-
-> #### Example
->
-> ```python
-> kb.add_entity(entity="Q42", freq=32, entity_vector=vector1)
-> kb.add_entity(entity="Q463035", freq=111, entity_vector=vector2)
-> ```
-
-| Name | Type | Description |
-| --------------- | ------------- | ------------------------------------------------- |
-| `entity` | unicode | The unique entity identifier |
-| `freq` | float | The frequency of the entity in a typical corpus |
-| `entity_vector` | vector | The pre-trained vector of the entity |
-
-## KnowledgeBase.set_entities {#set_entities tag="method"}
-
-Define the full list of entities in the knowledge base, specifying the corpus frequency
-and entity vector for each entity.
-
-> #### Example
->
-> ```python
-> kb.set_entities(entity_list=["Q42", "Q463035"], freq_list=[32, 111], vector_list=[vector1, vector2])
-> ```
-
-| Name | Type | Description |
-| ------------- | ------------- | ------------------------------------------------- |
-| `entity_list` | iterable | List of unique entity identifiers |
-| `freq_list` | iterable | List of entity frequencies |
-| `vector_list` | iterable | List of entity vectors |
-
-## KnowledgeBase.add_alias {#add_alias tag="method"}
-
-Add an alias or mention to the knowledge base, specifying its potential KB identifiers
-and their prior probabilities. The entity identifiers should refer to entities previously
-added with [`add_entity`](/api/kb#add_entity) or [`set_entities`](/api/kb#set_entities).
-The sum of the prior probabilities should not exceed 1.
-
-> #### Example
->
-> ```python
-> kb.add_alias(alias="Douglas", entities=["Q42", "Q463035"], probabilities=[0.6, 0.3])
-> ```
-
-| Name | Type | Description |
-| -------------- | ------------- | -------------------------------------------------- |
-| `alias` | unicode | The textual mention or alias |
-| `entities` | iterable | The potential entities that the alias may refer to |
-| `probabilities`| iterable | The prior probabilities of each entity |
-
-## KnowledgeBase.\_\_len\_\_ {#len tag="method"}
-
-Get the total number of entities in the knowledge base.
-
-> #### Example
->
-> ```python
-> total_entities = len(kb)
-> ```
-
-| Name | Type | Description |
-| ----------- | ---- | --------------------------------------------- |
-| **RETURNS** | int | The number of entities in the knowledge base. |
-
-## KnowledgeBase.get_entity_strings {#get_entity_strings tag="method"}
-
-Get a list of all entity IDs in the knowledge base.
-
-> #### Example
->
-> ```python
-> all_entities = kb.get_entity_strings()
-> ```
-
-| Name | Type | Description |
-| ----------- | ---- | --------------------------------------------- |
-| **RETURNS** | list | The list of entities in the knowledge base. |
-
-## KnowledgeBase.get_size_aliases {#get_size_aliases tag="method"}
-
-Get the total number of aliases in the knowledge base.
-
-> #### Example
->
-> ```python
-> total_aliases = kb.get_size_aliases()
-> ```
-
-| Name | Type | Description |
-| ----------- | ---- | --------------------------------------------- |
-| **RETURNS** | int | The number of aliases in the knowledge base. |
-
-## KnowledgeBase.get_alias_strings {#get_alias_strings tag="method"}
-
-Get a list of all aliases in the knowledge base.
-
-> #### Example
->
-> ```python
-> all_aliases = kb.get_alias_strings()
-> ```
-
-| Name | Type | Description |
-| ----------- | ---- | --------------------------------------------- |
-| **RETURNS** | list | The list of aliases in the knowledge base. |
-
-## KnowledgeBase.get_candidates {#get_candidates tag="method"}
-
-Given a certain textual mention as input, retrieve a list of candidate entities
-of type [`Candidate`](/api/kb/#candidate_init).
-
-> #### Example
->
-> ```python
-> candidates = kb.get_candidates("Douglas")
-> ```
-
-| Name | Type | Description |
-| ------------- | ------------- | -------------------------------------------------- |
-| `alias` | unicode | The textual mention or alias |
-| **RETURNS** | iterable | The list of relevant `Candidate` objects |
-
-## KnowledgeBase.get_vector {#get_vector tag="method"}
-
-Given a certain entity ID, retrieve its pre-trained entity vector.
-
-> #### Example
->
-> ```python
-> vector = kb.get_vector("Q42")
-> ```
-
-| Name | Type | Description |
-| ------------- | ------------- | -------------------------------------------------- |
-| `entity` | unicode | The entity ID |
-| **RETURNS** | vector | The entity vector |
-
-## KnowledgeBase.get_prior_prob {#get_prior_prob tag="method"}
-
-Given a certain entity ID and a certain textual mention, retrieve
-the prior probability of the fact that the mention links to the entity ID.
-
-> #### Example
->
-> ```python
-> probability = kb.get_prior_prob("Q42", "Douglas")
-> ```
-
-| Name | Type | Description |
-| ------------- | ------------- | --------------------------------------------------------------- |
-| `entity` | unicode | The entity ID |
-| `alias` | unicode | The textual mention or alias |
-| **RETURNS** | float | The prior probability of the `alias` referring to the `entity` |
-
-## KnowledgeBase.dump {#dump tag="method"}
-
-Save the current state of the knowledge base to a directory.
-
-> #### Example
->
-> ```python
-> kb.dump(loc)
-> ```
-
-| Name | Type | Description |
-| ------------- | ---------------- | ------------------------------------------------------------------------------------------------------------------------ |
-| `loc` | unicode / `Path` | A path to a directory, which will be created if it doesn't exist. Paths may be either strings or `Path`-like objects. |
-
-## KnowledgeBase.load_bulk {#load_bulk tag="method"}
-
-Restore the state of the knowledge base from a given directory. Note that the [`Vocab`](/api/vocab)
-should also be the same as the one used to create the KB.
-
-> #### Example
->
-> ```python
-> from spacy.kb import KnowledgeBase
-> from spacy.vocab import Vocab
-> vocab = Vocab().from_disk("/path/to/vocab")
-> kb = KnowledgeBase(vocab=vocab, entity_vector_length=64)
-> kb.load_bulk("/path/to/kb")
-> ```
-
-
-| Name | Type | Description |
-| ----------- | ---------------- | ----------------------------------------------------------------------------------------- |
-| `loc` | unicode / `Path` | A path to a directory. Paths may be either strings or `Path`-like objects. |
-| **RETURNS** | `KnowledgeBase` | The modified `KnowledgeBase` object. |
-
-
-## Candidate.\_\_init\_\_ {#candidate_init tag="method"}
-
-Construct a `Candidate` object. Usually this constructor is not called directly,
-but instead these objects are returned by the [`get_candidates`](/api/kb#get_candidates) method
-of a `KnowledgeBase`.
-
-> #### Example
->
-> ```python
-> from spacy.kb import Candidate
-> candidate = Candidate(kb, entity_hash, entity_freq, entity_vector, alias_hash, prior_prob)
-> ```
-
-| Name | Type | Description |
-| ------------- | --------------- | -------------------------------------------------------------- |
-| `kb` | `KnowledgeBase` | The knowledge base that defined this candidate. |
-| `entity_hash` | int | The hash of the entity's KB ID. |
-| `entity_freq` | float | The entity frequency as recorded in the KB. |
-| `alias_hash` | int | The hash of the textual mention or alias. |
-| `prior_prob` | float | The prior probability of the `alias` referring to the `entity` |
-| **RETURNS** | `Candidate` | The newly constructed object. |
-
-## Candidate attributes {#candidate_attributes}
-
-| Name | Type | Description |
-| ---------------------- | ------------ | ------------------------------------------------------------------ |
-| `entity` | int | The entity's unique KB identifier |
-| `entity_` | unicode | The entity's unique KB identifier |
-| `alias` | int | The alias or textual mention |
-| `alias_` | unicode | The alias or textual mention |
-| `prior_prob` | long | The prior probability of the `alias` referring to the `entity` |
-| `entity_freq` | long | The frequency of the entity in a typical corpus |
-| `entity_vector` | vector | The pre-trained vector of the entity |
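As a rough illustration of the alias-to-candidate lookup this page documents, here is a minimal pure-Python sketch; `TinyKB` is hypothetical and omits entity vectors, hashing and serialization:

```python
class TinyKB:
    """Sketch of the alias -> candidate lookup (not spaCy's KnowledgeBase)."""

    def __init__(self):
        self.entities = {}  # entity ID -> corpus frequency
        self.aliases = {}   # alias -> list of (entity ID, prior probability)

    def add_entity(self, entity, freq):
        self.entities[entity] = freq

    def add_alias(self, alias, entities, probabilities):
        # The sum of the prior probabilities should not exceed 1.
        if sum(probabilities) > 1.0:
            raise ValueError("prior probabilities must not sum to more than 1")
        self.aliases[alias] = list(zip(entities, probabilities))

    def get_candidates(self, alias):
        return self.aliases.get(alias, [])

    def get_prior_prob(self, entity, alias):
        return dict(self.aliases.get(alias, [])).get(entity, 0.0)

kb = TinyKB()
kb.add_entity("Q42", freq=32)
kb.add_entity("Q463035", freq=111)
kb.add_alias("Douglas", ["Q42", "Q463035"], [0.6, 0.3])
candidates = kb.get_candidates("Douglas")
```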
diff --git a/website/docs/api/language.md b/website/docs/api/language.md
index c44339ff5..3fcdeb195 100644
--- a/website/docs/api/language.md
+++ b/website/docs/api/language.md
@@ -45,7 +45,7 @@ contain arbitrary whitespace. Alignment into the original string is preserved.
> #### Example
>
> ```python
-> doc = nlp("An example sentence. Another sentence.")
+> doc = nlp(u"An example sentence. Another sentence.")
> assert (doc[0].text, doc[0].head.tag_) == ("An", "NN")
> ```
@@ -61,8 +61,8 @@ Pipeline components to prevent from being loaded can now be added as a list to
`disable`, instead of specifying one keyword argument per component.
```diff
-- doc = nlp("I don't want parsed", parse=False)
-+ doc = nlp("I don't want parsed", disable=["parser"])
+- doc = nlp(u"I don't want parsed", parse=False)
++ doc = nlp(u"I don't want parsed", disable=["parser"])
```
@@ -86,7 +86,7 @@ multiprocessing.
> #### Example
>
> ```python
-> texts = ["One document.", "...", "Lots of documents"]
+> texts = [u"One document.", u"...", u"Lots of documents"]
> for doc in nlp.pipe(texts, batch_size=50):
> assert doc.is_parsed
> ```
@@ -140,7 +140,6 @@ Evaluate a model's pipeline components.
| `batch_size` | int | The batch size to use. |
| `scorer` | `Scorer` | Optional [`Scorer`](/api/scorer) to use. If not passed in, a new one will be created. |
| `component_cfg` 2.1 | dict | Config parameters for specific pipeline components, keyed by component name. |
-| **RETURNS** | Scorer | The scorer containing the evaluation scores. |
## Language.begin_training {#begin_training tag="method"}
@@ -444,16 +443,15 @@ per component.
## Attributes {#attributes}
-| Name | Type | Description |
-| ------------------------------------------ | ----------- | ----------------------------------------------------------------------------------------------- |
-| `vocab` | `Vocab` | A container for the lexical types. |
-| `tokenizer` | `Tokenizer` | The tokenizer. |
-| `make_doc` | `callable` | Callable that takes a unicode text and returns a `Doc`. |
-| `pipeline` | list | List of `(name, component)` tuples describing the current processing pipeline, in order. |
-| `pipe_names` 2 | list | List of pipeline component names, in order. |
-| `pipe_labels` 2.2 | dict | List of labels set by the pipeline components, if available, keyed by component name. |
-| `meta` | dict | Custom meta data for the Language class. If a model is loaded, contains meta data of the model. |
-| `path` 2 | `Path` | Path to the model data directory, if a model is loaded. Otherwise `None`. |
+| Name | Type | Description |
+| --------------------------------------- | ------------------ | ----------------------------------------------------------------------------------------------- |
+| `vocab` | `Vocab` | A container for the lexical types. |
+| `tokenizer` | `Tokenizer` | The tokenizer. |
+| `make_doc` | `lambda text: Doc` | Create a `Doc` object from unicode text. |
+| `pipeline` | list | List of `(name, component)` tuples describing the current processing pipeline, in order. |
+| `pipe_names` 2 | list | List of pipeline component names, in order. |
+| `meta` | dict | Custom meta data for the Language class. If a model is loaded, contains meta data of the model. |
+| `path` 2 | `Path` | Path to the model data directory, if a model is loaded. Otherwise `None`. |
## Class attributes {#class-attributes}
diff --git a/website/docs/api/lemmatizer.md b/website/docs/api/lemmatizer.md
index 805e96b0f..7bc2691e5 100644
--- a/website/docs/api/lemmatizer.md
+++ b/website/docs/api/lemmatizer.md
@@ -35,10 +35,10 @@ Lemmatize a string.
>
> ```python
> from spacy.lemmatizer import Lemmatizer
-> rules = {"noun": [["s", ""]]}
-> lemmatizer = Lemmatizer(index={}, exceptions={}, rules=rules)
-> lemmas = lemmatizer("ducks", "NOUN")
-> assert lemmas == ["duck"]
+> from spacy.lang.en import LEMMA_INDEX, LEMMA_EXC, LEMMA_RULES
+> lemmatizer = Lemmatizer(LEMMA_INDEX, LEMMA_EXC, LEMMA_RULES)
+> lemmas = lemmatizer(u"ducks", u"NOUN")
+> assert lemmas == [u"duck"]
> ```
| Name | Type | Description |
@@ -52,22 +52,21 @@ Lemmatize a string.
Look up a lemma in the lookup table, if available. If no lemma is found, the
original string is returned. Languages can provide a
-[lookup table](/usage/adding-languages#lemmatizer) via the `resources`, set on
-the individual `Language` class.
+[lookup table](/usage/adding-languages#lemmatizer) via the `lemma_lookup`
+variable, set on the individual `Language` class.
> #### Example
>
> ```python
-> lookup = {"going": "go"}
+> lookup = {u"going": u"go"}
> lemmatizer = Lemmatizer(lookup=lookup)
-> assert lemmatizer.lookup("going") == "go"
+> assert lemmatizer.lookup(u"going") == u"go"
> ```
-| Name | Type | Description |
-| ----------- | ------- | ----------------------------------------------------------------------------------------------------------- |
-| `string` | unicode | The string to look up. |
-| `orth` | int | Optional hash of the string to look up. If not set, the string will be used and hashed. Defaults to `None`. |
-| **RETURNS** | unicode | The lemma if the string was found, otherwise the original string. |
+| Name | Type | Description |
+| ----------- | ------- | ----------------------------------------------------------------- |
+| `string` | unicode | The string to look up. |
+| **RETURNS** | unicode | The lemma if the string was found, otherwise the original string. |
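The lookup fallback described above amounts to a dictionary lookup that falls back to the original string; a sketch:

```python
def lookup_lemma(string, lookup):
    # Return the lemma if the string is in the table, else the string itself.
    return lookup.get(string, string)

lookup = {u"going": u"go"}
```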
## Lemmatizer.is_base_form {#is_base_form tag="method"}
diff --git a/website/docs/api/lexeme.md b/website/docs/api/lexeme.md
index 398b71708..018dc72d8 100644
--- a/website/docs/api/lexeme.md
+++ b/website/docs/api/lexeme.md
@@ -27,7 +27,7 @@ Change the value of a boolean flag.
>
> ```python
> COOL_FLAG = nlp.vocab.add_flag(lambda text: False)
-> nlp.vocab["spaCy"].set_flag(COOL_FLAG, True)
+> nlp.vocab[u'spaCy'].set_flag(COOL_FLAG, True)
> ```
| Name | Type | Description |
@@ -42,9 +42,9 @@ Check the value of a boolean flag.
> #### Example
>
> ```python
-> is_my_library = lambda text: text in ["spaCy", "Thinc"]
+> is_my_library = lambda text: text in [u"spaCy", u"Thinc"]
> MY_LIBRARY = nlp.vocab.add_flag(is_my_library)
-> assert nlp.vocab["spaCy"].check_flag(MY_LIBRARY) == True
+> assert nlp.vocab[u"spaCy"].check_flag(MY_LIBRARY) == True
> ```
| Name | Type | Description |
@@ -59,8 +59,8 @@ Compute a semantic similarity estimate. Defaults to cosine over vectors.
> #### Example
>
> ```python
-> apple = nlp.vocab["apple"]
-> orange = nlp.vocab["orange"]
+> apple = nlp.vocab[u"apple"]
+> orange = nlp.vocab[u"orange"]
> apple_orange = apple.similarity(orange)
> orange_apple = orange.similarity(apple)
> assert apple_orange == orange_apple
> ```
@@ -78,7 +78,7 @@ A boolean value indicating whether a word vector is associated with the lexeme.
> #### Example
>
> ```python
-> apple = nlp.vocab["apple"]
+> apple = nlp.vocab[u"apple"]
> assert apple.has_vector
> ```
@@ -93,7 +93,7 @@ A real-valued meaning representation.
> #### Example
>
> ```python
-> apple = nlp.vocab["apple"]
+> apple = nlp.vocab[u"apple"]
> assert apple.vector.dtype == "float32"
> assert apple.vector.shape == (300,)
> ```
@@ -109,8 +109,8 @@ The L2 norm of the lexeme's vector representation.
> #### Example
>
> ```python
-> apple = nlp.vocab["apple"]
-> pasta = nlp.vocab["pasta"]
+> apple = nlp.vocab[u"apple"]
+> pasta = nlp.vocab[u"pasta"]
> apple.vector_norm # 7.1346845626831055
> pasta.vector_norm # 7.759851932525635
> assert apple.vector_norm != pasta.vector_norm
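The `similarity` and `vector_norm` examples above rest on two quantities: the L2 norm of a vector and the cosine of the angle between two vectors. A self-contained sketch:

```python
import math

def l2_norm(v):
    # L2 norm: square root of the sum of squared components.
    return math.sqrt(sum(a * a for a in v))

def cosine(v1, v2):
    # Cosine similarity: dot product divided by the product of L2 norms.
    dot = sum(a * b for a, b in zip(v1, v2))
    return dot / (l2_norm(v1) * l2_norm(v2))

apple = [1.0, 2.0, 0.0]
orange = [2.0, 1.0, 0.0]
sim = cosine(apple, orange)
```

Cosine is symmetric, which is why `apple.similarity(orange) == orange.similarity(apple)` in the example above.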
diff --git a/website/docs/api/lookups.md b/website/docs/api/lookups.md
deleted file mode 100644
index 9878546ea..000000000
--- a/website/docs/api/lookups.md
+++ /dev/null
@@ -1,318 +0,0 @@
----
-title: Lookups
-teaser: A container for large lookup tables and dictionaries
-tag: class
-source: spacy/lookups.py
-new: 2.2
----
-
-This class allows convenient access to large lookup tables and dictionaries,
-e.g. lemmatization data or tokenizer exception lists using Bloom filters.
-Lookups are available via the [`Vocab`](/api/vocab) as `vocab.lookups`, so they
-can be accessed before the pipeline components are applied (e.g. in the
-tokenizer and lemmatizer), as well as within the pipeline components via
-`doc.vocab.lookups`.
-
-## Lookups.\_\_init\_\_ {#init tag="method"}
-
-Create a `Lookups` object.
-
-> #### Example
->
-> ```python
-> from spacy.lookups import Lookups
-> lookups = Lookups()
-> ```
-
-| Name | Type | Description |
-| ----------- | --------- | ----------------------------- |
-| **RETURNS** | `Lookups` | The newly constructed object. |
-
-## Lookups.\_\_len\_\_ {#len tag="method"}
-
-Get the current number of tables in the lookups.
-
-> #### Example
->
-> ```python
-> lookups = Lookups()
-> assert len(lookups) == 0
-> ```
-
-| Name | Type | Description |
-| ----------- | ---- | ------------------------------------ |
-| **RETURNS** | int | The number of tables in the lookups. |
-
-## Lookups.\_\_contains\_\_ {#contains tag="method"}
-
-Check if the lookups contain a table of a given name. Delegates to
-[`Lookups.has_table`](/api/lookups#has_table).
-
-> #### Example
->
-> ```python
-> lookups = Lookups()
-> lookups.add_table("some_table")
-> assert "some_table" in lookups
-> ```
-
-| Name | Type | Description |
-| ----------- | ------- | ----------------------------------------------- |
-| `name` | unicode | Name of the table. |
-| **RETURNS** | bool | Whether a table of that name is in the lookups. |
-
-## Lookups.tables {#tables tag="property"}
-
-Get the names of all tables in the lookups.
-
-> #### Example
->
-> ```python
-> lookups = Lookups()
-> lookups.add_table("some_table")
-> assert lookups.tables == ["some_table"]
-> ```
-
-| Name | Type | Description |
-| ----------- | ---- | ----------------------------------- |
-| **RETURNS** | list | Names of the tables in the lookups. |
-
-## Lookups.add_table {#add_table tag="method"}
-
-Add a new table with optional data to the lookups. Raises an error if the table
-exists.
-
-> #### Example
->
-> ```python
-> lookups = Lookups()
-> lookups.add_table("some_table", {"foo": "bar"})
-> ```
-
-| Name | Type | Description |
-| ----------- | ----------------------------- | ---------------------------------- |
-| `name` | unicode | Unique name of the table. |
-| `data` | dict | Optional data to add to the table. |
-| **RETURNS** | [`Table`](/api/lookups#table) | The newly added table. |
-
-## Lookups.get_table {#get_table tag="method"}
-
-Get a table from the lookups. Raises an error if the table doesn't exist.
-
-> #### Example
->
-> ```python
-> lookups = Lookups()
-> lookups.add_table("some_table", {"foo": "bar"})
-> table = lookups.get_table("some_table")
-> assert table["foo"] == "bar"
-> ```
-
-| Name | Type | Description |
-| ----------- | ----------------------------- | ------------------ |
-| `name` | unicode | Name of the table. |
-| **RETURNS** | [`Table`](/api/lookups#table) | The table. |
-
-## Lookups.remove_table {#remove_table tag="method"}
-
-Remove a table from the lookups. Raises an error if the table doesn't exist.
-
-> #### Example
->
-> ```python
-> lookups = Lookups()
-> lookups.add_table("some_table")
-> removed_table = lookups.remove_table("some_table")
-> assert "some_table" not in lookups
-> ```
-
-| Name | Type | Description |
-| ----------- | ----------------------------- | ---------------------------- |
-| `name` | unicode | Name of the table to remove. |
-| **RETURNS** | [`Table`](/api/lookups#table) | The removed table. |
-
-## Lookups.has_table {#has_table tag="method"}
-
-Check if the lookups contain a table of a given name. Equivalent to
-[`Lookups.__contains__`](/api/lookups#contains).
-
-> #### Example
->
-> ```python
-> lookups = Lookups()
-> lookups.add_table("some_table")
-> assert lookups.has_table("some_table")
-> ```
-
-| Name | Type | Description |
-| ----------- | ------- | ----------------------------------------------- |
-| `name` | unicode | Name of the table. |
-| **RETURNS** | bool | Whether a table of that name is in the lookups. |
-
-## Lookups.to_bytes {#to_bytes tag="method"}
-
-Serialize the lookups to a bytestring.
-
-> #### Example
->
-> ```python
-> lookup_bytes = lookups.to_bytes()
-> ```
-
-| Name | Type | Description |
-| ----------- | ----- | ----------------------- |
-| **RETURNS** | bytes | The serialized lookups. |
-
-## Lookups.from_bytes {#from_bytes tag="method"}
-
-Load the lookups from a bytestring.
-
-> #### Example
->
-> ```python
-> lookup_bytes = lookups.to_bytes()
-> lookups = Lookups()
-> lookups.from_bytes(lookup_bytes)
-> ```
-
-| Name | Type | Description |
-| ------------ | --------- | ---------------------- |
-| `bytes_data` | bytes | The data to load from. |
-| **RETURNS** | `Lookups` | The loaded lookups. |
-
-## Lookups.to_disk {#to_disk tag="method"}
-
-Save the lookups to a directory as `lookups.bin`. Expects a path to a directory,
-which will be created if it doesn't exist.
-
-> #### Example
->
-> ```python
-> lookups.to_disk("/path/to/lookups")
-> ```
-
-| Name | Type | Description |
-| ------ | ---------------- | --------------------------------------------------------------------------------------------------------------------- |
-| `path` | unicode / `Path` | A path to a directory, which will be created if it doesn't exist. Paths may be either strings or `Path`-like objects. |
-
-## Lookups.from_disk {#from_disk tag="method"}
-
-Load lookups from a directory containing a `lookups.bin`. Will skip loading if
-the file doesn't exist.
-
-> #### Example
->
-> ```python
-> from spacy.lookups import Lookups
-> lookups = Lookups()
-> lookups.from_disk("/path/to/lookups")
-> ```
-
-| Name | Type | Description |
-| ----------- | ---------------- | -------------------------------------------------------------------------- |
-| `path` | unicode / `Path` | A path to a directory. Paths may be either strings or `Path`-like objects. |
-| **RETURNS** | `Lookups` | The loaded lookups. |
-
-## Table {#table tag="class, ordereddict"}
-
-A table in the lookups. Subclass of `OrderedDict` that implements a slightly
-more consistent and unified API and includes a Bloom filter to speed up missed
-lookups. Supports **all other methods and attributes** of `OrderedDict` /
-`dict`, and the customized methods listed here. Methods that get or set keys
-accept both integers and strings (which will be hashed before being added to the
-table).
-
-### Table.\_\_init\_\_ {#table.init tag="method"}
-
-Initialize a new table.
-
-> #### Example
->
-> ```python
-> from spacy.lookups import Table
-> data = {"foo": "bar", "baz": 100}
-> table = Table(name="some_table", data=data)
-> assert "foo" in table
-> assert table["foo"] == "bar"
-> ```
-
-| Name | Type | Description |
-| ----------- | ------- | ---------------------------------- |
-| `name` | unicode | Optional table name for reference. |
-| **RETURNS** | `Table` | The newly constructed object. |
-
-### Table.from_dict {#table.from_dict tag="classmethod"}
-
-Initialize a new table from a dict.
-
-> #### Example
->
-> ```python
-> from spacy.lookups import Table
-> data = {"foo": "bar", "baz": 100}
-> table = Table.from_dict(data, name="some_table")
-> ```
-
-| Name | Type | Description |
-| ----------- | ------- | ---------------------------------- |
-| `data` | dict | The dictionary. |
-| `name` | unicode | Optional table name for reference. |
-| **RETURNS** | `Table` | The newly constructed object. |
-
-### Table.set {#table.set tag="method"}
-
-Set a new key / value pair. String keys will be hashed. Same as
-`table[key] = value`.
-
-> #### Example
->
-> ```python
-> from spacy.lookups import Table
-> table = Table()
-> table.set("foo", "bar")
-> assert table["foo"] == "bar"
-> ```
-
-| Name | Type | Description |
-| ------- | ------------- | ----------- |
-| `key` | unicode / int | The key. |
-| `value` | - | The value. |
-
-### Table.to_bytes {#table.to_bytes tag="method"}
-
-Serialize the table to a bytestring.
-
-> #### Example
->
-> ```python
-> table_bytes = table.to_bytes()
-> ```
-
-| Name | Type | Description |
-| ----------- | ----- | --------------------- |
-| **RETURNS** | bytes | The serialized table. |
-
-### Table.from_bytes {#table.from_bytes tag="method"}
-
-Load a table from a bytestring.
-
-> #### Example
->
-> ```python
-> table_bytes = table.to_bytes()
-> table = Table()
-> table.from_bytes(table_bytes)
-> ```
-
-| Name | Type | Description |
-| ------------ | ------- | ----------------- |
-| `bytes_data` | bytes | The data to load. |
-| **RETURNS** | `Table` | The loaded table. |
-
-### Attributes {#table-attributes}
-
-| Name | Type | Description |
-| -------------- | --------------------------- | ----------------------------------------------------- |
-| `name` | unicode | Table name. |
-| `default_size` | int | Default size of bloom filters if no data is provided. |
-| `bloom` | `preshed.bloom.BloomFilter` | The bloom filters. |
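The hash-keyed `Table` behaviour documented above can be approximated with an `OrderedDict` subclass. This sketch uses Python's built-in `hash` in place of spaCy's string hashing and omits the Bloom filter:

```python
from collections import OrderedDict

def get_hash(key):
    # Integer keys pass through; string keys are hashed before storage.
    return key if isinstance(key, int) else hash(key)

class Table(OrderedDict):
    """Sketch of a lookup table whose string keys are hashed on access."""

    def __init__(self, name=None, data=None):
        super().__init__()
        self.name = name
        for key, value in (data or {}).items():
            self.set(key, value)

    def set(self, key, value):
        # Same as table[key] = value, with string keys hashed first.
        self[get_hash(key)] = value

    def __getitem__(self, key):
        return super().__getitem__(get_hash(key))

    def __contains__(self, key):
        return super().__contains__(get_hash(key))

table = Table(name="some_table", data={"foo": "bar"})
```

Methods that get or set keys accept both integers and strings, as in the API above.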
diff --git a/website/docs/api/matcher.md b/website/docs/api/matcher.md
index 84d9ed888..fb0ba1617 100644
--- a/website/docs/api/matcher.md
+++ b/website/docs/api/matcher.md
@@ -50,7 +50,7 @@ Find all token sequences matching the supplied patterns on the `Doc`.
> matcher = Matcher(nlp.vocab)
> pattern = [{"LOWER": "hello"}, {"LOWER": "world"}]
> matcher.add("HelloWorld", None, pattern)
-> doc = nlp("hello world!")
+> doc = nlp(u'hello world!')
> matches = matcher(doc)
> ```
@@ -147,7 +147,7 @@ overwritten.
> matcher = Matcher(nlp.vocab)
> matcher.add("HelloWorld", on_match, [{"LOWER": "hello"}, {"LOWER": "world"}])
> matcher.add("GoogleMaps", on_match, [{"ORTH": "Google"}, {"ORTH": "Maps"}])
-> doc = nlp("HELLO WORLD on Google Maps.")
+> doc = nlp(u"HELLO WORLD on Google Maps.")
> matches = matcher(doc)
> ```
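Token patterns like `[{"LOWER": "hello"}, {"LOWER": "world"}]` describe per-token attribute constraints. A minimal sketch of the matching loop, over dict-based tokens rather than spaCy's `Token` objects:

```python
def match_pattern(tokens, pattern):
    """Return (start, end) spans where each token in the window satisfies
    the corresponding attribute spec (a simplification of Matcher)."""
    spans = []
    n = len(pattern)
    for i in range(len(tokens) - n + 1):
        window = tokens[i:i + n]
        if all(all(tok.get(attr) == value for attr, value in spec.items())
               for tok, spec in zip(window, pattern)):
            spans.append((i, i + n))
    return spans

tokens = [{"LOWER": "hello"}, {"LOWER": "world"}, {"LOWER": "!"}]
spans = match_pattern(tokens, [{"LOWER": "hello"}, {"LOWER": "world"}])
```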
diff --git a/website/docs/api/phrasematcher.md b/website/docs/api/phrasematcher.md
index 40b8d6c1a..c61fa575d 100644
--- a/website/docs/api/phrasematcher.md
+++ b/website/docs/api/phrasematcher.md
@@ -59,8 +59,8 @@ Find all token sequences matching the supplied patterns on the `Doc`.
> from spacy.matcher import PhraseMatcher
>
> matcher = PhraseMatcher(nlp.vocab)
-> matcher.add("OBAMA", None, nlp("Barack Obama"))
-> doc = nlp("Barack Obama lifts America one last time in emotional farewell")
+> matcher.add("OBAMA", None, nlp(u"Barack Obama"))
+> doc = nlp(u"Barack Obama lifts America one last time in emotional farewell")
> matches = matcher(doc)
> ```
@@ -99,7 +99,7 @@ patterns.
> ```python
> matcher = PhraseMatcher(nlp.vocab)
> assert len(matcher) == 0
-> matcher.add("OBAMA", None, nlp("Barack Obama"))
+> matcher.add("OBAMA", None, nlp(u"Barack Obama"))
> assert len(matcher) == 1
> ```
@@ -116,7 +116,7 @@ Check whether the matcher contains rules for a match ID.
> ```python
> matcher = PhraseMatcher(nlp.vocab)
> assert "OBAMA" not in matcher
-> matcher.add("OBAMA", None, nlp("Barack Obama"))
+> matcher.add("OBAMA", None, nlp(u"Barack Obama"))
> assert "OBAMA" in matcher
> ```
@@ -140,10 +140,10 @@ overwritten.
> print('Matched!', matches)
>
> matcher = PhraseMatcher(nlp.vocab)
-> matcher.add("OBAMA", on_match, nlp("Barack Obama"))
-> matcher.add("HEALTH", on_match, nlp("health care reform"),
-> nlp("healthcare reform"))
-> doc = nlp("Barack Obama urges Congress to find courage to defend his healthcare reforms")
+> matcher.add("OBAMA", on_match, nlp(u"Barack Obama"))
+> matcher.add("HEALTH", on_match, nlp(u"health care reform"),
+> nlp(u"healthcare reform"))
+> doc = nlp(u"Barack Obama urges Congress to find courage to defend his healthcare reforms")
> matches = matcher(doc)
> ```
@@ -152,22 +152,3 @@ overwritten.
| `match_id` | unicode | An ID for the thing you're matching. |
| `on_match` | callable or `None` | Callback function to act on matches. Takes the arguments `matcher`, `doc`, `i` and `matches`. |
| `*docs` | list | `Doc` objects of the phrases to match. |
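Conceptually, a `PhraseMatcher` just looks for exact occurrences of whole token sequences, firing an optional `on_match` callback per hit. A minimal sketch in plain Python (tokens as strings, no spaCy objects):

```python
def phrase_match(doc_tokens, phrases, on_match=None):
    """Toy PhraseMatcher: `phrases` maps a match ID to one or more
    token lists; every exact occurrence in `doc_tokens` is reported."""
    matches = []
    for key, token_lists in phrases.items():
        for phrase in token_lists:
            n = len(phrase)
            for start in range(len(doc_tokens) - n + 1):
                if doc_tokens[start:start + n] == phrase:
                    matches.append((key, start, start + n))
    if on_match is not None:
        for i, match in enumerate(matches):
            on_match(matches, i, match)
    return matches

doc = "Barack Obama lifts America".split()
print(phrase_match(doc, {"OBAMA": [["Barack", "Obama"]]}))
# [('OBAMA', 0, 2)]
```

Matching `Doc` objects instead of strings is what lets the real implementation stay fast and share vocabulary hashes, but the match results have the same shape.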
-
-## PhraseMatcher.remove {#remove tag="method" new="2.2"}
-
-Remove a rule from the matcher by match ID. A `KeyError` is raised if the key
-does not exist.
-
-> #### Example
->
-> ```python
-> matcher = PhraseMatcher(nlp.vocab)
-> matcher.add("OBAMA", None, nlp("Barack Obama"))
-> assert "OBAMA" in matcher
-> matcher.remove("OBAMA")
-> assert "OBAMA" not in matcher
-> ```
-
-| Name | Type | Description |
-| ----- | ------- | ------------------------- |
-| `key` | unicode | The ID of the match rule. |
diff --git a/website/docs/api/pipeline-functions.md b/website/docs/api/pipeline-functions.md
index 6e2b473b1..63b3cd164 100644
--- a/website/docs/api/pipeline-functions.md
+++ b/website/docs/api/pipeline-functions.md
@@ -17,13 +17,13 @@ the processing pipeline using [`nlp.add_pipe`](/api/language#add_pipe).
> #### Example
>
> ```python
-> texts = [t.text for t in nlp("I have a blue car")]
+> texts = [t.text for t in nlp(u"I have a blue car")]
> assert texts == ["I", "have", "a", "blue", "car"]
>
> merge_nps = nlp.create_pipe("merge_noun_chunks")
> nlp.add_pipe(merge_nps)
>
-> texts = [t.text for t in nlp("I have a blue car")]
+> texts = [t.text for t in nlp(u"I have a blue car")]
> assert texts == ["I", "have", "a blue car"]
> ```
@@ -50,13 +50,13 @@ the processing pipeline using [`nlp.add_pipe`](/api/language#add_pipe).
> #### Example
>
> ```python
-> texts = [t.text for t in nlp("I like David Bowie")]
+> texts = [t.text for t in nlp(u"I like David Bowie")]
> assert texts == ["I", "like", "David", "Bowie"]
>
> merge_ents = nlp.create_pipe("merge_entities")
> nlp.add_pipe(merge_ents)
>
-> texts = [t.text for t in nlp("I like David Bowie")]
+> texts = [t.text for t in nlp(u"I like David Bowie")]
> assert texts == ["I", "like", "David Bowie"]
> ```
diff --git a/website/docs/api/scorer.md b/website/docs/api/scorer.md
index 35348217b..2af4ec0ce 100644
--- a/website/docs/api/scorer.md
+++ b/website/docs/api/scorer.md
@@ -46,16 +46,14 @@ Update the evaluation scores from a single [`Doc`](/api/doc) /
## Properties
-| Name | Type | Description |
-| ----------------------------------------------- | ----- | --------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `token_acc` | float | Tokenization accuracy. |
-| `tags_acc` | float | Part-of-speech tag accuracy (fine grained tags, i.e. `Token.tag`). |
-| `uas` | float | Unlabelled dependency score. |
-| `las` | float | Labelled dependency score. |
-| `ents_p` | float | Named entity accuracy (precision). |
-| `ents_r` | float | Named entity accuracy (recall). |
-| `ents_f` | float | Named entity accuracy (F-score). |
-| `ents_per_type` 2.1.5 | dict | Scores per entity label. Keyed by label, mapped to a dict of `p`, `r` and `f` scores. |
-| `textcat_score` 2.2 | float | F-score on positive label for binary exclusive, macro-averaged F-score for 3+ exclusive, macro-averaged AUC ROC score for multilabel (`-1` if undefined). |
-| `textcats_per_cat` 2.2 | dict | Scores per textcat label, keyed by label. |
-| `scores` | dict | All scores, keyed by type. |
+| Name | Type | Description |
+| ---------------------------------------------- | ----- | ------------------------------------------------------------------------------------------------------------- |
+| `token_acc` | float | Tokenization accuracy. |
+| `tags_acc` | float | Part-of-speech tag accuracy (fine grained tags, i.e. `Token.tag`). |
+| `uas` | float | Unlabelled dependency score. |
+| `las` | float | Labelled dependency score. |
+| `ents_p` | float | Named entity accuracy (precision). |
+| `ents_r` | float | Named entity accuracy (recall). |
+| `ents_f` | float | Named entity accuracy (F-score). |
+| `ents_per_type` 2.1.5 | dict | Scores per entity label. Keyed by label, mapped to a dict of `p`, `r` and `f` scores. |
+| `scores` | dict | All scores with keys `uas`, `las`, `ents_p`, `ents_r`, `ents_f`, `ents_per_type`, `tags_acc` and `token_acc`. |
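The entity scores in the table are standard precision/recall/F-measure over predicted vs. gold entity spans. A quick sketch of the arithmetic (not spaCy's `Scorer` code, just the definitions behind `ents_p`, `ents_r` and `ents_f`):

```python
def prf(tp, fp, fn):
    """Precision, recall and F1 from true/false positives and false
    negatives, as used for the entity scores."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# e.g. 8 entities predicted correctly, 2 spurious predictions, 4 missed
p, r, f = prf(8, 2, 4)
print(round(p, 3), round(r, 3), round(f, 3))  # 0.8 0.667 0.727
```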
diff --git a/website/docs/api/sentencizer.md b/website/docs/api/sentencizer.md
index 237cd6a8a..26d205c24 100644
--- a/website/docs/api/sentencizer.md
+++ b/website/docs/api/sentencizer.md
@@ -59,7 +59,7 @@ the component has been added to the pipeline using
> nlp = English()
> sentencizer = nlp.create_pipe("sentencizer")
> nlp.add_pipe(sentencizer)
-> doc = nlp("This is a sentence. This is another sentence.")
+> doc = nlp(u"This is a sentence. This is another sentence.")
> assert len(list(doc.sents)) == 2
> ```
diff --git a/website/docs/api/span.md b/website/docs/api/span.md
index 64b77b89d..c807c7bbf 100644
--- a/website/docs/api/span.md
+++ b/website/docs/api/span.md
@@ -13,20 +13,19 @@ Create a Span object from the slice `doc[start : end]`.
> #### Example
>
> ```python
-> doc = nlp("Give it back! He pleaded.")
+> doc = nlp(u"Give it back! He pleaded.")
> span = doc[1:4]
-> assert [t.text for t in span] == ["it", "back", "!"]
+> assert [t.text for t in span] == [u"it", u"back", u"!"]
> ```
-| Name | Type | Description |
-| ----------- | ---------------------------------------- | ----------------------------------------------------------------------------------------------------------------- |
-| `doc` | `Doc` | The parent document. |
-| `start` | int | The index of the first token of the span. |
-| `end` | int | The index of the first token after the span. |
-| `label` | int / unicode | A label to attach to the span, e.g. for named entities. As of v2.1, the label can also be a unicode string. |
-| `kb_id` | int / unicode | A knowledge base ID to attach to the span, e.g. for named entities. The ID can be an integer or a unicode string. |
-| `vector` | `numpy.ndarray[ndim=1, dtype='float32']` | A meaning representation of the span. |
-| **RETURNS** | `Span` | The newly constructed object. |
+| Name | Type | Description |
+| ----------- | ---------------------------------------- | ----------------------------------------------------------------------------------------------------------- |
+| `doc` | `Doc` | The parent document. |
+| `start` | int | The index of the first token of the span. |
+| `end` | int | The index of the first token after the span. |
+| `label` | int / unicode | A label to attach to the span, e.g. for named entities. As of v2.1, the label can also be a unicode string. |
+| `vector` | `numpy.ndarray[ndim=1, dtype='float32']` | A meaning representation of the span. |
+| **RETURNS** | `Span` | The newly constructed object. |
## Span.\_\_getitem\_\_ {#getitem tag="method"}
@@ -35,7 +34,7 @@ Get a `Token` object.
> #### Example
>
> ```python
-> doc = nlp("Give it back! He pleaded.")
+> doc = nlp(u"Give it back! He pleaded.")
> span = doc[1:4]
> assert span[1].text == "back"
> ```
@@ -50,9 +49,9 @@ Get a `Span` object.
> #### Example
>
> ```python
-> doc = nlp("Give it back! He pleaded.")
+> doc = nlp(u"Give it back! He pleaded.")
> span = doc[1:4]
-> assert span[1:3].text == "back!"
+> assert span[1:3].text == u"back!"
> ```
| Name | Type | Description |
@@ -67,9 +66,9 @@ Iterate over `Token` objects.
> #### Example
>
> ```python
-> doc = nlp("Give it back! He pleaded.")
+> doc = nlp(u"Give it back! He pleaded.")
> span = doc[1:4]
-> assert [t.text for t in span] == ["it", "back", "!"]
+> assert [t.text for t in span] == [u"it", u"back", u"!"]
> ```
| Name | Type | Description |
@@ -83,7 +82,7 @@ Get the number of tokens in the span.
> #### Example
>
> ```python
-> doc = nlp("Give it back! He pleaded.")
+> doc = nlp(u"Give it back! He pleaded.")
> span = doc[1:4]
> assert len(span) == 3
> ```
@@ -102,9 +101,9 @@ For details, see the documentation on
>
> ```python
> from spacy.tokens import Span
-> city_getter = lambda span: any(city in span.text for city in ("New York", "Paris", "Berlin"))
+> city_getter = lambda span: any(city in span.text for city in (u"New York", u"Paris", u"Berlin"))
> Span.set_extension("has_city", getter=city_getter)
-> doc = nlp("I like New York in Autumn")
+> doc = nlp(u"I like New York in Autumn")
> assert doc[1:4]._.has_city
> ```
@@ -180,7 +179,7 @@ using an average of word vectors.
> #### Example
>
> ```python
-> doc = nlp("green apples and red oranges")
+> doc = nlp(u"green apples and red oranges")
> green_apples = doc[:2]
> red_oranges = doc[3:]
> apples_oranges = green_apples.similarity(red_oranges)
@@ -202,7 +201,7 @@ ancestor is found, e.g. if span excludes a necessary ancestor.
> #### Example
>
> ```python
-> doc = nlp("I like New York in Autumn")
+> doc = nlp(u"I like New York in Autumn")
> span = doc[1:4]
> matrix = span.get_lca_matrix()
> # array([[0, 0, 0], [0, 1, 2], [0, 2, 2]], dtype=int32)
@@ -222,7 +221,7 @@ shape `(N, M)`, where `N` is the length of the document. The values will be
>
> ```python
> from spacy.attrs import LOWER, POS, ENT_TYPE, IS_ALPHA
-> doc = nlp("I like New York in Autumn.")
+> doc = nlp(u"I like New York in Autumn.")
> span = doc[2:3]
> # All strings mapped to integers, for easy export to numpy
> np_array = span.to_array([LOWER, POS, ENT_TYPE, IS_ALPHA])
@@ -248,11 +247,11 @@ Retokenize the document, such that the span is merged into a single token.
> #### Example
>
> ```python
-> doc = nlp("I like New York in Autumn.")
+> doc = nlp(u"I like New York in Autumn.")
> span = doc[2:4]
> span.merge()
> assert len(doc) == 6
-> assert doc[2].text == "New York"
+> assert doc[2].text == u"New York"
> ```
| Name | Type | Description |
@@ -268,12 +267,12 @@ if the entity recognizer has been applied.
> #### Example
>
> ```python
-> doc = nlp("Mr. Best flew to New York on Saturday morning.")
+> doc = nlp(u"Mr. Best flew to New York on Saturday morning.")
> span = doc[0:6]
> ents = list(span.ents)
> assert ents[0].label == 346
> assert ents[0].label_ == "PERSON"
-> assert ents[0].text == "Mr. Best"
+> assert ents[0].text == u"Mr. Best"
> ```
| Name | Type | Description |
@@ -287,10 +286,10 @@ Create a new `Doc` object corresponding to the `Span`, with a copy of the data.
> #### Example
>
> ```python
-> doc = nlp("I like New York in Autumn.")
+> doc = nlp(u"I like New York in Autumn.")
> span = doc[2:4]
> doc2 = span.as_doc()
-> assert doc2.text == "New York"
+> assert doc2.text == u"New York"
> ```
| Name | Type | Description |
@@ -307,12 +306,12 @@ taken.
> #### Example
>
> ```python
-> doc = nlp("I like New York in Autumn.")
+> doc = nlp(u"I like New York in Autumn.")
> i, like, new, york, in_, autumn, dot = range(len(doc))
-> assert doc[new].head.text == "York"
-> assert doc[york].head.text == "like"
+> assert doc[new].head.text == u"York"
+> assert doc[york].head.text == u"like"
> new_york = doc[new:york+1]
-> assert new_york.root.text == "York"
+> assert new_york.root.text == u"York"
> ```
| Name | Type | Description |
@@ -326,9 +325,9 @@ A tuple of tokens coordinated to `span.root`.
> #### Example
>
> ```python
-> doc = nlp("I like apples and oranges")
+> doc = nlp(u"I like apples and oranges")
> apples_conjuncts = doc[2:3].conjuncts
-> assert [t.text for t in apples_conjuncts] == ["oranges"]
+> assert [t.text for t in apples_conjuncts] == [u"oranges"]
> ```
| Name | Type | Description |
@@ -342,9 +341,9 @@ Tokens that are to the left of the span, whose heads are within the span.
> #### Example
>
> ```python
-> doc = nlp("I like New York in Autumn.")
+> doc = nlp(u"I like New York in Autumn.")
> lefts = [t.text for t in doc[3:7].lefts]
-> assert lefts == ["New"]
+> assert lefts == [u"New"]
> ```
| Name | Type | Description |
@@ -358,9 +357,9 @@ Tokens that are to the right of the span, whose heads are within the span.
> #### Example
>
> ```python
-> doc = nlp("I like New York in Autumn.")
+> doc = nlp(u"I like New York in Autumn.")
> rights = [t.text for t in doc[2:4].rights]
-> assert rights == ["in"]
+> assert rights == [u"in"]
> ```
| Name | Type | Description |
@@ -375,7 +374,7 @@ the span.
> #### Example
>
> ```python
-> doc = nlp("I like New York in Autumn.")
+> doc = nlp(u"I like New York in Autumn.")
> assert doc[3:7].n_lefts == 1
> ```
@@ -391,7 +390,7 @@ the span.
> #### Example
>
> ```python
-> doc = nlp("I like New York in Autumn.")
+> doc = nlp(u"I like New York in Autumn.")
> assert doc[2:4].n_rights == 1
> ```
@@ -406,9 +405,9 @@ Tokens within the span and tokens which descend from them.
> #### Example
>
> ```python
-> doc = nlp("Give it back! He pleaded.")
+> doc = nlp(u"Give it back! He pleaded.")
> subtree = [t.text for t in doc[:3].subtree]
-> assert subtree == ["Give", "it", "back", "!"]
+> assert subtree == [u"Give", u"it", u"back", u"!"]
> ```
| Name | Type | Description |
@@ -422,7 +421,7 @@ A boolean value indicating whether a word vector is associated with the object.
> #### Example
>
> ```python
-> doc = nlp("I like apples")
+> doc = nlp(u"I like apples")
> assert doc[1:].has_vector
> ```
@@ -438,7 +437,7 @@ vectors.
> #### Example
>
> ```python
-> doc = nlp("I like apples")
+> doc = nlp(u"I like apples")
> assert doc[1:].vector.dtype == "float32"
> assert doc[1:].vector.shape == (300,)
> ```
@@ -454,7 +453,7 @@ The L2 norm of the span's vector representation.
> #### Example
>
> ```python
-> doc = nlp("I like apples")
+> doc = nlp(u"I like apples")
> doc[1:].vector_norm # 4.800883928527915
> doc[2:].vector_norm # 6.895897646384268
> assert doc[1:].vector_norm != doc[2:].vector_norm
@@ -479,11 +478,9 @@ The L2 norm of the span's vector representation.
| `text_with_ws` | unicode | The text content of the span with a trailing whitespace character if the last token has one. |
| `orth` | int | ID of the verbatim text content. |
| `orth_` | unicode | Verbatim text content (identical to `Span.text`). Exists mostly for consistency with the other attributes. |
-| `label` | int | The hash value of the span's label. |
+| `label` | int | The span's label. |
| `label_` | unicode | The span's label. |
| `lemma_` | unicode | The span's lemma. |
-| `kb_id` | int | The hash value of the knowledge base ID referred to by the span. |
-| `kb_id_` | unicode | The knowledge base ID referred to by the span. |
| `ent_id` | int | The hash value of the named entity the token is an instance of. |
| `ent_id_` | unicode | The string ID of the named entity the token is an instance of. |
| `sentiment` | float | A scalar value indicating the positivity or negativity of the span. |
diff --git a/website/docs/api/stringstore.md b/website/docs/api/stringstore.md
index 268f19125..40d27a62a 100644
--- a/website/docs/api/stringstore.md
+++ b/website/docs/api/stringstore.md
@@ -16,7 +16,7 @@ Create the `StringStore`.
>
> ```python
> from spacy.strings import StringStore
-> stringstore = StringStore(["apple", "orange"])
+> stringstore = StringStore([u"apple", u"orange"])
> ```
| Name | Type | Description |
@@ -31,7 +31,7 @@ Get the number of strings in the store.
> #### Example
>
> ```python
-> stringstore = StringStore(["apple", "orange"])
+> stringstore = StringStore([u"apple", u"orange"])
> assert len(stringstore) == 2
> ```
@@ -46,10 +46,10 @@ Retrieve a string from a given hash, or vice versa.
> #### Example
>
> ```python
-> stringstore = StringStore(["apple", "orange"])
-> apple_hash = stringstore["apple"]
+> stringstore = StringStore([u"apple", u"orange"])
+> apple_hash = stringstore[u"apple"]
> assert apple_hash == 8566208034543834098
-> assert stringstore[apple_hash] == "apple"
+> assert stringstore[apple_hash] == u"apple"
> ```
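The two-way lookup shown above (string in, hash out; hash in, string out) is easy to sketch. Note the hash function here is FNV-1a, a hypothetical stand-in: spaCy actually uses MurmurHash, so the concrete hash values in the docs' examples won't match this toy.

```python
def fnv1a_64(s):
    """Stand-in 64-bit string hash (spaCy uses MurmurHash instead)."""
    h = 0xCBF29CE484222325
    for b in s.encode("utf8"):
        h = ((h ^ b) * 0x100000001B3) & 0xFFFFFFFFFFFFFFFF
    return h

class ToyStringStore:
    def __init__(self, strings=()):
        self._map = {}
        for s in strings:
            self.add(s)

    def add(self, s):
        key = fnv1a_64(s)
        self._map[key] = s
        return key

    def __getitem__(self, key):
        # String in -> hash out; hash in -> string out.
        if isinstance(key, str):
            return fnv1a_64(key)
        return self._map[key]

    def __contains__(self, s):
        return fnv1a_64(s) in self._map

    def __len__(self):
        return len(self._map)

    def __iter__(self):
        return iter(self._map.values())

store = ToyStringStore(["apple", "orange"])
apple_hash = store["apple"]
assert store[apple_hash] == "apple"
assert "apple" in store and "cherry" not in store
```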
| Name | Type | Description |
@@ -64,9 +64,9 @@ Check whether a string is in the store.
> #### Example
>
> ```python
-> stringstore = StringStore(["apple", "orange"])
-> assert "apple" in stringstore
-> assert not "cherry" in stringstore
+> stringstore = StringStore([u"apple", u"orange"])
+> assert u"apple" in stringstore
+> assert not u"cherry" in stringstore
> ```
| Name | Type | Description |
@@ -82,9 +82,9 @@ store will always include an empty string `''` at position `0`.
> #### Example
>
> ```python
-> stringstore = StringStore(["apple", "orange"])
+> stringstore = StringStore([u"apple", u"orange"])
> all_strings = [s for s in stringstore]
-> assert all_strings == ["apple", "orange"]
+> assert all_strings == [u"apple", u"orange"]
> ```
| Name | Type | Description |
@@ -98,12 +98,12 @@ Add a string to the `StringStore`.
> #### Example
>
> ```python
-> stringstore = StringStore(["apple", "orange"])
-> banana_hash = stringstore.add("banana")
+> stringstore = StringStore([u"apple", u"orange"])
+> banana_hash = stringstore.add(u"banana")
> assert len(stringstore) == 3
> assert banana_hash == 2525716904149915114
-> assert stringstore[banana_hash] == "banana"
-> assert stringstore["banana"] == banana_hash
+> assert stringstore[banana_hash] == u"banana"
+> assert stringstore[u"banana"] == banana_hash
> ```
| Name | Type | Description |
@@ -182,7 +182,7 @@ Get a 64-bit hash for a given string.
>
> ```python
> from spacy.strings import hash_string
-> assert hash_string("apple") == 8566208034543834098
+> assert hash_string(u"apple") == 8566208034543834098
> ```
| Name | Type | Description |
diff --git a/website/docs/api/tagger.md b/website/docs/api/tagger.md
index bd3382f89..a1d921b41 100644
--- a/website/docs/api/tagger.md
+++ b/website/docs/api/tagger.md
@@ -57,7 +57,7 @@ and all pipeline components are applied to the `Doc` in order. Both
>
> ```python
> tagger = Tagger(nlp.vocab)
-> doc = nlp("This is a sentence.")
+> doc = nlp(u"This is a sentence.")
> # This usually happens under the hood
> processed = tagger(doc)
> ```
@@ -97,7 +97,7 @@ Apply the pipeline's model to a batch of docs, without modifying them.
>
> ```python
> tagger = Tagger(nlp.vocab)
-> scores, tensors = tagger.predict([doc1, doc2])
+> scores = tagger.predict([doc1, doc2])
> ```
| Name | Type | Description |
@@ -113,15 +113,14 @@ Modify a batch of documents, using pre-computed scores.
>
> ```python
> tagger = Tagger(nlp.vocab)
-> scores, tensors = tagger.predict([doc1, doc2])
-> tagger.set_annotations([doc1, doc2], scores, tensors)
+> scores = tagger.predict([doc1, doc2])
+> tagger.set_annotations([doc1, doc2], scores)
> ```
-| Name | Type | Description |
-| --------- | -------- | ----------------------------------------------------- |
-| `docs` | iterable | The documents to modify. |
-| `scores` | - | The scores to set, produced by `Tagger.predict`. |
-| `tensors` | iterable | The token representations used to predict the scores. |
+| Name | Type | Description |
+| -------- | -------- | ------------------------------------------------ |
+| `docs` | iterable | The documents to modify. |
+| `scores` | - | The scores to set, produced by `Tagger.predict`. |
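The `predict`/`set_annotations` split the diff documents — score a batch without touching the docs, then write annotations in a separate step — can be illustrated with a toy component. The "model" below is a dummy that scores tokens by length; only the two-phase shape is the point.

```python
class ToyTagger:
    """Sketch of the predict/set_annotations split: predict() scores a
    batch without modifying the docs; set_annotations() writes tags."""

    def __init__(self, labels):
        self.labels = labels

    def predict(self, docs):
        # Dummy "model": index each token by its length mod label count.
        return [[len(tok) % len(self.labels) for tok in doc["tokens"]]
                for doc in docs]

    def set_annotations(self, docs, scores):
        for doc, doc_scores in zip(docs, scores):
            doc["tags"] = [self.labels[i] for i in doc_scores]

tagger = ToyTagger(["NOUN", "VERB", "ADJ"])
docs = [{"tokens": ["I", "run"], "tags": None}]
scores = tagger.predict(docs)       # docs untouched here
tagger.set_annotations(docs, scores)
print(docs[0]["tags"])  # ['VERB', 'NOUN']
```

Keeping prediction side-effect-free is what makes it safe to score a batch, inspect or post-process the scores, and only then commit them to the documents.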
## Tagger.update {#update tag="method"}
diff --git a/website/docs/api/textcategorizer.md b/website/docs/api/textcategorizer.md
index 1a0280265..310122b9c 100644
--- a/website/docs/api/textcategorizer.md
+++ b/website/docs/api/textcategorizer.md
@@ -75,7 +75,7 @@ delegate to the [`predict`](/api/textcategorizer#predict) and
>
> ```python
> textcat = TextCategorizer(nlp.vocab)
-> doc = nlp("This is a sentence.")
+> doc = nlp(u"This is a sentence.")
> # This usually happens under the hood
> processed = textcat(doc)
> ```
@@ -116,7 +116,7 @@ Apply the pipeline's model to a batch of docs, without modifying them.
>
> ```python
> textcat = TextCategorizer(nlp.vocab)
-> scores, tensors = textcat.predict([doc1, doc2])
+> scores = textcat.predict([doc1, doc2])
> ```
| Name | Type | Description |
@@ -132,15 +132,14 @@ Modify a batch of documents, using pre-computed scores.
>
> ```python
> textcat = TextCategorizer(nlp.vocab)
-> scores, tensors = textcat.predict([doc1, doc2])
-> textcat.set_annotations([doc1, doc2], scores, tensors)
+> scores = textcat.predict([doc1, doc2])
+> textcat.set_annotations([doc1, doc2], scores)
> ```
-| Name | Type | Description |
-| --------- | -------- | --------------------------------------------------------- |
-| `docs` | iterable | The documents to modify. |
-| `scores` | - | The scores to set, produced by `TextCategorizer.predict`. |
-| `tensors` | iterable | The token representations used to predict the scores. |
+| Name | Type | Description |
+| -------- | -------- | --------------------------------------------------------- |
+| `docs` | iterable | The documents to modify. |
+| `scores` | - | The scores to set, produced by `TextCategorizer.predict`. |
## TextCategorizer.update {#update tag="method"}
@@ -228,13 +227,13 @@ Modify the pipe's model, to use the given parameter values.
>
> ```python
> textcat = TextCategorizer(nlp.vocab)
-> with textcat.use_params(optimizer.averages):
+> with textcat.use_params():
> textcat.to_disk("/best_model")
> ```
| Name | Type | Description |
| -------- | ---- | ---------------------------------------------------------------------------------------------------------- |
-| `params` | dict | The parameter values to use in the model. At the end of the context, the original parameters are restored. |
+| `params` | - | The parameter values to use in the model. At the end of the context, the original parameters are restored. |
## TextCategorizer.add_label {#add_label tag="method"}
diff --git a/website/docs/api/token.md b/website/docs/api/token.md
index 8d7ee5928..24816b401 100644
--- a/website/docs/api/token.md
+++ b/website/docs/api/token.md
@@ -12,9 +12,9 @@ Construct a `Token` object.
> #### Example
>
> ```python
-> doc = nlp("Give it back! He pleaded.")
+> doc = nlp(u"Give it back! He pleaded.")
> token = doc[0]
-> assert token.text == "Give"
+> assert token.text == u"Give"
> ```
| Name | Type | Description |
@@ -31,7 +31,7 @@ The number of unicode characters in the token, i.e. `token.text`.
> #### Example
>
> ```python
-> doc = nlp("Give it back! He pleaded.")
+> doc = nlp(u"Give it back! He pleaded.")
> token = doc[0]
> assert len(token) == 4
> ```
@@ -50,9 +50,9 @@ For details, see the documentation on
>
> ```python
> from spacy.tokens import Token
-> fruit_getter = lambda token: token.text in ("apple", "pear", "banana")
+> fruit_getter = lambda token: token.text in (u"apple", u"pear", u"banana")
> Token.set_extension("is_fruit", getter=fruit_getter)
-> doc = nlp("I have an apple")
+> doc = nlp(u"I have an apple")
> assert doc[3]._.is_fruit
> ```
@@ -128,7 +128,7 @@ Check the value of a boolean flag.
>
> ```python
> from spacy.attrs import IS_TITLE
-> doc = nlp("Give it back! He pleaded.")
+> doc = nlp(u"Give it back! He pleaded.")
> token = doc[0]
> assert token.check_flag(IS_TITLE) == True
> ```
@@ -145,7 +145,7 @@ Compute a semantic similarity estimate. Defaults to cosine over vectors.
> #### Example
>
> ```python
-> apples, _, oranges = nlp("apples and oranges")
+> apples, _, oranges = nlp(u"apples and oranges")
> apples_oranges = apples.similarity(oranges)
> oranges_apples = oranges.similarity(apples)
> assert apples_oranges == oranges_apples
@@ -163,9 +163,9 @@ Get a neighboring token.
> #### Example
>
> ```python
-> doc = nlp("Give it back! He pleaded.")
+> doc = nlp(u"Give it back! He pleaded.")
> give_nbor = doc[0].nbor()
-> assert give_nbor.text == "it"
+> assert give_nbor.text == u"it"
> ```
| Name | Type | Description |
@@ -181,7 +181,7 @@ dependency tree.
> #### Example
>
> ```python
-> doc = nlp("Give it back! He pleaded.")
+> doc = nlp(u"Give it back! He pleaded.")
> give = doc[0]
> it = doc[1]
> assert give.is_ancestor(it)
@@ -199,11 +199,11 @@ The rightmost token of this token's syntactic descendants.
> #### Example
>
> ```python
-> doc = nlp("Give it back! He pleaded.")
+> doc = nlp(u"Give it back! He pleaded.")
> it_ancestors = doc[1].ancestors
-> assert [t.text for t in it_ancestors] == ["Give"]
+> assert [t.text for t in it_ancestors] == [u"Give"]
> he_ancestors = doc[4].ancestors
-> assert [t.text for t in he_ancestors] == ["pleaded"]
+> assert [t.text for t in he_ancestors] == [u"pleaded"]
> ```
| Name | Type | Description |
@@ -217,9 +217,9 @@ A tuple of coordinated tokens, not including the token itself.
> #### Example
>
> ```python
-> doc = nlp("I like apples and oranges")
+> doc = nlp(u"I like apples and oranges")
> apples_conjuncts = doc[2].conjuncts
-> assert [t.text for t in apples_conjuncts] == ["oranges"]
+> assert [t.text for t in apples_conjuncts] == [u"oranges"]
> ```
| Name | Type | Description |
@@ -233,9 +233,9 @@ A sequence of the token's immediate syntactic children.
> #### Example
>
> ```python
-> doc = nlp("Give it back! He pleaded.")
+> doc = nlp(u"Give it back! He pleaded.")
> give_children = doc[0].children
-> assert [t.text for t in give_children] == ["it", "back", "!"]
+> assert [t.text for t in give_children] == [u"it", u"back", u"!"]
> ```
| Name | Type | Description |
@@ -249,9 +249,9 @@ The leftward immediate children of the word, in the syntactic dependency parse.
> #### Example
>
> ```python
-> doc = nlp("I like New York in Autumn.")
+> doc = nlp(u"I like New York in Autumn.")
> lefts = [t.text for t in doc[3].lefts]
-> assert lefts == ["New"]
+> assert lefts == [u"New"]
> ```
| Name | Type | Description |
@@ -265,9 +265,9 @@ The rightward immediate children of the word, in the syntactic dependency parse.
> #### Example
>
> ```python
-> doc = nlp("I like New York in Autumn.")
+> doc = nlp(u"I like New York in Autumn.")
> rights = [t.text for t in doc[3].rights]
-> assert rights == ["in"]
+> assert rights == [u"in"]
> ```
| Name | Type | Description |
@@ -282,7 +282,7 @@ dependency parse.
> #### Example
>
> ```python
-> doc = nlp("I like New York in Autumn.")
+> doc = nlp(u"I like New York in Autumn.")
> assert doc[3].n_lefts == 1
> ```
@@ -298,7 +298,7 @@ dependency parse.
> #### Example
>
> ```python
-> doc = nlp("I like New York in Autumn.")
+> doc = nlp(u"I like New York in Autumn.")
> assert doc[3].n_rights == 1
> ```
@@ -313,9 +313,9 @@ A sequence containing the token and all the token's syntactic descendants.
> #### Example
>
> ```python
-> doc = nlp("Give it back! He pleaded.")
+> doc = nlp(u"Give it back! He pleaded.")
> give_subtree = doc[0].subtree
-> assert [t.text for t in give_subtree] == ["Give", "it", "back", "!"]
+> assert [t.text for t in give_subtree] == [u"Give", u"it", u"back", u"!"]
> ```
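A token's subtree is just everything reachable by following head links downward. A minimal sketch, assuming the parse is given as a `heads` list where `heads[j]` is the index of token `j`'s head and the root points to itself (not how spaCy stores it internally, but equivalent):

```python
def subtree(heads, i):
    """Indices of token i and all its syntactic descendants."""
    children = {}
    for j, h in enumerate(heads):
        if j != h:  # the root is its own head; skip the self-loop
            children.setdefault(h, []).append(j)
    out, stack = [], [i]
    while stack:
        node = stack.pop()
        out.append(node)
        stack.extend(children.get(node, []))
    return sorted(out)

# "Give it back !" with "Give" (index 0) as root of all tokens:
heads = [0, 0, 0, 0]
print(subtree(heads, 0))  # [0, 1, 2, 3]
print(subtree(heads, 1))  # [1]
```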
| Name | Type | Description |
@@ -330,7 +330,7 @@ unknown. Defaults to `True` for the first token in the `Doc`.
> #### Example
>
> ```python
-> doc = nlp("Give it back! He pleaded.")
+> doc = nlp(u"Give it back! He pleaded.")
> assert doc[4].is_sent_start
> assert not doc[5].is_sent_start
> ```
@@ -361,7 +361,7 @@ A boolean value indicating whether a word vector is associated with the token.
> #### Example
>
> ```python
-> doc = nlp("I like apples")
+> doc = nlp(u"I like apples")
> apples = doc[2]
> assert apples.has_vector
> ```
@@ -377,7 +377,7 @@ A real-valued meaning representation.
> #### Example
>
> ```python
-> doc = nlp("I like apples")
+> doc = nlp(u"I like apples")
> apples = doc[2]
> assert apples.vector.dtype == "float32"
> assert apples.vector.shape == (300,)
@@ -394,7 +394,7 @@ The L2 norm of the token's vector representation.
> #### Example
>
> ```python
-> doc = nlp("I like apples and pasta")
+> doc = nlp(u"I like apples and pasta")
> apples = doc[2]
> pasta = doc[4]
> apples.vector_norm # 6.89589786529541
@@ -425,10 +425,8 @@ The L2 norm of the token's vector representation.
| `i` | int | The index of the token within the parent document. |
| `ent_type` | int | Named entity type. |
| `ent_type_` | unicode | Named entity type. |
-| `ent_iob` | int | IOB code of named entity tag. `3` means the token begins an entity, `2` means it is outside an entity, `1` means it is inside an entity, and `0` means no entity tag is set. |
+| `ent_iob` | int | IOB code of named entity tag. `3` means the token begins an entity, `2` means it is outside an entity, `1` means it is inside an entity, and `0` means no entity tag is set. |
| `ent_iob_` | unicode | IOB code of named entity tag. "B" means the token begins an entity, "I" means it is inside an entity, "O" means it is outside an entity, and "" means no entity tag is set. |
-| `ent_kb_id` 2.2 | int | Knowledge base ID that refers to the named entity this token is a part of, if any. |
-| `ent_kb_id_` 2.2 | unicode | Knowledge base ID that refers to the named entity this token is a part of, if any. |
| `ent_id` | int | ID of the entity the token is an instance of, if any. Currently not used, but potentially for coreference resolution. |
| `ent_id_` | unicode | ID of the entity the token is an instance of, if any. Currently not used, but potentially for coreference resolution. |
| `lemma` | int | Base form of the token, with no inflectional suffixes. |
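The integer `ent_iob` convention in the table (`3` = begins, `1` = inside, `2` = outside, `0` = unset) is enough to recover entity spans from a token sequence. A small sketch of that decoding, independent of spaCy:

```python
def iob_spans(iob_codes):
    """Recover (start, end) entity spans from per-token IOB codes,
    using the ent_iob convention: 3 = B, 1 = I, 2 = O, 0 = unset."""
    spans, start = [], None
    for i, code in enumerate(iob_codes):
        if code == 3:              # B: close any open span, start new
            if start is not None:
                spans.append((start, i))
            start = i
        elif code != 1:            # O or unset: close any open span
            if start is not None:
                spans.append((start, i))
            start = None
    if start is not None:
        spans.append((start, len(iob_codes)))
    return spans

# "Mr. Best flew to New York"  ->  B I O O B I
print(iob_spans([3, 1, 2, 2, 3, 1]))  # [(0, 2), (4, 6)]
```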
diff --git a/website/docs/api/tokenizer.md b/website/docs/api/tokenizer.md
index d6ab73f14..ce1ba9a21 100644
--- a/website/docs/api/tokenizer.md
+++ b/website/docs/api/tokenizer.md
@@ -5,9 +5,7 @@ tag: class
source: spacy/tokenizer.pyx
---
-Segment text, and create `Doc` objects with the discovered segment boundaries.
-For a deeper understanding, see the docs on
-[how spaCy's tokenizer works](/usage/linguistic-features#how-tokenizer-works).
+Segment text, and create `Doc` objects with the discovered segment boundaries. For a deeper understanding, see the docs on [how spaCy's tokenizer works](/usage/linguistic-features#how-tokenizer-works).
## Tokenizer.\_\_init\_\_ {#init tag="method"}
@@ -51,7 +49,7 @@ Tokenize a string.
> #### Example
>
> ```python
-> tokens = tokenizer("This is a sentence")
+> tokens = tokenizer(u"This is a sentence")
> assert len(tokens) == 4
> ```
@@ -67,7 +65,7 @@ Tokenize a stream of texts.
> #### Example
>
> ```python
-> texts = ["One document.", "...", "Lots of documents"]
+> texts = [u"One document.", u"...", u"Lots of documents"]
> for doc in tokenizer.pipe(texts, batch_size=50):
> pass
> ```
@@ -111,15 +109,14 @@ if no suffix rules match.
Add a special-case tokenization rule. This mechanism is also used to add custom
tokenizer exceptions to the language data. See the usage guide on
-[adding languages](/usage/adding-languages#tokenizer-exceptions) and
-[linguistic features](/usage/linguistic-features#special-cases) for more details
-and examples.
+[adding languages](/usage/adding-languages#tokenizer-exceptions) and [linguistic features](/usage/linguistic-features#special-cases) for more
+details and examples.
> #### Example
>
> ```python
-> from spacy.attrs import ORTH, NORM
-> case = [{ORTH: "do"}, {ORTH: "n't", NORM: "not"}]
+> from spacy.attrs import ORTH, LEMMA
+> case = [{ORTH: "do"}, {ORTH: "n't", LEMMA: "not"}]
> tokenizer.add_special_case("don't", case)
> ```
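The special-case mechanism boils down to an exceptions table consulted before the general rules. A toy sketch with a plain whitespace tokenizer (the real tokenizer's rule interaction is far richer; only the lookup priority is illustrated):

```python
SPECIAL_CASES = {
    # Each rule maps a surface form to the token dicts it splits into.
    "don't": [{"ORTH": "do"}, {"ORTH": "n't", "LEMMA": "not"}],
}

def tokenize(text, special_cases=SPECIAL_CASES):
    """Whitespace tokenizer that checks special cases first, mirroring
    the priority add_special_case rules get in the real tokenizer."""
    tokens = []
    for chunk in text.split():
        if chunk in special_cases:
            tokens.extend(spec["ORTH"] for spec in special_cases[chunk])
        else:
            tokens.append(chunk)
    return tokens

print(tokenize("I don't know"))  # ['I', 'do', "n't", 'know']
```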
diff --git a/website/docs/api/top-level.md b/website/docs/api/top-level.md
index 50ba0e3d9..9d166a5c5 100644
--- a/website/docs/api/top-level.md
+++ b/website/docs/api/top-level.md
@@ -112,10 +112,10 @@ list of available terms, see
> #### Example
>
> ```python
-> spacy.explain("NORP")
+> spacy.explain(u"NORP")
> # Nationalities or religious or political groups
>
-> doc = nlp("Hello world")
+> doc = nlp(u"Hello world")
> for word in doc:
> print(word.text, word.tag_, spacy.explain(word.tag_))
> # Hello UH interjection
@@ -181,8 +181,8 @@ browser. Will run a simple web server.
> import spacy
> from spacy import displacy
> nlp = spacy.load("en_core_web_sm")
-> doc1 = nlp("This is a sentence.")
-> doc2 = nlp("This is another sentence.")
+> doc1 = nlp(u"This is a sentence.")
+> doc2 = nlp(u"This is another sentence.")
> displacy.serve([doc1, doc2], style="dep")
> ```
@@ -192,7 +192,7 @@ browser. Will run a simple web server.
| `style` | unicode | Visualization style, `'dep'` or `'ent'`. | `'dep'` |
| `page` | bool | Render markup as full HTML page. | `True` |
| `minify` | bool | Minify HTML markup. | `False` |
-| `options` | dict | [Visualizer-specific options](#displacy_options), e.g. colors. | `{}` |
+| `options` | dict | [Visualizer-specific options](#displacy_options), e.g. colors. | `{}` |
| `manual` | bool | Don't parse `Doc` and instead, expect a dict or list of dicts. [See here](/usage/visualizers#manual-usage) for formats and examples. | `False` |
| `port` | int | Port to serve visualization. | `5000` |
| `host` | unicode | Host to serve visualization. | `'0.0.0.0'` |
@@ -207,7 +207,7 @@ Render a dependency parse tree or named entity visualization.
> import spacy
> from spacy import displacy
> nlp = spacy.load("en_core_web_sm")
-> doc = nlp("This is a sentence.")
+> doc = nlp(u"This is a sentence.")
> html = displacy.render(doc, style="dep")
> ```
@@ -218,7 +218,7 @@ Render a dependency parse tree or named entity visualization.
| `page` | bool | Render markup as full HTML page. | `False` |
| `minify` | bool | Minify HTML markup. | `False` |
| `jupyter` | bool | Explicitly enable or disable "[Jupyter](http://jupyter.org/) mode" to return markup ready to be rendered in a notebook. Detected automatically if `None`. | `None` |
-| `options` | dict | [Visualizer-specific options](#displacy_options), e.g. colors. | `{}` |
+| `options` | dict | [Visualizer-specific options](#displacy_options), e.g. colors. | `{}` |
| `manual` | bool | Don't parse `Doc` and instead, expect a dict or list of dicts. [See here](/usage/visualizers#manual-usage) for formats and examples. | `False` |
| **RETURNS** | unicode | Rendered HTML markup. |
@@ -262,18 +262,15 @@ If a setting is not present in the options, the default value will be used.
> displacy.serve(doc, style="ent", options=options)
> ```
-| Name | Type | Description | Default |
-| --------------------------------------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------ |
-| `ents` | list | Entity types to highlight (`None` for all types). | `None` |
-| `colors` | dict | Color overrides. Entity types in uppercase should be mapped to color names or values. | `{}` |
-| `template` 2.2 | unicode | Optional template to overwrite the HTML used to render entity spans. Should be a format string and can use `{bg}`, `{text}` and `{label}`. | see [`templates.py`](https://github.com/explosion/spaCy/blob/master/spacy/displacy/templates.py) |
+| Name | Type | Description | Default |
+| -------- | ---- | ------------------------------------------------------------------------------------- | ------- |
+| `ents` | list | Entity types to highlight (`None` for all types). | `None` |
+| `colors` | dict | Color overrides. Entity types in uppercase should be mapped to color names or values. | `{}` |
By default, displaCy comes with colors for all
[entity types supported by spaCy](/api/annotation#named-entities). If you're
using custom entity types, you can use the `colors` setting to add your own
-colors for them. Your application or model package can also expose a
-[`spacy_displacy_colors` entry point](/usage/saving-loading#entry-points-displacy)
-to add custom labels and their colors automatically.
+colors for them.
## Utility functions {#util source="spacy/util.py"}
@@ -514,9 +511,9 @@ an error if key doesn't match `ORTH` values.
>
> ```python
> BASE = {"a.": [{ORTH: "a."}], ":)": [{ORTH: ":)"}]}
-> NEW = {"a.": [{ORTH: "a.", NORM: "all"}]}
+> NEW = {"a.": [{ORTH: "a.", LEMMA: "all"}]}
> exceptions = util.update_exc(BASE, NEW)
-> # {"a.": [{ORTH: "a.", NORM: "all"}], ":)": [{ORTH: ":)"}]}
+> # {"a.": [{ORTH: "a.", LEMMA: "all"}], ":)": [{ORTH: ":)"}]}
> ```
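The merge semantics can be sketched in a few lines — later dicts overwrite earlier ones, and a key whose `ORTH` values don't join back into the key is rejected (a simplified sketch, using the string `"ORTH"` in place of spaCy's attribute IDs):

```python
# Sketch of update_exc's behavior, simplified: string keys stand in
# for spaCy's integer attribute IDs.
def update_exc(base, *addition_dicts):
    exc = dict(base)
    for additions in addition_dicts:
        for orth, token_attrs in additions.items():
            # the ORTH values combined must match the original string
            combined = "".join(attrs["ORTH"] for attrs in token_attrs)
            if combined != orth:
                raise ValueError("Invalid tokenizer exception: %s" % orth)
            exc[orth] = token_attrs  # later dicts overwrite earlier ones
    return exc

BASE = {"a.": [{"ORTH": "a."}], ":)": [{"ORTH": ":)"}]}
NEW = {"a.": [{"ORTH": "a.", "LEMMA": "all"}]}
merged = update_exc(BASE, NEW)
```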
| Name | Type | Description |
@@ -651,11 +648,11 @@ for batching. Larger `bufsize` means less bias.
> shuffled = itershuffle(values)
> ```
-| Name | Type | Description |
-| ---------- | -------- | ----------------------------------- |
-| `iterable` | iterable | Iterator to shuffle. |
-| `bufsize` | int | Items to hold back (default: 1000). |
-| **YIELDS** | iterable | The shuffled iterator. |
+| Name | Type | Description |
+| ---------- | -------- | ------------------------------------- |
+| `iterable` | iterable | Iterator to shuffle. |
+| `bufsize` | int | Items to hold back (default: 1000). |
+| **YIELDS** | iterable | The shuffled iterator. |
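The buffering trade-off described above — hold back up to `bufsize` items, emit them in random order, with larger buffers giving less bias — can be sketched as a generator (a minimal sketch, not spaCy's exact implementation):

```python
import random

# Buffered shuffle: accumulate up to `bufsize` items, then emit a
# random half of the buffer while continuing to fill it. Items held
# back longer get mixed with more of the stream, so a larger buffer
# approximates a full shuffle more closely.
def itershuffle(iterable, bufsize=1000):
    buf = []
    for item in iterable:
        buf.append(item)
        if len(buf) >= bufsize:
            random.shuffle(buf)
            for _ in range(bufsize // 2):
                yield buf.pop()
    # drain whatever is left at the end of the stream
    random.shuffle(buf)
    while buf:
        yield buf.pop()
```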
### util.filter_spans {#util.filter_spans tag="function" new="2.1.4"}
diff --git a/website/docs/api/vectors.md b/website/docs/api/vectors.md
index ae62d8cfc..c04085091 100644
--- a/website/docs/api/vectors.md
+++ b/website/docs/api/vectors.md
@@ -26,7 +26,7 @@ you can add vectors to later.
> empty_vectors = Vectors(shape=(10000, 300))
>
> data = numpy.zeros((3, 300), dtype='f')
-> keys = ["cat", "dog", "rat"]
+> keys = [u"cat", u"dog", u"rat"]
> vectors = Vectors(data=data, keys=keys)
> ```
@@ -35,7 +35,6 @@ you can add vectors to later.
| `data` | `ndarray[ndim=1, dtype='float32']` | The vector data. |
| `keys` | iterable | A sequence of keys aligned with the data. |
| `shape` | tuple | Size of the table as `(n_entries, n_columns)`, the number of entries and number of columns. Not required if you're initializing the object with `data` and `keys`. |
-| `name` | unicode | A name to identify the vectors table. |
| **RETURNS** | `Vectors` | The newly created object. |
## Vectors.\_\_getitem\_\_ {#getitem tag="method"}
@@ -46,9 +45,9 @@ raised.
> #### Example
>
> ```python
-> cat_id = nlp.vocab.strings["cat"]
+> cat_id = nlp.vocab.strings[u"cat"]
> cat_vector = nlp.vocab.vectors[cat_id]
-> assert cat_vector == nlp.vocab["cat"].vector
+> assert cat_vector == nlp.vocab[u"cat"].vector
> ```
| Name | Type | Description |
@@ -63,7 +62,7 @@ Set a vector for the given key.
> #### Example
>
> ```python
-> cat_id = nlp.vocab.strings["cat"]
+> cat_id = nlp.vocab.strings[u"cat"]
> vector = numpy.random.uniform(-1, 1, (300,))
> nlp.vocab.vectors[cat_id] = vector
> ```
@@ -110,7 +109,7 @@ Check whether a key has been mapped to a vector entry in the table.
> #### Example
>
> ```python
-> cat_id = nlp.vocab.strings["cat"]
+> cat_id = nlp.vocab.strings[u"cat"]
> nlp.vectors.add(cat_id, numpy.random.uniform(-1, 1, (300,)))
> assert cat_id in vectors
> ```
@@ -133,9 +132,9 @@ mapping separately. If you need to manage the strings, you should use the
>
> ```python
> vector = numpy.random.uniform(-1, 1, (300,))
-> cat_id = nlp.vocab.strings["cat"]
+> cat_id = nlp.vocab.strings[u"cat"]
> nlp.vocab.vectors.add(cat_id, vector=vector)
-> nlp.vocab.vectors.add("dog", row=0)
+> nlp.vocab.vectors.add(u"dog", row=0)
> ```
| Name | Type | Description |
@@ -219,8 +218,8 @@ Look up one or more keys by row, or vice versa.
> #### Example
>
> ```python
-> row = nlp.vocab.vectors.find(key="cat")
-> rows = nlp.vocab.vectors.find(keys=["cat", "dog"])
+> row = nlp.vocab.vectors.find(key=u"cat")
+> rows = nlp.vocab.vectors.find(keys=[u"cat", u"dog"])
> key = nlp.vocab.vectors.find(row=256)
> keys = nlp.vocab.vectors.find(rows=[18, 256, 985])
> ```
@@ -242,7 +241,7 @@ vector table.
>
> ```python
> vectors = Vectors(shape(1, 300))
-> vectors.add("cat", numpy.random.uniform(-1, 1, (300,)))
+> vectors.add(u"cat", numpy.random.uniform(-1, 1, (300,)))
> rows, dims = vectors.shape
> assert rows == 1
> assert dims == 300
@@ -277,7 +276,7 @@ If a table is full, it can be resized using
>
> ```python
> vectors = Vectors(shape=(1, 300))
-> vectors.add("cat", numpy.random.uniform(-1, 1, (300,)))
+> vectors.add(u"cat", numpy.random.uniform(-1, 1, (300,)))
> assert vectors.is_full
> ```
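The key-to-row mapping behind `add`, `find` and `is_full` can be mimicked with a tiny stand-in class (a hypothetical toy, pure Python — not the real `Vectors`, which stores a numpy array and hash keys):

```python
# Toy fixed-capacity vector table illustrating the key -> row mapping.
class ToyVectors:
    def __init__(self, shape):
        n_rows, n_dims = shape
        self.data = [[0.0] * n_dims for _ in range(n_rows)]
        self.key2row = {}

    @property
    def is_full(self):
        return len(self.key2row) >= len(self.data)

    def add(self, key, vector=None, row=None):
        if row is None:
            if self.is_full:
                raise ValueError("table is full; resize first")
            row = len(self.key2row)
        self.key2row[key] = row  # several keys may share one row
        if vector is not None:
            self.data[row] = list(vector)
        return row

    def find(self, key):
        return self.key2row.get(key, -1)

v = ToyVectors(shape=(1, 3))
v.add("cat", vector=[0.1, 0.2, 0.3])
v.add("kitten", row=0)  # alias an existing row, no new entry needed
```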
diff --git a/website/docs/api/vocab.md b/website/docs/api/vocab.md
index ea0c2d219..cd21a91d6 100644
--- a/website/docs/api/vocab.md
+++ b/website/docs/api/vocab.md
@@ -18,17 +18,16 @@ Create the vocabulary.
>
> ```python
> from spacy.vocab import Vocab
-> vocab = Vocab(strings=["hello", "world"])
+> vocab = Vocab(strings=[u"hello", u"world"])
> ```
-| Name | Type | Description |
-| ------------------------------------------- | -------------------- | ------------------------------------------------------------------------------------------------------------------ |
-| `lex_attr_getters` | dict | A dictionary mapping attribute IDs to functions to compute them. Defaults to `None`. |
-| `tag_map` | dict | A dictionary mapping fine-grained tags to coarse-grained parts-of-speech, and optionally morphological attributes. |
-| `lemmatizer` | object | A lemmatizer. Defaults to `None`. |
-| `strings` | `StringStore` / list | A [`StringStore`](/api/stringstore) that maps strings to hash values, and vice versa, or a list of strings. |
-| `vectors_name` 2.2 | unicode | A name to identify the vectors table. |
-| **RETURNS** | `Vocab` | The newly constructed object. |
+| Name | Type | Description |
+| ------------------ | -------------------- | ------------------------------------------------------------------------------------------------------------------ |
+| `lex_attr_getters` | dict | A dictionary mapping attribute IDs to functions to compute them. Defaults to `None`. |
+| `tag_map` | dict | A dictionary mapping fine-grained tags to coarse-grained parts-of-speech, and optionally morphological attributes. |
+| `lemmatizer` | object | A lemmatizer. Defaults to `None`. |
+| `strings` | `StringStore` / list | A [`StringStore`](/api/stringstore) that maps strings to hash values, and vice versa, or a list of strings. |
+| **RETURNS** | `Vocab` | The newly constructed object. |
## Vocab.\_\_len\_\_ {#len tag="method"}
@@ -37,7 +36,7 @@ Get the current number of lexemes in the vocabulary.
> #### Example
>
> ```python
-> doc = nlp("This is a sentence.")
+> doc = nlp(u"This is a sentence.")
> assert len(nlp.vocab) > 0
> ```
@@ -53,8 +52,8 @@ unicode string is given, a new lexeme is created and stored.
> #### Example
>
> ```python
-> apple = nlp.vocab.strings["apple"]
-> assert nlp.vocab[apple] == nlp.vocab["apple"]
+> apple = nlp.vocab.strings[u"apple"]
+> assert nlp.vocab[apple] == nlp.vocab[u"apple"]
> ```
| Name | Type | Description |
@@ -85,8 +84,8 @@ given string, you need to look it up in
> #### Example
>
> ```python
-> apple = nlp.vocab.strings["apple"]
-> oov = nlp.vocab.strings["dskfodkfos"]
+> apple = nlp.vocab.strings[u"apple"]
+> oov = nlp.vocab.strings[u"dskfodkfos"]
> assert apple in nlp.vocab
> assert oov not in nlp.vocab
> ```
@@ -107,11 +106,11 @@ using `token.check_flag(flag_id)`.
>
> ```python
> def is_my_product(text):
-> products = ["spaCy", "Thinc", "displaCy"]
+> products = [u"spaCy", u"Thinc", u"displaCy"]
> return text in products
>
> MY_PRODUCT = nlp.vocab.add_flag(is_my_product)
-> doc = nlp("I like spaCy")
+> doc = nlp(u"I like spaCy")
> assert doc[2].check_flag(MY_PRODUCT) == True
> ```
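The flag mechanism is essentially a registry of predicate functions, each identified by the integer ID returned at registration time. A toy version of that idea (hypothetical, checking flags on raw strings rather than tokens):

```python
# Toy flag registry: add_flag stores a predicate and hands back an
# integer flag ID; check_flag applies the stored predicate.
class ToyVocab:
    def __init__(self):
        self.flag_getters = []

    def add_flag(self, flag_getter):
        self.flag_getters.append(flag_getter)
        return len(self.flag_getters) - 1  # the new flag's ID

    def check_flag(self, text, flag_id):
        return self.flag_getters[flag_id](text)

vocab = ToyVocab()
MY_PRODUCT = vocab.add_flag(lambda text: text in {"spaCy", "Thinc"})
```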
@@ -171,7 +170,7 @@ or hash value. If no vectors data is loaded, a `ValueError` is raised.
> #### Example
>
> ```python
-> nlp.vocab.get_vector("apple")
+> nlp.vocab.get_vector(u"apple")
> ```
| Name | Type | Description |
@@ -187,7 +186,7 @@ or hash value.
> #### Example
>
> ```python
-> nlp.vocab.set_vector("apple", array([...]))
+> nlp.vocab.set_vector(u"apple", array([...]))
> ```
| Name | Type | Description |
@@ -203,8 +202,8 @@ Words can be looked up by string or hash value.
> #### Example
>
> ```python
-> if nlp.vocab.has_vector("apple"):
-> vector = nlp.vocab.get_vector("apple")
+> if nlp.vocab.has_vector(u"apple"):
+> vector = nlp.vocab.get_vector(u"apple")
> ```
| Name | Type | Description |
@@ -283,9 +282,9 @@ Load state from a binary string.
> #### Example
>
> ```python
-> apple_id = nlp.vocab.strings["apple"]
+> apple_id = nlp.vocab.strings[u"apple"]
> assert type(apple_id) == int
-> PERSON = nlp.vocab.strings["PERSON"]
+> PERSON = nlp.vocab.strings[u"PERSON"]
> assert type(PERSON) == int
> ```
@@ -294,7 +293,6 @@ Load state from a binary string.
| `strings` | `StringStore` | A table managing the string-to-int mapping. |
| `vectors` 2 | `Vectors` | A table associating word IDs to word vectors. |
| `vectors_length` | int | Number of dimensions for each word vector. |
-| `lookups` | `Lookups` | The available lookup tables in this vocab. |
| `writing_system` 2.1 | dict | A dict with information about the language's writing system. |
## Serialization fields {#serialization-fields}
@@ -315,4 +313,3 @@ serialization by passing in the string names via the `exclude` argument.
| `strings` | The strings in the [`StringStore`](/api/stringstore). |
| `lexemes` | The lexeme data. |
| `vectors` | The word vectors, if available. |
-| `lookups` | The lookup tables, if available. |
diff --git a/website/docs/images/displacy-ent-snek.html b/website/docs/images/displacy-ent-snek.html
deleted file mode 100644
index 1e4920fb5..000000000
--- a/website/docs/images/displacy-ent-snek.html
+++ /dev/null
@@ -1,18 +0,0 @@
-
- 🌱🌿 🐍 SNEK ____ 🌳🌲 ____ 👨🌾 HUMAN 🏘️
-
diff --git a/website/docs/usage/101/_named-entities.md b/website/docs/usage/101/_named-entities.md
index 1ecaf9fe7..54db6dbe8 100644
--- a/website/docs/usage/101/_named-entities.md
+++ b/website/docs/usage/101/_named-entities.md
@@ -12,7 +12,7 @@ Named entities are available as the `ents` property of a `Doc`:
import spacy
nlp = spacy.load("en_core_web_sm")
-doc = nlp("Apple is looking at buying U.K. startup for $1 billion")
+doc = nlp(u"Apple is looking at buying U.K. startup for $1 billion")
for ent in doc.ents:
print(ent.text, ent.start_char, ent.end_char, ent.label_)
@@ -21,7 +21,7 @@ for ent in doc.ents:
> - **Text:** The original entity text.
> - **Start:** Index of start of entity in the `Doc`.
> - **End:** Index of end of entity in the `Doc`.
-> - **Label:** Entity label, i.e. type.
+> - **Label:** Entity label, i.e. type.
+> - **Label:** Entity label, i.e. type.
| Text | Start | End | Label | Description |
| ----------- | :---: | :-: | ------- | ---------------------------------------------------- |
diff --git a/website/docs/usage/101/_pipelines.md b/website/docs/usage/101/_pipelines.md
index d33ea45fd..68308a381 100644
--- a/website/docs/usage/101/_pipelines.md
+++ b/website/docs/usage/101/_pipelines.md
@@ -12,14 +12,14 @@ passed on to the next component.
> - **Creates:** Objects, attributes and properties modified and set by the
> component.
-| Name | Component | Creates | Description |
-| ----------------- | ------------------------------------------------------------------ | ----------------------------------------------------------- | ------------------------------------------------ |
-| **tokenizer** | [`Tokenizer`](/api/tokenizer) | `Doc` | Segment text into tokens. |
-| **tagger** | [`Tagger`](/api/tagger) | `Doc[i].tag` | Assign part-of-speech tags. |
-| **parser** | [`DependencyParser`](/api/dependencyparser) | `Doc[i].head`, `Doc[i].dep`, `Doc.sents`, `Doc.noun_chunks` | Assign dependency labels. |
-| **ner** | [`EntityRecognizer`](/api/entityrecognizer) | `Doc.ents`, `Doc[i].ent_iob`, `Doc[i].ent_type` | Detect and label named entities. |
-| **textcat** | [`TextCategorizer`](/api/textcategorizer) | `Doc.cats` | Assign document labels. |
-| ... | [custom components](/usage/processing-pipelines#custom-components) | `Doc._.xxx`, `Token._.xxx`, `Span._.xxx` | Assign custom attributes, methods or properties. |
+| Name | Component | Creates | Description |
+| ------------- | ------------------------------------------------------------------ | ----------------------------------------------------------- | ------------------------------------------------ |
+| **tokenizer** | [`Tokenizer`](/api/tokenizer) | `Doc` | Segment text into tokens. |
+| **tagger** | [`Tagger`](/api/tagger) | `Doc[i].tag` | Assign part-of-speech tags. |
+| **parser** | [`DependencyParser`](/api/dependencyparser) | `Doc[i].head`, `Doc[i].dep`, `Doc.sents`, `Doc.noun_chunks` | Assign dependency labels. |
+| **ner** | [`EntityRecognizer`](/api/entityrecognizer) | `Doc.ents`, `Doc[i].ent_iob`, `Doc[i].ent_type` | Detect and label named entities. |
+| **textcat** | [`TextCategorizer`](/api/textcategorizer) | `Doc.cats` | Assign document labels. |
+| ... | [custom components](/usage/processing-pipelines#custom-components) | `Doc._.xxx`, `Token._.xxx`, `Span._.xxx` | Assign custom attributes, methods or properties. |
The processing pipeline always **depends on the statistical model** and its
capabilities. For example, a pipeline can only include an entity recognizer
@@ -49,10 +49,6 @@ them, its dependency predictions may be different. Similarly, it matters if you
add the [`EntityRuler`](/api/entityruler) before or after the statistical entity
recognizer: if it's added before, the entity recognizer will take the existing
entities into account when making predictions.
-The [`EntityLinker`](/api/entitylinker), which resolves named entities to
-knowledge base IDs, should be preceded by
-a pipeline component that recognizes entities such as the
-[`EntityRecognizer`](/api/entityrecognizer).
diff --git a/website/docs/usage/101/_pos-deps.md b/website/docs/usage/101/_pos-deps.md
index 9d04d6ffc..d86ee123d 100644
--- a/website/docs/usage/101/_pos-deps.md
+++ b/website/docs/usage/101/_pos-deps.md
@@ -15,8 +15,8 @@ need to add an underscore `_` to its name:
### {executable="true"}
import spacy
-nlp = spacy.load("en_core_web_sm")
-doc = nlp("Apple is looking at buying U.K. startup for $1 billion")
+nlp = spacy.load('en_core_web_sm')
+doc = nlp(u'Apple is looking at buying U.K. startup for $1 billion')
for token in doc:
print(token.text, token.lemma_, token.pos_, token.tag_, token.dep_,
@@ -45,7 +45,7 @@ for token in doc:
| for | for | `ADP` | `IN` | `prep` | `xxx` | `True` | `True` |
| \$ | \$ | `SYM` | `$` | `quantmod` | `$` | `False` | `False` |
| 1 | 1 | `NUM` | `CD` | `compound` | `d` | `False` | `False` |
-| billion | billion | `NUM` | `CD` | `pobj` | `xxxx` | `True` | `False` |
+| billion | billion | `NUM` | `CD` | `pobj` | `xxxx` | `True` | `False` |
> #### Tip: Understanding tags and labels
>
diff --git a/website/docs/usage/101/_serialization.md b/website/docs/usage/101/_serialization.md
index 01a9c39d1..828b796b3 100644
--- a/website/docs/usage/101/_serialization.md
+++ b/website/docs/usage/101/_serialization.md
@@ -13,9 +13,9 @@ file or a byte string. This process is called serialization. spaCy comes with
> object to and from disk, but it's also used for distributed computing, e.g.
> with
> [PySpark](https://spark.apache.org/docs/0.9.0/python-programming-guide.html)
-> or [Dask](https://dask.org). When you unpickle an object, you're agreeing to
-> execute whatever code it contains. It's like calling `eval()` on a string – so
-> don't unpickle objects from untrusted sources.
+> or [Dask](http://dask.pydata.org/en/latest/). When you unpickle an object,
+> you're agreeing to execute whatever code it contains. It's like calling
+> `eval()` on a string – so don't unpickle objects from untrusted sources.
All container classes, i.e. [`Language`](/api/language) (`nlp`),
[`Doc`](/api/doc), [`Vocab`](/api/vocab) and [`StringStore`](/api/stringstore)
diff --git a/website/docs/usage/101/_tokenization.md b/website/docs/usage/101/_tokenization.md
index 764f1e62a..e5f3d3080 100644
--- a/website/docs/usage/101/_tokenization.md
+++ b/website/docs/usage/101/_tokenization.md
@@ -9,7 +9,7 @@ tokens, and we can iterate over them:
import spacy
nlp = spacy.load("en_core_web_sm")
-doc = nlp("Apple is looking at buying U.K. startup for $1 billion")
+doc = nlp(u"Apple is looking at buying U.K. startup for $1 billion")
for token in doc:
print(token.text)
```
diff --git a/website/docs/usage/101/_training.md b/website/docs/usage/101/_training.md
index baf3a1891..61e047748 100644
--- a/website/docs/usage/101/_training.md
+++ b/website/docs/usage/101/_training.md
@@ -20,7 +20,7 @@ difference, the more significant the gradient and the updates to our model.
![The training process](../../images/training.svg)
When training a model, we don't just want it to memorize our examples – we want
-it to come up with a theory that can be **generalized across other examples**.
+it to come up with a theory that can be **generalized across other examples**.
After all, we don't just want the model to learn that this one instance of
"Amazon" right here is a company – we want it to learn that "Amazon", in
contexts _like this_, is most likely a company. That's why the training data
diff --git a/website/docs/usage/101/_vectors-similarity.md b/website/docs/usage/101/_vectors-similarity.md
index 73c35950f..2001d1481 100644
--- a/website/docs/usage/101/_vectors-similarity.md
+++ b/website/docs/usage/101/_vectors-similarity.md
@@ -48,8 +48,8 @@ norm, which can be used to normalize vectors.
### {executable="true"}
import spacy
-nlp = spacy.load("en_core_web_md")
-tokens = nlp("dog cat banana afskfsd")
+nlp = spacy.load('en_core_web_md')
+tokens = nlp(u'dog cat banana afskfsd')
for token in tokens:
print(token.text, token.has_vector, token.vector_norm, token.is_oov)
@@ -88,8 +88,8 @@ definition of similarity.
### {executable="true"}
import spacy
-nlp = spacy.load("en_core_web_md") # make sure to use larger model!
-tokens = nlp("dog cat banana")
+nlp = spacy.load('en_core_web_md') # make sure to use larger model!
+tokens = nlp(u'dog cat banana')
for token1 in tokens:
for token2 in tokens:
diff --git a/website/docs/usage/adding-languages.md b/website/docs/usage/adding-languages.md
index 94d75ea31..374d948b2 100644
--- a/website/docs/usage/adding-languages.md
+++ b/website/docs/usage/adding-languages.md
@@ -71,19 +71,21 @@ from the global rules. Others, like the tokenizer and norm exceptions, are very
specific and will make a big difference to spaCy's performance on the particular
language and training a language model.
-| Variable | Type | Description |
-| ---------------------- | ----- | ---------------------------------------------------------------------------------------------------------- |
-| `STOP_WORDS` | set | Individual words. |
-| `TOKENIZER_EXCEPTIONS` | dict | Keyed by strings mapped to list of one dict per token with token attributes. |
-| `TOKEN_MATCH` | regex | Regexes to match complex tokens, e.g. URLs. |
-| `NORM_EXCEPTIONS` | dict | Keyed by strings, mapped to their norms. |
-| `TOKENIZER_PREFIXES` | list | Strings or regexes, usually not customized. |
-| `TOKENIZER_SUFFIXES` | list | Strings or regexes, usually not customized. |
-| `TOKENIZER_INFIXES` | list | Strings or regexes, usually not customized. |
-| `LEX_ATTRS` | dict | Attribute ID mapped to function. |
-| `SYNTAX_ITERATORS` | dict | Iterator ID mapped to function. Currently only supports `'noun_chunks'`. |
-| `TAG_MAP` | dict | Keyed by strings mapped to [Universal Dependencies](http://universaldependencies.org/u/pos/all.html) tags. |
-| `MORPH_RULES` | dict | Keyed by strings mapped to a dict of their morphological features. |
+| Variable | Type | Description |
+| ----------------------------------------- | ----- | ---------------------------------------------------------------------------------------------------------- |
+| `STOP_WORDS` | set | Individual words. |
+| `TOKENIZER_EXCEPTIONS` | dict | Keyed by strings mapped to list of one dict per token with token attributes. |
+| `TOKEN_MATCH` | regex | Regexes to match complex tokens, e.g. URLs. |
+| `NORM_EXCEPTIONS` | dict | Keyed by strings, mapped to their norms. |
+| `TOKENIZER_PREFIXES` | list | Strings or regexes, usually not customized. |
+| `TOKENIZER_SUFFIXES` | list | Strings or regexes, usually not customized. |
+| `TOKENIZER_INFIXES` | list | Strings or regexes, usually not customized. |
+| `LEX_ATTRS` | dict | Attribute ID mapped to function. |
+| `SYNTAX_ITERATORS` | dict | Iterator ID mapped to function. Currently only supports `'noun_chunks'`. |
+| `LOOKUP` | dict | Keyed by strings mapping to their lemma. |
+| `LEMMA_RULES`, `LEMMA_INDEX`, `LEMMA_EXC` | dict | Lemmatization rules, keyed by part of speech. |
+| `TAG_MAP` | dict | Keyed by strings mapped to [Universal Dependencies](http://universaldependencies.org/u/pos/all.html) tags. |
+| `MORPH_RULES` | dict | Keyed by strings mapped to a dict of their morphological features. |
> #### Should I ever update the global data?
>
@@ -211,7 +213,9 @@ spaCy's [tokenization algorithm](/usage/linguistic-features#how-tokenizer-works)
lets you deal with whitespace-delimited chunks separately. This makes it easy to
define special-case rules, without worrying about how they interact with the
rest of the tokenizer. Whenever the key string is matched, the special-case rule
-is applied, giving the defined sequence of tokens.
+is applied, giving the defined sequence of tokens. You can also attach
+attributes to the subtokens covered by your special case, such as their
+`LEMMA` or `TAG`.
Tokenizer exceptions can be added in the following format:
@@ -219,8 +223,8 @@ Tokenizer exceptions can be added in the following format:
### tokenizer_exceptions.py (excerpt)
TOKENIZER_EXCEPTIONS = {
"don't": [
- {ORTH: "do"},
- {ORTH: "n't", NORM: "not"}]
+ {ORTH: "do", LEMMA: "do"},
+ {ORTH: "n't", LEMMA: "not", NORM: "not", TAG: "RB"}]
}
```
@@ -229,12 +233,41 @@ TOKENIZER_EXCEPTIONS = {
If an exception consists of more than one token, the `ORTH` values combined
always need to **match the original string**. The way the original string is
split up can be pretty arbitrary sometimes – for example `"gonna"` is split into
-`"gon"` (norm "going") and `"na"` (norm "to"). Because of how the tokenizer
+`"gon"` (lemma "go") and `"na"` (lemma "to"). Because of how the tokenizer
works, it's currently not possible to split single-letter strings into multiple
tokens.
+Unambiguous abbreviations, like month names or locations in English, should be
+added to exceptions with a lemma assigned, for example
+`{ORTH: "Jan.", LEMMA: "January"}`. Since the exceptions are added in Python,
+you can use custom logic to generate them more efficiently and make your data
+less verbose. How you do this ultimately depends on the language. Here's an
+example of how exceptions for time formats like "1a.m." and "1am" are generated
+in the English
+[`tokenizer_exceptions.py`](https://github.com/explosion/spaCy/tree/master/spacy/lang/en/tokenizer_exceptions.py):
+
+```python
+### tokenizer_exceptions.py (excerpt)
+# use short, internal variable for readability
+_exc = {}
+
+for h in range(1, 12 + 1):
+ for period in ["a.m.", "am"]:
+ # always keep an eye on string interpolation!
+ _exc["%d%s" % (h, period)] = [
+ {ORTH: "%d" % h},
+ {ORTH: period, LEMMA: "a.m."}]
+ for period in ["p.m.", "pm"]:
+ _exc["%d%s" % (h, period)] = [
+ {ORTH: "%d" % h},
+ {ORTH: period, LEMMA: "p.m."}]
+
+# only declare this at the bottom
+TOKENIZER_EXCEPTIONS = _exc
+```
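Stripped of the diff markers and with stand-ins for the `spacy.attrs` symbols, the generation loop above is directly runnable — 12 hours times 4 spellings yields 48 exceptions:

```python
# Self-contained version of the time-format exception generator.
# ORTH and LEMMA are string stand-ins for the spacy.attrs symbols.
ORTH, LEMMA = "ORTH", "LEMMA"

_exc = {}

for h in range(1, 12 + 1):
    for period in ["a.m.", "am"]:
        _exc["%d%s" % (h, period)] = [
            {ORTH: "%d" % h},
            {ORTH: period, LEMMA: "a.m."}]
    for period in ["p.m.", "pm"]:
        _exc["%d%s" % (h, period)] = [
            {ORTH: "%d" % h},
            {ORTH: period, LEMMA: "p.m."}]

TOKENIZER_EXCEPTIONS = _exc
```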
+
> #### Generating tokenizer exceptions
>
> Keep in mind that generating exceptions only makes sense if there's a clearly
@@ -242,8 +275,7 @@ tokens.
> This is not always the case – in Spanish for instance, infinitive or
> imperative reflexive verbs and pronouns are one token (e.g. "vestirme"). In
> cases like this, spaCy shouldn't be generating exceptions for _all verbs_.
-> Instead, this will be handled at a later stage after part-of-speech tagging
-> and lemmatization.
+> Instead, this will be handled at a later stage during lemmatization.
When adding the tokenizer exceptions to the `Defaults`, you can use the
[`update_exc`](/api/top-level#util.update_exc) helper function to merge them
@@ -260,23 +292,33 @@ custom one.
from ...util import update_exc
BASE_EXCEPTIONS = {"a.": [{ORTH: "a."}], ":)": [{ORTH: ":)"}]}
-TOKENIZER_EXCEPTIONS = {"a.": [{ORTH: "a.", NORM: "all"}]}
+TOKENIZER_EXCEPTIONS = {"a.": [{ORTH: "a.", LEMMA: "all"}]}
tokenizer_exceptions = update_exc(BASE_EXCEPTIONS, TOKENIZER_EXCEPTIONS)
-# {"a.": [{ORTH: "a.", NORM: "all"}], ":)": [{ORTH: ":)"}]}
+# {"a.": [{ORTH: "a.", LEMMA: "all"}], ":)": [{ORTH: ":)"}]}
```
+
+
+Unlike verbs and common nouns, there's no clear base form of a personal pronoun.
+Should the lemma of "me" be "I", or should we normalize person as well, giving
+"it" — or maybe "he"? spaCy's solution is to introduce a novel symbol, `-PRON-`,
+which is used as the lemma for all personal pronouns.
+
+
+
### Norm exceptions {#norm-exceptions new="2"}
-In addition to `ORTH`, tokenizer exceptions can also set a `NORM` attribute.
-This is useful to specify a normalized version of the token – for example, the
-norm of "n't" is "not". By default, a token's norm equals its lowercase text. If
-the lowercase spelling of a word exists, norms should always be in lowercase.
+In addition to `ORTH` or `LEMMA`, tokenizer exceptions can also set a `NORM`
+attribute. This is useful to specify a normalized version of the token – for
+example, the norm of "n't" is "not". By default, a token's norm equals its
+lowercase text. If the lowercase spelling of a word exists, norms should always
+be in lowercase.
> #### Norms vs. lemmas
>
> ```python
-> doc = nlp("I'm gonna realise")
+> doc = nlp(u"I'm gonna realise")
> norms = [token.norm_ for token in doc]
> lemmas = [token.lemma_ for token in doc]
> assert norms == ["i", "am", "going", "to", "realize"]
@@ -396,10 +438,10 @@ iterators:
> #### Noun chunks example
>
> ```python
-> doc = nlp("A phrase with another phrase occurs.")
+> doc = nlp(u"A phrase with another phrase occurs.")
> chunks = list(doc.noun_chunks)
-> assert chunks[0].text == "A phrase"
-> assert chunks[1].text == "another phrase"
+> assert chunks[0].text == u"A phrase"
+> assert chunks[1].text == u"another phrase"
> ```
| Language | Code | Source |
@@ -416,50 +458,27 @@ the quickest and easiest way to get started. The data is stored in a dictionary
mapping a string to its lemma. To determine a token's lemma, spaCy simply looks
it up in the table. Here's an example from the Spanish language data:
-```json
-### lang/es/lemma_lookup.json (excerpt)
-{
- "aba": "abar",
- "ababa": "abar",
- "ababais": "abar",
- "ababan": "abar",
- "ababanes": "ababán",
- "ababas": "abar",
- "ababoles": "ababol",
- "ababábites": "ababábite"
+```python
+### lang/es/lemmatizer.py (excerpt)
+LOOKUP = {
+ "aba": "abar",
+ "ababa": "abar",
+ "ababais": "abar",
+ "ababan": "abar",
+ "ababanes": "ababán",
+ "ababas": "abar",
+ "ababoles": "ababol",
+ "ababábites": "ababábite"
}
```
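Lookup lemmatization really is just a dictionary lookup, with the surface form itself as the fallback when a word isn't in the table. A minimal sketch (table excerpted from above):

```python
# Lookup lemmatizer sketch: dict lookup with the word itself as
# fallback. Table excerpted from the Spanish data above.
LOOKUP = {
    "aba": "abar",
    "ababa": "abar",
    "ababoles": "ababol",
}

def lemmatize(word):
    # unknown words are returned unchanged
    return LOOKUP.get(word, word)
```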
-#### Adding JSON resources {#lemmatizer-resources new="2.2"}
-
-As of v2.2, resources for the lemmatizer are stored as JSON and loaded via the
-new [`Lookups`](/api/lookups) class. This allows easier access to the data,
-serialization with the models and file compression on disk (so your spaCy
-installation is smaller). Resource files can be provided via the `resources`
-attribute on the custom language subclass. All paths are relative to the
-language data directory, i.e. the directory the language's `__init__.py` is in.
+To provide a lookup lemmatizer for your language, import the lookup table and
+add it to the `Language` class as `lemma_lookup`:
```python
-resources = {
- "lemma_lookup": "lemmatizer/lemma_lookup.json",
- "lemma_rules": "lemmatizer/lemma_rules.json",
- "lemma_index": "lemmatizer/lemma_index.json",
- "lemma_exc": "lemmatizer/lemma_exc.json",
-}
+lemma_lookup = LOOKUP
```
-> #### Lookups example
->
-> ```python
-> table = nlp.vocab.lookups.get_table("my_table")
-> value = table.get("some_key")
-> ```
-
-If your language needs other large dictionaries and resources, you can also add
-those files here. The data will become available via a [`Lookups`](/api/lookups)
-table in `nlp.vocab.lookups`, and you'll be able to access it from the tokenizer
-or a custom pipeline component (via `doc.vocab.lookups`).
-
### Tag map {#tag-map}
Most treebanks define a custom part-of-speech tag scheme, striking a balance
diff --git a/website/docs/usage/facts-figures.md b/website/docs/usage/facts-figures.md
index 40b39d871..a3683b668 100644
--- a/website/docs/usage/facts-figures.md
+++ b/website/docs/usage/facts-figures.md
@@ -26,7 +26,7 @@ Here's a quick comparison of the functionalities offered by spaCy,
| Sentence segmentation | ✅ | ✅ | ✅ |
| Dependency parsing | ✅ | ❌ | ✅ |
| Entity recognition | ✅ | ✅ | ✅ |
-| Entity linking | ✅ | ❌ | ❌ |
+| Entity linking | ❌ | ❌ | ❌ |
| Coreference resolution | ❌ | ❌ | ✅ |
### When should I use what? {#comparison-usage}
diff --git a/website/docs/usage/index.md b/website/docs/usage/index.md
index 1d6c0574c..1ffd0de0d 100644
--- a/website/docs/usage/index.md
+++ b/website/docs/usage/index.md
@@ -392,7 +392,7 @@ from is called `spacy`. So, when using spaCy, never call anything else `spacy`.
```python
-doc = nlp("They are")
+doc = nlp(u"They are")
print(doc[0].lemma_)
# -PRON-
```
diff --git a/website/docs/usage/linguistic-features.md b/website/docs/usage/linguistic-features.md
index 4128fa73f..66ad816f5 100644
--- a/website/docs/usage/linguistic-features.md
+++ b/website/docs/usage/linguistic-features.md
@@ -69,6 +69,7 @@ of the two. The system works as follows:
morphological information, without consulting the context of the token. The
lemmatizer also accepts list-based exception files, acquired from
[WordNet](https://wordnet.princeton.edu/).
+
## Dependency Parsing {#dependency-parse model="parser"}
@@ -92,7 +93,7 @@ get the noun chunks in a document, simply iterate over
import spacy
nlp = spacy.load("en_core_web_sm")
-doc = nlp("Autonomous cars shift insurance liability toward manufacturers")
+doc = nlp(u"Autonomous cars shift insurance liability toward manufacturers")
for chunk in doc.noun_chunks:
print(chunk.text, chunk.root.text, chunk.root.dep_,
chunk.root.head.text)
@@ -123,7 +124,7 @@ get the string value with `.dep_`.
import spacy
nlp = spacy.load("en_core_web_sm")
-doc = nlp("Autonomous cars shift insurance liability toward manufacturers")
+doc = nlp(u"Autonomous cars shift insurance liability toward manufacturers")
for token in doc:
print(token.text, token.dep_, token.head.text, token.head.pos_,
[child for child in token.children])
@@ -160,7 +161,7 @@ import spacy
from spacy.symbols import nsubj, VERB
nlp = spacy.load("en_core_web_sm")
-doc = nlp("Autonomous cars shift insurance liability toward manufacturers")
+doc = nlp(u"Autonomous cars shift insurance liability toward manufacturers")
# Finding a verb with a subject from below — good
verbs = set()
@@ -203,7 +204,7 @@ children.
import spacy
nlp = spacy.load("en_core_web_sm")
-doc = nlp("bright red apples on the tree")
+doc = nlp(u"bright red apples on the tree")
print([token.text for token in doc[2].lefts]) # ['bright', 'red']
print([token.text for token in doc[2].rights]) # ['on']
print(doc[2].n_lefts) # 2
@@ -215,7 +216,7 @@ print(doc[2].n_rights) # 1
import spacy
nlp = spacy.load("de_core_news_sm")
-doc = nlp("schöne rote Äpfel auf dem Baum")
+doc = nlp(u"schöne rote Äpfel auf dem Baum")
print([token.text for token in doc[2].lefts]) # ['schöne', 'rote']
print([token.text for token in doc[2].rights]) # ['auf']
```
@@ -239,7 +240,7 @@ sequence of tokens. You can walk up the tree with the
import spacy
nlp = spacy.load("en_core_web_sm")
-doc = nlp("Credit and mortgage account holders must submit their requests")
+doc = nlp(u"Credit and mortgage account holders must submit their requests")
root = [token for token in doc if token.head == token][0]
subject = list(root.lefts)[0]
@@ -269,7 +270,7 @@ end-point of a range, don't forget to `+1`!
import spacy
nlp = spacy.load("en_core_web_sm")
-doc = nlp("Credit and mortgage account holders must submit their requests")
+doc = nlp(u"Credit and mortgage account holders must submit their requests")
span = doc[doc[4].left_edge.i : doc[4].right_edge.i+1]
with doc.retokenize() as retokenizer:
retokenizer.merge(span)
@@ -310,7 +311,7 @@ import spacy
from spacy import displacy
nlp = spacy.load("en_core_web_sm")
-doc = nlp("Autonomous cars shift insurance liability toward manufacturers")
+doc = nlp(u"Autonomous cars shift insurance liability toward manufacturers")
# Since this is an interactive Jupyter environment, we can use displacy.render here
displacy.render(doc, style='dep')
```
@@ -335,7 +336,7 @@ the `nlp` object.
```python
nlp = spacy.load("en_core_web_sm", disable=["parser"])
nlp = English().from_disk("/model", disable=["parser"])
-doc = nlp("I don't want parsed", disable=["parser"])
+doc = nlp(u"I don't want parsed", disable=["parser"])
```
@@ -349,10 +350,10 @@ Language class via [`from_disk`](/api/language#from_disk).
```diff
+ nlp = spacy.load("en_core_web_sm", disable=["parser"])
-+ doc = nlp("I don't want parsed", disable=["parser"])
++ doc = nlp(u"I don't want parsed", disable=["parser"])
- nlp = spacy.load("en_core_web_sm", parser=False)
-- doc = nlp("I don't want parsed", parse=False)
+- doc = nlp(u"I don't want parsed", parse=False)
```
@@ -397,7 +398,7 @@ on a token, it will return an empty string.
import spacy
nlp = spacy.load("en_core_web_sm")
-doc = nlp("San Francisco considers banning sidewalk delivery robots")
+doc = nlp(u"San Francisco considers banning sidewalk delivery robots")
# document level
ents = [(e.text, e.start_char, e.end_char, e.label_) for e in doc.ents]
@@ -406,8 +407,8 @@ print(ents)
# token level
ent_san = [doc[0].text, doc[0].ent_iob_, doc[0].ent_type_]
ent_francisco = [doc[1].text, doc[1].ent_iob_, doc[1].ent_type_]
-print(ent_san) # ['San', 'B', 'GPE']
-print(ent_francisco) # ['Francisco', 'I', 'GPE']
+print(ent_san) # [u'San', u'B', u'GPE']
+print(ent_francisco) # [u'Francisco', u'I', u'GPE']
```
| Text | ent_iob | ent_iob\_ | ent_type\_ | Description |
@@ -434,17 +435,18 @@ import spacy
from spacy.tokens import Span
nlp = spacy.load("en_core_web_sm")
-doc = nlp("FB is hiring a new Vice President of global policy")
+doc = nlp(u"FB is hiring a new Vice President of global policy")
ents = [(e.text, e.start_char, e.end_char, e.label_) for e in doc.ents]
print('Before', ents)
# the model didn't recognise "FB" as an entity :(
-fb_ent = Span(doc, 0, 1, label="ORG") # create a Span for the new entity
+ORG = doc.vocab.strings[u"ORG"] # get hash value of entity label
+fb_ent = Span(doc, 0, 1, label=ORG) # create a Span for the new entity
doc.ents = list(doc.ents) + [fb_ent]
ents = [(e.text, e.start_char, e.end_char, e.label_) for e in doc.ents]
print('After', ents)
-# [('FB', 0, 2, 'ORG')] 🎉
+# [(u'FB', 0, 2, 'ORG')] 🎉
```
Keep in mind that you need to create a `Span` with the start and end index of
@@ -466,13 +468,13 @@ import spacy
from spacy.attrs import ENT_IOB, ENT_TYPE
nlp = spacy.load("en_core_web_sm")
-doc = nlp.make_doc("London is a big city in the United Kingdom.")
+doc = nlp.make_doc(u"London is a big city in the United Kingdom.")
print("Before", doc.ents) # []
header = [ENT_IOB, ENT_TYPE]
attr_array = numpy.zeros((len(doc), len(header)))
attr_array[0, 0] = 3 # B
-attr_array[0, 1] = doc.vocab.strings["GPE"]
+attr_array[0, 1] = doc.vocab.strings[u"GPE"]
doc.from_array(header, attr_array)
print("After", doc.ents) # [London]
```
@@ -531,8 +533,8 @@ train_data = [
```
```python
-doc = Doc(nlp.vocab, ["rats", "make", "good", "pets"])
-gold = GoldParse(doc, entities=["U-ANIMAL", "O", "O", "O"])
+doc = Doc(nlp.vocab, [u"rats", u"make", u"good", u"pets"])
+gold = GoldParse(doc, entities=[u"U-ANIMAL", u"O", u"O", u"O"])
```
@@ -563,7 +565,7 @@ For more details and examples, see the
import spacy
from spacy import displacy
-text = "When Sebastian Thrun started working on self-driving cars at Google in 2007, few people outside of the company took him seriously."
+text = u"When Sebastian Thrun started working on self-driving cars at Google in 2007, few people outside of the company took him seriously."
nlp = spacy.load("en_core_web_sm")
doc = nlp(text)
@@ -574,52 +576,6 @@ import DisplacyEntHtml from 'images/displacy-ent2.html'
-## Entity Linking {#entity-linking}
-
-To ground the named entities into the "real world", spaCy provides functionality
-to perform entity linking, which resolves a textual entity to a unique
-identifier from a knowledge base (KB). The
-[processing scripts](https://github.com/explosion/spaCy/tree/master/bin/wiki_entity_linking)
-we provide use WikiData identifiers, but you can create your own
-[`KnowledgeBase`](/api/kb) and
-[train a new Entity Linking model](/usage/training#entity-linker) using that
-custom-made KB.
-
-### Accessing entity identifiers {#entity-linking-accessing}
-
-The annotated KB identifier is accessible as either a hash value or as a string,
-using the attributes `ent.kb_id` and `ent.kb_id_` of a [`Span`](/api/span)
-object, or the `ent_kb_id` and `ent_kb_id_` attributes of a
-[`Token`](/api/token) object.
-
-```python
-import spacy
-
-nlp = spacy.load("my_custom_el_model")
-doc = nlp("Ada Lovelace was born in London")
-
-# document level
-ents = [(e.text, e.label_, e.kb_id_) for e in doc.ents]
-print(ents) # [('Ada Lovelace', 'PERSON', 'Q7259'), ('London', 'GPE', 'Q84')]
-
-# token level
-ent_ada_0 = [doc[0].text, doc[0].ent_type_, doc[0].ent_kb_id_]
-ent_ada_1 = [doc[1].text, doc[1].ent_type_, doc[1].ent_kb_id_]
-ent_london_5 = [doc[5].text, doc[5].ent_type_, doc[5].ent_kb_id_]
-print(ent_ada_0) # ['Ada', 'PERSON', 'Q7259']
-print(ent_ada_1) # ['Lovelace', 'PERSON', 'Q7259']
-print(ent_london_5) # ['London', 'GPE', 'Q84']
-```
-
-| Text | ent_type\_ | ent_kb_id\_ |
-| -------- | ---------- | ----------- |
-| Ada | `"PERSON"` | `"Q7259"` |
-| Lovelace | `"PERSON"` | `"Q7259"` |
-| was | - | - |
-| born | - | - |
-| in | - | - |
-| London | `"GPE"` | `"Q84"` |
-
## Tokenization {#tokenization}
Tokenization is the task of splitting a text into meaningful segments, called
@@ -649,7 +605,7 @@ import Tokenization101 from 'usage/101/\_tokenization.md'
data in
[`spacy/lang`](https://github.com/explosion/spaCy/tree/master/spacy/lang). The
tokenizer exceptions define special cases like "don't" in English, which needs
-to be split into two tokens: `{ORTH: "do"}` and `{ORTH: "n't", NORM: "not"}`.
+to be split into two tokens: `{ORTH: "do"}` and `{ORTH: "n't", LEMMA: "not"}`.
The prefixes, suffixes and infixes mostly define punctuation rules – for
example, when to split off periods (at the end of a sentence), and when to leave
tokens containing periods intact (abbreviations like "U.S.").
@@ -688,36 +644,53 @@ this specific field. Here's how to add a special case rule to an existing
```python
### {executable="true"}
import spacy
-from spacy.symbols import ORTH
+from spacy.symbols import ORTH, LEMMA, POS, TAG
nlp = spacy.load("en_core_web_sm")
-doc = nlp("gimme that") # phrase to tokenize
+doc = nlp(u"gimme that") # phrase to tokenize
print([w.text for w in doc]) # ['gimme', 'that']
-# Add special case rule
-special_case = [{ORTH: "gim"}, {ORTH: "me"}]
-nlp.tokenizer.add_special_case("gimme", special_case)
+# add special case rule
+special_case = [{ORTH: u"gim", LEMMA: u"give", POS: u"VERB"}, {ORTH: u"me"}]
+nlp.tokenizer.add_special_case(u"gimme", special_case)
-# Check new tokenization
-print([w.text for w in nlp("gimme that")]) # ['gim', 'me', 'that']
+# check new tokenization
+print([w.text for w in nlp(u"gimme that")]) # ['gim', 'me', 'that']
+
+# Pronoun lemma is returned as -PRON-!
+print([w.lemma_ for w in nlp(u"gimme that")]) # ['give', '-PRON-', 'that']
```
+
+
+For details on spaCy's custom pronoun lemma `-PRON-`,
+[see here](/usage/#pron-lemma).
+
+
+
The special case doesn't have to match an entire whitespace-delimited substring.
The tokenizer will incrementally split off punctuation, and keep looking up the
remaining substring:
```python
-assert "gimme" not in [w.text for w in nlp("gimme!")]
-assert "gimme" not in [w.text for w in nlp('("...gimme...?")')]
+assert "gimme" not in [w.text for w in nlp(u"gimme!")]
+assert "gimme" not in [w.text for w in nlp(u'("...gimme...?")')]
```
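
The incremental loop described above — split off punctuation, then re-check the remaining substring against the special cases — can be sketched in plain Python. This is a simplified illustration, not spaCy's actual tokenizer (which also handles prefixes, suffixes and infixes via regular expressions):

```python
# Simplified sketch of the tokenizer's lookup loop: peel punctuation off
# both ends, checking the remaining substring against the special cases
# at each step.
special_cases = {"gimme": ["gim", "me"]}
PUNCT = set("!?.,()\"'")

def tokenize_substring(substring):
    tokens = []
    suffixes = []
    while substring:
        if substring in special_cases:     # special case found mid-split
            tokens.extend(special_cases[substring])
            substring = ""
        elif substring[0] in PUNCT:        # split off a prefix
            tokens.append(substring[0])
            substring = substring[1:]
        elif substring[-1] in PUNCT:       # split off a suffix
            suffixes.insert(0, substring[-1])
            substring = substring[:-1]
        else:                              # no match – keep as one token
            tokens.append(substring)
            substring = ""
    return tokens + suffixes

print(tokenize_substring('("...gimme...?")'))
```

Even wrapped in punctuation, `"gimme"` is still found and split into `"gim"` and `"me"`, which is exactly what the assertions above demonstrate.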
The special case rules have precedence over the punctuation splitting:
```python
-nlp.tokenizer.add_special_case("...gimme...?", [{ORTH: "...gimme...?"}])
-assert len(nlp("...gimme...?")) == 1
+special_case = [{ORTH: u"...gimme...?", LEMMA: u"give", TAG: u"VB"}]
+nlp.tokenizer.add_special_case(u"...gimme...?", special_case)
+assert len(nlp(u"...gimme...?")) == 1
```
+Because the special-case rules allow you to set arbitrary token attributes, such
+as the part-of-speech tag, lemma, etc., they make a good mechanism for general
+fix-up rules. Having this logic live in the tokenizer isn't very satisfying from
+a design perspective, however, so the API may eventually be exposed on the
+[`Language`](/api/language) class itself.
+
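As a rough illustration of what such a fix-up rule stores, each piece of the split can carry its own attributes alongside the surface form. The plain-dict keys below are purely illustrative — internally, spaCy keys these on attribute IDs like `ORTH` and `LEMMA`:

```python
# Hypothetical data structure for a special case with attached attributes:
# each piece of the split specifies its own text, and optionally a lemma
# and part-of-speech tag.
special_cases = {
    "gimme": [
        {"orth": "gim", "lemma": "give", "pos": "VERB"},
        {"orth": "me"},
    ],
}

def fix_up(token_text):
    # Fall back to a single piece with no extra attributes.
    return special_cases.get(token_text, [{"orth": token_text}])

pieces = fix_up("gimme")
print([p["orth"] for p in pieces])                  # ['gim', 'me']
print([p.get("lemma", p["orth"]) for p in pieces])  # ['give', 'me']
```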
### How spaCy's tokenizer works {#how-tokenizer-works}
spaCy introduces a novel tokenization algorithm that gives a better balance
@@ -817,7 +790,7 @@ def custom_tokenizer(nlp):
nlp = spacy.load("en_core_web_sm")
nlp.tokenizer = custom_tokenizer(nlp)
-doc = nlp("hello-world.")
+doc = nlp(u"hello-world.")
print([t.text for t in doc])
```
@@ -934,7 +907,7 @@ class WhitespaceTokenizer(object):
nlp = spacy.load("en_core_web_sm")
nlp.tokenizer = WhitespaceTokenizer(nlp.vocab)
-doc = nlp("What's happened to me? he thought. It wasn't a dream.")
+doc = nlp(u"What's happened to me? he thought. It wasn't a dream.")
print([t.text for t in doc])
```
@@ -959,7 +932,7 @@ from spacy.tokens import Doc
from spacy.lang.en import English
nlp = English()
-doc = Doc(nlp.vocab, words=["Hello", ",", "world", "!"],
+doc = Doc(nlp.vocab, words=[u"Hello", u",", u"world", u"!"],
spaces=[False, True, False, False])
print([(t.text, t.text_with_ws, t.whitespace_) for t in doc])
```
@@ -976,8 +949,8 @@ from spacy.tokens import Doc
from spacy.lang.en import English
nlp = English()
-bad_spaces = Doc(nlp.vocab, words=["Hello", ",", "world", "!"])
-good_spaces = Doc(nlp.vocab, words=["Hello", ",", "world", "!"],
+bad_spaces = Doc(nlp.vocab, words=[u"Hello", u",", u"world", u"!"])
+good_spaces = Doc(nlp.vocab, words=[u"Hello", u",", u"world", u"!"],
spaces=[False, True, False, False])
print(bad_spaces.text) # 'Hello , world !'
@@ -1259,7 +1232,7 @@ that yields [`Span`](/api/span) objects.
import spacy
nlp = spacy.load("en_core_web_sm")
-doc = nlp("This is a sentence. This is another sentence.")
+doc = nlp(u"This is a sentence. This is another sentence.")
for sent in doc.sents:
print(sent.text)
```
@@ -1279,7 +1252,7 @@ from spacy.lang.en import English
nlp = English() # just the language with no model
sentencizer = nlp.create_pipe("sentencizer")
nlp.add_pipe(sentencizer)
-doc = nlp("This is a sentence. This is another sentence.")
+doc = nlp(u"This is a sentence. This is another sentence.")
for sent in doc.sents:
print(sent.text)
```
@@ -1315,7 +1288,7 @@ take advantage of dependency-based sentence segmentation.
### {executable="true"}
import spacy
-text = "this is a sentence...hello...and another sentence."
+text = u"this is a sentence...hello...and another sentence."
nlp = spacy.load("en_core_web_sm")
doc = nlp(text)
diff --git a/website/docs/usage/models.md b/website/docs/usage/models.md
index c9b22279d..5df4ab458 100644
--- a/website/docs/usage/models.md
+++ b/website/docs/usage/models.md
@@ -106,7 +106,7 @@ python -m spacy download en_core_web_sm
python -m spacy download en
# Download exact model version (doesn't create shortcut link)
-python -m spacy download en_core_web_sm-2.2.0 --direct
+python -m spacy download en_core_web_sm-2.1.0 --direct
```
The download command will [install the model](/usage/models#download-pip) via
@@ -120,7 +120,7 @@ python -m spacy download en_core_web_sm
```python
import spacy
nlp = spacy.load("en_core_web_sm")
-doc = nlp("This is a sentence.")
+doc = nlp(u"This is a sentence.")
```
@@ -145,10 +145,10 @@ click on the archive link and copy it to your clipboard.
```bash
# With external URL
-pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz
+pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.1.0/en_core_web_sm-2.1.0.tar.gz
# With local file
-pip install /Users/you/en_core_web_sm-2.2.0.tar.gz
+pip install /Users/you/en_core_web_sm-2.1.0.tar.gz
```
By default, this will install the model into your `site-packages` directory. You
@@ -173,13 +173,13 @@ model data.
```yaml
### Directory structure {highlight="7"}
-└── en_core_web_md-2.2.0.tar.gz # downloaded archive
+└── en_core_web_md-2.1.0.tar.gz # downloaded archive
├── meta.json # model meta data
├── setup.py # setup file for pip installation
└── en_core_web_md # 📦 model package
├── __init__.py # init for pip installation
├── meta.json # model meta data
- └── en_core_web_md-2.2.0 # model data
+ └── en_core_web_md-2.1.0 # model data
```
You can place the **model package directory** anywhere on your local file
@@ -197,7 +197,7 @@ nlp = spacy.load("en_core_web_sm") # load model package "en_core_web_s
nlp = spacy.load("/path/to/en_core_web_sm") # load package from a directory
nlp = spacy.load("en") # load model with shortcut link "en"
-doc = nlp("This is a sentence.")
+doc = nlp(u"This is a sentence.")
```
@@ -269,7 +269,7 @@ also `import` it and then call its `load()` method with no arguments:
import en_core_web_sm
nlp = en_core_web_sm.load()
-doc = nlp("This is a sentence.")
+doc = nlp(u"This is a sentence.")
```
How you choose to load your models ultimately depends on personal preference.
@@ -325,8 +325,8 @@ URLs.
```text
### requirements.txt
-spacy>=2.2.0,<3.0.0
-https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz#egg=en_core_web_sm
+spacy>=2.0.0,<3.0.0
+https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.1.0/en_core_web_sm-2.1.0.tar.gz#egg=en_core_web_sm
```
Specifying `#egg=` with the package name tells pip which package to expect from
diff --git a/website/docs/usage/processing-pipelines.md b/website/docs/usage/processing-pipelines.md
index dcd182965..f3c59da7b 100644
--- a/website/docs/usage/processing-pipelines.md
+++ b/website/docs/usage/processing-pipelines.md
@@ -20,7 +20,7 @@ component** on the `Doc`, in order. It then returns the processed `Doc` that you
can work with.
```python
-doc = nlp("This is a text")
+doc = nlp(u"This is a text")
```
When processing large volumes of text, the statistical models are usually more
@@ -29,7 +29,7 @@ efficient if you let them work on batches of texts. spaCy's
processed `Doc` objects. The batching is done internally.
```diff
-texts = ["This is a text", "These are lots of texts", "..."]
+texts = [u"This is a text", u"These are lots of texts", u"..."]
- docs = [nlp(text) for text in texts]
+ docs = list(nlp.pipe(texts))
```
@@ -172,7 +172,7 @@ which is then processed by the component next in the pipeline.
```python
### The pipeline under the hood
-doc = nlp.make_doc("This is a sentence") # create a Doc from raw text
+doc = nlp.make_doc(u"This is a sentence") # create a Doc from raw text
for name, proc in nlp.pipeline: # iterate over components in order
doc = proc(doc) # apply each component
```
@@ -213,7 +213,6 @@ require them in the pipeline settings in your model's `meta.json`.
| `tagger` | [`Tagger`](/api/tagger) | Assign part-of-speech-tags. |
| `parser` | [`DependencyParser`](/api/dependencyparser) | Assign dependency labels. |
| `ner` | [`EntityRecognizer`](/api/entityrecognizer) | Assign named entities. |
-| `entity_linker` | [`EntityLinker`](/api/entitylinker) | Assign knowledge base IDs to named entities. Should be added after the entity recognizer. |
| `textcat` | [`TextCategorizer`](/api/textcategorizer) | Assign text categories. |
| `entity_ruler` | [`EntityRuler`](/api/entityruler) | Assign named entities based on pattern rules. |
| `sentencizer` | [`Sentencizer`](/api/sentencizer) | Add rule-based sentence segmentation without the dependency parse. |
@@ -263,12 +262,12 @@ blocks.
### Disable for block
# 1. Use as a contextmanager
with nlp.disable_pipes("tagger", "parser"):
- doc = nlp("I won't be tagged and parsed")
-doc = nlp("I will be tagged and parsed")
+ doc = nlp(u"I won't be tagged and parsed")
+doc = nlp(u"I will be tagged and parsed")
# 2. Restore manually
disabled = nlp.disable_pipes("ner")
-doc = nlp("I won't have named entities")
+doc = nlp(u"I won't have named entities")
disabled.restore()
```
@@ -295,11 +294,11 @@ initializing a Language class via [`from_disk`](/api/language#from_disk).
```diff
- nlp = spacy.load('en', tagger=False, entity=False)
-- doc = nlp("I don't want parsed", parse=False)
+- doc = nlp(u"I don't want parsed", parse=False)
+ nlp = spacy.load("en", disable=["ner"])
+ nlp.remove_pipe("parser")
-+ doc = nlp("I don't want parsed")
++ doc = nlp(u"I don't want parsed")
```
@@ -376,7 +375,7 @@ def my_component(doc):
nlp = spacy.load("en_core_web_sm")
nlp.add_pipe(my_component, name="print_info", last=True)
print(nlp.pipe_names) # ['tagger', 'parser', 'ner', 'print_info']
-doc = nlp("This is a sentence.")
+doc = nlp(u"This is a sentence.")
```
@@ -426,14 +425,14 @@ class EntityMatcher(object):
return doc
nlp = spacy.load("en_core_web_sm")
-terms = ("cat", "dog", "tree kangaroo", "giant sea spider")
+terms = (u"cat", u"dog", u"tree kangaroo", u"giant sea spider")
entity_matcher = EntityMatcher(nlp, terms, "ANIMAL")
nlp.add_pipe(entity_matcher, after="ner")
print(nlp.pipe_names) # The components in the pipeline
-doc = nlp("This is a text about Barack Obama and a tree kangaroo")
+doc = nlp(u"This is a text about Barack Obama and a tree kangaroo")
print([(ent.text, ent.label_) for ent in doc.ents])
```
@@ -471,7 +470,7 @@ def custom_sentencizer(doc):
nlp = spacy.load("en_core_web_sm")
nlp.add_pipe(custom_sentencizer, before="parser") # Insert before the parser
-doc = nlp("This is. A sentence. | This is. Another sentence.")
+doc = nlp(u"This is. A sentence. | This is. Another sentence.")
for sent in doc.sents:
print(sent.text)
```
@@ -517,7 +516,7 @@ config parameters are passed all the way down from
components with custom settings:
```python
-nlp = spacy.load("your_custom_model", terms=["tree kangaroo"], label="ANIMAL")
+nlp = spacy.load("your_custom_model", terms=(u"tree kangaroo",), label="ANIMAL")
```
@@ -617,7 +616,7 @@ raise an `AttributeError`.
### Example
from spacy.tokens import Doc, Span, Token
-fruits = ["apple", "pear", "banana", "orange", "strawberry"]
+fruits = [u"apple", u"pear", u"banana", u"orange", u"strawberry"]
is_fruit_getter = lambda token: token.text in fruits
has_fruit_getter = lambda obj: any([t.text in fruits for t in obj])
@@ -629,7 +628,7 @@ Span.set_extension("has_fruit", getter=has_fruit_getter)
> #### Usage example
>
> ```python
-> doc = nlp("I have an apple and a melon")
+> doc = nlp(u"I have an apple and a melon")
> assert doc[3]._.is_fruit # get Token attributes
> assert not doc[0]._.is_fruit
> assert doc._.has_fruit # get Doc attributes
diff --git a/website/docs/usage/rule-based-matching.md b/website/docs/usage/rule-based-matching.md
index 4c398ecd0..1d67625a5 100644
--- a/website/docs/usage/rule-based-matching.md
+++ b/website/docs/usage/rule-based-matching.md
@@ -90,7 +90,7 @@ the pattern is not going to produce any results. When developing complex
patterns, make sure to check examples against spaCy's tokenization:
```python
-doc = nlp("A complex-example,!")
+doc = nlp(u"A complex-example,!")
print([token.text for token in doc])
```
@@ -113,7 +113,7 @@ matcher = Matcher(nlp.vocab)
pattern = [{"LOWER": "hello"}, {"IS_PUNCT": True}, {"LOWER": "world"}]
matcher.add("HelloWorld", None, pattern)
-doc = nlp("Hello, world! Hello world!")
+doc = nlp(u"Hello, world! Hello world!")
matches = matcher(doc)
for match_id, start, end in matches:
string_id = nlp.vocab.strings[match_id] # Get string representation
@@ -447,7 +447,7 @@ def add_event_ent(matcher, doc, i, matches):
pattern = [{"ORTH": "Google"}, {"ORTH": "I"}, {"ORTH": "/"}, {"ORTH": "O"}]
matcher.add("GoogleIO", add_event_ent, pattern)
-doc = nlp("This is a text about Google I/O")
+doc = nlp(u"This is a text about Google I/O")
matches = matcher(doc)
```
@@ -539,7 +539,7 @@ class BadHTMLMerger(object):
nlp = spacy.load("en_core_web_sm")
html_merger = BadHTMLMerger(nlp)
nlp.add_pipe(html_merger, last=True) # Add component to the pipeline
-doc = nlp("Hello world! This is a test.")
+doc = nlp(u"Hello world! This is a test.")
for token in doc:
print(token.text, token._.bad_html)
@@ -617,7 +617,7 @@ def collect_sents(matcher, doc, i, matches):
pattern = [{"LOWER": "facebook"}, {"LEMMA": "be"}, {"POS": "ADV", "OP": "*"},
{"POS": "ADJ"}]
matcher.add("FacebookIs", collect_sents, pattern) # add pattern
-doc = nlp("I'd say that Facebook is evil. – Facebook is pretty cool, right?")
+doc = nlp(u"I'd say that Facebook is evil. – Facebook is pretty cool, right?")
matches = matcher(doc)
# Serve visualization of sentences containing match with displaCy
@@ -673,7 +673,7 @@ pattern = [{"ORTH": "("}, {"SHAPE": "ddd"}, {"ORTH": ")"}, {"SHAPE": "ddd"},
{"ORTH": "-", "OP": "?"}, {"SHAPE": "ddd"}]
matcher.add("PHONE_NUMBER", None, pattern)
-doc = nlp("Call me at (123) 456 789 or (123) 456 789!")
+doc = nlp(u"Call me at (123) 456 789 or (123) 456 789!")
print([t.text for t in doc])
matches = matcher(doc)
for match_id, start, end in matches:
@@ -719,8 +719,8 @@ from spacy.matcher import Matcher
nlp = English() # We only want the tokenizer, so no need to load a model
matcher = Matcher(nlp.vocab)
-pos_emoji = ["😀", "😃", "😂", "🤣", "😊", "😍"] # Positive emoji
-neg_emoji = ["😞", "😠", "😩", "😢", "😭", "😒"] # Negative emoji
+pos_emoji = [u"😀", u"😃", u"😂", u"🤣", u"😊", u"😍"] # Positive emoji
+neg_emoji = [u"😞", u"😠", u"😩", u"😢", u"😭", u"😒"] # Negative emoji
# Add patterns to match one or more emoji tokens
pos_patterns = [[{"ORTH": emoji}] for emoji in pos_emoji]
@@ -740,7 +740,7 @@ matcher.add("SAD", label_sentiment, *neg_patterns) # Add negative pattern
# Add pattern for valid hashtag, i.e. '#' plus any ASCII token
matcher.add("HASHTAG", None, [{"ORTH": "#"}, {"IS_ASCII": True}])
-doc = nlp("Hello world 😀 #MondayMotivation")
+doc = nlp(u"Hello world 😀 #MondayMotivation")
matches = matcher(doc)
for match_id, start, end in matches:
string_id = doc.vocab.strings[match_id] # Look up string ID
@@ -797,7 +797,7 @@ matcher.add("HASHTAG", None, [{"ORTH": "#"}, {"IS_ASCII": True}])
# Register token extension
Token.set_extension("is_hashtag", default=False)
-doc = nlp("Hello world 😀 #MondayMotivation")
+doc = nlp(u"Hello world 😀 #MondayMotivation")
matches = matcher(doc)
hashtags = []
for match_id, start, end in matches:
@@ -838,13 +838,13 @@ from spacy.matcher import PhraseMatcher
nlp = spacy.load('en_core_web_sm')
matcher = PhraseMatcher(nlp.vocab)
-terms = ["Barack Obama", "Angela Merkel", "Washington, D.C."]
+terms = [u"Barack Obama", u"Angela Merkel", u"Washington, D.C."]
# Only run nlp.make_doc to speed things up
patterns = [nlp.make_doc(text) for text in terms]
matcher.add("TerminologyList", None, *patterns)
-doc = nlp("German Chancellor Angela Merkel and US President Barack Obama "
- "converse in the Oval Office inside the White House in Washington, D.C.")
+doc = nlp(u"German Chancellor Angela Merkel and US President Barack Obama "
+ u"converse in the Oval Office inside the White House in Washington, D.C.")
matches = matcher(doc)
for match_id, start, end in matches:
span = doc[start:end]
@@ -853,8 +853,8 @@ for match_id, start, end in matches:
Since spaCy is used for processing both the patterns and the text to be matched,
you won't have to worry about specific tokenization – for example, you can
-simply pass in `nlp("Washington, D.C.")` and won't have to write a complex token
-pattern covering the exact tokenization of the term.
+simply pass in `nlp(u"Washington, D.C.")` and won't have to write a complex
+token pattern covering the exact tokenization of the term.
@@ -889,10 +889,10 @@ from spacy.matcher import PhraseMatcher
nlp = English()
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
-patterns = [nlp.make_doc(name) for name in ["Angela Merkel", "Barack Obama"]]
+patterns = [nlp.make_doc(name) for name in [u"Angela Merkel", u"Barack Obama"]]
matcher.add("Names", None, *patterns)
-doc = nlp("angela merkel and us president barack Obama")
+doc = nlp(u"angela merkel and us president barack Obama")
for match_id, start, end in matcher(doc):
print("Matched based on lowercase token text:", doc[start:end])
```
@@ -924,9 +924,9 @@ from spacy.matcher import PhraseMatcher
nlp = English()
matcher = PhraseMatcher(nlp.vocab, attr="SHAPE")
-matcher.add("IP", None, nlp("127.0.0.1"), nlp("127.127.0.0"))
+matcher.add("IP", None, nlp(u"127.0.0.1"), nlp(u"127.127.0.0"))
-doc = nlp("Often the router will have an IP address such as 192.168.1.1 or 192.168.2.1.")
+doc = nlp(u"Often the router will have an IP address such as 192.168.1.1 or 192.168.2.1.")
for match_id, start, end in matcher(doc):
print("Matched based on token shape:", doc[start:end])
```
@@ -982,7 +982,7 @@ patterns = [{"label": "ORG", "pattern": "Apple"},
ruler.add_patterns(patterns)
nlp.add_pipe(ruler)
-doc = nlp("Apple is opening its first big office in San Francisco.")
+doc = nlp(u"Apple is opening its first big office in San Francisco.")
print([(ent.text, ent.label_) for ent in doc.ents])
```
@@ -1006,7 +1006,7 @@ patterns = [{"label": "ORG", "pattern": "MyCorp Inc."}]
ruler.add_patterns(patterns)
nlp.add_pipe(ruler)
-doc = nlp("MyCorp Inc. is a company in the U.S.")
+doc = nlp(u"MyCorp Inc. is a company in the U.S.")
print([(ent.text, ent.label_) for ent in doc.ents])
```
diff --git a/website/docs/usage/saving-loading.md b/website/docs/usage/saving-loading.md
index 3d904f01a..81e90dcc7 100644
--- a/website/docs/usage/saving-loading.md
+++ b/website/docs/usage/saving-loading.md
@@ -59,45 +59,12 @@ initializes the language class, creates and adds the pipeline components and
_then_ loads in the binary data. You can read more about this process
[here](/usage/processing-pipelines#pipelines).
-### Serializing Doc objects efficiently {#docs new="2.2"}
-
-If you're working with lots of data, you'll probably need to pass analyses
-between machines, either to use something like [Dask](https://dask.org) or
-[Spark](https://spark.apache.org), or even just to save out work to disk. Often
-it's sufficient to use the [`Doc.to_array`](/api/doc#to_array) functionality for
-this, and just serialize the numpy arrays – but other times you want a more
-general way to save and restore `Doc` objects.
-
-The [`DocBin`](/api/docbin) class makes it easy to serialize and deserialize a
-collection of `Doc` objects together, and is much more efficient than calling
-[`Doc.to_bytes`](/api/doc#to_bytes) on each individual `Doc` object. You can
-also control what data gets saved, and you can merge pallets together for easy
-map/reduce-style processing.
-
-```python
-### {highlight="4,8,9,13,14"}
-import spacy
-from spacy.tokens import DocBin
-
-doc_bin = DocBin(attrs=["LEMMA", "ENT_IOB", "ENT_TYPE"], store_user_data=True)
-texts = ["Some text", "Lots of texts...", "..."]
-nlp = spacy.load("en_core_web_sm")
-for doc in nlp.pipe(texts):
- doc_bin.add(doc)
-bytes_data = docbin.to_bytes()
-
-# Deserialize later, e.g. in a new process
-nlp = spacy.blank("en")
-doc_bin = DocBin().from_bytes(bytes_data)
-docs = list(doc_bin.get_docs(nlp.vocab))
-```
-
### Using Pickle {#pickle}
> #### Example
>
> ```python
-> doc = nlp("This is a text.")
+> doc = nlp(u"This is a text.")
> data = pickle.dumps(doc)
> ```
@@ -117,8 +84,8 @@ the _same_ `Vocab` object, it will only be included once.
```python
### Pickling objects with shared data {highlight="8-9"}
-doc1 = nlp("Hello world")
-doc2 = nlp("This is a test")
+doc1 = nlp(u"Hello world")
+doc2 = nlp(u"This is a test")
doc1_data = pickle.dumps(doc1)
doc2_data = pickle.dumps(doc2)
@@ -271,31 +238,13 @@ custom components to spaCy automatically.
## Using entry points {#entry-points new="2.1"}
-Entry points let you expose parts of a Python package you write to other Python
-packages. This lets one application easily customize the behavior of another, by
-exposing an entry point in its `setup.py`. For a quick and fun intro to entry
-points in Python, check out
-[this excellent blog post](https://amir.rachum.com/blog/2017/07/28/python-entry-points/).
-spaCy can load custom functions from several different entry points to add
-pipeline component factories, language classes and other settings. To make spaCy
-use your entry points, your package needs to expose them and it needs to be
-installed in the same environment – that's it.
-
-| Entry point | Description |
-| ------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| [`spacy_factories`](#entry-points-components) | Group of entry points for pipeline component factories to add to [`Language.factories`](/usage/processing-pipelines#custom-components-factories), keyed by component name. |
-| [`spacy_languages`](#entry-points-languages) | Group of entry points for custom [`Language` subclasses](/usage/adding-languages), keyed by language shortcut. |
-| [`spacy_displacy_colors`](#entry-points-displacy) 2.2 | Group of entry points of custom label colors for the [displaCy visualizer](/usage/visualizers#ent). The key name doesn't matter, but it should point to a dict of labels and color values. Useful for custom models that predict different entity types. |
-
-### Custom components via entry points {#entry-points-components}
-
When you load a model, spaCy will generally use the model's `meta.json` to set
up the language class and construct the pipeline. The pipeline is specified as a
list of strings, e.g. `"pipeline": ["tagger", "parser", "ner"]`. For each of
those strings, spaCy will call `nlp.create_pipe` and look up the name in the
-[built-in factories](/usage/processing-pipelines#custom-components-factories).
-If your model wants to specify its own custom components, you usually have to
-write to `Language.factories` _before_ loading the model.
+[built-in factories](#custom-components-factories). If your model wants to
+specify its own custom components, you usually have to write to
+`Language.factories` _before_ loading the model.
```python
pipe = nlp.create_pipe("custom_component") # fails 👎
@@ -311,11 +260,13 @@ added to the built-in factories when the `Language` class is initialized. If a
package in the same environment exposes spaCy entry points, all of this happens
automatically and no further user action is required.
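Under the hood, this automatic registration amounts to iterating over the installed entry points in a given group and loading each one. A minimal, hypothetical sketch of that discovery step using only the standard library's `importlib.metadata` (spaCy's own implementation may differ):

```python
import sys

try:
    from importlib.metadata import entry_points  # Python 3.8+
except ImportError:
    entry_points = None

def load_entry_points(group):
    """Collect all installed entry points in `group` into a name -> object dict."""
    if entry_points is None:
        return {}
    if sys.version_info >= (3, 10):
        eps = entry_points(group=group)  # selectable API
    else:
        eps = entry_points().get(group, [])  # dict-based API
    return {ep.name: ep.load() for ep in eps}

# With no matching packages installed, the result is simply an empty dict
factories = load_entry_points("spacy_factories")
```

Each loaded object can then be registered, e.g. added to `Language.factories` under its entry point name.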
-To stick with the theme of
-[this entry points blog post](https://amir.rachum.com/blog/2017/07/28/python-entry-points/),
-consider the following custom spaCy extension which is initialized with the
-shared `nlp` object and will print a snake when it's called as a pipeline
-component.
+#### Custom components via entry points {#entry-points-components}
+
+For a quick and fun intro to entry points in Python, I recommend
+[this excellent blog post](https://amir.rachum.com/blog/2017/07/28/python-entry-points/).
+To stick with the theme of the post, consider the following custom spaCy
+extension which is initialized with the shared `nlp` object and will print a
+snake when it's called as a pipeline component.
> #### Package directory structure
>
@@ -353,13 +304,15 @@ entry to the factories, you can now expose it in your `setup.py` via the
`entry_points` dictionary:
```python
-### setup.py {highlight="5-7"}
+### setup.py {highlight="5-8"}
from setuptools import setup
setup(
name="snek",
entry_points={
- "spacy_factories": ["snek = snek:SnekFactory"]
+ "spacy_factories": [
+ "snek = snek:SnekFactory"
+ ]
}
)
```
@@ -380,7 +333,7 @@ spaCy is now able to create the pipeline component `'snek'`:
>>> nlp = English()
>>> snek = nlp.create_pipe("snek") # this now works! 🐍🎉
>>> nlp.add_pipe(snek)
->>> doc = nlp("I am snek")
+>>> doc = nlp(u"I am snek")
--..,_ _,.--.
`'.'. .'`__ o `;__.
'.'. .'.'` '---'` `
@@ -457,7 +410,7 @@ The above example will serialize the current snake in a `snek.txt` in the model
data directory. When a model using the `snek` component is loaded, it will open
the `snek.txt` and make it available to the component.
-### Custom language classes via entry points {#entry-points-languages}
+#### Custom language classes via entry points {#entry-points-languages}
To stay with the theme of the previous example and
[this blog post on entry points](https://amir.rachum.com/blog/2017/07/28/python-entry-points/),
@@ -493,8 +446,12 @@ from setuptools import setup
setup(
name="snek",
entry_points={
- "spacy_factories": ["snek = snek:SnekFactory"],
-+ "spacy_languages": ["snk = snek:SnekLanguage"]
+ "spacy_factories": [
+ "snek = snek:SnekFactory"
+ ],
++ "spacy_languages": [
++ "snk = snek:SnekLanguage"
++ ]
}
)
```
@@ -524,50 +481,6 @@ SnekLanguage = get_lang_class("snk")
nlp = SnekLanguage()
```
-### Custom displaCy colors via entry points {#entry-points-displacy new="2.2"}
-
-If you're training a named entity recognition model for a custom domain, you may
-end up training different labels that don't have pre-defined colors in the
-[`displacy` visualizer](/usage/visualizers#ent). The `spacy_displacy_colors`
-entry point lets you define a dictionary of entity labels mapped to their color
-values. It's added to the pre-defined colors and can also overwrite existing
-values.
-
-> #### Domain-specific NER labels
->
-> Good examples of models with domain-specific label schemes are
-> [scispaCy](/universe/project/scispacy) and
-> [Blackstone](/universe/project/blackstone).
-
-```python
-### snek.py
-displacy_colors = {"SNEK": "#3dff74", "HUMAN": "#cfc5ff"}
-```
-
-Given the above colors, the entry point can be defined as follows. Entry points
-need to have a name, so we use the key `colors`. However, the name doesn't
-matter and whatever is defined in the entry point group will be used.
-
-```diff
-### setup.py
-from setuptools import setup
-
-setup(
- name="snek",
- entry_points={
-+ "spacy_displacy_colors": ["colors = snek:displacy_colors"]
- }
-)
-```
-
-After installing the package, the custom colors will be used when
-visualizing text with `displacy`. Whenever the label `SNEK` is assigned, it will
-be displayed in `#3dff74`.
-
-import DisplaCyEntSnekHtml from 'images/displacy-ent-snek.html'
-
-
-
## Saving, loading and distributing models {#models}
After training your model, you'll usually want to save its state, and load it
diff --git a/website/docs/usage/spacy-101.md b/website/docs/usage/spacy-101.md
index 4bfecb3a9..03feb03b1 100644
--- a/website/docs/usage/spacy-101.md
+++ b/website/docs/usage/spacy-101.md
@@ -122,7 +122,6 @@ related to more general machine learning functionality.
| **Lemmatization** | Assigning the base forms of words. For example, the lemma of "was" is "be", and the lemma of "rats" is "rat". |
| **Sentence Boundary Detection** (SBD) | Finding and segmenting individual sentences. |
| **Named Entity Recognition** (NER) | Labelling named "real-world" objects, like persons, companies or locations. |
-| **Entity Linking** (EL) | Disambiguating textual entities to unique identifiers in a Knowledge Base. |
| **Similarity** | Comparing words, text spans and documents and how similar they are to each other. |
| **Text Classification** | Assigning categories or labels to a whole document, or parts of a document. |
| **Rule-based Matching** | Finding sequences of tokens based on their texts and linguistic annotations, similar to regular expressions. |
@@ -179,7 +178,7 @@ processed `Doc`:
import spacy
nlp = spacy.load("en_core_web_sm")
-doc = nlp("Apple is looking at buying U.K. startup for $1 billion")
+doc = nlp(u"Apple is looking at buying U.K. startup for $1 billion")
for token in doc:
print(token.text, token.pos_, token.dep_)
```
@@ -298,8 +297,8 @@ its hash, or a hash to get its string:
import spacy
nlp = spacy.load("en_core_web_sm")
-doc = nlp("I love coffee")
-print(doc.vocab.strings["coffee"]) # 3197928453018144401
+doc = nlp(u"I love coffee")
+print(doc.vocab.strings[u"coffee"]) # 3197928453018144401
print(doc.vocab.strings[3197928453018144401]) # 'coffee'
```
@@ -322,7 +321,7 @@ ever change. Its hash value will also always be the same.
import spacy
nlp = spacy.load("en_core_web_sm")
-doc = nlp("I love coffee")
+doc = nlp(u"I love coffee")
for word in doc:
lexeme = doc.vocab[word.text]
print(lexeme.text, lexeme.orth, lexeme.shape_, lexeme.prefix_, lexeme.suffix_,
@@ -363,14 +362,14 @@ from spacy.tokens import Doc
from spacy.vocab import Vocab
nlp = spacy.load("en_core_web_sm")
-doc = nlp("I love coffee") # Original Doc
-print(doc.vocab.strings["coffee"]) # 3197928453018144401
+doc = nlp(u"I love coffee") # Original Doc
+print(doc.vocab.strings[u"coffee"]) # 3197928453018144401
print(doc.vocab.strings[3197928453018144401]) # 'coffee' 👍
empty_doc = Doc(Vocab()) # New Doc with empty Vocab
# empty_doc.vocab.strings[3197928453018144401] will raise an error :(
-empty_doc.vocab.strings.add("coffee") # Add "coffee" and generate hash
+empty_doc.vocab.strings.add(u"coffee") # Add "coffee" and generate hash
print(empty_doc.vocab.strings[3197928453018144401]) # 'coffee' 👍
new_doc = Doc(doc.vocab) # Create new doc with first doc's vocab
@@ -384,79 +383,6 @@ spaCy will also export the `Vocab` when you save a `Doc` or `nlp` object. This
will give you the object and its encoded annotations, plus the "key" to decode
it.
-## Knowledge Base {#kb}
-
-To support the entity linking task, spaCy stores external knowledge in a
-[`KnowledgeBase`](/api/kb). The knowledge base (KB) uses the `Vocab` to store
-its data efficiently.
-
-> - **Mention**: A textual occurrence of a named entity, e.g. 'Miss Lovelace'.
-> - **KB ID**: A unique identifier referring to a particular real-world concept,
-> e.g. 'Q7259'.
-> - **Alias**: A plausible synonym or description for a certain KB ID, e.g. 'Ada
-> Lovelace'.
-> - **Prior probability**: The probability of a certain mention resolving to a
-> certain KB ID, prior to knowing anything about the context in which the
-> mention is used.
-> - **Entity vector**: A pretrained word vector capturing the entity
-> description.
-
-A knowledge base is created by first adding all entities to it. Next, for each
-potential mention or alias, a list of relevant KB IDs and their prior
-probabilities is added. The sum of these prior probabilities should never exceed
-1 for any given alias.
-
-```python
-### {executable="true"}
-import spacy
-from spacy.kb import KnowledgeBase
-
-# load the model and create an empty KB
-nlp = spacy.load('en_core_web_sm')
-kb = KnowledgeBase(vocab=nlp.vocab, entity_vector_length=3)
-
-# adding entities
-kb.add_entity(entity="Q1004791", freq=6, entity_vector=[0, 3, 5])
-kb.add_entity(entity="Q42", freq=342, entity_vector=[1, 9, -3])
-kb.add_entity(entity="Q5301561", freq=12, entity_vector=[-2, 4, 2])
-
-# adding aliases
-kb.add_alias(alias="Douglas", entities=["Q1004791", "Q42", "Q5301561"], probabilities=[0.6, 0.1, 0.2])
-kb.add_alias(alias="Douglas Adams", entities=["Q42"], probabilities=[0.9])
-
-print()
-print("Number of entities in KB:", kb.get_size_entities()) # 3
-print("Number of aliases in KB:", kb.get_size_aliases()) # 2
-```
-
-### Candidate generation
-
-Given a textual entity, the Knowledge Base can provide a list of plausible
-candidates or entity identifiers. The [`EntityLinker`](/api/entitylinker) will
-take this list of candidates as input, and disambiguate the mention to the most
-probable identifier, given the document context.
-
-```python
-### {executable="true"}
-import spacy
-from spacy.kb import KnowledgeBase
-
-nlp = spacy.load('en_core_web_sm')
-kb = KnowledgeBase(vocab=nlp.vocab, entity_vector_length=3)
-
-# adding entities
-kb.add_entity(entity="Q1004791", freq=6, entity_vector=[0, 3, 5])
-kb.add_entity(entity="Q42", freq=342, entity_vector=[1, 9, -3])
-kb.add_entity(entity="Q5301561", freq=12, entity_vector=[-2, 4, 2])
-
-# adding aliases
-kb.add_alias(alias="Douglas", entities=["Q1004791", "Q42", "Q5301561"], probabilities=[0.6, 0.1, 0.2])
-
-candidates = kb.get_candidates("Douglas")
-for c in candidates:
- print(" ", c.entity_, c.prior_prob, c.entity_vector)
-```
-
## Serialization {#serialization}
import Serialization101 from 'usage/101/\_serialization.md'
@@ -515,11 +441,11 @@ python -m spacy download de_core_news_sm
import spacy
nlp = spacy.load("en_core_web_sm")
-doc = nlp("Hello, world. Here are two sentences.")
+doc = nlp(u"Hello, world. Here are two sentences.")
print([t.text for t in doc])
nlp_de = spacy.load("de_core_news_sm")
-doc_de = nlp_de("Ich bin ein Berliner.")
+doc_de = nlp_de(u"Ich bin ein Berliner.")
print([t.text for t in doc_de])
```
@@ -538,8 +464,8 @@ print([t.text for t in doc_de])
import spacy
nlp = spacy.load("en_core_web_sm")
-doc = nlp("Peach emoji is where it has always been. Peach is the superior "
- "emoji. It's outranking eggplant 🍑 ")
+doc = nlp(u"Peach emoji is where it has always been. Peach is the superior "
+ u"emoji. It's outranking eggplant 🍑 ")
print(doc[0].text) # 'Peach'
print(doc[1].text) # 'emoji'
print(doc[-1].text) # '🍑'
@@ -567,7 +493,7 @@ print(sentences[1].text) # 'Peach is the superior emoji.'
import spacy
nlp = spacy.load("en_core_web_sm")
-doc = nlp("Apple is looking at buying U.K. startup for $1 billion")
+doc = nlp(u"Apple is looking at buying U.K. startup for $1 billion")
apple = doc[0]
print("Fine-grained POS tag", apple.pos_, apple.pos)
print("Coarse-grained POS tag", apple.tag_, apple.tag)
@@ -595,20 +521,20 @@ print("Like an email address?", billion.like_email)
import spacy
nlp = spacy.load("en_core_web_sm")
-doc = nlp("I love coffee")
+doc = nlp(u"I love coffee")
-coffee_hash = nlp.vocab.strings["coffee"] # 3197928453018144401
+coffee_hash = nlp.vocab.strings[u"coffee"] # 3197928453018144401
coffee_text = nlp.vocab.strings[coffee_hash] # 'coffee'
print(coffee_hash, coffee_text)
print(doc[2].orth, coffee_hash) # 3197928453018144401
print(doc[2].text, coffee_text) # 'coffee'
-beer_hash = doc.vocab.strings.add("beer") # 3073001599257881079
+beer_hash = doc.vocab.strings.add(u"beer") # 3073001599257881079
beer_text = doc.vocab.strings[beer_hash] # 'beer'
print(beer_hash, beer_text)
-unicorn_hash = doc.vocab.strings.add("🦄") # 18234233413267120783
-unicorn_text = doc.vocab.strings[unicorn_hash] # '🦄'
+unicorn_hash = doc.vocab.strings.add(u"🦄 ") # 18234233413267120783
+unicorn_text = doc.vocab.strings[unicorn_hash] # '🦄 '
print(unicorn_hash, unicorn_text)
```
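The two-way lookup above can be pictured as a small hash table that stores each string at most once. A toy illustration of the idea (spaCy's real `StringStore` is implemented in Cython and uses 64-bit MurmurHash; this sketch substitutes a truncated SHA-1 purely to make the mechanics concrete):

```python
import hashlib

class ToyStringStore:
    """Two-way string <-> hash mapping, storing each string at most once."""

    def __init__(self):
        self._strings = {}

    @staticmethod
    def hash_string(string):
        # Deterministic 64-bit hash (stand-in for MurmurHash)
        return int.from_bytes(hashlib.sha1(string.encode("utf8")).digest()[:8], "big")

    def add(self, string):
        key = self.hash_string(string)
        self._strings[key] = string  # idempotent: same string, same key
        return key

    def __getitem__(self, key):
        if isinstance(key, str):
            return self.hash_string(key)  # string -> hash
        return self._strings[key]         # hash -> string

store = ToyStringStore()
coffee_hash = store.add("coffee")
assert store[coffee_hash] == "coffee"
assert store["coffee"] == coffee_hash
```

Because the hash is computed from the string itself, any two stores agree on the key for a given string — but a store can only map a hash *back* to a string it has actually seen, which is exactly the `empty_doc.vocab` pitfall shown earlier.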
@@ -624,17 +550,19 @@ print(unicorn_hash, unicorn_text)
```python
### {executable="true"}
import spacy
-from spacy.tokens import Span
nlp = spacy.load("en_core_web_sm")
-doc = nlp("San Francisco considers banning sidewalk delivery robots")
+doc = nlp(u"San Francisco considers banning sidewalk delivery robots")
for ent in doc.ents:
print(ent.text, ent.start_char, ent.end_char, ent.label_)
-doc = nlp("FB is hiring a new VP of global policy")
-doc.ents = [Span(doc, 0, 1, label="ORG")]
+from spacy.tokens import Span
+
+doc = nlp(u"FB is hiring a new VP of global policy")
+doc.ents = [Span(doc, 0, 1, label=doc.vocab.strings[u"ORG"])]
for ent in doc.ents:
print(ent.text, ent.start_char, ent.end_char, ent.label_)
```
@@ -650,7 +578,7 @@ import spacy
import random
nlp = spacy.load("en_core_web_sm")
-train_data = [("Uber blew through $1 million", {"entities": [(0, 4, "ORG")]})]
+train_data = [(u"Uber blew through $1 million", {"entities": [(0, 4, "ORG")]})]
other_pipes = [pipe for pipe in nlp.pipe_names if pipe != "ner"]
with nlp.disable_pipes(*other_pipes):
@@ -678,11 +606,11 @@ nlp.to_disk("/model")
```python
from spacy import displacy
-doc_dep = nlp("This is a sentence.")
+doc_dep = nlp(u"This is a sentence.")
displacy.serve(doc_dep, style="dep")
-doc_ent = nlp("When Sebastian Thrun started working on self-driving cars at Google "
- "in 2007, few people outside of the company took him seriously.")
+doc_ent = nlp(u"When Sebastian Thrun started working on self-driving cars at Google "
+ u"in 2007, few people outside of the company took him seriously.")
displacy.serve(doc_ent, style="ent")
```
@@ -700,7 +628,7 @@ displacy.serve(doc_ent, style="ent")
import spacy
nlp = spacy.load("en_core_web_md")
-doc = nlp("Apple and banana are similar. Pasta and hippo aren't.")
+doc = nlp(u"Apple and banana are similar. Pasta and hippo aren't.")
apple = doc[0]
banana = doc[2]
@@ -762,7 +690,7 @@ pattern2 = [[{"ORTH": emoji, "OP": "+"}] for emoji in ["😀", "😂", "🤣", "
matcher.add("GoogleIO", None, pattern1) # Match "Google I/O" or "Google i/o"
matcher.add("HAPPY", set_sentiment, *pattern2) # Match one or more happy emoji
-doc = nlp("A text about Google I/O 😀😀")
+doc = nlp(u"A text about Google I/O 😀😀")
matches = matcher(doc)
for match_id, start, end in matches:
@@ -782,7 +710,7 @@ print("Sentiment", doc.sentiment)
### Minibatched stream processing {#lightning-tour-minibatched}
```python
-texts = ["One document.", "...", "Lots of documents"]
+texts = [u"One document.", u"...", u"Lots of documents"]
# .pipe streams input, and produces streaming output
iter_texts = (texts[i % 3] for i in range(100000000))
for i, doc in enumerate(nlp.pipe(iter_texts, batch_size=50)):
@@ -798,8 +726,8 @@ for i, doc in enumerate(nlp.pipe(iter_texts, batch_size=50)):
import spacy
nlp = spacy.load("en_core_web_sm")
-doc = nlp("When Sebastian Thrun started working on self-driving cars at Google "
- "in 2007, few people outside of the company took him seriously.")
+doc = nlp(u"When Sebastian Thrun started working on self-driving cars at Google "
+ u"in 2007, few people outside of the company took him seriously.")
dep_labels = []
for token in doc:
@@ -824,7 +752,7 @@ import spacy
from spacy.attrs import ORTH, LIKE_URL
nlp = spacy.load("en_core_web_sm")
-doc = nlp("Check out https://spacy.io")
+doc = nlp(u"Check out https://spacy.io")
for token in doc:
print(token.text, token.orth, token.like_url)
@@ -870,7 +798,7 @@ def put_spans_around_tokens(doc):
nlp = spacy.load("en_core_web_sm")
-doc = nlp("This is a test.\\n\\nHello world.")
+doc = nlp(u"This is a test.\\n\\nHello world.")
html = put_spans_around_tokens(doc)
print(html)
```
diff --git a/website/docs/usage/training.md b/website/docs/usage/training.md
index f84fd0ed4..b84bf4e12 100644
--- a/website/docs/usage/training.md
+++ b/website/docs/usage/training.md
@@ -6,14 +6,12 @@ menu:
- ['NER', 'ner']
- ['Tagger & Parser', 'tagger-parser']
- ['Text Classification', 'textcat']
- - ['Entity Linking', 'entity-linker']
- ['Tips and Advice', 'tips']
---
This guide describes how to train new statistical models for spaCy's
-part-of-speech tagger, named entity recognizer, dependency parser, text
-classifier and entity linker. Once the model is trained, you can then
-[save and load](/usage/saving-loading#models) it.
+part-of-speech tagger, named entity recognizer and dependency parser. Once the
+model is trained, you can then [save and load](/usage/saving-loading#models) it.
## Training basics {#basics}
@@ -41,19 +39,6 @@ mkdir models
python -m spacy train es models ancora-json/es_ancora-ud-train.json ancora-json/es_ancora-ud-dev.json
```
-
-
-If you're running spaCy v2.2 or above, you can use the
-[`debug-data` command](/api/cli#debug-data) to analyze and validate your
-training and development data, get useful stats, and find problems like invalid
-entity annotations, cyclic dependencies, low data labels and more.
-
-```bash
-$ python -m spacy debug-data en train.json dev.json --verbose
-```
-
-
-
You can also use the [`gold.docs_to_json`](/api/goldparse#docs_to_json) helper
to convert a list of `Doc` objects to spaCy's JSON training format.
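For reference, the JSON training format produced by that helper is a list of documents, each holding paragraphs, sentences and per-token annotations. A hand-written sketch of a single training doc (field names follow spaCy's documented v2 JSON input format; `head` is the relative offset to the token's head and `ner` uses BILUO tags — the sentence and values here are made up):

```python
import json

# One training document in spaCy's v2 JSON training format (hand-written sketch)
train_doc = {
    "id": 0,
    "paragraphs": [
        {
            "raw": "Uber is big",
            "sentences": [
                {
                    "tokens": [
                        {"id": 0, "orth": "Uber", "tag": "NNP", "head": 1, "dep": "nsubj", "ner": "U-ORG"},
                        {"id": 1, "orth": "is", "tag": "VBZ", "head": 0, "dep": "ROOT", "ner": "O"},
                        {"id": 2, "orth": "big", "tag": "JJ", "head": -1, "dep": "acomp", "ner": "O"},
                    ]
                }
            ],
        }
    ],
}

train_data = [train_doc]  # the top level is a list of such docs
print(json.dumps(train_data)[:60])
```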
@@ -236,10 +221,10 @@ of being dropped.
> - [`begin_training()`](/api/language#begin_training): Start the training and
> return an optimizer function to update the model's weights. Can take an
-> optional function converting the training data to spaCy's training format.
-> - [`update()`](/api/language#update): Update the model with the training
-> example and gold data.
-> - [`to_disk()`](/api/language#to_disk): Save the updated model to a directory.
+> optional function converting the training data to spaCy's training format.
+> - [`update()`](/api/language#update): Update the model with the training
+>   example and gold data.
+> - [`to_disk()`](/api/language#to_disk): Save the updated model to a directory.
```python
### Example training loop
@@ -298,10 +283,10 @@ imports. It also makes it easier to structure and load your training data.
```python
### Simple training loop
TRAIN_DATA = [
- ("Uber blew through $1 million a week", {"entities": [(0, 4, "ORG")]}),
- ("Google rebrands its business apps", {"entities": [(0, 6, "ORG")]})]
+ (u"Uber blew through $1 million a week", {"entities": [(0, 4, "ORG")]}),
+ (u"Google rebrands its business apps", {"entities": [(0, 6, "ORG")]})]
-nlp = spacy.blank("en")
+nlp = spacy.blank('en')
optimizer = nlp.begin_training()
for i in range(20):
random.shuffle(TRAIN_DATA)
@@ -498,7 +483,7 @@ like this:
![Custom dependencies](../images/displacy-custom-parser.svg)
```python
-doc = nlp("find a hotel with good wifi")
+doc = nlp(u"find a hotel with good wifi")
print([(t.text, t.dep_, t.head.text) for t in doc if t.dep_ != '-'])
# [('find', 'ROOT', 'find'), ('hotel', 'PLACE', 'find'),
# ('good', 'QUALITY', 'wifi'), ('wifi', 'ATTRIBUTE', 'hotel')]
@@ -596,76 +581,6 @@ https://github.com/explosion/spaCy/tree/master/examples/training/train_textcat.p
7. **Save** the trained model using [`nlp.to_disk`](/api/language#to_disk).
8. **Test** the model to make sure the text classifier works as expected.
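When testing a classifier with more than two exclusive labels, a common summary metric is the macro-averaged F-score: the unweighted mean of the per-label F-scores. A minimal sketch of that calculation (the label names and precision/recall values are made up for illustration; spaCy's `Scorer` derives the per-label numbers from aligned predictions):

```python
def f_score(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def macro_f_score(per_label):
    """Average the per-label F-scores, weighting every label equally.

    per_label: dict mapping label -> (precision, recall)
    """
    scores = [f_score(p, r) for p, r in per_label.values()]
    return sum(scores) / len(scores)

# Three exclusive labels -> one macro-averaged summary score
per_label = {
    "POSITIVE": (0.9, 0.8),
    "NEGATIVE": (0.7, 0.6),
    "NEUTRAL": (0.5, 0.5),
}
score = macro_f_score(per_label)
```

Because every label contributes equally, rare labels pull the macro average down just as hard as frequent ones, which makes it a useful check against a model that only gets the majority class right.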
-## Entity linking {#entity-linker}
-
-To train an entity linking model, you first need to define a knowledge base
-(KB).
-
-### Creating a knowledge base {#kb}
-
-A KB consists of a list of entities with unique identifiers. Each such entity
-has an entity vector that will be used to measure similarity with the context in
-which an entity is used. These vectors are pretrained and stored in the KB
-before the entity linking model will be trained.
-
-The following example shows how to build a knowledge base from scratch, given a
-list of entities and potential aliases. The script further demonstrates how to
-pretrain and store the entity vectors. To run this example, the script needs
-access to a `vocab` instance or an `nlp` model with pretrained word embeddings.
-
-```python
-https://github.com/explosion/spaCy/tree/master/examples/training/pretrain_kb.py
-```
-
-#### Step by step guide {#step-by-step-kb}
-
-1. **Load the model** you want to start with, or create an **empty model** using
- [`spacy.blank`](/api/top-level#spacy.blank) with the ID of your language and
- a pre-defined [`vocab`](/api/vocab) object.
-2. **Pretrain the entity embeddings** by running the descriptions of the
- entities through a simple encoder-decoder network. The current implementation
- requires the `nlp` model to have access to pre-trained word embeddings, but a
- custom implementation of this encoding step can also be used.
-3. **Construct the KB** by defining all entities with their pretrained vectors,
- and all aliases with their prior probabilities.
-4. **Save** the KB using [`kb.dump`](/api/kb#dump).
-5. **Test** the KB to make sure the entities were added correctly.
-
-### Training an entity linking model {#entity-linker-model}
-
-This example shows how to create an entity linker pipe using a previously
-created knowledge base. The entity linker pipe is then trained with your own
-examples. To do so, you'll need to provide **example texts**, and the
-**character offsets** and **knowledge base identifiers** of each entity
-contained in the texts.
-
-```python
-https://github.com/explosion/spaCy/tree/master/examples/training/train_entity_linker.py
-```
-
-#### Step by step guide {#step-by-step-entity-linker}
-
-1. **Load the KB** you want to start with, and specify the path to the `Vocab`
- object that was used to create this KB. Then, create an **empty model** using
- [`spacy.blank`](/api/top-level#spacy.blank) with the ID of your language.
- Don't forget to add the KB to the entity linker, and to add the entity linker
- to the pipeline. In practical applications, you will want a more advanced
- pipeline that also includes a component for
- [named entity recognition](/usage/training#ner). If you're using a model with
- additional components, make sure to disable all other pipeline components
- during training using [`nlp.disable_pipes`](/api/language#disable_pipes).
- This way, you'll only be training the entity linker.
-2. **Shuffle and loop over** the examples. For each example, **update the
- model** by calling [`nlp.update`](/api/language#update), which steps through
- the annotated examples of the input. For each combination of a mention in
- text and a potential KB identifier, the model makes a **prediction** whether
- or not this is the correct match. It then consults the annotations to see
- whether it was right. If it was wrong, it adjusts its weights so that the
- correct combination will score higher next time.
-3. **Save** the trained model using [`nlp.to_disk`](/api/language#to_disk).
-4. **Test** the model to make sure the entities in the training data are
- recognized correctly.
-
## Optimization tips and advice {#tips}
There are lots of conflicting "recipes" for training deep neural networks at the
diff --git a/website/docs/usage/v2-1.md b/website/docs/usage/v2-1.md
index 4a8ef5a37..d3c9fb504 100644
--- a/website/docs/usage/v2-1.md
+++ b/website/docs/usage/v2-1.md
@@ -99,8 +99,8 @@ flexibility.
>
> ```python
> matcher = PhraseMatcher(nlp.vocab, attr="POS")
-> matcher.add("PATTERN", None, nlp("I love cats"))
-> doc = nlp("You like dogs")
+> matcher.add("PATTERN", None, nlp(u"I love cats"))
+> doc = nlp(u"You like dogs")
> matches = matcher(doc)
> ```
@@ -122,9 +122,9 @@ or `POS` for finding sequences of the same part-of-speech tags.
> #### Example
>
> ```python
-> doc = nlp("I like David Bowie")
+> doc = nlp(u"I like David Bowie")
> with doc.retokenize() as retokenizer:
-> attrs = {"LEMMA": "David Bowie"}
+> attrs = {"LEMMA": u"David Bowie"}
> retokenizer.merge(doc[2:4], attrs=attrs)
> ```
diff --git a/website/docs/usage/v2-2.md b/website/docs/usage/v2-2.md
deleted file mode 100644
index d256037ac..000000000
--- a/website/docs/usage/v2-2.md
+++ /dev/null
@@ -1,351 +0,0 @@
----
-title: What's New in v2.2
-teaser: New features, backwards incompatibilities and migration guide
-menu:
- - ['New Features', 'features']
- - ['Backwards Incompatibilities', 'incompat']
----
-
-## New Features {#features hidden="true"}
-
-spaCy v2.2 features improved statistical models, new pretrained models for
-Norwegian and Lithuanian, better Dutch NER, as well as a new mechanism for
-storing language data that makes the installation about **15× smaller** on
-disk. We've also added a new class to efficiently **serialize annotations**, an
-improved and **10× faster** phrase matching engine, built-in scoring and
-**CLI training for text classification**, a new command to analyze and **debug
-training data**, data augmentation during training and more. For the full
-changelog, see the
-[release notes on GitHub](https://github.com/explosion/spaCy/releases/tag/v2.2.0).
-
-
-
-### Better pretrained models and more languages {#models}
-
-> #### Example
->
-> ```bash
-> python -m spacy download nl_core_news_sm
-> python -m spacy download nb_core_news_sm
-> python -m spacy download lt_core_news_sm
-> ```
-
-The new version also features new and re-trained models for all languages and
-resolves a number of data bugs. The [Dutch model](/models/nl) has been retrained
-with a new and custom-labelled NER corpus using the same extended label scheme
-as the English models. It should now produce significantly better NER results
-overall. We've also added new core models for [Norwegian](/models/nb) (MIT) and
-[Lithuanian](/models/lt) (CC BY-SA).
-
-
-
-**Usage:** [Models directory](/models) **Benchmarks: **
-[Release notes](https://github.com/explosion/spaCy/releases/tag/v2.2.0)
-
-
-
-### Serializable lookup table and dictionary API {#lookups}
-
-> #### Example
->
-> ```python
-> data = {"foo": "bar"}
-> nlp.vocab.lookups.add_table("my_dict", data)
->
-> def custom_component(doc):
-> table = doc.vocab.lookups.get_table("my_dict")
-> print(table.get("foo")) # look something up
-> return doc
-> ```
-
-The new `Lookups` API lets you add large dictionaries and lookup tables to the
-`Vocab` and access them from the tokenizer or custom components and extension
-attributes. Internally, the tables use Bloom filters for efficient lookup
-checks. They're also fully serializable out-of-the-box. All large data resources
-included with spaCy now use this API and are additionally compressed at build
-time. This allowed us to make the installed library roughly **15 times smaller
-on disk**.
-
-
-
-**API:** [`Lookups`](/api/lookups) **Usage: **
-[Adding languages: Lemmatizer](/usage/adding-languages#lemmatizer)
-
-
-
-### Text classification scores and CLI training {#train-textcat-cli}
-
-> #### Example
->
-> ```bash
-> $ python -m spacy train en /output /train /dev \\
-> --pipeline textcat --textcat-arch simple_cnn \\
-> --textcat-multilabel
-> ```
-
-When training your models using the `spacy train` command, you can now also
-include text categories in the JSON-formatted training data. The `Scorer` and
-`nlp.evaluate` now report the text classification scores, calculated as the
-F-score on positive label for binary exclusive tasks, the macro-averaged F-score
-for 3+ exclusive labels or the macro-averaged AUC ROC score for multilabel
-classification.
-
-
-
-**API:** [`spacy train`](/api/cli#train), [`Scorer`](/api/scorer),
-[`Language.evaluate`](/api/language#evaluate)
-
-
-
-### New DocBin class to efficiently serialize Doc collections
-
-> #### Example
->
-> ```python
-> from spacy.tokens import DocBin
-> doc_bin = DocBin(attrs=["LEMMA", "ENT_IOB", "ENT_TYPE"], store_user_data=True)
-> for doc in nlp.pipe(texts):
-> doc_bin.add(doc)
-> bytes_data = doc_bin.to_bytes()
-> # Deserialize later, e.g. in a new process
-> nlp = spacy.blank("en")
-> doc_bin = DocBin().from_bytes(bytes_data)
-> docs = list(doc_bin.get_docs(nlp.vocab))
-> ```
-
-If you're working with lots of data, you'll probably need to pass analyses
-between machines, either to use something like [Dask](https://dask.org) or
-[Spark](https://spark.apache.org), or even just to save out work to disk. Often
-it's sufficient to use the `Doc.to_array` functionality for this, and just
-serialize the numpy arrays – but other times you want a more general way to save
-and restore `Doc` objects.
-
-The new `DocBin` class makes it easy to serialize and deserialize a collection
-of `Doc` objects together, and is much more efficient than calling
-`Doc.to_bytes` on each individual `Doc` object. You can also control what data
-gets saved, and you can merge pallets together for easy map/reduce-style
-processing.
-
-
-
-**API:** [`DocBin`](/api/docbin) **Usage: **
-[Serializing Doc objects](/usage/saving-loading#docs)
-
-
-
-### CLI command to debug and validate training data {#debug-data}
-
-> #### Example
->
-> ```bash
-> $ python -m spacy debug-data en train.json dev.json
-> ```
-
-The new `debug-data` command lets you analyze and validate your training and
-development data, get useful stats, and find problems like invalid entity
-annotations, cyclic dependencies, low data labels and more. If you're training a
-model with `spacy train` and the results seem surprising or confusing,
-`debug-data` may help you track down the problems and improve your training
-data.
-
-
-
-```
-=========================== Data format validation ===========================
-✔ Corpus is loadable
-
-=============================== Training stats ===============================
-Training pipeline: tagger, parser, ner
-Starting with blank model 'en'
-18127 training docs
-2939 evaluation docs
-⚠ 34 training examples also in evaluation data
-
-============================== Vocab & Vectors ==============================
-ℹ 2083156 total words in the data (56962 unique)
-⚠ 13020 misaligned tokens in the training data
-⚠ 2423 misaligned tokens in the dev data
-10 most common words: 'the' (98429), ',' (91756), '.' (87073), 'to' (50058),
-'of' (49559), 'and' (44416), 'a' (34010), 'in' (31424), 'that' (22792), 'is'
-(18952)
-ℹ No word vectors present in the model
-
-========================== Named Entity Recognition ==========================
-ℹ 18 new labels, 0 existing labels
-528978 missing values (tokens with '-' label)
-New: 'ORG' (23860), 'PERSON' (21395), 'GPE' (21193), 'DATE' (18080), 'CARDINAL'
-(10490), 'NORP' (9033), 'MONEY' (5164), 'PERCENT' (3761), 'ORDINAL' (2122),
-'LOC' (2113), 'TIME' (1616), 'WORK_OF_ART' (1229), 'QUANTITY' (1150), 'FAC'
-(1134), 'EVENT' (974), 'PRODUCT' (935), 'LAW' (444), 'LANGUAGE' (338)
-✔ Good amount of examples for all labels
-✔ Examples without occurrences available for all labels
-✔ No entities consisting of or starting/ending with whitespace
-
-=========================== Part-of-speech Tagging ===========================
-ℹ 49 labels in data (57 labels in tag map)
-'NN' (266331), 'IN' (227365), 'DT' (185600), 'NNP' (164404), 'JJ' (119830),
-'NNS' (110957), '.' (101482), ',' (92476), 'RB' (90090), 'PRP' (90081), 'VB'
-(74538), 'VBD' (68199), 'CC' (62862), 'VBZ' (50712), 'VBP' (43420), 'VBN'
-(42193), 'CD' (40326), 'VBG' (34764), 'TO' (31085), 'MD' (25863), 'PRP$'
-(23335), 'HYPH' (13833), 'POS' (13427), 'UH' (13322), 'WP' (10423), 'WDT'
-(9850), 'RP' (8230), 'WRB' (8201), ':' (8168), '''' (7392), '``' (6984), 'NNPS'
-(5817), 'JJR' (5689), '$' (3710), 'EX' (3465), 'JJS' (3118), 'RBR' (2872),
-'-RRB-' (2825), '-LRB-' (2788), 'PDT' (2078), 'XX' (1316), 'RBS' (1142), 'FW'
-(794), 'NFP' (557), 'SYM' (440), 'WP$' (294), 'LS' (293), 'ADD' (191), 'AFX'
-(24)
-✔ All labels present in tag map for language 'en'
-
-============================= Dependency Parsing =============================
-ℹ Found 111703 sentences with an average length of 18.6 words.
-ℹ Found 2251 nonprojective train sentences
-ℹ Found 303 nonprojective dev sentences
-ℹ 47 labels in train data
-ℹ 211 labels in projectivized train data
-'punct' (236796), 'prep' (188853), 'pobj' (182533), 'det' (172674), 'nsubj'
-(169481), 'compound' (116142), 'ROOT' (111697), 'amod' (107945), 'dobj' (93540),
-'aux' (86802), 'advmod' (86197), 'cc' (62679), 'conj' (59575), 'poss' (36449),
-'ccomp' (36343), 'advcl' (29017), 'mark' (27990), 'nummod' (24582), 'relcl'
-(21359), 'xcomp' (21081), 'attr' (18347), 'npadvmod' (17740), 'acomp' (17204),
-'auxpass' (15639), 'appos' (15368), 'neg' (15266), 'nsubjpass' (13922), 'case'
-(13408), 'acl' (12574), 'pcomp' (10340), 'nmod' (9736), 'intj' (9285), 'prt'
-(8196), 'quantmod' (7403), 'dep' (4300), 'dative' (4091), 'agent' (3908), 'expl'
-(3456), 'parataxis' (3099), 'oprd' (2326), 'predet' (1946), 'csubj' (1494),
-'subtok' (1147), 'preconj' (692), 'meta' (469), 'csubjpass' (64), 'iobj' (1)
-⚠ Low number of examples for label 'iobj' (1)
-⚠ Low number of examples for 130 labels in the projectivized dependency
-trees used for training. You may want to projectivize labels such as punct
-before training in order to improve parser performance.
-⚠ Projectivized labels with low numbers of examples: appos||attr: 12
-advmod||dobj: 13 prep||ccomp: 12 nsubjpass||ccomp: 15 pcomp||prep: 14
-amod||dobj: 9 attr||xcomp: 14 nmod||nsubj: 17 prep||advcl: 2 prep||prep: 5
-nsubj||conj: 12 advcl||advmod: 18 ccomp||advmod: 11 ccomp||pcomp: 5 acl||pobj:
-10 npadvmod||acomp: 7 dobj||pcomp: 14 nsubjpass||pcomp: 1 nmod||pobj: 8
-amod||attr: 6 nmod||dobj: 12 aux||conj: 1 neg||conj: 1 dative||xcomp: 11
-pobj||dative: 3 xcomp||acomp: 19 advcl||pobj: 2 nsubj||advcl: 2 csubj||ccomp: 1
-advcl||acl: 1 relcl||nmod: 2 dobj||advcl: 10 advmod||advcl: 3 nmod||nsubjpass: 6
-amod||pobj: 5 cc||neg: 1 attr||ccomp: 16 advcl||xcomp: 3 nmod||attr: 4
-advcl||nsubjpass: 5 advcl||ccomp: 4 ccomp||conj: 1 punct||acl: 1 meta||acl: 1
-parataxis||acl: 1 prep||acl: 1 amod||nsubj: 7 ccomp||ccomp: 3 acomp||xcomp: 5
-dobj||acl: 5 prep||oprd: 6 advmod||acl: 2 dative||advcl: 1 pobj||agent: 5
-xcomp||amod: 1 dep||advcl: 1 prep||amod: 8 relcl||compound: 1 advcl||csubj: 3
-npadvmod||conj: 2 npadvmod||xcomp: 4 advmod||nsubj: 3 ccomp||amod: 7
-advcl||conj: 1 nmod||conj: 2 advmod||nsubjpass: 2 dep||xcomp: 2 appos||ccomp: 1
-advmod||dep: 1 advmod||advmod: 5 aux||xcomp: 8 dep||advmod: 1 dative||ccomp: 2
-prep||dep: 1 conj||conj: 1 dep||ccomp: 4 cc||ROOT: 1 prep||ROOT: 1 nsubj||pcomp:
-3 advmod||prep: 2 relcl||dative: 1 acl||conj: 1 advcl||attr: 4 prep||npadvmod: 1
-nsubjpass||xcomp: 1 neg||advmod: 1 xcomp||oprd: 1 advcl||advcl: 1 dobj||dep: 3
-nsubjpass||parataxis: 1 attr||pcomp: 1 ccomp||parataxis: 1 advmod||attr: 1
-nmod||oprd: 1 appos||nmod: 2 advmod||relcl: 1 appos||npadvmod: 1 appos||conj: 1
-prep||expl: 1 nsubjpass||conj: 1 punct||pobj: 1 cc||pobj: 1 conj||pobj: 1
-punct||conj: 1 ccomp||dep: 1 oprd||xcomp: 3 ccomp||xcomp: 1 ccomp||nsubj: 1
-nmod||dep: 1 xcomp||ccomp: 1 acomp||advcl: 1 intj||advmod: 1 advmod||acomp: 2
-relcl||oprd: 1 advmod||prt: 1 advmod||pobj: 1 appos||nummod: 1 relcl||npadvmod:
-3 mark||advcl: 1 aux||ccomp: 1 amod||nsubjpass: 1 npadvmod||advmod: 1 conj||dep:
-1 nummod||pobj: 1 amod||npadvmod: 1 intj||pobj: 1 nummod||npadvmod: 1
-xcomp||xcomp: 1 aux||dep: 1 advcl||relcl: 1
-⚠ The following labels were found only in the train data: xcomp||amod,
-advcl||relcl, prep||nsubjpass, acl||nsubj, nsubjpass||conj, xcomp||oprd,
-advmod||conj, advmod||advmod, iobj, advmod||nsubjpass, dobj||conj, ccomp||amod,
-meta||acl, xcomp||xcomp, prep||attr, prep||ccomp, advcl||acomp, acl||dobj,
-advcl||advcl, pobj||agent, prep||advcl, nsubjpass||xcomp, prep||dep,
-acomp||xcomp, aux||ccomp, ccomp||dep, conj||dep, relcl||compound,
-nsubjpass||ccomp, nmod||dobj, advmod||advcl, advmod||acl, dobj||advcl,
-dative||xcomp, prep||nsubj, ccomp||ccomp, nsubj||ccomp, xcomp||acomp,
-prep||acomp, dep||advmod, acl||pobj, appos||dobj, npadvmod||acomp, cc||ROOT,
-relcl||nsubj, nmod||pobj, acl||nsubjpass, ccomp||advmod, pcomp||prep,
-amod||dobj, advmod||attr, advcl||csubj, appos||attr, dobj||pcomp, prep||ROOT,
-relcl||pobj, advmod||pobj, amod||nsubj, ccomp||xcomp, prep||oprd,
-npadvmod||advmod, appos||nummod, advcl||pobj, neg||advmod, acl||attr,
-appos||nsubjpass, csubj||ccomp, amod||nsubjpass, intj||pobj, dep||advcl,
-cc||neg, xcomp||ccomp, dative||ccomp, nmod||oprd, pobj||dative, prep||dobj,
-dep||ccomp, relcl||attr, ccomp||nsubj, advcl||xcomp, nmod||dep, advcl||advmod,
-ccomp||conj, pobj||prep, advmod||acomp, advmod||relcl, attr||pcomp,
-ccomp||parataxis, oprd||xcomp, intj||advmod, nmod||nsubjpass, prep||npadvmod,
-parataxis||acl, prep||pobj, advcl||dobj, amod||pobj, prep||acl, conj||pobj,
-advmod||dep, punct||pobj, ccomp||acomp, acomp||advcl, nummod||npadvmod,
-dobj||dep, npadvmod||xcomp, advcl||conj, relcl||npadvmod, punct||acl,
-relcl||dobj, dobj||xcomp, nsubjpass||parataxis, dative||advcl, relcl||nmod,
-advcl||ccomp, appos||npadvmod, ccomp||pcomp, prep||amod, mark||advcl,
-prep||advmod, prep||xcomp, appos||nsubj, attr||ccomp, advmod||prt, dobj||ccomp,
-aux||conj, advcl||nsubj, conj||conj, advmod||ccomp, advcl||nsubjpass,
-attr||xcomp, nmod||conj, npadvmod||conj, relcl||dative, prep||expl,
-nsubjpass||pcomp, advmod||xcomp, advmod||dobj, appos||pobj, nsubj||conj,
-relcl||nsubjpass, advcl||attr, appos||ccomp, advmod||prep, prep||conj,
-nmod||attr, punct||conj, neg||conj, dep||xcomp, aux||xcomp, dobj||acl,
-nummod||pobj, amod||npadvmod, nsubj||pcomp, advcl||acl, appos||nmod,
-relcl||oprd, prep||prep, cc||pobj, nmod||nsubj, amod||attr, aux||dep,
-appos||conj, advmod||nsubj, nsubj||advcl, acl||conj
-To train a parser, your data should include at least 20 instances of each label.
-⚠ Multiple root labels (ROOT, nsubj, aux, npadvmod, prep) found in
-training data. spaCy's parser uses a single root label ROOT so this distinction
-will not be available.
-
-================================== Summary ==================================
-✔ 5 checks passed
-⚠ 8 warnings
-```
-
-
-
-
-
-**API:** [`spacy debug-data`](/api/cli#debug-data)
-
-
-
-## Backwards incompatibilities {#incompat}
-
-
-
-If you've been training **your own models**, you'll need to **retrain** them
-with the new version. Also, don't forget to upgrade all models to the latest
-versions. Models trained for v2.0 or v2.1 aren't compatible with v2.2. To
-check if all of your models are up to date, you can run the
-[`spacy validate`](/api/cli#validate) command.
-
-
-
-- The [Dutch model](/models/nl) has been trained on a new NER corpus (custom
- labelled UD instead of WikiNER), so its predictions may be very different
- compared to the previous version. The results should be significantly better
- and more generalizable, though.
-- The [`spacy download`](/api/cli#download) command does **not** set the
- `--no-deps` pip argument anymore by default, meaning that model package
- dependencies (if available) will now also be downloaded and installed. If
- spaCy (which is also a model dependency) is not installed in the current
- environment, e.g. if a user has built from source, `--no-deps` is added back
- automatically to prevent spaCy from being downloaded and installed again from
- pip.
-- The built-in
- [`biluo_tags_from_offsets`](/api/goldparse#biluo_tags_from_offsets) converter
- is now stricter and will raise an error if entities are overlapping (instead
- of silently skipping them). If your data contains invalid entity annotations,
- make sure to clean it and resolve conflicts. You can now also use the new
- `debug-data` command to find problems in your data.
-- Pipeline components can now overwrite IOB tags of tokens that are not yet part
- of an entity. Once a token has an `ent_iob` value set, it won't be reset to an
- "unset" state and will always have at least `O` assigned. `list(doc.ents)` now
- actually keeps the annotations on the token level consistent, instead of
- resetting `O` to an empty string.
-- The default punctuation in the [`Sentencizer`](/api/sentencizer) has been
- extended and now includes more characters common in various languages. This
- also means that the results it produces may change, depending on your text. If
- you want the previous behaviour with limited characters, set
- `punct_chars=[".", "!", "?"]` on initialization.
-- The [`PhraseMatcher`](/api/phrasematcher) algorithm was rewritten from scratch
- and it's now 10× faster. The rewrite also resolved a few subtle bugs
- with very large terminology lists. So if you were matching large lists, you
- may see slightly different results – however, the results should now be fully
- correct. See [this PR](https://github.com/explosion/spaCy/pull/4309) for more
- details.
-- Lemmatization tables (rules, exceptions, index and lookups) are now part of
- the `Vocab` and serialized with it. This means that serialized objects (`nlp`,
- pipeline components, vocab) will now include additional data, and models
- written to disk will include additional files.
-- The `Serbian` language class (introduced in v2.1.8) incorrectly used the
- language code `rs` instead of `sr`. This has now been fixed, so `Serbian` is
- now available via `spacy.lang.sr`.
-- The `"sources"` in the `meta.json` have changed from a list of strings to a
- list of dicts. This is mostly internals, but if your code used
- `nlp.meta["sources"]`, you might have to update it.
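For the `Sentencizer` change in the list above, restoring the old limited punctuation set looks roughly like this. A minimal sketch, assuming spaCy v2.2 or later where `punct_chars` can be passed on initialization; the sample sentence is made up:

```python
import spacy
from spacy.pipeline import Sentencizer

nlp = spacy.blank("en")

# Restrict sentence-final punctuation to the pre-v2.2 default set
sentencizer = Sentencizer(punct_chars=[".", "!", "?"])
doc = sentencizer(nlp("Hello there! How are you? Fine."))

sentences = [sent.text for sent in doc.sents]
```

With the restricted set, only `.`, `!` and `?` end a sentence, so characters added to the extended default list (e.g. punctuation common in other scripts) no longer trigger a split.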
diff --git a/website/docs/usage/v2.md b/website/docs/usage/v2.md
index 0ac8bfe75..a412eeba4 100644
--- a/website/docs/usage/v2.md
+++ b/website/docs/usage/v2.md
@@ -156,7 +156,7 @@ spaCy or plug in your own machine learning models.
> for itn in range(100):
> for doc, gold in train_data:
> nlp.update([doc], [gold])
-> doc = nlp("This is a text.")
+> doc = nlp(u"This is a text.")
> print(doc.cats)
> ```
@@ -179,13 +179,13 @@ network to assign position-sensitive vectors to each word in the document.
> #### Example
>
> ```python
-> doc = nlp("I love coffee")
-> assert doc.vocab.strings["coffee"] == 3197928453018144401
-> assert doc.vocab.strings[3197928453018144401] == "coffee"
+> doc = nlp(u"I love coffee")
+> assert doc.vocab.strings[u"coffee"] == 3197928453018144401
+> assert doc.vocab.strings[3197928453018144401] == u"coffee"
>
-> beer_hash = doc.vocab.strings.add("beer")
-> assert doc.vocab.strings["beer"] == beer_hash
-> assert doc.vocab.strings[beer_hash] == "beer"
+> beer_hash = doc.vocab.strings.add(u"beer")
+> assert doc.vocab.strings[u"beer"] == beer_hash
+> assert doc.vocab.strings[beer_hash] == u"beer"
> ```
The [`StringStore`](/api/stringstore) now resolves all strings to hash values
@@ -275,7 +275,7 @@ language, you can import the class directly, e.g.
>
> ```python
> from spacy import displacy
-> doc = nlp("This is a sentence about Facebook.")
+> doc = nlp(u"This is a sentence about Facebook.")
> displacy.serve(doc, style="dep") # run the web server
> html = displacy.render(doc, style="ent") # generate HTML
> ```
@@ -322,7 +322,7 @@ lookup-based lemmatization – and **many new languages**!
> matcher.add('HEARTS', None, [{"ORTH": "❤️", "OP": '+'}])
>
> phrasematcher = PhraseMatcher(nlp.vocab)
-> phrasematcher.add("OBAMA", None, nlp("Barack Obama"))
+> phrasematcher.add("OBAMA", None, nlp(u"Barack Obama"))
> ```
Patterns can now be added to the matcher by calling
@@ -477,12 +477,12 @@ to the `disable` keyword argument on load, or by using
[`disable_pipes`](/api/language#disable_pipes) as a method or context manager:
```diff
-- nlp = spacy.load("en_core_web_sm", tagger=False, entity=False)
-- doc = nlp("I don't want parsed", parse=False)
+- nlp = spacy.load("en", tagger=False, entity=False)
+- doc = nlp(u"I don't want parsed", parse=False)
-+ nlp = spacy.load("en_core_web_sm", disable=["tagger", "ner"])
++ nlp = spacy.load("en", disable=["tagger", "ner"])
+ with nlp.disable_pipes("parser"):
-+ doc = nlp("I don't want parsed")
++ doc = nlp(u"I don't want parsed")
```
To add spaCy's built-in pipeline components to your pipeline, you can still
@@ -539,7 +539,7 @@ This means that your application can – and should – only pass around `Doc`
objects and refer to them as the single source of truth.
```diff
-- doc = nlp("This is a regular doc")
+- doc = nlp(u"This is a regular doc")
- doc_array = doc.to_array(["ORTH", "POS"])
- doc_with_meta = {"doc_array": doc_array, "meta": get_doc_meta(doc_array)}
@@ -556,11 +556,11 @@ utilities that interact with the pipeline, consider moving this logic into its
own extension module.
```diff
-- doc = nlp("Doc with a standard pipeline")
+- doc = nlp(u"Doc with a standard pipeline")
- meta = get_meta(doc)
+ nlp.add_pipe(meta_component)
-+ doc = nlp("Doc with a custom pipeline that assigns meta")
++ doc = nlp(u"Doc with a custom pipeline that assigns meta")
+ meta = doc._.meta
```
@@ -572,12 +572,12 @@ to call [`StringStore.add`](/api/stringstore#add) explicitly. You can also now
be sure that the string-to-hash mapping will always match across vocabularies.
```diff
-- nlp.vocab.strings["coffee"] # 3672
-- other_nlp.vocab.strings["coffee"] # 40259
+- nlp.vocab.strings[u"coffee"] # 3672
+- other_nlp.vocab.strings[u"coffee"] # 40259
-+ nlp.vocab.strings.add("coffee")
-+ nlp.vocab.strings["coffee"] # 3197928453018144401
-+ other_nlp.vocab.strings["coffee"] # 3197928453018144401
++ nlp.vocab.strings.add(u"coffee")
++ nlp.vocab.strings[u"coffee"] # 3197928453018144401
++ other_nlp.vocab.strings[u"coffee"] # 3197928453018144401
```
### Adding patterns and callbacks to the matcher {#migrating-matcher}
diff --git a/website/docs/usage/vectors-similarity.md b/website/docs/usage/vectors-similarity.md
index 53648f66e..f7c9d1cd9 100644
--- a/website/docs/usage/vectors-similarity.md
+++ b/website/docs/usage/vectors-similarity.md
@@ -74,8 +74,8 @@ path to [`spacy.load()`](/api/top-level#spacy.load).
```python
nlp_latin = spacy.load("/tmp/la_vectors_wiki_lg")
-doc1 = nlp_latin("Caecilius est in horto")
-doc2 = nlp_latin("servus est in atrio")
+doc1 = nlp_latin(u"Caecilius est in horto")
+doc2 = nlp_latin(u"servus est in atrio")
doc1.similarity(doc2)
```
@@ -168,9 +168,10 @@ vectors to the vocabulary, you can use the
### Adding vectors
from spacy.vocab import Vocab
-vector_data = {"dog": numpy.random.uniform(-1, 1, (300,)),
- "cat": numpy.random.uniform(-1, 1, (300,)),
- "orange": numpy.random.uniform(-1, 1, (300,))}
+vector_data = {u"dog": numpy.random.uniform(-1, 1, (300,)),
+ u"cat": numpy.random.uniform(-1, 1, (300,)),
+ u"orange": numpy.random.uniform(-1, 1, (300,))}
+
vocab = Vocab()
for word, vector in vector_data.items():
vocab.set_vector(word, vector)
@@ -240,7 +241,7 @@ import cupy.cuda
from spacy.vectors import Vectors
vector_table = numpy.zeros((3, 300), dtype="f")
-vectors = Vectors(["dog", "cat", "orange"], vector_table)
+vectors = Vectors([u"dog", u"cat", u"orange"], vector_table)
with cupy.cuda.Device(0):
vectors.data = cupy.asarray(vectors.data)
```
@@ -251,6 +252,6 @@ import torch
from spacy.vectors import Vectors
vector_table = numpy.zeros((3, 300), dtype="f")
-vectors = Vectors(["dog", "cat", "orange"], vector_table)
+vectors = Vectors([u"dog", u"cat", u"orange"], vector_table)
vectors.data = torch.Tensor(vectors.data).cuda(0)
```
diff --git a/website/docs/usage/visualizers.md b/website/docs/usage/visualizers.md
index dd0b0eb50..6172d2f48 100644
--- a/website/docs/usage/visualizers.md
+++ b/website/docs/usage/visualizers.md
@@ -48,7 +48,7 @@ import spacy
from spacy import displacy
nlp = spacy.load("en_core_web_sm")
-doc = nlp("This is a sentence.")
+doc = nlp(u"This is a sentence.")
displacy.serve(doc, style="dep")
```
@@ -101,7 +101,7 @@ import spacy
from spacy import displacy
nlp = spacy.load("en_core_web_sm")
-text = """In ancient Rome, some neighbors live in three adjacent houses. In the center is the house of Senex, who lives there with wife Domina, son Hero, and several slaves, including head slave Hysterium and the musical's main character Pseudolus. A slave belonging to Hero, Pseudolus wishes to buy, win, or steal his freedom. One of the neighboring houses is owned by Marcus Lycus, who is a buyer and seller of beautiful women; the other belongs to the ancient Erronius, who is abroad searching for his long-lost children (stolen in infancy by pirates). One day, Senex and Domina go on a trip and leave Pseudolus in charge of Hero. Hero confides in Pseudolus that he is in love with the lovely Philia, one of the courtesans in the House of Lycus (albeit still a virgin)."""
+text = u"""In ancient Rome, some neighbors live in three adjacent houses. In the center is the house of Senex, who lives there with wife Domina, son Hero, and several slaves, including head slave Hysterium and the musical's main character Pseudolus. A slave belonging to Hero, Pseudolus wishes to buy, win, or steal his freedom. One of the neighboring houses is owned by Marcus Lycus, who is a buyer and seller of beautiful women; the other belongs to the ancient Erronius, who is abroad searching for his long-lost children (stolen in infancy by pirates). One day, Senex and Domina go on a trip and leave Pseudolus in charge of Hero. Hero confides in Pseudolus that he is in love with the lovely Philia, one of the courtesans in the House of Lycus (albeit still a virgin)."""
doc = nlp(text)
sentence_spans = list(doc.sents)
displacy.serve(sentence_spans, style="dep")
@@ -117,7 +117,7 @@ text.
import spacy
from spacy import displacy
-text = "When Sebastian Thrun started working on self-driving cars at Google in 2007, few people outside of the company took him seriously."
+text = u"When Sebastian Thrun started working on self-driving cars at Google in 2007, few people outside of the company took him seriously."
nlp = spacy.load("en_core_web_sm")
doc = nlp(text)
@@ -168,7 +168,7 @@ add a headline to each visualization, you can add a `title` to its `user_data`.
User data is never touched or modified by spaCy.
```python
-doc = nlp("This is a sentence about Google.")
+doc = nlp(u"This is a sentence about Google.")
doc.user_data["title"] = "This is a title"
displacy.serve(doc, style="ent")
```
@@ -193,7 +193,7 @@ import spacy
from spacy import displacy
# In[2]:
-doc = nlp("Rats are various medium-sized, long-tailed rodents.")
+doc = nlp(u"Rats are various medium-sized, long-tailed rodents.")
displacy.render(doc, style="dep")
# In[3]:
@@ -209,6 +209,7 @@ rendering if auto-detection fails.
+
![displaCy visualizer in a Jupyter notebook](../images/displacy_jupyter.jpg)
Internally, displaCy imports `display` and `HTML` from `IPython.core.display`
@@ -235,8 +236,8 @@ import spacy
from spacy import displacy
nlp = spacy.load("en_core_web_sm")
-doc1 = nlp("This is a sentence.")
-doc2 = nlp("This is another sentence.")
+doc1 = nlp(u"This is a sentence.")
+doc2 = nlp(u"This is another sentence.")
html = displacy.render([doc1, doc2], style="dep", page=True)
```
@@ -280,7 +281,7 @@ from spacy import displacy
from pathlib import Path
nlp = spacy.load("en_core_web_sm")
-sentences = ["This is an example.", "This is another one."]
+sentences = [u"This is an example.", u"This is another one."]
for sent in sentences:
doc = nlp(sent)
svg = displacy.render(doc, style="dep", jupyter=False)
diff --git a/website/meta/languages.json b/website/meta/languages.json
index 09a17b568..77b46c798 100644
--- a/website/meta/languages.json
+++ b/website/meta/languages.json
@@ -65,14 +65,19 @@
"example": "Αυτή είναι μια πρόταση.",
"has_examples": true
},
+ {
+ "code": "xx",
+ "name": "Multi-language",
+ "models": ["xx_ent_wiki_sm"],
+ "example": "This is a sentence about Facebook."
+ },
{ "code": "sv", "name": "Swedish", "has_examples": true },
{ "code": "fi", "name": "Finnish", "has_examples": true },
{
"code": "nb",
"name": "Norwegian Bokmål",
"example": "Dette er en setning.",
- "has_examples": true,
- "models": ["nb_core_news_sm"]
+ "has_examples": true
},
{ "code": "da", "name": "Danish", "example": "Dette er en sætning.", "has_examples": true },
{ "code": "hu", "name": "Hungarian", "example": "Ez egy mondat.", "has_examples": true },
@@ -122,7 +127,7 @@
{ "code": "bg", "name": "Bulgarian", "example": "Това е изречение", "has_examples": true },
{ "code": "cs", "name": "Czech" },
{ "code": "is", "name": "Icelandic" },
- { "code": "lt", "name": "Lithuanian", "has_examples": true, "models": ["lt_core_news_sm"] },
+ { "code": "lt", "name": "Lithuanian" },
{ "code": "lv", "name": "Latvian" },
{ "code": "sr", "name": "Serbian" },
{ "code": "sk", "name": "Slovak" },
@@ -177,15 +182,10 @@
"code": "vi",
"name": "Vietnamese",
"dependencies": [{ "name": "Pyvi", "url": "https://github.com/trungtv/pyvi" }]
- },
- {
- "code": "xx",
- "name": "Multi-language",
- "models": ["xx_ent_wiki_sm"],
- "example": "This is a sentence about Facebook."
}
],
"licenses": [
+ { "id": "CC BY 4.0", "url": "https://creativecommons.org/licenses/by/4.0/" },
{ "id": "CC BY 4.0", "url": "https://creativecommons.org/licenses/by/4.0/" },
{ "id": "CC BY-SA", "url": "https://creativecommons.org/licenses/by-sa/3.0/" },
{ "id": "CC BY-SA 3.0", "url": "https://creativecommons.org/licenses/by-sa/3.0/" },
diff --git a/website/meta/sidebars.json b/website/meta/sidebars.json
index 68d46605f..31083b091 100644
--- a/website/meta/sidebars.json
+++ b/website/meta/sidebars.json
@@ -9,7 +9,6 @@
{ "text": "Models & Languages", "url": "/usage/models" },
{ "text": "Facts & Figures", "url": "/usage/facts-figures" },
{ "text": "spaCy 101", "url": "/usage/spacy-101" },
- { "text": "New in v2.2", "url": "/usage/v2-2" },
{ "text": "New in v2.1", "url": "/usage/v2-1" },
{ "text": "New in v2.0", "url": "/usage/v2" }
]
@@ -76,7 +75,6 @@
{ "text": "Tagger", "url": "/api/tagger" },
{ "text": "DependencyParser", "url": "/api/dependencyparser" },
{ "text": "EntityRecognizer", "url": "/api/entityrecognizer" },
- { "text": "EntityLinker", "url": "/api/entitylinker" },
{ "text": "TextCategorizer", "url": "/api/textcategorizer" },
{ "text": "Matcher", "url": "/api/matcher" },
{ "text": "PhraseMatcher", "url": "/api/phrasematcher" },
@@ -91,12 +89,9 @@
{ "text": "Vocab", "url": "/api/vocab" },
{ "text": "StringStore", "url": "/api/stringstore" },
{ "text": "Vectors", "url": "/api/vectors" },
- { "text": "Lookups", "url": "/api/lookups" },
- { "text": "KnowledgeBase", "url": "/api/kb" },
{ "text": "GoldParse", "url": "/api/goldparse" },
{ "text": "GoldCorpus", "url": "/api/goldcorpus" },
- { "text": "Scorer", "url": "/api/scorer" },
- { "text": "DocBin", "url": "/api/docbin" }
+ { "text": "Scorer", "url": "/api/scorer" }
]
},
{
diff --git a/website/meta/site.json b/website/meta/site.json
index 0325e78ca..edb60ab0c 100644
--- a/website/meta/site.json
+++ b/website/meta/site.json
@@ -23,6 +23,7 @@
"apiKey": "371e26ed49d29a27bd36273dfdaf89af",
"indexName": "spacy"
},
+ "spacyVersion": "2.1",
"binderUrl": "ines/spacy-io-binder",
"binderBranch": "live",
"binderVersion": "2.1.8",
diff --git a/website/meta/universe.json b/website/meta/universe.json
index 66b5e4ba7..2997f9300 100644
--- a/website/meta/universe.json
+++ b/website/meta/universe.json
@@ -119,14 +119,14 @@
"emoji = Emoji(nlp)",
"nlp.add_pipe(emoji, first=True)",
"",
- "doc = nlp('This is a test 😻 👍🏿')",
+ "doc = nlp(u'This is a test 😻 👍🏿')",
"assert doc._.has_emoji == True",
"assert doc[2:5]._.has_emoji == True",
"assert doc[0]._.is_emoji == False",
"assert doc[4]._.is_emoji == True",
- "assert doc[5]._.emoji_desc == 'thumbs up dark skin tone'",
+ "assert doc[5]._.emoji_desc == u'thumbs up dark skin tone'",
"assert len(doc._.emoji) == 2",
- "assert doc._.emoji[1] == ('👍🏿', 5, 'thumbs up dark skin tone')"
+ "assert doc._.emoji[1] == (u'👍🏿', 5, u'thumbs up dark skin tone')"
],
"author": "Ines Montani",
"author_links": {
@@ -432,21 +432,17 @@
{
"id": "neuralcoref",
"slogan": "State-of-the-art coreference resolution based on neural nets and spaCy",
- "description": "This coreference resolution module is based on the super fast [spaCy](https://spacy.io/) parser and uses the neural net scoring model described in [Deep Reinforcement Learning for Mention-Ranking Coreference Models](http://cs.stanford.edu/people/kevclark/resources/clark-manning-emnlp2016-deep.pdf) by Kevin Clark and Christopher D. Manning, EMNLP 2016. Since ✨Neuralcoref v2.0, you can train the coreference resolution system on your own dataset — e.g., another language than English! — **provided you have an annotated dataset**. Note that to use neuralcoref with spaCy > 2.1.0, you'll have to install neuralcoref from source.",
+ "description": "This coreference resolution module is based on the super fast [spaCy](https://spacy.io/) parser and uses the neural net scoring model described in [Deep Reinforcement Learning for Mention-Ranking Coreference Models](http://cs.stanford.edu/people/kevclark/resources/clark-manning-emnlp2016-deep.pdf) by Kevin Clark and Christopher D. Manning, EMNLP 2016. With ✨Neuralcoref v2.0, you should now be able to train the coreference resolution system on your own dataset — e.g., another language than English! — **provided you have an annotated dataset**.",
"github": "huggingface/neuralcoref",
"thumb": "https://i.imgur.com/j6FO9O6.jpg",
"code_example": [
- "import spacy",
- "import neuralcoref",
+ "from neuralcoref import Coref",
"",
- "nlp = spacy.load('en')",
- "neuralcoref.add_to_pipe(nlp)",
- "doc1 = nlp('My sister has a dog. She loves him.')",
- "print(doc1._.coref_clusters)",
- "",
- "doc2 = nlp('Angela lives in Boston. She is quite happy in that city.')",
- "for ent in doc2.ents:",
- " print(ent._.coref_cluster)"
+ "coref = Coref()",
+ "clusters = coref.one_shot_coref(utterances=u\"She loves him.\", context=u\"My sister has a dog.\")",
+ "mentions = coref.get_mentions()",
+ "utterances = coref.get_utterances()",
+ "resolved_utterance_text = coref.get_resolved_utterances()"
],
"author": "Hugging Face",
"author_links": {
@@ -739,7 +735,7 @@
"slogan": "Use NLP to go beyond vanilla word2vec",
"description": "sense2vec ([Trask et. al](https://arxiv.org/abs/1511.06388), 2015) is a nice twist on [word2vec](https://en.wikipedia.org/wiki/Word2vec) that lets you learn more interesting, detailed and context-sensitive word vectors. For an interactive example of the technology, see our [sense2vec demo](https://explosion.ai/demos/sense2vec) that lets you explore semantic similarities across all Reddit comments of 2015.",
"github": "explosion/sense2vec",
- "pip": "sense2vec==1.0.0a1",
+ "pip": "sense2vec==1.0.0a0",
"thumb": "https://i.imgur.com/awfdhX6.jpg",
"image": "https://explosion.ai/assets/img/demos/sense2vec.png",
"url": "https://explosion.ai/demos/sense2vec",
@@ -751,8 +747,8 @@
"s2v = Sense2VecComponent('/path/to/reddit_vectors-1.1.0')",
"nlp.add_pipe(s2v)",
"",
- "doc = nlp(\"A sentence about natural language processing.\")",
- "assert doc[3].text == 'natural language processing'",
+ "doc = nlp(u\"A sentence about natural language processing.\")",
+ "assert doc[3].text == u'natural language processing'",
"freq = doc[3]._.s2v_freq",
"vector = doc[3]._.s2v_vec",
"most_similar = doc[3]._.s2v_most_similar(3)",
@@ -1301,7 +1297,7 @@
"",
"nlp = spacy.load('en')",
"nlp.add_pipe(BeneparComponent('benepar_en'))",
- "doc = nlp('The time for action is now. It\\'s never too late to do something.')",
+ "doc = nlp(u'The time for action is now. It\\'s never too late to do something.')",
"sent = list(doc.sents)[0]",
"print(sent._.parse_string)",
"# (S (NP (NP (DT The) (NN time)) (PP (IN for) (NP (NN action)))) (VP (VBZ is) (ADVP (RB now))) (. .))",
@@ -1434,7 +1430,7 @@
"thumb": "https://i.imgur.com/3y2uPUv.jpg",
"code_example": [
"import spacy",
- "from spacy_wordnet.wordnet_annotator import WordnetAnnotator ",
+ "from spacy_wordnet.wornet_annotator import WordnetAnnotator ",
"",
"# Load an spacy model (supported models are \"es\" and \"en\") ",
"nlp = spacy.load('en')",
diff --git a/website/src/components/table.js b/website/src/components/table.js
index 85b8e2144..3c345b046 100644
--- a/website/src/components/table.js
+++ b/website/src/components/table.js
@@ -42,19 +42,12 @@ function isFootRow(children) {
return false
}
-export const Table = ({ fixed, className, ...props }) => {
- const tableClassNames = classNames(classes.root, className, {
- [classes.fixed]: fixed,
- })
- return <table className={tableClassNames} {...props} />
-}
-
+export const Table = props => <table className={classes.root} {...props} />
export const Th = props => <th className={classes.th} {...props} />
-export const Tr = ({ evenodd = true, children, ...props }) => {
+export const Tr = ({ children, ...props }) => {
const foot = isFootRow(children)
- const trClasssNames = classNames({
- [classes.tr]: evenodd,
+ const trClasssNames = classNames(classes.tr, {
[classes.footer]: foot,
'table-footer': foot,
})
diff --git a/website/src/styles/accordion.module.sass b/website/src/styles/accordion.module.sass
index bdcbba9ac..707e29aef 100644
--- a/website/src/styles/accordion.module.sass
+++ b/website/src/styles/accordion.module.sass
@@ -13,7 +13,6 @@
width: 100%
padding: 1rem 1.5rem
border-radius: var(--border-radius)
- text-align: left
&:focus
background: var(--color-theme-opaque)
diff --git a/website/src/styles/code.module.sass b/website/src/styles/code.module.sass
index b268904f5..f72f1ffe6 100644
--- a/website/src/styles/code.module.sass
+++ b/website/src/styles/code.module.sass
@@ -56,7 +56,6 @@
.wrap
white-space: pre-wrap
- word-wrap: break-word
.title,
.juniper-button
diff --git a/website/src/styles/grid.module.sass b/website/src/styles/grid.module.sass
index 482ad03cf..63ea3d160 100644
--- a/website/src/styles/grid.module.sass
+++ b/website/src/styles/grid.module.sass
@@ -37,5 +37,5 @@ $flex-gap: 2rem
.narrow
grid-column-gap: $grid-gap-narrow
-.spacing:not(:empty)
+.spacing
margin-bottom: var(--spacing-md)
diff --git a/website/src/styles/table.module.sass b/website/src/styles/table.module.sass
index 68cc4bace..3e73ffb7f 100644
--- a/website/src/styles/table.module.sass
+++ b/website/src/styles/table.module.sass
@@ -6,9 +6,6 @@
margin-bottom: var(--spacing-md)
max-width: 100%
-.fixed
- table-layout: fixed
-
.tr
thead &:nth-child(odd)
background: transparent
diff --git a/website/src/templates/models.js b/website/src/templates/models.js
index 159744aa8..4713f4b34 100644
--- a/website/src/templates/models.js
+++ b/website/src/templates/models.js
@@ -14,15 +14,13 @@ import Icon from '../components/icon'
import Link from '../components/link'
import Grid from '../components/grid'
import Infobox from '../components/infobox'
-import Accordion from '../components/accordion'
-import { join, arrayToObj, abbrNum, markdownToReact, isString } from '../components/util'
+import { join, arrayToObj, abbrNum, markdownToReact } from '../components/util'
const MODEL_META = {
core: 'Vocabulary, syntax, entities, vectors',
core_sm: 'Vocabulary, syntax, entities',
dep: 'Vocabulary, syntax',
ent: 'Named entities',
- pytt: 'PyTorch Transformers',
vectors: 'Word vectors',
web: 'written text (blogs, news, comments)',
news: 'written text (news, media)',
@@ -45,12 +43,6 @@ const MODEL_META = {
compat: 'Latest compatible model version for your spaCy installation',
}
-const LABEL_SCHEME_META = {
- tagger: 'Part-of-speech tags via Token.tag_',
- parser: 'Dependency labels via Token.dep_',
- ner: 'Named entity labels',
-}
-
const MARKDOWN_COMPONENTS = {
code: InlineCode,
}
@@ -104,23 +96,11 @@ function formatModelMeta(data) {
author: data.author,
url: data.url,
license: data.license,
- labels: data.labels,
vectors: formatVectors(data.vectors),
accuracy: formatAccuracy(data.accuracy),
}
}
-function formatSources(data = []) {
- const sources = data.map(s => (isString(s) ? { name: s } : s))
- return sources.map(({ name, url, author }, i) => (
- <>
- {i > 0 && }
- {name && url ? {name} : name}
- {author && ` (${author})`}
- >
- ))
-}
-
const Help = ({ children }) => (
@@ -155,12 +135,11 @@ const Model = ({ name, langId, langName, baseUrl, repo, compatibility, hasExampl
const releaseUrl = `https://github.com/${repo}/releases/${releaseTag}`
const pipeline =
meta.pipeline && join(meta.pipeline.map(p => {p} ))
- const sources = formatSources(meta.sources)
+ const sources = meta.sources && join(meta.sources)
const author = !meta.url ? meta.author : {meta.author}
const licenseUrl = licenses[meta.license] ? licenses[meta.license].url : null
const license = licenseUrl ? {meta.license} : meta.license
const hasInteractiveCode = size === 'sm' && hasExamples && !isError
- const labels = meta.labels
const rows = [
{ label: 'Language', tag: langId, content: langName },
@@ -239,11 +218,11 @@ const Model = ({ name, langId, langName, baseUrl, repo, compatibility, hasExampl
)}
-
+
{accuracy &&
accuracy.map(({ label, items }, i) =>
!items ? null : (
-
+
{label}
@@ -281,46 +260,6 @@ const Model = ({ name, langId, langName, baseUrl, repo, compatibility, hasExampl
].join('\n')}
)}
- {labels && (
-
-
- The statistical components included in this model package assign the
- following labels. The labels are specific to the corpus that the model was
- trained on. To see the description of a label, you can use{' '}
-
- spacy.explain
-
- .
-
-
-
- {Object.keys(labels).map(pipe => {
- const labelNames = labels[pipe] || []
- const help = LABEL_SCHEME_META[pipe]
- return (
-
-
-
- {pipe} {help && {help} }
-
-
-
- {labelNames.map((label, i) => (
- <>
- {i > 0 && ', '}
-
- {label}
-
- >
- ))}
-
-
- )
- })}
-
-
-
- )}
)
}
diff --git a/website/src/widgets/landing.js b/website/src/widgets/landing.js
index 91fd756fa..e9dec87f4 100644
--- a/website/src/widgets/landing.js
+++ b/website/src/widgets/landing.js
@@ -150,24 +150,6 @@ const Landing = ({ data }) => {
-
- Prodigy is an annotation tool so efficient that data scientists
- can do the annotation themselves, enabling a new level of rapid iteration.
- Whether you're working on entity recognition, intent detection or image
- classification, Prodigy can help you train and evaluate your
- models faster. Stream in your own examples or real-world data from live APIs,
- update your model in real-time and chain models together to build more complex
- systems.
-
-
{
research, development and applications, with keynotes by Sebastian Ruder
(DeepMind) and Yoav Goldberg (Allen AI).
+
+
+ Prodigy is an annotation tool so efficient that data scientists
+ can do the annotation themselves, enabling a new level of rapid iteration.
+ Whether you're working on entity recognition, intent detection or image
+ classification, Prodigy can help you train and evaluate your
+ models faster. Stream in your own examples or real-world data from live APIs,
+ update your model in real-time and chain models together to build more complex
+ systems.
+
diff --git a/website/src/widgets/quickstart-models.js b/website/src/widgets/quickstart-models.js
index d116fae0a..83bb4527b 100644
--- a/website/src/widgets/quickstart-models.js
+++ b/website/src/widgets/quickstart-models.js
@@ -65,7 +65,7 @@ const QuickstartInstall = ({ id, title, description, defaultLang, children }) =>
nlp = {pkg}.load()
- doc = nlp("{exampleText}")
+ doc = nlp(u"{exampleText}")
print([
From 7c701784e58f2ca140a1ef2b1dd6ee4efc1095a4 Mon Sep 17 00:00:00 2001
From: Ines Montani
Date: Mon, 30 Sep 2019 12:01:09 +0200
Subject: [PATCH 12/41] Update models.js
---
website/src/templates/models.js | 42 ++++++++++++++++++++++++++++++++-
1 file changed, 41 insertions(+), 1 deletion(-)
diff --git a/website/src/templates/models.js b/website/src/templates/models.js
index 4713f4b34..1466235e4 100644
--- a/website/src/templates/models.js
+++ b/website/src/templates/models.js
@@ -218,7 +218,7 @@ const Model = ({ name, langId, langName, baseUrl, repo, compatibility, hasExampl
)}
-
+
{accuracy &&
accuracy.map(({ label, items }, i) =>
!items ? null : (
@@ -260,6 +260,46 @@ const Model = ({ name, langId, langName, baseUrl, repo, compatibility, hasExampl
].join('\n')}
)}
+ {labels && (
+
+
+ The statistical components included in this model package assign the
+ following labels. The labels are specific to the corpus that the model was
+ trained on. To see the description of a label, you can use{' '}
+
+ spacy.explain
+
+ .
+
+
+
+ {Object.keys(labels).map(pipe => {
+ const labelNames = labels[pipe] || []
+ const help = LABEL_SCHEME_META[pipe]
+ return (
+
+
+
+ {pipe} {help && {help} }
+
+
+
+ {labelNames.map((label, i) => (
+ <>
+ {i > 0 && ', '}
+
+ {label}
+
+ >
+ ))}
+
+
+ )
+ })}
+
+
+
+ )}
)
}
From 88fee1a768c120874b3222bb5e1b7adff841fc7c Mon Sep 17 00:00:00 2001
From: Ines Montani
Date: Mon, 30 Sep 2019 13:22:17 +0200
Subject: [PATCH 13/41] Update models.js
---
website/src/templates/models.js | 27 ++++++++++++++++++++++++---
1 file changed, 24 insertions(+), 3 deletions(-)
diff --git a/website/src/templates/models.js b/website/src/templates/models.js
index 1466235e4..159744aa8 100644
--- a/website/src/templates/models.js
+++ b/website/src/templates/models.js
@@ -14,13 +14,15 @@ import Icon from '../components/icon'
import Link from '../components/link'
import Grid from '../components/grid'
import Infobox from '../components/infobox'
-import { join, arrayToObj, abbrNum, markdownToReact } from '../components/util'
+import Accordion from '../components/accordion'
+import { join, arrayToObj, abbrNum, markdownToReact, isString } from '../components/util'
const MODEL_META = {
core: 'Vocabulary, syntax, entities, vectors',
core_sm: 'Vocabulary, syntax, entities',
dep: 'Vocabulary, syntax',
ent: 'Named entities',
+ pytt: 'PyTorch Transformers',
vectors: 'Word vectors',
web: 'written text (blogs, news, comments)',
news: 'written text (news, media)',
@@ -43,6 +45,12 @@ const MODEL_META = {
compat: 'Latest compatible model version for your spaCy installation',
}
+const LABEL_SCHEME_META = {
+ tagger: 'Part-of-speech tags via Token.tag_',
+ parser: 'Dependency labels via Token.dep_',
+ ner: 'Named entity labels',
+}
+
const MARKDOWN_COMPONENTS = {
code: InlineCode,
}
@@ -96,11 +104,23 @@ function formatModelMeta(data) {
author: data.author,
url: data.url,
license: data.license,
+ labels: data.labels,
vectors: formatVectors(data.vectors),
accuracy: formatAccuracy(data.accuracy),
}
}
+function formatSources(data = []) {
+ const sources = data.map(s => (isString(s) ? { name: s } : s))
+ return sources.map(({ name, url, author }, i) => (
+ <>
+ {i > 0 && }
+ {name && url ? {name} : name}
+ {author && ` (${author})`}
+ >
+ ))
+}
+
const Help = ({ children }) => (
@@ -135,11 +155,12 @@ const Model = ({ name, langId, langName, baseUrl, repo, compatibility, hasExampl
const releaseUrl = `https://github.com/${repo}/releases/${releaseTag}`
const pipeline =
meta.pipeline && join(meta.pipeline.map(p => {p} ))
- const sources = meta.sources && join(meta.sources)
+ const sources = formatSources(meta.sources)
const author = !meta.url ? meta.author : {meta.author}
const licenseUrl = licenses[meta.license] ? licenses[meta.license].url : null
const license = licenseUrl ? {meta.license} : meta.license
const hasInteractiveCode = size === 'sm' && hasExamples && !isError
+ const labels = meta.labels
const rows = [
{ label: 'Language', tag: langId, content: langName },
@@ -222,7 +243,7 @@ const Model = ({ name, langId, langName, baseUrl, repo, compatibility, hasExampl
{accuracy &&
accuracy.map(({ label, items }, i) =>
!items ? null : (
-
+
{label}
From 31cebf66a8005464944d5363ec5e0263e0cd25d6 Mon Sep 17 00:00:00 2001
From: Ines Montani
Date: Mon, 30 Sep 2019 13:50:08 +0200
Subject: [PATCH 14/41] Update universe.json
---
website/meta/universe.json | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/website/meta/universe.json b/website/meta/universe.json
index 2997f9300..8d5227b38 100644
--- a/website/meta/universe.json
+++ b/website/meta/universe.json
@@ -1103,6 +1103,20 @@
"youtube": "WnGPv6HnBok",
"category": ["videos"]
},
+ {
+ "type": "education",
+ "id": "video-intro-to-nlp-episode-2",
+ "title": "Intro to NLP with spaCy",
+ "slogan": "Episode 2: Rule-based Matching",
+ "description": "In this new video series, data science instructor Vincent Warmerdam gets started with spaCy, an open-source library for Natural Language Processing in Python. His mission: building a system to automatically detect programming languages in large volumes of text. Follow his process from the first idea to a prototype all the way to data collection and training a statistical named entity recogntion model from scratch.",
+ "author": "Vincent Warmerdam",
+ "author_links": {
+ "twitter": "fishnets88",
+ "github": "koaning"
+ },
+ "youtube": "KL4-Mpgbahw",
+ "category": ["videos"]
+ },
{
"type": "education",
"id": "video-spacy-irl-entity-linking",
From 6316243941bdff2188ceea4517f5603ba314a691 Mon Sep 17 00:00:00 2001
From: Nipun Sadvilkar
Date: Wed, 30 Oct 2019 16:43:29 +0530
Subject: [PATCH 15/41] =?UTF-8?q?=E2=9C=A8=20=20project:=20pySBD=20-=20Pyt?=
=?UTF-8?q?hon=20Sentence=20Boundary=20Disambiguation=20(#4455)?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* ✨ project: pySBD - Python Sentence Boundary Disambiguation
* 📝 Update links and description
* 🐛 Fix missing comma
* Update universe.json
pysbd as a spacy component through entrypoints
* 🚨 Fix universe.json
* 📝 Update code_example
---
website/meta/universe.json | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)
diff --git a/website/meta/universe.json b/website/meta/universe.json
index bc8a27a1a..e64e462d8 100644
--- a/website/meta/universe.json
+++ b/website/meta/universe.json
@@ -1801,6 +1801,32 @@
"github": "microsoft"
}
},
+ {
+ "id": "python-sentence-boundary-disambiguation",
+ "title": "pySBD - python Sentence Boundary Disambiguation",
+ "slogan": "a rule-based sentence boundary detection that works out-of-the-box",
+ "github": "nipunsadvilkar/pySBD",
+ "description": "pySBD is 'real-world' sentence segmenter which extracts a reasonable sentences when the format and domain of the input text are unknown. It is a rules-based algorithm based on [The Golden Rules](https://s3.amazonaws.com/tm-town-nlp-resources/golden_rules.txt) - a set of tests to check accuracy of segmenter in regards to edge case scenarios developed by [TM-Town](https://www.tm-town.com/) dev team. pySBD is python port of ruby gem [Pragmatic Segmenter](https://github.com/diasks2/pragmatic_segmenter).",
+ "pip": "pysbd",
+ "category": ["scientific"],
+ "tags": ["sentence segmentation"],
+ "code_example": [
+ "from pysbd.util import PySBDFactory",
+ "",
+ "nlp = spacy.blank('en')",
+ "nlp.add_pipe(PySBDFactory(nlp))",
+ "",
+ "doc = nlp('My name is Jonas E. Smith. Please turn to p. 55.')",
+ "print(list(doc.sents))",
+ "# [My name is Jonas E. Smith., Please turn to p. 55.]"
+ ],
+ "author": "Nipun Sadvilkar",
+ "author_links": {
+ "twitter": "nipunsadvilkar",
+ "github": "nipunsadvilkar",
+ "website": "https://nipunsadvilkar.github.io"
+ }
+ },
{
"id": "cookiecutter-spacy-fastapi",
"title": "cookiecutter-spacy-fastapi",
From 4cbc172cc6ceeff608df5197b2402c2657465f73 Mon Sep 17 00:00:00 2001
From: Neel Kamath
Date: Wed, 30 Oct 2019 17:50:46 +0530
Subject: [PATCH 16/41] Add "spaCy Server" to spaCy Universe (#4553)
* Add "spaCy Server" to spaCy Universe
* Accept the spaCy Contributor Agreement
---
.github/contributors/neelkamath.md | 106 +++++++++++++++++++++++++++++
website/meta/universe.json | 20 ++++++
2 files changed, 126 insertions(+)
create mode 100644 .github/contributors/neelkamath.md
diff --git a/.github/contributors/neelkamath.md b/.github/contributors/neelkamath.md
new file mode 100644
index 000000000..76502e7c0
--- /dev/null
+++ b/.github/contributors/neelkamath.md
@@ -0,0 +1,106 @@
+# spaCy contributor agreement
+
+This spaCy Contributor Agreement (**"SCA"**) is based on the
+[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
+The SCA applies to any contribution that you make to any product or project
+managed by us (the **"project"**), and sets out the intellectual property rights
+you grant to us in the contributed materials. The term **"us"** shall mean
+[ExplosionAI GmbH](https://explosion.ai/legal). The term
+**"you"** shall mean the person or entity identified below.
+
+If you agree to be bound by these terms, fill in the information requested
+below and include the filled-in version with your first pull request, under the
+folder [`.github/contributors/`](/.github/contributors/). The name of the file
+should be your GitHub username, with the extension `.md`. For example, the user
+example_user would create the file `.github/contributors/example_user.md`.
+
+Read this agreement carefully before signing. These terms and conditions
+constitute a binding legal agreement.
+
+## Contributor Agreement
+
+1. The term "contribution" or "contributed materials" means any source code,
+object code, patch, tool, sample, graphic, specification, manual,
+documentation, or any other material posted or submitted by you to the project.
+
+2. With respect to any worldwide copyrights, or copyright applications and
+registrations, in your contribution:
+
+ * you hereby assign to us joint ownership, and to the extent that such
+ assignment is or becomes invalid, ineffective or unenforceable, you hereby
+ grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
+ royalty-free, unrestricted license to exercise all rights under those
+ copyrights. This includes, at our option, the right to sublicense these same
+ rights to third parties through multiple levels of sublicensees or other
+ licensing arrangements;
+
+ * you agree that each of us can do all things in relation to your
+ contribution as if each of us were the sole owners, and if one of us makes
+ a derivative work of your contribution, the one who makes the derivative
+ work (or has it made) will be the sole owner of that derivative work;
+
+ * you agree that you will not assert any moral rights in your contribution
+ against us, our licensees or transferees;
+
+ * you agree that we may register a copyright in your contribution and
+ exercise all ownership rights associated with it; and
+
+ * you agree that neither of us has any duty to consult with, obtain the
+ consent of, pay or render an accounting to the other for any use or
+ distribution of your contribution.
+
+3. With respect to any patents you own, or that you can license without payment
+to any third party, you hereby grant to us a perpetual, irrevocable,
+non-exclusive, worldwide, no-charge, royalty-free license to:
+
+ * make, have made, use, sell, offer to sell, import, and otherwise transfer
+ your contribution in whole or in part, alone or in combination with or
+ included in any product, work or materials arising out of the project to
+ which your contribution was submitted, and
+
+ * at our option, to sublicense these same rights to third parties through
+ multiple levels of sublicensees or other licensing arrangements.
+
+4. Except as set out above, you keep all right, title, and interest in your
+contribution. The rights that you grant to us under these terms are effective
+on the date you first submitted a contribution to us, even if your submission
+took place before the date you sign these terms.
+
+5. You covenant, represent, warrant and agree that:
+
+ * Each contribution that you submit is and shall be an original work of
+ authorship and you can legally grant the rights set out in this SCA;
+
+ * to the best of your knowledge, each contribution will not violate any
+ third party's copyrights, trademarks, patents, or other intellectual
+ property rights; and
+
+ * each contribution shall be in compliance with U.S. export control laws and
+ other applicable export and import laws. You agree to notify us if you
+ become aware of any circumstance which would make any of the foregoing
+ representations inaccurate in any respect. We may publicly disclose your
+ participation in the project, including the fact that you have signed the SCA.
+
+6. This SCA is governed by the laws of the State of California and applicable
+U.S. Federal law. Any choice of law rules will not apply.
+
+7. Please place an “x” on one of the applicable statements below. Please do NOT
+mark both statements:
+
+ * [x] I am signing on behalf of myself as an individual and no other person
+ or entity, including my employer, has or will have rights with respect to my
+ contributions.
+
+ * [ ] I am signing on behalf of my employer or a legal entity and I have the
+ actual authority to contractually bind that entity.
+
+## Contributor Details
+
+| Field | Entry |
+|------------------------------- | ---------------------- |
+| Name | Neel Kamath |
+| Company name (if applicable) | |
+| Title or role (if applicable) | |
+| Date | October 30, 2019 |
+| GitHub username | neelkamath |
+| Website (optional) | https://neelkamath.com |
diff --git a/website/meta/universe.json b/website/meta/universe.json
index e64e462d8..714d15b2f 100644
--- a/website/meta/universe.json
+++ b/website/meta/universe.json
@@ -1,5 +1,25 @@
{
"resources": [
+ {
+ "id": "spacy-server",
+ "title": "spaCy Server",
+ "slogan": "\uD83E\uDD9C Containerized HTTP API for spaCy NLP",
+ "description": "For developers who need programming language agnostic NLP, spaCy Server is a containerized HTTP API that provides industrial-strength natural language processing. Unlike other servers, our server is fast, idiomatic, and well documented.",
+ "github": "neelkamath/spacy-server",
+ "code_example": [
+ "docker run --rm -dp 8080:8080 neelkamath/spacy-server",
+ "curl http://localhost:8080/ner -H 'Content-Type: application/json' -d '{\"sections\": [\"My name is John Doe. I grew up in California.\"]}'"
+ ],
+ "code_language": "shell",
+ "url": "https://hub.docker.com/r/neelkamath/spacy-server",
+ "author": "Neel Kamath",
+ "author_links": {
+ "github": "neelkamath",
+ "website": "https://neelkamath.com"
+ },
+ "category": ["apis"],
+ "tags": ["docker"]
+ },
{
"id": "nlp-architect",
"title": "NLP Architect",
From d8c2365b04547f2380ec2c9896803a36fd6a59ae Mon Sep 17 00:00:00 2001
From: Ines Montani
Date: Wed, 30 Oct 2019 13:29:00 +0100
Subject: [PATCH 17/41] Update universe.json [ci skip]
---
website/meta/universe.json | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/website/meta/universe.json b/website/meta/universe.json
index 714d15b2f..749abc659 100644
--- a/website/meta/universe.json
+++ b/website/meta/universe.json
@@ -1824,9 +1824,9 @@
{
"id": "python-sentence-boundary-disambiguation",
"title": "pySBD - python Sentence Boundary Disambiguation",
- "slogan": "a rule-based sentence boundary detection that works out-of-the-box",
+ "slogan": "Rule-based sentence boundary detection that works out-of-the-box",
"github": "nipunsadvilkar/pySBD",
- "description": "pySBD is 'real-world' sentence segmenter which extracts a reasonable sentences when the format and domain of the input text are unknown. It is a rules-based algorithm based on [The Golden Rules](https://s3.amazonaws.com/tm-town-nlp-resources/golden_rules.txt) - a set of tests to check accuracy of segmenter in regards to edge case scenarios developed by [TM-Town](https://www.tm-town.com/) dev team. pySBD is python port of ruby gem [Pragmatic Segmenter](https://github.com/diasks2/pragmatic_segmenter).",
+ "description": "pySBD is 'real-world' sentence segmenter which extracts reasonable sentences when the format and domain of the input text are unknown. It is a rules-based algorithm based on [The Golden Rules](https://s3.amazonaws.com/tm-town-nlp-resources/golden_rules.txt) - a set of tests to check accuracy of segmenter in regards to edge case scenarios developed by [TM-Town](https://www.tm-town.com/) dev team. pySBD is python port of ruby gem [Pragmatic Segmenter](https://github.com/diasks2/pragmatic_segmenter).",
"pip": "pysbd",
"category": ["scientific"],
"tags": ["sentence segmentation"],
@@ -1845,7 +1845,7 @@
"twitter": "nipunsadvilkar",
"github": "nipunsadvilkar",
"website": "https://nipunsadvilkar.github.io"
- }
+ }
},
{
"id": "cookiecutter-spacy-fastapi",
From 86c3185f34e02ab81356e89b4848046c028151b6 Mon Sep 17 00:00:00 2001
From: Ines Montani
Date: Wed, 30 Oct 2019 14:31:40 +0100
Subject: [PATCH 18/41] Update syntax iterators [ci skip]
---
website/docs/usage/adding-languages.md | 17 +++++++++++------
1 file changed, 11 insertions(+), 6 deletions(-)
diff --git a/website/docs/usage/adding-languages.md b/website/docs/usage/adding-languages.md
index 36e6e6809..4b12c6be1 100644
--- a/website/docs/usage/adding-languages.md
+++ b/website/docs/usage/adding-languages.md
@@ -402,12 +402,17 @@ iterators:
> assert chunks[1].text == "another phrase"
> ```
-| Language | Code | Source |
-| -------- | ---- | ----------------------------------------------------------------------------------------------------------------- |
-| English | `en` | [`lang/en/syntax_iterators.py`](https://github.com/explosion/spaCy/tree/master/spacy/lang/en/syntax_iterators.py) |
-| German | `de` | [`lang/de/syntax_iterators.py`](https://github.com/explosion/spaCy/tree/master/spacy/lang/de/syntax_iterators.py) |
-| French | `fr` | [`lang/fr/syntax_iterators.py`](https://github.com/explosion/spaCy/tree/master/spacy/lang/fr/syntax_iterators.py) |
-| Spanish | `es` | [`lang/es/syntax_iterators.py`](https://github.com/explosion/spaCy/tree/master/spacy/lang/es/syntax_iterators.py) |
+| Language | Code | Source |
+| ---------------- | ---- | ----------------------------------------------------------------------------------------------------------------- |
+| English | `en` | [`lang/en/syntax_iterators.py`](https://github.com/explosion/spaCy/tree/master/spacy/lang/en/syntax_iterators.py) |
+| German | `de` | [`lang/de/syntax_iterators.py`](https://github.com/explosion/spaCy/tree/master/spacy/lang/de/syntax_iterators.py) |
+| French | `fr` | [`lang/fr/syntax_iterators.py`](https://github.com/explosion/spaCy/tree/master/spacy/lang/fr/syntax_iterators.py) |
+| Spanish | `es` | [`lang/es/syntax_iterators.py`](https://github.com/explosion/spaCy/tree/master/spacy/lang/es/syntax_iterators.py) |
+| Greek | `el` | [`lang/el/syntax_iterators.py`](https://github.com/explosion/spaCy/tree/master/spacy/lang/el/syntax_iterators.py) |
+| Norwegian Bokmål | `nb` | [`lang/nb/syntax_iterators.py`](https://github.com/explosion/spaCy/tree/master/spacy/lang/nb/syntax_iterators.py) |
+| Swedish | `sv` | [`lang/sv/syntax_iterators.py`](https://github.com/explosion/spaCy/tree/master/spacy/lang/sv/syntax_iterators.py) |
+| Indonesian | `id` | [`lang/id/syntax_iterators.py`](https://github.com/explosion/spaCy/tree/master/spacy/lang/id/syntax_iterators.py) |
+| Persian | `fa` | [`lang/fa/syntax_iterators.py`](https://github.com/explosion/spaCy/tree/master/spacy/lang/fa/syntax_iterators.py) |
### Lemmatizer {#lemmatizer new="2"}
From aa0616bafa86cd1400ea97c020640722f64c347b Mon Sep 17 00:00:00 2001
From: Tiljander <35637838+Tiljander@users.noreply.github.com>
Date: Thu, 26 Mar 2020 13:13:22 +0100
Subject: [PATCH 19/41] Describing priority rules for overlapping matches
(#5197)
* Describing priority rules for overlapping matches
* Create Tiljander.md
* Describing priority rules for overlapping matches
* Update website/docs/api/entityruler.md
Co-Authored-By: Ines Montani
Co-authored-by: Ines Montani
---
.github/contributors/Tiljander.md | 106 ++++++++++++++++++++++
website/docs/api/entityruler.md | 3 +-
website/docs/usage/rule-based-matching.md | 5 +-
3 files changed, 112 insertions(+), 2 deletions(-)
create mode 100644 .github/contributors/Tiljander.md
diff --git a/.github/contributors/Tiljander.md b/.github/contributors/Tiljander.md
new file mode 100644
index 000000000..89e70efa5
--- /dev/null
+++ b/.github/contributors/Tiljander.md
@@ -0,0 +1,106 @@
+# spaCy contributor agreement
+
+This spaCy Contributor Agreement (**"SCA"**) is based on the
+[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
+The SCA applies to any contribution that you make to any product or project
+managed by us (the **"project"**), and sets out the intellectual property rights
+you grant to us in the contributed materials. The term **"us"** shall mean
+[ExplosionAI UG (haftungsbeschränkt)](https://explosion.ai/legal). The term
+**"you"** shall mean the person or entity identified below.
+
+If you agree to be bound by these terms, fill in the information requested
+below and include the filled-in version with your first pull request, under the
+folder [`.github/contributors/`](/.github/contributors/). The name of the file
+should be your GitHub username, with the extension `.md`. For example, the user
+example_user would create the file `.github/contributors/example_user.md`.
+
+Read this agreement carefully before signing. These terms and conditions
+constitute a binding legal agreement.
+
+## Contributor Agreement
+
+1. The term "contribution" or "contributed materials" means any source code,
+object code, patch, tool, sample, graphic, specification, manual,
+documentation, or any other material posted or submitted by you to the project.
+
+2. With respect to any worldwide copyrights, or copyright applications and
+registrations, in your contribution:
+
+ * you hereby assign to us joint ownership, and to the extent that such
+ assignment is or becomes invalid, ineffective or unenforceable, you hereby
+ grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
+ royalty-free, unrestricted license to exercise all rights under those
+ copyrights. This includes, at our option, the right to sublicense these same
+ rights to third parties through multiple levels of sublicensees or other
+ licensing arrangements;
+
+ * you agree that each of us can do all things in relation to your
+ contribution as if each of us were the sole owners, and if one of us makes
+ a derivative work of your contribution, the one who makes the derivative
+ work (or has it made) will be the sole owner of that derivative work;
+
+ * you agree that you will not assert any moral rights in your contribution
+ against us, our licensees or transferees;
+
+ * you agree that we may register a copyright in your contribution and
+ exercise all ownership rights associated with it; and
+
+ * you agree that neither of us has any duty to consult with, obtain the
+ consent of, pay or render an accounting to the other for any use or
+ distribution of your contribution.
+
+3. With respect to any patents you own, or that you can license without payment
+to any third party, you hereby grant to us a perpetual, irrevocable,
+non-exclusive, worldwide, no-charge, royalty-free license to:
+
+ * make, have made, use, sell, offer to sell, import, and otherwise transfer
+ your contribution in whole or in part, alone or in combination with or
+ included in any product, work or materials arising out of the project to
+ which your contribution was submitted, and
+
+ * at our option, to sublicense these same rights to third parties through
+ multiple levels of sublicensees or other licensing arrangements.
+
+4. Except as set out above, you keep all right, title, and interest in your
+contribution. The rights that you grant to us under these terms are effective
+on the date you first submitted a contribution to us, even if your submission
+took place before the date you sign these terms.
+
+5. You covenant, represent, warrant and agree that:
+
+ * Each contribution that you submit is and shall be an original work of
+ authorship and you can legally grant the rights set out in this SCA;
+
+ * to the best of your knowledge, each contribution will not violate any
+ third party's copyrights, trademarks, patents, or other intellectual
+ property rights; and
+
+ * each contribution shall be in compliance with U.S. export control laws and
+ other applicable export and import laws. You agree to notify us if you
+ become aware of any circumstance which would make any of the foregoing
+ representations inaccurate in any respect. We may publicly disclose your
+ participation in the project, including the fact that you have signed the SCA.
+
+6. This SCA is governed by the laws of the State of California and applicable
+U.S. Federal law. Any choice of law rules will not apply.
+
+7. Please place an “x” on one of the applicable statements below. Please do NOT
+mark both statements:
+
+ * [x] I am signing on behalf of myself as an individual and no other person
+ or entity, including my employer, has or will have rights with respect to my
+ contributions.
+
+ * [ ] I am signing on behalf of my employer or a legal entity and I have the
+ actual authority to contractually bind that entity.
+
+## Contributor Details
+
+| Field | Entry |
+|------------------------------- | -------------------- |
+| Name | Henrik Tiljander |
+| Company name (if applicable) | |
+| Title or role (if applicable) | |
+| Date | 24/3/2020 |
+| GitHub username | Tiljander |
+| Website (optional) | |
diff --git a/website/docs/api/entityruler.md b/website/docs/api/entityruler.md
index af3db0dcb..0fd24897d 100644
--- a/website/docs/api/entityruler.md
+++ b/website/docs/api/entityruler.md
@@ -83,7 +83,8 @@ Find matches in the `Doc` and add them to the `doc.ents`. Typically, this
happens automatically after the component has been added to the pipeline using
[`nlp.add_pipe`](/api/language#add_pipe). If the entity ruler was initialized
with `overwrite_ents=True`, existing entities will be replaced if they overlap
-with the matches.
+with the matches. When matches overlap in a Doc, the entity ruler prioritizes longer
+patterns over shorter ones, and if they are equally long, the match occurring first in the Doc is chosen.
> #### Example
>
diff --git a/website/docs/usage/rule-based-matching.md b/website/docs/usage/rule-based-matching.md
index 0ab74034e..1db2405d1 100644
--- a/website/docs/usage/rule-based-matching.md
+++ b/website/docs/usage/rule-based-matching.md
@@ -968,7 +968,10 @@ pattern. The entity ruler accepts two types of patterns:
The [`EntityRuler`](/api/entityruler) is a pipeline component that's typically
added via [`nlp.add_pipe`](/api/language#add_pipe). When the `nlp` object is
called on a text, it will find matches in the `doc` and add them as entities to
-the `doc.ents`, using the specified pattern label as the entity label.
+the `doc.ents`, using the specified pattern label as the entity label. If any
+matches overlap, the pattern matching the most tokens takes priority. If two
+matches are equally long, the match occurring first in the Doc is
+chosen.
```python
### {executable="true"}
From e47010bf3c9421bbe9e642ca68ae93455ea03d49 Mon Sep 17 00:00:00 2001
From: vincent d warmerdam
Date: Mon, 6 Apr 2020 11:29:30 +0200
Subject: [PATCH 20/41] add "whatlies" to spaCy universe (#5252)
* Add "whatlies"
We're releasing it on our side officially on the 16th of April. If possible, let's announce around the same time :)
* sign contributor thing
* Added fancy gif
as the image
* Update universe.json
Spelling error and spaCy clarification.
---
.github/contributors/koaning.md | 106 ++++++++++++++++++++++++++++++++
website/meta/universe.json | 28 +++++++++
2 files changed, 134 insertions(+)
create mode 100644 .github/contributors/koaning.md
diff --git a/.github/contributors/koaning.md b/.github/contributors/koaning.md
new file mode 100644
index 000000000..ddb28cab0
--- /dev/null
+++ b/.github/contributors/koaning.md
@@ -0,0 +1,106 @@
+# spaCy contributor agreement
+
+This spaCy Contributor Agreement (**"SCA"**) is based on the
+[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
+The SCA applies to any contribution that you make to any product or project
+managed by us (the **"project"**), and sets out the intellectual property rights
+you grant to us in the contributed materials. The term **"us"** shall mean
+[ExplosionAI UG (haftungsbeschränkt)](https://explosion.ai/legal). The term
+**"you"** shall mean the person or entity identified below.
+
+If you agree to be bound by these terms, fill in the information requested
+below and include the filled-in version with your first pull request, under the
+folder [`.github/contributors/`](/.github/contributors/). The name of the file
+should be your GitHub username, with the extension `.md`. For example, the user
+example_user would create the file `.github/contributors/example_user.md`.
+
+Read this agreement carefully before signing. These terms and conditions
+constitute a binding legal agreement.
+
+## Contributor Agreement
+
+1. The term "contribution" or "contributed materials" means any source code,
+object code, patch, tool, sample, graphic, specification, manual,
+documentation, or any other material posted or submitted by you to the project.
+
+2. With respect to any worldwide copyrights, or copyright applications and
+registrations, in your contribution:
+
+ * you hereby assign to us joint ownership, and to the extent that such
+ assignment is or becomes invalid, ineffective or unenforceable, you hereby
+ grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
+ royalty-free, unrestricted license to exercise all rights under those
+ copyrights. This includes, at our option, the right to sublicense these same
+ rights to third parties through multiple levels of sublicensees or other
+ licensing arrangements;
+
+ * you agree that each of us can do all things in relation to your
+ contribution as if each of us were the sole owners, and if one of us makes
+ a derivative work of your contribution, the one who makes the derivative
+ work (or has it made) will be the sole owner of that derivative work;
+
+ * you agree that you will not assert any moral rights in your contribution
+ against us, our licensees or transferees;
+
+ * you agree that we may register a copyright in your contribution and
+ exercise all ownership rights associated with it; and
+
+ * you agree that neither of us has any duty to consult with, obtain the
+ consent of, pay or render an accounting to the other for any use or
+ distribution of your contribution.
+
+3. With respect to any patents you own, or that you can license without payment
+to any third party, you hereby grant to us a perpetual, irrevocable,
+non-exclusive, worldwide, no-charge, royalty-free license to:
+
+ * make, have made, use, sell, offer to sell, import, and otherwise transfer
+ your contribution in whole or in part, alone or in combination with or
+ included in any product, work or materials arising out of the project to
+ which your contribution was submitted, and
+
+ * at our option, to sublicense these same rights to third parties through
+ multiple levels of sublicensees or other licensing arrangements.
+
+4. Except as set out above, you keep all right, title, and interest in your
+contribution. The rights that you grant to us under these terms are effective
+on the date you first submitted a contribution to us, even if your submission
+took place before the date you sign these terms.
+
+5. You covenant, represent, warrant and agree that:
+
+ * Each contribution that you submit is and shall be an original work of
+ authorship and you can legally grant the rights set out in this SCA;
+
+ * to the best of your knowledge, each contribution will not violate any
+ third party's copyrights, trademarks, patents, or other intellectual
+ property rights; and
+
+ * each contribution shall be in compliance with U.S. export control laws and
+ other applicable export and import laws. You agree to notify us if you
+ become aware of any circumstance which would make any of the foregoing
+ representations inaccurate in any respect. We may publicly disclose your
+ participation in the project, including the fact that you have signed the SCA.
+
+6. This SCA is governed by the laws of the State of California and applicable
+U.S. Federal law. Any choice of law rules will not apply.
+
+7. Please place an “x” on one of the applicable statements below. Please do NOT
+mark both statements:
+
+ * [x] I am signing on behalf of myself as an individual and no other person
+ or entity, including my employer, has or will have rights with respect to my
+ contributions.
+
+ * [ ] I am signing on behalf of my employer or a legal entity and I have the
+ actual authority to contractually bind that entity.
+
+## Contributor Details
+
+| Field | Entry |
+|------------------------------- | ------------------------ |
+| Name | Vincent D. Warmerdam |
+| Company name (if applicable) | |
+| Title or role (if applicable) | Data Person |
+| Date | 2020-03-01 |
+| GitHub username | koaning |
+| Website (optional) | https://koaning.io |
diff --git a/website/meta/universe.json b/website/meta/universe.json
index 23d052bb9..8071374f7 100644
--- a/website/meta/universe.json
+++ b/website/meta/universe.json
@@ -1,5 +1,33 @@
{
"resources": [
+ {
+ "id": "whatlies",
+ "title": "whatlies",
+ "slogan": "Make interactive visualisations to figure out 'what lies' in word embeddings.",
+ "description": "This small library offers tools to make it easier to visualise both word embeddings and operations on them. It supports spaCy's prebuilt models as a first-class citizen, and also offers support for sense2vec. There's a convenient API to perform linear algebra, as well as support for popular transformations like PCA/UMAP/etc.",
+ "github": "rasahq/whatlies",
+ "pip": "whatlies",
+ "thumb": "https://i.imgur.com/rOkOiLv.png",
+ "image": "https://raw.githubusercontent.com/RasaHQ/whatlies/master/docs/gif-two.gif",
+ "code_example": [
+ "from whatlies import EmbeddingSet",
+ "from whatlies.language import SpacyLanguage",
+ "",
+ "lang = SpacyLanguage('en_core_web_md')",
+ "words = ['cat', 'dog', 'fish', 'kitten', 'man', 'woman', ',
+ 'king', 'queen', 'doctor', 'nurse']",
+ "",
+ "emb = lang[words]",
+ "emb.plot_interactive(x_axis='man', y_axis='woman')"
+ ],
+ "category": ["visualizers", "research"],
+ "author": "Vincent D. Warmerdam",
+ "author_links": {
+ "twitter": "fishnets88",
+ "github": "koaning",
+ "website": "https://koaning.io"
+ }
+ },
{
"id": "spacy-stanza",
"title": "spacy-stanza",
From 528c4f6b2ee950b82cfd0ead672d7620cddd1642 Mon Sep 17 00:00:00 2001
From: Sofie Van Landeghem
Date: Fri, 3 Apr 2020 13:01:43 +0200
Subject: [PATCH 21/41] Small doc fixes (#5250)
* fix link
* torchtext instead of tochtext
---
website/docs/usage/linguistic-features.md | 2 +-
website/meta/universe.json | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/website/docs/usage/linguistic-features.md b/website/docs/usage/linguistic-features.md
index 685619c88..59712939a 100644
--- a/website/docs/usage/linguistic-features.md
+++ b/website/docs/usage/linguistic-features.md
@@ -1303,7 +1303,7 @@ with doc.retokenize() as retokenizer:
### Overwriting custom extension attributes {#retokenization-extensions}
If you've registered custom
-[extension attributes](/usage/processing-pipelines##custom-components-attributes),
+[extension attributes](/usage/processing-pipelines#custom-components-attributes),
you can overwrite them during tokenization by providing a dictionary of
attribute names mapped to new values as the `"_"` key in the `attrs`. For
merging, you need to provide one dictionary of attributes for the resulting
diff --git a/website/meta/universe.json b/website/meta/universe.json
index 8071374f7..bbd67e8a6 100644
--- a/website/meta/universe.json
+++ b/website/meta/universe.json
@@ -669,7 +669,7 @@
"tags": ["chatbots"]
},
{
- "id": "tochtext",
+ "id": "torchtext",
"title": "torchtext",
"slogan": "Data loaders and abstractions for text and NLP",
"github": "pytorch/text",
From 81d6aee6e791e35f54f9369f14bc2399d6c26380 Mon Sep 17 00:00:00 2001
From: svlandeg
Date: Tue, 7 Apr 2020 14:11:31 +0200
Subject: [PATCH 22/41] fix json
---
website/meta/universe.json | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/website/meta/universe.json b/website/meta/universe.json
index bbd67e8a6..b5e1dbde0 100644
--- a/website/meta/universe.json
+++ b/website/meta/universe.json
@@ -14,8 +14,7 @@
"from whatlies.language import SpacyLanguage",
"",
"lang = SpacyLanguage('en_core_web_md')",
- "words = ['cat', 'dog', 'fish', 'kitten', 'man', 'woman', ',
- 'king', 'queen', 'doctor', 'nurse']",
+ "words = ['cat', 'dog', 'fish', 'kitten', 'man', 'woman', 'king', 'queen', 'doctor', 'nurse']",
"",
"emb = lang[words]",
"emb.plot_interactive(x_axis='man', y_axis='woman')"
From 8f431ad97ce954bed2365b2417cfda73785d5a29 Mon Sep 17 00:00:00 2001
From: Sofie Van Landeghem
Date: Tue, 14 Apr 2020 14:53:47 +0200
Subject: [PATCH 23/41] tag-map-path since 2.2.4 instead of 2.2.3 (#5289)
---
website/docs/api/cli.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/api/cli.md b/website/docs/api/cli.md
index f067ba5a7..4b1b37bc5 100644
--- a/website/docs/api/cli.md
+++ b/website/docs/api/cli.md
@@ -189,7 +189,7 @@ $ python -m spacy debug-data [lang] [train_path] [dev_path] [--base-model] [--pi
| `lang` | positional | Model language. |
| `train_path` | positional | Location of JSON-formatted training data. Can be a file or a directory of files. |
| `dev_path` | positional | Location of JSON-formatted development data for evaluation. Can be a file or a directory of files. |
-| `--tag-map-path`, `-tm` 2.2.3 | option | Location of JSON-formatted tag map. |
+| `--tag-map-path`, `-tm` 2.2.4 | option | Location of JSON-formatted tag map. |
| `--base-model`, `-b` | option | Optional name of base model to update. Can be any loadable spaCy model. |
| `--pipeline`, `-p` | option | Comma-separated names of pipeline components to train. Defaults to `'tagger,parser,ner'`. |
| `--ignore-warnings`, `-IW` | flag | Ignore warnings, only show stats and errors. |
From 51207c9417028027ca84158f87f1e8671ec3d0fa Mon Sep 17 00:00:00 2001
From: Ines Montani
Date: Thu, 16 Apr 2020 14:45:25 +0200
Subject: [PATCH 24/41] Update netlify.toml [ci skip]
---
netlify.toml | 62 ++++++++++++++++++++++++++--------------------------
1 file changed, 31 insertions(+), 31 deletions(-)
diff --git a/netlify.toml b/netlify.toml
index 45bd2c3b6..be809f1d4 100644
--- a/netlify.toml
+++ b/netlify.toml
@@ -7,42 +7,42 @@ redirects = [
{from = "https://alpha.spacy.io/*", to = "https://spacy.io", force = true},
{from = "http://alpha.spacy.io/*", to = "https://spacy.io", force = true},
# Old demos
- {from = "/demos/*", to = "https://explosion.ai/demos/:splat"},
+ {from = "/demos/*", to = "https://explosion.ai/demos/:splat", force = true},
# Old blog
- {from = "/blog/*", to = "https://explosion.ai/blog/:splat"},
- {from = "/feed", to = "https://explosion.ai/feed"},
- {from = "/feed.xml", to = "https://explosion.ai/feed"},
+ {from = "/blog/*", to = "https://explosion.ai/blog/:splat", force = true},
+ {from = "/feed", to = "https://explosion.ai/feed", force = true},
+ {from = "/feed.xml", to = "https://explosion.ai/feed", force = true},
# Old documentation pages (1.x)
- {from = "/docs/usage/processing-text", to = "/usage/linguistic-features"},
- {from = "/docs/usage/deep-learning", to = "/usage/training"},
- {from = "/docs/usage/pos-tagging", to = "/usage/linguistic-features#pos-tagging"},
- {from = "/docs/usage/dependency-parse", to = "/usage/linguistic-features#dependency-parse"},
- {from = "/docs/usage/entity-recognition", to = "/usage/linguistic-features#named-entities"},
- {from = "/docs/usage/word-vectors-similarities", to = "/usage/vectors-similarity"},
- {from = "/docs/usage/customizing-tokenizer", to = "/usage/linguistic-features#tokenization"},
- {from = "/docs/usage/language-processing-pipeline", to = "/usage/processing-pipelines"},
- {from = "/docs/usage/customizing-pipeline", to = "/usage/processing-pipelines"},
- {from = "/docs/usage/training-ner", to = "/usage/training#ner"},
- {from = "/docs/usage/tutorials", to = "/usage/examples"},
- {from = "/docs/usage/data-model", to = "/api"},
- {from = "/docs/usage/cli", to = "/api/cli"},
- {from = "/docs/usage/lightning-tour", to = "/usage/spacy-101#lightning-tour"},
- {from = "/docs/api/language-models", to = "/usage/models#languages"},
- {from = "/docs/api/spacy", to = "/docs/api/top-level"},
- {from = "/docs/api/displacy", to = "/api/top-level#displacy"},
- {from = "/docs/api/util", to = "/api/top-level#util"},
- {from = "/docs/api/features", to = "/models/#architecture"},
- {from = "/docs/api/philosophy", to = "/usage/spacy-101"},
- {from = "/docs/usage/showcase", to = "/universe"},
- {from = "/tutorials/load-new-word-vectors", to = "/usage/vectors-similarity#custom"},
- {from = "/tutorials", to = "/usage/examples"},
+ {from = "/docs/usage/processing-text", to = "/usage/linguistic-features", force = true},
+ {from = "/docs/usage/deep-learning", to = "/usage/training", force = true},
+ {from = "/docs/usage/pos-tagging", to = "/usage/linguistic-features#pos-tagging", force = true},
+ {from = "/docs/usage/dependency-parse", to = "/usage/linguistic-features#dependency-parse", force = true},
+ {from = "/docs/usage/entity-recognition", to = "/usage/linguistic-features#named-entities", force = true},
+ {from = "/docs/usage/word-vectors-similarities", to = "/usage/vectors-similarity", force = true},
+ {from = "/docs/usage/customizing-tokenizer", to = "/usage/linguistic-features#tokenization", force = true},
+ {from = "/docs/usage/language-processing-pipeline", to = "/usage/processing-pipelines", force = true},
+ {from = "/docs/usage/customizing-pipeline", to = "/usage/processing-pipelines", force = true},
+ {from = "/docs/usage/training-ner", to = "/usage/training#ner", force = true},
+ {from = "/docs/usage/tutorials", to = "/usage/examples", force = true},
+ {from = "/docs/usage/data-model", to = "/api", force = true},
+ {from = "/docs/usage/cli", to = "/api/cli", force = true},
+ {from = "/docs/usage/lightning-tour", to = "/usage/spacy-101#lightning-tour", force = true},
+ {from = "/docs/api/language-models", to = "/usage/models#languages", force = true},
+ {from = "/docs/api/spacy", to = "/docs/api/top-level", force = true},
+ {from = "/docs/api/displacy", to = "/api/top-level#displacy", force = true},
+ {from = "/docs/api/util", to = "/api/top-level#util", force = true},
+ {from = "/docs/api/features", to = "/models/#architecture", force = true},
+ {from = "/docs/api/philosophy", to = "/usage/spacy-101", force = true},
+ {from = "/docs/usage/showcase", to = "/universe", force = true},
+ {from = "/tutorials/load-new-word-vectors", to = "/usage/vectors-similarity#custom", force = true},
+ {from = "/tutorials", to = "/usage/examples", force = true},
# Rewrite all other docs pages to /
{from = "/docs/*", to = "/:splat"},
# Updated documentation pages
- {from = "/usage/resources", to = "/universe"},
- {from = "/usage/lightning-tour", to = "/usage/spacy-101#lightning-tour"},
- {from = "/usage/linguistic-features#rule-based-matching", to = "/usage/rule-based-matching"},
- {from = "/models/comparison", to = "/models"},
+ {from = "/usage/resources", to = "/universe", force = true},
+ {from = "/usage/lightning-tour", to = "/usage/spacy-101#lightning-tour", force = true},
+ {from = "/usage/linguistic-features#rule-based-matching", to = "/usage/rule-based-matching", force = true},
+ {from = "/models/comparison", to = "/models", force = true},
{from = "/api/#section-cython", to = "/api/cython", force = true},
{from = "/api/#cython", to = "/api/cython", force = true},
{from = "/api/sentencesegmenter", to="/api/sentencizer"},
From cafe94ee045fe9937cadbdc0e1c96d6eabde5dec Mon Sep 17 00:00:00 2001
From: Sofie Van Landeghem
Date: Wed, 29 Apr 2020 12:53:53 +0200
Subject: [PATCH 25/41] Update NEL examples and documentation (#5370)
* simplify creation of KB by skipping dim reduction
* small fixes to train EL example script
* add KB creation and NEL training example scripts to example section
* update descriptions of example scripts in the documentation
* moving wiki_entity_linking folder from bin to projects
* remove test for wiki NEL functionality that is being moved
# Conflicts:
# bin/wiki_entity_linking/wikipedia_processor.py
---
bin/wiki_entity_linking/README.md | 37 --
bin/wiki_entity_linking/__init__.py | 12 -
.../entity_linker_evaluation.py | 204 -------
bin/wiki_entity_linking/kb_creator.py | 161 -----
bin/wiki_entity_linking/train_descriptions.py | 152 -----
bin/wiki_entity_linking/wiki_io.py | 127 ----
bin/wiki_entity_linking/wiki_namespaces.py | 128 ----
.../wikidata_pretrain_kb.py | 179 ------
bin/wiki_entity_linking/wikidata_processor.py | 154 -----
.../wikidata_train_entity_linker.py | 172 ------
.../wikipedia_processor.py | 565 ------------------
.../training/{pretrain_kb.py => create_kb.py} | 43 +-
examples/training/train_entity_linker.py | 10 +-
website/docs/usage/examples.md | 21 +
website/docs/usage/linguistic-features.md | 4 +-
website/docs/usage/training.md | 22 +-
16 files changed, 50 insertions(+), 1941 deletions(-)
delete mode 100644 bin/wiki_entity_linking/README.md
delete mode 100644 bin/wiki_entity_linking/__init__.py
delete mode 100644 bin/wiki_entity_linking/entity_linker_evaluation.py
delete mode 100644 bin/wiki_entity_linking/kb_creator.py
delete mode 100644 bin/wiki_entity_linking/train_descriptions.py
delete mode 100644 bin/wiki_entity_linking/wiki_io.py
delete mode 100644 bin/wiki_entity_linking/wiki_namespaces.py
delete mode 100644 bin/wiki_entity_linking/wikidata_pretrain_kb.py
delete mode 100644 bin/wiki_entity_linking/wikidata_processor.py
delete mode 100644 bin/wiki_entity_linking/wikidata_train_entity_linker.py
rename examples/training/{pretrain_kb.py => create_kb.py} (75%)
diff --git a/bin/wiki_entity_linking/README.md b/bin/wiki_entity_linking/README.md
deleted file mode 100644
index 4e4af5c21..000000000
--- a/bin/wiki_entity_linking/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
-## Entity Linking with Wikipedia and Wikidata
-
-### Step 1: Create a Knowledge Base (KB) and training data
-
-Run `wikidata_pretrain_kb.py`
-* This takes as input the locations of a **Wikipedia and a Wikidata dump**, and produces a **KB directory** + **training file**
- * WikiData: get `latest-all.json.bz2` from https://dumps.wikimedia.org/wikidatawiki/entities/
- * Wikipedia: get `enwiki-latest-pages-articles-multistream.xml.bz2` from https://dumps.wikimedia.org/enwiki/latest/ (or for any other language)
-* You can set the filtering parameters for KB construction:
- * `max_per_alias` (`-a`): (max) number of candidate entities in the KB per alias/synonym
- * `min_freq` (`-f`): threshold of number of times an entity should occur in the corpus to be included in the KB
- * `min_pair` (`-c`): threshold of number of times an entity+alias combination should occur in the corpus to be included in the KB
-* Further parameters to set:
- * `descriptions_from_wikipedia` (`-wp`): whether to parse descriptions from Wikipedia (`True`) or Wikidata (`False`)
- * `entity_vector_length` (`-v`): length of the pre-trained entity description vectors
- * `lang` (`-la`): language for which to fetch Wikidata information (as the dump contains all languages)
-
-Quick testing and rerunning:
-* When trying out the pipeline for a quick test, set `limit_prior` (`-lp`), `limit_train` (`-lt`) and/or `limit_wd` (`-lw`) to read only parts of the dumps instead of everything.
- * e.g. set `-lt 20000 -lp 2000 -lw 3000 -f 1`
-* If you only want to (re)run certain parts of the pipeline, just remove the corresponding files and they will be recalculated or reparsed.
-
-
-### Step 2: Train an Entity Linking model
-
-Run `wikidata_train_entity_linker.py`
-* This takes the **KB directory** produced by Step 1, and trains an **Entity Linking model**
-* Specify the output directory (`-o`) in which the final, trained model will be saved
-* You can set the learning parameters for the EL training:
- * `epochs` (`-e`): number of training iterations
- * `dropout` (`-p`): dropout rate
- * `lr` (`-n`): learning rate
- * `l2` (`-r`): L2 regularization
-* Specify the number of training and dev testing articles with `train_articles` (`-t`) and `dev_articles` (`-d`) respectively
- * If not specified, the full dataset will be processed - this may take a LONG time !
-* Further parameters to set:
- * `labels_discard` (`-l`): NER label types to discard during training
diff --git a/bin/wiki_entity_linking/__init__.py b/bin/wiki_entity_linking/__init__.py
deleted file mode 100644
index de486bbcf..000000000
--- a/bin/wiki_entity_linking/__init__.py
+++ /dev/null
@@ -1,12 +0,0 @@
-TRAINING_DATA_FILE = "gold_entities.jsonl"
-KB_FILE = "kb"
-KB_MODEL_DIR = "nlp_kb"
-OUTPUT_MODEL_DIR = "nlp"
-
-PRIOR_PROB_PATH = "prior_prob.csv"
-ENTITY_DEFS_PATH = "entity_defs.csv"
-ENTITY_FREQ_PATH = "entity_freq.csv"
-ENTITY_ALIAS_PATH = "entity_alias.csv"
-ENTITY_DESCR_PATH = "entity_descriptions.csv"
-
-LOG_FORMAT = '%(asctime)s - %(levelname)s - %(name)s - %(message)s'
diff --git a/bin/wiki_entity_linking/entity_linker_evaluation.py b/bin/wiki_entity_linking/entity_linker_evaluation.py
deleted file mode 100644
index 2aeffbfc2..000000000
--- a/bin/wiki_entity_linking/entity_linker_evaluation.py
+++ /dev/null
@@ -1,204 +0,0 @@
-# coding: utf-8
-from __future__ import unicode_literals
-
-import logging
-import random
-from tqdm import tqdm
-from collections import defaultdict
-
-logger = logging.getLogger(__name__)
-
-
-class Metrics(object):
- true_pos = 0
- false_pos = 0
- false_neg = 0
-
- def update_results(self, true_entity, candidate):
- candidate_is_correct = true_entity == candidate
-
- # Assume that we have no labeled negatives in the data (i.e. cases where true_entity is "NIL")
- # Therefore, if candidate_is_correct then we have a true positive and never a true negative.
- self.true_pos += candidate_is_correct
- self.false_neg += not candidate_is_correct
- if candidate and candidate not in {"", "NIL"}:
- # A wrong prediction (e.g. Q42 != Q3) counts both as a FP as well as a FN.
- self.false_pos += not candidate_is_correct
-
- def calculate_precision(self):
- if self.true_pos == 0:
- return 0.0
- else:
- return self.true_pos / (self.true_pos + self.false_pos)
-
- def calculate_recall(self):
- if self.true_pos == 0:
- return 0.0
- else:
- return self.true_pos / (self.true_pos + self.false_neg)
-
- def calculate_fscore(self):
- p = self.calculate_precision()
- r = self.calculate_recall()
- if p + r == 0:
- return 0.0
- else:
- return 2 * p * r / (p + r)
-
-
-class EvaluationResults(object):
- def __init__(self):
- self.metrics = Metrics()
- self.metrics_by_label = defaultdict(Metrics)
-
- def update_metrics(self, ent_label, true_entity, candidate):
- self.metrics.update_results(true_entity, candidate)
- self.metrics_by_label[ent_label].update_results(true_entity, candidate)
-
- def report_metrics(self, model_name):
- model_str = model_name.title()
- recall = self.metrics.calculate_recall()
- precision = self.metrics.calculate_precision()
- fscore = self.metrics.calculate_fscore()
- return (
- "{}: ".format(model_str)
- + "F-score = {} | ".format(round(fscore, 3))
- + "Recall = {} | ".format(round(recall, 3))
- + "Precision = {} | ".format(round(precision, 3))
- + "F-score by label = {}".format(
- {k: v.calculate_fscore() for k, v in sorted(self.metrics_by_label.items())}
- )
- )
-
-
-class BaselineResults(object):
- def __init__(self):
- self.random = EvaluationResults()
- self.prior = EvaluationResults()
- self.oracle = EvaluationResults()
-
- def report_performance(self, model):
- results = getattr(self, model)
- return results.report_metrics(model)
-
- def update_baselines(
- self,
- true_entity,
- ent_label,
- random_candidate,
- prior_candidate,
- oracle_candidate,
- ):
- self.oracle.update_metrics(ent_label, true_entity, oracle_candidate)
- self.prior.update_metrics(ent_label, true_entity, prior_candidate)
- self.random.update_metrics(ent_label, true_entity, random_candidate)
-
-
-def measure_performance(dev_data, kb, el_pipe, baseline=True, context=True, dev_limit=None):
- counts = dict()
- baseline_results = BaselineResults()
- context_results = EvaluationResults()
- combo_results = EvaluationResults()
-
- for doc, gold in tqdm(dev_data, total=dev_limit, leave=False, desc='Processing dev data'):
- if len(doc) > 0:
- correct_ents = dict()
- for entity, kb_dict in gold.links.items():
- start, end = entity
- for gold_kb, value in kb_dict.items():
- if value:
- # only evaluating on positive examples
- offset = _offset(start, end)
- correct_ents[offset] = gold_kb
-
- if baseline:
- _add_baseline(baseline_results, counts, doc, correct_ents, kb)
-
- if context:
- # using only context
- el_pipe.cfg["incl_context"] = True
- el_pipe.cfg["incl_prior"] = False
- _add_eval_result(context_results, doc, correct_ents, el_pipe)
-
- # measuring combined accuracy (prior + context)
- el_pipe.cfg["incl_context"] = True
- el_pipe.cfg["incl_prior"] = True
- _add_eval_result(combo_results, doc, correct_ents, el_pipe)
-
- if baseline:
- logger.info("Counts: {}".format({k: v for k, v in sorted(counts.items())}))
- logger.info(baseline_results.report_performance("random"))
- logger.info(baseline_results.report_performance("prior"))
- logger.info(baseline_results.report_performance("oracle"))
-
- if context:
- logger.info(context_results.report_metrics("context only"))
- logger.info(combo_results.report_metrics("context and prior"))
-
-
-def _add_eval_result(results, doc, correct_ents, el_pipe):
- """
- Evaluate the ent.kb_id_ annotations against the gold standard.
- Only evaluate entities that overlap between gold and NER, to isolate the performance of the NEL.
- """
- try:
- doc = el_pipe(doc)
- for ent in doc.ents:
- ent_label = ent.label_
- start = ent.start_char
- end = ent.end_char
- offset = _offset(start, end)
- gold_entity = correct_ents.get(offset, None)
- # the gold annotations are not complete so we can't evaluate missing annotations as 'wrong'
- if gold_entity is not None:
- pred_entity = ent.kb_id_
- results.update_metrics(ent_label, gold_entity, pred_entity)
-
- except Exception as e:
- logging.error("Error assessing accuracy " + str(e))
-
-
-def _add_baseline(baseline_results, counts, doc, correct_ents, kb):
- """
- Measure 3 performance baselines: random selection, prior probabilities, and 'oracle' prediction for upper bound.
- Only evaluate entities that overlap between gold and NER, to isolate the performance of the NEL.
- """
- for ent in doc.ents:
- ent_label = ent.label_
- start = ent.start_char
- end = ent.end_char
- offset = _offset(start, end)
- gold_entity = correct_ents.get(offset, None)
-
- # the gold annotations are not complete so we can't evaluate missing annotations as 'wrong'
- if gold_entity is not None:
- candidates = kb.get_candidates(ent.text)
- oracle_candidate = ""
- prior_candidate = ""
- random_candidate = ""
- if candidates:
- scores = []
-
- for c in candidates:
- scores.append(c.prior_prob)
- if c.entity_ == gold_entity:
- oracle_candidate = c.entity_
-
- best_index = scores.index(max(scores))
- prior_candidate = candidates[best_index].entity_
- random_candidate = random.choice(candidates).entity_
-
- current_count = counts.get(ent_label, 0)
- counts[ent_label] = current_count+1
-
- baseline_results.update_baselines(
- gold_entity,
- ent_label,
- random_candidate,
- prior_candidate,
- oracle_candidate,
- )
-
-
-def _offset(start, end):
- return "{}_{}".format(start, end)
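The deleted `Metrics` helper above computes micro precision, recall, and F-score with zero-division guards. The same arithmetic as a minimal standalone sketch (the function name is illustrative, not part of spaCy):

```python
def prf(true_pos, false_pos, false_neg):
    """Micro precision/recall/F-score, guarding against zero divisions."""
    precision = true_pos / (true_pos + false_pos) if true_pos else 0.0
    recall = true_pos / (true_pos + false_neg) if true_pos else 0.0
    fscore = (
        2 * precision * recall / (precision + recall) if precision + recall else 0.0
    )
    return precision, recall, fscore

# e.g. 8 correct links, 2 wrong predictions, 4 missed gold links
p, r, f = prf(8, 2, 4)
print(round(p, 3), round(r, 3), round(f, 3))
# 0.8 0.667 0.727
```

As in the deleted code, a wrong prediction counts both as a false positive and a false negative, so precision and recall share the same numerator.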
diff --git a/bin/wiki_entity_linking/kb_creator.py b/bin/wiki_entity_linking/kb_creator.py
deleted file mode 100644
index 7778fc701..000000000
--- a/bin/wiki_entity_linking/kb_creator.py
+++ /dev/null
@@ -1,161 +0,0 @@
-# coding: utf-8
-from __future__ import unicode_literals
-
-import logging
-
-from spacy.kb import KnowledgeBase
-
-from bin.wiki_entity_linking.train_descriptions import EntityEncoder
-from bin.wiki_entity_linking import wiki_io as io
-
-
-logger = logging.getLogger(__name__)
-
-
-def create_kb(
- nlp,
- max_entities_per_alias,
- min_entity_freq,
- min_occ,
- entity_def_path,
- entity_descr_path,
- entity_alias_path,
- entity_freq_path,
- prior_prob_path,
- entity_vector_length,
-):
- # Create the knowledge base from Wikidata entries
- kb = KnowledgeBase(vocab=nlp.vocab, entity_vector_length=entity_vector_length)
- entity_list, filtered_title_to_id = _define_entities(nlp, kb, entity_def_path, entity_descr_path, min_entity_freq, entity_freq_path, entity_vector_length)
- _define_aliases(kb, entity_alias_path, entity_list, filtered_title_to_id, max_entities_per_alias, min_occ, prior_prob_path)
- return kb
-
-
-def _define_entities(nlp, kb, entity_def_path, entity_descr_path, min_entity_freq, entity_freq_path, entity_vector_length):
- # read the mappings from file
- title_to_id = io.read_title_to_id(entity_def_path)
- id_to_descr = io.read_id_to_descr(entity_descr_path)
-
- # check the length of the nlp vectors
- if "vectors" in nlp.meta and nlp.vocab.vectors.size:
- input_dim = nlp.vocab.vectors_length
- logger.info("Loaded pretrained vectors of size %s" % input_dim)
- else:
- raise ValueError(
- "The `nlp` object should have access to pretrained word vectors, "
- " cf. https://spacy.io/usage/models#languages."
- )
-
- logger.info("Filtering entities with fewer than {} mentions".format(min_entity_freq))
- entity_frequencies = io.read_entity_to_count(entity_freq_path)
- # filter the entities for in the KB by frequency, because there's just too much data (8M entities) otherwise
- filtered_title_to_id, entity_list, description_list, frequency_list = get_filtered_entities(
- title_to_id,
- id_to_descr,
- entity_frequencies,
- min_entity_freq
- )
- logger.info("Kept {} entities from the set of {}".format(len(description_list), len(title_to_id.keys())))
-
- logger.info("Training entity encoder")
- encoder = EntityEncoder(nlp, input_dim, entity_vector_length)
- encoder.train(description_list=description_list, to_print=True)
-
- logger.info("Getting entity embeddings")
- embeddings = encoder.apply_encoder(description_list)
-
- logger.info("Adding {} entities".format(len(entity_list)))
- kb.set_entities(
- entity_list=entity_list, freq_list=frequency_list, vector_list=embeddings
- )
- return entity_list, filtered_title_to_id
-
-
-def _define_aliases(kb, entity_alias_path, entity_list, filtered_title_to_id, max_entities_per_alias, min_occ, prior_prob_path):
- logger.info("Adding aliases from Wikipedia and Wikidata")
- _add_aliases(
- kb,
- entity_list=entity_list,
- title_to_id=filtered_title_to_id,
- max_entities_per_alias=max_entities_per_alias,
- min_occ=min_occ,
- prior_prob_path=prior_prob_path,
- )
-
-
-def get_filtered_entities(title_to_id, id_to_descr, entity_frequencies,
- min_entity_freq: int = 10):
- filtered_title_to_id = dict()
- entity_list = []
- description_list = []
- frequency_list = []
- for title, entity in title_to_id.items():
- freq = entity_frequencies.get(title, 0)
- desc = id_to_descr.get(entity, None)
- if desc and freq > min_entity_freq:
- entity_list.append(entity)
- description_list.append(desc)
- frequency_list.append(freq)
- filtered_title_to_id[title] = entity
- return filtered_title_to_id, entity_list, description_list, frequency_list
-
-
-def _add_aliases(kb, entity_list, title_to_id, max_entities_per_alias, min_occ, prior_prob_path):
- wp_titles = title_to_id.keys()
-
- # adding aliases with prior probabilities
- # we can read this file sequentially, it's sorted by alias, and then by count
- logger.info("Adding WP aliases")
- with prior_prob_path.open("r", encoding="utf8") as prior_file:
- # skip header
- prior_file.readline()
- line = prior_file.readline()
- previous_alias = None
- total_count = 0
- counts = []
- entities = []
- while line:
- splits = line.replace("\n", "").split(sep="|")
- new_alias = splits[0]
- count = int(splits[1])
- entity = splits[2]
-
- if new_alias != previous_alias and previous_alias:
- # done reading the previous alias --> output
- if len(entities) > 0:
- selected_entities = []
- prior_probs = []
- for ent_count, ent_string in zip(counts, entities):
- if ent_string in wp_titles:
- wd_id = title_to_id[ent_string]
- p_entity_given_alias = ent_count / total_count
- selected_entities.append(wd_id)
- prior_probs.append(p_entity_given_alias)
-
- if selected_entities:
- try:
- kb.add_alias(
- alias=previous_alias,
- entities=selected_entities,
- probabilities=prior_probs,
- )
- except ValueError as e:
- logger.error(e)
- total_count = 0
- counts = []
- entities = []
-
- total_count += count
-
- if len(entities) < max_entities_per_alias and count >= min_occ:
- counts.append(count)
- entities.append(entity)
- previous_alias = new_alias
-
- line = prior_file.readline()
-
-
-def read_kb(nlp, kb_file):
- kb = KnowledgeBase(vocab=nlp.vocab)
- kb.load_bulk(kb_file)
- return kb
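The deleted `_add_aliases` above streams a file that is sorted by alias and turns per-entity mention counts into prior probabilities P(entity | alias). A minimal standalone sketch of that aggregation (the sample rows and the absence of a header line are invented for illustration; the real file has a header and is read line by line):

```python
import io
from collections import defaultdict

def alias_prior_probs(lines):
    """Yield (alias, {entity: count/total}) from 'alias|count|entity' rows."""
    totals = defaultdict(int)
    counts = defaultdict(dict)
    for line in lines:
        alias, count, entity = line.rstrip("\n").split("|")
        totals[alias] += int(count)
        counts[alias][entity] = int(count)
    for alias, ents in counts.items():
        # normalize each entity's count by the alias's total mention count
        yield alias, {ent: c / totals[alias] for ent, c in ents.items()}

# two invented rows in the same "alias|count|entity" layout used above
rows = io.StringIO("Douglas|10|Douglas Adams\nDouglas|5|Douglas Fir\n")
result = dict(alias_prior_probs(rows))
```

Unlike the original single-pass version, this sketch buffers all counts in memory; the original exploits the sort order to flush one alias at a time.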
diff --git a/bin/wiki_entity_linking/train_descriptions.py b/bin/wiki_entity_linking/train_descriptions.py
deleted file mode 100644
index af08d6b8f..000000000
--- a/bin/wiki_entity_linking/train_descriptions.py
+++ /dev/null
@@ -1,152 +0,0 @@
-# coding: utf-8
-from random import shuffle
-
-import logging
-import numpy as np
-
-from spacy._ml import zero_init, create_default_optimizer
-from spacy.cli.pretrain import get_cossim_loss
-
-from thinc.v2v import Model
-from thinc.api import chain
-from thinc.neural._classes.affine import Affine
-
-logger = logging.getLogger(__name__)
-
-
-class EntityEncoder:
- """
- Train the embeddings of entity descriptions to fit a fixed-size entity vector (e.g. 64D).
- This entity vector will be stored in the KB, for further downstream use in the entity model.
- """
-
- DROP = 0
- BATCH_SIZE = 1000
-
- # Set min. acceptable loss to avoid a 'mean of empty slice' warning by numpy
- MIN_LOSS = 0.01
-
- # Reasonable default to stop training when things are not improving
- MAX_NO_IMPROVEMENT = 20
-
- def __init__(self, nlp, input_dim, desc_width, epochs=5):
- self.nlp = nlp
- self.input_dim = input_dim
- self.desc_width = desc_width
- self.epochs = epochs
- # set by _build_network() when training starts
- self.encoder = None
-
- def apply_encoder(self, description_list):
- if self.encoder is None:
- raise ValueError("Cannot apply the encoder before training it")
-
- batch_size = 100000
-
- start = 0
- stop = min(batch_size, len(description_list))
- encodings = []
-
- while start < len(description_list):
- docs = list(self.nlp.pipe(description_list[start:stop]))
- doc_embeddings = [self._get_doc_embedding(doc) for doc in docs]
- enc = self.encoder(np.asarray(doc_embeddings))
- encodings.extend(enc.tolist())
-
- start = start + batch_size
- stop = min(stop + batch_size, len(description_list))
- logger.info("Encoded: {} entities".format(stop))
-
- return encodings
-
- def train(self, description_list, to_print=False):
- processed, loss = self._train_model(description_list)
- if to_print:
- logger.info(
- "Trained entity descriptions on {} ".format(processed) +
- "(non-unique) descriptions across {} ".format(self.epochs) +
- "epochs"
- )
- logger.info("Final loss: {}".format(loss))
-
- def _train_model(self, description_list):
- best_loss = 1.0
- iter_since_best = 0
- self._build_network(self.input_dim, self.desc_width)
-
- processed = 0
- loss = 1
- # copy this list so that shuffling does not affect other functions
- descriptions = description_list.copy()
- to_continue = True
-
- for i in range(self.epochs):
- shuffle(descriptions)
-
- batch_nr = 0
- start = 0
- stop = min(self.BATCH_SIZE, len(descriptions))
-
- while to_continue and start < len(descriptions):
- batch = []
- for descr in descriptions[start:stop]:
- doc = self.nlp(descr)
- doc_vector = self._get_doc_embedding(doc)
- batch.append(doc_vector)
-
- loss = self._update(batch)
- if batch_nr % 25 == 0:
- logger.info("loss: {} ".format(loss))
- processed += len(batch)
-
- # in general, continue training if we haven't reached our ideal min yet
- to_continue = loss > self.MIN_LOSS
-
- # store the best loss and track how long it's been
- if loss < best_loss:
- best_loss = loss
- iter_since_best = 0
- else:
- iter_since_best += 1
-
- # stop learning if we haven't seen improvement since the last few iterations
- if iter_since_best > self.MAX_NO_IMPROVEMENT:
- to_continue = False
-
- batch_nr += 1
- start = start + self.BATCH_SIZE
- stop = min(stop + self.BATCH_SIZE, len(descriptions))
-
- return processed, loss
-
- @staticmethod
- def _get_doc_embedding(doc):
- indices = np.zeros((len(doc),), dtype="i")
- for i, word in enumerate(doc):
- if word.orth in doc.vocab.vectors.key2row:
- indices[i] = doc.vocab.vectors.key2row[word.orth]
- else:
- indices[i] = 0
- word_vectors = doc.vocab.vectors.data[indices]
- doc_vector = np.mean(word_vectors, axis=0)
- return doc_vector
-
- def _build_network(self, orig_width, hidden_width):
- with Model.define_operators({">>": chain}):
- # very simple encoder-decoder model
- self.encoder = Affine(hidden_width, orig_width)
- self.model = self.encoder >> zero_init(
- Affine(orig_width, hidden_width, drop_factor=0.0)
- )
- self.sgd = create_default_optimizer(self.model.ops)
-
- def _update(self, vectors):
- predictions, bp_model = self.model.begin_update(
- np.asarray(vectors), drop=self.DROP
- )
- loss, d_scores = self._get_loss(scores=predictions, golds=np.asarray(vectors))
- bp_model(d_scores, sgd=self.sgd)
- return loss / len(vectors)
-
- @staticmethod
- def _get_loss(golds, scores):
- loss, gradients = get_cossim_loss(scores, golds)
- return loss, gradients
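The deleted encoder combines two simple ideas: a document vector taken as the mean of its word vectors (`_get_doc_embedding`) and a cosine-similarity reconstruction loss (`get_cossim_loss`). A dependency-free sketch of just those two computations, using toy 2-dimensional vectors; note the real `get_cossim_loss` also returns a gradient, which this sketch omits:

```python
import math

def mean_doc_vector(word_vectors):
    # one document vector = element-wise mean of its word vectors
    n = len(word_vectors)
    return [sum(col) / n for col in zip(*word_vectors)]

def cossim_loss(pred, gold):
    # 1 - cosine similarity; 0 when the vectors point the same way
    num = sum(p * g for p, g in zip(pred, gold))
    denom = math.sqrt(sum(p * p for p in pred)) * math.sqrt(sum(g * g for g in gold))
    return 1.0 - num / denom

vec = mean_doc_vector([[1.0, 0.0], [0.0, 1.0]])
loss_same = cossim_loss(vec, vec)
```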
diff --git a/bin/wiki_entity_linking/wiki_io.py b/bin/wiki_entity_linking/wiki_io.py
deleted file mode 100644
index 43ae87f0f..000000000
--- a/bin/wiki_entity_linking/wiki_io.py
+++ /dev/null
@@ -1,127 +0,0 @@
-# coding: utf-8
-from __future__ import unicode_literals
-
-import sys
-import csv
-
-# min() needed to prevent error on windows, cf https://stackoverflow.com/questions/52404416/
-csv.field_size_limit(min(sys.maxsize, 2147483646))
-
-"""This module provides reading/writing helpers for the intermediate files."""
-
-
-# Entity definition: WP title -> WD ID #
-def write_title_to_id(entity_def_output, title_to_id):
- with entity_def_output.open("w", encoding="utf8") as id_file:
- id_file.write("WP_title" + "|" + "WD_id" + "\n")
- for title, qid in title_to_id.items():
- id_file.write(title + "|" + str(qid) + "\n")
-
-
-def read_title_to_id(entity_def_output):
- title_to_id = dict()
- with entity_def_output.open("r", encoding="utf8") as id_file:
- csvreader = csv.reader(id_file, delimiter="|")
- # skip header
- next(csvreader)
- for row in csvreader:
- title_to_id[row[0]] = row[1]
- return title_to_id
-
-
-# Entity aliases from WD: WD ID -> WD alias #
-def write_id_to_alias(entity_alias_path, id_to_alias):
- with entity_alias_path.open("w", encoding="utf8") as alias_file:
- alias_file.write("WD_id" + "|" + "alias" + "\n")
- for qid, alias_list in id_to_alias.items():
- for alias in alias_list:
- alias_file.write(str(qid) + "|" + alias + "\n")
-
-
-def read_id_to_alias(entity_alias_path):
- id_to_alias = dict()
- with entity_alias_path.open("r", encoding="utf8") as alias_file:
- csvreader = csv.reader(alias_file, delimiter="|")
- # skip header
- next(csvreader)
- for row in csvreader:
- qid = row[0]
- alias = row[1]
- alias_list = id_to_alias.get(qid, [])
- alias_list.append(alias)
- id_to_alias[qid] = alias_list
- return id_to_alias
-
-
-def read_alias_to_id_generator(entity_alias_path):
- """ Read (aliases, qid) tuples """
-
- with entity_alias_path.open("r", encoding="utf8") as alias_file:
- csvreader = csv.reader(alias_file, delimiter="|")
- # skip header
- next(csvreader)
- for row in csvreader:
- qid = row[0]
- alias = row[1]
- yield alias, qid
-
-
-# Entity descriptions from WD: WD ID -> WD description #
-def write_id_to_descr(entity_descr_output, id_to_descr):
- with entity_descr_output.open("w", encoding="utf8") as descr_file:
- descr_file.write("WD_id" + "|" + "description" + "\n")
- for qid, descr in id_to_descr.items():
- descr_file.write(str(qid) + "|" + descr + "\n")
-
-
-def read_id_to_descr(entity_desc_path):
- id_to_desc = dict()
- with entity_desc_path.open("r", encoding="utf8") as descr_file:
- csvreader = csv.reader(descr_file, delimiter="|")
- # skip header
- next(csvreader)
- for row in csvreader:
- id_to_desc[row[0]] = row[1]
- return id_to_desc
-
-
-# Entity counts from WP: WP title -> count #
-def write_entity_to_count(prior_prob_input, count_output):
- # Write entity counts for quick access later
- entity_to_count = dict()
- total_count = 0
-
- with prior_prob_input.open("r", encoding="utf8") as prior_file:
- # skip header
- prior_file.readline()
- line = prior_file.readline()
-
- while line:
- splits = line.replace("\n", "").split(sep="|")
- # alias = splits[0]
- count = int(splits[1])
- entity = splits[2]
-
- current_count = entity_to_count.get(entity, 0)
- entity_to_count[entity] = current_count + count
-
- total_count += count
-
- line = prior_file.readline()
-
- with count_output.open("w", encoding="utf8") as entity_file:
- entity_file.write("entity" + "|" + "count" + "\n")
- for entity, count in entity_to_count.items():
- entity_file.write(entity + "|" + str(count) + "\n")
-
-
-def read_entity_to_count(count_input):
- entity_to_count = dict()
- with count_input.open("r", encoding="utf8") as csvfile:
- csvreader = csv.reader(csvfile, delimiter="|")
- # skip header
- next(csvreader)
- for row in csvreader:
- entity_to_count[row[0]] = int(row[1])
-
- return entity_to_count
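All of the deleted `wiki_io` helpers share one convention: a pipe-delimited text file with a single header row, written with plain string formatting and read back with the `csv` module. A self-contained round-trip sketch of that convention (the header names and sample mapping are illustrative only):

```python
import csv
import io

def write_mapping(fh, header, mapping):
    # header row first, then one "key|value" row per entry
    fh.write("|".join(header) + "\n")
    for key, value in mapping.items():
        fh.write("{}|{}\n".format(key, value))

def read_mapping(fh):
    reader = csv.reader(fh, delimiter="|")
    next(reader)  # skip header
    return {row[0]: row[1] for row in reader}

buf = io.StringIO()
write_mapping(buf, ["WP_title", "WD_id"], {"Douglas Adams": "Q42"})
buf.seek(0)
restored = read_mapping(buf)
```

For very large rows the original also raises `csv.field_size_limit`, capped at 2147483646 to stay within Windows' C `long` limit.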
diff --git a/bin/wiki_entity_linking/wiki_namespaces.py b/bin/wiki_entity_linking/wiki_namespaces.py
deleted file mode 100644
index e8f099ccd..000000000
--- a/bin/wiki_entity_linking/wiki_namespaces.py
+++ /dev/null
@@ -1,128 +0,0 @@
-# coding: utf8
-from __future__ import unicode_literals
-
-# List of meta pages in Wikidata, should be kept out of the Knowledge base
-WD_META_ITEMS = [
- "Q163875",
- "Q191780",
- "Q224414",
- "Q4167836",
- "Q4167410",
- "Q4663903",
- "Q11266439",
- "Q13406463",
- "Q15407973",
- "Q18616576",
- "Q19887878",
- "Q22808320",
- "Q23894233",
- "Q33120876",
- "Q42104522",
- "Q47460393",
- "Q64875536",
- "Q66480449",
-]
-
-
-# TODO: add more cases from non-English WP's
-
-# List of prefixes that refer to Wikipedia "file" pages
-WP_FILE_NAMESPACE = ["Bestand", "File"]
-
-# List of prefixes that refer to Wikipedia "category" pages
-WP_CATEGORY_NAMESPACE = ["Kategori", "Category", "Categorie"]
-
-# List of prefixes that refer to Wikipedia "meta" pages
-# these will/should be matched ignoring case
-WP_META_NAMESPACE = (
- WP_FILE_NAMESPACE
- + WP_CATEGORY_NAMESPACE
- + [
- "b",
- "betawikiversity",
- "Book",
- "c",
- "Commons",
- "d",
- "dbdump",
- "download",
- "Draft",
- "Education",
- "Foundation",
- "Gadget",
- "Gadget definition",
- "Gebruiker",
- "gerrit",
- "Help",
- "Image",
- "Incubator",
- "m",
- "mail",
- "mailarchive",
- "media",
- "MediaWiki",
- "MediaWiki talk",
- "Mediawikiwiki",
- "MediaZilla",
- "Meta",
- "Metawikipedia",
- "Module",
- "mw",
- "n",
- "nost",
- "oldwikisource",
- "otrs",
- "OTRSwiki",
- "Overleg gebruiker",
- "outreach",
- "outreachwiki",
- "Portal",
- "phab",
- "Phabricator",
- "Project",
- "q",
- "quality",
- "rev",
- "s",
- "spcom",
- "Special",
- "species",
- "Strategy",
- "sulutil",
- "svn",
- "Talk",
- "Template",
- "Template talk",
- "Testwiki",
- "ticket",
- "TimedText",
- "Toollabs",
- "tools",
- "tswiki",
- "User",
- "User talk",
- "v",
- "voy",
- "w",
- "Wikibooks",
- "Wikidata",
- "wikiHow",
- "Wikinvest",
- "wikilivres",
- "Wikimedia",
- "Wikinews",
- "Wikipedia",
- "Wikipedia talk",
- "Wikiquote",
- "Wikisource",
- "Wikispecies",
- "Wikitech",
- "Wikiversity",
- "Wikivoyage",
- "wikt",
- "wiktionary",
- "wmf",
- "wmania",
- "WP",
- ]
-)
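A namespace list like `WP_META_NAMESPACE` is typically used to skip "Prefix:Title" pages during dump processing, matching the prefix case-insensitively as the comment above indicates. `is_meta_page` is a hypothetical helper (not part of the deleted module), shown with an abbreviated namespace list:

```python
# abbreviated stand-in for the full WP_META_NAMESPACE list above
WP_META_NAMESPACE = ["File", "Category", "Talk", "User"]

def is_meta_page(title, namespaces=WP_META_NAMESPACE):
    # a meta page looks like "Prefix:Rest"; compare the prefix case-insensitively
    prefix, sep, _rest = title.partition(":")
    if not sep:
        return False
    return prefix.strip().lower() in {ns.lower() for ns in namespaces}

flags = [is_meta_page(t) for t in ["File:Cat.jpg", "category:Birds", "Douglas Adams"]]
```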
diff --git a/bin/wiki_entity_linking/wikidata_pretrain_kb.py b/bin/wiki_entity_linking/wikidata_pretrain_kb.py
deleted file mode 100644
index 003074feb..000000000
--- a/bin/wiki_entity_linking/wikidata_pretrain_kb.py
+++ /dev/null
@@ -1,179 +0,0 @@
-# coding: utf-8
-"""Script to process Wikipedia and Wikidata dumps and create a knowledge base (KB)
-with specific parameters. Intermediate files are written to disk.
-
-Running the full pipeline on a standard laptop may take up to 13 hours of processing.
-Use the -p, -d and -s options to speed up processing using the intermediate files
-from a previous run.
-
-For the Wikidata dump: get the latest-all.json.bz2 from https://dumps.wikimedia.org/wikidatawiki/entities/
-For the Wikipedia dump: get enwiki-latest-pages-articles-multistream.xml.bz2
-from https://dumps.wikimedia.org/enwiki/latest/
-
-"""
-from __future__ import unicode_literals
-
-import logging
-from pathlib import Path
-import plac
-
-from bin.wiki_entity_linking import wikipedia_processor as wp, wikidata_processor as wd
-from bin.wiki_entity_linking import wiki_io as io
-from bin.wiki_entity_linking import kb_creator
-from bin.wiki_entity_linking import TRAINING_DATA_FILE, KB_FILE, ENTITY_DESCR_PATH, KB_MODEL_DIR, LOG_FORMAT
-from bin.wiki_entity_linking import ENTITY_FREQ_PATH, PRIOR_PROB_PATH, ENTITY_DEFS_PATH, ENTITY_ALIAS_PATH
-import spacy
-from bin.wiki_entity_linking.kb_creator import read_kb
-
-logger = logging.getLogger(__name__)
-
-
-@plac.annotations(
- wd_json=("Path to the downloaded WikiData JSON dump.", "positional", None, Path),
- wp_xml=("Path to the downloaded Wikipedia XML dump.", "positional", None, Path),
- output_dir=("Output directory", "positional", None, Path),
- model=("Model name or path, should include pretrained vectors.", "positional", None, str),
- max_per_alias=("Max. # entities per alias (default 10)", "option", "a", int),
- min_freq=("Min. count of an entity in the corpus (default 20)", "option", "f", int),
- min_pair=("Min. count of entity-alias pairs (default 5)", "option", "c", int),
- entity_vector_length=("Length of entity vectors (default 64)", "option", "v", int),
- loc_prior_prob=("Location to file with prior probabilities", "option", "p", Path),
- loc_entity_defs=("Location to file with entity definitions", "option", "d", Path),
- loc_entity_desc=("Location to file with entity descriptions", "option", "s", Path),
- descr_from_wp=("Flag for using descriptions from WP instead of WD (default False)", "flag", "wp"),
- limit_prior=("Threshold to limit lines read from WP for prior probabilities", "option", "lp", int),
- limit_train=("Threshold to limit lines read from WP for training set", "option", "lt", int),
- limit_wd=("Threshold to limit lines read from WD", "option", "lw", int),
- lang=("Optional language for which to get Wikidata titles. Defaults to 'en'", "option", "la", str),
-)
-def main(
- wd_json,
- wp_xml,
- output_dir,
- model,
- max_per_alias=10,
- min_freq=20,
- min_pair=5,
- entity_vector_length=64,
- loc_prior_prob=None,
- loc_entity_defs=None,
- loc_entity_alias=None,
- loc_entity_desc=None,
- descr_from_wp=False,
- limit_prior=None,
- limit_train=None,
- limit_wd=None,
- lang="en",
-):
- entity_defs_path = loc_entity_defs if loc_entity_defs else output_dir / ENTITY_DEFS_PATH
- entity_alias_path = loc_entity_alias if loc_entity_alias else output_dir / ENTITY_ALIAS_PATH
- entity_descr_path = loc_entity_desc if loc_entity_desc else output_dir / ENTITY_DESCR_PATH
- entity_freq_path = output_dir / ENTITY_FREQ_PATH
- prior_prob_path = loc_prior_prob if loc_prior_prob else output_dir / PRIOR_PROB_PATH
- training_entities_path = output_dir / TRAINING_DATA_FILE
- kb_path = output_dir / KB_FILE
-
- logger.info("Creating KB with Wikipedia and WikiData")
-
- # STEP 0: set up IO
- if not output_dir.exists():
- output_dir.mkdir(parents=True)
-
- # STEP 1: Load the NLP object
- logger.info("STEP 1: Loading NLP model {}".format(model))
- nlp = spacy.load(model)
-
- # check the length of the nlp vectors
- if "vectors" not in nlp.meta or not nlp.vocab.vectors.size:
- raise ValueError(
- "The `nlp` object should have access to pretrained word vectors, "
- "cf. https://spacy.io/usage/models#languages."
- )
-
- # STEP 2: create prior probabilities from WP
- if not prior_prob_path.exists():
- # It takes about 2h to process 1000M lines of Wikipedia XML dump
- logger.info("STEP 2: Writing prior probabilities to {}".format(prior_prob_path))
- if limit_prior is not None:
- logger.warning("Warning: reading only {} lines of Wikipedia dump".format(limit_prior))
- wp.read_prior_probs(wp_xml, prior_prob_path, limit=limit_prior)
- else:
- logger.info("STEP 2: Reading prior probabilities from {}".format(prior_prob_path))
-
- # STEP 3: calculate entity frequencies
- if not entity_freq_path.exists():
- logger.info("STEP 3: Calculating and writing entity frequencies to {}".format(entity_freq_path))
- io.write_entity_to_count(prior_prob_path, entity_freq_path)
- else:
- logger.info("STEP 3: Reading entity frequencies from {}".format(entity_freq_path))
-
- # STEP 4: reading definitions and (possibly) descriptions from WikiData or from file
- if (not entity_defs_path.exists()) or (not descr_from_wp and not entity_descr_path.exists()):
- # It takes about 10h to process 55M lines of Wikidata JSON dump
- logger.info("STEP 4: Parsing and writing Wikidata entity definitions to {}".format(entity_defs_path))
- if limit_wd is not None:
- logger.warning("Warning: reading only {} lines of Wikidata dump".format(limit_wd))
- title_to_id, id_to_descr, id_to_alias = wd.read_wikidata_entities_json(
- wd_json,
- limit_wd,
- to_print=False,
- lang=lang,
- parse_descr=(not descr_from_wp),
- )
- io.write_title_to_id(entity_defs_path, title_to_id)
-
- logger.info("STEP 4b: Writing Wikidata entity aliases to {}".format(entity_alias_path))
- io.write_id_to_alias(entity_alias_path, id_to_alias)
-
- if not descr_from_wp:
- logger.info("STEP 4c: Writing Wikidata entity descriptions to {}".format(entity_descr_path))
- io.write_id_to_descr(entity_descr_path, id_to_descr)
- else:
- logger.info("STEP 4: Reading entity definitions from {}".format(entity_defs_path))
- logger.info("STEP 4b: Reading entity aliases from {}".format(entity_alias_path))
- if not descr_from_wp:
- logger.info("STEP 4c: Reading entity descriptions from {}".format(entity_descr_path))
-
- # STEP 5: Getting gold entities from Wikipedia
- if (not training_entities_path.exists()) or (descr_from_wp and not entity_descr_path.exists()):
- logger.info("STEP 5: Parsing and writing Wikipedia gold entities to {}".format(training_entities_path))
- if limit_train is not None:
- logger.warning("Warning: reading only {} lines of Wikipedia dump".format(limit_train))
- wp.create_training_and_desc(wp_xml, entity_defs_path, entity_descr_path,
- training_entities_path, descr_from_wp, limit_train)
- if descr_from_wp:
- logger.info("STEP 5b: Parsing and writing Wikipedia descriptions to {}".format(entity_descr_path))
- else:
- logger.info("STEP 5: Reading gold entities from {}".format(training_entities_path))
- if descr_from_wp:
- logger.info("STEP 5b: Reading entity descriptions from {}".format(entity_descr_path))
-
- # STEP 6: creating the actual KB
- # It takes ca. 30 minutes to pretrain the entity embeddings
- if not kb_path.exists():
- logger.info("STEP 6: Creating the KB at {}".format(kb_path))
- kb = kb_creator.create_kb(
- nlp=nlp,
- max_entities_per_alias=max_per_alias,
- min_entity_freq=min_freq,
- min_occ=min_pair,
- entity_def_path=entity_defs_path,
- entity_descr_path=entity_descr_path,
- entity_alias_path=entity_alias_path,
- entity_freq_path=entity_freq_path,
- prior_prob_path=prior_prob_path,
- entity_vector_length=entity_vector_length,
- )
- kb.dump(kb_path)
- logger.info("kb entities: {}".format(kb.get_size_entities()))
- logger.info("kb aliases: {}".format(kb.get_size_aliases()))
- nlp.to_disk(output_dir / KB_MODEL_DIR)
- else:
- logger.info("STEP 6: KB already exists at {}".format(kb_path))
-
- logger.info("Done!")
-
-
-if __name__ == "__main__":
- logging.basicConfig(level=logging.INFO, format=LOG_FORMAT)
- plac.call(main)
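Every numbered STEP in the deleted script follows the same caching pattern: recompute an intermediate file only if it does not exist yet, otherwise reuse it (which is what makes the `-p`, `-d` and `-s` options speed up reruns). A minimal sketch of that pattern with an invented file name:

```python
import tempfile
from pathlib import Path

def cached_step(path, compute):
    # only run the (possibly expensive) computation when the file is missing
    path = Path(path)
    if not path.exists():
        path.write_text(compute())
    return path.read_text()

with tempfile.TemporaryDirectory() as tmp:
    target = Path(tmp) / "freqs.csv"
    calls = []
    # first call computes and writes; second call reads the cached file
    first = cached_step(target, lambda: calls.append(1) or "entity|count\n")
    second = cached_step(target, lambda: calls.append(1) or "other\n")
```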
diff --git a/bin/wiki_entity_linking/wikidata_processor.py b/bin/wiki_entity_linking/wikidata_processor.py
deleted file mode 100644
index 8a070f567..000000000
--- a/bin/wiki_entity_linking/wikidata_processor.py
+++ /dev/null
@@ -1,154 +0,0 @@
-# coding: utf-8
-from __future__ import unicode_literals
-
-import bz2
-import json
-import logging
-
-from bin.wiki_entity_linking.wiki_namespaces import WD_META_ITEMS
-
-logger = logging.getLogger(__name__)
-
-
-def read_wikidata_entities_json(wikidata_file, limit=None, to_print=False, lang="en", parse_descr=True):
- # Read the JSON wiki data and parse out the entities. Takes about 7-10h to parse 55M lines.
- # get latest-all.json.bz2 from https://dumps.wikimedia.org/wikidatawiki/entities/
-
- site_filter = '{}wiki'.format(lang)
-
- # filter: one matching property from the exclude list suffices to remove the entity from further processing
- exclude_list = WD_META_ITEMS
-
- # punctuation
- exclude_list.extend(["Q1383557", "Q10617810"])
-
- # letters etc
- exclude_list.extend(["Q188725", "Q19776628", "Q3841820", "Q17907810", "Q9788", "Q9398093"])
-
- neg_prop_filter = {
- 'P31': exclude_list, # instance of
- 'P279': exclude_list # subclass
- }
-
- title_to_id = dict()
- id_to_descr = dict()
- id_to_alias = dict()
-
- # parse appropriate fields - depending on what we need in the KB
- parse_properties = False
- parse_sitelinks = True
- parse_labels = False
- parse_aliases = True
- parse_claims = True
-
- with bz2.open(wikidata_file, mode='rb') as file:
- for cnt, line in enumerate(file):
- if limit and cnt >= limit:
- break
- if cnt % 500000 == 0 and cnt > 0:
- logger.info("processed {} lines of WikiData JSON dump".format(cnt))
- clean_line = line.strip()
- if clean_line.endswith(b","):
- clean_line = clean_line[:-1]
- if len(clean_line) > 1:
- obj = json.loads(clean_line)
- entry_type = obj["type"]
-
- if entry_type == "item":
- keep = True
-
- claims = obj["claims"]
- if parse_claims:
- for prop, value_set in neg_prop_filter.items():
- claim_property = claims.get(prop, None)
- if claim_property:
- for cp in claim_property:
- cp_id = (
- cp["mainsnak"]
- .get("datavalue", {})
- .get("value", {})
- .get("id")
- )
- cp_rank = cp["rank"]
- if cp_rank != "deprecated" and cp_id in value_set:
- keep = False
-
- if keep:
- unique_id = obj["id"]
-
- if to_print:
- print("ID:", unique_id)
- print("type:", entry_type)
-
- # parsing all properties that refer to other entities
- if parse_properties:
- for prop, claim_property in claims.items():
- cp_dicts = [
- cp["mainsnak"]["datavalue"].get("value")
- for cp in claim_property
- if cp["mainsnak"].get("datavalue")
- ]
- cp_values = [
- cp_dict.get("id")
- for cp_dict in cp_dicts
- if isinstance(cp_dict, dict)
- if cp_dict.get("id") is not None
- ]
- if cp_values:
- if to_print:
- print("prop:", prop, cp_values)
-
- found_link = False
- if parse_sitelinks:
- site_value = obj["sitelinks"].get(site_filter, None)
- if site_value:
- site = site_value["title"]
- if to_print:
- print(site_filter, ":", site)
- title_to_id[site] = unique_id
- found_link = True
-
- if parse_labels:
- labels = obj["labels"]
- if labels:
- lang_label = labels.get(lang, None)
- if lang_label:
- if to_print:
- print(
- "label (" + lang + "):", lang_label["value"]
- )
-
- if found_link and parse_descr:
- descriptions = obj["descriptions"]
- if descriptions:
- lang_descr = descriptions.get(lang, None)
- if lang_descr:
- if to_print:
- print(
- "description (" + lang + "):",
- lang_descr["value"],
- )
- id_to_descr[unique_id] = lang_descr["value"]
-
- if parse_aliases:
- aliases = obj["aliases"]
- if aliases:
- lang_aliases = aliases.get(lang, None)
- if lang_aliases:
- for item in lang_aliases:
- if to_print:
- print(
- "alias (" + lang + "):", item["value"]
- )
- alias_list = id_to_alias.get(unique_id, [])
- alias_list.append(item["value"])
- id_to_alias[unique_id] = alias_list
-
- if to_print:
- print()
-
- # log final number of lines processed
- logger.info("Finished. Processed {} lines of WikiData JSON dump".format(cnt))
- return title_to_id, id_to_descr, id_to_alias
-
-
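The loop above relies on the layout of the Wikidata dump: one huge JSON array with one entity per line, each line (except the surrounding brackets) ending in a comma, so stripping that trailing comma lets `json.loads` handle each line independently. A self-contained sketch of that line-oriented parse, using a tiny in-memory bz2 payload instead of the real multi-GB dump:

```python
import bz2
import io
import json

# two invented entries in the dump's one-entity-per-line layout
dump = b'[\n{"type": "item", "id": "Q42"},\n{"type": "property", "id": "P31"}\n]\n'
compressed = bz2.compress(dump)

items = []
with bz2.open(io.BytesIO(compressed), mode="rb") as fh:
    for line in fh:
        clean = line.strip()
        if clean.endswith(b","):
            clean = clean[:-1]  # drop the array separator
        if len(clean) > 1:  # skip the lone "[" and "]" lines
            obj = json.loads(clean)
            if obj["type"] == "item":  # ignore properties etc.
                items.append(obj["id"])
```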
diff --git a/bin/wiki_entity_linking/wikidata_train_entity_linker.py b/bin/wiki_entity_linking/wikidata_train_entity_linker.py
deleted file mode 100644
index 54f00fc6f..000000000
--- a/bin/wiki_entity_linking/wikidata_train_entity_linker.py
+++ /dev/null
@@ -1,172 +0,0 @@
-# coding: utf-8
-"""Script that takes a previously created Knowledge Base and trains an entity linking
-pipeline. The provided KB directory should hold the kb, the original nlp object and
-its vocab used to create the KB, and a few auxiliary files such as the entity definitions,
-as created by the script `wikidata_create_kb`.
-
-For the Wikipedia dump: get enwiki-latest-pages-articles-multistream.xml.bz2
-from https://dumps.wikimedia.org/enwiki/latest/
-"""
-from __future__ import unicode_literals
-
-import random
-import logging
-import spacy
-from pathlib import Path
-import plac
-from tqdm import tqdm
-
-from bin.wiki_entity_linking import wikipedia_processor
-from bin.wiki_entity_linking import TRAINING_DATA_FILE, KB_MODEL_DIR, KB_FILE, LOG_FORMAT, OUTPUT_MODEL_DIR
-from bin.wiki_entity_linking.entity_linker_evaluation import measure_performance
-from bin.wiki_entity_linking.kb_creator import read_kb
-
-from spacy.util import minibatch, compounding
-
-logger = logging.getLogger(__name__)
-
-
-@plac.annotations(
- dir_kb=("Directory with KB, NLP and related files", "positional", None, Path),
- output_dir=("Output directory", "option", "o", Path),
- loc_training=("Location to training data", "option", "k", Path),
- epochs=("Number of training iterations (default 10)", "option", "e", int),
- dropout=("Dropout to prevent overfitting (default 0.5)", "option", "p", float),
- lr=("Learning rate (default 0.005)", "option", "n", float),
- l2=("L2 regularization", "option", "r", float),
- train_articles=("# training articles (default 90% of all)", "option", "t", int),
- dev_articles=("# dev test articles (default 10% of all)", "option", "d", int),
- labels_discard=("NER labels to discard (default None)", "option", "l", str),
-)
-def main(
- dir_kb,
- output_dir=None,
- loc_training=None,
- epochs=10,
- dropout=0.5,
- lr=0.005,
- l2=1e-6,
- train_articles=None,
- dev_articles=None,
- labels_discard=None
-):
- if not output_dir:
- logger.warning("No output dir specified, so no results will be written. Are you sure about this?")
-
- logger.info("Creating Entity Linker with Wikipedia and WikiData")
-
- output_dir = Path(output_dir) if output_dir else dir_kb
- training_path = loc_training if loc_training else dir_kb / TRAINING_DATA_FILE
- nlp_dir = dir_kb / KB_MODEL_DIR
- kb_path = dir_kb / KB_FILE
- nlp_output_dir = output_dir / OUTPUT_MODEL_DIR
-
- # STEP 0: set up IO
- if not output_dir.exists():
- output_dir.mkdir()
-
- # STEP 1 : load the NLP object
- logger.info("STEP 1a: Loading model from {}".format(nlp_dir))
- nlp = spacy.load(nlp_dir)
- logger.info("Original NLP pipeline has following pipeline components: {}".format(nlp.pipe_names))
-
- # check that there is a NER component in the pipeline
- if "ner" not in nlp.pipe_names:
- raise ValueError("The `nlp` object should have a pretrained `ner` component.")
-
- logger.info("STEP 1b: Loading KB from {}".format(kb_path))
- kb = read_kb(nlp, kb_path)
-
- # STEP 2: read the training dataset previously created from WP
- logger.info("STEP 2: Reading training & dev dataset from {}".format(training_path))
- train_indices, dev_indices = wikipedia_processor.read_training_indices(training_path)
- logger.info("Training set has {} articles, limit set to roughly {} articles per epoch"
- .format(len(train_indices), train_articles if train_articles else "all"))
- logger.info("Dev set has {} articles, limit set to roughly {} articles for evaluation"
- .format(len(dev_indices), dev_articles if dev_articles else "all"))
- if dev_articles:
- dev_indices = dev_indices[0:dev_articles]
-
- # STEP 3: create and train an entity linking pipe
- logger.info("STEP 3: Creating and training an Entity Linking pipe for {} epochs".format(epochs))
- if labels_discard:
- labels_discard = [x.strip() for x in labels_discard.split(",")]
- logger.info("Discarding {} NER types: {}".format(len(labels_discard), labels_discard))
- else:
- labels_discard = []
-
- el_pipe = nlp.create_pipe(
- name="entity_linker", config={"pretrained_vectors": nlp.vocab.vectors.name,
- "labels_discard": labels_discard}
- )
- el_pipe.set_kb(kb)
- nlp.add_pipe(el_pipe, last=True)
-
- other_pipes = [pipe for pipe in nlp.pipe_names if pipe != "entity_linker"]
- with nlp.disable_pipes(*other_pipes): # only train Entity Linking
- optimizer = nlp.begin_training()
- optimizer.learn_rate = lr
- optimizer.L2 = l2
-
- logger.info("Dev Baseline Accuracies:")
- dev_data = wikipedia_processor.read_el_docs_golds(nlp=nlp, entity_file_path=training_path,
- dev=True, line_ids=dev_indices,
- kb=kb, labels_discard=labels_discard)
-
- measure_performance(dev_data, kb, el_pipe, baseline=True, context=False, dev_limit=len(dev_indices))
-
- for itn in range(epochs):
- random.shuffle(train_indices)
- losses = {}
- batches = minibatch(train_indices, size=compounding(8.0, 128.0, 1.001))
- batchnr = 0
- articles_processed = 0
-
- # we either process the whole training file, or just a part each epoch
- bar_total = len(train_indices)
- if train_articles:
- bar_total = train_articles
-
- with tqdm(total=bar_total, leave=False, desc='Epoch ' + str(itn)) as pbar:
- for batch in batches:
- if not train_articles or articles_processed < train_articles:
- with nlp.disable_pipes("entity_linker"):
- train_batch = wikipedia_processor.read_el_docs_golds(nlp=nlp, entity_file_path=training_path,
- dev=False, line_ids=batch,
- kb=kb, labels_discard=labels_discard)
- docs, golds = zip(*train_batch)
- try:
- with nlp.disable_pipes(*other_pipes):
- nlp.update(
- docs=docs,
- golds=golds,
- sgd=optimizer,
- drop=dropout,
- losses=losses,
- )
- batchnr += 1
- articles_processed += len(docs)
- pbar.update(len(docs))
- except Exception as e:
- logger.error("Error updating batch: " + str(e))
- if batchnr > 0:
- logging.info("Epoch {} trained on {} articles, train loss {}"
- .format(itn, articles_processed, round(losses["entity_linker"] / batchnr, 2)))
- # re-read the dev_data (data is returned as a generator)
- dev_data = wikipedia_processor.read_el_docs_golds(nlp=nlp, entity_file_path=training_path,
- dev=True, line_ids=dev_indices,
- kb=kb, labels_discard=labels_discard)
- measure_performance(dev_data, kb, el_pipe, baseline=False, context=True, dev_limit=len(dev_indices))
-
- if output_dir:
- # STEP 4: write the NLP pipeline (now including an EL model) to file
- logger.info("Final NLP pipeline has following pipeline components: {}".format(nlp.pipe_names))
- logger.info("STEP 4: Writing trained NLP to {}".format(nlp_output_dir))
- nlp.to_disk(nlp_output_dir)
-
- logger.info("Done!")
-
-
-if __name__ == "__main__":
- logging.basicConfig(level=logging.INFO, format=LOG_FORMAT)
- plac.call(main)
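The deleted training loop above draws its batch sizes from `minibatch(train_indices, size=compounding(8.0, 128.0, 1.001))`. As a rough, self-contained sketch of what those two spaCy utilities do (both helpers are re-implemented here rather than imported, and the compounding factor is raised to 1.5 purely to make the growth visible in a small example):

```python
from itertools import islice

def compounding(start, stop, compound):
    """Yield batch sizes that grow geometrically from start towards the stop cap."""
    size = start
    while True:
        yield min(size, stop)
        size *= compound

def minibatch(items, size):
    """Split items into batches, drawing each batch's size from the schedule."""
    items = iter(items)
    for batch_size in size:
        batch = list(islice(items, int(batch_size)))
        if not batch:  # data exhausted
            return
        yield batch

batches = list(minibatch(range(100), size=compounding(8.0, 128.0, 1.5)))
print([len(b) for b in batches])  # [8, 12, 18, 27, 35]
```

With the factor 1.001 used in the script, batches stay near 8 documents for a long time and only slowly approach the 128 cap.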
diff --git a/bin/wiki_entity_linking/wikipedia_processor.py b/bin/wiki_entity_linking/wikipedia_processor.py
index 315b1e916..e69de29bb 100644
--- a/bin/wiki_entity_linking/wikipedia_processor.py
+++ b/bin/wiki_entity_linking/wikipedia_processor.py
@@ -1,565 +0,0 @@
-# coding: utf-8
-from __future__ import unicode_literals
-
-import re
-import bz2
-import logging
-import random
-import json
-
-from spacy.gold import GoldParse
-from bin.wiki_entity_linking import wiki_io as io
-from bin.wiki_entity_linking.wiki_namespaces import (
- WP_META_NAMESPACE,
- WP_FILE_NAMESPACE,
- WP_CATEGORY_NAMESPACE,
-)
-
-"""
-Process a Wikipedia dump to calculate entity frequencies and prior probabilities in combination with certain mentions.
-Write these results to file for downstream KB and training data generation.
-
-Process Wikipedia interlinks to generate a training dataset for the EL algorithm.
-"""
-
-ENTITY_FILE = "gold_entities.csv"
-
-map_alias_to_link = dict()
-
-logger = logging.getLogger(__name__)
-
-title_regex = re.compile(r"(?<=<title>).*(?=</title>)")
-id_regex = re.compile(r"(?<=<id>)\d*(?=</id>)")
-text_regex = re.compile(r"(?<=<text xml:space=\"preserve\">).*(?=</text)")
-info_regex = re.compile(r"{[^{]*?}")
-html_regex = re.compile(r"&lt;!--[^-]*--&gt;")
-ref_regex = re.compile(r"&lt;ref.*?&gt;")  # non-greedy
-ref_2_regex = re.compile(r"&lt;/ref.*?&gt;")  # non-greedy
-
-# find the links
-link_regex = re.compile(r"\[\[[^\[\]]*\]\]")
-
-# match on interwiki links, e.g. `en:` or `:fr:`
-ns_regex = r":?" + "[a-z][a-z]" + ":"
-# match on Namespace: optionally preceded by a :
-for ns in WP_META_NAMESPACE:
-    ns_regex += "|" + ":?" + ns + ":"
-ns_regex = re.compile(ns_regex, re.IGNORECASE)
-
-files = r""
-for f in WP_FILE_NAMESPACE:
-    files += "\[\[" + f + ":[^\[\]]+]]" + "|"
-files = files[:-1]
-file_regex = re.compile(files)
-
-cats = r""
-for c in WP_CATEGORY_NAMESPACE:
-    cats += "\[\[" + c + ":[^\[]*]]" + "|"
-cats = cats[:-1]
-category_regex = re.compile(cats)
-
-
-def read_prior_probs(wikipedia_input, prior_prob_output, limit=None):
-    """
-    Read the XML Wikipedia data and parse out intra-wiki links to estimate prior probabilities.
-    The full file takes about 2-3h to parse 1100M lines.
-    It works relatively fast because it runs line by line, irrelevant of which article the intrawiki is from.
-    """
-    cnt = 0
-    read_id = False
-    current_article_id = None
-    with bz2.open(wikipedia_input, mode="rb") as file:
-        line = file.readline()
-        while line and (not limit or cnt < limit):
-            if cnt % 25000000 == 0 and cnt > 0:
- logger.info("processed {} lines of Wikipedia XML dump".format(cnt))
- clean_line = line.strip().decode("utf-8")
-
- # we attempt at reading the article's ID (but not the revision or contributor ID)
-            if "<revision>" in clean_line or "<contributor>" in clean_line:
-                read_id = False
-            if "<page>" in clean_line:
-                read_id = True
-
- if read_id:
- ids = id_regex.search(clean_line)
- if ids:
- current_article_id = ids[0]
-
- # only processing prior probabilities from true training (non-dev) articles
- if not is_dev(current_article_id):
- aliases, entities, normalizations = get_wp_links(clean_line)
- for alias, entity, norm in zip(aliases, entities, normalizations):
- _store_alias(
- alias, entity, normalize_alias=norm, normalize_entity=True
- )
-
- line = file.readline()
- cnt += 1
- logger.info("processed {} lines of Wikipedia XML dump".format(cnt))
- logger.info("Finished. processed {} lines of Wikipedia XML dump".format(cnt))
-
- # write all aliases and their entities and count occurrences to file
- with prior_prob_output.open("w", encoding="utf8") as outputfile:
- outputfile.write("alias" + "|" + "count" + "|" + "entity" + "\n")
- for alias, alias_dict in sorted(map_alias_to_link.items(), key=lambda x: x[0]):
- s_dict = sorted(alias_dict.items(), key=lambda x: x[1], reverse=True)
- for entity, count in s_dict:
- outputfile.write(alias + "|" + str(count) + "|" + entity + "\n")
-
-
-def _store_alias(alias, entity, normalize_alias=False, normalize_entity=True):
- alias = alias.strip()
- entity = entity.strip()
-
- # remove everything after # as this is not part of the title but refers to a specific paragraph
- if normalize_entity:
- # wikipedia titles are always capitalized
- entity = _capitalize_first(entity.split("#")[0])
- if normalize_alias:
- alias = alias.split("#")[0]
-
- if alias and entity:
- alias_dict = map_alias_to_link.get(alias, dict())
- entity_count = alias_dict.get(entity, 0)
- alias_dict[entity] = entity_count + 1
- map_alias_to_link[alias] = alias_dict
-
-
-def get_wp_links(text):
- aliases = []
- entities = []
- normalizations = []
-
- matches = link_regex.findall(text)
- for match in matches:
- match = match[2:][:-2].replace("_", " ").strip()
-
- if ns_regex.match(match):
- pass # ignore the entity if it points to a "meta" page
-
- # this is a simple [[link]], with the alias the same as the mention
- elif "|" not in match:
- aliases.append(match)
- entities.append(match)
- normalizations.append(True)
-
- # in wiki format, the link is written as [[entity|alias]]
- else:
- splits = match.split("|")
- entity = splits[0].strip()
- alias = splits[1].strip()
- # specific wiki format [[alias (specification)|]]
- if len(alias) == 0 and "(" in entity:
- alias = entity.split("(")[0]
- aliases.append(alias)
- entities.append(entity)
- normalizations.append(False)
- else:
- aliases.append(alias)
- entities.append(entity)
- normalizations.append(False)
-
- return aliases, entities, normalizations
-
-
-def _capitalize_first(text):
- if not text:
- return None
- result = text[0].capitalize()
- if len(result) > 0:
- result += text[1:]
- return result
-
-
-def create_training_and_desc(
- wp_input, def_input, desc_output, training_output, parse_desc, limit=None
-):
- wp_to_id = io.read_title_to_id(def_input)
- _process_wikipedia_texts(
- wp_input, wp_to_id, desc_output, training_output, parse_desc, limit
- )
-
-
-def _process_wikipedia_texts(
- wikipedia_input, wp_to_id, output, training_output, parse_descriptions, limit=None
-):
- """
- Read the XML wikipedia data to parse out training data:
- raw text data + positive instances
- """
-
- read_ids = set()
-
- with output.open("a", encoding="utf8") as descr_file, training_output.open(
- "w", encoding="utf8"
- ) as entity_file:
- if parse_descriptions:
- _write_training_description(descr_file, "WD_id", "description")
- with bz2.open(wikipedia_input, mode="rb") as file:
- article_count = 0
- article_text = ""
- article_title = None
- article_id = None
- reading_text = False
- reading_revision = False
-
- for line in file:
- clean_line = line.strip().decode("utf-8")
-
-                if clean_line == "<revision>":
-                    reading_revision = True
-                elif clean_line == "</revision>":
-                    reading_revision = False
-
- # Start reading new page
-                if clean_line == "<page>":
- article_text = ""
- article_title = None
- article_id = None
- # finished reading this page
-                elif clean_line == "</page>":
- if article_id:
- clean_text, entities = _process_wp_text(
- article_title, article_text, wp_to_id
- )
- if clean_text is not None and entities is not None:
- _write_training_entities(
- entity_file, article_id, clean_text, entities
- )
-
- if article_title in wp_to_id and parse_descriptions:
- description = " ".join(
- clean_text[:1000].split(" ")[:-1]
- )
- _write_training_description(
- descr_file, wp_to_id[article_title], description
- )
- article_count += 1
- if article_count % 10000 == 0 and article_count > 0:
- logger.info(
- "Processed {} articles".format(article_count)
- )
- if limit and article_count >= limit:
- break
- article_text = ""
- article_title = None
- article_id = None
- reading_text = False
- reading_revision = False
-
-                # start reading text within a page
-                if "<text" in clean_line:
-                    reading_text = True
-
-                if reading_text:
-                    article_text += " " + clean_line
-
-                # stop reading text within a page
-                if "</text" in clean_line:
-                    reading_text = False
-
-                # read the ID and title of this article (outside the revision portion)
-                if not reading_revision:
-                    ids = id_regex.search(clean_line)
-                    if ids:
-                        article_id = ids[0]
-                    titles = title_regex.search(clean_line)
-                    if titles:
-                        article_title = titles[0].strip()
-    logger.info("Finished. Processed {} articles".format(article_count))
-
-
-def _process_wp_text(article_title, article_text, wp_to_id):
-    # ignore meta Wikipedia pages
-    if ns_regex.match(article_title):
-        return None, None
-
-    # remove the text tags
-    text_search = text_regex.search(article_text)
-    if text_search is None:
-        return None, None
-    text = text_search.group(0)
-
-    # stop processing if this is a redirect page
-    if text.startswith("#REDIRECT"):
-        return None, None
-
-    # get the raw text without markup etc, keeping links and formatting
-    clean_text, entities = _remove_links(_get_clean_wp_text(text), wp_to_id)
-    return clean_text, entities
-
-
-def _get_clean_wp_text(article_text):
-    clean_text = article_text.strip()
-
-    # remove bolding & italic markup
-    clean_text = clean_text.replace("'''", "")
-    clean_text = clean_text.replace("''", "")
-
-    # remove nested {{info}} statements, smallest ones first
-    try_again = True
-    previous_length = len(clean_text)
-    while try_again:
-        clean_text = info_regex.sub("", clean_text)  # non-greedy match excluding a nested {
-        try_again = len(clean_text) < previous_length
-        previous_length = len(clean_text)
-
-    # remove HTML comments and refs (non-greedy match)
-    clean_text = html_regex.sub("", clean_text)
-    clean_text = ref_regex.sub("", clean_text)
-    clean_text = ref_2_regex.sub("", clean_text)
-
-    # remove File and Category statements
-    clean_text = file_regex.sub("", clean_text)
-    clean_text = category_regex.sub("", clean_text)
-
-    # change special characters back to normal ones
-    clean_text = clean_text.replace(r"&lt;", "<")
-    clean_text = clean_text.replace(r"&gt;", ">")
-    clean_text = clean_text.replace(r"&quot;", '"')
-    clean_text = clean_text.replace(r"&amp;nbsp;", " ")
-    clean_text = clean_text.replace(r"&amp;", "&")
-
-    # remove multiple spaces
-    while "  " in clean_text:
-        clean_text = clean_text.replace("  ", " ")
-
- return clean_text.strip()
-
-
-def _remove_links(clean_text, wp_to_id):
- # read the text char by char to get the right offsets for the interwiki links
- entities = []
- final_text = ""
- open_read = 0
- reading_text = True
- reading_entity = False
- reading_mention = False
- reading_special_case = False
- entity_buffer = ""
- mention_buffer = ""
- for index, letter in enumerate(clean_text):
- if letter == "[":
- open_read += 1
- elif letter == "]":
- open_read -= 1
- elif letter == "|":
- if reading_text:
- final_text += letter
- # switch from reading entity to mention in the [[entity|mention]] pattern
- elif reading_entity:
- reading_text = False
- reading_entity = False
- reading_mention = True
- else:
- reading_special_case = True
- else:
- if reading_entity:
- entity_buffer += letter
- elif reading_mention:
- mention_buffer += letter
- elif reading_text:
- final_text += letter
- else:
- raise ValueError("Not sure at point", clean_text[index - 2 : index + 2])
-
- if open_read > 2:
- reading_special_case = True
-
- if open_read == 2 and reading_text:
- reading_text = False
- reading_entity = True
- reading_mention = False
-
- # we just finished reading an entity
- if open_read == 0 and not reading_text:
- if "#" in entity_buffer or entity_buffer.startswith(":"):
- reading_special_case = True
- # Ignore cases with nested structures like File: handles etc
- if not reading_special_case:
- if not mention_buffer:
- mention_buffer = entity_buffer
- start = len(final_text)
- end = start + len(mention_buffer)
- qid = wp_to_id.get(entity_buffer, None)
- if qid:
- entities.append((mention_buffer, qid, start, end))
- final_text += mention_buffer
-
- entity_buffer = ""
- mention_buffer = ""
-
- reading_text = True
- reading_entity = False
- reading_mention = False
- reading_special_case = False
- return final_text, entities
-
-
-def _write_training_description(outputfile, qid, description):
- if description is not None:
- line = str(qid) + "|" + description + "\n"
- outputfile.write(line)
-
-
-def _write_training_entities(outputfile, article_id, clean_text, entities):
- entities_data = [
- {"alias": ent[0], "entity": ent[1], "start": ent[2], "end": ent[3]}
- for ent in entities
- ]
- line = (
- json.dumps(
- {
- "article_id": article_id,
- "clean_text": clean_text,
- "entities": entities_data,
- },
- ensure_ascii=False,
- )
- + "\n"
- )
- outputfile.write(line)
-
-
-def read_training_indices(entity_file_path):
- """ This method creates two lists of indices into the training file: one with indices for the
- training examples, and one for the dev examples."""
- train_indices = []
- dev_indices = []
-
- with entity_file_path.open("r", encoding="utf8") as file:
- for i, line in enumerate(file):
- example = json.loads(line)
- article_id = example["article_id"]
- clean_text = example["clean_text"]
-
- if is_valid_article(clean_text):
- if is_dev(article_id):
- dev_indices.append(i)
- else:
- train_indices.append(i)
-
- return train_indices, dev_indices
-
-
-def read_el_docs_golds(nlp, entity_file_path, dev, line_ids, kb, labels_discard=None):
- """ This method provides training/dev examples that correspond to the entity annotations found by the nlp object.
- For training, it will include both positive and negative examples by using the candidate generator from the kb.
- For testing (kb=None), it will include all positive examples only."""
- if not labels_discard:
- labels_discard = []
-
- texts = []
- entities_list = []
-
- with entity_file_path.open("r", encoding="utf8") as file:
- for i, line in enumerate(file):
- if i in line_ids:
- example = json.loads(line)
- article_id = example["article_id"]
- clean_text = example["clean_text"]
- entities = example["entities"]
-
- if dev != is_dev(article_id) or not is_valid_article(clean_text):
- continue
-
- texts.append(clean_text)
- entities_list.append(entities)
-
- docs = nlp.pipe(texts, batch_size=50)
-
- for doc, entities in zip(docs, entities_list):
- gold = _get_gold_parse(doc, entities, dev=dev, kb=kb, labels_discard=labels_discard)
- if gold and len(gold.links) > 0:
- yield doc, gold
-
-
-def _get_gold_parse(doc, entities, dev, kb, labels_discard):
- gold_entities = {}
- tagged_ent_positions = {
- (ent.start_char, ent.end_char): ent
- for ent in doc.ents
- if ent.label_ not in labels_discard
- }
-
- for entity in entities:
- entity_id = entity["entity"]
- alias = entity["alias"]
- start = entity["start"]
- end = entity["end"]
-
- candidate_ids = []
- if kb and not dev:
- candidates = kb.get_candidates(alias)
- candidate_ids = [cand.entity_ for cand in candidates]
-
- tagged_ent = tagged_ent_positions.get((start, end), None)
- if tagged_ent:
- # TODO: check that alias == doc.text[start:end]
- should_add_ent = (dev or entity_id in candidate_ids) and is_valid_sentence(
- tagged_ent.sent.text
- )
-
- if should_add_ent:
- value_by_id = {entity_id: 1.0}
- if not dev:
- random.shuffle(candidate_ids)
- value_by_id.update(
- {kb_id: 0.0 for kb_id in candidate_ids if kb_id != entity_id}
- )
- gold_entities[(start, end)] = value_by_id
-
- return GoldParse(doc, links=gold_entities)
-
-
-def is_dev(article_id):
- if not article_id:
- return False
- return article_id.endswith("3")
-
-
-def is_valid_article(doc_text):
- # custom length cut-off
- return 10 < len(doc_text) < 30000
-
-
-def is_valid_sentence(sent_text):
- if not 10 < len(sent_text) < 3000:
- # custom length cut-off
- return False
-
- if sent_text.strip().startswith("*") or sent_text.strip().startswith("#"):
- # remove 'enumeration' sentences (occurs often on Wikipedia)
- return False
-
- return True
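The prior-probability step that `read_prior_probs` and `_store_alias` implemented in the deleted script boils down to counting how often each `[[entity|alias]]` pair occurs in the dump and normalising the counts per alias. A toy sketch of that bookkeeping (the data and the helper name are made up for illustration):

```python
from collections import defaultdict

# alias -> entity -> count, mirroring the map_alias_to_link dict in the script
counts = defaultdict(lambda: defaultdict(int))

def store_alias(alias, entity):
    entity = entity.split("#")[0].strip()     # drop "#section" suffixes
    entity = entity[:1].upper() + entity[1:]  # Wikipedia titles are capitalized
    alias = alias.strip()
    if alias and entity:
        counts[alias][entity] += 1

for alias, entity in [("Cochran", "Russ Cochran"),
                      ("Cochran", "Russ Cochran"),
                      ("Cochran", "Russ Cochran (publisher)")]:
    store_alias(alias, entity)

# normalised counts give the prior probability P(entity | alias)
total = sum(counts["Cochran"].values())
priors = {ent: n / total for ent, n in counts["Cochran"].items()}
print(priors)
```

These per-alias priors are exactly what the candidate generator later uses as the "baseline" disambiguation signal, before any context model is applied.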
diff --git a/examples/training/pretrain_kb.py b/examples/training/create_kb.py
similarity index 75%
rename from examples/training/pretrain_kb.py
rename to examples/training/create_kb.py
index 54c68f653..cbdb5c05b 100644
--- a/examples/training/pretrain_kb.py
+++ b/examples/training/create_kb.py
@@ -1,15 +1,15 @@
#!/usr/bin/env python
# coding: utf8
-"""Example of defining and (pre)training spaCy's knowledge base,
+"""Example of defining a knowledge base in spaCy,
which is needed to implement entity linking functionality.
For more details, see the documentation:
* Knowledge base: https://spacy.io/api/kb
* Entity Linking: https://spacy.io/usage/linguistic-features#entity-linking
-Compatible with: spaCy v2.2.3
-Last tested with: v2.2.3
+Compatible with: spaCy v2.2.4
+Last tested with: v2.2.4
"""
from __future__ import unicode_literals, print_function
@@ -20,24 +20,18 @@ from spacy.vocab import Vocab
import spacy
from spacy.kb import KnowledgeBase
-from bin.wiki_entity_linking.train_descriptions import EntityEncoder
-
# Q2146908 (Russ Cochran): American golfer
# Q7381115 (Russ Cochran): publisher
ENTITIES = {"Q2146908": ("American golfer", 342), "Q7381115": ("publisher", 17)}
-INPUT_DIM = 300 # dimension of pretrained input vectors
-DESC_WIDTH = 64 # dimension of output entity vectors
-
@plac.annotations(
model=("Model name, should have pretrained word embeddings", "positional", None, str),
output_dir=("Optional output directory", "option", "o", Path),
- n_iter=("Number of training iterations", "option", "n", int),
)
-def main(model=None, output_dir=None, n_iter=50):
- """Load the model, create the KB and pretrain the entity encodings.
+def main(model=None, output_dir=None):
+ """Load the model and create the KB with pre-defined entity encodings.
If an output_dir is provided, the KB will be stored there in a file 'kb'.
The updated vocab will also be written to a directory in the output_dir."""
@@ -51,33 +45,23 @@ def main(model=None, output_dir=None, n_iter=50):
" cf. https://spacy.io/usage/models#languages."
)
- kb = KnowledgeBase(vocab=nlp.vocab)
+ # You can change the dimension of vectors in your KB by using an encoder that changes the dimensionality.
+ # For simplicity, we'll just use the original vector dimension here instead.
+ vectors_dim = nlp.vocab.vectors.shape[1]
+ kb = KnowledgeBase(vocab=nlp.vocab, entity_vector_length=vectors_dim)
# set up the data
entity_ids = []
- descriptions = []
+ descr_embeddings = []
freqs = []
for key, value in ENTITIES.items():
desc, freq = value
entity_ids.append(key)
- descriptions.append(desc)
+ descr_embeddings.append(nlp(desc).vector)
freqs.append(freq)
- # training entity description encodings
- # this part can easily be replaced with a custom entity encoder
- encoder = EntityEncoder(
- nlp=nlp,
- input_dim=INPUT_DIM,
- desc_width=DESC_WIDTH,
- epochs=n_iter,
- )
- encoder.train(description_list=descriptions, to_print=True)
-
- # get the pretrained entity vectors
- embeddings = encoder.apply_encoder(descriptions)
-
# set the entities, can also be done by calling `kb.add_entity` for each entity
- kb.set_entities(entity_list=entity_ids, freq_list=freqs, vector_list=embeddings)
+ kb.set_entities(entity_list=entity_ids, freq_list=freqs, vector_list=descr_embeddings)
# adding aliases, the entities need to be defined in the KB beforehand
kb.add_alias(
@@ -113,8 +97,8 @@ def main(model=None, output_dir=None, n_iter=50):
vocab2 = Vocab().from_disk(vocab_path)
kb2 = KnowledgeBase(vocab=vocab2)
kb2.load_bulk(kb_path)
- _print_kb(kb2)
print()
+ _print_kb(kb2)
def _print_kb(kb):
@@ -126,6 +110,5 @@ if __name__ == "__main__":
plac.call(main)
# Expected output:
-
# 2 kb entities: ['Q2146908', 'Q7381115']
# 1 kb aliases: ['Russ Cochran']
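For readers without spaCy at hand, the `set_entities` / `add_alias` / `get_candidates` calls used in `create_kb.py` can be mimicked with a dictionary-based stand-in. This is a sketch only: the entity vectors and alias probabilities below are invented, and the real `KnowledgeBase` stores its data in optimized tables rather than Python dicts.

```python
class ToyKB:
    """Dictionary-backed stand-in for spaCy's KnowledgeBase API shape."""

    def __init__(self, entity_vector_length):
        self.entity_vector_length = entity_vector_length
        self.entities = {}  # entity id -> (frequency, vector)
        self.aliases = {}   # alias -> list of (entity id, prior probability)

    def set_entities(self, entity_list, freq_list, vector_list):
        for ent, freq, vec in zip(entity_list, freq_list, vector_list):
            assert len(vec) == self.entity_vector_length
            self.entities[ent] = (freq, vec)

    def add_alias(self, alias, entities, probabilities):
        # entities must already exist in the KB, as in the real API
        assert all(e in self.entities for e in entities)
        self.aliases[alias] = list(zip(entities, probabilities))

    def get_candidates(self, alias):
        # candidates sorted by prior probability, most likely first
        return sorted(self.aliases.get(alias, []), key=lambda c: -c[1])

kb = ToyKB(entity_vector_length=2)
kb.set_entities(["Q2146908", "Q7381115"], [342, 17], [[0.1, 0.9], [0.9, 0.1]])
kb.add_alias("Russ Cochran", ["Q2146908", "Q7381115"], [0.24, 0.7])
print(kb.get_candidates("Russ Cochran")[0][0])  # Q7381115
```

The candidate list per alias is what the entity linker re-ranks at training and inference time using the context of the mention.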
diff --git a/examples/training/train_entity_linker.py b/examples/training/train_entity_linker.py
index dd7c3a1b2..c7eba8a30 100644
--- a/examples/training/train_entity_linker.py
+++ b/examples/training/train_entity_linker.py
@@ -1,15 +1,15 @@
#!/usr/bin/env python
# coding: utf8
-"""Example of training spaCy's entity linker, starting off with an
-existing model and a pre-defined knowledge base.
+"""Example of training spaCy's entity linker, starting off with a predefined
+knowledge base and corresponding vocab, and a blank English model.
For more details, see the documentation:
* Training: https://spacy.io/usage/training
* Entity Linking: https://spacy.io/usage/linguistic-features#entity-linking
-Compatible with: spaCy v2.2.3
-Last tested with: v2.2.3
+Compatible with: spaCy v2.2.4
+Last tested with: v2.2.4
"""
from __future__ import unicode_literals, print_function
@@ -17,13 +17,11 @@ import plac
import random
from pathlib import Path
-from spacy.symbols import PERSON
from spacy.vocab import Vocab
import spacy
from spacy.kb import KnowledgeBase
from spacy.pipeline import EntityRuler
-from spacy.tokens import Span
from spacy.util import minibatch, compounding
diff --git a/website/docs/usage/examples.md b/website/docs/usage/examples.md
index 180b02ff4..9b210a69a 100644
--- a/website/docs/usage/examples.md
+++ b/website/docs/usage/examples.md
@@ -111,6 +111,27 @@ start.
https://github.com/explosion/spaCy/tree/master/examples/training/train_new_entity_type.py
```
+### Creating a Knowledge Base for Named Entity Linking {#kb}
+
+This example shows how to create a knowledge base in spaCy,
+which is needed to implement entity linking functionality.
+It requires as input a spaCy model with pretrained word vectors,
+and it stores the KB to file (if an `output_dir` is provided).
+
+```python
+https://github.com/explosion/spaCy/tree/master/examples/training/create_kb.py
+```
+
+### Training spaCy's Named Entity Linker {#nel}
+
+This example shows how to train spaCy's entity linker with your own custom
+examples, starting off with a predefined knowledge base and its vocab,
+and using a blank `English` class.
+
+```python
+https://github.com/explosion/spaCy/tree/master/examples/training/train_entity_linker.py
+```
+
### Training spaCy's Dependency Parser {#parser}
This example shows how to update spaCy's dependency parser, starting off with an
diff --git a/website/docs/usage/linguistic-features.md b/website/docs/usage/linguistic-features.md
index 59712939a..d17e5a661 100644
--- a/website/docs/usage/linguistic-features.md
+++ b/website/docs/usage/linguistic-features.md
@@ -579,9 +579,7 @@ import DisplacyEntHtml from 'images/displacy-ent2.html'
To ground the named entities into the "real world", spaCy provides functionality
to perform entity linking, which resolves a textual entity to a unique
-identifier from a knowledge base (KB). The
-[processing scripts](https://github.com/explosion/spaCy/tree/master/bin/wiki_entity_linking)
-we provide use WikiData identifiers, but you can create your own
+identifier from a knowledge base (KB). You can create your own
[`KnowledgeBase`](/api/kb) and
[train a new Entity Linking model](/usage/training#entity-linker) using that
custom-made KB.
diff --git a/website/docs/usage/training.md b/website/docs/usage/training.md
index 479441edf..ecdc6720b 100644
--- a/website/docs/usage/training.md
+++ b/website/docs/usage/training.md
@@ -347,9 +347,9 @@ your data** to find a solution that works best for you.
### Updating the Named Entity Recognizer {#example-train-ner}
This example shows how to update spaCy's entity recognizer with your own
-examples, starting off with an existing, pretrained model, or from scratch
-using a blank `Language` class. To do this, you'll need **example texts** and
-the **character offsets** and **labels** of each entity contained in the texts.
+examples, starting off with an existing, pretrained model, or from scratch using
+a blank `Language` class. To do this, you'll need **example texts** and the
+**character offsets** and **labels** of each entity contained in the texts.
```python
https://github.com/explosion/spaCy/tree/master/examples/training/train_ner.py
@@ -440,8 +440,8 @@ https://github.com/explosion/spaCy/tree/master/examples/training/train_parser.py
training the parser.
2. **Add the dependency labels** to the parser using the
[`add_label`](/api/dependencyparser#add_label) method. If you're starting off
- with a pretrained spaCy model, this is usually not necessary – but it
- doesn't hurt either, just to be safe.
+ with a pretrained spaCy model, this is usually not necessary – but it doesn't
+ hurt either, just to be safe.
3. **Shuffle and loop over** the examples. For each example, **update the
model** by calling [`nlp.update`](/api/language#update), which steps through
the words of the input. At each word, it makes a **prediction**. It then
@@ -605,16 +605,16 @@ To train an entity linking model, you first need to define a knowledge base
A KB consists of a list of entities with unique identifiers. Each such entity
has an entity vector that will be used to measure similarity with the context in
-which an entity is used. These vectors are pretrained and stored in the KB
-before the entity linking model will be trained.
+which an entity is used. These vectors have a fixed length and are stored in the
+KB.
The following example shows how to build a knowledge base from scratch, given a
-list of entities and potential aliases. The script further demonstrates how to
-pretrain and store the entity vectors. To run this example, the script needs
-access to a `vocab` instance or an `nlp` model with pretrained word embeddings.
+list of entities and potential aliases. The script requires an `nlp` model with
+pretrained word vectors to obtain an encoding of an entity's description as its
+vector.
```python
-https://github.com/explosion/spaCy/tree/master/examples/training/pretrain_kb.py
+https://github.com/explosion/spaCy/tree/master/examples/training/create_kb.py
```
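The entity vector obtained via `nlp(desc).vector` in the example script is the average of the description's token vectors. A minimal stand-in with a hand-rolled two-dimensional vocabulary (all vectors here are made up; a real pipeline would use pretrained embeddings with hundreds of dimensions):

```python
# Toy word vectors standing in for an nlp model's pretrained embeddings
word_vectors = {
    "american": (1.0, 0.0),
    "golfer": (0.0, 1.0),
    "publisher": (1.0, 1.0),
}

def description_vector(description):
    """Average the vectors of the in-vocabulary tokens, like doc.vector."""
    tokens = [t for t in description.lower().split() if t in word_vectors]
    if not tokens:
        return (0.0, 0.0)
    summed = [sum(vals) for vals in zip(*(word_vectors[t] for t in tokens))]
    return tuple(v / len(tokens) for v in summed)

print(description_vector("American golfer"))  # (0.5, 0.5)
```

Because the vector length is fixed by the vocabulary's embedding width, the KB is created with `entity_vector_length` set to that same dimension.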
#### Step by step guide {#step-by-step-kb}
From b3969c14796d95b4419655f63cebcbde8fee4521 Mon Sep 17 00:00:00 2001
From: adrianeboyd
Date: Fri, 8 May 2020 10:36:25 +0200
Subject: [PATCH 26/41] Clarify Token.pos as UPOS (#5419)
---
website/docs/api/token.md | 4 ++--
website/docs/usage/101/_pos-deps.md | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/website/docs/api/token.md b/website/docs/api/token.md
index c30c01c20..b397efc55 100644
--- a/website/docs/api/token.md
+++ b/website/docs/api/token.md
@@ -461,8 +461,8 @@ The L2 norm of the token's vector representation.
| `like_email` | bool | Does the token resemble an email address? |
| `is_oov` | bool | Is the token out-of-vocabulary? |
| `is_stop` | bool | Is the token part of a "stop list"? |
-| `pos` | int | Coarse-grained part-of-speech. |
-| `pos_` | unicode | Coarse-grained part-of-speech. |
+| `pos` | int | Coarse-grained part-of-speech from the [Universal POS tag set](https://universaldependencies.org/docs/u/pos/). |
+| `pos_` | unicode | Coarse-grained part-of-speech from the [Universal POS tag set](https://universaldependencies.org/docs/u/pos/). |
| `tag` | int | Fine-grained part-of-speech. |
| `tag_` | unicode | Fine-grained part-of-speech. |
| `dep` | int | Syntactic dependency relation. |
diff --git a/website/docs/usage/101/_pos-deps.md b/website/docs/usage/101/_pos-deps.md
index 9d04d6ffc..1a438e424 100644
--- a/website/docs/usage/101/_pos-deps.md
+++ b/website/docs/usage/101/_pos-deps.md
@@ -25,7 +25,7 @@ for token in doc:
> - **Text:** The original word text.
> - **Lemma:** The base form of the word.
-> - **POS:** The simple part-of-speech tag.
+> - **POS:** The simple [UPOS](https://universaldependencies.org/docs/u/pos/) part-of-speech tag.
> - **Tag:** The detailed part-of-speech tag.
> - **Dep:** Syntactic dependency, i.e. the relation between tokens.
> - **Shape:** The word shape – capitalization, punctuation, digits.
From afb26d788f954f68f347b1db81927d8ccebb4b71 Mon Sep 17 00:00:00 2001
From: Travis Hoppe
Date: Fri, 8 May 2020 02:28:54 -0700
Subject: [PATCH 27/41] Added author information for NLPre (#5414)
* Add author links for NLPre and update category
* Add contributor statement
---
.github/contributors/thoppe.md | 106 +++++++++++++++++++++++++++++++++
website/meta/universe.json | 8 ++-
2 files changed, 113 insertions(+), 1 deletion(-)
create mode 100644 .github/contributors/thoppe.md
diff --git a/.github/contributors/thoppe.md b/.github/contributors/thoppe.md
new file mode 100644
index 000000000..9271a2601
--- /dev/null
+++ b/.github/contributors/thoppe.md
@@ -0,0 +1,106 @@
+# spaCy contributor agreement
+
+This spaCy Contributor Agreement (**"SCA"**) is based on the
+[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
+The SCA applies to any contribution that you make to any product or project
+managed by us (the **"project"**), and sets out the intellectual property rights
+you grant to us in the contributed materials. The term **"us"** shall mean
+[ExplosionAI GmbH](https://explosion.ai/legal). The term
+**"you"** shall mean the person or entity identified below.
+
+If you agree to be bound by these terms, fill in the information requested
+below and include the filled-in version with your first pull request, under the
+folder [`.github/contributors/`](/.github/contributors/). The name of the file
+should be your GitHub username, with the extension `.md`. For example, the user
+example_user would create the file `.github/contributors/example_user.md`.
+
+Read this agreement carefully before signing. These terms and conditions
+constitute a binding legal agreement.
+
+## Contributor Agreement
+
+1. The term "contribution" or "contributed materials" means any source code,
+object code, patch, tool, sample, graphic, specification, manual,
+documentation, or any other material posted or submitted by you to the project.
+
+2. With respect to any worldwide copyrights, or copyright applications and
+registrations, in your contribution:
+
+ * you hereby assign to us joint ownership, and to the extent that such
+ assignment is or becomes invalid, ineffective or unenforceable, you hereby
+ grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
+ royalty-free, unrestricted license to exercise all rights under those
+ copyrights. This includes, at our option, the right to sublicense these same
+ rights to third parties through multiple levels of sublicensees or other
+ licensing arrangements;
+
+ * you agree that each of us can do all things in relation to your
+ contribution as if each of us were the sole owners, and if one of us makes
+ a derivative work of your contribution, the one who makes the derivative
+  work (or has it made) will be the sole owner of that derivative work;
+
+ * you agree that you will not assert any moral rights in your contribution
+ against us, our licensees or transferees;
+
+ * you agree that we may register a copyright in your contribution and
+ exercise all ownership rights associated with it; and
+
+ * you agree that neither of us has any duty to consult with, obtain the
+ consent of, pay or render an accounting to the other for any use or
+ distribution of your contribution.
+
+3. With respect to any patents you own, or that you can license without payment
+to any third party, you hereby grant to us a perpetual, irrevocable,
+non-exclusive, worldwide, no-charge, royalty-free license to:
+
+ * make, have made, use, sell, offer to sell, import, and otherwise transfer
+ your contribution in whole or in part, alone or in combination with or
+ included in any product, work or materials arising out of the project to
+ which your contribution was submitted, and
+
+ * at our option, to sublicense these same rights to third parties through
+ multiple levels of sublicensees or other licensing arrangements.
+
+4. Except as set out above, you keep all right, title, and interest in your
+contribution. The rights that you grant to us under these terms are effective
+on the date you first submitted a contribution to us, even if your submission
+took place before the date you sign these terms.
+
+5. You covenant, represent, warrant and agree that:
+
+ * Each contribution that you submit is and shall be an original work of
+ authorship and you can legally grant the rights set out in this SCA;
+
+ * to the best of your knowledge, each contribution will not violate any
+ third party's copyrights, trademarks, patents, or other intellectual
+ property rights; and
+
+ * each contribution shall be in compliance with U.S. export control laws and
+ other applicable export and import laws. You agree to notify us if you
+ become aware of any circumstance which would make any of the foregoing
+ representations inaccurate in any respect. We may publicly disclose your
+ participation in the project, including the fact that you have signed the SCA.
+
+6. This SCA is governed by the laws of the State of California and applicable
+U.S. Federal law. Any choice of law rules will not apply.
+
+7. Please place an “x” on one of the applicable statements below. Please do NOT
+mark both statements:
+
+ * [x] I am signing on behalf of myself as an individual and no other person
+ or entity, including my employer, has or will have rights with respect to my
+ contributions.
+
+ * [ ] I am signing on behalf of my employer or a legal entity and I have the
+ actual authority to contractually bind that entity.
+
+## Contributor Details
+
+| Field | Entry |
+|------------------------------- | -------------------- |
+| Name | Travis Hoppe |
+| Company name (if applicable) | |
+| Title or role (if applicable) | Data Scientist |
+| Date | 07 May 2020 |
+| GitHub username | thoppe |
+| Website (optional) | http://thoppe.github.io/ |
diff --git a/website/meta/universe.json b/website/meta/universe.json
index b5e1dbde0..22673834a 100644
--- a/website/meta/universe.json
+++ b/website/meta/universe.json
@@ -114,7 +114,13 @@
" text = f(text)",
"print(text)"
],
- "category": ["scientific"]
+ "category": ["scientific", "biomedical"],
+ "author": "Travis Hoppe",
+ "author_links": {
+ "github": "thoppe",
+ "twitter":"metasemantic",
+ "website" : "http://thoppe.github.io/"
+ }
},
{
"id": "Chatterbot",
From a3b7ae4f984bc7244402d50bbd6850f421fa29f7 Mon Sep 17 00:00:00 2001
From: Kevin Lu
Date: Wed, 20 May 2020 09:11:32 -0700
Subject: [PATCH 28/41] Update universe.json
---
website/meta/universe.json | 102 +++++++++++++++++++++++++++++++++++--
1 file changed, 99 insertions(+), 3 deletions(-)
diff --git a/website/meta/universe.json b/website/meta/universe.json
index 22673834a..8aaabf408 100644
--- a/website/meta/universe.json
+++ b/website/meta/universe.json
@@ -115,11 +115,11 @@
"print(text)"
],
"category": ["scientific", "biomedical"],
- "author": "Travis Hoppe",
+ "author": "Travis Hoppe",
"author_links": {
"github": "thoppe",
- "twitter":"metasemantic",
- "website" : "http://thoppe.github.io/"
+ "twitter": "metasemantic",
+ "website": "http://thoppe.github.io/"
}
},
{
@@ -2099,6 +2099,102 @@
"predict_output = clf.predict(predict_input)"
],
"category": ["standalone"]
+ },
+ {
+ "id": "spacy_fastlang",
+ "title": "Spacy FastLang",
+ "slogan": "Language detection done fast",
+ "description": "Fast language detection using FastText and Spacy.",
+ "github": "thomasthiebaud/spacy-fastlang",
+ "pip": "spacy_fastlang",
+ "code_example": [
+ "import spacy",
+ "from spacy_fastlang import LanguageDetector",
+ "",
+ "nlp = spacy.load('en_core_web_sm')",
+ "nlp.add_pipe(LanguageDetector())",
+      "doc = nlp(\"Life is like a box of chocolates. You never know what you're gonna get.\")",
+ "",
+ "assert doc._.language == 'en'",
+ "assert doc._.language_score >= 0.8"
+ ],
+ "author": "Thomas Thiebaud",
+ "author_links": {
+ "github": "thomasthiebaud"
+ },
+ "category": ["pipeline"]
+ },
+ {
+ "id": "mlflow",
+ "title": "MLflow",
+ "slogan": "An open source platform for the machine learning lifecycle",
+      "description": "MLflow is an open source platform to manage the ML lifecycle, including experimentation, reproducibility, deployment, and a central model registry. MLflow currently offers four components: Tracking, Projects, Models and Model Registry.",
+ "github": "mlflow/mlflow",
+ "pip": "mlflow",
+ "thumb": "https://www.mlflow.org/docs/latest/_static/MLflow-logo-final-black.png",
+ "image": "",
+ "url": "https://mlflow.org/",
+ "author": "Databricks",
+ "author_links": {
+ "github": "databricks",
+ "twitter": "databricks",
+ "website": "https://databricks.com/"
+ },
+ "category": ["standalone", "apis"],
+ "code_example": [
+ "import mlflow",
+        "import mlflow.spacy",
+        "import spacy",
+ "",
+ "# MLflow Tracking",
+ "nlp = spacy.load('my_best_model_path/output/model-best')",
+ "with mlflow.start_run(run_name='Spacy'):",
+ " mlflow.set_tag('model_flavor', 'spacy')",
+ " mlflow.spacy.log_model(spacy_model=nlp, artifact_path='model')",
+        "    mlflow.log_metric('accuracy', 0.72)",
+ " my_run_id = mlflow.active_run().info.run_id",
+ "",
+ "",
+ "# MLflow Models",
+ "model_uri = f'runs:/{my_run_id}/model'",
+ "nlp2 = mlflow.spacy.load_model(model_uri=model_uri)"
+ ]
+ },
+ {
+ "id": "pyate",
+ "title": "PyATE",
+ "slogan": "Python Automated Term Extraction",
+      "description": "PyATE is a term extraction library written in Python that uses spaCy POS tagging and implements the Basic, Combo Basic, C-Value, TermExtractor, and Weirdness algorithms.",
+ "github": "kevinlu1248/pyate",
+ "pip": "pyate",
+ "code_example": [
+ "import spacy",
+ "from pyate.term_extraction_pipeline import TermExtractionPipeline",
+ "",
+ "nlp = spacy.load('en_core_web_sm')",
+ "nlp.add_pipe(TermExtractionPipeline())",
+ "# source: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1994795/",
+ "string = 'Central to the development of cancer are genetic changes that endow these “cancer cells” with many of the hallmarks of cancer, such as self-sufficient growth and resistance to anti-growth and pro-death signals. However, while the genetic changes that occur within cancer cells themselves, such as activated oncogenes or dysfunctional tumor suppressors, are responsible for many aspects of cancer development, they are not sufficient. Tumor promotion and progression are dependent on ancillary processes provided by cells of the tumor environment but that are not necessarily cancerous themselves. Inflammation has long been associated with the development of cancer. This review will discuss the reflexive relationship between cancer and inflammation with particular focus on how considering the role of inflammation in physiologic processes such as the maintenance of tissue homeostasis and repair may provide a logical framework for understanding the connection between the inflammatory response and cancer.'",
+ "",
+ "doc = nlp(string)",
+ "print(doc._.combo_basic.sort_values(ascending=False).head(5))",
+ "\"\"\"\"\"\"",
+ "dysfunctional tumor 1.443147",
+ "tumor suppressors 1.443147",
+ "genetic changes 1.386294",
+ "cancer cells 1.386294",
+ "dysfunctional tumor suppressors 1.298612",
+ "\"\"\"\"\"\""
+ ],
+ "code_language": "python",
+ "url": "https://github.com/kevinlu1248/pyate",
+ "author": "Kevin Lu",
+ "author_links": {
+ "twitter": "kevinlu1248",
+ "github": "kevinlu1248",
+ "website": "https://github.com/kevinlu1248/pyate"
+ },
+ "category": ["pipeline", "research"],
+ "tags": ["term_extraction"]
}
],
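The spacy_fastlang entry added in the patch above detects a document's language with fastText embeddings. As a rough point of comparison, the same interface idea can be sketched with a stopword-overlap heuristic — a toy baseline with made-up word lists, not the package's actual method:

```python
# Toy language guesser by stopword overlap (illustration only;
# spacy_fastlang itself uses fastText, not this heuristic).
STOPWORDS = {
    "en": {"the", "is", "you", "what", "never", "of", "a"},
    "fr": {"le", "la", "est", "vous", "jamais", "de", "un"},
}

def guess_language(text: str) -> str:
    # Normalize to a set of lowercase tokens, stripping basic punctuation.
    tokens = {t.strip(".,!?'").lower() for t in text.split()}
    # Pick the language whose stopword set overlaps the most tokens.
    return max(STOPWORDS, key=lambda lang: len(tokens & STOPWORDS[lang]))

print(guess_language("Life is like a box of chocolates."))  # -> en
```

This mirrors the `doc._.language` idea from the entry's example, minus the confidence score that the fastText model additionally provides.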
From 32c2bb3d99606d4516f9db3e6c3b8d00d5d99d2b Mon Sep 17 00:00:00 2001
From: Ines Montani
Date: Thu, 21 May 2020 20:45:33 +0200
Subject: [PATCH 29/41] Add course to landing [ci skip]
---
website/src/styles/landing.module.sass | 1 +
website/src/widgets/landing.js | 47 ++++++++++++++------------
2 files changed, 26 insertions(+), 22 deletions(-)
diff --git a/website/src/styles/landing.module.sass b/website/src/styles/landing.module.sass
index d7340229b..fab07ce9b 100644
--- a/website/src/styles/landing.module.sass
+++ b/website/src/styles/landing.module.sass
@@ -81,6 +81,7 @@
.banner-content-small
display: block
+ margin-bottom: 0 !important
.banner-title
display: block
diff --git a/website/src/widgets/landing.js b/website/src/widgets/landing.js
index 2dc5d40dc..77d32a6ad 100644
--- a/website/src/widgets/landing.js
+++ b/website/src/widgets/landing.js
@@ -9,7 +9,6 @@ import {
LandingGrid,
LandingCard,
LandingCol,
- LandingButton,
LandingDemo,
LandingBannerGrid,
LandingBanner,
@@ -19,7 +18,8 @@ import { H2 } from '../components/typography'
import { Ul, Li } from '../components/list'
import Button from '../components/button'
import Link from '../components/link'
-import irlBackground from '../images/spacy-irl.jpg'
+
+import courseImage from '../../docs/images/course.jpg'
import BenchmarksChoi from 'usage/_benchmarks-choi.md'
@@ -154,13 +154,35 @@ const Landing = ({ data }) => {
+
+
+
+
+
+
+ In this free and interactive online course you’ll learn how to
+ use spaCy to build advanced natural language understanding systems, using both
+ rule-based and machine learning approaches. It includes{' '}
+ 55 exercises featuring videos, slide decks, multiple-choice
+ questions and interactive coding practice in the browser.
+
+
Prodigy is an annotation tool so efficient that data scientists
@@ -171,25 +193,6 @@ const Landing = ({ data }) => {
update your model in real-time and chain models together to build more complex
systems.
-
-
- We were pleased to invite the spaCy community and other folks working on Natural
- Language Processing to Berlin this summer for a small and intimate event{' '}
- July 6, 2019 . We booked a beautiful venue, hand-picked an
- awesome lineup of speakers and scheduled plenty of social time to get to know
- each other and exchange ideas. The YouTube playlist includes 12 talks about NLP
- research, development and applications, with keynotes by Sebastian Ruder
- (DeepMind) and Yoav Goldberg (Allen AI).
-
From 5753b43e60a50e411fb1c92540dbb137e74f333f Mon Sep 17 00:00:00 2001
From: Ines Montani
Date: Mon, 20 Apr 2020 20:33:13 +0200
Subject: [PATCH 30/41] Tidy up and fix alignment of landing cards (#5317)
---
website/src/components/landing.js | 13 ++++++--
website/src/styles/landing.module.sass | 5 +++
website/src/widgets/landing.js | 44 +++++++++++---------------
3 files changed, 34 insertions(+), 28 deletions(-)
diff --git a/website/src/components/landing.js b/website/src/components/landing.js
index 16c342e3f..fb03d2845 100644
--- a/website/src/components/landing.js
+++ b/website/src/components/landing.js
@@ -46,10 +46,17 @@ export const LandingGrid = ({ cols = 3, blocks = false, children }) => (
export const LandingCol = ({ children }) => {children}
-export const LandingCard = ({ title, children }) => (
+export const LandingCard = ({ title, button, url, children }) => (
- {title &&
{title} }
- {children}
+
+ {title && {title} }
+ {children}
+
+ {button && url && (
+
+ )}
)
diff --git a/website/src/styles/landing.module.sass b/website/src/styles/landing.module.sass
index fab07ce9b..c29c0fffb 100644
--- a/website/src/styles/landing.module.sass
+++ b/website/src/styles/landing.module.sass
@@ -49,12 +49,17 @@
margin-bottom: -25rem
.card
+ display: flex
+ flex-direction: column
padding: 3rem 2.5rem
background: var(--color-back)
border-radius: var(--border-radius)
box-shadow: var(--box-shadow)
margin-bottom: 3rem
+.card-text
+ flex: 100%
+
.button
width: 100%
diff --git a/website/src/widgets/landing.js b/website/src/widgets/landing.js
index 77d32a6ad..c96905733 100644
--- a/website/src/widgets/landing.js
+++ b/website/src/widgets/landing.js
@@ -79,34 +79,28 @@ const Landing = ({ data }) => {
in Python
-
-
- spaCy is designed to help you do real work — to build real products, or
- gather real insights. The library respects your time, and tries to avoid
- wasting it. It's easy to install, and its API is simple and productive. We
- like to think of spaCy as the Ruby on Rails of Natural Language Processing.
-
- Get started
+
+ spaCy is designed to help you do real work — to build real products, or gather
+ real insights. The library respects your time, and tries to avoid wasting it.
+ It's easy to install, and its API is simple and productive. We like to think of
+ spaCy as the Ruby on Rails of Natural Language Processing.
-
-
- spaCy excels at large-scale information extraction tasks. It's written from
- the ground up in carefully memory-managed Cython. Independent research in
- 2015 found spaCy to be the fastest in the world. If your application needs
- to process entire web dumps, spaCy is the library you want to be using.
-
- Facts & Figures
+
+ spaCy excels at large-scale information extraction tasks. It's written from the
+ ground up in carefully memory-managed Cython. Independent research in 2015 found
+ spaCy to be the fastest in the world. If your application needs to process
+ entire web dumps, spaCy is the library you want to be using.
-
-
- spaCy is the best way to prepare text for deep learning. It interoperates
- seamlessly with TensorFlow, PyTorch, scikit-learn, Gensim and the rest of
- Python's awesome AI ecosystem. With spaCy, you can easily construct
- linguistically sophisticated statistical models for a variety of NLP
- problems.
-
- Read more
+
+ spaCy is the best way to prepare text for deep learning. It interoperates
+ seamlessly with TensorFlow, PyTorch, scikit-learn, Gensim and the rest of
+ Python's awesome AI ecosystem. With spaCy, you can easily construct
+ linguistically sophisticated statistical models for a variety of NLP problems.
From 0d3cfe155f55490af57a12321fb0be58f04ecc39 Mon Sep 17 00:00:00 2001
From: Rajat <22280243+R1j1t@users.noreply.github.com>
Date: Mon, 25 May 2020 15:00:23 +0530
Subject: [PATCH 31/41] update spacy universe with my project (#5497)
* added contextualSpellCheck in spacy universe meta
* removed extra formatting by code
* updated with permanent links
* run json linter used by spacy
* filled SCA
* updated the description
---
.github/contributors/R1j1t.md | 106 ++++++++++++++++++++++++++++++++++
website/meta/universe.json | 30 ++++++++++
2 files changed, 136 insertions(+)
create mode 100644 .github/contributors/R1j1t.md
diff --git a/.github/contributors/R1j1t.md b/.github/contributors/R1j1t.md
new file mode 100644
index 000000000..a92f1e092
--- /dev/null
+++ b/.github/contributors/R1j1t.md
@@ -0,0 +1,106 @@
+# spaCy contributor agreement
+
+This spaCy Contributor Agreement (**"SCA"**) is based on the
+[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
+The SCA applies to any contribution that you make to any product or project
+managed by us (the **"project"**), and sets out the intellectual property rights
+you grant to us in the contributed materials. The term **"us"** shall mean
+[ExplosionAI GmbH](https://explosion.ai/legal). The term
+**"you"** shall mean the person or entity identified below.
+
+If you agree to be bound by these terms, fill in the information requested
+below and include the filled-in version with your first pull request, under the
+folder [`.github/contributors/`](/.github/contributors/). The name of the file
+should be your GitHub username, with the extension `.md`. For example, the user
+example_user would create the file `.github/contributors/example_user.md`.
+
+Read this agreement carefully before signing. These terms and conditions
+constitute a binding legal agreement.
+
+## Contributor Agreement
+
+1. The term "contribution" or "contributed materials" means any source code,
+object code, patch, tool, sample, graphic, specification, manual,
+documentation, or any other material posted or submitted by you to the project.
+
+2. With respect to any worldwide copyrights, or copyright applications and
+registrations, in your contribution:
+
+ * you hereby assign to us joint ownership, and to the extent that such
+ assignment is or becomes invalid, ineffective or unenforceable, you hereby
+ grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
+ royalty-free, unrestricted license to exercise all rights under those
+ copyrights. This includes, at our option, the right to sublicense these same
+ rights to third parties through multiple levels of sublicensees or other
+ licensing arrangements;
+
+ * you agree that each of us can do all things in relation to your
+ contribution as if each of us were the sole owners, and if one of us makes
+ a derivative work of your contribution, the one who makes the derivative
+    work (or has it made) will be the sole owner of that derivative work;
+
+ * you agree that you will not assert any moral rights in your contribution
+ against us, our licensees or transferees;
+
+ * you agree that we may register a copyright in your contribution and
+ exercise all ownership rights associated with it; and
+
+ * you agree that neither of us has any duty to consult with, obtain the
+ consent of, pay or render an accounting to the other for any use or
+ distribution of your contribution.
+
+3. With respect to any patents you own, or that you can license without payment
+to any third party, you hereby grant to us a perpetual, irrevocable,
+non-exclusive, worldwide, no-charge, royalty-free license to:
+
+ * make, have made, use, sell, offer to sell, import, and otherwise transfer
+ your contribution in whole or in part, alone or in combination with or
+ included in any product, work or materials arising out of the project to
+ which your contribution was submitted, and
+
+ * at our option, to sublicense these same rights to third parties through
+ multiple levels of sublicensees or other licensing arrangements.
+
+4. Except as set out above, you keep all right, title, and interest in your
+contribution. The rights that you grant to us under these terms are effective
+on the date you first submitted a contribution to us, even if your submission
+took place before the date you sign these terms.
+
+5. You covenant, represent, warrant and agree that:
+
+ * Each contribution that you submit is and shall be an original work of
+ authorship and you can legally grant the rights set out in this SCA;
+
+ * to the best of your knowledge, each contribution will not violate any
+ third party's copyrights, trademarks, patents, or other intellectual
+ property rights; and
+
+ * each contribution shall be in compliance with U.S. export control laws and
+ other applicable export and import laws. You agree to notify us if you
+ become aware of any circumstance which would make any of the foregoing
+ representations inaccurate in any respect. We may publicly disclose your
+ participation in the project, including the fact that you have signed the SCA.
+
+6. This SCA is governed by the laws of the State of California and applicable
+U.S. Federal law. Any choice of law rules will not apply.
+
+7. Please place an “x” on one of the applicable statements below. Please do NOT
+mark both statements:
+
+ * [x] I am signing on behalf of myself as an individual and no other person
+ or entity, including my employer, has or will have rights with respect to my
+ contributions.
+
+ * [ ] I am signing on behalf of my employer or a legal entity and I have the
+ actual authority to contractually bind that entity.
+
+## Contributor Details
+
+| Field | Entry |
+|------------------------------- | -------------------- |
+| Name | Rajat |
+| Company name (if applicable) | |
+| Title or role (if applicable) | |
+| Date | 24 May 2020 |
+| GitHub username | R1j1t |
+| Website (optional) | |
diff --git a/website/meta/universe.json b/website/meta/universe.json
index 58f4cc2aa..aafec7178 100644
--- a/website/meta/universe.json
+++ b/website/meta/universe.json
@@ -2293,6 +2293,36 @@
},
"category": ["pipeline", "research"],
"tags": ["term_extraction"]
+ },
+ {
+ "id": "contextualSpellCheck",
+ "title": "Contextual Spell Check",
+ "slogan": "Contextual spell correction using BERT (bidirectional representations)",
+      "description": "This package currently focuses on out-of-vocabulary (OOV) word, i.e. non-word error (NWE), correction using a BERT model. The idea of using BERT is to take the surrounding context into account when correcting an NWE. Support for real-word errors (RWE) and a faster Cython implementation of the package are planned.",
+ "github": "R1j1t/contextualSpellCheck",
+ "pip": "contextualSpellCheck",
+ "code_example": [
+ "import spacy",
+ "import contextualSpellCheck",
+ "",
+ "nlp = spacy.load('en')",
+ "contextualSpellCheck.add_to_pipe(nlp)",
+ "doc = nlp('Income was $9.4 milion compared to the prior year of $2.7 milion.')",
+ "",
+        "print(doc._.performed_spellCheck)  # Should be True",
+        "print(doc._.outcome_spellCheck)  # Income was $9.4 million compared to the prior year of $2.7 million."
+ ],
+ "code_language": "python",
+ "url": "https://github.com/R1j1t/contextualSpellCheck",
+ "thumb": "https://user-images.githubusercontent.com/22280243/82760949-98e68480-9e14-11ea-952e-4738620fd9e3.png",
+ "image": "https://user-images.githubusercontent.com/22280243/82138959-2852cd00-9842-11ea-918a-49b2a7873ef6.png",
+ "author": "Rajat Goel",
+ "author_links": {
+ "github": "r1j1t",
+ "website": "https://github.com/R1j1t"
+ },
+ "category": ["pipeline", "conversational", "research"],
+      "tags": ["spell check", "correction", "preprocessing", "translation"]
}
],
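The contextualSpellCheck entry added above corrects non-word errors ("milion" → "million") using BERT and sentence context. For contrast, a minimal non-contextual baseline — a toy sketch, plainly not the package's approach — just picks the vocabulary word with the smallest edit distance:

```python
# Toy, non-contextual spell-check baseline (illustration only; the
# contextualSpellCheck package itself uses BERT and sentence context).

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def correct(word: str, vocab: list) -> str:
    """Return the vocabulary word closest to `word` (ties broken by vocab order)."""
    return min(vocab, key=lambda v: edit_distance(word, v))

vocab = ["million", "billion", "minion", "melon"]
print(correct("milion", vocab))  # -> million
```

Such a baseline cannot handle real-word errors at all, which is exactly the gap that context-aware models like the BERT-based approach in the entry above are meant to close.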
From 487be097eaaea29a26d7413837f90de2eb548771 Mon Sep 17 00:00:00 2001
From: Martino Mensio
Date: Mon, 8 Jun 2020 19:26:30 +0100
Subject: [PATCH 32/41] adding spacy-universal-sentence-encoder (#5534)
* adding spacy-universal-sentence-encoder
* update affiliation
* updated code example
---
.github/contributors/MartinoMensio.md | 4 ++--
website/meta/universe.json | 24 ++++++++++++++++++++++++
2 files changed, 26 insertions(+), 2 deletions(-)
diff --git a/.github/contributors/MartinoMensio.md b/.github/contributors/MartinoMensio.md
index 1cd32d622..27e453699 100644
--- a/.github/contributors/MartinoMensio.md
+++ b/.github/contributors/MartinoMensio.md
@@ -99,8 +99,8 @@ mark both statements:
| Field | Entry |
|------------------------------- | -------------------- |
| Name | Martino Mensio |
-| Company name (if applicable) | Polytechnic University of Turin |
-| Title or role (if applicable) | Student |
+| Company name (if applicable) | The Open University |
+| Title or role (if applicable) | PhD Student |
| Date | 17 November 2017 |
| GitHub username | MartinoMensio |
| Website (optional) | https://martinomensio.github.io/ |
diff --git a/website/meta/universe.json b/website/meta/universe.json
index aafec7178..aae6855be 100644
--- a/website/meta/universe.json
+++ b/website/meta/universe.json
@@ -1,5 +1,29 @@
{
"resources": [
+ {
+ "id": "spacy-universal-sentence-encoder",
+ "title": "SpaCy - Universal Sentence Encoder",
+ "slogan": "Make use of Google's Universal Sentence Encoder directly within SpaCy",
+      "description": "This library lets you use Universal Sentence Encoder embeddings of Docs, Spans and Tokens directly from TensorFlow Hub.",
+ "github": "MartinoMensio/spacy-universal-sentence-encoder-tfhub",
+ "code_example": [
+ "import spacy_universal_sentence_encoder",
+        "# load one of the models: ['en_use_md', 'en_use_lg', 'xx_use_md', 'xx_use_lg']",
+ "nlp = spacy_universal_sentence_encoder.load_model('en_use_lg')",
+ "# get two documents",
+ "doc_1 = nlp('Hi there, how are you?')",
+ "doc_2 = nlp('Hello there, how are you doing today?')",
+ "# use the similarity method that is based on the vectors, on Doc, Span or Token",
+ "print(doc_1.similarity(doc_2[0:7]))"
+ ],
+ "category": ["models", "pipeline"],
+ "author": "Martino Mensio",
+ "author_links": {
+ "twitter": "MartinoMensio",
+ "github": "MartinoMensio",
+ "website": "https://martinomensio.github.io"
+ }
+ },
{
"id": "whatlies",
"title": "whatlies",
From ed240458f6136bc055b1ee1de2c02080b564231d Mon Sep 17 00:00:00 2001
From: Ines Montani
Date: Tue, 16 Jun 2020 18:28:24 +0200
Subject: [PATCH 33/41] Try and upgrade gatsby
---
website/package-lock.json | 13295 +++++++++++++++++++++++++++++++-----
website/package.json | 2 +-
2 files changed, 11442 insertions(+), 1855 deletions(-)
diff --git a/website/package-lock.json b/website/package-lock.json
index cb1731c1b..0058b2423 100644
--- a/website/package-lock.json
+++ b/website/package-lock.json
@@ -12,6 +12,44 @@
"@babel/highlight": "^7.0.0"
}
},
+ "@babel/compat-data": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/compat-data/-/compat-data-7.10.1.tgz",
+ "integrity": "sha512-CHvCj7So7iCkGKPRFUfryXIkU2gSBw7VSZFYLsqVhrS47269VK2Hfi9S/YcublPMW8k1u2bQBlbDruoQEm4fgw==",
+ "requires": {
+ "browserslist": "^4.12.0",
+ "invariant": "^2.2.4",
+ "semver": "^5.5.0"
+ },
+ "dependencies": {
+ "browserslist": {
+ "version": "4.12.0",
+ "resolved": "https://registry.npmjs.org/browserslist/-/browserslist-4.12.0.tgz",
+ "integrity": "sha512-UH2GkcEDSI0k/lRkuDSzFl9ZZ87skSy9w2XAn1MsZnL+4c4rqbBd3e82UWHbYDpztABrPBhZsTEeuxVfHppqDg==",
+ "requires": {
+ "caniuse-lite": "^1.0.30001043",
+ "electron-to-chromium": "^1.3.413",
+ "node-releases": "^1.1.53",
+ "pkg-up": "^2.0.0"
+ }
+ },
+ "caniuse-lite": {
+ "version": "1.0.30001084",
+ "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001084.tgz",
+ "integrity": "sha512-ftdc5oGmhEbLUuMZ/Qp3mOpzfZLCxPYKcvGv6v2dJJ+8EdqcvZRbAGOiLmkM/PV1QGta/uwBs8/nCl6sokDW6w=="
+ },
+ "electron-to-chromium": {
+ "version": "1.3.474",
+ "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.3.474.tgz",
+ "integrity": "sha512-fPkSgT9IBKmVJz02XioNsIpg0WYmkPrvU1lUJblMMJALxyE7/32NGvbJQKKxpNokozPvqfqkuUqVClYsvetcLw=="
+ },
+ "node-releases": {
+ "version": "1.1.58",
+ "resolved": "https://registry.npmjs.org/node-releases/-/node-releases-1.1.58.tgz",
+ "integrity": "sha512-NxBudgVKiRh/2aPWMgPR7bPTX0VPmGx5QBwCtdHitnqFE5/O8DeBXuIMH1nwNnw/aMo6AjOrpsHzfY3UbUJ7yg=="
+ }
+ }
+ },
"@babel/core": {
"version": "7.2.2",
"resolved": "https://registry.npmjs.org/@babel/core/-/core-7.2.2.tgz",
@@ -81,6 +119,49 @@
"esutils": "^2.0.0"
}
},
+ "@babel/helper-builder-react-jsx-experimental": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-builder-react-jsx-experimental/-/helper-builder-react-jsx-experimental-7.10.1.tgz",
+ "integrity": "sha512-irQJ8kpQUV3JasXPSFQ+LCCtJSc5ceZrPFVj6TElR6XCHssi3jV8ch3odIrNtjJFRZZVbrOEfJMI79TPU/h1pQ==",
+ "requires": {
+ "@babel/helper-annotate-as-pure": "^7.10.1",
+ "@babel/helper-module-imports": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ },
+ "dependencies": {
+ "@babel/helper-annotate-as-pure": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-annotate-as-pure/-/helper-annotate-as-pure-7.10.1.tgz",
+ "integrity": "sha512-ewp3rvJEwLaHgyWGe4wQssC2vjks3E80WiUe2BpMb0KhreTjMROCbxXcEovTrbeGVdQct5VjQfrv9EgC+xMzCw==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-module-imports": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-module-imports/-/helper-module-imports-7.10.1.tgz",
+ "integrity": "sha512-SFxgwYmZ3HZPyZwJRiVNLRHWuW2OgE5k2nrVs6D9Iv4PPnXVffuEHy83Sfx/l4SqF+5kyJXjAyUmrG7tNm+qVg==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/types": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.10.2.tgz",
+ "integrity": "sha512-AD3AwWBSz0AWF0AkCN9VPiWrvldXq+/e3cHa4J89vo4ymjz1XwrBFFVZmkJTsQIPNk+ZVomPSXUJqq8yyjZsng==",
+ "requires": {
+ "@babel/helper-validator-identifier": "^7.10.1",
+ "lodash": "^4.17.13",
+ "to-fast-properties": "^2.0.0"
+ }
+ },
+ "lodash": {
+ "version": "4.17.15",
+ "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz",
+ "integrity": "sha512-8xOcRHvCjnocdS5cpwXQXVzmmh5e5+saE2QGoeQmbKmRS6J3VQppPOIt0MnmE+4xlZoumy0GPG0D0MVIQbNA1A=="
+ }
+ }
+ },
"@babel/helper-call-delegate": {
"version": "7.1.0",
"resolved": "https://registry.npmjs.org/@babel/helper-call-delegate/-/helper-call-delegate-7.1.0.tgz",
@@ -91,70 +172,184 @@
"@babel/types": "^7.0.0"
}
},
- "@babel/helper-create-class-features-plugin": {
- "version": "7.3.4",
- "resolved": "https://registry.npmjs.org/@babel/helper-create-class-features-plugin/-/helper-create-class-features-plugin-7.3.4.tgz",
- "integrity": "sha512-uFpzw6L2omjibjxa8VGZsJUPL5wJH0zzGKpoz0ccBkzIa6C8kWNUbiBmQ0rgOKWlHJ6qzmfa6lTiGchiV8SC+g==",
+ "@babel/helper-compilation-targets": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/helper-compilation-targets/-/helper-compilation-targets-7.10.2.tgz",
+ "integrity": "sha512-hYgOhF4To2UTB4LTaZepN/4Pl9LD4gfbJx8A34mqoluT8TLbof1mhUlYuNWTEebONa8+UlCC4X0TEXu7AOUyGA==",
"requires": {
- "@babel/helper-function-name": "^7.1.0",
- "@babel/helper-member-expression-to-functions": "^7.0.0",
- "@babel/helper-optimise-call-expression": "^7.0.0",
- "@babel/helper-plugin-utils": "^7.0.0",
- "@babel/helper-replace-supers": "^7.3.4",
- "@babel/helper-split-export-declaration": "^7.0.0"
+ "@babel/compat-data": "^7.10.1",
+ "browserslist": "^4.12.0",
+ "invariant": "^2.2.4",
+ "levenary": "^1.1.1",
+ "semver": "^5.5.0"
},
"dependencies": {
- "@babel/generator": {
- "version": "7.3.4",
- "resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.3.4.tgz",
- "integrity": "sha512-8EXhHRFqlVVWXPezBW5keTiQi/rJMQTg/Y9uVCEZ0CAF3PKtCCaVRnp64Ii1ujhkoDhhF1fVsImoN4yJ2uz4Wg==",
+ "browserslist": {
+ "version": "4.12.0",
+ "resolved": "https://registry.npmjs.org/browserslist/-/browserslist-4.12.0.tgz",
+ "integrity": "sha512-UH2GkcEDSI0k/lRkuDSzFl9ZZ87skSy9w2XAn1MsZnL+4c4rqbBd3e82UWHbYDpztABrPBhZsTEeuxVfHppqDg==",
"requires": {
- "@babel/types": "^7.3.4",
- "jsesc": "^2.5.1",
- "lodash": "^4.17.11",
- "source-map": "^0.5.0",
- "trim-right": "^1.0.1"
+ "caniuse-lite": "^1.0.30001043",
+ "electron-to-chromium": "^1.3.413",
+ "node-releases": "^1.1.53",
+ "pkg-up": "^2.0.0"
}
},
- "@babel/helper-replace-supers": {
- "version": "7.3.4",
- "resolved": "https://registry.npmjs.org/@babel/helper-replace-supers/-/helper-replace-supers-7.3.4.tgz",
- "integrity": "sha512-pvObL9WVf2ADs+ePg0jrqlhHoxRXlOa+SHRHzAXIz2xkYuOHfGl+fKxPMaS4Fq+uje8JQPobnertBBvyrWnQ1A==",
+ "caniuse-lite": {
+ "version": "1.0.30001084",
+ "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001084.tgz",
+ "integrity": "sha512-ftdc5oGmhEbLUuMZ/Qp3mOpzfZLCxPYKcvGv6v2dJJ+8EdqcvZRbAGOiLmkM/PV1QGta/uwBs8/nCl6sokDW6w=="
+ },
+ "electron-to-chromium": {
+ "version": "1.3.474",
+ "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.3.474.tgz",
+ "integrity": "sha512-fPkSgT9IBKmVJz02XioNsIpg0WYmkPrvU1lUJblMMJALxyE7/32NGvbJQKKxpNokozPvqfqkuUqVClYsvetcLw=="
+ },
+ "node-releases": {
+ "version": "1.1.58",
+ "resolved": "https://registry.npmjs.org/node-releases/-/node-releases-1.1.58.tgz",
+ "integrity": "sha512-NxBudgVKiRh/2aPWMgPR7bPTX0VPmGx5QBwCtdHitnqFE5/O8DeBXuIMH1nwNnw/aMo6AjOrpsHzfY3UbUJ7yg=="
+ }
+ }
+ },
+ "@babel/helper-create-class-features-plugin": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/helper-create-class-features-plugin/-/helper-create-class-features-plugin-7.10.2.tgz",
+ "integrity": "sha512-5C/QhkGFh1vqcziq1vAL6SI9ymzUp8BCYjFpvYVhWP4DlATIb3u5q3iUd35mvlyGs8fO7hckkW7i0tmH+5+bvQ==",
+ "requires": {
+ "@babel/helper-function-name": "^7.10.1",
+ "@babel/helper-member-expression-to-functions": "^7.10.1",
+ "@babel/helper-optimise-call-expression": "^7.10.1",
+ "@babel/helper-plugin-utils": "^7.10.1",
+ "@babel/helper-replace-supers": "^7.10.1",
+ "@babel/helper-split-export-declaration": "^7.10.1"
+ },
+ "dependencies": {
+ "@babel/code-frame": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.10.1.tgz",
+ "integrity": "sha512-IGhtTmpjGbYzcEDOw7DcQtbQSXcG9ftmAXtWTu9V936vDye4xjjekktFAtgZsWpzTj/X01jocB46mTywm/4SZw==",
"requires": {
- "@babel/helper-member-expression-to-functions": "^7.0.0",
- "@babel/helper-optimise-call-expression": "^7.0.0",
- "@babel/traverse": "^7.3.4",
- "@babel/types": "^7.3.4"
+ "@babel/highlight": "^7.10.1"
+ }
+ },
+ "@babel/generator": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.10.2.tgz",
+ "integrity": "sha512-AxfBNHNu99DTMvlUPlt1h2+Hn7knPpH5ayJ8OqDWSeLld+Fi2AYBTC/IejWDM9Edcii4UzZRCsbUt0WlSDsDsA==",
+ "requires": {
+ "@babel/types": "^7.10.2",
+ "jsesc": "^2.5.1",
+ "lodash": "^4.17.13",
+ "source-map": "^0.5.0"
+ }
+ },
+ "@babel/helper-function-name": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-function-name/-/helper-function-name-7.10.1.tgz",
+ "integrity": "sha512-fcpumwhs3YyZ/ttd5Rz0xn0TpIwVkN7X0V38B9TWNfVF42KEkhkAAuPCQ3oXmtTRtiPJrmZ0TrfS0GKF0eMaRQ==",
+ "requires": {
+ "@babel/helper-get-function-arity": "^7.10.1",
+ "@babel/template": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-get-function-arity": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-get-function-arity/-/helper-get-function-arity-7.10.1.tgz",
+ "integrity": "sha512-F5qdXkYGOQUb0hpRaPoetF9AnsXknKjWMZ+wmsIRsp5ge5sFh4c3h1eH2pRTTuy9KKAA2+TTYomGXAtEL2fQEw==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-member-expression-to-functions": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-member-expression-to-functions/-/helper-member-expression-to-functions-7.10.1.tgz",
+ "integrity": "sha512-u7XLXeM2n50gb6PWJ9hoO5oO7JFPaZtrh35t8RqKLT1jFKj9IWeD1zrcrYp1q1qiZTdEarfDWfTIP8nGsu0h5g==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-optimise-call-expression": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-optimise-call-expression/-/helper-optimise-call-expression-7.10.1.tgz",
+ "integrity": "sha512-a0DjNS1prnBsoKx83dP2falChcs7p3i8VMzdrSbfLhuQra/2ENC4sbri34dz/rWmDADsmF1q5GbfaXydh0Jbjg==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-plugin-utils": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.1.tgz",
+ "integrity": "sha512-fvoGeXt0bJc7VMWZGCAEBEMo/HAjW2mP8apF5eXK0wSqwLAVHAISCWRoLMBMUs2kqeaG77jltVqu4Hn8Egl3nA=="
+ },
+ "@babel/helper-replace-supers": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-replace-supers/-/helper-replace-supers-7.10.1.tgz",
+ "integrity": "sha512-SOwJzEfpuQwInzzQJGjGaiG578UYmyi2Xw668klPWV5n07B73S0a9btjLk/52Mlcxa+5AdIYqws1KyXRfMoB7A==",
+ "requires": {
+ "@babel/helper-member-expression-to-functions": "^7.10.1",
+ "@babel/helper-optimise-call-expression": "^7.10.1",
+ "@babel/traverse": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-split-export-declaration": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-split-export-declaration/-/helper-split-export-declaration-7.10.1.tgz",
+ "integrity": "sha512-UQ1LVBPrYdbchNhLwj6fetj46BcFwfS4NllJo/1aJsT+1dLTEnXJL0qHqtY7gPzF8S2fXBJamf1biAXV3X077g==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/highlight": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/highlight/-/highlight-7.10.1.tgz",
+ "integrity": "sha512-8rMof+gVP8mxYZApLF/JgNDAkdKa+aJt3ZYxF8z6+j/hpeXL7iMsKCPHa2jNMHu/qqBwzQF4OHNoYi8dMA/rYg==",
+ "requires": {
+ "@babel/helper-validator-identifier": "^7.10.1",
+ "chalk": "^2.0.0",
+ "js-tokens": "^4.0.0"
}
},
"@babel/parser": {
- "version": "7.3.4",
- "resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.3.4.tgz",
- "integrity": "sha512-tXZCqWtlOOP4wgCp6RjRvLmfuhnqTLy9VHwRochJBCP2nDm27JnnuFEnXFASVyQNHk36jD1tAammsCEEqgscIQ=="
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.10.2.tgz",
+ "integrity": "sha512-PApSXlNMJyB4JiGVhCOlzKIif+TKFTvu0aQAhnTvfP/z3vVSN6ZypH5bfUNwFXXjRQtUEBNFd2PtmCmG2Py3qQ=="
+ },
+ "@babel/template": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/template/-/template-7.10.1.tgz",
+ "integrity": "sha512-OQDg6SqvFSsc9A0ej6SKINWrpJiNonRIniYondK2ViKhB06i3c0s+76XUft71iqBEe9S1OKsHwPAjfHnuvnCig==",
+ "requires": {
+ "@babel/code-frame": "^7.10.1",
+ "@babel/parser": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
},
"@babel/traverse": {
- "version": "7.3.4",
- "resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.3.4.tgz",
- "integrity": "sha512-TvTHKp6471OYEcE/91uWmhR6PrrYywQntCHSaZ8CM8Vmp+pjAusal4nGB2WCCQd0rvI7nOMKn9GnbcvTUz3/ZQ==",
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.10.1.tgz",
+ "integrity": "sha512-C/cTuXeKt85K+p08jN6vMDz8vSV0vZcI0wmQ36o6mjbuo++kPMdpOYw23W2XH04dbRt9/nMEfA4W3eR21CD+TQ==",
"requires": {
- "@babel/code-frame": "^7.0.0",
- "@babel/generator": "^7.3.4",
- "@babel/helper-function-name": "^7.1.0",
- "@babel/helper-split-export-declaration": "^7.0.0",
- "@babel/parser": "^7.3.4",
- "@babel/types": "^7.3.4",
+ "@babel/code-frame": "^7.10.1",
+ "@babel/generator": "^7.10.1",
+ "@babel/helper-function-name": "^7.10.1",
+ "@babel/helper-split-export-declaration": "^7.10.1",
+ "@babel/parser": "^7.10.1",
+ "@babel/types": "^7.10.1",
"debug": "^4.1.0",
"globals": "^11.1.0",
- "lodash": "^4.17.11"
+ "lodash": "^4.17.13"
}
},
"@babel/types": {
- "version": "7.3.4",
- "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.3.4.tgz",
- "integrity": "sha512-WEkp8MsLftM7O/ty580wAmZzN1nDmCACc5+jFzUt+GUFNNIi3LdRlueYz0YIlmJhlZx1QYDMZL5vdWCL0fNjFQ==",
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.10.2.tgz",
+ "integrity": "sha512-AD3AwWBSz0AWF0AkCN9VPiWrvldXq+/e3cHa4J89vo4ymjz1XwrBFFVZmkJTsQIPNk+ZVomPSXUJqq8yyjZsng==",
"requires": {
- "esutils": "^2.0.2",
- "lodash": "^4.17.11",
+ "@babel/helper-validator-identifier": "^7.10.1",
+ "lodash": "^4.17.13",
"to-fast-properties": "^2.0.0"
}
},
@@ -165,6 +360,98 @@
"requires": {
"ms": "^2.1.1"
}
+ },
+ "lodash": {
+ "version": "4.17.15",
+ "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz",
+ "integrity": "sha512-8xOcRHvCjnocdS5cpwXQXVzmmh5e5+saE2QGoeQmbKmRS6J3VQppPOIt0MnmE+4xlZoumy0GPG0D0MVIQbNA1A=="
+ }
+ }
+ },
+ "@babel/helper-create-regexp-features-plugin": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-create-regexp-features-plugin/-/helper-create-regexp-features-plugin-7.10.1.tgz",
+ "integrity": "sha512-Rx4rHS0pVuJn5pJOqaqcZR4XSgeF9G/pO/79t+4r7380tXFJdzImFnxMU19f83wjSrmKHq6myrM10pFHTGzkUA==",
+ "requires": {
+ "@babel/helper-annotate-as-pure": "^7.10.1",
+ "@babel/helper-regex": "^7.10.1",
+ "regexpu-core": "^4.7.0"
+ },
+ "dependencies": {
+ "@babel/helper-annotate-as-pure": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-annotate-as-pure/-/helper-annotate-as-pure-7.10.1.tgz",
+ "integrity": "sha512-ewp3rvJEwLaHgyWGe4wQssC2vjks3E80WiUe2BpMb0KhreTjMROCbxXcEovTrbeGVdQct5VjQfrv9EgC+xMzCw==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-regex": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-regex/-/helper-regex-7.10.1.tgz",
+ "integrity": "sha512-7isHr19RsIJWWLLFn21ubFt223PjQyg1HY7CZEMRr820HttHPpVvrsIN3bUOo44DEfFV4kBXO7Abbn9KTUZV7g==",
+ "requires": {
+ "lodash": "^4.17.13"
+ }
+ },
+ "@babel/types": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.10.2.tgz",
+ "integrity": "sha512-AD3AwWBSz0AWF0AkCN9VPiWrvldXq+/e3cHa4J89vo4ymjz1XwrBFFVZmkJTsQIPNk+ZVomPSXUJqq8yyjZsng==",
+ "requires": {
+ "@babel/helper-validator-identifier": "^7.10.1",
+ "lodash": "^4.17.13",
+ "to-fast-properties": "^2.0.0"
+ }
+ },
+ "jsesc": {
+ "version": "0.5.0",
+ "resolved": "https://registry.npmjs.org/jsesc/-/jsesc-0.5.0.tgz",
+ "integrity": "sha1-597mbjXW/Bb3EP6R1c9p9w8IkR0="
+ },
+ "lodash": {
+ "version": "4.17.15",
+ "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz",
+ "integrity": "sha512-8xOcRHvCjnocdS5cpwXQXVzmmh5e5+saE2QGoeQmbKmRS6J3VQppPOIt0MnmE+4xlZoumy0GPG0D0MVIQbNA1A=="
+ },
+ "regenerate-unicode-properties": {
+ "version": "8.2.0",
+ "resolved": "https://registry.npmjs.org/regenerate-unicode-properties/-/regenerate-unicode-properties-8.2.0.tgz",
+ "integrity": "sha512-F9DjY1vKLo/tPePDycuH3dn9H1OTPIkVD9Kz4LODu+F2C75mgjAJ7x/gwy6ZcSNRAAkhNlJSOHRe8k3p+K9WhA==",
+ "requires": {
+ "regenerate": "^1.4.0"
+ }
+ },
+ "regexpu-core": {
+ "version": "4.7.0",
+ "resolved": "https://registry.npmjs.org/regexpu-core/-/regexpu-core-4.7.0.tgz",
+ "integrity": "sha512-TQ4KXRnIn6tz6tjnrXEkD/sshygKH/j5KzK86X8MkeHyZ8qst/LZ89j3X4/8HEIfHANTFIP/AbXakeRhWIl5YQ==",
+ "requires": {
+ "regenerate": "^1.4.0",
+ "regenerate-unicode-properties": "^8.2.0",
+ "regjsgen": "^0.5.1",
+ "regjsparser": "^0.6.4",
+ "unicode-match-property-ecmascript": "^1.0.4",
+ "unicode-match-property-value-ecmascript": "^1.2.0"
+ }
+ },
+ "regjsgen": {
+ "version": "0.5.2",
+ "resolved": "https://registry.npmjs.org/regjsgen/-/regjsgen-0.5.2.tgz",
+ "integrity": "sha512-OFFT3MfrH90xIW8OOSyUrk6QHD5E9JOTeGodiJeBS3J6IwlgzJMNE/1bZklWz5oTg+9dCMyEetclvCVXOPoN3A=="
+ },
+ "regjsparser": {
+ "version": "0.6.4",
+ "resolved": "https://registry.npmjs.org/regjsparser/-/regjsparser-0.6.4.tgz",
+ "integrity": "sha512-64O87/dPDgfk8/RQqC4gkZoGyyWFIEUTTh80CU6CWuK5vkCGyekIx+oKcEIYtP/RAxSQltCZHCNu/mdd7fqlJw==",
+ "requires": {
+ "jsesc": "~0.5.0"
+ }
+ },
+ "unicode-match-property-value-ecmascript": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/unicode-match-property-value-ecmascript/-/unicode-match-property-value-ecmascript-1.2.0.tgz",
+ "integrity": "sha512-wjuQHGQVofmSJv1uVISKLE5zO2rNGzM/KCYZch/QQvez7C1hUhBIuZ701fYXExuufJFMPhv2SyL8CyoIfMLbIQ=="
}
}
},
@@ -303,6 +590,11 @@
"@babel/types": "^7.0.0"
}
},
+ "@babel/helper-validator-identifier": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.10.1.tgz",
+ "integrity": "sha512-5vW/JXLALhczRCWP0PnFDMCJAchlBvM7f4uk/jXritBnIa6E1KmqmtrS3yn1LAnxFBypQ3eneLuXjsnfQsgILw=="
+ },
"@babel/helper-wrap-function": {
"version": "7.2.0",
"resolved": "https://registry.npmjs.org/@babel/helper-wrap-function/-/helper-wrap-function-7.2.0.tgz",
@@ -350,12 +642,35 @@
}
},
"@babel/plugin-proposal-class-properties": {
- "version": "7.3.4",
- "resolved": "https://registry.npmjs.org/@babel/plugin-proposal-class-properties/-/plugin-proposal-class-properties-7.3.4.tgz",
- "integrity": "sha512-lUf8D3HLs4yYlAo8zjuneLvfxN7qfKv1Yzbj5vjqaqMJxgJA3Ipwp4VUJ+OrOdz53Wbww6ahwB8UhB2HQyLotA==",
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-proposal-class-properties/-/plugin-proposal-class-properties-7.10.1.tgz",
+ "integrity": "sha512-sqdGWgoXlnOdgMXU+9MbhzwFRgxVLeiGBqTrnuS7LC2IBU31wSsESbTUreT2O418obpfPdGUR2GbEufZF1bpqw==",
"requires": {
- "@babel/helper-create-class-features-plugin": "^7.3.4",
- "@babel/helper-plugin-utils": "^7.0.0"
+ "@babel/helper-create-class-features-plugin": "^7.10.1",
+ "@babel/helper-plugin-utils": "^7.10.1"
+ },
+ "dependencies": {
+ "@babel/helper-plugin-utils": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.1.tgz",
+ "integrity": "sha512-fvoGeXt0bJc7VMWZGCAEBEMo/HAjW2mP8apF5eXK0wSqwLAVHAISCWRoLMBMUs2kqeaG77jltVqu4Hn8Egl3nA=="
+ }
+ }
+ },
+ "@babel/plugin-proposal-dynamic-import": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-proposal-dynamic-import/-/plugin-proposal-dynamic-import-7.10.1.tgz",
+ "integrity": "sha512-Cpc2yUVHTEGPlmiQzXj026kqwjEQAD9I4ZC16uzdbgWgitg/UHKHLffKNCQZ5+y8jpIZPJcKcwsr2HwPh+w3XA==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1",
+ "@babel/plugin-syntax-dynamic-import": "^7.8.0"
+ },
+ "dependencies": {
+ "@babel/helper-plugin-utils": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.1.tgz",
+ "integrity": "sha512-fvoGeXt0bJc7VMWZGCAEBEMo/HAjW2mP8apF5eXK0wSqwLAVHAISCWRoLMBMUs2kqeaG77jltVqu4Hn8Egl3nA=="
+ }
}
},
"@babel/plugin-proposal-json-strings": {
@@ -367,6 +682,38 @@
"@babel/plugin-syntax-json-strings": "^7.2.0"
}
},
+ "@babel/plugin-proposal-nullish-coalescing-operator": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-proposal-nullish-coalescing-operator/-/plugin-proposal-nullish-coalescing-operator-7.10.1.tgz",
+ "integrity": "sha512-56cI/uHYgL2C8HVuHOuvVowihhX0sxb3nnfVRzUeVHTWmRHTZrKuAh/OBIMggGU/S1g/1D2CRCXqP+3u7vX7iA==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1",
+ "@babel/plugin-syntax-nullish-coalescing-operator": "^7.8.0"
+ },
+ "dependencies": {
+ "@babel/helper-plugin-utils": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.1.tgz",
+ "integrity": "sha512-fvoGeXt0bJc7VMWZGCAEBEMo/HAjW2mP8apF5eXK0wSqwLAVHAISCWRoLMBMUs2kqeaG77jltVqu4Hn8Egl3nA=="
+ }
+ }
+ },
+ "@babel/plugin-proposal-numeric-separator": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-proposal-numeric-separator/-/plugin-proposal-numeric-separator-7.10.1.tgz",
+ "integrity": "sha512-jjfym4N9HtCiNfyyLAVD8WqPYeHUrw4ihxuAynWj6zzp2gf9Ey2f7ImhFm6ikB3CLf5Z/zmcJDri6B4+9j9RsA==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1",
+ "@babel/plugin-syntax-numeric-separator": "^7.10.1"
+ },
+ "dependencies": {
+ "@babel/helper-plugin-utils": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.1.tgz",
+ "integrity": "sha512-fvoGeXt0bJc7VMWZGCAEBEMo/HAjW2mP8apF5eXK0wSqwLAVHAISCWRoLMBMUs2kqeaG77jltVqu4Hn8Egl3nA=="
+ }
+ }
+ },
"@babel/plugin-proposal-object-rest-spread": {
"version": "7.2.0",
"resolved": "https://registry.npmjs.org/@babel/plugin-proposal-object-rest-spread/-/plugin-proposal-object-rest-spread-7.2.0.tgz",
@@ -385,6 +732,38 @@
"@babel/plugin-syntax-optional-catch-binding": "^7.2.0"
}
},
+ "@babel/plugin-proposal-optional-chaining": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-proposal-optional-chaining/-/plugin-proposal-optional-chaining-7.10.1.tgz",
+ "integrity": "sha512-dqQj475q8+/avvok72CF3AOSV/SGEcH29zT5hhohqqvvZ2+boQoOr7iGldBG5YXTO2qgCgc2B3WvVLUdbeMlGA==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1",
+ "@babel/plugin-syntax-optional-chaining": "^7.8.0"
+ },
+ "dependencies": {
+ "@babel/helper-plugin-utils": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.1.tgz",
+ "integrity": "sha512-fvoGeXt0bJc7VMWZGCAEBEMo/HAjW2mP8apF5eXK0wSqwLAVHAISCWRoLMBMUs2kqeaG77jltVqu4Hn8Egl3nA=="
+ }
+ }
+ },
+ "@babel/plugin-proposal-private-methods": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-proposal-private-methods/-/plugin-proposal-private-methods-7.10.1.tgz",
+ "integrity": "sha512-RZecFFJjDiQ2z6maFprLgrdnm0OzoC23Mx89xf1CcEsxmHuzuXOdniEuI+S3v7vjQG4F5sa6YtUp+19sZuSxHg==",
+ "requires": {
+ "@babel/helper-create-class-features-plugin": "^7.10.1",
+ "@babel/helper-plugin-utils": "^7.10.1"
+ },
+ "dependencies": {
+ "@babel/helper-plugin-utils": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.1.tgz",
+ "integrity": "sha512-fvoGeXt0bJc7VMWZGCAEBEMo/HAjW2mP8apF5eXK0wSqwLAVHAISCWRoLMBMUs2kqeaG77jltVqu4Hn8Egl3nA=="
+ }
+ }
+ },
"@babel/plugin-proposal-unicode-property-regex": {
"version": "7.2.0",
"resolved": "https://registry.npmjs.org/@babel/plugin-proposal-unicode-property-regex/-/plugin-proposal-unicode-property-regex-7.2.0.tgz",
@@ -404,27 +783,48 @@
}
},
"@babel/plugin-syntax-class-properties": {
- "version": "7.2.0",
- "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-class-properties/-/plugin-syntax-class-properties-7.2.0.tgz",
- "integrity": "sha512-UxYaGXYQ7rrKJS/PxIKRkv3exi05oH7rokBAsmCSsCxz1sVPZ7Fu6FzKoGgUvmY+0YgSkYHgUoCh5R5bCNBQlw==",
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-class-properties/-/plugin-syntax-class-properties-7.10.1.tgz",
+ "integrity": "sha512-Gf2Yx/iRs1JREDtVZ56OrjjgFHCaldpTnuy9BHla10qyVT3YkIIGEtoDWhyop0ksu1GvNjHIoYRBqm3zoR1jyQ==",
"requires": {
- "@babel/helper-plugin-utils": "^7.0.0"
+ "@babel/helper-plugin-utils": "^7.10.1"
+ },
+ "dependencies": {
+ "@babel/helper-plugin-utils": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.1.tgz",
+ "integrity": "sha512-fvoGeXt0bJc7VMWZGCAEBEMo/HAjW2mP8apF5eXK0wSqwLAVHAISCWRoLMBMUs2kqeaG77jltVqu4Hn8Egl3nA=="
+ }
}
},
"@babel/plugin-syntax-dynamic-import": {
- "version": "7.2.0",
- "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-dynamic-import/-/plugin-syntax-dynamic-import-7.2.0.tgz",
- "integrity": "sha512-mVxuJ0YroI/h/tbFTPGZR8cv6ai+STMKNBq0f8hFxsxWjl94qqhsb+wXbpNMDPU3cfR1TIsVFzU3nXyZMqyK4w==",
+ "version": "7.8.3",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-dynamic-import/-/plugin-syntax-dynamic-import-7.8.3.tgz",
+ "integrity": "sha512-5gdGbFon+PszYzqs83S3E5mpi7/y/8M9eC90MRTZfduQOYW76ig6SOSPNe41IG5LoP3FGBn2N0RjVDSQiS94kQ==",
"requires": {
- "@babel/helper-plugin-utils": "^7.0.0"
+ "@babel/helper-plugin-utils": "^7.8.0"
+ },
+ "dependencies": {
+ "@babel/helper-plugin-utils": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.1.tgz",
+ "integrity": "sha512-fvoGeXt0bJc7VMWZGCAEBEMo/HAjW2mP8apF5eXK0wSqwLAVHAISCWRoLMBMUs2kqeaG77jltVqu4Hn8Egl3nA=="
+ }
}
},
"@babel/plugin-syntax-flow": {
- "version": "7.2.0",
- "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-flow/-/plugin-syntax-flow-7.2.0.tgz",
- "integrity": "sha512-r6YMuZDWLtLlu0kqIim5o/3TNRAlWb073HwT3e2nKf9I8IIvOggPrnILYPsrrKilmn/mYEMCf/Z07w3yQJF6dg==",
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-flow/-/plugin-syntax-flow-7.10.1.tgz",
+ "integrity": "sha512-b3pWVncLBYoPP60UOTc7NMlbtsHQ6ITim78KQejNHK6WJ2mzV5kCcg4mIWpasAfJEgwVTibwo2e+FU7UEIKQUg==",
"requires": {
- "@babel/helper-plugin-utils": "^7.0.0"
+ "@babel/helper-plugin-utils": "^7.10.1"
+ },
+ "dependencies": {
+ "@babel/helper-plugin-utils": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.1.tgz",
+ "integrity": "sha512-fvoGeXt0bJc7VMWZGCAEBEMo/HAjW2mP8apF5eXK0wSqwLAVHAISCWRoLMBMUs2kqeaG77jltVqu4Hn8Egl3nA=="
+ }
}
},
"@babel/plugin-syntax-json-strings": {
@@ -443,6 +843,36 @@
"@babel/helper-plugin-utils": "^7.0.0"
}
},
+ "@babel/plugin-syntax-nullish-coalescing-operator": {
+ "version": "7.8.3",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-nullish-coalescing-operator/-/plugin-syntax-nullish-coalescing-operator-7.8.3.tgz",
+ "integrity": "sha512-aSff4zPII1u2QD7y+F8oDsz19ew4IGEJg9SVW+bqwpwtfFleiQDMdzA/R+UlWDzfnHFCxxleFT0PMIrR36XLNQ==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.8.0"
+ },
+ "dependencies": {
+ "@babel/helper-plugin-utils": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.1.tgz",
+ "integrity": "sha512-fvoGeXt0bJc7VMWZGCAEBEMo/HAjW2mP8apF5eXK0wSqwLAVHAISCWRoLMBMUs2kqeaG77jltVqu4Hn8Egl3nA=="
+ }
+ }
+ },
+ "@babel/plugin-syntax-numeric-separator": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-numeric-separator/-/plugin-syntax-numeric-separator-7.10.1.tgz",
+ "integrity": "sha512-uTd0OsHrpe3tH5gRPTxG8Voh99/WCU78vIm5NMRYPAqC8lR4vajt6KkCAknCHrx24vkPdd/05yfdGSB4EIY2mg==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1"
+ },
+ "dependencies": {
+ "@babel/helper-plugin-utils": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.1.tgz",
+ "integrity": "sha512-fvoGeXt0bJc7VMWZGCAEBEMo/HAjW2mP8apF5eXK0wSqwLAVHAISCWRoLMBMUs2kqeaG77jltVqu4Hn8Egl3nA=="
+ }
+ }
+ },
"@babel/plugin-syntax-object-rest-spread": {
"version": "7.2.0",
"resolved": "https://registry.npmjs.org/@babel/plugin-syntax-object-rest-spread/-/plugin-syntax-object-rest-spread-7.2.0.tgz",
@@ -459,6 +889,36 @@
"@babel/helper-plugin-utils": "^7.0.0"
}
},
+ "@babel/plugin-syntax-optional-chaining": {
+ "version": "7.8.3",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-optional-chaining/-/plugin-syntax-optional-chaining-7.8.3.tgz",
+ "integrity": "sha512-KoK9ErH1MBlCPxV0VANkXW2/dw4vlbGDrFgz8bmUsBGYkFRcbRwMh6cIJubdPrkxRwuGdtCk0v/wPTKbQgBjkg==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.8.0"
+ },
+ "dependencies": {
+ "@babel/helper-plugin-utils": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.1.tgz",
+ "integrity": "sha512-fvoGeXt0bJc7VMWZGCAEBEMo/HAjW2mP8apF5eXK0wSqwLAVHAISCWRoLMBMUs2kqeaG77jltVqu4Hn8Egl3nA=="
+ }
+ }
+ },
+ "@babel/plugin-syntax-top-level-await": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-top-level-await/-/plugin-syntax-top-level-await-7.10.1.tgz",
+ "integrity": "sha512-hgA5RYkmZm8FTFT3yu2N9Bx7yVVOKYT6yEdXXo6j2JTm0wNxgqaGeQVaSHRjhfnQbX91DtjFB6McRFSlcJH3xQ==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1"
+ },
+ "dependencies": {
+ "@babel/helper-plugin-utils": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.1.tgz",
+ "integrity": "sha512-fvoGeXt0bJc7VMWZGCAEBEMo/HAjW2mP8apF5eXK0wSqwLAVHAISCWRoLMBMUs2kqeaG77jltVqu4Hn8Egl3nA=="
+ }
+ }
+ },
"@babel/plugin-transform-arrow-functions": {
"version": "7.2.0",
"resolved": "https://registry.npmjs.org/@babel/plugin-transform-arrow-functions/-/plugin-transform-arrow-functions-7.2.0.tgz",
@@ -553,12 +1013,19 @@
}
},
"@babel/plugin-transform-flow-strip-types": {
- "version": "7.3.4",
- "resolved": "https://registry.npmjs.org/@babel/plugin-transform-flow-strip-types/-/plugin-transform-flow-strip-types-7.3.4.tgz",
- "integrity": "sha512-PmQC9R7DwpBFA+7ATKMyzViz3zCaMNouzZMPZN2K5PnbBbtL3AXFYTkDk+Hey5crQq2A90UG5Uthz0mel+XZrA==",
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-flow-strip-types/-/plugin-transform-flow-strip-types-7.10.1.tgz",
+ "integrity": "sha512-i4o0YwiJBIsIx7/liVCZ3Q2WkWr1/Yu39PksBOnh/khW2SwIFsGa5Ze+MSon5KbDfrEHP9NeyefAgvUSXzaEkw==",
"requires": {
- "@babel/helper-plugin-utils": "^7.0.0",
- "@babel/plugin-syntax-flow": "^7.2.0"
+ "@babel/helper-plugin-utils": "^7.10.1",
+ "@babel/plugin-syntax-flow": "^7.10.1"
+ },
+ "dependencies": {
+ "@babel/helper-plugin-utils": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.1.tgz",
+ "integrity": "sha512-fvoGeXt0bJc7VMWZGCAEBEMo/HAjW2mP8apF5eXK0wSqwLAVHAISCWRoLMBMUs2kqeaG77jltVqu4Hn8Egl3nA=="
+ }
}
},
"@babel/plugin-transform-for-of": {
@@ -587,11 +1054,18 @@
}
},
"@babel/plugin-transform-member-expression-literals": {
- "version": "7.2.0",
- "resolved": "https://registry.npmjs.org/@babel/plugin-transform-member-expression-literals/-/plugin-transform-member-expression-literals-7.2.0.tgz",
- "integrity": "sha512-HiU3zKkSU6scTidmnFJ0bMX8hz5ixC93b4MHMiYebmk2lUVNGOboPsqQvx5LzooihijUoLR/v7Nc1rbBtnc7FA==",
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-member-expression-literals/-/plugin-transform-member-expression-literals-7.10.1.tgz",
+ "integrity": "sha512-UmaWhDokOFT2GcgU6MkHC11i0NQcL63iqeufXWfRy6pUOGYeCGEKhvfFO6Vz70UfYJYHwveg62GS83Rvpxn+NA==",
"requires": {
- "@babel/helper-plugin-utils": "^7.0.0"
+ "@babel/helper-plugin-utils": "^7.10.1"
+ },
+ "dependencies": {
+ "@babel/helper-plugin-utils": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.1.tgz",
+ "integrity": "sha512-fvoGeXt0bJc7VMWZGCAEBEMo/HAjW2mP8apF5eXK0wSqwLAVHAISCWRoLMBMUs2kqeaG77jltVqu4Hn8Egl3nA=="
+ }
}
},
"@babel/plugin-transform-modules-amd": {
@@ -631,6 +1105,14 @@
"@babel/helper-plugin-utils": "^7.0.0"
}
},
+ "@babel/plugin-transform-named-capturing-groups-regex": {
+ "version": "7.8.3",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-named-capturing-groups-regex/-/plugin-transform-named-capturing-groups-regex-7.8.3.tgz",
+ "integrity": "sha512-f+tF/8UVPU86TrCb06JoPWIdDpTNSGGcAtaD9mLP0aYGA0OS0j7j7DHJR0GTFrUZPUU6loZhbsVZgTh0N+Qdnw==",
+ "requires": {
+ "@babel/helper-create-regexp-features-plugin": "^7.8.3"
+ }
+ },
"@babel/plugin-transform-new-target": {
"version": "7.0.0",
"resolved": "https://registry.npmjs.org/@babel/plugin-transform-new-target/-/plugin-transform-new-target-7.0.0.tgz",
@@ -659,11 +1141,18 @@
}
},
"@babel/plugin-transform-property-literals": {
- "version": "7.2.0",
- "resolved": "https://registry.npmjs.org/@babel/plugin-transform-property-literals/-/plugin-transform-property-literals-7.2.0.tgz",
- "integrity": "sha512-9q7Dbk4RhgcLp8ebduOpCbtjh7C0itoLYHXd9ueASKAG/is5PQtMR5VJGka9NKqGhYEGn5ITahd4h9QeBMylWQ==",
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-property-literals/-/plugin-transform-property-literals-7.10.1.tgz",
+ "integrity": "sha512-Kr6+mgag8auNrgEpbfIWzdXYOvqDHZOF0+Bx2xh4H2EDNwcbRb9lY6nkZg8oSjsX+DH9Ebxm9hOqtKW+gRDeNA==",
"requires": {
- "@babel/helper-plugin-utils": "^7.0.0"
+ "@babel/helper-plugin-utils": "^7.10.1"
+ },
+ "dependencies": {
+ "@babel/helper-plugin-utils": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.1.tgz",
+ "integrity": "sha512-fvoGeXt0bJc7VMWZGCAEBEMo/HAjW2mP8apF5eXK0wSqwLAVHAISCWRoLMBMUs2kqeaG77jltVqu4Hn8Egl3nA=="
+ }
}
},
"@babel/plugin-transform-react-constant-elements": {
@@ -693,6 +1182,31 @@
"@babel/plugin-syntax-jsx": "^7.2.0"
}
},
+ "@babel/plugin-transform-react-jsx-development": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-react-jsx-development/-/plugin-transform-react-jsx-development-7.10.1.tgz",
+ "integrity": "sha512-XwDy/FFoCfw9wGFtdn5Z+dHh6HXKHkC6DwKNWpN74VWinUagZfDcEJc3Y8Dn5B3WMVnAllX8Kviaw7MtC5Epwg==",
+ "requires": {
+ "@babel/helper-builder-react-jsx-experimental": "^7.10.1",
+ "@babel/helper-plugin-utils": "^7.10.1",
+ "@babel/plugin-syntax-jsx": "^7.10.1"
+ },
+ "dependencies": {
+ "@babel/helper-plugin-utils": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.1.tgz",
+ "integrity": "sha512-fvoGeXt0bJc7VMWZGCAEBEMo/HAjW2mP8apF5eXK0wSqwLAVHAISCWRoLMBMUs2kqeaG77jltVqu4Hn8Egl3nA=="
+ },
+ "@babel/plugin-syntax-jsx": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-jsx/-/plugin-syntax-jsx-7.10.1.tgz",
+ "integrity": "sha512-+OxyOArpVFXQeXKLO9o+r2I4dIoVoy6+Uu0vKELrlweDM3QJADZj+Z+5ERansZqIZBcLj42vHnDI8Rz9BnRIuQ==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1"
+ }
+ }
+ }
+ },
"@babel/plugin-transform-react-jsx-self": {
"version": "7.2.0",
"resolved": "https://registry.npmjs.org/@babel/plugin-transform-react-jsx-self/-/plugin-transform-react-jsx-self-7.2.0.tgz",
@@ -711,6 +1225,45 @@
"@babel/plugin-syntax-jsx": "^7.2.0"
}
},
+ "@babel/plugin-transform-react-pure-annotations": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-react-pure-annotations/-/plugin-transform-react-pure-annotations-7.10.1.tgz",
+ "integrity": "sha512-mfhoiai083AkeewsBHUpaS/FM1dmUENHBMpS/tugSJ7VXqXO5dCN1Gkint2YvM1Cdv1uhmAKt1ZOuAjceKmlLA==",
+ "requires": {
+ "@babel/helper-annotate-as-pure": "^7.10.1",
+ "@babel/helper-plugin-utils": "^7.10.1"
+ },
+ "dependencies": {
+ "@babel/helper-annotate-as-pure": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-annotate-as-pure/-/helper-annotate-as-pure-7.10.1.tgz",
+ "integrity": "sha512-ewp3rvJEwLaHgyWGe4wQssC2vjks3E80WiUe2BpMb0KhreTjMROCbxXcEovTrbeGVdQct5VjQfrv9EgC+xMzCw==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-plugin-utils": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.1.tgz",
+ "integrity": "sha512-fvoGeXt0bJc7VMWZGCAEBEMo/HAjW2mP8apF5eXK0wSqwLAVHAISCWRoLMBMUs2kqeaG77jltVqu4Hn8Egl3nA=="
+ },
+ "@babel/types": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.10.2.tgz",
+ "integrity": "sha512-AD3AwWBSz0AWF0AkCN9VPiWrvldXq+/e3cHa4J89vo4ymjz1XwrBFFVZmkJTsQIPNk+ZVomPSXUJqq8yyjZsng==",
+ "requires": {
+ "@babel/helper-validator-identifier": "^7.10.1",
+ "lodash": "^4.17.13",
+ "to-fast-properties": "^2.0.0"
+ }
+ },
+ "lodash": {
+ "version": "4.17.15",
+ "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz",
+ "integrity": "sha512-8xOcRHvCjnocdS5cpwXQXVzmmh5e5+saE2QGoeQmbKmRS6J3VQppPOIt0MnmE+4xlZoumy0GPG0D0MVIQbNA1A=="
+ }
+ }
+ },
"@babel/plugin-transform-regenerator": {
"version": "7.0.0",
"resolved": "https://registry.npmjs.org/@babel/plugin-transform-regenerator/-/plugin-transform-regenerator-7.0.0.tgz",
@@ -719,15 +1272,60 @@
"regenerator-transform": "^0.13.3"
}
},
- "@babel/plugin-transform-runtime": {
- "version": "7.3.4",
- "resolved": "https://registry.npmjs.org/@babel/plugin-transform-runtime/-/plugin-transform-runtime-7.3.4.tgz",
- "integrity": "sha512-PaoARuztAdd5MgeVjAxnIDAIUet5KpogqaefQvPOmPYCxYoaPhautxDh3aO8a4xHsKgT/b9gSxR0BKK1MIewPA==",
+ "@babel/plugin-transform-reserved-words": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-reserved-words/-/plugin-transform-reserved-words-7.10.1.tgz",
+ "integrity": "sha512-qN1OMoE2nuqSPmpTqEM7OvJ1FkMEV+BjVeZZm9V9mq/x1JLKQ4pcv8riZJMNN3u2AUGl0ouOMjRr2siecvHqUQ==",
"requires": {
- "@babel/helper-module-imports": "^7.0.0",
- "@babel/helper-plugin-utils": "^7.0.0",
+ "@babel/helper-plugin-utils": "^7.10.1"
+ },
+ "dependencies": {
+ "@babel/helper-plugin-utils": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.1.tgz",
+ "integrity": "sha512-fvoGeXt0bJc7VMWZGCAEBEMo/HAjW2mP8apF5eXK0wSqwLAVHAISCWRoLMBMUs2kqeaG77jltVqu4Hn8Egl3nA=="
+ }
+ }
+ },
+ "@babel/plugin-transform-runtime": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-runtime/-/plugin-transform-runtime-7.10.1.tgz",
+ "integrity": "sha512-4w2tcglDVEwXJ5qxsY++DgWQdNJcCCsPxfT34wCUwIf2E7dI7pMpH8JczkMBbgBTNzBX62SZlNJ9H+De6Zebaw==",
+ "requires": {
+ "@babel/helper-module-imports": "^7.10.1",
+ "@babel/helper-plugin-utils": "^7.10.1",
"resolve": "^1.8.1",
"semver": "^5.5.1"
+ },
+ "dependencies": {
+ "@babel/helper-module-imports": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-module-imports/-/helper-module-imports-7.10.1.tgz",
+ "integrity": "sha512-SFxgwYmZ3HZPyZwJRiVNLRHWuW2OgE5k2nrVs6D9Iv4PPnXVffuEHy83Sfx/l4SqF+5kyJXjAyUmrG7tNm+qVg==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-plugin-utils": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.1.tgz",
+ "integrity": "sha512-fvoGeXt0bJc7VMWZGCAEBEMo/HAjW2mP8apF5eXK0wSqwLAVHAISCWRoLMBMUs2kqeaG77jltVqu4Hn8Egl3nA=="
+ },
+ "@babel/types": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.10.2.tgz",
+ "integrity": "sha512-AD3AwWBSz0AWF0AkCN9VPiWrvldXq+/e3cHa4J89vo4ymjz1XwrBFFVZmkJTsQIPNk+ZVomPSXUJqq8yyjZsng==",
+ "requires": {
+ "@babel/helper-validator-identifier": "^7.10.1",
+ "lodash": "^4.17.13",
+ "to-fast-properties": "^2.0.0"
+ }
+ },
+ "lodash": {
+ "version": "4.17.15",
+ "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz",
+ "integrity": "sha512-8xOcRHvCjnocdS5cpwXQXVzmmh5e5+saE2QGoeQmbKmRS6J3VQppPOIt0MnmE+4xlZoumy0GPG0D0MVIQbNA1A=="
+ }
}
},
"@babel/plugin-transform-shorthand-properties": {
@@ -772,6 +1370,21 @@
"@babel/helper-plugin-utils": "^7.0.0"
}
},
+ "@babel/plugin-transform-unicode-escapes": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-unicode-escapes/-/plugin-transform-unicode-escapes-7.10.1.tgz",
+ "integrity": "sha512-zZ0Poh/yy1d4jeDWpx/mNwbKJVwUYJX73q+gyh4bwtG0/iUlzdEu0sLMda8yuDFS6LBQlT/ST1SJAR6zYwXWgw==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1"
+ },
+ "dependencies": {
+ "@babel/helper-plugin-utils": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.1.tgz",
+ "integrity": "sha512-fvoGeXt0bJc7VMWZGCAEBEMo/HAjW2mP8apF5eXK0wSqwLAVHAISCWRoLMBMUs2kqeaG77jltVqu4Hn8Egl3nA=="
+ }
+ }
+ },
"@babel/plugin-transform-unicode-regex": {
"version": "7.2.0",
"resolved": "https://registry.npmjs.org/@babel/plugin-transform-unicode-regex/-/plugin-transform-unicode-regex-7.2.0.tgz",
@@ -783,12 +1396,24 @@
}
},
"@babel/polyfill": {
- "version": "7.2.5",
- "resolved": "https://registry.npmjs.org/@babel/polyfill/-/polyfill-7.2.5.tgz",
- "integrity": "sha512-8Y/t3MWThtMLYr0YNC/Q76tqN1w30+b0uQMeFUYauG2UGTR19zyUtFrAzT23zNtBxPp+LbE5E/nwV/q/r3y6ug==",
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/polyfill/-/polyfill-7.10.1.tgz",
+ "integrity": "sha512-TviueJ4PBW5p48ra8IMtLXVkDucrlOZAIZ+EXqS3Ot4eukHbWiqcn7DcqpA1k5PcKtmJ4Xl9xwdv6yQvvcA+3g==",
"requires": {
- "core-js": "^2.5.7",
- "regenerator-runtime": "^0.12.0"
+ "core-js": "^2.6.5",
+ "regenerator-runtime": "^0.13.4"
+ },
+ "dependencies": {
+ "core-js": {
+ "version": "2.6.11",
+ "resolved": "https://registry.npmjs.org/core-js/-/core-js-2.6.11.tgz",
+ "integrity": "sha512-5wjnpaT/3dV+XB4borEsnAYQchn00XSgTAWKDkEqv+K8KevjbzmofK6hfJ9TZIlpj2N0xQpazy7PiRQiWHqzWg=="
+ },
+ "regenerator-runtime": {
+ "version": "0.13.5",
+ "resolved": "https://registry.npmjs.org/regenerator-runtime/-/regenerator-runtime-0.13.5.tgz",
+ "integrity": "sha512-ZS5w8CpKFinUzOwW3c83oPeVXoNsrLsaCoLtJvAClH135j/R77RuymhiSErhm2lKcwSCIpmvIWSbDkIfAqKQlA=="
+ }
}
},
"@babel/preset-env": {
@@ -851,6 +1476,67 @@
}
}
},
+ "@babel/preset-modules": {
+ "version": "0.1.3",
+ "resolved": "https://registry.npmjs.org/@babel/preset-modules/-/preset-modules-0.1.3.tgz",
+ "integrity": "sha512-Ra3JXOHBq2xd56xSF7lMKXdjBn3T772Y1Wet3yWnkDly9zHvJki029tAFzvAAK5cf4YV3yoxuP61crYRol6SVg==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.0.0",
+ "@babel/plugin-proposal-unicode-property-regex": "^7.4.4",
+ "@babel/plugin-transform-dotall-regex": "^7.4.4",
+ "@babel/types": "^7.4.4",
+ "esutils": "^2.0.2"
+ },
+ "dependencies": {
+ "@babel/plugin-proposal-unicode-property-regex": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-proposal-unicode-property-regex/-/plugin-proposal-unicode-property-regex-7.10.1.tgz",
+ "integrity": "sha512-JjfngYRvwmPwmnbRZyNiPFI8zxCZb8euzbCG/LxyKdeTb59tVciKo9GK9bi6JYKInk1H11Dq9j/zRqIH4KigfQ==",
+ "requires": {
+ "@babel/helper-create-regexp-features-plugin": "^7.10.1",
+ "@babel/helper-plugin-utils": "^7.10.1"
+ },
+ "dependencies": {
+ "@babel/helper-plugin-utils": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.1.tgz",
+ "integrity": "sha512-fvoGeXt0bJc7VMWZGCAEBEMo/HAjW2mP8apF5eXK0wSqwLAVHAISCWRoLMBMUs2kqeaG77jltVqu4Hn8Egl3nA=="
+ }
+ }
+ },
+ "@babel/plugin-transform-dotall-regex": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-dotall-regex/-/plugin-transform-dotall-regex-7.10.1.tgz",
+ "integrity": "sha512-19VIMsD1dp02RvduFUmfzj8uknaO3uiHHF0s3E1OHnVsNj8oge8EQ5RzHRbJjGSetRnkEuBYO7TG1M5kKjGLOA==",
+ "requires": {
+ "@babel/helper-create-regexp-features-plugin": "^7.10.1",
+ "@babel/helper-plugin-utils": "^7.10.1"
+ },
+ "dependencies": {
+ "@babel/helper-plugin-utils": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.1.tgz",
+ "integrity": "sha512-fvoGeXt0bJc7VMWZGCAEBEMo/HAjW2mP8apF5eXK0wSqwLAVHAISCWRoLMBMUs2kqeaG77jltVqu4Hn8Egl3nA=="
+ }
+ }
+ },
+ "@babel/types": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.10.2.tgz",
+ "integrity": "sha512-AD3AwWBSz0AWF0AkCN9VPiWrvldXq+/e3cHa4J89vo4ymjz1XwrBFFVZmkJTsQIPNk+ZVomPSXUJqq8yyjZsng==",
+ "requires": {
+ "@babel/helper-validator-identifier": "^7.10.1",
+ "lodash": "^4.17.13",
+ "to-fast-properties": "^2.0.0"
+ }
+ },
+ "lodash": {
+ "version": "4.17.15",
+ "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz",
+ "integrity": "sha512-8xOcRHvCjnocdS5cpwXQXVzmmh5e5+saE2QGoeQmbKmRS6J3VQppPOIt0MnmE+4xlZoumy0GPG0D0MVIQbNA1A=="
+ }
+ }
+ },
"@babel/preset-react": {
"version": "7.0.0",
"resolved": "https://registry.npmjs.org/@babel/preset-react/-/preset-react-7.0.0.tgz",
@@ -871,6 +1557,27 @@
"regenerator-runtime": "^0.12.0"
}
},
+ "@babel/runtime-corejs3": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/runtime-corejs3/-/runtime-corejs3-7.10.2.tgz",
+ "integrity": "sha512-+a2M/u7r15o3dV1NEizr9bRi+KUVnrs/qYxF0Z06DAPx/4VCWaz1WA7EcbE+uqGgt39lp5akWGmHsTseIkHkHg==",
+ "requires": {
+ "core-js-pure": "^3.0.0",
+ "regenerator-runtime": "^0.13.4"
+ },
+ "dependencies": {
+ "regenerator-runtime": {
+ "version": "0.13.5",
+ "resolved": "https://registry.npmjs.org/regenerator-runtime/-/regenerator-runtime-0.13.5.tgz",
+ "integrity": "sha512-ZS5w8CpKFinUzOwW3c83oPeVXoNsrLsaCoLtJvAClH135j/R77RuymhiSErhm2lKcwSCIpmvIWSbDkIfAqKQlA=="
+ }
+ }
+ },
+ "@babel/standalone": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/standalone/-/standalone-7.10.2.tgz",
+ "integrity": "sha512-PNQuj9oQH6BL/3l9iiL8hJLQwX14woA2/FHcPtNIZAc7IgFZYJdtMBMXiy4xcefADHTSvoBnmc2AybrHRW1IKQ=="
+ },
"@babel/template": {
"version": "7.2.2",
"resolved": "https://registry.npmjs.org/@babel/template/-/template-7.2.2.tgz",
@@ -957,6 +1664,1249 @@
}
}
},
+ "@graphql-tools/code-file-loader": {
+ "version": "6.0.10",
+ "resolved": "https://registry.npmjs.org/@graphql-tools/code-file-loader/-/code-file-loader-6.0.10.tgz",
+ "integrity": "sha512-p/9GAdZF+/liuNFTTIHISCXUX2Cfzk4tmHdigKbRbo1ho2TFNvf9uGZQSZUdm2QxgZosAWfXjXY++jZkDJztSg==",
+ "requires": {
+ "@graphql-tools/graphql-tag-pluck": "6.0.10",
+ "@graphql-tools/utils": "6.0.10",
+ "fs-extra": "9.0.1",
+ "tslib": "~2.0.0"
+ },
+ "dependencies": {
+ "fs-extra": {
+ "version": "9.0.1",
+ "resolved": "https://registry.npmjs.org/fs-extra/-/fs-extra-9.0.1.tgz",
+ "integrity": "sha512-h2iAoN838FqAFJY2/qVpzFXy+EBxfVE220PalAqQLDVsFOHLJrZvut5puAbCdNv6WJk+B8ihI+k0c7JK5erwqQ==",
+ "requires": {
+ "at-least-node": "^1.0.0",
+ "graceful-fs": "^4.2.0",
+ "jsonfile": "^6.0.1",
+ "universalify": "^1.0.0"
+ }
+ },
+ "graceful-fs": {
+ "version": "4.2.4",
+ "resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.4.tgz",
+ "integrity": "sha512-WjKPNJF79dtJAVniUlGGWHYGz2jWxT6VhN/4m1NdkbZ2nOsEF+cI1Edgql5zCRhs/VsQYRvrXctxktVXZUkixw=="
+ },
+ "jsonfile": {
+ "version": "6.0.1",
+ "resolved": "https://registry.npmjs.org/jsonfile/-/jsonfile-6.0.1.tgz",
+ "integrity": "sha512-jR2b5v7d2vIOust+w3wtFKZIfpC2pnRmFAhAC/BuweZFQR8qZzxH1OyrQ10HmdVYiXWkYUqPVsz91cG7EL2FBg==",
+ "requires": {
+ "graceful-fs": "^4.1.6",
+ "universalify": "^1.0.0"
+ }
+ },
+ "tslib": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.0.0.tgz",
+ "integrity": "sha512-lTqkx847PI7xEDYJntxZH89L2/aXInsyF2luSafe/+0fHOMjlBNXdH6th7f70qxLDhul7KZK0zC8V5ZIyHl0/g=="
+ },
+ "universalify": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/universalify/-/universalify-1.0.0.tgz",
+ "integrity": "sha512-rb6X1W158d7pRQBg5gkR8uPaSfiids68LTJQYOtEUhoJUWBdaQHsuT/EUduxXYxcrt4r5PJ4fuHW1MHT6p0qug=="
+ }
+ }
+ },
+ "@graphql-tools/delegate": {
+ "version": "6.0.10",
+ "resolved": "https://registry.npmjs.org/@graphql-tools/delegate/-/delegate-6.0.10.tgz",
+ "integrity": "sha512-FBHrmpSI9QpNbvqc5D4wdQW0WrNVUA2ylFhzsNRk9yvlKzcVKqiTrOpb++j7TLB+tG06dpSkfAssPcgZvU60fw==",
+ "requires": {
+ "@graphql-tools/schema": "6.0.10",
+ "@graphql-tools/utils": "6.0.10",
+ "aggregate-error": "3.0.1",
+ "tslib": "~2.0.0"
+ },
+ "dependencies": {
+ "tslib": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.0.0.tgz",
+ "integrity": "sha512-lTqkx847PI7xEDYJntxZH89L2/aXInsyF2luSafe/+0fHOMjlBNXdH6th7f70qxLDhul7KZK0zC8V5ZIyHl0/g=="
+ }
+ }
+ },
+ "@graphql-tools/git-loader": {
+ "version": "6.0.10",
+ "resolved": "https://registry.npmjs.org/@graphql-tools/git-loader/-/git-loader-6.0.10.tgz",
+ "integrity": "sha512-KNtbGgijL2zVH+cQlYCcvYL+fUDxYjzEuwnTvi8iSUtSIVFTdphQIg7+kVuk9sCBdKj7kegFMzHlzh3pfEji1g==",
+ "requires": {
+ "@graphql-tools/graphql-tag-pluck": "6.0.10",
+ "@graphql-tools/utils": "6.0.10",
+ "simple-git": "2.6.0"
+ }
+ },
+ "@graphql-tools/github-loader": {
+ "version": "6.0.10",
+ "resolved": "https://registry.npmjs.org/@graphql-tools/github-loader/-/github-loader-6.0.10.tgz",
+ "integrity": "sha512-UdeZAfz76CUfDFIjLPtFQaBq3kJlMJObKzh7r9T+dizpbmjl1+kfN2idaGtTJIzCnbWEPtbWMJDtc4ioqpj9oQ==",
+ "requires": {
+ "@graphql-tools/graphql-tag-pluck": "6.0.10",
+ "@graphql-tools/utils": "6.0.10",
+ "cross-fetch": "3.0.4"
+ },
+ "dependencies": {
+ "cross-fetch": {
+ "version": "3.0.4",
+ "resolved": "https://registry.npmjs.org/cross-fetch/-/cross-fetch-3.0.4.tgz",
+ "integrity": "sha512-MSHgpjQqgbT/94D4CyADeNoYh52zMkCX4pcJvPP5WqPsLFMKjr2TCMg381ox5qI0ii2dPwaLx/00477knXqXVw==",
+ "requires": {
+ "node-fetch": "2.6.0",
+ "whatwg-fetch": "3.0.0"
+ }
+ },
+ "node-fetch": {
+ "version": "2.6.0",
+ "resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-2.6.0.tgz",
+ "integrity": "sha512-8dG4H5ujfvFiqDmVu9fQ5bOHUC15JMjMY/Zumv26oOvvVJjM67KF8koCWIabKQ1GJIa9r2mMZscBq/TbdOcmNA=="
+ }
+ }
+ },
+ "@graphql-tools/graphql-file-loader": {
+ "version": "6.0.10",
+ "resolved": "https://registry.npmjs.org/@graphql-tools/graphql-file-loader/-/graphql-file-loader-6.0.10.tgz",
+ "integrity": "sha512-SL0KBUkFaZNldTvImlV1OhsL7EjROgoodC5OijjVyDubemAIWp1tjKZmQGCdmc/iJZXDx8vWR1tXi7REatHB2w==",
+ "requires": {
+ "@graphql-tools/import": "6.0.10",
+ "@graphql-tools/utils": "6.0.10",
+ "fs-extra": "9.0.1",
+ "tslib": "~2.0.0"
+ },
+ "dependencies": {
+ "fs-extra": {
+ "version": "9.0.1",
+ "resolved": "https://registry.npmjs.org/fs-extra/-/fs-extra-9.0.1.tgz",
+ "integrity": "sha512-h2iAoN838FqAFJY2/qVpzFXy+EBxfVE220PalAqQLDVsFOHLJrZvut5puAbCdNv6WJk+B8ihI+k0c7JK5erwqQ==",
+ "requires": {
+ "at-least-node": "^1.0.0",
+ "graceful-fs": "^4.2.0",
+ "jsonfile": "^6.0.1",
+ "universalify": "^1.0.0"
+ }
+ },
+ "graceful-fs": {
+ "version": "4.2.4",
+ "resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.4.tgz",
+ "integrity": "sha512-WjKPNJF79dtJAVniUlGGWHYGz2jWxT6VhN/4m1NdkbZ2nOsEF+cI1Edgql5zCRhs/VsQYRvrXctxktVXZUkixw=="
+ },
+ "jsonfile": {
+ "version": "6.0.1",
+ "resolved": "https://registry.npmjs.org/jsonfile/-/jsonfile-6.0.1.tgz",
+ "integrity": "sha512-jR2b5v7d2vIOust+w3wtFKZIfpC2pnRmFAhAC/BuweZFQR8qZzxH1OyrQ10HmdVYiXWkYUqPVsz91cG7EL2FBg==",
+ "requires": {
+ "graceful-fs": "^4.1.6",
+ "universalify": "^1.0.0"
+ }
+ },
+ "tslib": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.0.0.tgz",
+ "integrity": "sha512-lTqkx847PI7xEDYJntxZH89L2/aXInsyF2luSafe/+0fHOMjlBNXdH6th7f70qxLDhul7KZK0zC8V5ZIyHl0/g=="
+ },
+ "universalify": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/universalify/-/universalify-1.0.0.tgz",
+ "integrity": "sha512-rb6X1W158d7pRQBg5gkR8uPaSfiids68LTJQYOtEUhoJUWBdaQHsuT/EUduxXYxcrt4r5PJ4fuHW1MHT6p0qug=="
+ }
+ }
+ },
+ "@graphql-tools/graphql-tag-pluck": {
+ "version": "6.0.10",
+ "resolved": "https://registry.npmjs.org/@graphql-tools/graphql-tag-pluck/-/graphql-tag-pluck-6.0.10.tgz",
+ "integrity": "sha512-HNWg9kKexWZT3jM5NelEHGrJvVnNFL1FgF+YUWEIrB9/3MK6QB28cWoB+v7CzzLIOr2hn/UHBeCMvz6EmnxWLA==",
+ "requires": {
+ "@babel/parser": "7.10.2",
+ "@babel/traverse": "7.10.1",
+ "@babel/types": "7.10.2",
+ "@graphql-tools/utils": "6.0.10",
+ "vue-template-compiler": "^2.6.11"
+ },
+ "dependencies": {
+ "@babel/code-frame": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.10.1.tgz",
+ "integrity": "sha512-IGhtTmpjGbYzcEDOw7DcQtbQSXcG9ftmAXtWTu9V936vDye4xjjekktFAtgZsWpzTj/X01jocB46mTywm/4SZw==",
+ "requires": {
+ "@babel/highlight": "^7.10.1"
+ }
+ },
+ "@babel/generator": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.10.2.tgz",
+ "integrity": "sha512-AxfBNHNu99DTMvlUPlt1h2+Hn7knPpH5ayJ8OqDWSeLld+Fi2AYBTC/IejWDM9Edcii4UzZRCsbUt0WlSDsDsA==",
+ "requires": {
+ "@babel/types": "^7.10.2",
+ "jsesc": "^2.5.1",
+ "lodash": "^4.17.13",
+ "source-map": "^0.5.0"
+ }
+ },
+ "@babel/helper-function-name": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-function-name/-/helper-function-name-7.10.1.tgz",
+ "integrity": "sha512-fcpumwhs3YyZ/ttd5Rz0xn0TpIwVkN7X0V38B9TWNfVF42KEkhkAAuPCQ3oXmtTRtiPJrmZ0TrfS0GKF0eMaRQ==",
+ "requires": {
+ "@babel/helper-get-function-arity": "^7.10.1",
+ "@babel/template": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-get-function-arity": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-get-function-arity/-/helper-get-function-arity-7.10.1.tgz",
+ "integrity": "sha512-F5qdXkYGOQUb0hpRaPoetF9AnsXknKjWMZ+wmsIRsp5ge5sFh4c3h1eH2pRTTuy9KKAA2+TTYomGXAtEL2fQEw==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-split-export-declaration": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-split-export-declaration/-/helper-split-export-declaration-7.10.1.tgz",
+ "integrity": "sha512-UQ1LVBPrYdbchNhLwj6fetj46BcFwfS4NllJo/1aJsT+1dLTEnXJL0qHqtY7gPzF8S2fXBJamf1biAXV3X077g==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/highlight": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/highlight/-/highlight-7.10.1.tgz",
+ "integrity": "sha512-8rMof+gVP8mxYZApLF/JgNDAkdKa+aJt3ZYxF8z6+j/hpeXL7iMsKCPHa2jNMHu/qqBwzQF4OHNoYi8dMA/rYg==",
+ "requires": {
+ "@babel/helper-validator-identifier": "^7.10.1",
+ "chalk": "^2.0.0",
+ "js-tokens": "^4.0.0"
+ }
+ },
+ "@babel/parser": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.10.2.tgz",
+ "integrity": "sha512-PApSXlNMJyB4JiGVhCOlzKIif+TKFTvu0aQAhnTvfP/z3vVSN6ZypH5bfUNwFXXjRQtUEBNFd2PtmCmG2Py3qQ=="
+ },
+ "@babel/template": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/template/-/template-7.10.1.tgz",
+ "integrity": "sha512-OQDg6SqvFSsc9A0ej6SKINWrpJiNonRIniYondK2ViKhB06i3c0s+76XUft71iqBEe9S1OKsHwPAjfHnuvnCig==",
+ "requires": {
+ "@babel/code-frame": "^7.10.1",
+ "@babel/parser": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/traverse": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.10.1.tgz",
+ "integrity": "sha512-C/cTuXeKt85K+p08jN6vMDz8vSV0vZcI0wmQ36o6mjbuo++kPMdpOYw23W2XH04dbRt9/nMEfA4W3eR21CD+TQ==",
+ "requires": {
+ "@babel/code-frame": "^7.10.1",
+ "@babel/generator": "^7.10.1",
+ "@babel/helper-function-name": "^7.10.1",
+ "@babel/helper-split-export-declaration": "^7.10.1",
+ "@babel/parser": "^7.10.1",
+ "@babel/types": "^7.10.1",
+ "debug": "^4.1.0",
+ "globals": "^11.1.0",
+ "lodash": "^4.17.13"
+ }
+ },
+ "@babel/types": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.10.2.tgz",
+ "integrity": "sha512-AD3AwWBSz0AWF0AkCN9VPiWrvldXq+/e3cHa4J89vo4ymjz1XwrBFFVZmkJTsQIPNk+ZVomPSXUJqq8yyjZsng==",
+ "requires": {
+ "@babel/helper-validator-identifier": "^7.10.1",
+ "lodash": "^4.17.13",
+ "to-fast-properties": "^2.0.0"
+ }
+ },
+ "debug": {
+ "version": "4.1.1",
+ "resolved": "https://registry.npmjs.org/debug/-/debug-4.1.1.tgz",
+ "integrity": "sha512-pYAIzeRo8J6KPEaJ0VWOh5Pzkbw/RetuzehGM7QRRX5he4fPHx2rdKMB256ehJCkX+XRQm16eZLqLNS8RSZXZw==",
+ "requires": {
+ "ms": "^2.1.1"
+ }
+ },
+ "lodash": {
+ "version": "4.17.15",
+ "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz",
+ "integrity": "sha512-8xOcRHvCjnocdS5cpwXQXVzmmh5e5+saE2QGoeQmbKmRS6J3VQppPOIt0MnmE+4xlZoumy0GPG0D0MVIQbNA1A=="
+ }
+ }
+ },
+ "@graphql-tools/import": {
+ "version": "6.0.10",
+ "resolved": "https://registry.npmjs.org/@graphql-tools/import/-/import-6.0.10.tgz",
+ "integrity": "sha512-nrQR7pQkxm9Zmx6VjtffGeLvz+YgPm+ZN9h/AP/dlRjYJSev7LFlzDwAvk4TyFX4qbAY7RjoZ74qn2ezw1Y0Hw==",
+ "requires": {
+ "fs-extra": "9.0.1",
+ "resolve-from": "5.0.0"
+ },
+ "dependencies": {
+ "fs-extra": {
+ "version": "9.0.1",
+ "resolved": "https://registry.npmjs.org/fs-extra/-/fs-extra-9.0.1.tgz",
+ "integrity": "sha512-h2iAoN838FqAFJY2/qVpzFXy+EBxfVE220PalAqQLDVsFOHLJrZvut5puAbCdNv6WJk+B8ihI+k0c7JK5erwqQ==",
+ "requires": {
+ "at-least-node": "^1.0.0",
+ "graceful-fs": "^4.2.0",
+ "jsonfile": "^6.0.1",
+ "universalify": "^1.0.0"
+ }
+ },
+ "graceful-fs": {
+ "version": "4.2.4",
+ "resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.4.tgz",
+ "integrity": "sha512-WjKPNJF79dtJAVniUlGGWHYGz2jWxT6VhN/4m1NdkbZ2nOsEF+cI1Edgql5zCRhs/VsQYRvrXctxktVXZUkixw=="
+ },
+ "jsonfile": {
+ "version": "6.0.1",
+ "resolved": "https://registry.npmjs.org/jsonfile/-/jsonfile-6.0.1.tgz",
+ "integrity": "sha512-jR2b5v7d2vIOust+w3wtFKZIfpC2pnRmFAhAC/BuweZFQR8qZzxH1OyrQ10HmdVYiXWkYUqPVsz91cG7EL2FBg==",
+ "requires": {
+ "graceful-fs": "^4.1.6",
+ "universalify": "^1.0.0"
+ }
+ },
+ "resolve-from": {
+ "version": "5.0.0",
+ "resolved": "https://registry.npmjs.org/resolve-from/-/resolve-from-5.0.0.tgz",
+ "integrity": "sha512-qYg9KP24dD5qka9J47d0aVky0N+b4fTU89LN9iDnjB5waksiC49rvMB0PrUJQGoTmH50XPiqOvAjDfaijGxYZw=="
+ },
+ "universalify": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/universalify/-/universalify-1.0.0.tgz",
+ "integrity": "sha512-rb6X1W158d7pRQBg5gkR8uPaSfiids68LTJQYOtEUhoJUWBdaQHsuT/EUduxXYxcrt4r5PJ4fuHW1MHT6p0qug=="
+ }
+ }
+ },
+ "@graphql-tools/json-file-loader": {
+ "version": "6.0.10",
+ "resolved": "https://registry.npmjs.org/@graphql-tools/json-file-loader/-/json-file-loader-6.0.10.tgz",
+ "integrity": "sha512-BcLPQzG71AT91b7hvmjwqZpFWx/w6/HR7zqSFIuorLuL+E1q9Bs1RCIDSsAgrkX4MN6732ZUeoXnGmtcgukpkw==",
+ "requires": {
+ "@graphql-tools/utils": "6.0.10",
+ "fs-extra": "9.0.1",
+ "tslib": "~2.0.0"
+ },
+ "dependencies": {
+ "fs-extra": {
+ "version": "9.0.1",
+ "resolved": "https://registry.npmjs.org/fs-extra/-/fs-extra-9.0.1.tgz",
+ "integrity": "sha512-h2iAoN838FqAFJY2/qVpzFXy+EBxfVE220PalAqQLDVsFOHLJrZvut5puAbCdNv6WJk+B8ihI+k0c7JK5erwqQ==",
+ "requires": {
+ "at-least-node": "^1.0.0",
+ "graceful-fs": "^4.2.0",
+ "jsonfile": "^6.0.1",
+ "universalify": "^1.0.0"
+ }
+ },
+ "graceful-fs": {
+ "version": "4.2.4",
+ "resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.4.tgz",
+ "integrity": "sha512-WjKPNJF79dtJAVniUlGGWHYGz2jWxT6VhN/4m1NdkbZ2nOsEF+cI1Edgql5zCRhs/VsQYRvrXctxktVXZUkixw=="
+ },
+ "jsonfile": {
+ "version": "6.0.1",
+ "resolved": "https://registry.npmjs.org/jsonfile/-/jsonfile-6.0.1.tgz",
+ "integrity": "sha512-jR2b5v7d2vIOust+w3wtFKZIfpC2pnRmFAhAC/BuweZFQR8qZzxH1OyrQ10HmdVYiXWkYUqPVsz91cG7EL2FBg==",
+ "requires": {
+ "graceful-fs": "^4.1.6",
+ "universalify": "^1.0.0"
+ }
+ },
+ "tslib": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.0.0.tgz",
+ "integrity": "sha512-lTqkx847PI7xEDYJntxZH89L2/aXInsyF2luSafe/+0fHOMjlBNXdH6th7f70qxLDhul7KZK0zC8V5ZIyHl0/g=="
+ },
+ "universalify": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/universalify/-/universalify-1.0.0.tgz",
+ "integrity": "sha512-rb6X1W158d7pRQBg5gkR8uPaSfiids68LTJQYOtEUhoJUWBdaQHsuT/EUduxXYxcrt4r5PJ4fuHW1MHT6p0qug=="
+ }
+ }
+ },
+ "@graphql-tools/links": {
+ "version": "6.0.10",
+ "resolved": "https://registry.npmjs.org/@graphql-tools/links/-/links-6.0.10.tgz",
+ "integrity": "sha512-eLNq4E3zJZy2L94fI3eVOoTttlI+Atb+THlnSK0dPFrFpIC9Jm1C8G6kG0FvTVJ9TzPTo6TlFjTqJO40sJFhcQ==",
+ "requires": {
+ "@graphql-tools/utils": "6.0.10",
+ "apollo-link": "1.2.14",
+ "apollo-upload-client": "13.0.0",
+ "cross-fetch": "3.0.4",
+ "form-data": "3.0.0",
+ "tslib": "~2.0.0"
+ },
+ "dependencies": {
+ "combined-stream": {
+ "version": "1.0.8",
+ "resolved": "https://registry.npmjs.org/combined-stream/-/combined-stream-1.0.8.tgz",
+ "integrity": "sha512-FQN4MRfuJeHf7cBbBMJFXhKSDq+2kAArBlmRBvcvFE5BB1HZKXtSFASDhdlz9zOYwxh8lDdnvmMOe/+5cdoEdg==",
+ "requires": {
+ "delayed-stream": "~1.0.0"
+ }
+ },
+ "cross-fetch": {
+ "version": "3.0.4",
+ "resolved": "https://registry.npmjs.org/cross-fetch/-/cross-fetch-3.0.4.tgz",
+ "integrity": "sha512-MSHgpjQqgbT/94D4CyADeNoYh52zMkCX4pcJvPP5WqPsLFMKjr2TCMg381ox5qI0ii2dPwaLx/00477knXqXVw==",
+ "requires": {
+ "node-fetch": "2.6.0",
+ "whatwg-fetch": "3.0.0"
+ }
+ },
+ "form-data": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/form-data/-/form-data-3.0.0.tgz",
+ "integrity": "sha512-CKMFDglpbMi6PyN+brwB9Q/GOw0eAnsrEZDgcsH5Krhz5Od/haKHAX0NmQfha2zPPz0JpWzA7GJHGSnvCRLWsg==",
+ "requires": {
+ "asynckit": "^0.4.0",
+ "combined-stream": "^1.0.8",
+ "mime-types": "^2.1.12"
+ }
+ },
+ "node-fetch": {
+ "version": "2.6.0",
+ "resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-2.6.0.tgz",
+ "integrity": "sha512-8dG4H5ujfvFiqDmVu9fQ5bOHUC15JMjMY/Zumv26oOvvVJjM67KF8koCWIabKQ1GJIa9r2mMZscBq/TbdOcmNA=="
+ },
+ "tslib": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.0.0.tgz",
+ "integrity": "sha512-lTqkx847PI7xEDYJntxZH89L2/aXInsyF2luSafe/+0fHOMjlBNXdH6th7f70qxLDhul7KZK0zC8V5ZIyHl0/g=="
+ }
+ }
+ },
+ "@graphql-tools/load": {
+ "version": "6.0.10",
+ "resolved": "https://registry.npmjs.org/@graphql-tools/load/-/load-6.0.10.tgz",
+ "integrity": "sha512-/Q07DuSvhRTu7iYr+iZDXuXLjQJ/0uZEadjC4uKthD4gX6x4bvV49GLdqka+J1zq02C5U5mAOdDT7+lHIrEBFg==",
+ "requires": {
+ "@graphql-tools/merge": "6.0.10",
+ "@graphql-tools/utils": "6.0.10",
+ "globby": "11.0.1",
+ "import-from": "3.0.0",
+ "is-glob": "4.0.1",
+ "p-limit": "3.0.1",
+ "tslib": "~2.0.0",
+ "unixify": "1.0.0",
+ "valid-url": "1.0.9"
+ },
+ "dependencies": {
+ "@nodelib/fs.stat": {
+ "version": "2.0.3",
+ "resolved": "https://registry.npmjs.org/@nodelib/fs.stat/-/fs.stat-2.0.3.tgz",
+ "integrity": "sha512-bQBFruR2TAwoevBEd/NWMoAAtNGzTRgdrqnYCc7dhzfoNvqPzLyqlEQnzZ3kVnNrSp25iyxE00/3h2fqGAGArA=="
+ },
+ "array-union": {
+ "version": "2.1.0",
+ "resolved": "https://registry.npmjs.org/array-union/-/array-union-2.1.0.tgz",
+ "integrity": "sha512-HGyxoOTYUyCM6stUe6EJgnd4EoewAI7zMdfqO+kGjnlZmBDz/cR5pf8r/cR4Wq60sL/p0IkcjUEEPwS3GFrIyw=="
+ },
+ "braces": {
+ "version": "3.0.2",
+ "resolved": "https://registry.npmjs.org/braces/-/braces-3.0.2.tgz",
+ "integrity": "sha512-b8um+L1RzM3WDSzvhm6gIz1yfTbBt6YTlcEKAvsmqCZZFw46z626lVj9j1yEPW33H5H+lBQpZMP1k8l+78Ha0A==",
+ "requires": {
+ "fill-range": "^7.0.1"
+ }
+ },
+ "dir-glob": {
+ "version": "3.0.1",
+ "resolved": "https://registry.npmjs.org/dir-glob/-/dir-glob-3.0.1.tgz",
+ "integrity": "sha512-WkrWp9GR4KXfKGYzOLmTuGVi1UWFfws377n9cc55/tb6DuqyF6pcQ5AbiHEshaDpY9v6oaSr2XCDidGmMwdzIA==",
+ "requires": {
+ "path-type": "^4.0.0"
+ }
+ },
+ "fast-glob": {
+ "version": "3.2.4",
+ "resolved": "https://registry.npmjs.org/fast-glob/-/fast-glob-3.2.4.tgz",
+ "integrity": "sha512-kr/Oo6PX51265qeuCYsyGypiO5uJFgBS0jksyG7FUeCyQzNwYnzrNIMR1NXfkZXsMYXYLRAHgISHBz8gQcxKHQ==",
+ "requires": {
+ "@nodelib/fs.stat": "^2.0.2",
+ "@nodelib/fs.walk": "^1.2.3",
+ "glob-parent": "^5.1.0",
+ "merge2": "^1.3.0",
+ "micromatch": "^4.0.2",
+ "picomatch": "^2.2.1"
+ }
+ },
+ "fill-range": {
+ "version": "7.0.1",
+ "resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.0.1.tgz",
+ "integrity": "sha512-qOo9F+dMUmC2Lcb4BbVvnKJxTPjCm+RRpe4gDuGrzkL7mEVl/djYSu2OdQ2Pa302N4oqkSg9ir6jaLWJ2USVpQ==",
+ "requires": {
+ "to-regex-range": "^5.0.1"
+ }
+ },
+ "glob-parent": {
+ "version": "5.1.1",
+ "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz",
+ "integrity": "sha512-FnI+VGOpnlGHWZxthPGR+QhR78fuiK0sNLkHQv+bL9fQi57lNNdquIbna/WrfROrolq8GK5Ek6BiMwqL/voRYQ==",
+ "requires": {
+ "is-glob": "^4.0.1"
+ }
+ },
+ "globby": {
+ "version": "11.0.1",
+ "resolved": "https://registry.npmjs.org/globby/-/globby-11.0.1.tgz",
+ "integrity": "sha512-iH9RmgwCmUJHi2z5o2l3eTtGBtXek1OYlHrbcxOYugyHLmAsZrPj43OtHThd62Buh/Vv6VyCBD2bdyWcGNQqoQ==",
+ "requires": {
+ "array-union": "^2.1.0",
+ "dir-glob": "^3.0.1",
+ "fast-glob": "^3.1.1",
+ "ignore": "^5.1.4",
+ "merge2": "^1.3.0",
+ "slash": "^3.0.0"
+ }
+ },
+ "ignore": {
+ "version": "5.1.8",
+ "resolved": "https://registry.npmjs.org/ignore/-/ignore-5.1.8.tgz",
+ "integrity": "sha512-BMpfD7PpiETpBl/A6S498BaIJ6Y/ABT93ETbby2fP00v4EbvPBXWEoaR1UBPKs3iR53pJY7EtZk5KACI57i1Uw=="
+ },
+ "is-glob": {
+ "version": "4.0.1",
+ "resolved": "https://registry.npmjs.org/is-glob/-/is-glob-4.0.1.tgz",
+ "integrity": "sha512-5G0tKtBTFImOqDnLB2hG6Bp2qcKEFduo4tZu9MT/H6NQv/ghhy30o55ufafxJ/LdH79LLs2Kfrn85TLKyA7BUg==",
+ "requires": {
+ "is-extglob": "^2.1.1"
+ }
+ },
+ "is-number": {
+ "version": "7.0.0",
+ "resolved": "https://registry.npmjs.org/is-number/-/is-number-7.0.0.tgz",
+ "integrity": "sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng=="
+ },
+ "merge2": {
+ "version": "1.4.1",
+ "resolved": "https://registry.npmjs.org/merge2/-/merge2-1.4.1.tgz",
+ "integrity": "sha512-8q7VEgMJW4J8tcfVPy8g09NcQwZdbwFEqhe/WZkoIzjn/3TGDwtOCYtXGxA3O8tPzpczCCDgv+P2P5y00ZJOOg=="
+ },
+ "micromatch": {
+ "version": "4.0.2",
+ "resolved": "https://registry.npmjs.org/micromatch/-/micromatch-4.0.2.tgz",
+ "integrity": "sha512-y7FpHSbMUMoyPbYUSzO6PaZ6FyRnQOpHuKwbo1G+Knck95XVU4QAiKdGEnj5wwoS7PlOgthX/09u5iFJ+aYf5Q==",
+ "requires": {
+ "braces": "^3.0.1",
+ "picomatch": "^2.0.5"
+ }
+ },
+ "p-limit": {
+ "version": "3.0.1",
+ "resolved": "https://registry.npmjs.org/p-limit/-/p-limit-3.0.1.tgz",
+ "integrity": "sha512-mw/p92EyOzl2MhauKodw54Rx5ZK4624rNfgNaBguFZkHzyUG9WsDzFF5/yQVEJinbJDdP4jEfMN+uBquiGnaLg==",
+ "requires": {
+ "p-try": "^2.0.0"
+ }
+ },
+ "path-type": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/path-type/-/path-type-4.0.0.tgz",
+ "integrity": "sha512-gDKb8aZMDeD/tZWs9P6+q0J9Mwkdl6xMV8TjnGP3qJVJ06bdMgkbBlLU8IdfOsIsFz2BW1rNVT3XuNEl8zPAvw=="
+ },
+ "slash": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/slash/-/slash-3.0.0.tgz",
+ "integrity": "sha512-g9Q1haeby36OSStwb4ntCGGGaKsaVSjQ68fBxoQcutl5fS1vuY18H3wSt3jFyFtrkx+Kz0V1G85A4MyAdDMi2Q=="
+ },
+ "to-regex-range": {
+ "version": "5.0.1",
+ "resolved": "https://registry.npmjs.org/to-regex-range/-/to-regex-range-5.0.1.tgz",
+ "integrity": "sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ==",
+ "requires": {
+ "is-number": "^7.0.0"
+ }
+ },
+ "tslib": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.0.0.tgz",
+ "integrity": "sha512-lTqkx847PI7xEDYJntxZH89L2/aXInsyF2luSafe/+0fHOMjlBNXdH6th7f70qxLDhul7KZK0zC8V5ZIyHl0/g=="
+ }
+ }
+ },
+ "@graphql-tools/load-files": {
+ "version": "6.0.10",
+ "resolved": "https://registry.npmjs.org/@graphql-tools/load-files/-/load-files-6.0.10.tgz",
+ "integrity": "sha512-hB6os27RVAy01SI05krvmTP13xsIjzx151DlTaL5HnskzeDpjBWjYlfiKMhdWpx5ORVniyPtFSYzQxJEIr5/NA==",
+ "requires": {
+ "fs-extra": "9.0.1",
+ "globby": "11.0.1",
+ "unixify": "1.0.0"
+ },
+ "dependencies": {
+ "@nodelib/fs.stat": {
+ "version": "2.0.3",
+ "resolved": "https://registry.npmjs.org/@nodelib/fs.stat/-/fs.stat-2.0.3.tgz",
+ "integrity": "sha512-bQBFruR2TAwoevBEd/NWMoAAtNGzTRgdrqnYCc7dhzfoNvqPzLyqlEQnzZ3kVnNrSp25iyxE00/3h2fqGAGArA=="
+ },
+ "array-union": {
+ "version": "2.1.0",
+ "resolved": "https://registry.npmjs.org/array-union/-/array-union-2.1.0.tgz",
+ "integrity": "sha512-HGyxoOTYUyCM6stUe6EJgnd4EoewAI7zMdfqO+kGjnlZmBDz/cR5pf8r/cR4Wq60sL/p0IkcjUEEPwS3GFrIyw=="
+ },
+ "braces": {
+ "version": "3.0.2",
+ "resolved": "https://registry.npmjs.org/braces/-/braces-3.0.2.tgz",
+ "integrity": "sha512-b8um+L1RzM3WDSzvhm6gIz1yfTbBt6YTlcEKAvsmqCZZFw46z626lVj9j1yEPW33H5H+lBQpZMP1k8l+78Ha0A==",
+ "requires": {
+ "fill-range": "^7.0.1"
+ }
+ },
+ "dir-glob": {
+ "version": "3.0.1",
+ "resolved": "https://registry.npmjs.org/dir-glob/-/dir-glob-3.0.1.tgz",
+ "integrity": "sha512-WkrWp9GR4KXfKGYzOLmTuGVi1UWFfws377n9cc55/tb6DuqyF6pcQ5AbiHEshaDpY9v6oaSr2XCDidGmMwdzIA==",
+ "requires": {
+ "path-type": "^4.0.0"
+ }
+ },
+ "fast-glob": {
+ "version": "3.2.4",
+ "resolved": "https://registry.npmjs.org/fast-glob/-/fast-glob-3.2.4.tgz",
+ "integrity": "sha512-kr/Oo6PX51265qeuCYsyGypiO5uJFgBS0jksyG7FUeCyQzNwYnzrNIMR1NXfkZXsMYXYLRAHgISHBz8gQcxKHQ==",
+ "requires": {
+ "@nodelib/fs.stat": "^2.0.2",
+ "@nodelib/fs.walk": "^1.2.3",
+ "glob-parent": "^5.1.0",
+ "merge2": "^1.3.0",
+ "micromatch": "^4.0.2",
+ "picomatch": "^2.2.1"
+ }
+ },
+ "fill-range": {
+ "version": "7.0.1",
+ "resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.0.1.tgz",
+ "integrity": "sha512-qOo9F+dMUmC2Lcb4BbVvnKJxTPjCm+RRpe4gDuGrzkL7mEVl/djYSu2OdQ2Pa302N4oqkSg9ir6jaLWJ2USVpQ==",
+ "requires": {
+ "to-regex-range": "^5.0.1"
+ }
+ },
+ "fs-extra": {
+ "version": "9.0.1",
+ "resolved": "https://registry.npmjs.org/fs-extra/-/fs-extra-9.0.1.tgz",
+ "integrity": "sha512-h2iAoN838FqAFJY2/qVpzFXy+EBxfVE220PalAqQLDVsFOHLJrZvut5puAbCdNv6WJk+B8ihI+k0c7JK5erwqQ==",
+ "requires": {
+ "at-least-node": "^1.0.0",
+ "graceful-fs": "^4.2.0",
+ "jsonfile": "^6.0.1",
+ "universalify": "^1.0.0"
+ }
+ },
+ "glob-parent": {
+ "version": "5.1.1",
+ "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz",
+ "integrity": "sha512-FnI+VGOpnlGHWZxthPGR+QhR78fuiK0sNLkHQv+bL9fQi57lNNdquIbna/WrfROrolq8GK5Ek6BiMwqL/voRYQ==",
+ "requires": {
+ "is-glob": "^4.0.1"
+ }
+ },
+ "globby": {
+ "version": "11.0.1",
+ "resolved": "https://registry.npmjs.org/globby/-/globby-11.0.1.tgz",
+ "integrity": "sha512-iH9RmgwCmUJHi2z5o2l3eTtGBtXek1OYlHrbcxOYugyHLmAsZrPj43OtHThd62Buh/Vv6VyCBD2bdyWcGNQqoQ==",
+ "requires": {
+ "array-union": "^2.1.0",
+ "dir-glob": "^3.0.1",
+ "fast-glob": "^3.1.1",
+ "ignore": "^5.1.4",
+ "merge2": "^1.3.0",
+ "slash": "^3.0.0"
+ }
+ },
+ "graceful-fs": {
+ "version": "4.2.4",
+ "resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.4.tgz",
+ "integrity": "sha512-WjKPNJF79dtJAVniUlGGWHYGz2jWxT6VhN/4m1NdkbZ2nOsEF+cI1Edgql5zCRhs/VsQYRvrXctxktVXZUkixw=="
+ },
+ "ignore": {
+ "version": "5.1.8",
+ "resolved": "https://registry.npmjs.org/ignore/-/ignore-5.1.8.tgz",
+ "integrity": "sha512-BMpfD7PpiETpBl/A6S498BaIJ6Y/ABT93ETbby2fP00v4EbvPBXWEoaR1UBPKs3iR53pJY7EtZk5KACI57i1Uw=="
+ },
+ "is-glob": {
+ "version": "4.0.1",
+ "resolved": "https://registry.npmjs.org/is-glob/-/is-glob-4.0.1.tgz",
+ "integrity": "sha512-5G0tKtBTFImOqDnLB2hG6Bp2qcKEFduo4tZu9MT/H6NQv/ghhy30o55ufafxJ/LdH79LLs2Kfrn85TLKyA7BUg==",
+ "requires": {
+ "is-extglob": "^2.1.1"
+ }
+ },
+ "is-number": {
+ "version": "7.0.0",
+ "resolved": "https://registry.npmjs.org/is-number/-/is-number-7.0.0.tgz",
+ "integrity": "sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng=="
+ },
+ "jsonfile": {
+ "version": "6.0.1",
+ "resolved": "https://registry.npmjs.org/jsonfile/-/jsonfile-6.0.1.tgz",
+ "integrity": "sha512-jR2b5v7d2vIOust+w3wtFKZIfpC2pnRmFAhAC/BuweZFQR8qZzxH1OyrQ10HmdVYiXWkYUqPVsz91cG7EL2FBg==",
+ "requires": {
+ "graceful-fs": "^4.1.6",
+ "universalify": "^1.0.0"
+ }
+ },
+ "merge2": {
+ "version": "1.4.1",
+ "resolved": "https://registry.npmjs.org/merge2/-/merge2-1.4.1.tgz",
+ "integrity": "sha512-8q7VEgMJW4J8tcfVPy8g09NcQwZdbwFEqhe/WZkoIzjn/3TGDwtOCYtXGxA3O8tPzpczCCDgv+P2P5y00ZJOOg=="
+ },
+ "micromatch": {
+ "version": "4.0.2",
+ "resolved": "https://registry.npmjs.org/micromatch/-/micromatch-4.0.2.tgz",
+ "integrity": "sha512-y7FpHSbMUMoyPbYUSzO6PaZ6FyRnQOpHuKwbo1G+Knck95XVU4QAiKdGEnj5wwoS7PlOgthX/09u5iFJ+aYf5Q==",
+ "requires": {
+ "braces": "^3.0.1",
+ "picomatch": "^2.0.5"
+ }
+ },
+ "path-type": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/path-type/-/path-type-4.0.0.tgz",
+ "integrity": "sha512-gDKb8aZMDeD/tZWs9P6+q0J9Mwkdl6xMV8TjnGP3qJVJ06bdMgkbBlLU8IdfOsIsFz2BW1rNVT3XuNEl8zPAvw=="
+ },
+ "slash": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/slash/-/slash-3.0.0.tgz",
+ "integrity": "sha512-g9Q1haeby36OSStwb4ntCGGGaKsaVSjQ68fBxoQcutl5fS1vuY18H3wSt3jFyFtrkx+Kz0V1G85A4MyAdDMi2Q=="
+ },
+ "to-regex-range": {
+ "version": "5.0.1",
+ "resolved": "https://registry.npmjs.org/to-regex-range/-/to-regex-range-5.0.1.tgz",
+ "integrity": "sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ==",
+ "requires": {
+ "is-number": "^7.0.0"
+ }
+ },
+ "universalify": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/universalify/-/universalify-1.0.0.tgz",
+ "integrity": "sha512-rb6X1W158d7pRQBg5gkR8uPaSfiids68LTJQYOtEUhoJUWBdaQHsuT/EUduxXYxcrt4r5PJ4fuHW1MHT6p0qug=="
+ }
+ }
+ },
+ "@graphql-tools/merge": {
+ "version": "6.0.10",
+ "resolved": "https://registry.npmjs.org/@graphql-tools/merge/-/merge-6.0.10.tgz",
+ "integrity": "sha512-fnz9h5vdA8LXc9TvmhnRXykwFZWZ4FdBeo4g3R1KqcQCp65ByCMcBuCJtYf4VxPrcgTLGlWtVOHrItCi0kdioA==",
+ "requires": {
+ "@graphql-tools/schema": "6.0.10",
+ "@graphql-tools/utils": "6.0.10",
+ "tslib": "~2.0.0"
+ },
+ "dependencies": {
+ "tslib": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.0.0.tgz",
+ "integrity": "sha512-lTqkx847PI7xEDYJntxZH89L2/aXInsyF2luSafe/+0fHOMjlBNXdH6th7f70qxLDhul7KZK0zC8V5ZIyHl0/g=="
+ }
+ }
+ },
+ "@graphql-tools/mock": {
+ "version": "6.0.10",
+ "resolved": "https://registry.npmjs.org/@graphql-tools/mock/-/mock-6.0.10.tgz",
+ "integrity": "sha512-RA+FExqDeSSgYHrLxSxF2El+0aG2Bw/KRfCHeJ54x9wrnA7gn/bC98K1EoGHolZ1b/YVUFkukaj3nooBzt9p0w==",
+ "requires": {
+ "@graphql-tools/schema": "6.0.10",
+ "@graphql-tools/utils": "6.0.10",
+ "tslib": "~2.0.0"
+ },
+ "dependencies": {
+ "tslib": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.0.0.tgz",
+ "integrity": "sha512-lTqkx847PI7xEDYJntxZH89L2/aXInsyF2luSafe/+0fHOMjlBNXdH6th7f70qxLDhul7KZK0zC8V5ZIyHl0/g=="
+ }
+ }
+ },
+ "@graphql-tools/module-loader": {
+ "version": "6.0.10",
+ "resolved": "https://registry.npmjs.org/@graphql-tools/module-loader/-/module-loader-6.0.10.tgz",
+ "integrity": "sha512-dLZ+JB7F/8OKYhi+1SucHaNGN0UBEWgahUaPUI0L2zuGZakuvUvLMSOQTZ5rF8oipU9p2b0ZzzpDjesABru7Ag==",
+ "requires": {
+ "@graphql-tools/utils": "6.0.10",
+ "tslib": "~2.0.0"
+ },
+ "dependencies": {
+ "tslib": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.0.0.tgz",
+ "integrity": "sha512-lTqkx847PI7xEDYJntxZH89L2/aXInsyF2luSafe/+0fHOMjlBNXdH6th7f70qxLDhul7KZK0zC8V5ZIyHl0/g=="
+ }
+ }
+ },
+ "@graphql-tools/relay-operation-optimizer": {
+ "version": "6.0.10",
+ "resolved": "https://registry.npmjs.org/@graphql-tools/relay-operation-optimizer/-/relay-operation-optimizer-6.0.10.tgz",
+ "integrity": "sha512-u8GavLgpIoOLDfFSvpAmpfp56mfN1YiqDpY+goGcOQudtR4IULqr6Mj5KPstKUMMnMtuFQ0OMcYRvWxN3jP4lQ==",
+ "requires": {
+ "@graphql-tools/utils": "6.0.10",
+ "relay-compiler": "9.1.0"
+ },
+ "dependencies": {
+ "@babel/generator": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.10.2.tgz",
+ "integrity": "sha512-AxfBNHNu99DTMvlUPlt1h2+Hn7knPpH5ayJ8OqDWSeLld+Fi2AYBTC/IejWDM9Edcii4UzZRCsbUt0WlSDsDsA==",
+ "requires": {
+ "@babel/types": "^7.10.2",
+ "jsesc": "^2.5.1",
+ "lodash": "^4.17.13",
+ "source-map": "^0.5.0"
+ },
+ "dependencies": {
+ "@babel/types": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.10.2.tgz",
+ "integrity": "sha512-AD3AwWBSz0AWF0AkCN9VPiWrvldXq+/e3cHa4J89vo4ymjz1XwrBFFVZmkJTsQIPNk+ZVomPSXUJqq8yyjZsng==",
+ "requires": {
+ "@babel/helper-validator-identifier": "^7.10.1",
+ "lodash": "^4.17.13",
+ "to-fast-properties": "^2.0.0"
+ }
+ }
+ }
+ },
+ "ansi-regex": {
+ "version": "4.1.0",
+ "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz",
+ "integrity": "sha512-1apePfXM1UOSqw0o9IiFAovVz9M5S1Dg+4TrDwfMewQ6p/rmMueb7tWZjQ1rx4Loy1ArBggoqGpfqqdI4rondg=="
+ },
+ "camelcase": {
+ "version": "5.3.1",
+ "resolved": "https://registry.npmjs.org/camelcase/-/camelcase-5.3.1.tgz",
+ "integrity": "sha512-L28STB170nwWS63UjtlEOE3dldQApaJXZkOI1uMFfzf3rRuPegHaHesyee+YxQ+W6SvRDQV6UrdOdRiR153wJg=="
+ },
+ "cliui": {
+ "version": "5.0.0",
+ "resolved": "https://registry.npmjs.org/cliui/-/cliui-5.0.0.tgz",
+ "integrity": "sha512-PYeGSEmmHM6zvoef2w8TPzlrnNpXIjTipYK780YswmIP9vjxmd6Y2a3CB2Ks6/AU8NHjZugXvo8w3oWM2qnwXA==",
+ "requires": {
+ "string-width": "^3.1.0",
+ "strip-ansi": "^5.2.0",
+ "wrap-ansi": "^5.1.0"
+ }
+ },
+ "fbjs": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/fbjs/-/fbjs-1.0.0.tgz",
+ "integrity": "sha512-MUgcMEJaFhCaF1QtWGnmq9ZDRAzECTCRAF7O6UZIlAlkTs1SasiX9aP0Iw7wfD2mJ7wDTNfg2w7u5fSCwJk1OA==",
+ "requires": {
+ "core-js": "^2.4.1",
+ "fbjs-css-vars": "^1.0.0",
+ "isomorphic-fetch": "^2.1.1",
+ "loose-envify": "^1.0.0",
+ "object-assign": "^4.1.0",
+ "promise": "^7.1.1",
+ "setimmediate": "^1.0.5",
+ "ua-parser-js": "^0.7.18"
+ }
+ },
+ "find-up": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/find-up/-/find-up-3.0.0.tgz",
+ "integrity": "sha512-1yD6RmLI1XBfxugvORwlck6f75tYL+iR0jqwsOrOxMZyGYqUuDhJ0l4AXdO1iX/FTs9cBAMEk1gWSEx1kSbylg==",
+ "requires": {
+ "locate-path": "^3.0.0"
+ }
+ },
+ "get-caller-file": {
+ "version": "2.0.5",
+ "resolved": "https://registry.npmjs.org/get-caller-file/-/get-caller-file-2.0.5.tgz",
+ "integrity": "sha512-DyFP3BM/3YHTQOCUL/w0OZHR0lpKeGrxotcHWcqNEdnltqFwXVfhEBQ94eIo34AfQpo0rGki4cyIiftY06h2Fg=="
+ },
+ "locate-path": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/locate-path/-/locate-path-3.0.0.tgz",
+ "integrity": "sha512-7AO748wWnIhNqAuaty2ZWHkQHRSNfPVIsPIfwEOWO22AmaoVrWavlOcMR5nzTLNYvp36X220/maaRsrec1G65A==",
+ "requires": {
+ "p-locate": "^3.0.0",
+ "path-exists": "^3.0.0"
+ }
+ },
+ "lodash": {
+ "version": "4.17.15",
+ "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz",
+ "integrity": "sha512-8xOcRHvCjnocdS5cpwXQXVzmmh5e5+saE2QGoeQmbKmRS6J3VQppPOIt0MnmE+4xlZoumy0GPG0D0MVIQbNA1A=="
+ },
+ "p-limit": {
+ "version": "2.3.0",
+ "resolved": "https://registry.npmjs.org/p-limit/-/p-limit-2.3.0.tgz",
+ "integrity": "sha512-//88mFWSJx8lxCzwdAABTJL2MyWB12+eIY7MDL2SqLmAkeKU9qxRvWuSyTjm3FUmpBEMuFfckAIqEaVGUDxb6w==",
+ "requires": {
+ "p-try": "^2.0.0"
+ }
+ },
+ "p-locate": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/p-locate/-/p-locate-3.0.0.tgz",
+ "integrity": "sha512-x+12w/To+4GFfgJhBEpiDcLozRJGegY+Ei7/z0tSLkMmxGZNybVMSfWj9aJn8Z5Fc7dBUNJOOVgPv2H7IwulSQ==",
+ "requires": {
+ "p-limit": "^2.0.0"
+ }
+ },
+ "relay-compiler": {
+ "version": "9.1.0",
+ "resolved": "https://registry.npmjs.org/relay-compiler/-/relay-compiler-9.1.0.tgz",
+ "integrity": "sha512-jsJx0Ux5RoxM+JFm3M3xl7UfZAJ0kUTY/r6jqOpcYgVI3GLJthvNI4IoziFRlWbhizEzGFbpkdshZcu9IObJYA==",
+ "requires": {
+ "@babel/core": "^7.0.0",
+ "@babel/generator": "^7.5.0",
+ "@babel/parser": "^7.0.0",
+ "@babel/runtime": "^7.0.0",
+ "@babel/traverse": "^7.0.0",
+ "@babel/types": "^7.0.0",
+ "babel-preset-fbjs": "^3.3.0",
+ "chalk": "^2.4.1",
+ "fast-glob": "^2.2.2",
+ "fb-watchman": "^2.0.0",
+ "fbjs": "^1.0.0",
+ "immutable": "~3.7.6",
+ "nullthrows": "^1.1.1",
+ "relay-runtime": "9.1.0",
+ "signedsource": "^1.0.0",
+ "yargs": "^14.2.0"
+ }
+ },
+ "relay-runtime": {
+ "version": "9.1.0",
+ "resolved": "https://registry.npmjs.org/relay-runtime/-/relay-runtime-9.1.0.tgz",
+ "integrity": "sha512-6FE5YlZpR/b3R/HzGly85V+c4MdtLJhFY/outQARgxXonomrwqEik0Cr34LnPK4DmGS36cMLUliqhCs/DZyPVw==",
+ "requires": {
+ "@babel/runtime": "^7.0.0",
+ "fbjs": "^1.0.0"
+ }
+ },
+ "require-main-filename": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/require-main-filename/-/require-main-filename-2.0.0.tgz",
+ "integrity": "sha512-NKN5kMDylKuldxYLSUfrbo5Tuzh4hd+2E8NPPX02mZtn1VuREQToYe/ZdlJy+J3uCpfaiGF05e7B8W0iXbQHmg=="
+ },
+ "string-width": {
+ "version": "3.1.0",
+ "resolved": "https://registry.npmjs.org/string-width/-/string-width-3.1.0.tgz",
+ "integrity": "sha512-vafcv6KjVZKSgz06oM/H6GDBrAtz8vdhQakGjFIvNrHA6y3HCF1CInLy+QLq8dTJPQ1b+KDUqDFctkdRW44e1w==",
+ "requires": {
+ "emoji-regex": "^7.0.1",
+ "is-fullwidth-code-point": "^2.0.0",
+ "strip-ansi": "^5.1.0"
+ }
+ },
+ "strip-ansi": {
+ "version": "5.2.0",
+ "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-5.2.0.tgz",
+ "integrity": "sha512-DuRs1gKbBqsMKIZlrffwlug8MHkcnpjs5VPmL1PAh+mA30U0DTotfDZ0d2UUsXpPmPmMMJ6W773MaA3J+lbiWA==",
+ "requires": {
+ "ansi-regex": "^4.1.0"
+ }
+ },
+ "wrap-ansi": {
+ "version": "5.1.0",
+ "resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-5.1.0.tgz",
+ "integrity": "sha512-QC1/iN/2/RPVJ5jYK8BGttj5z83LmSKmvbvrXPNCLZSEb32KKVDJDl/MOt2N01qU2H/FkzEa9PKto1BqDjtd7Q==",
+ "requires": {
+ "ansi-styles": "^3.2.0",
+ "string-width": "^3.0.0",
+ "strip-ansi": "^5.0.0"
+ }
+ },
+ "y18n": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz",
+ "integrity": "sha512-r9S/ZyXu/Xu9q1tYlpsLIsa3EeLXXk0VwlxqTcFRfg9EhMW+17kbt9G0NrgCmhGb5vT2hyhJZLfDGx+7+5Uj/w=="
+ },
+ "yargs": {
+ "version": "14.2.3",
+ "resolved": "https://registry.npmjs.org/yargs/-/yargs-14.2.3.tgz",
+ "integrity": "sha512-ZbotRWhF+lkjijC/VhmOT9wSgyBQ7+zr13+YLkhfsSiTriYsMzkTUFP18pFhWwBeMa5gUc1MzbhrO6/VB7c9Xg==",
+ "requires": {
+ "cliui": "^5.0.0",
+ "decamelize": "^1.2.0",
+ "find-up": "^3.0.0",
+ "get-caller-file": "^2.0.1",
+ "require-directory": "^2.1.1",
+ "require-main-filename": "^2.0.0",
+ "set-blocking": "^2.0.0",
+ "string-width": "^3.0.0",
+ "which-module": "^2.0.0",
+ "y18n": "^4.0.0",
+ "yargs-parser": "^15.0.1"
+ }
+ },
+ "yargs-parser": {
+ "version": "15.0.1",
+ "resolved": "https://registry.npmjs.org/yargs-parser/-/yargs-parser-15.0.1.tgz",
+ "integrity": "sha512-0OAMV2mAZQrs3FkNpDQcBk1x5HXb8X4twADss4S0Iuk+2dGnLOE/fRHrsYm542GduMveyA77OF4wrNJuanRCWw==",
+ "requires": {
+ "camelcase": "^5.0.0",
+ "decamelize": "^1.2.0"
+ }
+ }
+ }
+ },
+ "@graphql-tools/resolvers-composition": {
+ "version": "6.0.10",
+ "resolved": "https://registry.npmjs.org/@graphql-tools/resolvers-composition/-/resolvers-composition-6.0.10.tgz",
+ "integrity": "sha512-MNeQOxwrCBaBxnvPdbg8LTd6RhPV2q8MfHvQ8nMKrO68ab3G3bJZOL/kXu70Ajy+jPPJmgPVbevBOMJ7wkCwUQ==",
+ "requires": {
+ "@graphql-tools/utils": "6.0.10",
+ "lodash": "4.17.15"
+ },
+ "dependencies": {
+ "lodash": {
+ "version": "4.17.15",
+ "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz",
+ "integrity": "sha512-8xOcRHvCjnocdS5cpwXQXVzmmh5e5+saE2QGoeQmbKmRS6J3VQppPOIt0MnmE+4xlZoumy0GPG0D0MVIQbNA1A=="
+ }
+ }
+ },
+ "@graphql-tools/schema": {
+ "version": "6.0.10",
+ "resolved": "https://registry.npmjs.org/@graphql-tools/schema/-/schema-6.0.10.tgz",
+ "integrity": "sha512-g8iy36dgf/Cpyz7bHSE2axkE8PdM5VYdS2tntmytLvPaN3Krb8IxBpZBJhmiICwyAAkruQE7OjDfYr8vP8jY4A==",
+ "requires": {
+ "@graphql-tools/utils": "6.0.10",
+ "tslib": "~2.0.0"
+ },
+ "dependencies": {
+ "tslib": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.0.0.tgz",
+ "integrity": "sha512-lTqkx847PI7xEDYJntxZH89L2/aXInsyF2luSafe/+0fHOMjlBNXdH6th7f70qxLDhul7KZK0zC8V5ZIyHl0/g=="
+ }
+ }
+ },
+ "@graphql-tools/stitch": {
+ "version": "6.0.10",
+ "resolved": "https://registry.npmjs.org/@graphql-tools/stitch/-/stitch-6.0.10.tgz",
+ "integrity": "sha512-45xgk/ggXEkj6Ys4Hf1sV0ngzzvPhcGvA23/NG6E5LSkt4GM0TjtRpqwWMMoKJps9+1JX9/RSbHBAchC+zZj3w==",
+ "requires": {
+ "@graphql-tools/delegate": "6.0.10",
+ "@graphql-tools/merge": "6.0.10",
+ "@graphql-tools/schema": "6.0.10",
+ "@graphql-tools/utils": "6.0.10",
+ "@graphql-tools/wrap": "6.0.10",
+ "tslib": "~2.0.0"
+ },
+ "dependencies": {
+ "tslib": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.0.0.tgz",
+ "integrity": "sha512-lTqkx847PI7xEDYJntxZH89L2/aXInsyF2luSafe/+0fHOMjlBNXdH6th7f70qxLDhul7KZK0zC8V5ZIyHl0/g=="
+ }
+ }
+ },
+ "@graphql-tools/url-loader": {
+ "version": "6.0.10",
+ "resolved": "https://registry.npmjs.org/@graphql-tools/url-loader/-/url-loader-6.0.10.tgz",
+ "integrity": "sha512-iaXtj/Rthf1omhFmaA7V+Np3lyEeBiFI6SZ89Pb84NLkgI51ENCaboecFrAW0hwNqAcqfSdCTMv09n/Fx2vXGg==",
+ "requires": {
+ "@graphql-tools/delegate": "6.0.10",
+ "@graphql-tools/utils": "6.0.10",
+ "@graphql-tools/wrap": "6.0.10",
+ "@types/websocket": "1.0.0",
+ "cross-fetch": "3.0.4",
+ "subscriptions-transport-ws": "0.9.16",
+ "tslib": "~2.0.0",
+ "valid-url": "1.0.9",
+ "websocket": "1.0.31"
+ },
+ "dependencies": {
+ "cross-fetch": {
+ "version": "3.0.4",
+ "resolved": "https://registry.npmjs.org/cross-fetch/-/cross-fetch-3.0.4.tgz",
+ "integrity": "sha512-MSHgpjQqgbT/94D4CyADeNoYh52zMkCX4pcJvPP5WqPsLFMKjr2TCMg381ox5qI0ii2dPwaLx/00477knXqXVw==",
+ "requires": {
+ "node-fetch": "2.6.0",
+ "whatwg-fetch": "3.0.0"
+ }
+ },
+ "node-fetch": {
+ "version": "2.6.0",
+ "resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-2.6.0.tgz",
+ "integrity": "sha512-8dG4H5ujfvFiqDmVu9fQ5bOHUC15JMjMY/Zumv26oOvvVJjM67KF8koCWIabKQ1GJIa9r2mMZscBq/TbdOcmNA=="
+ },
+ "tslib": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.0.0.tgz",
+ "integrity": "sha512-lTqkx847PI7xEDYJntxZH89L2/aXInsyF2luSafe/+0fHOMjlBNXdH6th7f70qxLDhul7KZK0zC8V5ZIyHl0/g=="
+ }
+ }
+ },
+ "@graphql-tools/utils": {
+ "version": "6.0.10",
+ "resolved": "https://registry.npmjs.org/@graphql-tools/utils/-/utils-6.0.10.tgz",
+ "integrity": "sha512-1s3vBnYUIDLBGEaV1VF3lv1Xq54lT8Oz7tNNypv7K7cv3auKX7idRtjP8RM6hKpGod46JNZgu3NNOshMUEyEyA==",
+ "requires": {
+ "aggregate-error": "3.0.1",
+ "camel-case": "4.1.1"
+ },
+ "dependencies": {
+ "camel-case": {
+ "version": "4.1.1",
+ "resolved": "https://registry.npmjs.org/camel-case/-/camel-case-4.1.1.tgz",
+ "integrity": "sha512-7fa2WcG4fYFkclIvEmxBbTvmibwF2/agfEBc6q3lOpVu0A13ltLsA+Hr/8Hp6kp5f+G7hKi6t8lys6XxP+1K6Q==",
+ "requires": {
+ "pascal-case": "^3.1.1",
+ "tslib": "^1.10.0"
+ }
+ },
+ "lower-case": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/lower-case/-/lower-case-2.0.1.tgz",
+ "integrity": "sha512-LiWgfDLLb1dwbFQZsSglpRj+1ctGnayXz3Uv0/WO8n558JycT5fg6zkNcnW0G68Nn0aEldTFeEfmjCfmqry/rQ==",
+ "requires": {
+ "tslib": "^1.10.0"
+ }
+ },
+ "no-case": {
+ "version": "3.0.3",
+ "resolved": "https://registry.npmjs.org/no-case/-/no-case-3.0.3.tgz",
+ "integrity": "sha512-ehY/mVQCf9BL0gKfsJBvFJen+1V//U+0HQMPrWct40ixE4jnv0bfvxDbWtAHL9EcaPEOJHVVYKoQn1TlZUB8Tw==",
+ "requires": {
+ "lower-case": "^2.0.1",
+ "tslib": "^1.10.0"
+ }
+ },
+ "pascal-case": {
+ "version": "3.1.1",
+ "resolved": "https://registry.npmjs.org/pascal-case/-/pascal-case-3.1.1.tgz",
+ "integrity": "sha512-XIeHKqIrsquVTQL2crjq3NfJUxmdLasn3TYOU0VBM+UX2a6ztAWBlJQBePLGY7VHW8+2dRadeIPK5+KImwTxQA==",
+ "requires": {
+ "no-case": "^3.0.3",
+ "tslib": "^1.10.0"
+ }
+ }
+ }
+ },
+ "@graphql-tools/wrap": {
+ "version": "6.0.10",
+ "resolved": "https://registry.npmjs.org/@graphql-tools/wrap/-/wrap-6.0.10.tgz",
+ "integrity": "sha512-260f+eks3pSltokwueFJXQSwf7QdsjccphXINBIa0hwPyF8mPanyJlqd5GxkkG+C2K/oOXm8qaxc6pp7lpaomQ==",
+ "requires": {
+ "@graphql-tools/delegate": "6.0.10",
+ "@graphql-tools/schema": "6.0.10",
+ "@graphql-tools/utils": "6.0.10",
+ "aggregate-error": "3.0.1",
+ "tslib": "~2.0.0"
+ },
+ "dependencies": {
+ "tslib": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.0.0.tgz",
+ "integrity": "sha512-lTqkx847PI7xEDYJntxZH89L2/aXInsyF2luSafe/+0fHOMjlBNXdH6th7f70qxLDhul7KZK0zC8V5ZIyHl0/g=="
+ }
+ }
+ },
+ "@hapi/address": {
+ "version": "2.1.4",
+ "resolved": "https://registry.npmjs.org/@hapi/address/-/address-2.1.4.tgz",
+ "integrity": "sha512-QD1PhQk+s31P1ixsX0H0Suoupp3VMXzIVMSwobR3F3MSUO2YCV0B7xqLcUw/Bh8yuvd3LhpyqLQWTNcRmp6IdQ=="
+ },
+ "@hapi/bourne": {
+ "version": "1.3.2",
+ "resolved": "https://registry.npmjs.org/@hapi/bourne/-/bourne-1.3.2.tgz",
+ "integrity": "sha512-1dVNHT76Uu5N3eJNTYcvxee+jzX4Z9lfciqRRHCU27ihbUcYi+iSc2iml5Ke1LXe1SyJCLA0+14Jh4tXJgOppA=="
+ },
+ "@hapi/hoek": {
+ "version": "6.2.4",
+ "resolved": "https://registry.npmjs.org/@hapi/hoek/-/hoek-6.2.4.tgz",
+ "integrity": "sha512-HOJ20Kc93DkDVvjwHyHawPwPkX44sIrbXazAUDiUXaY2R9JwQGo2PhFfnQtdrsIe4igjG2fPgMra7NYw7qhy0A=="
+ },
+ "@hapi/joi": {
+ "version": "14.5.0",
+ "resolved": "https://registry.npmjs.org/@hapi/joi/-/joi-14.5.0.tgz",
+ "integrity": "sha512-q8oNlQWQpN14j6lMkaQqVdG8km+Ni32ZeuJ+sSOB+5a5VsIY6KVpPvdoMU/XKyAS7P7qP0TgM9fFGC2d8dB6hA==",
+ "requires": {
+ "@hapi/hoek": "6.x.x",
+ "@hapi/marker": "1.x.x",
+ "@hapi/topo": "3.x.x",
+ "isemail": "3.x.x"
+ }
+ },
+ "@hapi/marker": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/@hapi/marker/-/marker-1.0.0.tgz",
+ "integrity": "sha512-JOfdekTXnJexfE8PyhZFyHvHjt81rBFSAbTIRAhF2vv/2Y1JzoKsGqxH/GpZJoF7aEfYok8JVcAHmSz1gkBieA=="
+ },
+ "@hapi/topo": {
+ "version": "3.1.6",
+ "resolved": "https://registry.npmjs.org/@hapi/topo/-/topo-3.1.6.tgz",
+ "integrity": "sha512-tAag0jEcjwH+P2quUfipd7liWCNX2F8NvYjQp2wtInsZxnMlypdw0FtAOLxtvvkO+GSRRbmNi8m/5y42PQJYCQ==",
+ "requires": {
+ "@hapi/hoek": "^8.3.0"
+ },
+ "dependencies": {
+ "@hapi/hoek": {
+ "version": "8.5.1",
+ "resolved": "https://registry.npmjs.org/@hapi/hoek/-/hoek-8.5.1.tgz",
+ "integrity": "sha512-yN7kbciD87WzLGc5539Tn0sApjyiGHAJgKvG9W8C7O+6c7qmoQMfVs0W4bX17eqz6C78QJqqFrtgdK5EWf6Qow=="
+ }
+ }
+ },
+ "@jest/types": {
+ "version": "25.5.0",
+ "resolved": "https://registry.npmjs.org/@jest/types/-/types-25.5.0.tgz",
+ "integrity": "sha512-OXD0RgQ86Tu3MazKo8bnrkDRaDXXMGUqd+kTtLtK1Zb7CRzQcaSRPPPV37SvYTdevXEBVxe0HXylEjs8ibkmCw==",
+ "requires": {
+ "@types/istanbul-lib-coverage": "^2.0.0",
+ "@types/istanbul-reports": "^1.1.1",
+ "@types/yargs": "^15.0.0",
+ "chalk": "^3.0.0"
+ },
+ "dependencies": {
+ "ansi-styles": {
+ "version": "4.2.1",
+ "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.2.1.tgz",
+ "integrity": "sha512-9VGjrMsG1vePxcSweQsN20KY/c4zN0h9fLjqAbwbPfahM3t+NL+M9HC8xeXG2I8pX5NoamTGNuomEUFI7fcUjA==",
+ "requires": {
+ "@types/color-name": "^1.1.1",
+ "color-convert": "^2.0.1"
+ }
+ },
+ "chalk": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/chalk/-/chalk-3.0.0.tgz",
+ "integrity": "sha512-4D3B6Wf41KOYRFdszmDqMCGq5VV/uMAB273JILmO+3jAlh8X4qDtdtgCR3fxtbLEMzSx22QdhnDcJvu2u1fVwg==",
+ "requires": {
+ "ansi-styles": "^4.1.0",
+ "supports-color": "^7.1.0"
+ }
+ },
+ "color-convert": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz",
+ "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==",
+ "requires": {
+ "color-name": "~1.1.4"
+ }
+ },
+ "color-name": {
+ "version": "1.1.4",
+ "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz",
+ "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA=="
+ },
+ "has-flag": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz",
+ "integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ=="
+ },
+ "supports-color": {
+ "version": "7.1.0",
+ "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.1.0.tgz",
+ "integrity": "sha512-oRSIpR8pxT1Wr2FquTNnGet79b3BWljqOuoW/h4oBhxJ/HUbX5nX6JSruTkvXDCFMwDPvsaTTbvMLKZWSy0R5g==",
+ "requires": {
+ "has-flag": "^4.0.0"
+ }
+ }
+ }
+ },
"@jupyterlab/apputils": {
"version": "0.19.1",
"resolved": "https://registry.npmjs.org/@jupyterlab/apputils/-/apputils-0.19.1.tgz",
@@ -1273,11 +3223,591 @@
}
}
},
+ "@mdx-js/react": {
+ "version": "1.6.5",
+ "resolved": "https://registry.npmjs.org/@mdx-js/react/-/react-1.6.5.tgz",
+ "integrity": "sha512-y1Yu9baw3KokFrs7g5RxHpJNSU4e1zk/5bAJX94yVATglG5HyAL0lYMySU8YzebXNE+fJJMCx9CuiQHy2ezoew=="
+ },
+ "@mdx-js/runtime": {
+ "version": "1.6.5",
+ "resolved": "https://registry.npmjs.org/@mdx-js/runtime/-/runtime-1.6.5.tgz",
+ "integrity": "sha512-JxyskuQaQwJBAjdClY7Un7wD+RWLkzPuox0Tfs72c4OQ5it1TzxCeQTL3Zv6ZsWzNCUgVt9o+h31+pbvYtsFsA==",
+ "requires": {
+ "@mdx-js/mdx": "^1.6.5",
+ "@mdx-js/react": "^1.6.5",
+ "buble-jsx-only": "^0.19.8"
+ },
+ "dependencies": {
+ "@babel/code-frame": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.10.1.tgz",
+ "integrity": "sha512-IGhtTmpjGbYzcEDOw7DcQtbQSXcG9ftmAXtWTu9V936vDye4xjjekktFAtgZsWpzTj/X01jocB46mTywm/4SZw==",
+ "requires": {
+ "@babel/highlight": "^7.10.1"
+ }
+ },
+ "@babel/core": {
+ "version": "7.9.6",
+ "resolved": "https://registry.npmjs.org/@babel/core/-/core-7.9.6.tgz",
+ "integrity": "sha512-nD3deLvbsApbHAHttzIssYqgb883yU/d9roe4RZymBCDaZryMJDbptVpEpeQuRh4BJ+SYI8le9YGxKvFEvl1Wg==",
+ "requires": {
+ "@babel/code-frame": "^7.8.3",
+ "@babel/generator": "^7.9.6",
+ "@babel/helper-module-transforms": "^7.9.0",
+ "@babel/helpers": "^7.9.6",
+ "@babel/parser": "^7.9.6",
+ "@babel/template": "^7.8.6",
+ "@babel/traverse": "^7.9.6",
+ "@babel/types": "^7.9.6",
+ "convert-source-map": "^1.7.0",
+ "debug": "^4.1.0",
+ "gensync": "^1.0.0-beta.1",
+ "json5": "^2.1.2",
+ "lodash": "^4.17.13",
+ "resolve": "^1.3.2",
+ "semver": "^5.4.1",
+ "source-map": "^0.5.0"
+ }
+ },
+ "@babel/generator": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.10.2.tgz",
+ "integrity": "sha512-AxfBNHNu99DTMvlUPlt1h2+Hn7knPpH5ayJ8OqDWSeLld+Fi2AYBTC/IejWDM9Edcii4UzZRCsbUt0WlSDsDsA==",
+ "requires": {
+ "@babel/types": "^7.10.2",
+ "jsesc": "^2.5.1",
+ "lodash": "^4.17.13",
+ "source-map": "^0.5.0"
+ }
+ },
+ "@babel/helper-function-name": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-function-name/-/helper-function-name-7.10.1.tgz",
+ "integrity": "sha512-fcpumwhs3YyZ/ttd5Rz0xn0TpIwVkN7X0V38B9TWNfVF42KEkhkAAuPCQ3oXmtTRtiPJrmZ0TrfS0GKF0eMaRQ==",
+ "requires": {
+ "@babel/helper-get-function-arity": "^7.10.1",
+ "@babel/template": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-get-function-arity": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-get-function-arity/-/helper-get-function-arity-7.10.1.tgz",
+ "integrity": "sha512-F5qdXkYGOQUb0hpRaPoetF9AnsXknKjWMZ+wmsIRsp5ge5sFh4c3h1eH2pRTTuy9KKAA2+TTYomGXAtEL2fQEw==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-member-expression-to-functions": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-member-expression-to-functions/-/helper-member-expression-to-functions-7.10.1.tgz",
+ "integrity": "sha512-u7XLXeM2n50gb6PWJ9hoO5oO7JFPaZtrh35t8RqKLT1jFKj9IWeD1zrcrYp1q1qiZTdEarfDWfTIP8nGsu0h5g==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-module-imports": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-module-imports/-/helper-module-imports-7.10.1.tgz",
+ "integrity": "sha512-SFxgwYmZ3HZPyZwJRiVNLRHWuW2OgE5k2nrVs6D9Iv4PPnXVffuEHy83Sfx/l4SqF+5kyJXjAyUmrG7tNm+qVg==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-module-transforms": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-module-transforms/-/helper-module-transforms-7.10.1.tgz",
+ "integrity": "sha512-RLHRCAzyJe7Q7sF4oy2cB+kRnU4wDZY/H2xJFGof+M+SJEGhZsb+GFj5j1AD8NiSaVBJ+Pf0/WObiXu/zxWpFg==",
+ "requires": {
+ "@babel/helper-module-imports": "^7.10.1",
+ "@babel/helper-replace-supers": "^7.10.1",
+ "@babel/helper-simple-access": "^7.10.1",
+ "@babel/helper-split-export-declaration": "^7.10.1",
+ "@babel/template": "^7.10.1",
+ "@babel/types": "^7.10.1",
+ "lodash": "^4.17.13"
+ }
+ },
+ "@babel/helper-optimise-call-expression": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-optimise-call-expression/-/helper-optimise-call-expression-7.10.1.tgz",
+ "integrity": "sha512-a0DjNS1prnBsoKx83dP2falChcs7p3i8VMzdrSbfLhuQra/2ENC4sbri34dz/rWmDADsmF1q5GbfaXydh0Jbjg==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-plugin-utils": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.1.tgz",
+ "integrity": "sha512-fvoGeXt0bJc7VMWZGCAEBEMo/HAjW2mP8apF5eXK0wSqwLAVHAISCWRoLMBMUs2kqeaG77jltVqu4Hn8Egl3nA=="
+ },
+ "@babel/helper-replace-supers": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-replace-supers/-/helper-replace-supers-7.10.1.tgz",
+ "integrity": "sha512-SOwJzEfpuQwInzzQJGjGaiG578UYmyi2Xw668klPWV5n07B73S0a9btjLk/52Mlcxa+5AdIYqws1KyXRfMoB7A==",
+ "requires": {
+ "@babel/helper-member-expression-to-functions": "^7.10.1",
+ "@babel/helper-optimise-call-expression": "^7.10.1",
+ "@babel/traverse": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-simple-access": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-simple-access/-/helper-simple-access-7.10.1.tgz",
+ "integrity": "sha512-VSWpWzRzn9VtgMJBIWTZ+GP107kZdQ4YplJlCmIrjoLVSi/0upixezHCDG8kpPVTBJpKfxTH01wDhh+jS2zKbw==",
+ "requires": {
+ "@babel/template": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-split-export-declaration": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-split-export-declaration/-/helper-split-export-declaration-7.10.1.tgz",
+ "integrity": "sha512-UQ1LVBPrYdbchNhLwj6fetj46BcFwfS4NllJo/1aJsT+1dLTEnXJL0qHqtY7gPzF8S2fXBJamf1biAXV3X077g==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helpers": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helpers/-/helpers-7.10.1.tgz",
+ "integrity": "sha512-muQNHF+IdU6wGgkaJyhhEmI54MOZBKsFfsXFhboz1ybwJ1Kl7IHlbm2a++4jwrmY5UYsgitt5lfqo1wMFcHmyw==",
+ "requires": {
+ "@babel/template": "^7.10.1",
+ "@babel/traverse": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/highlight": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/highlight/-/highlight-7.10.1.tgz",
+ "integrity": "sha512-8rMof+gVP8mxYZApLF/JgNDAkdKa+aJt3ZYxF8z6+j/hpeXL7iMsKCPHa2jNMHu/qqBwzQF4OHNoYi8dMA/rYg==",
+ "requires": {
+ "@babel/helper-validator-identifier": "^7.10.1",
+ "chalk": "^2.0.0",
+ "js-tokens": "^4.0.0"
+ }
+ },
+ "@babel/parser": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.10.2.tgz",
+ "integrity": "sha512-PApSXlNMJyB4JiGVhCOlzKIif+TKFTvu0aQAhnTvfP/z3vVSN6ZypH5bfUNwFXXjRQtUEBNFd2PtmCmG2Py3qQ=="
+ },
+ "@babel/plugin-syntax-jsx": {
+ "version": "7.8.3",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-jsx/-/plugin-syntax-jsx-7.8.3.tgz",
+ "integrity": "sha512-WxdW9xyLgBdefoo0Ynn3MRSkhe5tFVxxKNVdnZSh318WrG2e2jH+E9wd/++JsqcLJZPfz87njQJ8j2Upjm0M0A==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.8.3"
+ }
+ },
+ "@babel/plugin-syntax-object-rest-spread": {
+ "version": "7.8.3",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-object-rest-spread/-/plugin-syntax-object-rest-spread-7.8.3.tgz",
+ "integrity": "sha512-XoqMijGZb9y3y2XskN+P1wUGiVwWZ5JmoDRwx5+3GmEplNyVM2s2Dg8ILFQm8rWM48orGy5YpI5Bl8U1y7ydlA==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.8.0"
+ }
+ },
+ "@babel/template": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/template/-/template-7.10.1.tgz",
+ "integrity": "sha512-OQDg6SqvFSsc9A0ej6SKINWrpJiNonRIniYondK2ViKhB06i3c0s+76XUft71iqBEe9S1OKsHwPAjfHnuvnCig==",
+ "requires": {
+ "@babel/code-frame": "^7.10.1",
+ "@babel/parser": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/traverse": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.10.1.tgz",
+ "integrity": "sha512-C/cTuXeKt85K+p08jN6vMDz8vSV0vZcI0wmQ36o6mjbuo++kPMdpOYw23W2XH04dbRt9/nMEfA4W3eR21CD+TQ==",
+ "requires": {
+ "@babel/code-frame": "^7.10.1",
+ "@babel/generator": "^7.10.1",
+ "@babel/helper-function-name": "^7.10.1",
+ "@babel/helper-split-export-declaration": "^7.10.1",
+ "@babel/parser": "^7.10.1",
+ "@babel/types": "^7.10.1",
+ "debug": "^4.1.0",
+ "globals": "^11.1.0",
+ "lodash": "^4.17.13"
+ }
+ },
+ "@babel/types": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.10.2.tgz",
+ "integrity": "sha512-AD3AwWBSz0AWF0AkCN9VPiWrvldXq+/e3cHa4J89vo4ymjz1XwrBFFVZmkJTsQIPNk+ZVomPSXUJqq8yyjZsng==",
+ "requires": {
+ "@babel/helper-validator-identifier": "^7.10.1",
+ "lodash": "^4.17.13",
+ "to-fast-properties": "^2.0.0"
+ }
+ },
+ "@mdx-js/mdx": {
+ "version": "1.6.5",
+ "resolved": "https://registry.npmjs.org/@mdx-js/mdx/-/mdx-1.6.5.tgz",
+ "integrity": "sha512-DC13eeEM0Dv9OD+UVhyB69BlV29d2eoAmfiR/XdgNl4R7YmRNEPGRD3QvGUdRUDxYdJBHauMz5ZIV507cNXXaA==",
+ "requires": {
+ "@babel/core": "7.9.6",
+ "@babel/plugin-syntax-jsx": "7.8.3",
+ "@babel/plugin-syntax-object-rest-spread": "7.8.3",
+ "@mdx-js/util": "^1.6.5",
+ "babel-plugin-apply-mdx-type-prop": "^1.6.5",
+ "babel-plugin-extract-import-names": "^1.6.5",
+ "camelcase-css": "2.0.1",
+ "detab": "2.0.3",
+ "hast-util-raw": "5.0.2",
+ "lodash.uniq": "4.5.0",
+ "mdast-util-to-hast": "9.1.0",
+ "remark-footnotes": "1.0.0",
+ "remark-mdx": "^1.6.5",
+ "remark-parse": "8.0.2",
+ "remark-squeeze-paragraphs": "4.0.0",
+ "style-to-object": "0.3.0",
+ "unified": "9.0.0",
+ "unist-builder": "2.0.3",
+ "unist-util-visit": "2.0.2"
+ }
+ },
+ "@types/unist": {
+ "version": "2.0.3",
+ "resolved": "https://registry.npmjs.org/@types/unist/-/unist-2.0.3.tgz",
+ "integrity": "sha512-FvUupuM3rlRsRtCN+fDudtmytGO6iHJuuRKS1Ss0pG5z8oX0diNEw94UEL7hgDbpN94rgaK5R7sWm6RrSkZuAQ=="
+ },
+ "convert-source-map": {
+ "version": "1.7.0",
+ "resolved": "https://registry.npmjs.org/convert-source-map/-/convert-source-map-1.7.0.tgz",
+ "integrity": "sha512-4FJkXzKXEDB1snCFZlLP4gpC3JILicCpGbzG9f9G7tGqGCzETQ2hWPrcinA9oU4wtf2biUaEH5065UnMeR33oA==",
+ "requires": {
+ "safe-buffer": "~5.1.1"
+ }
+ },
+ "debug": {
+ "version": "4.1.1",
+ "resolved": "https://registry.npmjs.org/debug/-/debug-4.1.1.tgz",
+ "integrity": "sha512-pYAIzeRo8J6KPEaJ0VWOh5Pzkbw/RetuzehGM7QRRX5he4fPHx2rdKMB256ehJCkX+XRQm16eZLqLNS8RSZXZw==",
+ "requires": {
+ "ms": "^2.1.1"
+ }
+ },
+ "detab": {
+ "version": "2.0.3",
+ "resolved": "https://registry.npmjs.org/detab/-/detab-2.0.3.tgz",
+ "integrity": "sha512-Up8P0clUVwq0FnFjDclzZsy9PadzRn5FFxrr47tQQvMHqyiFYVbpH8oXDzWtF0Q7pYy3l+RPmtBl+BsFF6wH0A==",
+ "requires": {
+ "repeat-string": "^1.5.4"
+ }
+ },
+ "hast-to-hyperscript": {
+ "version": "7.0.4",
+ "resolved": "https://registry.npmjs.org/hast-to-hyperscript/-/hast-to-hyperscript-7.0.4.tgz",
+ "integrity": "sha512-vmwriQ2H0RPS9ho4Kkbf3n3lY436QKLq6VaGA1pzBh36hBi3tm1DO9bR+kaJIbpT10UqaANDkMjxvjVfr+cnOA==",
+ "requires": {
+ "comma-separated-tokens": "^1.0.0",
+ "property-information": "^5.3.0",
+ "space-separated-tokens": "^1.0.0",
+ "style-to-object": "^0.2.1",
+ "unist-util-is": "^3.0.0",
+ "web-namespaces": "^1.1.2"
+ },
+ "dependencies": {
+ "property-information": {
+ "version": "5.5.0",
+ "resolved": "https://registry.npmjs.org/property-information/-/property-information-5.5.0.tgz",
+ "integrity": "sha512-RgEbCx2HLa1chNgvChcx+rrCWD0ctBmGSE0M7lVm1yyv4UbvbrWoXp/BkVLZefzjrRBGW8/Js6uh/BnlHXFyjA==",
+ "requires": {
+ "xtend": "^4.0.0"
+ }
+ },
+ "style-to-object": {
+ "version": "0.2.3",
+ "resolved": "https://registry.npmjs.org/style-to-object/-/style-to-object-0.2.3.tgz",
+ "integrity": "sha512-1d/k4EY2N7jVLOqf2j04dTc37TPOv/hHxZmvpg8Pdh8UYydxeu/C1W1U4vD8alzf5V2Gt7rLsmkr4dxAlDm9ng==",
+ "requires": {
+ "inline-style-parser": "0.1.1"
+ }
+ }
+ }
+ },
+ "hast-util-raw": {
+ "version": "5.0.2",
+ "resolved": "https://registry.npmjs.org/hast-util-raw/-/hast-util-raw-5.0.2.tgz",
+ "integrity": "sha512-3ReYQcIHmzSgMq8UrDZHFL0oGlbuVGdLKs8s/Fe8BfHFAyZDrdv1fy/AGn+Fim8ZuvAHcJ61NQhVMtyfHviT/g==",
+ "requires": {
+ "hast-util-from-parse5": "^5.0.0",
+ "hast-util-to-parse5": "^5.0.0",
+ "html-void-elements": "^1.0.0",
+ "parse5": "^5.0.0",
+ "unist-util-position": "^3.0.0",
+ "web-namespaces": "^1.0.0",
+ "xtend": "^4.0.0",
+ "zwitch": "^1.0.0"
+ }
+ },
+ "hast-util-to-parse5": {
+ "version": "5.1.2",
+ "resolved": "https://registry.npmjs.org/hast-util-to-parse5/-/hast-util-to-parse5-5.1.2.tgz",
+ "integrity": "sha512-ZgYLJu9lYknMfsBY0rBV4TJn2xiwF1fXFFjbP6EE7S0s5mS8LIKBVWzhA1MeIs1SWW6GnnE4In6c3kPb+CWhog==",
+ "requires": {
+ "hast-to-hyperscript": "^7.0.0",
+ "property-information": "^5.0.0",
+ "web-namespaces": "^1.0.0",
+ "xtend": "^4.0.0",
+ "zwitch": "^1.0.0"
+ }
+ },
+ "is-buffer": {
+ "version": "2.0.4",
+ "resolved": "https://registry.npmjs.org/is-buffer/-/is-buffer-2.0.4.tgz",
+ "integrity": "sha512-Kq1rokWXOPXWuaMAqZiJW4XxsmD9zGx9q4aePabbn3qCRGedtH7Cm+zV8WETitMfu1wdh+Rvd6w5egwSngUX2A=="
+ },
+ "is-plain-obj": {
+ "version": "2.1.0",
+ "resolved": "https://registry.npmjs.org/is-plain-obj/-/is-plain-obj-2.1.0.tgz",
+ "integrity": "sha512-YWnfyRwxL/+SsrWYfOpUtz5b3YD+nyfkHvjbcanzk8zgyO4ASD67uVMRt8k5bM4lLMDnXfriRhOpemw+NfT1eA=="
+ },
+ "json5": {
+ "version": "2.1.3",
+ "resolved": "https://registry.npmjs.org/json5/-/json5-2.1.3.tgz",
+ "integrity": "sha512-KXPvOm8K9IJKFM0bmdn8QXh7udDh1g/giieX0NLCaMnb4hEiVFqnop2ImTXCc5e0/oHz3LTqmHGtExn5hfMkOA==",
+ "requires": {
+ "minimist": "^1.2.5"
+ }
+ },
+ "lodash": {
+ "version": "4.17.15",
+ "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz",
+ "integrity": "sha512-8xOcRHvCjnocdS5cpwXQXVzmmh5e5+saE2QGoeQmbKmRS6J3VQppPOIt0MnmE+4xlZoumy0GPG0D0MVIQbNA1A=="
+ },
+ "mdast-squeeze-paragraphs": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/mdast-squeeze-paragraphs/-/mdast-squeeze-paragraphs-4.0.0.tgz",
+ "integrity": "sha512-zxdPn69hkQ1rm4J+2Cs2j6wDEv7O17TfXTJ33tl/+JPIoEmtV9t2ZzBM5LPHE8QlHsmVD8t3vPKCyY3oH+H8MQ==",
+ "requires": {
+ "unist-util-remove": "^2.0.0"
+ }
+ },
+ "mdast-util-definitions": {
+ "version": "3.0.1",
+ "resolved": "https://registry.npmjs.org/mdast-util-definitions/-/mdast-util-definitions-3.0.1.tgz",
+ "integrity": "sha512-BAv2iUm/e6IK/b2/t+Fx69EL/AGcq/IG2S+HxHjDJGfLJtd6i9SZUS76aC9cig+IEucsqxKTR0ot3m933R3iuA==",
+ "requires": {
+ "unist-util-visit": "^2.0.0"
+ }
+ },
+ "mdast-util-to-hast": {
+ "version": "9.1.0",
+ "resolved": "https://registry.npmjs.org/mdast-util-to-hast/-/mdast-util-to-hast-9.1.0.tgz",
+ "integrity": "sha512-Akl2Vi9y9cSdr19/Dfu58PVwifPXuFt1IrHe7l+Crme1KvgUT+5z+cHLVcQVGCiNTZZcdqjnuv9vPkGsqWytWA==",
+ "requires": {
+ "@types/mdast": "^3.0.0",
+ "@types/unist": "^2.0.3",
+ "collapse-white-space": "^1.0.0",
+ "detab": "^2.0.0",
+ "mdast-util-definitions": "^3.0.0",
+ "mdurl": "^1.0.0",
+ "trim-lines": "^1.0.0",
+ "unist-builder": "^2.0.0",
+ "unist-util-generated": "^1.0.0",
+ "unist-util-position": "^3.0.0",
+ "unist-util-visit": "^2.0.0"
+ }
+ },
+ "minimist": {
+ "version": "1.2.5",
+ "resolved": "https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz",
+ "integrity": "sha512-FM9nNUYrRBAELZQT3xeZQ7fmMOBg6nWNmJKTcgsJeaLstP/UODVpGsr5OhXhhXg6f+qtJ8uiZ+PUxkDWcgIXLw=="
+ },
+ "parse-entities": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/parse-entities/-/parse-entities-2.0.0.tgz",
+ "integrity": "sha512-kkywGpCcRYhqQIchaWqZ875wzpS/bMKhz5HnN3p7wveJTkTtyAB/AlnS0f8DFSqYW1T82t6yEAkEcB+A1I3MbQ==",
+ "requires": {
+ "character-entities": "^1.0.0",
+ "character-entities-legacy": "^1.0.0",
+ "character-reference-invalid": "^1.0.0",
+ "is-alphanumerical": "^1.0.0",
+ "is-decimal": "^1.0.0",
+ "is-hexadecimal": "^1.0.0"
+ }
+ },
+ "parse5": {
+ "version": "5.1.1",
+ "resolved": "https://registry.npmjs.org/parse5/-/parse5-5.1.1.tgz",
+ "integrity": "sha512-ugq4DFI0Ptb+WWjAdOK16+u/nHfiIrcE+sh8kZMaM0WllQKLI9rOUq6c2b7cwPkXdzfQESqvoqK6ug7U/Yyzug=="
+ },
+ "remark-parse": {
+ "version": "8.0.2",
+ "resolved": "https://registry.npmjs.org/remark-parse/-/remark-parse-8.0.2.tgz",
+ "integrity": "sha512-eMI6kMRjsAGpMXXBAywJwiwAse+KNpmt+BK55Oofy4KvBZEqUDj6mWbGLJZrujoPIPPxDXzn3T9baRlpsm2jnQ==",
+ "requires": {
+ "ccount": "^1.0.0",
+ "collapse-white-space": "^1.0.2",
+ "is-alphabetical": "^1.0.0",
+ "is-decimal": "^1.0.0",
+ "is-whitespace-character": "^1.0.0",
+ "is-word-character": "^1.0.0",
+ "markdown-escapes": "^1.0.0",
+ "parse-entities": "^2.0.0",
+ "repeat-string": "^1.5.4",
+ "state-toggle": "^1.0.0",
+ "trim": "0.0.1",
+ "trim-trailing-lines": "^1.0.0",
+ "unherit": "^1.0.4",
+ "unist-util-remove-position": "^2.0.0",
+ "vfile-location": "^3.0.0",
+ "xtend": "^4.0.1"
+ }
+ },
+ "remark-squeeze-paragraphs": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/remark-squeeze-paragraphs/-/remark-squeeze-paragraphs-4.0.0.tgz",
+ "integrity": "sha512-8qRqmL9F4nuLPIgl92XUuxI3pFxize+F1H0e/W3llTk0UsjJaj01+RrirkMw7P21RKe4X6goQhYRSvNWX+70Rw==",
+ "requires": {
+ "mdast-squeeze-paragraphs": "^4.0.0"
+ }
+ },
+ "style-to-object": {
+ "version": "0.3.0",
+ "resolved": "https://registry.npmjs.org/style-to-object/-/style-to-object-0.3.0.tgz",
+ "integrity": "sha512-CzFnRRXhzWIdItT3OmF8SQfWyahHhjq3HwcMNCNLn+N7klOOqPjMeG/4JSu77D7ypZdGvSzvkrbyeTMizz2VrA==",
+ "requires": {
+ "inline-style-parser": "0.1.1"
+ }
+ },
+ "unified": {
+ "version": "9.0.0",
+ "resolved": "https://registry.npmjs.org/unified/-/unified-9.0.0.tgz",
+ "integrity": "sha512-ssFo33gljU3PdlWLjNp15Inqb77d6JnJSfyplGJPT/a+fNRNyCBeveBAYJdO5khKdF6WVHa/yYCC7Xl6BDwZUQ==",
+ "requires": {
+ "bail": "^1.0.0",
+ "extend": "^3.0.0",
+ "is-buffer": "^2.0.0",
+ "is-plain-obj": "^2.0.0",
+ "trough": "^1.0.0",
+ "vfile": "^4.0.0"
+ }
+ },
+ "unist-builder": {
+ "version": "2.0.3",
+ "resolved": "https://registry.npmjs.org/unist-builder/-/unist-builder-2.0.3.tgz",
+ "integrity": "sha512-f98yt5pnlMWlzP539tPc4grGMsFaQQlP/vM396b00jngsiINumNmsY8rkXjfoi1c6QaM8nQ3vaGDuoKWbe/1Uw=="
+ },
+ "unist-util-is": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/unist-util-is/-/unist-util-is-3.0.0.tgz",
+ "integrity": "sha512-sVZZX3+kspVNmLWBPAB6r+7D9ZgAFPNWm66f7YNb420RlQSbn+n8rG8dGZSkrER7ZIXGQYNm5pqC3v3HopH24A=="
+ },
+ "unist-util-remove": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/unist-util-remove/-/unist-util-remove-2.0.0.tgz",
+ "integrity": "sha512-HwwWyNHKkeg/eXRnE11IpzY8JT55JNM1YCwwU9YNCnfzk6s8GhPXrVBBZWiwLeATJbI7euvoGSzcy9M29UeW3g==",
+ "requires": {
+ "unist-util-is": "^4.0.0"
+ },
+ "dependencies": {
+ "unist-util-is": {
+ "version": "4.0.2",
+ "resolved": "https://registry.npmjs.org/unist-util-is/-/unist-util-is-4.0.2.tgz",
+ "integrity": "sha512-Ofx8uf6haexJwI1gxWMGg6I/dLnF2yE+KibhD3/diOqY2TinLcqHXCV6OI5gFVn3xQqDH+u0M625pfKwIwgBKQ=="
+ }
+ }
+ },
+ "unist-util-remove-position": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/unist-util-remove-position/-/unist-util-remove-position-2.0.1.tgz",
+ "integrity": "sha512-fDZsLYIe2uT+oGFnuZmy73K6ZxOPG/Qcm+w7jbEjaFcJgbQ6cqjs/eSPzXhsmGpAsWPkqZM9pYjww5QTn3LHMA==",
+ "requires": {
+ "unist-util-visit": "^2.0.0"
+ }
+ },
+ "unist-util-stringify-position": {
+ "version": "2.0.3",
+ "resolved": "https://registry.npmjs.org/unist-util-stringify-position/-/unist-util-stringify-position-2.0.3.tgz",
+ "integrity": "sha512-3faScn5I+hy9VleOq/qNbAd6pAx7iH5jYBMS9I1HgQVijz/4mv5Bvw5iw1sC/90CODiKo81G/ps8AJrISn687g==",
+ "requires": {
+ "@types/unist": "^2.0.2"
+ }
+ },
+ "unist-util-visit": {
+ "version": "2.0.2",
+ "resolved": "https://registry.npmjs.org/unist-util-visit/-/unist-util-visit-2.0.2.tgz",
+ "integrity": "sha512-HoHNhGnKj6y+Sq+7ASo2zpVdfdRifhTgX2KTU3B/sO/TTlZchp7E3S4vjRzDJ7L60KmrCPsQkVK3lEF3cz36XQ==",
+ "requires": {
+ "@types/unist": "^2.0.0",
+ "unist-util-is": "^4.0.0",
+ "unist-util-visit-parents": "^3.0.0"
+ },
+ "dependencies": {
+ "unist-util-is": {
+ "version": "4.0.2",
+ "resolved": "https://registry.npmjs.org/unist-util-is/-/unist-util-is-4.0.2.tgz",
+ "integrity": "sha512-Ofx8uf6haexJwI1gxWMGg6I/dLnF2yE+KibhD3/diOqY2TinLcqHXCV6OI5gFVn3xQqDH+u0M625pfKwIwgBKQ=="
+ }
+ }
+ },
+ "unist-util-visit-parents": {
+ "version": "3.0.2",
+ "resolved": "https://registry.npmjs.org/unist-util-visit-parents/-/unist-util-visit-parents-3.0.2.tgz",
+ "integrity": "sha512-yJEfuZtzFpQmg1OSCyS9M5NJRrln/9FbYosH3iW0MG402QbdbaB8ZESwUv9RO6nRfLAKvWcMxCwdLWOov36x/g==",
+ "requires": {
+ "@types/unist": "^2.0.0",
+ "unist-util-is": "^4.0.0"
+ },
+ "dependencies": {
+ "unist-util-is": {
+ "version": "4.0.2",
+ "resolved": "https://registry.npmjs.org/unist-util-is/-/unist-util-is-4.0.2.tgz",
+ "integrity": "sha512-Ofx8uf6haexJwI1gxWMGg6I/dLnF2yE+KibhD3/diOqY2TinLcqHXCV6OI5gFVn3xQqDH+u0M625pfKwIwgBKQ=="
+ }
+ }
+ },
+ "vfile": {
+ "version": "4.1.1",
+ "resolved": "https://registry.npmjs.org/vfile/-/vfile-4.1.1.tgz",
+ "integrity": "sha512-lRjkpyDGjVlBA7cDQhQ+gNcvB1BGaTHYuSOcY3S7OhDmBtnzX95FhtZZDecSTDm6aajFymyve6S5DN4ZHGezdQ==",
+ "requires": {
+ "@types/unist": "^2.0.0",
+ "is-buffer": "^2.0.0",
+ "replace-ext": "1.0.0",
+ "unist-util-stringify-position": "^2.0.0",
+ "vfile-message": "^2.0.0"
+ }
+ },
+ "vfile-location": {
+ "version": "3.0.1",
+ "resolved": "https://registry.npmjs.org/vfile-location/-/vfile-location-3.0.1.tgz",
+ "integrity": "sha512-yYBO06eeN/Ki6Kh1QAkgzYpWT1d3Qln+ZCtSbJqFExPl1S3y2qqotJQXoh6qEvl/jDlgpUJolBn3PItVnnZRqQ=="
+ },
+ "vfile-message": {
+ "version": "2.0.4",
+ "resolved": "https://registry.npmjs.org/vfile-message/-/vfile-message-2.0.4.tgz",
+ "integrity": "sha512-DjssxRGkMvifUOJre00juHoP9DPWuzjxKuMDrhNbk2TdaYYBNMStsNhEOt3idrtI12VQYM/1+iM0KOzXi4pxwQ==",
+ "requires": {
+ "@types/unist": "^2.0.0",
+ "unist-util-stringify-position": "^2.0.0"
+ }
+ }
+ }
+ },
"@mdx-js/tag": {
"version": "0.17.5",
"resolved": "https://registry.npmjs.org/@mdx-js/tag/-/tag-0.17.5.tgz",
"integrity": "sha512-Qjfo9nkFY7qAN1DkAx7oKPZ3xLWSNtVJ22+m58FxYLfVzCXlElj0OXka4lC6ERPnW40q8v5rmJKJfqnI8mKr8g=="
},
+ "@mdx-js/util": {
+ "version": "1.6.5",
+ "resolved": "https://registry.npmjs.org/@mdx-js/util/-/util-1.6.5.tgz",
+ "integrity": "sha512-ljr9hGQYW3kZY1NmQbmSe4yXvgq3KDRt0FMBOB5OaDWqi4X2WzEsp6SZ02KmVrieNW1cjWlj13pgvcf0towZPw=="
+ },
+ "@mikaelkristiansson/domready": {
+ "version": "1.0.10",
+ "resolved": "https://registry.npmjs.org/@mikaelkristiansson/domready/-/domready-1.0.10.tgz",
+ "integrity": "sha512-6cDuZeKSCSJ1KvfEQ25Y8OXUjqDJZ+HgUs6dhASWbAX8fxVraTfPsSeRe2bN+4QJDsgUaXaMWBYfRomCr04GGg=="
+ },
"@mrmlnc/readdir-enhanced": {
"version": "2.2.1",
"resolved": "https://registry.npmjs.org/@mrmlnc/readdir-enhanced/-/readdir-enhanced-2.2.1.tgz",
@@ -1287,11 +3817,36 @@
"glob-to-regexp": "^0.3.0"
}
},
+ "@nodelib/fs.scandir": {
+ "version": "2.1.3",
+ "resolved": "https://registry.npmjs.org/@nodelib/fs.scandir/-/fs.scandir-2.1.3.tgz",
+ "integrity": "sha512-eGmwYQn3gxo4r7jdQnkrrN6bY478C3P+a/y72IJukF8LjB6ZHeB3c+Ehacj3sYeSmUXGlnA67/PmbM9CVwL7Dw==",
+ "requires": {
+ "@nodelib/fs.stat": "2.0.3",
+ "run-parallel": "^1.1.9"
+ },
+ "dependencies": {
+ "@nodelib/fs.stat": {
+ "version": "2.0.3",
+ "resolved": "https://registry.npmjs.org/@nodelib/fs.stat/-/fs.stat-2.0.3.tgz",
+ "integrity": "sha512-bQBFruR2TAwoevBEd/NWMoAAtNGzTRgdrqnYCc7dhzfoNvqPzLyqlEQnzZ3kVnNrSp25iyxE00/3h2fqGAGArA=="
+ }
+ }
+ },
"@nodelib/fs.stat": {
"version": "1.1.3",
"resolved": "https://registry.npmjs.org/@nodelib/fs.stat/-/fs.stat-1.1.3.tgz",
"integrity": "sha512-shAmDyaQC4H92APFoIaVDHCx5bStIocgvbwQyxPRrbUY20V1EYTbSDchWbuwlMG3V17cprZhA6+78JfB+3DTPw=="
},
+ "@nodelib/fs.walk": {
+ "version": "1.2.4",
+ "resolved": "https://registry.npmjs.org/@nodelib/fs.walk/-/fs.walk-1.2.4.tgz",
+ "integrity": "sha512-1V9XOY4rDW0rehzbrcqAmHnz8e7SKvX27gh8Gt2WgB0+pdzdiLV83p72kZPU+jvMbS1qU5mauP2iOvO8rhmurQ==",
+ "requires": {
+ "@nodelib/fs.scandir": "2.1.3",
+ "fastq": "^1.6.0"
+ }
+ },
"@phosphor/algorithm": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/@phosphor/algorithm/-/algorithm-1.1.2.tgz",
@@ -1398,16 +3953,38 @@
"@phosphor/virtualdom": "^1.1.2"
}
},
- "@reach/router": {
- "version": "1.2.1",
- "resolved": "https://registry.npmjs.org/@reach/router/-/router-1.2.1.tgz",
- "integrity": "sha512-kTaX08X4g27tzIFQGRukaHmNbtMYDS3LEWIS8+l6OayGIw6Oyo1HIF/JzeuR2FoF9z6oV+x/wJSVSq4v8tcUGQ==",
+ "@pieh/friendly-errors-webpack-plugin": {
+ "version": "1.7.0-chalk-2",
+ "resolved": "https://registry.npmjs.org/@pieh/friendly-errors-webpack-plugin/-/friendly-errors-webpack-plugin-1.7.0-chalk-2.tgz",
+ "integrity": "sha512-65+vYGuDkHBCWWjqzzR/Ck318+d6yTI00EqII9qe3aPD1J3Olhvw0X38uM5moQb1PK/ksDXwSoPGt/5QhCiotw==",
"requires": {
- "create-react-context": "^0.2.1",
+ "chalk": "^2.4.2",
+ "error-stack-parser": "^2.0.0",
+ "string-width": "^2.0.0",
+ "strip-ansi": "^3"
+ },
+ "dependencies": {
+ "chalk": {
+ "version": "2.4.2",
+ "resolved": "https://registry.npmjs.org/chalk/-/chalk-2.4.2.tgz",
+ "integrity": "sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ==",
+ "requires": {
+ "ansi-styles": "^3.2.1",
+ "escape-string-regexp": "^1.0.5",
+ "supports-color": "^5.3.0"
+ }
+ }
+ }
+ },
+ "@reach/router": {
+ "version": "1.3.3",
+ "resolved": "https://registry.npmjs.org/@reach/router/-/router-1.3.3.tgz",
+ "integrity": "sha512-gOIAiFhWdiVGSVjukKeNKkCRBLmnORoTPyBihI/jLunICPgxdP30DroAvPQuf1eVfQbfGJQDJkwhJXsNPMnVWw==",
+ "requires": {
+ "create-react-context": "0.3.0",
"invariant": "^2.2.3",
"prop-types": "^15.6.1",
- "react-lifecycles-compat": "^3.0.4",
- "warning": "^3.0.0"
+ "react-lifecycles-compat": "^3.0.4"
}
},
"@rehooks/online-status": {
@@ -1424,11 +4001,17 @@
"version": "0.8.0",
"resolved": "https://registry.npmjs.org/@sindresorhus/slugify/-/slugify-0.8.0.tgz",
"integrity": "sha512-Y+C3aG0JHmi4nCfixHgq0iAtqWCjMCliWghf6fXbemRKSGzpcrHdYxGZGDt8MeFg+gH7ounfMbz6WogqKCWvDg==",
+ "dev": true,
"requires": {
"escape-string-regexp": "^1.0.5",
"lodash.deburr": "^4.1.0"
}
},
+ "@stefanprobst/lokijs": {
+ "version": "1.5.6-b",
+ "resolved": "https://registry.npmjs.org/@stefanprobst/lokijs/-/lokijs-1.5.6-b.tgz",
+ "integrity": "sha512-MNodHp46og+Sdde/LCxTLrxcD5Dimu21R/Fer2raXMG1XtHSV2+vZnkIV87OPAxuf2NiDj1W5hN7Q2MYUfQQ8w=="
+ },
"@svgr/babel-plugin-add-jsx-attribute": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/@svgr/babel-plugin-add-jsx-attribute/-/babel-plugin-add-jsx-attribute-4.0.0.tgz",
@@ -1547,6 +4130,19 @@
"loader-utils": "^1.1.0"
}
},
+ "@szmarczak/http-timer": {
+ "version": "1.1.2",
+ "resolved": "https://registry.npmjs.org/@szmarczak/http-timer/-/http-timer-1.1.2.tgz",
+ "integrity": "sha512-XIB2XbzHTN6ieIjfIMV9hlVcfPU26s2vafYWQcZHWXHOxiaRZYEDKEwdl129Zyg50+foYV2jCgtrqSA6qNuNSA==",
+ "requires": {
+ "defer-to-connect": "^1.0.1"
+ }
+ },
+ "@types/color-name": {
+ "version": "1.1.1",
+ "resolved": "https://registry.npmjs.org/@types/color-name/-/color-name-1.1.1.tgz",
+ "integrity": "sha512-rr+OQyAjxze7GgWrSaJwydHStIhHq2lvY3BOC2Mj7KnzI7XK0Uw1TOOdI9lDoajEbSWLiYgoo4f1R51erQfhPQ=="
+ },
"@types/configstore": {
"version": "2.1.1",
"resolved": "https://registry.npmjs.org/@types/configstore/-/configstore-2.1.1.tgz",
@@ -1554,7 +4150,7 @@
},
"@types/debug": {
"version": "0.0.29",
- "resolved": "http://registry.npmjs.org/@types/debug/-/debug-0.0.29.tgz",
+ "resolved": "https://registry.npmjs.org/@types/debug/-/debug-0.0.29.tgz",
"integrity": "sha1-oeUUrfvZLwOiJLpU1pMRHb8fN1Q="
},
"@types/events": {
@@ -1564,7 +4160,7 @@
},
"@types/get-port": {
"version": "0.0.4",
- "resolved": "http://registry.npmjs.org/@types/get-port/-/get-port-0.0.4.tgz",
+ "resolved": "https://registry.npmjs.org/@types/get-port/-/get-port-0.0.4.tgz",
"integrity": "sha1-62u3Qj2fiItjJmDcfS/T5po1ZD4="
},
"@types/glob": {
@@ -1578,9 +4174,49 @@
}
},
"@types/history": {
- "version": "4.7.2",
- "resolved": "https://registry.npmjs.org/@types/history/-/history-4.7.2.tgz",
- "integrity": "sha512-ui3WwXmjTaY73fOQ3/m3nnajU/Orhi6cEu5rzX+BrAAJxa3eITXZ5ch9suPqtM03OWhAHhPSyBGCN4UKoxO20Q=="
+ "version": "4.7.6",
+ "resolved": "https://registry.npmjs.org/@types/history/-/history-4.7.6.tgz",
+ "integrity": "sha512-GRTZLeLJ8ia00ZH8mxMO8t0aC9M1N9bN461Z2eaRurJo6Fpa+utgCwLzI4jQHcrdzuzp5WPN9jRwpsCQ1VhJ5w=="
+ },
+ "@types/istanbul-lib-coverage": {
+ "version": "2.0.3",
+ "resolved": "https://registry.npmjs.org/@types/istanbul-lib-coverage/-/istanbul-lib-coverage-2.0.3.tgz",
+ "integrity": "sha512-sz7iLqvVUg1gIedBOvlkxPlc8/uVzyS5OwGz1cKjXzkl3FpL3al0crU8YGU1WoHkxn0Wxbw5tyi6hvzJKNzFsw=="
+ },
+ "@types/istanbul-lib-report": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/@types/istanbul-lib-report/-/istanbul-lib-report-3.0.0.tgz",
+ "integrity": "sha512-plGgXAPfVKFoYfa9NpYDAkseG+g6Jr294RqeqcqDixSbU34MZVJRi/P+7Y8GDpzkEwLaGZZOpKIEmeVZNtKsrg==",
+ "requires": {
+ "@types/istanbul-lib-coverage": "*"
+ }
+ },
+ "@types/istanbul-reports": {
+ "version": "1.1.2",
+ "resolved": "https://registry.npmjs.org/@types/istanbul-reports/-/istanbul-reports-1.1.2.tgz",
+ "integrity": "sha512-P/W9yOX/3oPZSpaYOCQzGqgCQRXn0FFO/V8bWrCQs+wLmvVVxk6CRBXALEvNs9OHIatlnlFokfhuDo2ug01ciw==",
+ "requires": {
+ "@types/istanbul-lib-coverage": "*",
+ "@types/istanbul-lib-report": "*"
+ }
+ },
+ "@types/json-schema": {
+ "version": "7.0.5",
+ "resolved": "https://registry.npmjs.org/@types/json-schema/-/json-schema-7.0.5.tgz",
+ "integrity": "sha512-7+2BITlgjgDhH0vvwZU/HZJVyk+2XUlvxXe8dFMedNX/aMkaOq++rMAFXc0tM7ij15QaWlbdQASBR9dihi+bDQ=="
+ },
+ "@types/json5": {
+ "version": "0.0.29",
+ "resolved": "https://registry.npmjs.org/@types/json5/-/json5-0.0.29.tgz",
+ "integrity": "sha1-7ihweulOEdK4J7y+UnC86n8+ce4="
+ },
+ "@types/mdast": {
+ "version": "3.0.3",
+ "resolved": "https://registry.npmjs.org/@types/mdast/-/mdast-3.0.3.tgz",
+ "integrity": "sha512-SXPBMnFVQg1s00dlMCc/jCdvPqdE4mXaMMCeRlxLDmTAEoegHT53xKtkDnzDTOcmMHUfcjyf36/YYZ6SxRdnsw==",
+ "requires": {
+ "@types/unist": "*"
+ }
},
"@types/minimatch": {
"version": "3.0.3",
@@ -1589,7 +4225,7 @@
},
"@types/mkdirp": {
"version": "0.3.29",
- "resolved": "http://registry.npmjs.org/@types/mkdirp/-/mkdirp-0.3.29.tgz",
+ "resolved": "https://registry.npmjs.org/@types/mkdirp/-/mkdirp-0.3.29.tgz",
"integrity": "sha1-fyrX7FX5FEgvybHsS7GuYCjUYGY="
},
"@types/node": {
@@ -1597,6 +4233,11 @@
"resolved": "https://registry.npmjs.org/@types/node/-/node-7.10.2.tgz",
"integrity": "sha512-RO4ig5taKmcrU4Rex8ojG1gpwFkjddzug9iPQSDvbewHN9vDpcFewevkaOK+KT+w1LeZnxbgOyfXwV4pxsQ4GQ=="
},
+ "@types/parse-json": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/@types/parse-json/-/parse-json-4.0.0.tgz",
+ "integrity": "sha512-//oorEZjL6sbPcKUaCdIGlIUeH26mgzimjBB77G6XRgnDl/L5wOnpyBGRe/Mmf5CVW3PwEBE1NjiMZ/ssFh4wA=="
+ },
"@types/prop-types": {
"version": "15.5.8",
"resolved": "https://registry.npmjs.org/@types/prop-types/-/prop-types-15.5.8.tgz",
@@ -1608,18 +4249,18 @@
"integrity": "sha512-eqz8c/0kwNi/OEHQfvIuJVLTst3in0e7uTKeuY+WL/zfKn0xVujOTp42bS/vUUokhK5P2BppLd9JXMOMHcgbjA=="
},
"@types/reach__router": {
- "version": "1.2.3",
- "resolved": "https://registry.npmjs.org/@types/reach__router/-/reach__router-1.2.3.tgz",
- "integrity": "sha512-Zp0AdVhoJXjwsgp8pDPVEMnAH5eHU64hi5EnPT1Jerddqwiy0O87KFrnZKd1DKdO87cU120n2d3SnKKPtf4wFA==",
+ "version": "1.3.5",
+ "resolved": "https://registry.npmjs.org/@types/reach__router/-/reach__router-1.3.5.tgz",
+ "integrity": "sha512-h0NbqXN/tJuBY/xggZSej1SKQEstbHO7J/omt1tYoFGmj3YXOodZKbbqD4mNDh7zvEGYd7YFrac1LTtAr3xsYQ==",
"requires": {
"@types/history": "*",
"@types/react": "*"
}
},
"@types/react": {
- "version": "16.8.5",
- "resolved": "https://registry.npmjs.org/@types/react/-/react-16.8.5.tgz",
- "integrity": "sha512-8LRySaaSJVLNZb2dbOGvGmzn88cbAfrgDpuWy+6lLgQ0OJFgHHvyuaCX4/7ikqJlpmCPf4uazJAZcfTQRdJqdQ==",
+ "version": "16.9.36",
+ "resolved": "https://registry.npmjs.org/@types/react/-/react-16.9.36.tgz",
+ "integrity": "sha512-mGgUb/Rk/vGx4NCvquRuSH0GHBQKb1OqpGS9cT9lFxlTLHZgkksgI60TuIxubmn7JuCb+sENHhQciqa0npm0AQ==",
"requires": {
"@types/prop-types": "*",
"csstype": "^2.2.0"
@@ -1627,7 +4268,7 @@
},
"@types/tmp": {
"version": "0.0.32",
- "resolved": "http://registry.npmjs.org/@types/tmp/-/tmp-0.0.32.tgz",
+ "resolved": "https://registry.npmjs.org/@types/tmp/-/tmp-0.0.32.tgz",
"integrity": "sha1-DTyzECL4Qn6ljACK8yuA2hJspOM="
},
"@types/unist": {
@@ -1654,6 +4295,40 @@
"@types/unist": "*"
}
},
+ "@types/websocket": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/@types/websocket/-/websocket-1.0.0.tgz",
+ "integrity": "sha512-MLr8hDM8y7vvdAdnoDEP5LotRoYJj7wgT6mWzCUQH/gHqzS4qcnOT/K4dhC0WimWIUiA3Arj9QAJGGKNRiRZKA==",
+ "requires": {
+ "@types/node": "*"
+ }
+ },
+ "@types/yargs": {
+ "version": "15.0.5",
+ "resolved": "https://registry.npmjs.org/@types/yargs/-/yargs-15.0.5.tgz",
+ "integrity": "sha512-Dk/IDOPtOgubt/IaevIUbTgV7doaKkoorvOyYM2CMwuDyP89bekI7H4xLIwunNYiK9jhCkmc6pUrJk3cj2AB9w==",
+ "requires": {
+ "@types/yargs-parser": "*"
+ }
+ },
+ "@types/yargs-parser": {
+ "version": "15.0.0",
+ "resolved": "https://registry.npmjs.org/@types/yargs-parser/-/yargs-parser-15.0.0.tgz",
+ "integrity": "sha512-FA/BWv8t8ZWJ+gEOnLLd8ygxH/2UFbAvgEonyfN6yWGLKc7zVjbpl2Y4CTjid9h2RfgPP6SEt6uHwEOply00yw=="
+ },
+ "@types/yoga-layout": {
+ "version": "1.9.2",
+ "resolved": "https://registry.npmjs.org/@types/yoga-layout/-/yoga-layout-1.9.2.tgz",
+ "integrity": "sha512-S9q47ByT2pPvD65IvrWp7qppVMpk9WGMbVq9wbWZOHg6tnXSD4vyhao6nOSBwwfDdV2p3Kx9evA9vI+XWTfDvw=="
+ },
+ "@urql/core": {
+ "version": "1.12.0",
+ "resolved": "https://registry.npmjs.org/@urql/core/-/core-1.12.0.tgz",
+ "integrity": "sha512-OyE1FClMz+r3j3v5MxNGzlpvFZya7PQxbSya5qa2lIJdyf9AtExJwjxpjmOf3crQGUvxatRVso4F9SNLT5MDjQ==",
+ "requires": {
+ "wonka": "^4.0.14"
+ }
+ },
"@webassemblyjs/ast": {
"version": "1.7.11",
"resolved": "https://registry.npmjs.org/@webassemblyjs/ast/-/ast-1.7.11.tgz",
@@ -1808,6 +4483,14 @@
"@xtuc/long": "4.2.1"
}
},
+ "@wry/equality": {
+ "version": "0.1.11",
+ "resolved": "https://registry.npmjs.org/@wry/equality/-/equality-0.1.11.tgz",
+ "integrity": "sha512-mwEVBDUVODlsQQ5dfuLUS5/Tf7jqUKyhKYHmVi4fPB6bDMOfWvUPJmKgS1Z7Za/sOI3vzWt4+O7yCiL/70MogA==",
+ "requires": {
+ "tslib": "^1.9.3"
+ }
+ },
"@xtuc/ieee754": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/@xtuc/ieee754/-/ieee754-1.2.0.tgz",
@@ -1824,38 +4507,43 @@
"integrity": "sha512-nne9/IiQ/hzIhY6pdDnbBtz7DjPTKrY00P/zvPSm5pOFkl6xuGrGnXn/VtTNNfNtAfZ9/1RtehkszU9qcTii0Q=="
},
"accepts": {
- "version": "1.3.5",
- "resolved": "https://registry.npmjs.org/accepts/-/accepts-1.3.5.tgz",
- "integrity": "sha1-63d99gEXI6OxTopywIBcjoZ0a9I=",
+ "version": "1.3.7",
+ "resolved": "https://registry.npmjs.org/accepts/-/accepts-1.3.7.tgz",
+ "integrity": "sha512-Il80Qs2WjYlJIBNzNkK6KYqlVMTbZLXgHx2oT0pU/fjRHyEp+PEfEPY0R3WCwAGVOtauxh1hOxNgIf5bv7dQpA==",
"requires": {
- "mime-types": "~2.1.18",
- "negotiator": "0.6.1"
- }
- },
- "acorn": {
- "version": "6.1.1",
- "resolved": "https://registry.npmjs.org/acorn/-/acorn-6.1.1.tgz",
- "integrity": "sha512-jPTiwtOxaHNaAPg/dmrJ/beuzLRnXtB0kQPQ8JpotKJgTB6rX6c8mlf315941pyjBSaPg8NHXS9fhP4u17DpGA=="
- },
- "acorn-dynamic-import": {
- "version": "3.0.0",
- "resolved": "https://registry.npmjs.org/acorn-dynamic-import/-/acorn-dynamic-import-3.0.0.tgz",
- "integrity": "sha512-zVWV8Z8lislJoOKKqdNMOB+s6+XV5WERty8MnKBeFgwA+19XJjJHs2RP5dzM57FftIs+jQnRToLiWazKr6sSWg==",
- "requires": {
- "acorn": "^5.0.0"
+ "mime-types": "~2.1.24",
+ "negotiator": "0.6.2"
},
"dependencies": {
- "acorn": {
- "version": "5.7.3",
- "resolved": "https://registry.npmjs.org/acorn/-/acorn-5.7.3.tgz",
- "integrity": "sha512-T/zvzYRfbVojPWahDsE5evJdHb3oJoQfFbsrKM7w5Zcs++Tr257tia3BmMP8XYVjp1S9RZXQMh7gao96BlqZOw=="
+ "mime-db": {
+ "version": "1.44.0",
+ "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.44.0.tgz",
+ "integrity": "sha512-/NOTfLrsPBVeH7YtFPgsVWveuL+4SjjYxaQ1xtM1KMFj7HdxlBlxeyNLzhyJVx7r4rZGJAZ/6lkKCitSc/Nmpg=="
+ },
+ "mime-types": {
+ "version": "2.1.27",
+ "resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.27.tgz",
+ "integrity": "sha512-JIhqnCasI9yD+SsmkquHBxTSEuZdQX5BuQnS2Vc7puQQQ+8yiP5AY5uWhpdv4YL4VM5c6iliiYWPgJ/nJQLp7w==",
+ "requires": {
+ "mime-db": "1.44.0"
+ }
}
}
},
+ "acorn": {
+ "version": "6.4.1",
+ "resolved": "https://registry.npmjs.org/acorn/-/acorn-6.4.1.tgz",
+ "integrity": "sha512-ZVA9k326Nwrj3Cj9jlh3wGFutC2ZornPNARZwsNYqQYgN0EsV2d53w5RN/co65Ohn4sUAUtb1rSUAOD6XN9idA=="
+ },
+ "acorn-dynamic-import": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/acorn-dynamic-import/-/acorn-dynamic-import-4.0.0.tgz",
+ "integrity": "sha512-d3OEjQV4ROpoflsnUA8HozoIR504TFxNivYEUi6uwz0IYhBkTDXGuWlNdMtybRt3nqVx/L6XqMt0FxkXuWKZhw=="
+ },
"acorn-jsx": {
- "version": "5.0.1",
- "resolved": "https://registry.npmjs.org/acorn-jsx/-/acorn-jsx-5.0.1.tgz",
- "integrity": "sha512-HJ7CfNHrfJLlNTzIEUTj43LNWGkqpRLxm3YjAlcD0ACydk9XynzYsCBHxut+iqt+1aBXkx9UP/w/ZqMr13XIzg=="
+ "version": "5.2.0",
+ "resolved": "https://registry.npmjs.org/acorn-jsx/-/acorn-jsx-5.2.0.tgz",
+ "integrity": "sha512-HiUX/+K2YpkpJ+SzBffkM/AQ2YE03S0U1kjTLVpoJdhZMOWy8qvXVN9JdLqv2QsaQ6MPYQIuNmwD8zOiYUofLQ=="
},
"address": {
"version": "1.0.3",
@@ -1867,6 +4555,22 @@
"resolved": "https://registry.npmjs.org/after/-/after-0.8.2.tgz",
"integrity": "sha1-/ts5T58OAqqXaOcCvaI7UF+ufh8="
},
+ "aggregate-error": {
+ "version": "3.0.1",
+ "resolved": "https://registry.npmjs.org/aggregate-error/-/aggregate-error-3.0.1.tgz",
+ "integrity": "sha512-quoaXsZ9/BLNae5yiNoUz+Nhkwz83GhWwtYFglcjEQB2NDHCIpApbqXxIFnm4Pq/Nvhrsq5sYJFyohrrxnTGAA==",
+ "requires": {
+ "clean-stack": "^2.0.0",
+ "indent-string": "^4.0.0"
+ },
+ "dependencies": {
+ "indent-string": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/indent-string/-/indent-string-4.0.0.tgz",
+ "integrity": "sha512-EdDDZu4A2OyIK7Lr/2zG+w5jmbuk1DVBnEwREQvBzspBJkCEbRa8GxU1lghYcaGJCnRWibjDXlq779X1/y5xwg=="
+ }
+ }
+ },
"ajv": {
"version": "6.6.2",
"resolved": "https://registry.npmjs.org/ajv/-/ajv-6.6.2.tgz",
@@ -1899,17 +4603,42 @@
"integrity": "sha1-SlKCrBZHKek2Gbz9OtFR+BfOkfU="
},
"ansi-align": {
- "version": "2.0.0",
- "resolved": "https://registry.npmjs.org/ansi-align/-/ansi-align-2.0.0.tgz",
- "integrity": "sha1-w2rsy6VjuJzrVW82kPCx2eNUf38=",
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/ansi-align/-/ansi-align-3.0.0.tgz",
+ "integrity": "sha512-ZpClVKqXN3RGBmKibdfWzqCY4lnjEuoNzU5T0oEFpfd/z5qJHVarukridD4juLO2FXMiwUQxr9WqQtaYa8XRYw==",
"requires": {
- "string-width": "^2.0.0"
+ "string-width": "^3.0.0"
+ },
+ "dependencies": {
+ "ansi-regex": {
+ "version": "4.1.0",
+ "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz",
+ "integrity": "sha512-1apePfXM1UOSqw0o9IiFAovVz9M5S1Dg+4TrDwfMewQ6p/rmMueb7tWZjQ1rx4Loy1ArBggoqGpfqqdI4rondg=="
+ },
+ "string-width": {
+ "version": "3.1.0",
+ "resolved": "https://registry.npmjs.org/string-width/-/string-width-3.1.0.tgz",
+ "integrity": "sha512-vafcv6KjVZKSgz06oM/H6GDBrAtz8vdhQakGjFIvNrHA6y3HCF1CInLy+QLq8dTJPQ1b+KDUqDFctkdRW44e1w==",
+ "requires": {
+ "emoji-regex": "^7.0.1",
+ "is-fullwidth-code-point": "^2.0.0",
+ "strip-ansi": "^5.1.0"
+ }
+ },
+ "strip-ansi": {
+ "version": "5.2.0",
+ "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-5.2.0.tgz",
+ "integrity": "sha512-DuRs1gKbBqsMKIZlrffwlug8MHkcnpjs5VPmL1PAh+mA30U0DTotfDZ0d2UUsXpPmPmMMJ6W773MaA3J+lbiWA==",
+ "requires": {
+ "ansi-regex": "^4.1.0"
+ }
+ }
}
},
"ansi-colors": {
- "version": "3.2.3",
- "resolved": "https://registry.npmjs.org/ansi-colors/-/ansi-colors-3.2.3.tgz",
- "integrity": "sha512-LEHHyuhlPY3TmuUYMh2oz89lTShfvgbmzaBcxve9t/9Wuy7Dwf4yoAKcND7KFT1HAQfqZ12qtc+DUrBMeKF9nw=="
+ "version": "3.2.4",
+ "resolved": "https://registry.npmjs.org/ansi-colors/-/ansi-colors-3.2.4.tgz",
+ "integrity": "sha512-hHUXGagefjN2iRrID63xckIvotOXOojhQKWIPUZ4mNUZ9nLZW+7FMNoE1lOkEhNWYsx/7ysGIuJYCiMAA9FnrA=="
},
"ansi-escapes": {
"version": "3.2.0",
@@ -1954,20 +4683,61 @@
}
},
"apollo-link": {
- "version": "1.2.8",
- "resolved": "https://registry.npmjs.org/apollo-link/-/apollo-link-1.2.8.tgz",
- "integrity": "sha512-lfzGRxhK9RmiH3HPFi7TIEBhhDY9M5a2ZDnllcfy5QDk7cCQHQ1WQArcw1FK0g1B+mV4Kl72DSrlvZHZJEolrA==",
+ "version": "1.2.14",
+ "resolved": "https://registry.npmjs.org/apollo-link/-/apollo-link-1.2.14.tgz",
+ "integrity": "sha512-p67CMEFP7kOG1JZ0ZkYZwRDa369w5PIjtMjvrQd/HnIV8FRsHRqLqK+oAZQnFa1DDdZtOtHTi+aMIW6EatC2jg==",
"requires": {
- "zen-observable-ts": "^0.8.15"
+ "apollo-utilities": "^1.3.0",
+ "ts-invariant": "^0.4.0",
+ "tslib": "^1.9.3",
+ "zen-observable-ts": "^0.8.21"
+ }
+ },
+ "apollo-link-http-common": {
+ "version": "0.2.16",
+ "resolved": "https://registry.npmjs.org/apollo-link-http-common/-/apollo-link-http-common-0.2.16.tgz",
+ "integrity": "sha512-2tIhOIrnaF4UbQHf7kjeQA/EmSorB7+HyJIIrUjJOKBgnXwuexi8aMecRlqTIDWcyVXCeqLhUnztMa6bOH/jTg==",
+ "requires": {
+ "apollo-link": "^1.2.14",
+ "ts-invariant": "^0.4.0",
+ "tslib": "^1.9.3"
+ }
+ },
+ "apollo-upload-client": {
+ "version": "13.0.0",
+ "resolved": "https://registry.npmjs.org/apollo-upload-client/-/apollo-upload-client-13.0.0.tgz",
+ "integrity": "sha512-lJ9/bk1BH1lD15WhWRha2J3+LrXrPIX5LP5EwiOUHv8PCORp4EUrcujrA3rI5hZeZygrTX8bshcuMdpqpSrvtA==",
+ "requires": {
+ "@babel/runtime": "^7.9.2",
+ "apollo-link": "^1.2.12",
+ "apollo-link-http-common": "^0.2.14",
+ "extract-files": "^8.0.0"
+ },
+ "dependencies": {
+ "@babel/runtime": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.10.2.tgz",
+ "integrity": "sha512-6sF3uQw2ivImfVIl62RZ7MXhO2tap69WeWK57vAaimT6AZbE4FbqjdEJIN1UqoD6wI6B+1n9UiagafH1sxjOtg==",
+ "requires": {
+ "regenerator-runtime": "^0.13.4"
+ }
+ },
+ "regenerator-runtime": {
+ "version": "0.13.5",
+ "resolved": "https://registry.npmjs.org/regenerator-runtime/-/regenerator-runtime-0.13.5.tgz",
+ "integrity": "sha512-ZS5w8CpKFinUzOwW3c83oPeVXoNsrLsaCoLtJvAClH135j/R77RuymhiSErhm2lKcwSCIpmvIWSbDkIfAqKQlA=="
+ }
}
},
"apollo-utilities": {
- "version": "1.1.3",
- "resolved": "https://registry.npmjs.org/apollo-utilities/-/apollo-utilities-1.1.3.tgz",
- "integrity": "sha512-pF9abhiClX5gfj/WFWZh8DiI33nOLGxRhXH9ZMquaM1V8bhq1WLFPt2QjShWH3kGQVeIGUK+FQefnhe+ZaaAYg==",
+ "version": "1.3.4",
+ "resolved": "https://registry.npmjs.org/apollo-utilities/-/apollo-utilities-1.3.4.tgz",
+ "integrity": "sha512-pk2hiWrCXMAy2fRPwEyhvka+mqwzeP60Jr1tRYi5xru+3ko94HI9o6lK0CT33/w4RDlxWchmdhDCrvdr+pHCig==",
"requires": {
+ "@wry/equality": "^0.1.2",
"fast-json-stable-stringify": "^2.0.0",
- "tslib": "^1.9.3"
+ "ts-invariant": "^0.4.0",
+ "tslib": "^1.10.0"
}
},
"aproba": {
@@ -2031,6 +4801,11 @@
"resolved": "https://registry.npmjs.org/arr-flatten/-/arr-flatten-1.1.0.tgz",
"integrity": "sha512-L3hKV5R/p5o81R7O02IGnwpDmkp6E982XhtbuwSe3O4qOtMMMtodicASA1Cny2U+aCXcNpml+m4dPsvsJ3jatg=="
},
+ "arr-rotate": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/arr-rotate/-/arr-rotate-1.0.0.tgz",
+ "integrity": "sha512-yOzOZcR9Tn7enTF66bqKorGGH0F36vcPaSWg8fO0c0UYb3LX3VMXj5ZxEqQLNOecAhlRJ7wYZja5i4jTlnbIfQ=="
+ },
"arr-union": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/arr-union/-/arr-union-3.1.0.tgz",
@@ -2048,16 +4823,70 @@
},
"array-flatten": {
"version": "1.1.1",
- "resolved": "http://registry.npmjs.org/array-flatten/-/array-flatten-1.1.1.tgz",
+ "resolved": "https://registry.npmjs.org/array-flatten/-/array-flatten-1.1.1.tgz",
"integrity": "sha1-ml9pkFGx5wczKPKgCJaLZOopVdI="
},
"array-includes": {
- "version": "3.0.3",
- "resolved": "https://registry.npmjs.org/array-includes/-/array-includes-3.0.3.tgz",
- "integrity": "sha1-GEtI9i2S10UrsxsyMWXH+L0CJm0=",
+ "version": "3.1.1",
+ "resolved": "https://registry.npmjs.org/array-includes/-/array-includes-3.1.1.tgz",
+ "integrity": "sha512-c2VXaCHl7zPsvpkFsw4nxvFie4fh1ur9bpcgsVkIjqn0H/Xwdg+7fv3n2r/isyS8EBj5b06M9kHyZuIr4El6WQ==",
"requires": {
- "define-properties": "^1.1.2",
- "es-abstract": "^1.7.0"
+ "define-properties": "^1.1.3",
+ "es-abstract": "^1.17.0",
+ "is-string": "^1.0.5"
+ },
+ "dependencies": {
+ "es-abstract": {
+ "version": "1.17.6",
+ "resolved": "https://registry.npmjs.org/es-abstract/-/es-abstract-1.17.6.tgz",
+ "integrity": "sha512-Fr89bON3WFyUi5EvAeI48QTWX0AyekGgLA8H+c+7fbfCkJwRWRMLd8CQedNEyJuoYYhmtEqY92pgte1FAhBlhw==",
+ "requires": {
+ "es-to-primitive": "^1.2.1",
+ "function-bind": "^1.1.1",
+ "has": "^1.0.3",
+ "has-symbols": "^1.0.1",
+ "is-callable": "^1.2.0",
+ "is-regex": "^1.1.0",
+ "object-inspect": "^1.7.0",
+ "object-keys": "^1.1.1",
+ "object.assign": "^4.1.0",
+ "string.prototype.trimend": "^1.0.1",
+ "string.prototype.trimstart": "^1.0.1"
+ }
+ },
+ "es-to-primitive": {
+ "version": "1.2.1",
+ "resolved": "https://registry.npmjs.org/es-to-primitive/-/es-to-primitive-1.2.1.tgz",
+ "integrity": "sha512-QCOllgZJtaUo9miYBcLChTUaHNjJF3PYs1VidD7AwiEj1kYxKeQTctLAezAOH5ZKRH0g2IgPn6KwB4IT8iRpvA==",
+ "requires": {
+ "is-callable": "^1.1.4",
+ "is-date-object": "^1.0.1",
+ "is-symbol": "^1.0.2"
+ }
+ },
+ "has-symbols": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.0.1.tgz",
+ "integrity": "sha512-PLcsoqu++dmEIZB+6totNFKq/7Do+Z0u4oT0zKOJNl3lYK6vGwwu2hjHs+68OEZbTjiUE9bgOABXbP/GvrS0Kg=="
+ },
+ "is-callable": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/is-callable/-/is-callable-1.2.0.tgz",
+ "integrity": "sha512-pyVD9AaGLxtg6srb2Ng6ynWJqkHU9bEM087AKck0w8QwDarTfNcpIYoU8x8Hv2Icm8u6kFJM18Dag8lyqGkviw=="
+ },
+ "is-regex": {
+ "version": "1.1.0",
+ "resolved": "https://registry.npmjs.org/is-regex/-/is-regex-1.1.0.tgz",
+ "integrity": "sha512-iI97M8KTWID2la5uYXlkbSDQIg4F6o1sYboZKKTDpnDQMLtUL86zxhgDet3Q2SriaYsyGqZ6Mn2SjbRKeLHdqw==",
+ "requires": {
+ "has-symbols": "^1.0.1"
+ }
+ },
+ "object-keys": {
+ "version": "1.1.1",
+ "resolved": "https://registry.npmjs.org/object-keys/-/object-keys-1.1.1.tgz",
+ "integrity": "sha512-NuAESUOUMrlIXOfHKzD6bpPu3tYt3xvjNdRIQ+FeT0lNb4K8WR70CaDxhuNguS2XG+GjkyMwOzsN5ZktImfhLA=="
+ }
}
},
"array-iterate": {
@@ -2093,6 +4922,68 @@
"resolved": "https://registry.npmjs.org/array-unique/-/array-unique-0.3.2.tgz",
"integrity": "sha1-qJS3XUvE9s1nnvMkSp/Y9Gri1Cg="
},
+ "array.prototype.flat": {
+ "version": "1.2.3",
+ "resolved": "https://registry.npmjs.org/array.prototype.flat/-/array.prototype.flat-1.2.3.tgz",
+ "integrity": "sha512-gBlRZV0VSmfPIeWfuuy56XZMvbVfbEUnOXUvt3F/eUUUSyzlgLxhEX4YAEpxNAogRGehPSnfXyPtYyKAhkzQhQ==",
+ "requires": {
+ "define-properties": "^1.1.3",
+ "es-abstract": "^1.17.0-next.1"
+ },
+ "dependencies": {
+ "es-abstract": {
+ "version": "1.17.6",
+ "resolved": "https://registry.npmjs.org/es-abstract/-/es-abstract-1.17.6.tgz",
+ "integrity": "sha512-Fr89bON3WFyUi5EvAeI48QTWX0AyekGgLA8H+c+7fbfCkJwRWRMLd8CQedNEyJuoYYhmtEqY92pgte1FAhBlhw==",
+ "requires": {
+ "es-to-primitive": "^1.2.1",
+ "function-bind": "^1.1.1",
+ "has": "^1.0.3",
+ "has-symbols": "^1.0.1",
+ "is-callable": "^1.2.0",
+ "is-regex": "^1.1.0",
+ "object-inspect": "^1.7.0",
+ "object-keys": "^1.1.1",
+ "object.assign": "^4.1.0",
+ "string.prototype.trimend": "^1.0.1",
+ "string.prototype.trimstart": "^1.0.1"
+ }
+ },
+ "es-to-primitive": {
+ "version": "1.2.1",
+ "resolved": "https://registry.npmjs.org/es-to-primitive/-/es-to-primitive-1.2.1.tgz",
+ "integrity": "sha512-QCOllgZJtaUo9miYBcLChTUaHNjJF3PYs1VidD7AwiEj1kYxKeQTctLAezAOH5ZKRH0g2IgPn6KwB4IT8iRpvA==",
+ "requires": {
+ "is-callable": "^1.1.4",
+ "is-date-object": "^1.0.1",
+ "is-symbol": "^1.0.2"
+ }
+ },
+ "has-symbols": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.0.1.tgz",
+ "integrity": "sha512-PLcsoqu++dmEIZB+6totNFKq/7Do+Z0u4oT0zKOJNl3lYK6vGwwu2hjHs+68OEZbTjiUE9bgOABXbP/GvrS0Kg=="
+ },
+ "is-callable": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/is-callable/-/is-callable-1.2.0.tgz",
+ "integrity": "sha512-pyVD9AaGLxtg6srb2Ng6ynWJqkHU9bEM087AKck0w8QwDarTfNcpIYoU8x8Hv2Icm8u6kFJM18Dag8lyqGkviw=="
+ },
+ "is-regex": {
+ "version": "1.1.0",
+ "resolved": "https://registry.npmjs.org/is-regex/-/is-regex-1.1.0.tgz",
+ "integrity": "sha512-iI97M8KTWID2la5uYXlkbSDQIg4F6o1sYboZKKTDpnDQMLtUL86zxhgDet3Q2SriaYsyGqZ6Mn2SjbRKeLHdqw==",
+ "requires": {
+ "has-symbols": "^1.0.1"
+ }
+ },
+ "object-keys": {
+ "version": "1.1.1",
+ "resolved": "https://registry.npmjs.org/object-keys/-/object-keys-1.1.1.tgz",
+ "integrity": "sha512-NuAESUOUMrlIXOfHKzD6bpPu3tYt3xvjNdRIQ+FeT0lNb4K8WR70CaDxhuNguS2XG+GjkyMwOzsN5ZktImfhLA=="
+ }
+ }
+ },
"arraybuffer.slice": {
"version": "0.0.7",
"resolved": "https://registry.npmjs.org/arraybuffer.slice/-/arraybuffer.slice-0.0.7.tgz",
@@ -2124,13 +5015,21 @@
"bn.js": "^4.0.0",
"inherits": "^2.0.1",
"minimalistic-assert": "^1.0.0"
+ },
+ "dependencies": {
+ "bn.js": {
+ "version": "4.11.9",
+ "resolved": "https://registry.npmjs.org/bn.js/-/bn.js-4.11.9.tgz",
+ "integrity": "sha512-E6QoYqCKZfgatHTdHzs1RRKP7ip4vvm+EyRUeE2RF0NblwVvb0p6jSVeNTOFxPn26QXN2o6SMfNxKp6kU8zQaw=="
+ }
}
},
"assert": {
- "version": "1.4.1",
- "resolved": "https://registry.npmjs.org/assert/-/assert-1.4.1.tgz",
- "integrity": "sha1-mZEtWRg2tab1s0XA8H7vwI/GXZE=",
+ "version": "1.5.0",
+ "resolved": "https://registry.npmjs.org/assert/-/assert-1.5.0.tgz",
+ "integrity": "sha512-EDsgawzwoun2CZkCgtxJbv392v4nbk9XDD06zI+kQYoBM/3RBWLlEyJARDOmhAAosBjWACEkKL6S+lIZtcAubA==",
"requires": {
+ "object-assign": "^4.1.1",
"util": "0.10.3"
},
"dependencies": {
@@ -2141,7 +5040,7 @@
},
"util": {
"version": "0.10.3",
- "resolved": "http://registry.npmjs.org/util/-/util-0.10.3.tgz",
+ "resolved": "https://registry.npmjs.org/util/-/util-0.10.3.tgz",
"integrity": "sha1-evsa/lCAUkZInj23/g7TeTNqwPk=",
"requires": {
"inherits": "2.0.1"
@@ -2171,7 +5070,7 @@
},
"async": {
"version": "1.5.2",
- "resolved": "http://registry.npmjs.org/async/-/async-1.5.2.tgz",
+ "resolved": "https://registry.npmjs.org/async/-/async-1.5.2.tgz",
"integrity": "sha1-7GphrlZIDAw8skHJVhjiCJL5Zyo="
},
"async-each": {
@@ -2185,20 +5084,30 @@
"integrity": "sha1-NhIfhFwFeBct5Bmpfb6x0W7DRUI="
},
"async-limiter": {
- "version": "1.0.0",
- "resolved": "https://registry.npmjs.org/async-limiter/-/async-limiter-1.0.0.tgz",
- "integrity": "sha512-jp/uFnooOiO+L211eZOoSyzpOITMXx1rBITauYykG3BRYPu8h0UcxsPNB04RR5vo4Tyz3+ay17tR6JVf9qzYWg=="
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/async-limiter/-/async-limiter-1.0.1.tgz",
+ "integrity": "sha512-csOlWGAcRFJaI6m+F2WKdnMKr4HhdhFVBk0H/QbJFMCr+uO2kwohwXQPxw/9OCxp05r5ghVBFSyioixx3gfkNQ=="
},
"asynckit": {
"version": "0.4.0",
"resolved": "https://registry.npmjs.org/asynckit/-/asynckit-0.4.0.tgz",
"integrity": "sha1-x57Zf380y48robyXkLzDZkdLS3k="
},
+ "at-least-node": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/at-least-node/-/at-least-node-1.0.0.tgz",
+ "integrity": "sha512-+q/t7Ekv1EDY2l6Gda6LLiX14rU9TV20Wa3ofeQmwPFZbOMo9DXrLbOjFaaclkXKWidIaopwAObQDqwWtGUjqg=="
+ },
"atob": {
"version": "2.1.2",
"resolved": "https://registry.npmjs.org/atob/-/atob-2.1.2.tgz",
"integrity": "sha512-Wm6ukoaOGJi/73p/cl2GvLjTI5JM1k/O14isD73YML8StrH/7/lRFgmg8nICZgD3bZZvjwCGxtMOD3wWNAu8cg=="
},
+ "auto-bind": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/auto-bind/-/auto-bind-4.0.0.tgz",
+ "integrity": "sha512-Hdw8qdNiqdJ8LqT0iK0sVzkFbzg6fhnQqqfWhBDxcHZvU75+B+ayzTy8x+k5Ix0Y92XOhOUlx74ps+bA6BeYMQ=="
+ },
"autoprefixer": {
"version": "9.4.7",
"resolved": "https://registry.npmjs.org/autoprefixer/-/autoprefixer-9.4.7.tgz",
@@ -2296,12 +5205,9 @@
"integrity": "sha512-ReZxvNHIOv88FlT7rxcXIIC0fPt4KZqZbOlivyWtXLt8ESx84zd3kMC6iK5jVeS2qt+g7ftS7ye4fi06X5rtRQ=="
},
"axobject-query": {
- "version": "2.0.2",
- "resolved": "https://registry.npmjs.org/axobject-query/-/axobject-query-2.0.2.tgz",
- "integrity": "sha512-MCeek8ZH7hKyO1rWUbKNQBbl4l2eY0ntk7OGi+q0RlafrCnfPxC06WZA+uebCfmYp4mNU9jRBP1AhGyf8+W3ww==",
- "requires": {
- "ast-types-flow": "0.0.7"
- }
+ "version": "2.1.2",
+ "resolved": "https://registry.npmjs.org/axobject-query/-/axobject-query-2.1.2.tgz",
+ "integrity": "sha512-ICt34ZmrVt8UQnvPl6TVyDTkmhXmAyAT4Jh5ugfGUX4MOrZ+U/ZY6/sdylRw3qGNr9Ub5AJsaHeDMzNLehRdOQ=="
},
"babel-code-frame": {
"version": "6.26.0",
@@ -2320,7 +5226,7 @@
},
"chalk": {
"version": "1.1.3",
- "resolved": "http://registry.npmjs.org/chalk/-/chalk-1.1.3.tgz",
+ "resolved": "https://registry.npmjs.org/chalk/-/chalk-1.1.3.tgz",
"integrity": "sha1-qBFcVeSnAv5NFQq9OHKCKn4J/Jg=",
"requires": {
"ansi-styles": "^2.2.1",
@@ -2337,7 +5243,7 @@
},
"supports-color": {
"version": "2.0.0",
- "resolved": "http://registry.npmjs.org/supports-color/-/supports-color-2.0.0.tgz",
+ "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-2.0.0.tgz",
"integrity": "sha1-U10EXOa2Nj+kARcIRimZXp3zJMc="
}
}
@@ -2369,21 +5275,83 @@
}
},
"babel-loader": {
- "version": "8.0.5",
- "resolved": "https://registry.npmjs.org/babel-loader/-/babel-loader-8.0.5.tgz",
- "integrity": "sha512-NTnHnVRd2JnRqPC0vW+iOQWU5pchDbYXsG2E6DMXEpMfUcQKclF9gmf3G3ZMhzG7IG9ji4coL0cm+FxeWxDpnw==",
+ "version": "8.1.0",
+ "resolved": "https://registry.npmjs.org/babel-loader/-/babel-loader-8.1.0.tgz",
+ "integrity": "sha512-7q7nC1tYOrqvUrN3LQK4GwSk/TQorZSOlO9C+RZDZpODgyN4ZlCqE5q9cDsyWOliN+aU9B4JX01xK9eJXowJLw==",
"requires": {
- "find-cache-dir": "^2.0.0",
- "loader-utils": "^1.0.2",
- "mkdirp": "^0.5.1",
- "util.promisify": "^1.0.0"
+ "find-cache-dir": "^2.1.0",
+ "loader-utils": "^1.4.0",
+ "mkdirp": "^0.5.3",
+ "pify": "^4.0.1",
+ "schema-utils": "^2.6.5"
+ },
+ "dependencies": {
+ "emojis-list": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/emojis-list/-/emojis-list-3.0.0.tgz",
+ "integrity": "sha512-/kyM18EfinwXZbno9FyUGeFh87KC8HRQBQGildHZbEuRyWFOmv1U10o9BBp8XVZDVNNuQKyIGIu5ZYAAXJ0V2Q=="
+ },
+ "json5": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/json5/-/json5-1.0.1.tgz",
+ "integrity": "sha512-aKS4WQjPenRxiQsC93MNfjx+nbF4PAdYzmd/1JIj8HYzqfbu86beTuNgXDzPknWk0n0uARlyewZo4s++ES36Ow==",
+ "requires": {
+ "minimist": "^1.2.0"
+ }
+ },
+ "loader-utils": {
+ "version": "1.4.0",
+ "resolved": "https://registry.npmjs.org/loader-utils/-/loader-utils-1.4.0.tgz",
+ "integrity": "sha512-qH0WSMBtn/oHuwjy/NucEgbx5dbxxnxup9s4PVXJUDHZBQY+s0NWA9rJf53RBnQZxfch7euUui7hpoAPvALZdA==",
+ "requires": {
+ "big.js": "^5.2.2",
+ "emojis-list": "^3.0.0",
+ "json5": "^1.0.1"
+ }
+ },
+ "mkdirp": {
+ "version": "0.5.5",
+ "resolved": "https://registry.npmjs.org/mkdirp/-/mkdirp-0.5.5.tgz",
+ "integrity": "sha512-NKmAlESf6jMGym1++R0Ra7wvhV+wFW63FaSOFPwRahvea0gMUcGUhVeAg/0BC0wiv9ih5NYPB1Wn1UEI1/L+xQ==",
+ "requires": {
+ "minimist": "^1.2.5"
+ },
+ "dependencies": {
+ "minimist": {
+ "version": "1.2.5",
+ "resolved": "https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz",
+ "integrity": "sha512-FM9nNUYrRBAELZQT3xeZQ7fmMOBg6nWNmJKTcgsJeaLstP/UODVpGsr5OhXhhXg6f+qtJ8uiZ+PUxkDWcgIXLw=="
+ }
+ }
+ },
+ "pify": {
+ "version": "4.0.1",
+ "resolved": "https://registry.npmjs.org/pify/-/pify-4.0.1.tgz",
+ "integrity": "sha512-uB80kBFb/tfd68bVleG9T5GGsGPjJrLAUpR5PZIrhBnIaRTQRjqdJSsIKkOP6OAIFbj7GOrcudc5pNjZ+geV2g=="
+ }
}
},
"babel-plugin-add-module-exports": {
"version": "0.2.1",
- "resolved": "http://registry.npmjs.org/babel-plugin-add-module-exports/-/babel-plugin-add-module-exports-0.2.1.tgz",
+ "resolved": "https://registry.npmjs.org/babel-plugin-add-module-exports/-/babel-plugin-add-module-exports-0.2.1.tgz",
"integrity": "sha1-mumh9KjcZ/DN7E9K7aHkOl/2XiU="
},
+ "babel-plugin-apply-mdx-type-prop": {
+ "version": "1.6.5",
+ "resolved": "https://registry.npmjs.org/babel-plugin-apply-mdx-type-prop/-/babel-plugin-apply-mdx-type-prop-1.6.5.tgz",
+ "integrity": "sha512-Bs2hv/bYFTJyhBqvsWOsceFyPXAhVM1gvwF8fIm6GeXYTQV+sY+qRR5TClamgr3OEsD8ZApmw+kxJSHgJggVyw==",
+ "requires": {
+ "@babel/helper-plugin-utils": "7.8.3",
+ "@mdx-js/util": "^1.6.5"
+ },
+ "dependencies": {
+ "@babel/helper-plugin-utils": {
+ "version": "7.8.3",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.8.3.tgz",
+ "integrity": "sha512-j+fq49Xds2smCUNYmEHF9kGNkhbet6yVIBp4e6oeQpH1RUs/Ir06xUKzDjDkGcaaokPiTNs2JBWHjaE4csUkZQ=="
+ }
+ }
+ },
"babel-plugin-dynamic-import-node": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/babel-plugin-dynamic-import-node/-/babel-plugin-dynamic-import-node-1.2.0.tgz",
@@ -2392,23 +5360,104 @@
"babel-plugin-syntax-dynamic-import": "^6.18.0"
}
},
- "babel-plugin-macros": {
- "version": "2.5.0",
- "resolved": "https://registry.npmjs.org/babel-plugin-macros/-/babel-plugin-macros-2.5.0.tgz",
- "integrity": "sha512-BWw0lD0kVZAXRD3Od1kMrdmfudqzDzYv2qrN3l2ISR1HVp1EgLKfbOrYV9xmY5k3qx3RIu5uPAUZZZHpo0o5Iw==",
+ "babel-plugin-extract-import-names": {
+ "version": "1.6.5",
+ "resolved": "https://registry.npmjs.org/babel-plugin-extract-import-names/-/babel-plugin-extract-import-names-1.6.5.tgz",
+ "integrity": "sha512-rrNoCZ1DHMdy3vuihvkuO2AjE2DVFrI78e61W7eVsgpNTbG0KO1UESQwXMTlS3v1PMnlEJjdvoteRAkatEkWFQ==",
"requires": {
- "cosmiconfig": "^5.0.5",
- "resolve": "^1.8.1"
+ "@babel/helper-plugin-utils": "7.8.3"
+ },
+ "dependencies": {
+ "@babel/helper-plugin-utils": {
+ "version": "7.8.3",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.8.3.tgz",
+ "integrity": "sha512-j+fq49Xds2smCUNYmEHF9kGNkhbet6yVIBp4e6oeQpH1RUs/Ir06xUKzDjDkGcaaokPiTNs2JBWHjaE4csUkZQ=="
+ }
+ }
+ },
+ "babel-plugin-macros": {
+ "version": "2.8.0",
+ "resolved": "https://registry.npmjs.org/babel-plugin-macros/-/babel-plugin-macros-2.8.0.tgz",
+ "integrity": "sha512-SEP5kJpfGYqYKpBrj5XU3ahw5p5GOHJ0U5ssOSQ/WBVdwkD2Dzlce95exQTs3jOVWPPKLBN2rlEWkCK7dSmLvg==",
+ "requires": {
+ "@babel/runtime": "^7.7.2",
+ "cosmiconfig": "^6.0.0",
+ "resolve": "^1.12.0"
+ },
+ "dependencies": {
+ "@babel/runtime": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.10.2.tgz",
+ "integrity": "sha512-6sF3uQw2ivImfVIl62RZ7MXhO2tap69WeWK57vAaimT6AZbE4FbqjdEJIN1UqoD6wI6B+1n9UiagafH1sxjOtg==",
+ "requires": {
+ "regenerator-runtime": "^0.13.4"
+ }
+ },
+ "cosmiconfig": {
+ "version": "6.0.0",
+ "resolved": "https://registry.npmjs.org/cosmiconfig/-/cosmiconfig-6.0.0.tgz",
+ "integrity": "sha512-xb3ZL6+L8b9JLLCx3ZdoZy4+2ECphCMo2PwqgP1tlfVq6M6YReyzBJtvWWtbDSpNr9hn96pkCiZqUcFEc+54Qg==",
+ "requires": {
+ "@types/parse-json": "^4.0.0",
+ "import-fresh": "^3.1.0",
+ "parse-json": "^5.0.0",
+ "path-type": "^4.0.0",
+ "yaml": "^1.7.2"
+ }
+ },
+ "import-fresh": {
+ "version": "3.2.1",
+ "resolved": "https://registry.npmjs.org/import-fresh/-/import-fresh-3.2.1.tgz",
+ "integrity": "sha512-6e1q1cnWP2RXD9/keSkxHScg508CdXqXWgWBaETNhyuBFz+kUZlKboh+ISK+bU++DmbHimVBrOz/zzPe0sZ3sQ==",
+ "requires": {
+ "parent-module": "^1.0.0",
+ "resolve-from": "^4.0.0"
+ }
+ },
+ "parse-json": {
+ "version": "5.0.0",
+ "resolved": "https://registry.npmjs.org/parse-json/-/parse-json-5.0.0.tgz",
+ "integrity": "sha512-OOY5b7PAEFV0E2Fir1KOkxchnZNCdowAJgQ5NuxjpBKTRP3pQhwkrkxqQjeoKJ+fO7bCpmIZaogI4eZGDMEGOw==",
+ "requires": {
+ "@babel/code-frame": "^7.0.0",
+ "error-ex": "^1.3.1",
+ "json-parse-better-errors": "^1.0.1",
+ "lines-and-columns": "^1.1.6"
+ }
+ },
+ "path-type": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/path-type/-/path-type-4.0.0.tgz",
+ "integrity": "sha512-gDKb8aZMDeD/tZWs9P6+q0J9Mwkdl6xMV8TjnGP3qJVJ06bdMgkbBlLU8IdfOsIsFz2BW1rNVT3XuNEl8zPAvw=="
+ },
+ "regenerator-runtime": {
+ "version": "0.13.5",
+ "resolved": "https://registry.npmjs.org/regenerator-runtime/-/regenerator-runtime-0.13.5.tgz",
+ "integrity": "sha512-ZS5w8CpKFinUzOwW3c83oPeVXoNsrLsaCoLtJvAClH135j/R77RuymhiSErhm2lKcwSCIpmvIWSbDkIfAqKQlA=="
+ },
+ "resolve": {
+ "version": "1.17.0",
+ "resolved": "https://registry.npmjs.org/resolve/-/resolve-1.17.0.tgz",
+ "integrity": "sha512-ic+7JYiV8Vi2yzQGFWOkiZD5Z9z7O2Zhm9XMaTxdJExKasieFCr+yXZ/WmXsckHiKl12ar0y6XiXDx3m4RHn1w==",
+ "requires": {
+ "path-parse": "^1.0.6"
+ }
+ },
+ "resolve-from": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/resolve-from/-/resolve-from-4.0.0.tgz",
+ "integrity": "sha512-pb/MYmXstAkysRFx8piNI1tGFNQIFA3vkE3Gq4EuA1dF6gHp/+vgZqsCGJapvy8N3Q+4o7FwvquPJcnZ7RYy4g=="
+ }
}
},
"babel-plugin-remove-graphql-queries": {
- "version": "2.6.1",
- "resolved": "https://registry.npmjs.org/babel-plugin-remove-graphql-queries/-/babel-plugin-remove-graphql-queries-2.6.1.tgz",
- "integrity": "sha512-YbJgL7sP+QtJIUnvmVth1iwbSlea6nc0YftAlkSkMTDCPTfhleBXUdMTqrXoBwR787Q9m9PR8javDMUK/Qs6Lg=="
+ "version": "2.9.5",
+ "resolved": "https://registry.npmjs.org/babel-plugin-remove-graphql-queries/-/babel-plugin-remove-graphql-queries-2.9.5.tgz",
+ "integrity": "sha512-z0T2dMz6V8a8hC11NFDwnuT5xR0k4Vu4Zie4A5BPchQOe59uHpbaM54mMl66FUA/iLTfYC11xez1N3Wc1gV20w=="
},
"babel-plugin-syntax-dynamic-import": {
"version": "6.18.0",
- "resolved": "http://registry.npmjs.org/babel-plugin-syntax-dynamic-import/-/babel-plugin-syntax-dynamic-import-6.18.0.tgz",
+ "resolved": "https://registry.npmjs.org/babel-plugin-syntax-dynamic-import/-/babel-plugin-syntax-dynamic-import-6.18.0.tgz",
"integrity": "sha1-jWomIpyDdFqZgqRBBRVyyqF5sdo="
},
"babel-plugin-syntax-object-rest-spread": {
@@ -2430,10 +5479,15 @@
"babel-runtime": "^6.26.0"
}
},
+ "babel-plugin-transform-react-remove-prop-types": {
+ "version": "0.4.24",
+ "resolved": "https://registry.npmjs.org/babel-plugin-transform-react-remove-prop-types/-/babel-plugin-transform-react-remove-prop-types-0.4.24.tgz",
+ "integrity": "sha512-eqj0hVcJUR57/Ug2zE1Yswsw4LhuqqHhD+8v120T1cl3kjg76QwtyBrdIk4WVwK+lAhBJVYCd/v+4nc4y+8JsA=="
+ },
"babel-preset-fbjs": {
- "version": "3.2.0",
- "resolved": "https://registry.npmjs.org/babel-preset-fbjs/-/babel-preset-fbjs-3.2.0.tgz",
- "integrity": "sha512-5Jo+JeWiVz2wHUUyAlvb/sSYnXNig9r+HqGAOSfh5Fzxp7SnAaR/tEGRJ1ZX7C77kfk82658w6R5Z+uPATTD9g==",
+ "version": "3.3.0",
+ "resolved": "https://registry.npmjs.org/babel-preset-fbjs/-/babel-preset-fbjs-3.3.0.tgz",
+ "integrity": "sha512-7QTLTCd2gwB2qGoi5epSULMHugSVgpcVt5YAeiFO9ABLrutDQzKfGwzxgZHLpugq8qMdg/DhRZDZ5CLKxBkEbw==",
"requires": {
"@babel/plugin-proposal-class-properties": "^7.0.0",
"@babel/plugin-proposal-object-rest-spread": "^7.0.0",
@@ -2465,16 +5519,789 @@
}
},
"babel-preset-gatsby": {
- "version": "0.1.8",
- "resolved": "https://registry.npmjs.org/babel-preset-gatsby/-/babel-preset-gatsby-0.1.8.tgz",
- "integrity": "sha512-JtsBZ6qB9Bf+6lO9j8mYKWatAbCsTo3ycWm+g13eIbxOlvYZ0VTLy9vkbV59Ax6DBIIANZhoGtOkVJHVR/59Ug==",
+ "version": "0.2.36",
+ "resolved": "https://registry.npmjs.org/babel-preset-gatsby/-/babel-preset-gatsby-0.2.36.tgz",
+ "integrity": "sha512-vmqN6ht4B28dHlK7Qsau3JseHwTEkLjf2QkUcUKlYCuVk7skZkbN2B6O8QeJQTQ30V/6uUKiNMU/U0nc0RYMNQ==",
"requires": {
- "@babel/plugin-proposal-class-properties": "^7.0.0",
- "@babel/plugin-syntax-dynamic-import": "^7.0.0",
- "@babel/plugin-transform-runtime": "^7.0.0",
- "@babel/preset-env": "^7.0.0",
- "@babel/preset-react": "^7.0.0",
- "babel-plugin-macros": "^2.4.2"
+ "@babel/plugin-proposal-class-properties": "^7.8.3",
+ "@babel/plugin-proposal-nullish-coalescing-operator": "^7.8.3",
+ "@babel/plugin-proposal-optional-chaining": "^7.8.3",
+ "@babel/plugin-syntax-dynamic-import": "^7.8.3",
+ "@babel/plugin-transform-runtime": "^7.8.3",
+ "@babel/plugin-transform-spread": "^7.8.3",
+ "@babel/preset-env": "^7.8.7",
+ "@babel/preset-react": "^7.8.3",
+ "@babel/runtime": "^7.8.7",
+ "babel-plugin-dynamic-import-node": "^2.3.0",
+ "babel-plugin-macros": "^2.8.0",
+ "babel-plugin-transform-react-remove-prop-types": "^0.4.24",
+ "gatsby-core-utils": "^1.0.34"
+ },
+ "dependencies": {
+ "@babel/code-frame": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.10.1.tgz",
+ "integrity": "sha512-IGhtTmpjGbYzcEDOw7DcQtbQSXcG9ftmAXtWTu9V936vDye4xjjekktFAtgZsWpzTj/X01jocB46mTywm/4SZw==",
+ "requires": {
+ "@babel/highlight": "^7.10.1"
+ }
+ },
+ "@babel/generator": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.10.2.tgz",
+ "integrity": "sha512-AxfBNHNu99DTMvlUPlt1h2+Hn7knPpH5ayJ8OqDWSeLld+Fi2AYBTC/IejWDM9Edcii4UzZRCsbUt0WlSDsDsA==",
+ "requires": {
+ "@babel/types": "^7.10.2",
+ "jsesc": "^2.5.1",
+ "lodash": "^4.17.13",
+ "source-map": "^0.5.0"
+ }
+ },
+ "@babel/helper-annotate-as-pure": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-annotate-as-pure/-/helper-annotate-as-pure-7.10.1.tgz",
+ "integrity": "sha512-ewp3rvJEwLaHgyWGe4wQssC2vjks3E80WiUe2BpMb0KhreTjMROCbxXcEovTrbeGVdQct5VjQfrv9EgC+xMzCw==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-builder-binary-assignment-operator-visitor": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-builder-binary-assignment-operator-visitor/-/helper-builder-binary-assignment-operator-visitor-7.10.1.tgz",
+ "integrity": "sha512-cQpVq48EkYxUU0xozpGCLla3wlkdRRqLWu1ksFMXA9CM5KQmyyRpSEsYXbao7JUkOw/tAaYKCaYyZq6HOFYtyw==",
+ "requires": {
+ "@babel/helper-explode-assignable-expression": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-builder-react-jsx": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-builder-react-jsx/-/helper-builder-react-jsx-7.10.1.tgz",
+ "integrity": "sha512-KXzzpyWhXgzjXIlJU1ZjIXzUPdej1suE6vzqgImZ/cpAsR/CC8gUcX4EWRmDfWz/cs6HOCPMBIJ3nKoXt3BFuw==",
+ "requires": {
+ "@babel/helper-annotate-as-pure": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-define-map": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-define-map/-/helper-define-map-7.10.1.tgz",
+ "integrity": "sha512-+5odWpX+OnvkD0Zmq7panrMuAGQBu6aPUgvMzuMGo4R+jUOvealEj2hiqI6WhxgKrTpFoFj0+VdsuA8KDxHBDg==",
+ "requires": {
+ "@babel/helper-function-name": "^7.10.1",
+ "@babel/types": "^7.10.1",
+ "lodash": "^4.17.13"
+ }
+ },
+ "@babel/helper-explode-assignable-expression": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-explode-assignable-expression/-/helper-explode-assignable-expression-7.10.1.tgz",
+ "integrity": "sha512-vcUJ3cDjLjvkKzt6rHrl767FeE7pMEYfPanq5L16GRtrXIoznc0HykNW2aEYkcnP76P0isoqJ34dDMFZwzEpJg==",
+ "requires": {
+ "@babel/traverse": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-function-name": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-function-name/-/helper-function-name-7.10.1.tgz",
+ "integrity": "sha512-fcpumwhs3YyZ/ttd5Rz0xn0TpIwVkN7X0V38B9TWNfVF42KEkhkAAuPCQ3oXmtTRtiPJrmZ0TrfS0GKF0eMaRQ==",
+ "requires": {
+ "@babel/helper-get-function-arity": "^7.10.1",
+ "@babel/template": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-get-function-arity": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-get-function-arity/-/helper-get-function-arity-7.10.1.tgz",
+ "integrity": "sha512-F5qdXkYGOQUb0hpRaPoetF9AnsXknKjWMZ+wmsIRsp5ge5sFh4c3h1eH2pRTTuy9KKAA2+TTYomGXAtEL2fQEw==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-hoist-variables": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-hoist-variables/-/helper-hoist-variables-7.10.1.tgz",
+ "integrity": "sha512-vLm5srkU8rI6X3+aQ1rQJyfjvCBLXP8cAGeuw04zeAM2ItKb1e7pmVmLyHb4sDaAYnLL13RHOZPLEtcGZ5xvjg==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-member-expression-to-functions": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-member-expression-to-functions/-/helper-member-expression-to-functions-7.10.1.tgz",
+ "integrity": "sha512-u7XLXeM2n50gb6PWJ9hoO5oO7JFPaZtrh35t8RqKLT1jFKj9IWeD1zrcrYp1q1qiZTdEarfDWfTIP8nGsu0h5g==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-module-imports": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-module-imports/-/helper-module-imports-7.10.1.tgz",
+ "integrity": "sha512-SFxgwYmZ3HZPyZwJRiVNLRHWuW2OgE5k2nrVs6D9Iv4PPnXVffuEHy83Sfx/l4SqF+5kyJXjAyUmrG7tNm+qVg==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-module-transforms": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-module-transforms/-/helper-module-transforms-7.10.1.tgz",
+ "integrity": "sha512-RLHRCAzyJe7Q7sF4oy2cB+kRnU4wDZY/H2xJFGof+M+SJEGhZsb+GFj5j1AD8NiSaVBJ+Pf0/WObiXu/zxWpFg==",
+ "requires": {
+ "@babel/helper-module-imports": "^7.10.1",
+ "@babel/helper-replace-supers": "^7.10.1",
+ "@babel/helper-simple-access": "^7.10.1",
+ "@babel/helper-split-export-declaration": "^7.10.1",
+ "@babel/template": "^7.10.1",
+ "@babel/types": "^7.10.1",
+ "lodash": "^4.17.13"
+ }
+ },
+ "@babel/helper-optimise-call-expression": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-optimise-call-expression/-/helper-optimise-call-expression-7.10.1.tgz",
+ "integrity": "sha512-a0DjNS1prnBsoKx83dP2falChcs7p3i8VMzdrSbfLhuQra/2ENC4sbri34dz/rWmDADsmF1q5GbfaXydh0Jbjg==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-plugin-utils": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.1.tgz",
+ "integrity": "sha512-fvoGeXt0bJc7VMWZGCAEBEMo/HAjW2mP8apF5eXK0wSqwLAVHAISCWRoLMBMUs2kqeaG77jltVqu4Hn8Egl3nA=="
+ },
+ "@babel/helper-regex": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-regex/-/helper-regex-7.10.1.tgz",
+ "integrity": "sha512-7isHr19RsIJWWLLFn21ubFt223PjQyg1HY7CZEMRr820HttHPpVvrsIN3bUOo44DEfFV4kBXO7Abbn9KTUZV7g==",
+ "requires": {
+ "lodash": "^4.17.13"
+ }
+ },
+ "@babel/helper-remap-async-to-generator": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-remap-async-to-generator/-/helper-remap-async-to-generator-7.10.1.tgz",
+ "integrity": "sha512-RfX1P8HqsfgmJ6CwaXGKMAqbYdlleqglvVtht0HGPMSsy2V6MqLlOJVF/0Qyb/m2ZCi2z3q3+s6Pv7R/dQuZ6A==",
+ "requires": {
+ "@babel/helper-annotate-as-pure": "^7.10.1",
+ "@babel/helper-wrap-function": "^7.10.1",
+ "@babel/template": "^7.10.1",
+ "@babel/traverse": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-replace-supers": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-replace-supers/-/helper-replace-supers-7.10.1.tgz",
+ "integrity": "sha512-SOwJzEfpuQwInzzQJGjGaiG578UYmyi2Xw668klPWV5n07B73S0a9btjLk/52Mlcxa+5AdIYqws1KyXRfMoB7A==",
+ "requires": {
+ "@babel/helper-member-expression-to-functions": "^7.10.1",
+ "@babel/helper-optimise-call-expression": "^7.10.1",
+ "@babel/traverse": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-simple-access": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-simple-access/-/helper-simple-access-7.10.1.tgz",
+ "integrity": "sha512-VSWpWzRzn9VtgMJBIWTZ+GP107kZdQ4YplJlCmIrjoLVSi/0upixezHCDG8kpPVTBJpKfxTH01wDhh+jS2zKbw==",
+ "requires": {
+ "@babel/template": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-split-export-declaration": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-split-export-declaration/-/helper-split-export-declaration-7.10.1.tgz",
+ "integrity": "sha512-UQ1LVBPrYdbchNhLwj6fetj46BcFwfS4NllJo/1aJsT+1dLTEnXJL0qHqtY7gPzF8S2fXBJamf1biAXV3X077g==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-wrap-function": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-wrap-function/-/helper-wrap-function-7.10.1.tgz",
+ "integrity": "sha512-C0MzRGteVDn+H32/ZgbAv5r56f2o1fZSA/rj/TYo8JEJNHg+9BdSmKBUND0shxWRztWhjlT2cvHYuynpPsVJwQ==",
+ "requires": {
+ "@babel/helper-function-name": "^7.10.1",
+ "@babel/template": "^7.10.1",
+ "@babel/traverse": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/highlight": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/highlight/-/highlight-7.10.1.tgz",
+ "integrity": "sha512-8rMof+gVP8mxYZApLF/JgNDAkdKa+aJt3ZYxF8z6+j/hpeXL7iMsKCPHa2jNMHu/qqBwzQF4OHNoYi8dMA/rYg==",
+ "requires": {
+ "@babel/helper-validator-identifier": "^7.10.1",
+ "chalk": "^2.0.0",
+ "js-tokens": "^4.0.0"
+ }
+ },
+ "@babel/parser": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.10.2.tgz",
+ "integrity": "sha512-PApSXlNMJyB4JiGVhCOlzKIif+TKFTvu0aQAhnTvfP/z3vVSN6ZypH5bfUNwFXXjRQtUEBNFd2PtmCmG2Py3qQ=="
+ },
+ "@babel/plugin-proposal-async-generator-functions": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-proposal-async-generator-functions/-/plugin-proposal-async-generator-functions-7.10.1.tgz",
+ "integrity": "sha512-vzZE12ZTdB336POZjmpblWfNNRpMSua45EYnRigE2XsZxcXcIyly2ixnTJasJE4Zq3U7t2d8rRF7XRUuzHxbOw==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1",
+ "@babel/helper-remap-async-to-generator": "^7.10.1",
+ "@babel/plugin-syntax-async-generators": "^7.8.0"
+ }
+ },
+ "@babel/plugin-proposal-json-strings": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-proposal-json-strings/-/plugin-proposal-json-strings-7.10.1.tgz",
+ "integrity": "sha512-m8r5BmV+ZLpWPtMY2mOKN7wre6HIO4gfIiV+eOmsnZABNenrt/kzYBwrh+KOfgumSWpnlGs5F70J8afYMSJMBg==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1",
+ "@babel/plugin-syntax-json-strings": "^7.8.0"
+ }
+ },
+ "@babel/plugin-proposal-object-rest-spread": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-proposal-object-rest-spread/-/plugin-proposal-object-rest-spread-7.10.1.tgz",
+ "integrity": "sha512-Z+Qri55KiQkHh7Fc4BW6o+QBuTagbOp9txE+4U1i79u9oWlf2npkiDx+Rf3iK3lbcHBuNy9UOkwuR5wOMH3LIQ==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1",
+ "@babel/plugin-syntax-object-rest-spread": "^7.8.0",
+ "@babel/plugin-transform-parameters": "^7.10.1"
+ }
+ },
+ "@babel/plugin-proposal-optional-catch-binding": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-proposal-optional-catch-binding/-/plugin-proposal-optional-catch-binding-7.10.1.tgz",
+ "integrity": "sha512-VqExgeE62YBqI3ogkGoOJp1R6u12DFZjqwJhqtKc2o5m1YTUuUWnos7bZQFBhwkxIFpWYJ7uB75U7VAPPiKETA==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1",
+ "@babel/plugin-syntax-optional-catch-binding": "^7.8.0"
+ }
+ },
+ "@babel/plugin-proposal-unicode-property-regex": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-proposal-unicode-property-regex/-/plugin-proposal-unicode-property-regex-7.10.1.tgz",
+ "integrity": "sha512-JjfngYRvwmPwmnbRZyNiPFI8zxCZb8euzbCG/LxyKdeTb59tVciKo9GK9bi6JYKInk1H11Dq9j/zRqIH4KigfQ==",
+ "requires": {
+ "@babel/helper-create-regexp-features-plugin": "^7.10.1",
+ "@babel/helper-plugin-utils": "^7.10.1"
+ }
+ },
+ "@babel/plugin-syntax-async-generators": {
+ "version": "7.8.4",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-async-generators/-/plugin-syntax-async-generators-7.8.4.tgz",
+ "integrity": "sha512-tycmZxkGfZaxhMRbXlPXuVFpdWlXpir2W4AMhSJgRKzk/eDlIXOhb2LHWoLpDF7TEHylV5zNhykX6KAgHJmTNw==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.8.0"
+ }
+ },
+ "@babel/plugin-syntax-json-strings": {
+ "version": "7.8.3",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-json-strings/-/plugin-syntax-json-strings-7.8.3.tgz",
+ "integrity": "sha512-lY6kdGpWHvjoe2vk4WrAapEuBR69EMxZl+RoGRhrFGNYVK8mOPAW8VfbT/ZgrFbXlDNiiaxQnAtgVCZ6jv30EA==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.8.0"
+ }
+ },
+ "@babel/plugin-syntax-jsx": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-jsx/-/plugin-syntax-jsx-7.10.1.tgz",
+ "integrity": "sha512-+OxyOArpVFXQeXKLO9o+r2I4dIoVoy6+Uu0vKELrlweDM3QJADZj+Z+5ERansZqIZBcLj42vHnDI8Rz9BnRIuQ==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1"
+ }
+ },
+ "@babel/plugin-syntax-object-rest-spread": {
+ "version": "7.8.3",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-object-rest-spread/-/plugin-syntax-object-rest-spread-7.8.3.tgz",
+ "integrity": "sha512-XoqMijGZb9y3y2XskN+P1wUGiVwWZ5JmoDRwx5+3GmEplNyVM2s2Dg8ILFQm8rWM48orGy5YpI5Bl8U1y7ydlA==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.8.0"
+ }
+ },
+ "@babel/plugin-syntax-optional-catch-binding": {
+ "version": "7.8.3",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-optional-catch-binding/-/plugin-syntax-optional-catch-binding-7.8.3.tgz",
+ "integrity": "sha512-6VPD0Pc1lpTqw0aKoeRTMiB+kWhAoT24PA+ksWSBrFtl5SIRVpZlwN3NNPQjehA2E/91FV3RjLWoVTglWcSV3Q==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.8.0"
+ }
+ },
+ "@babel/plugin-transform-arrow-functions": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-arrow-functions/-/plugin-transform-arrow-functions-7.10.1.tgz",
+ "integrity": "sha512-6AZHgFJKP3DJX0eCNJj01RpytUa3SOGawIxweHkNX2L6PYikOZmoh5B0d7hIHaIgveMjX990IAa/xK7jRTN8OA==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1"
+ }
+ },
+ "@babel/plugin-transform-async-to-generator": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-async-to-generator/-/plugin-transform-async-to-generator-7.10.1.tgz",
+ "integrity": "sha512-XCgYjJ8TY2slj6SReBUyamJn3k2JLUIiiR5b6t1mNCMSvv7yx+jJpaewakikp0uWFQSF7ChPPoe3dHmXLpISkg==",
+ "requires": {
+ "@babel/helper-module-imports": "^7.10.1",
+ "@babel/helper-plugin-utils": "^7.10.1",
+ "@babel/helper-remap-async-to-generator": "^7.10.1"
+ }
+ },
+ "@babel/plugin-transform-block-scoped-functions": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-block-scoped-functions/-/plugin-transform-block-scoped-functions-7.10.1.tgz",
+ "integrity": "sha512-B7K15Xp8lv0sOJrdVAoukKlxP9N59HS48V1J3U/JGj+Ad+MHq+am6xJVs85AgXrQn4LV8vaYFOB+pr/yIuzW8Q==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1"
+ }
+ },
+ "@babel/plugin-transform-block-scoping": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-block-scoping/-/plugin-transform-block-scoping-7.10.1.tgz",
+ "integrity": "sha512-8bpWG6TtF5akdhIm/uWTyjHqENpy13Fx8chg7pFH875aNLwX8JxIxqm08gmAT+Whe6AOmaTeLPe7dpLbXt+xUw==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1",
+ "lodash": "^4.17.13"
+ }
+ },
+ "@babel/plugin-transform-classes": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-classes/-/plugin-transform-classes-7.10.1.tgz",
+ "integrity": "sha512-P9V0YIh+ln/B3RStPoXpEQ/CoAxQIhRSUn7aXqQ+FZJ2u8+oCtjIXR3+X0vsSD8zv+mb56K7wZW1XiDTDGiDRQ==",
+ "requires": {
+ "@babel/helper-annotate-as-pure": "^7.10.1",
+ "@babel/helper-define-map": "^7.10.1",
+ "@babel/helper-function-name": "^7.10.1",
+ "@babel/helper-optimise-call-expression": "^7.10.1",
+ "@babel/helper-plugin-utils": "^7.10.1",
+ "@babel/helper-replace-supers": "^7.10.1",
+ "@babel/helper-split-export-declaration": "^7.10.1",
+ "globals": "^11.1.0"
+ }
+ },
+ "@babel/plugin-transform-computed-properties": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-computed-properties/-/plugin-transform-computed-properties-7.10.1.tgz",
+ "integrity": "sha512-mqSrGjp3IefMsXIenBfGcPXxJxweQe2hEIwMQvjtiDQ9b1IBvDUjkAtV/HMXX47/vXf14qDNedXsIiNd1FmkaQ==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1"
+ }
+ },
+ "@babel/plugin-transform-destructuring": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-destructuring/-/plugin-transform-destructuring-7.10.1.tgz",
+ "integrity": "sha512-V/nUc4yGWG71OhaTH705pU8ZSdM6c1KmmLP8ys59oOYbT7RpMYAR3MsVOt6OHL0WzG7BlTU076va9fjJyYzJMA==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1"
+ }
+ },
+ "@babel/plugin-transform-dotall-regex": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-dotall-regex/-/plugin-transform-dotall-regex-7.10.1.tgz",
+ "integrity": "sha512-19VIMsD1dp02RvduFUmfzj8uknaO3uiHHF0s3E1OHnVsNj8oge8EQ5RzHRbJjGSetRnkEuBYO7TG1M5kKjGLOA==",
+ "requires": {
+ "@babel/helper-create-regexp-features-plugin": "^7.10.1",
+ "@babel/helper-plugin-utils": "^7.10.1"
+ }
+ },
+ "@babel/plugin-transform-duplicate-keys": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-duplicate-keys/-/plugin-transform-duplicate-keys-7.10.1.tgz",
+ "integrity": "sha512-wIEpkX4QvX8Mo9W6XF3EdGttrIPZWozHfEaDTU0WJD/TDnXMvdDh30mzUl/9qWhnf7naicYartcEfUghTCSNpA==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1"
+ }
+ },
+ "@babel/plugin-transform-exponentiation-operator": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-exponentiation-operator/-/plugin-transform-exponentiation-operator-7.10.1.tgz",
+ "integrity": "sha512-lr/przdAbpEA2BUzRvjXdEDLrArGRRPwbaF9rvayuHRvdQ7lUTTkZnhZrJ4LE2jvgMRFF4f0YuPQ20vhiPYxtA==",
+ "requires": {
+ "@babel/helper-builder-binary-assignment-operator-visitor": "^7.10.1",
+ "@babel/helper-plugin-utils": "^7.10.1"
+ }
+ },
+ "@babel/plugin-transform-for-of": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-for-of/-/plugin-transform-for-of-7.10.1.tgz",
+ "integrity": "sha512-US8KCuxfQcn0LwSCMWMma8M2R5mAjJGsmoCBVwlMygvmDUMkTCykc84IqN1M7t+agSfOmLYTInLCHJM+RUoz+w==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1"
+ }
+ },
+ "@babel/plugin-transform-function-name": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-function-name/-/plugin-transform-function-name-7.10.1.tgz",
+ "integrity": "sha512-//bsKsKFBJfGd65qSNNh1exBy5Y9gD9ZN+DvrJ8f7HXr4avE5POW6zB7Rj6VnqHV33+0vXWUwJT0wSHubiAQkw==",
+ "requires": {
+ "@babel/helper-function-name": "^7.10.1",
+ "@babel/helper-plugin-utils": "^7.10.1"
+ }
+ },
+ "@babel/plugin-transform-literals": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-literals/-/plugin-transform-literals-7.10.1.tgz",
+ "integrity": "sha512-qi0+5qgevz1NHLZroObRm5A+8JJtibb7vdcPQF1KQE12+Y/xxl8coJ+TpPW9iRq+Mhw/NKLjm+5SHtAHCC7lAw==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1"
+ }
+ },
+ "@babel/plugin-transform-modules-amd": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-modules-amd/-/plugin-transform-modules-amd-7.10.1.tgz",
+ "integrity": "sha512-31+hnWSFRI4/ACFr1qkboBbrTxoBIzj7qA69qlq8HY8p7+YCzkCT6/TvQ1a4B0z27VeWtAeJd6pr5G04dc1iHw==",
+ "requires": {
+ "@babel/helper-module-transforms": "^7.10.1",
+ "@babel/helper-plugin-utils": "^7.10.1",
+ "babel-plugin-dynamic-import-node": "^2.3.3"
+ }
+ },
+ "@babel/plugin-transform-modules-commonjs": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-modules-commonjs/-/plugin-transform-modules-commonjs-7.10.1.tgz",
+ "integrity": "sha512-AQG4fc3KOah0vdITwt7Gi6hD9BtQP/8bhem7OjbaMoRNCH5Djx42O2vYMfau7QnAzQCa+RJnhJBmFFMGpQEzrg==",
+ "requires": {
+ "@babel/helper-module-transforms": "^7.10.1",
+ "@babel/helper-plugin-utils": "^7.10.1",
+ "@babel/helper-simple-access": "^7.10.1",
+ "babel-plugin-dynamic-import-node": "^2.3.3"
+ }
+ },
+ "@babel/plugin-transform-modules-systemjs": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-modules-systemjs/-/plugin-transform-modules-systemjs-7.10.1.tgz",
+ "integrity": "sha512-ewNKcj1TQZDL3YnO85qh9zo1YF1CHgmSTlRQgHqe63oTrMI85cthKtZjAiZSsSNjPQ5NCaYo5QkbYqEw1ZBgZA==",
+ "requires": {
+ "@babel/helper-hoist-variables": "^7.10.1",
+ "@babel/helper-module-transforms": "^7.10.1",
+ "@babel/helper-plugin-utils": "^7.10.1",
+ "babel-plugin-dynamic-import-node": "^2.3.3"
+ }
+ },
+ "@babel/plugin-transform-modules-umd": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-modules-umd/-/plugin-transform-modules-umd-7.10.1.tgz",
+ "integrity": "sha512-EIuiRNMd6GB6ulcYlETnYYfgv4AxqrswghmBRQbWLHZxN4s7mupxzglnHqk9ZiUpDI4eRWewedJJNj67PWOXKA==",
+ "requires": {
+ "@babel/helper-module-transforms": "^7.10.1",
+ "@babel/helper-plugin-utils": "^7.10.1"
+ }
+ },
+ "@babel/plugin-transform-new-target": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-new-target/-/plugin-transform-new-target-7.10.1.tgz",
+ "integrity": "sha512-MBlzPc1nJvbmO9rPr1fQwXOM2iGut+JC92ku6PbiJMMK7SnQc1rytgpopveE3Evn47gzvGYeCdgfCDbZo0ecUw==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1"
+ }
+ },
+ "@babel/plugin-transform-object-super": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-object-super/-/plugin-transform-object-super-7.10.1.tgz",
+ "integrity": "sha512-WnnStUDN5GL+wGQrJylrnnVlFhFmeArINIR9gjhSeYyvroGhBrSAXYg/RHsnfzmsa+onJrTJrEClPzgNmmQ4Gw==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1",
+ "@babel/helper-replace-supers": "^7.10.1"
+ }
+ },
+ "@babel/plugin-transform-parameters": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-parameters/-/plugin-transform-parameters-7.10.1.tgz",
+ "integrity": "sha512-tJ1T0n6g4dXMsL45YsSzzSDZCxiHXAQp/qHrucOq5gEHncTA3xDxnd5+sZcoQp+N1ZbieAaB8r/VUCG0gqseOg==",
+ "requires": {
+ "@babel/helper-get-function-arity": "^7.10.1",
+ "@babel/helper-plugin-utils": "^7.10.1"
+ }
+ },
+ "@babel/plugin-transform-react-display-name": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-react-display-name/-/plugin-transform-react-display-name-7.10.1.tgz",
+ "integrity": "sha512-rBjKcVwjk26H3VX8pavMxGf33LNlbocMHdSeldIEswtQ/hrjyTG8fKKILW1cSkODyRovckN/uZlGb2+sAV9JUQ==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1"
+ }
+ },
+ "@babel/plugin-transform-react-jsx": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-react-jsx/-/plugin-transform-react-jsx-7.10.1.tgz",
+ "integrity": "sha512-MBVworWiSRBap3Vs39eHt+6pJuLUAaK4oxGc8g+wY+vuSJvLiEQjW1LSTqKb8OUPtDvHCkdPhk7d6sjC19xyFw==",
+ "requires": {
+ "@babel/helper-builder-react-jsx": "^7.10.1",
+ "@babel/helper-builder-react-jsx-experimental": "^7.10.1",
+ "@babel/helper-plugin-utils": "^7.10.1",
+ "@babel/plugin-syntax-jsx": "^7.10.1"
+ }
+ },
+ "@babel/plugin-transform-react-jsx-self": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-react-jsx-self/-/plugin-transform-react-jsx-self-7.10.1.tgz",
+ "integrity": "sha512-4p+RBw9d1qV4S749J42ZooeQaBomFPrSxa9JONLHJ1TxCBo3TzJ79vtmG2S2erUT8PDDrPdw4ZbXGr2/1+dILA==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1",
+ "@babel/plugin-syntax-jsx": "^7.10.1"
+ }
+ },
+ "@babel/plugin-transform-react-jsx-source": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-react-jsx-source/-/plugin-transform-react-jsx-source-7.10.1.tgz",
+ "integrity": "sha512-neAbaKkoiL+LXYbGDvh6PjPG+YeA67OsZlE78u50xbWh2L1/C81uHiNP5d1fw+uqUIoiNdCC8ZB+G4Zh3hShJA==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1",
+ "@babel/plugin-syntax-jsx": "^7.10.1"
+ }
+ },
+ "@babel/plugin-transform-regenerator": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-regenerator/-/plugin-transform-regenerator-7.10.1.tgz",
+ "integrity": "sha512-B3+Y2prScgJ2Bh/2l9LJxKbb8C8kRfsG4AdPT+n7ixBHIxJaIG8bi8tgjxUMege1+WqSJ+7gu1YeoMVO3gPWzw==",
+ "requires": {
+ "regenerator-transform": "^0.14.2"
+ }
+ },
+ "@babel/plugin-transform-shorthand-properties": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-shorthand-properties/-/plugin-transform-shorthand-properties-7.10.1.tgz",
+ "integrity": "sha512-AR0E/lZMfLstScFwztApGeyTHJ5u3JUKMjneqRItWeEqDdHWZwAOKycvQNCasCK/3r5YXsuNG25funcJDu7Y2g==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1"
+ }
+ },
+ "@babel/plugin-transform-spread": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-spread/-/plugin-transform-spread-7.10.1.tgz",
+ "integrity": "sha512-8wTPym6edIrClW8FI2IoaePB91ETOtg36dOkj3bYcNe7aDMN2FXEoUa+WrmPc4xa1u2PQK46fUX2aCb+zo9rfw==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1"
+ }
+ },
+ "@babel/plugin-transform-sticky-regex": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-sticky-regex/-/plugin-transform-sticky-regex-7.10.1.tgz",
+ "integrity": "sha512-j17ojftKjrL7ufX8ajKvwRilwqTok4q+BjkknmQw9VNHnItTyMP5anPFzxFJdCQs7clLcWpCV3ma+6qZWLnGMA==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1",
+ "@babel/helper-regex": "^7.10.1"
+ }
+ },
+ "@babel/plugin-transform-template-literals": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-template-literals/-/plugin-transform-template-literals-7.10.1.tgz",
+ "integrity": "sha512-t7B/3MQf5M1T9hPCRG28DNGZUuxAuDqLYS03rJrIk2prj/UV7Z6FOneijhQhnv/Xa039vidXeVbvjK2SK5f7Gg==",
+ "requires": {
+ "@babel/helper-annotate-as-pure": "^7.10.1",
+ "@babel/helper-plugin-utils": "^7.10.1"
+ }
+ },
+ "@babel/plugin-transform-typeof-symbol": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-typeof-symbol/-/plugin-transform-typeof-symbol-7.10.1.tgz",
+ "integrity": "sha512-qX8KZcmbvA23zDi+lk9s6hC1FM7jgLHYIjuLgULgc8QtYnmB3tAVIYkNoKRQ75qWBeyzcoMoK8ZQmogGtC/w0g==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1"
+ }
+ },
+ "@babel/plugin-transform-unicode-regex": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-unicode-regex/-/plugin-transform-unicode-regex-7.10.1.tgz",
+ "integrity": "sha512-Y/2a2W299k0VIUdbqYm9X2qS6fE0CUBhhiPpimK6byy7OJ/kORLlIX+J6UrjgNu5awvs62k+6RSslxhcvVw2Tw==",
+ "requires": {
+ "@babel/helper-create-regexp-features-plugin": "^7.10.1",
+ "@babel/helper-plugin-utils": "^7.10.1"
+ }
+ },
+ "@babel/preset-env": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/preset-env/-/preset-env-7.10.2.tgz",
+ "integrity": "sha512-MjqhX0RZaEgK/KueRzh+3yPSk30oqDKJ5HP5tqTSB1e2gzGS3PLy7K0BIpnp78+0anFuSwOeuCf1zZO7RzRvEA==",
+ "requires": {
+ "@babel/compat-data": "^7.10.1",
+ "@babel/helper-compilation-targets": "^7.10.2",
+ "@babel/helper-module-imports": "^7.10.1",
+ "@babel/helper-plugin-utils": "^7.10.1",
+ "@babel/plugin-proposal-async-generator-functions": "^7.10.1",
+ "@babel/plugin-proposal-class-properties": "^7.10.1",
+ "@babel/plugin-proposal-dynamic-import": "^7.10.1",
+ "@babel/plugin-proposal-json-strings": "^7.10.1",
+ "@babel/plugin-proposal-nullish-coalescing-operator": "^7.10.1",
+ "@babel/plugin-proposal-numeric-separator": "^7.10.1",
+ "@babel/plugin-proposal-object-rest-spread": "^7.10.1",
+ "@babel/plugin-proposal-optional-catch-binding": "^7.10.1",
+ "@babel/plugin-proposal-optional-chaining": "^7.10.1",
+ "@babel/plugin-proposal-private-methods": "^7.10.1",
+ "@babel/plugin-proposal-unicode-property-regex": "^7.10.1",
+ "@babel/plugin-syntax-async-generators": "^7.8.0",
+ "@babel/plugin-syntax-class-properties": "^7.10.1",
+ "@babel/plugin-syntax-dynamic-import": "^7.8.0",
+ "@babel/plugin-syntax-json-strings": "^7.8.0",
+ "@babel/plugin-syntax-nullish-coalescing-operator": "^7.8.0",
+ "@babel/plugin-syntax-numeric-separator": "^7.10.1",
+ "@babel/plugin-syntax-object-rest-spread": "^7.8.0",
+ "@babel/plugin-syntax-optional-catch-binding": "^7.8.0",
+ "@babel/plugin-syntax-optional-chaining": "^7.8.0",
+ "@babel/plugin-syntax-top-level-await": "^7.10.1",
+ "@babel/plugin-transform-arrow-functions": "^7.10.1",
+ "@babel/plugin-transform-async-to-generator": "^7.10.1",
+ "@babel/plugin-transform-block-scoped-functions": "^7.10.1",
+ "@babel/plugin-transform-block-scoping": "^7.10.1",
+ "@babel/plugin-transform-classes": "^7.10.1",
+ "@babel/plugin-transform-computed-properties": "^7.10.1",
+ "@babel/plugin-transform-destructuring": "^7.10.1",
+ "@babel/plugin-transform-dotall-regex": "^7.10.1",
+ "@babel/plugin-transform-duplicate-keys": "^7.10.1",
+ "@babel/plugin-transform-exponentiation-operator": "^7.10.1",
+ "@babel/plugin-transform-for-of": "^7.10.1",
+ "@babel/plugin-transform-function-name": "^7.10.1",
+ "@babel/plugin-transform-literals": "^7.10.1",
+ "@babel/plugin-transform-member-expression-literals": "^7.10.1",
+ "@babel/plugin-transform-modules-amd": "^7.10.1",
+ "@babel/plugin-transform-modules-commonjs": "^7.10.1",
+ "@babel/plugin-transform-modules-systemjs": "^7.10.1",
+ "@babel/plugin-transform-modules-umd": "^7.10.1",
+ "@babel/plugin-transform-named-capturing-groups-regex": "^7.8.3",
+ "@babel/plugin-transform-new-target": "^7.10.1",
+ "@babel/plugin-transform-object-super": "^7.10.1",
+ "@babel/plugin-transform-parameters": "^7.10.1",
+ "@babel/plugin-transform-property-literals": "^7.10.1",
+ "@babel/plugin-transform-regenerator": "^7.10.1",
+ "@babel/plugin-transform-reserved-words": "^7.10.1",
+ "@babel/plugin-transform-shorthand-properties": "^7.10.1",
+ "@babel/plugin-transform-spread": "^7.10.1",
+ "@babel/plugin-transform-sticky-regex": "^7.10.1",
+ "@babel/plugin-transform-template-literals": "^7.10.1",
+ "@babel/plugin-transform-typeof-symbol": "^7.10.1",
+ "@babel/plugin-transform-unicode-escapes": "^7.10.1",
+ "@babel/plugin-transform-unicode-regex": "^7.10.1",
+ "@babel/preset-modules": "^0.1.3",
+ "@babel/types": "^7.10.2",
+ "browserslist": "^4.12.0",
+ "core-js-compat": "^3.6.2",
+ "invariant": "^2.2.2",
+ "levenary": "^1.1.1",
+ "semver": "^5.5.0"
+ }
+ },
+ "@babel/preset-react": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/preset-react/-/preset-react-7.10.1.tgz",
+ "integrity": "sha512-Rw0SxQ7VKhObmFjD/cUcKhPTtzpeviEFX1E6PgP+cYOhQ98icNqtINNFANlsdbQHrmeWnqdxA4Tmnl1jy5tp3Q==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1",
+ "@babel/plugin-transform-react-display-name": "^7.10.1",
+ "@babel/plugin-transform-react-jsx": "^7.10.1",
+ "@babel/plugin-transform-react-jsx-development": "^7.10.1",
+ "@babel/plugin-transform-react-jsx-self": "^7.10.1",
+ "@babel/plugin-transform-react-jsx-source": "^7.10.1",
+ "@babel/plugin-transform-react-pure-annotations": "^7.10.1"
+ }
+ },
+ "@babel/runtime": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.10.2.tgz",
+ "integrity": "sha512-6sF3uQw2ivImfVIl62RZ7MXhO2tap69WeWK57vAaimT6AZbE4FbqjdEJIN1UqoD6wI6B+1n9UiagafH1sxjOtg==",
+ "requires": {
+ "regenerator-runtime": "^0.13.4"
+ }
+ },
+ "@babel/template": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/template/-/template-7.10.1.tgz",
+ "integrity": "sha512-OQDg6SqvFSsc9A0ej6SKINWrpJiNonRIniYondK2ViKhB06i3c0s+76XUft71iqBEe9S1OKsHwPAjfHnuvnCig==",
+ "requires": {
+ "@babel/code-frame": "^7.10.1",
+ "@babel/parser": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/traverse": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.10.1.tgz",
+ "integrity": "sha512-C/cTuXeKt85K+p08jN6vMDz8vSV0vZcI0wmQ36o6mjbuo++kPMdpOYw23W2XH04dbRt9/nMEfA4W3eR21CD+TQ==",
+ "requires": {
+ "@babel/code-frame": "^7.10.1",
+ "@babel/generator": "^7.10.1",
+ "@babel/helper-function-name": "^7.10.1",
+ "@babel/helper-split-export-declaration": "^7.10.1",
+ "@babel/parser": "^7.10.1",
+ "@babel/types": "^7.10.1",
+ "debug": "^4.1.0",
+ "globals": "^11.1.0",
+ "lodash": "^4.17.13"
+ }
+ },
+ "@babel/types": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.10.2.tgz",
+ "integrity": "sha512-AD3AwWBSz0AWF0AkCN9VPiWrvldXq+/e3cHa4J89vo4ymjz1XwrBFFVZmkJTsQIPNk+ZVomPSXUJqq8yyjZsng==",
+ "requires": {
+ "@babel/helper-validator-identifier": "^7.10.1",
+ "lodash": "^4.17.13",
+ "to-fast-properties": "^2.0.0"
+ }
+ },
+ "babel-plugin-dynamic-import-node": {
+ "version": "2.3.3",
+ "resolved": "https://registry.npmjs.org/babel-plugin-dynamic-import-node/-/babel-plugin-dynamic-import-node-2.3.3.tgz",
+ "integrity": "sha512-jZVI+s9Zg3IqA/kdi0i6UDCybUI3aSBLnglhYbSSjKlV7yF1F/5LWv8MakQmvYpnbJDS6fcBL2KzHSxNCMtWSQ==",
+ "requires": {
+ "object.assign": "^4.1.0"
+ }
+ },
+ "browserslist": {
+ "version": "4.12.0",
+ "resolved": "https://registry.npmjs.org/browserslist/-/browserslist-4.12.0.tgz",
+ "integrity": "sha512-UH2GkcEDSI0k/lRkuDSzFl9ZZ87skSy9w2XAn1MsZnL+4c4rqbBd3e82UWHbYDpztABrPBhZsTEeuxVfHppqDg==",
+ "requires": {
+ "caniuse-lite": "^1.0.30001043",
+ "electron-to-chromium": "^1.3.413",
+ "node-releases": "^1.1.53",
+ "pkg-up": "^2.0.0"
+ }
+ },
+ "caniuse-lite": {
+ "version": "1.0.30001084",
+ "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001084.tgz",
+ "integrity": "sha512-ftdc5oGmhEbLUuMZ/Qp3mOpzfZLCxPYKcvGv6v2dJJ+8EdqcvZRbAGOiLmkM/PV1QGta/uwBs8/nCl6sokDW6w=="
+ },
+ "debug": {
+ "version": "4.1.1",
+ "resolved": "https://registry.npmjs.org/debug/-/debug-4.1.1.tgz",
+ "integrity": "sha512-pYAIzeRo8J6KPEaJ0VWOh5Pzkbw/RetuzehGM7QRRX5he4fPHx2rdKMB256ehJCkX+XRQm16eZLqLNS8RSZXZw==",
+ "requires": {
+ "ms": "^2.1.1"
+ }
+ },
+ "electron-to-chromium": {
+ "version": "1.3.474",
+ "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.3.474.tgz",
+ "integrity": "sha512-fPkSgT9IBKmVJz02XioNsIpg0WYmkPrvU1lUJblMMJALxyE7/32NGvbJQKKxpNokozPvqfqkuUqVClYsvetcLw=="
+ },
+ "lodash": {
+ "version": "4.17.15",
+ "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz",
+ "integrity": "sha512-8xOcRHvCjnocdS5cpwXQXVzmmh5e5+saE2QGoeQmbKmRS6J3VQppPOIt0MnmE+4xlZoumy0GPG0D0MVIQbNA1A=="
+ },
+ "node-releases": {
+ "version": "1.1.58",
+ "resolved": "https://registry.npmjs.org/node-releases/-/node-releases-1.1.58.tgz",
+ "integrity": "sha512-NxBudgVKiRh/2aPWMgPR7bPTX0VPmGx5QBwCtdHitnqFE5/O8DeBXuIMH1nwNnw/aMo6AjOrpsHzfY3UbUJ7yg=="
+ },
+ "regenerator-runtime": {
+ "version": "0.13.5",
+ "resolved": "https://registry.npmjs.org/regenerator-runtime/-/regenerator-runtime-0.13.5.tgz",
+ "integrity": "sha512-ZS5w8CpKFinUzOwW3c83oPeVXoNsrLsaCoLtJvAClH135j/R77RuymhiSErhm2lKcwSCIpmvIWSbDkIfAqKQlA=="
+ },
+ "regenerator-transform": {
+ "version": "0.14.4",
+ "resolved": "https://registry.npmjs.org/regenerator-transform/-/regenerator-transform-0.14.4.tgz",
+ "integrity": "sha512-EaJaKPBI9GvKpvUz2mz4fhx7WPgvwRLY9v3hlNHWmAuJHI13T4nwKnNvm5RWJzEdnI5g5UwtOww+S8IdoUC2bw==",
+ "requires": {
+ "@babel/runtime": "^7.8.4",
+ "private": "^0.1.8"
+ }
+ }
}
},
"babel-runtime": {
@@ -2574,9 +6401,9 @@
"integrity": "sha512-ccav/yGvoa80BQDljCxsmmQ3Xvx60/UpBIij5QN21W3wBi/hhIC9OoO+KLpu9IJTS9j4DRVJ3aDDF9cMSoa2lw=="
},
"base64id": {
- "version": "1.0.0",
- "resolved": "https://registry.npmjs.org/base64id/-/base64id-1.0.0.tgz",
- "integrity": "sha1-R2iMuZu2gE8OBtPnY7HDLlfY5rY="
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/base64id/-/base64id-2.0.0.tgz",
+ "integrity": "sha512-lGe34o6EHj9y3Kts9R4ZYs/Gr+6N7MCaMlIFA3F1R2O5/m7K06AxfSeO5530PEERE6/WyEg3lsuyw4GHlPZHog=="
},
"batch": {
"version": "0.6.1",
@@ -2875,27 +6702,32 @@
"integrity": "sha1-ZBE+nHzxICs3btYHvzBibr5XsYo="
},
"bn.js": {
- "version": "4.11.8",
- "resolved": "https://registry.npmjs.org/bn.js/-/bn.js-4.11.8.tgz",
- "integrity": "sha512-ItfYfPLkWHUjckQCk8xC+LwxgK8NYcXywGigJgSwOP8Y2iyWT4f2vsZnoOXTTbo+o5yXmIUJ4gn5538SO5S3gA=="
+ "version": "5.1.2",
+ "resolved": "https://registry.npmjs.org/bn.js/-/bn.js-5.1.2.tgz",
+ "integrity": "sha512-40rZaf3bUNKTVYu9sIeeEGOg7g14Yvnj9kH7b50EiwX0Q7A6umbvfI5tvHaOERH0XigqKkfLkFQxzb4e6CIXnA=="
},
"body-parser": {
- "version": "1.18.3",
- "resolved": "https://registry.npmjs.org/body-parser/-/body-parser-1.18.3.tgz",
- "integrity": "sha1-WykhmP/dVTs6DyDe0FkrlWlVyLQ=",
+ "version": "1.19.0",
+ "resolved": "https://registry.npmjs.org/body-parser/-/body-parser-1.19.0.tgz",
+ "integrity": "sha512-dhEPs72UPbDnAQJ9ZKMNTP6ptJaionhP5cBb541nXPlW60Jepo9RV/a4fX4XWW9CuFNK22krhrj1+rgzifNCsw==",
"requires": {
- "bytes": "3.0.0",
+ "bytes": "3.1.0",
"content-type": "~1.0.4",
"debug": "2.6.9",
"depd": "~1.1.2",
- "http-errors": "~1.6.3",
- "iconv-lite": "0.4.23",
+ "http-errors": "1.7.2",
+ "iconv-lite": "0.4.24",
"on-finished": "~2.3.0",
- "qs": "6.5.2",
- "raw-body": "2.3.3",
- "type-is": "~1.6.16"
+ "qs": "6.7.0",
+ "raw-body": "2.4.0",
+ "type-is": "~1.6.17"
},
"dependencies": {
+ "bytes": {
+ "version": "3.1.0",
+ "resolved": "https://registry.npmjs.org/bytes/-/bytes-3.1.0.tgz",
+ "integrity": "sha512-zauLjrfCG+xvoyaqLoV8bLVXXNGC4JqlxFCutSDWA6fJrTo2ZuvLYTqZ7aHBLZSMOopbzwv8f+wZcVzfVTI2Dg=="
+ },
"debug": {
"version": "2.6.9",
"resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz",
@@ -2904,18 +6736,15 @@
"ms": "2.0.0"
}
},
- "iconv-lite": {
- "version": "0.4.23",
- "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.4.23.tgz",
- "integrity": "sha512-neyTUVFtahjf0mB3dZT77u+8O0QB89jFdnBkd5P1JgYPbPaia3gXXOVL2fq8VyU2gMMD7SaN7QukTB/pmXYvDA==",
- "requires": {
- "safer-buffer": ">= 2.1.2 < 3"
- }
- },
"ms": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
"integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g="
+ },
+ "qs": {
+ "version": "6.7.0",
+ "resolved": "https://registry.npmjs.org/qs/-/qs-6.7.0.tgz",
+ "integrity": "sha512-VCdBRNFTX1fyE7Nb6FYoURo/SPe62QCaAyzJvUjwRaIsc+NePBEniHlvxFmmX56+HZphIGtV0XeCirBtpDrTyQ=="
}
}
},
@@ -2945,17 +6774,102 @@
"integrity": "sha1-aN/1++YMUes3cl6p4+0xDcwed24="
},
"boxen": {
- "version": "1.3.0",
- "resolved": "https://registry.npmjs.org/boxen/-/boxen-1.3.0.tgz",
- "integrity": "sha512-TNPjfTr432qx7yOjQyaXm3dSR0MH9vXp7eT1BFSl/C51g+EFnOR9hTg1IreahGBmDNCehscshe45f+C1TBZbLw==",
+ "version": "4.2.0",
+ "resolved": "https://registry.npmjs.org/boxen/-/boxen-4.2.0.tgz",
+ "integrity": "sha512-eB4uT9RGzg2odpER62bBwSLvUeGC+WbRjjyyFhGsKnc8wp/m0+hQsMUvUe3H2V0D5vw0nBdO1hCJoZo5mKeuIQ==",
"requires": {
- "ansi-align": "^2.0.0",
- "camelcase": "^4.0.0",
- "chalk": "^2.0.1",
- "cli-boxes": "^1.0.0",
- "string-width": "^2.0.0",
- "term-size": "^1.2.0",
- "widest-line": "^2.0.0"
+ "ansi-align": "^3.0.0",
+ "camelcase": "^5.3.1",
+ "chalk": "^3.0.0",
+ "cli-boxes": "^2.2.0",
+ "string-width": "^4.1.0",
+ "term-size": "^2.1.0",
+ "type-fest": "^0.8.1",
+ "widest-line": "^3.1.0"
+ },
+ "dependencies": {
+ "ansi-regex": {
+ "version": "5.0.0",
+ "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz",
+ "integrity": "sha512-bY6fj56OUQ0hU1KjFNDQuJFezqKdrAyFdIevADiqrWHwSlbmBNMHp5ak2f40Pm8JTFyM2mqxkG6ngkHO11f/lg=="
+ },
+ "ansi-styles": {
+ "version": "4.2.1",
+ "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.2.1.tgz",
+ "integrity": "sha512-9VGjrMsG1vePxcSweQsN20KY/c4zN0h9fLjqAbwbPfahM3t+NL+M9HC8xeXG2I8pX5NoamTGNuomEUFI7fcUjA==",
+ "requires": {
+ "@types/color-name": "^1.1.1",
+ "color-convert": "^2.0.1"
+ }
+ },
+ "camelcase": {
+ "version": "5.3.1",
+ "resolved": "https://registry.npmjs.org/camelcase/-/camelcase-5.3.1.tgz",
+ "integrity": "sha512-L28STB170nwWS63UjtlEOE3dldQApaJXZkOI1uMFfzf3rRuPegHaHesyee+YxQ+W6SvRDQV6UrdOdRiR153wJg=="
+ },
+ "chalk": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/chalk/-/chalk-3.0.0.tgz",
+ "integrity": "sha512-4D3B6Wf41KOYRFdszmDqMCGq5VV/uMAB273JILmO+3jAlh8X4qDtdtgCR3fxtbLEMzSx22QdhnDcJvu2u1fVwg==",
+ "requires": {
+ "ansi-styles": "^4.1.0",
+ "supports-color": "^7.1.0"
+ }
+ },
+ "color-convert": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz",
+ "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==",
+ "requires": {
+ "color-name": "~1.1.4"
+ }
+ },
+ "color-name": {
+ "version": "1.1.4",
+ "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz",
+ "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA=="
+ },
+ "emoji-regex": {
+ "version": "8.0.0",
+ "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz",
+ "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A=="
+ },
+ "has-flag": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz",
+ "integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ=="
+ },
+ "is-fullwidth-code-point": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-3.0.0.tgz",
+ "integrity": "sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg=="
+ },
+ "string-width": {
+ "version": "4.2.0",
+ "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.0.tgz",
+ "integrity": "sha512-zUz5JD+tgqtuDjMhwIg5uFVV3dtqZ9yQJlZVfq4I01/K5Paj5UHj7VyrQOJvzawSVlKpObApbfD0Ed6yJc+1eg==",
+ "requires": {
+ "emoji-regex": "^8.0.0",
+ "is-fullwidth-code-point": "^3.0.0",
+ "strip-ansi": "^6.0.0"
+ }
+ },
+ "strip-ansi": {
+ "version": "6.0.0",
+ "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.0.tgz",
+ "integrity": "sha512-AuvKTrTfQNYNIctbR1K/YGTR1756GycPsg7b9bdV9Duqur4gv6aKqHXah67Z8ImS7WEz5QVcOtlfW2rZEugt6w==",
+ "requires": {
+ "ansi-regex": "^5.0.0"
+ }
+ },
+ "supports-color": {
+ "version": "7.1.0",
+ "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.1.0.tgz",
+ "integrity": "sha512-oRSIpR8pxT1Wr2FquTNnGet79b3BWljqOuoW/h4oBhxJ/HUbX5nX6JSruTkvXDCFMwDPvsaTTbvMLKZWSy0R5g==",
+ "requires": {
+ "has-flag": "^4.0.0"
+ }
+ }
}
},
"brace-expansion": {
@@ -3010,7 +6924,7 @@
},
"browserify-aes": {
"version": "1.2.0",
- "resolved": "http://registry.npmjs.org/browserify-aes/-/browserify-aes-1.2.0.tgz",
+ "resolved": "https://registry.npmjs.org/browserify-aes/-/browserify-aes-1.2.0.tgz",
"integrity": "sha512-+7CHXqGuspUn/Sl5aO7Ea0xWGAtETPXNSAjHo48JfLdPWcMng33Xe4znFvQweqc/uzk5zSOI3H52CYnjCfb5hA==",
"requires": {
"buffer-xor": "^1.0.3",
@@ -3044,25 +6958,56 @@
},
"browserify-rsa": {
"version": "4.0.1",
- "resolved": "http://registry.npmjs.org/browserify-rsa/-/browserify-rsa-4.0.1.tgz",
+ "resolved": "https://registry.npmjs.org/browserify-rsa/-/browserify-rsa-4.0.1.tgz",
"integrity": "sha1-IeCr+vbyApzy+vsTNWenAdQTVSQ=",
"requires": {
"bn.js": "^4.1.0",
"randombytes": "^2.0.1"
+ },
+ "dependencies": {
+ "bn.js": {
+ "version": "4.11.9",
+ "resolved": "https://registry.npmjs.org/bn.js/-/bn.js-4.11.9.tgz",
+ "integrity": "sha512-E6QoYqCKZfgatHTdHzs1RRKP7ip4vvm+EyRUeE2RF0NblwVvb0p6jSVeNTOFxPn26QXN2o6SMfNxKp6kU8zQaw=="
+ }
}
},
"browserify-sign": {
- "version": "4.0.4",
- "resolved": "https://registry.npmjs.org/browserify-sign/-/browserify-sign-4.0.4.tgz",
- "integrity": "sha1-qk62jl17ZYuqa/alfmMMvXqT0pg=",
+ "version": "4.2.0",
+ "resolved": "https://registry.npmjs.org/browserify-sign/-/browserify-sign-4.2.0.tgz",
+ "integrity": "sha512-hEZC1KEeYuoHRqhGhTy6gWrpJA3ZDjFWv0DE61643ZnOXAKJb3u7yWcrU0mMc9SwAqK1n7myPGndkp0dFG7NFA==",
"requires": {
- "bn.js": "^4.1.1",
- "browserify-rsa": "^4.0.0",
- "create-hash": "^1.1.0",
- "create-hmac": "^1.1.2",
- "elliptic": "^6.0.0",
- "inherits": "^2.0.1",
- "parse-asn1": "^5.0.0"
+ "bn.js": "^5.1.1",
+ "browserify-rsa": "^4.0.1",
+ "create-hash": "^1.2.0",
+ "create-hmac": "^1.1.7",
+ "elliptic": "^6.5.2",
+ "inherits": "^2.0.4",
+ "parse-asn1": "^5.1.5",
+ "readable-stream": "^3.6.0",
+ "safe-buffer": "^5.2.0"
+ },
+ "dependencies": {
+ "inherits": {
+ "version": "2.0.4",
+ "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.4.tgz",
+ "integrity": "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ=="
+ },
+ "readable-stream": {
+ "version": "3.6.0",
+ "resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-3.6.0.tgz",
+ "integrity": "sha512-BViHy7LKeTz4oNnkcLJ+lVSL6vpiFeX6/d3oSH8zCW7UxP2onchk+vTGB143xuFjHS3deTgkKoXXymXqymiIdA==",
+ "requires": {
+ "inherits": "^2.0.3",
+ "string_decoder": "^1.1.1",
+ "util-deprecate": "^1.0.1"
+ }
+ },
+ "safe-buffer": {
+ "version": "5.2.1",
+ "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz",
+ "integrity": "sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ=="
+ }
}
},
"browserify-zlib": {
@@ -3083,17 +7028,87 @@
}
},
"bser": {
- "version": "2.0.0",
- "resolved": "https://registry.npmjs.org/bser/-/bser-2.0.0.tgz",
- "integrity": "sha1-mseNPtXZFYBP2HrLFYvHlxR6Fxk=",
+ "version": "2.1.1",
+ "resolved": "https://registry.npmjs.org/bser/-/bser-2.1.1.tgz",
+ "integrity": "sha512-gQxTNE/GAfIIrmHLUE3oJyp5FO6HRBfhjnw4/wMmA63ZGDJnWBmgY/lyQBpnDUkGmAhbSe39tx2d/iTOAfglwQ==",
"requires": {
"node-int64": "^0.4.0"
}
},
+ "buble-jsx-only": {
+ "version": "0.19.8",
+ "resolved": "https://registry.npmjs.org/buble-jsx-only/-/buble-jsx-only-0.19.8.tgz",
+ "integrity": "sha512-7AW19pf7PrKFnGTEDzs6u9+JZqQwM1VnLS19OlqYDhXomtFFknnoQJAPHeg84RMFWAvOhYrG7harizJNwUKJsA==",
+ "requires": {
+ "acorn": "^6.1.1",
+ "acorn-dynamic-import": "^4.0.0",
+ "acorn-jsx": "^5.0.1",
+ "chalk": "^2.4.2",
+ "magic-string": "^0.25.3",
+ "minimist": "^1.2.0",
+ "regexpu-core": "^4.5.4"
+ },
+ "dependencies": {
+ "chalk": {
+ "version": "2.4.2",
+ "resolved": "https://registry.npmjs.org/chalk/-/chalk-2.4.2.tgz",
+ "integrity": "sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ==",
+ "requires": {
+ "ansi-styles": "^3.2.1",
+ "escape-string-regexp": "^1.0.5",
+ "supports-color": "^5.3.0"
+ }
+ },
+ "jsesc": {
+ "version": "0.5.0",
+ "resolved": "https://registry.npmjs.org/jsesc/-/jsesc-0.5.0.tgz",
+ "integrity": "sha1-597mbjXW/Bb3EP6R1c9p9w8IkR0="
+ },
+ "regenerate-unicode-properties": {
+ "version": "8.2.0",
+ "resolved": "https://registry.npmjs.org/regenerate-unicode-properties/-/regenerate-unicode-properties-8.2.0.tgz",
+ "integrity": "sha512-F9DjY1vKLo/tPePDycuH3dn9H1OTPIkVD9Kz4LODu+F2C75mgjAJ7x/gwy6ZcSNRAAkhNlJSOHRe8k3p+K9WhA==",
+ "requires": {
+ "regenerate": "^1.4.0"
+ }
+ },
+ "regexpu-core": {
+ "version": "4.7.0",
+ "resolved": "https://registry.npmjs.org/regexpu-core/-/regexpu-core-4.7.0.tgz",
+ "integrity": "sha512-TQ4KXRnIn6tz6tjnrXEkD/sshygKH/j5KzK86X8MkeHyZ8qst/LZ89j3X4/8HEIfHANTFIP/AbXakeRhWIl5YQ==",
+ "requires": {
+ "regenerate": "^1.4.0",
+ "regenerate-unicode-properties": "^8.2.0",
+ "regjsgen": "^0.5.1",
+ "regjsparser": "^0.6.4",
+ "unicode-match-property-ecmascript": "^1.0.4",
+ "unicode-match-property-value-ecmascript": "^1.2.0"
+ }
+ },
+ "regjsgen": {
+ "version": "0.5.2",
+ "resolved": "https://registry.npmjs.org/regjsgen/-/regjsgen-0.5.2.tgz",
+ "integrity": "sha512-OFFT3MfrH90xIW8OOSyUrk6QHD5E9JOTeGodiJeBS3J6IwlgzJMNE/1bZklWz5oTg+9dCMyEetclvCVXOPoN3A=="
+ },
+ "regjsparser": {
+ "version": "0.6.4",
+ "resolved": "https://registry.npmjs.org/regjsparser/-/regjsparser-0.6.4.tgz",
+ "integrity": "sha512-64O87/dPDgfk8/RQqC4gkZoGyyWFIEUTTh80CU6CWuK5vkCGyekIx+oKcEIYtP/RAxSQltCZHCNu/mdd7fqlJw==",
+ "requires": {
+ "jsesc": "~0.5.0"
+ }
+ },
+ "unicode-match-property-value-ecmascript": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/unicode-match-property-value-ecmascript/-/unicode-match-property-value-ecmascript-1.2.0.tgz",
+ "integrity": "sha512-wjuQHGQVofmSJv1uVISKLE5zO2rNGzM/KCYZch/QQvez7C1hUhBIuZ701fYXExuufJFMPhv2SyL8CyoIfMLbIQ=="
+ }
+ }
+ },
"buffer": {
- "version": "4.9.1",
- "resolved": "http://registry.npmjs.org/buffer/-/buffer-4.9.1.tgz",
- "integrity": "sha1-bRu2AbB6TvztlwlBMgkwJ8lbwpg=",
+ "version": "4.9.2",
+ "resolved": "https://registry.npmjs.org/buffer/-/buffer-4.9.2.tgz",
+ "integrity": "sha512-xq+q3SRMOxGivLhBNaUdC64hDTQwejJ+H0T/NB1XMtTVEwNTrfFF3gAxiyW0Bu/xWEGhjVKgUcMhCrUy2+uCWg==",
"requires": {
"base64-js": "^1.0.2",
"ieee754": "^1.1.4",
@@ -3160,26 +7175,45 @@
"integrity": "sha1-0ygVQE1olpn4Wk6k+odV3ROpYEg="
},
"cacache": {
- "version": "11.3.2",
- "resolved": "https://registry.npmjs.org/cacache/-/cacache-11.3.2.tgz",
- "integrity": "sha512-E0zP4EPGDOaT2chM08Als91eYnf8Z+eH1awwwVsngUmgppfM5jjJ8l3z5vO5p5w/I3LsiXawb1sW0VY65pQABg==",
+ "version": "12.0.4",
+ "resolved": "https://registry.npmjs.org/cacache/-/cacache-12.0.4.tgz",
+ "integrity": "sha512-a0tMB40oefvuInr4Cwb3GerbL9xTj1D5yg0T5xrjGCGyfvbxseIXX7BAO/u/hIXdafzOI5JC3wDwHyf24buOAQ==",
"requires": {
- "bluebird": "^3.5.3",
+ "bluebird": "^3.5.5",
"chownr": "^1.1.1",
"figgy-pudding": "^3.5.1",
- "glob": "^7.1.3",
+ "glob": "^7.1.4",
"graceful-fs": "^4.1.15",
+ "infer-owner": "^1.0.3",
"lru-cache": "^5.1.1",
"mississippi": "^3.0.0",
"mkdirp": "^0.5.1",
"move-concurrently": "^1.0.1",
"promise-inflight": "^1.0.1",
- "rimraf": "^2.6.2",
+ "rimraf": "^2.6.3",
"ssri": "^6.0.1",
"unique-filename": "^1.1.1",
"y18n": "^4.0.0"
},
"dependencies": {
+ "bluebird": {
+ "version": "3.7.2",
+ "resolved": "https://registry.npmjs.org/bluebird/-/bluebird-3.7.2.tgz",
+ "integrity": "sha512-XpNj6GDQzdfW+r2Wnn7xiSAd7TM3jzkxGXBGTtWKuSXv1xUV+azxAm8jdWZN06QTQk+2N2XB9jRDkvbmQmcRtg=="
+ },
+ "glob": {
+ "version": "7.1.6",
+ "resolved": "https://registry.npmjs.org/glob/-/glob-7.1.6.tgz",
+ "integrity": "sha512-LwaxwyZ72Lk7vZINtNNrywX0ZuLyStrdDtabefZKAY5ZGJhVtgdznluResxNmPitE0SAO+O26sWTHeKSI2wMBA==",
+ "requires": {
+ "fs.realpath": "^1.0.0",
+ "inflight": "^1.0.4",
+ "inherits": "2",
+ "minimatch": "^3.0.4",
+ "once": "^1.3.0",
+ "path-is-absolute": "^1.0.0"
+ }
+ },
"lru-cache": {
"version": "5.1.1",
"resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-5.1.1.tgz",
@@ -3188,15 +7222,23 @@
"yallist": "^3.0.2"
}
},
+ "rimraf": {
+ "version": "2.7.1",
+ "resolved": "https://registry.npmjs.org/rimraf/-/rimraf-2.7.1.tgz",
+ "integrity": "sha512-uWjbaKIK3T1OSVptzX7Nl6PvQ3qAGtKEtVRjRuazjfL3Bx5eI409VZSqgND+4UNnmzLVdPj9FqFJNPqBZFve4w==",
+ "requires": {
+ "glob": "^7.1.3"
+ }
+ },
"y18n": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz",
"integrity": "sha512-r9S/ZyXu/Xu9q1tYlpsLIsa3EeLXXk0VwlxqTcFRfg9EhMW+17kbt9G0NrgCmhGb5vT2hyhJZLfDGx+7+5Uj/w=="
},
"yallist": {
- "version": "3.0.3",
- "resolved": "https://registry.npmjs.org/yallist/-/yallist-3.0.3.tgz",
- "integrity": "sha512-S+Zk8DEWE6oKpV+vI3qWkaK+jSbIK86pCwe2IF/xwIpQ8jEuxpw9NyaGjmp9+BoJv5FV2piqCDcoCtStppiq2A=="
+ "version": "3.1.1",
+ "resolved": "https://registry.npmjs.org/yallist/-/yallist-3.1.1.tgz",
+ "integrity": "sha512-a4UGQaWPH59mOXUYnAG2ewncQS4i4F43Tv3JoAM+s2VDAmS9NsK8GpDMLrCHPksFT7h3K6TOoUNn2pb7RoXx4g=="
}
}
},
@@ -3217,11 +7259,12 @@
}
},
"cache-manager": {
- "version": "2.9.0",
- "resolved": "https://registry.npmjs.org/cache-manager/-/cache-manager-2.9.0.tgz",
- "integrity": "sha1-Xh9jF8oaJeQN3zZacWJ1evFSNT4=",
+ "version": "2.11.1",
+ "resolved": "https://registry.npmjs.org/cache-manager/-/cache-manager-2.11.1.tgz",
+ "integrity": "sha512-XhUuc9eYwkzpK89iNewFwtvcDYMUsvtwzHeyEOPJna/WsVsXcrzsA1ft2M0QqPNunEzLhNCYPo05tEfG+YuNow==",
"requires": {
"async": "1.5.2",
+ "lodash.clonedeep": "4.5.0",
"lru-cache": "4.0.0"
}
},
@@ -3323,6 +7366,11 @@
"resolved": "https://registry.npmjs.org/camelcase/-/camelcase-4.1.0.tgz",
"integrity": "sha1-1UVjW+HjPFQmScaRc+Xeas+uNN0="
},
+ "camelcase-css": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/camelcase-css/-/camelcase-css-2.0.1.tgz",
+ "integrity": "sha512-QOSvevhslijgYwRx6Rv7zKdMF8lbRmx+uQGx2+vDc+KI/eBnsy9kit5aj23AgGu3pa4t9AgwbnXWqS+iOY+2aA=="
+ },
"camelcase-keys": {
"version": "2.1.0",
"resolved": "http://registry.npmjs.org/camelcase-keys/-/camelcase-keys-2.1.0.tgz",
@@ -3351,34 +7399,32 @@
},
"dependencies": {
"browserslist": {
- "version": "4.4.2",
- "resolved": "https://registry.npmjs.org/browserslist/-/browserslist-4.4.2.tgz",
- "integrity": "sha512-ISS/AIAiHERJ3d45Fz0AVYKkgcy+F/eJHzKEvv1j0wwKGKD9T3BrwKr/5g45L+Y4XIK5PlTqefHciRFcfE1Jxg==",
+ "version": "4.12.0",
+ "resolved": "https://registry.npmjs.org/browserslist/-/browserslist-4.12.0.tgz",
+ "integrity": "sha512-UH2GkcEDSI0k/lRkuDSzFl9ZZ87skSy9w2XAn1MsZnL+4c4rqbBd3e82UWHbYDpztABrPBhZsTEeuxVfHppqDg==",
"requires": {
- "caniuse-lite": "^1.0.30000939",
- "electron-to-chromium": "^1.3.113",
- "node-releases": "^1.1.8"
+ "caniuse-lite": "^1.0.30001043",
+ "electron-to-chromium": "^1.3.413",
+ "node-releases": "^1.1.53",
+ "pkg-up": "^2.0.0"
},
"dependencies": {
"caniuse-lite": {
- "version": "1.0.30000939",
- "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30000939.tgz",
- "integrity": "sha512-oXB23ImDJOgQpGjRv1tCtzAvJr4/OvrHi5SO2vUgB0g0xpdZZoA/BxfImiWfdwoYdUTtQrPsXsvYU/dmCSM8gg=="
+ "version": "1.0.30001084",
+ "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001084.tgz",
+ "integrity": "sha512-ftdc5oGmhEbLUuMZ/Qp3mOpzfZLCxPYKcvGv6v2dJJ+8EdqcvZRbAGOiLmkM/PV1QGta/uwBs8/nCl6sokDW6w=="
}
}
},
"electron-to-chromium": {
- "version": "1.3.113",
- "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.3.113.tgz",
- "integrity": "sha512-De+lPAxEcpxvqPTyZAXELNpRZXABRxf+uL/rSykstQhzj/B0l1150G/ExIIxKc16lI89Hgz81J0BHAcbTqK49g=="
+ "version": "1.3.474",
+ "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.3.474.tgz",
+ "integrity": "sha512-fPkSgT9IBKmVJz02XioNsIpg0WYmkPrvU1lUJblMMJALxyE7/32NGvbJQKKxpNokozPvqfqkuUqVClYsvetcLw=="
},
"node-releases": {
- "version": "1.1.8",
- "resolved": "https://registry.npmjs.org/node-releases/-/node-releases-1.1.8.tgz",
- "integrity": "sha512-gQm+K9mGCiT/NXHy+V/ZZS1N/LOaGGqRAAJJs3X9Ah1g+CIbRcBgNyoNYQ+SEtcyAtB9KqDruu+fF7nWjsqRaA==",
- "requires": {
- "semver": "^5.3.0"
- }
+ "version": "1.1.58",
+ "resolved": "https://registry.npmjs.org/node-releases/-/node-releases-1.1.58.tgz",
+ "integrity": "sha512-NxBudgVKiRh/2aPWMgPR7bPTX0VPmGx5QBwCtdHitnqFE5/O8DeBXuIMH1nwNnw/aMo6AjOrpsHzfY3UbUJ7yg=="
}
}
},
@@ -3387,11 +7433,6 @@
"resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30000923.tgz",
"integrity": "sha512-j5ur7eeluOFjjPUkydtXP4KFAsmH3XaQNch5tvWSO+dLHYt5PE+VgJZLWtbVOodfWij6m6zas28T4gB/cLYq1w=="
},
- "capture-stack-trace": {
- "version": "1.0.1",
- "resolved": "https://registry.npmjs.org/capture-stack-trace/-/capture-stack-trace-1.0.1.tgz",
- "integrity": "sha512-mYQLZnx5Qt1JgB1WEiMCf2647plpGeQ2NMR/5L0HNZzGQo4fuSPnK+wjfPnKZV0aiJDgzmWqqkV/g7JD+DW0qw=="
- },
"caseless": {
"version": "0.12.0",
"resolved": "https://registry.npmjs.org/caseless/-/caseless-0.12.0.tgz",
@@ -3570,7 +7611,8 @@
},
"ansi-regex": {
"version": "2.1.1",
- "bundled": true
+ "bundled": true,
+ "optional": true
},
"aproba": {
"version": "1.2.0",
@@ -3588,11 +7630,13 @@
},
"balanced-match": {
"version": "1.0.0",
- "bundled": true
+ "bundled": true,
+ "optional": true
},
"brace-expansion": {
"version": "1.1.11",
"bundled": true,
+ "optional": true,
"requires": {
"balanced-match": "^1.0.0",
"concat-map": "0.0.1"
@@ -3605,15 +7649,18 @@
},
"code-point-at": {
"version": "1.1.0",
- "bundled": true
+ "bundled": true,
+ "optional": true
},
"concat-map": {
"version": "0.0.1",
- "bundled": true
+ "bundled": true,
+ "optional": true
},
"console-control-strings": {
"version": "1.1.0",
- "bundled": true
+ "bundled": true,
+ "optional": true
},
"core-util-is": {
"version": "1.0.2",
@@ -3716,7 +7763,8 @@
},
"inherits": {
"version": "2.0.3",
- "bundled": true
+ "bundled": true,
+ "optional": true
},
"ini": {
"version": "1.3.5",
@@ -3726,6 +7774,7 @@
"is-fullwidth-code-point": {
"version": "1.0.0",
"bundled": true,
+ "optional": true,
"requires": {
"number-is-nan": "^1.0.0"
}
@@ -3738,17 +7787,20 @@
"minimatch": {
"version": "3.0.4",
"bundled": true,
+ "optional": true,
"requires": {
"brace-expansion": "^1.1.7"
}
},
"minimist": {
"version": "0.0.8",
- "bundled": true
+ "bundled": true,
+ "optional": true
},
"minipass": {
"version": "2.3.5",
"bundled": true,
+ "optional": true,
"requires": {
"safe-buffer": "^5.1.2",
"yallist": "^3.0.0"
@@ -3765,6 +7817,7 @@
"mkdirp": {
"version": "0.5.1",
"bundled": true,
+ "optional": true,
"requires": {
"minimist": "0.0.8"
}
@@ -3837,7 +7890,8 @@
},
"number-is-nan": {
"version": "1.0.1",
- "bundled": true
+ "bundled": true,
+ "optional": true
},
"object-assign": {
"version": "4.1.1",
@@ -3847,6 +7901,7 @@
"once": {
"version": "1.4.0",
"bundled": true,
+ "optional": true,
"requires": {
"wrappy": "1"
}
@@ -3922,7 +7977,8 @@
},
"safe-buffer": {
"version": "5.1.2",
- "bundled": true
+ "bundled": true,
+ "optional": true
},
"safer-buffer": {
"version": "2.1.2",
@@ -3952,6 +8008,7 @@
"string-width": {
"version": "1.0.2",
"bundled": true,
+ "optional": true,
"requires": {
"code-point-at": "^1.0.0",
"is-fullwidth-code-point": "^1.0.0",
@@ -3969,6 +8026,7 @@
"strip-ansi": {
"version": "3.0.1",
"bundled": true,
+ "optional": true,
"requires": {
"ansi-regex": "^2.0.0"
}
@@ -4007,11 +8065,13 @@
},
"wrappy": {
"version": "1.0.2",
- "bundled": true
+ "bundled": true,
+ "optional": true
},
"yallist": {
"version": "3.0.3",
- "bundled": true
+ "bundled": true,
+ "optional": true
}
}
},
@@ -4028,17 +8088,17 @@
"integrity": "sha512-j38EvO5+LHX84jlo6h4UzmOwi0UgW61WRyPtJz4qaadK5eY3BTS5TY/S1Stc3Uk2lIM6TPevAlULiEJwie860g=="
},
"chrome-trace-event": {
- "version": "1.0.0",
- "resolved": "https://registry.npmjs.org/chrome-trace-event/-/chrome-trace-event-1.0.0.tgz",
- "integrity": "sha512-xDbVgyfDTT2piup/h8dK/y4QZfJRSa73bw1WZ8b4XM1o7fsFubUVGYcE+1ANtOzJJELGpYoG2961z0Z6OAld9A==",
+ "version": "1.0.2",
+ "resolved": "https://registry.npmjs.org/chrome-trace-event/-/chrome-trace-event-1.0.2.tgz",
+ "integrity": "sha512-9e/zx1jw7B4CO+c/RXoCsfg/x1AfUBioy4owYH0bJprEYAx5hRFLRhWBqHAG57D0ZM4H7vxbP7bPe0VwhQRYDQ==",
"requires": {
"tslib": "^1.9.0"
}
},
"ci-info": {
- "version": "1.6.0",
- "resolved": "https://registry.npmjs.org/ci-info/-/ci-info-1.6.0.tgz",
- "integrity": "sha512-vsGdkwSCDpWmP80ncATX7iea5DWQemg1UgCW5J8tqjU3lYw4FBYuj89J0CTVomA7BEfvSZd84GmHko+MxFQU2A=="
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/ci-info/-/ci-info-2.0.0.tgz",
+ "integrity": "sha512-5tK7EtrZ0N+OLFMthtqOj4fI2Jeb88C4CAZPu25LDVUgXJ0A3Js4PMGqrn0JU1W0Mh1/Z8wZzYPxqUrXeBboCQ=="
},
"cipher-base": {
"version": "1.0.4",
@@ -4107,10 +8167,15 @@
"resolved": "https://registry.npmjs.org/classnames/-/classnames-2.2.6.tgz",
"integrity": "sha512-JR/iSQOSt+LQIWwrwEzJ9uk0xfN3mTVYMwt1Ir5mUcSN6pU+V4zQFFaJsclJbPuAUQH+yfWef6tm7l1quW3C8Q=="
},
+ "clean-stack": {
+ "version": "2.2.0",
+ "resolved": "https://registry.npmjs.org/clean-stack/-/clean-stack-2.2.0.tgz",
+ "integrity": "sha512-4diC9HaTE+KRAMWhDhrGOECgWZxoevMc5TlkObMqNSsVU62PYzXZ/SMTjzyGAFF1YusgxGcSWTEXBhp0CPwQ1A=="
+ },
"cli-boxes": {
- "version": "1.0.0",
- "resolved": "https://registry.npmjs.org/cli-boxes/-/cli-boxes-1.0.0.tgz",
- "integrity": "sha1-T6kXw+WclKAEzWH47lCdplFocUM="
+ "version": "2.2.0",
+ "resolved": "https://registry.npmjs.org/cli-boxes/-/cli-boxes-2.2.0.tgz",
+ "integrity": "sha512-gpaBrMAizVEANOpfZp/EEUixTXDyGt7DFzdK5hU+UbWt/J0lB0w20ncZj59Z9a93xHb9u12zF5BS6i9RKbtg4w=="
},
"cli-cursor": {
"version": "2.1.0",
@@ -4120,6 +8185,11 @@
"restore-cursor": "^2.0.0"
}
},
+ "cli-spinners": {
+ "version": "1.3.1",
+ "resolved": "https://registry.npmjs.org/cli-spinners/-/cli-spinners-1.3.1.tgz",
+ "integrity": "sha512-1QL4544moEsDVH9T/l6Cemov/37iv1RtoKf7NJ04A60+4MREXNfx/QvavbH6QoGdsD4N4Mwy49cmaINR/o2mdg=="
+ },
"cli-table3": {
"version": "0.5.1",
"resolved": "https://registry.npmjs.org/cli-table3/-/cli-table3-0.5.1.tgz",
@@ -4130,10 +8200,91 @@
"string-width": "^2.1.1"
}
},
+ "cli-truncate": {
+ "version": "2.1.0",
+ "resolved": "https://registry.npmjs.org/cli-truncate/-/cli-truncate-2.1.0.tgz",
+ "integrity": "sha512-n8fOixwDD6b/ObinzTrp1ZKFzbgvKZvuz/TvejnLn1aQfC6r52XEx85FmuC+3HI+JM7coBRXUvNqEU2PHVrHpg==",
+ "requires": {
+ "slice-ansi": "^3.0.0",
+ "string-width": "^4.2.0"
+ },
+ "dependencies": {
+ "ansi-regex": {
+ "version": "5.0.0",
+ "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz",
+ "integrity": "sha512-bY6fj56OUQ0hU1KjFNDQuJFezqKdrAyFdIevADiqrWHwSlbmBNMHp5ak2f40Pm8JTFyM2mqxkG6ngkHO11f/lg=="
+ },
+ "ansi-styles": {
+ "version": "4.2.1",
+ "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.2.1.tgz",
+ "integrity": "sha512-9VGjrMsG1vePxcSweQsN20KY/c4zN0h9fLjqAbwbPfahM3t+NL+M9HC8xeXG2I8pX5NoamTGNuomEUFI7fcUjA==",
+ "requires": {
+ "@types/color-name": "^1.1.1",
+ "color-convert": "^2.0.1"
+ }
+ },
+ "astral-regex": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/astral-regex/-/astral-regex-2.0.0.tgz",
+ "integrity": "sha512-Z7tMw1ytTXt5jqMcOP+OQteU1VuNK9Y02uuJtKQ1Sv69jXQKKg5cibLwGJow8yzZP+eAc18EmLGPal0bp36rvQ=="
+ },
+ "color-convert": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz",
+ "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==",
+ "requires": {
+ "color-name": "~1.1.4"
+ }
+ },
+ "color-name": {
+ "version": "1.1.4",
+ "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz",
+ "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA=="
+ },
+ "emoji-regex": {
+ "version": "8.0.0",
+ "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz",
+ "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A=="
+ },
+ "is-fullwidth-code-point": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-3.0.0.tgz",
+ "integrity": "sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg=="
+ },
+ "slice-ansi": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/slice-ansi/-/slice-ansi-3.0.0.tgz",
+ "integrity": "sha512-pSyv7bSTC7ig9Dcgbw9AuRNUb5k5V6oDudjZoMBSr13qpLBG7tB+zgCkARjq7xIUgdz5P1Qe8u+rSGdouOOIyQ==",
+ "requires": {
+ "ansi-styles": "^4.0.0",
+ "astral-regex": "^2.0.0",
+ "is-fullwidth-code-point": "^3.0.0"
+ }
+ },
+ "string-width": {
+ "version": "4.2.0",
+ "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.0.tgz",
+ "integrity": "sha512-zUz5JD+tgqtuDjMhwIg5uFVV3dtqZ9yQJlZVfq4I01/K5Paj5UHj7VyrQOJvzawSVlKpObApbfD0Ed6yJc+1eg==",
+ "requires": {
+ "emoji-regex": "^8.0.0",
+ "is-fullwidth-code-point": "^3.0.0",
+ "strip-ansi": "^6.0.0"
+ }
+ },
+ "strip-ansi": {
+ "version": "6.0.0",
+ "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.0.tgz",
+ "integrity": "sha512-AuvKTrTfQNYNIctbR1K/YGTR1756GycPsg7b9bdV9Duqur4gv6aKqHXah67Z8ImS7WEz5QVcOtlfW2rZEugt6w==",
+ "requires": {
+ "ansi-regex": "^5.0.0"
+ }
+ }
+ }
+ },
"cli-width": {
- "version": "2.2.0",
- "resolved": "https://registry.npmjs.org/cli-width/-/cli-width-2.2.0.tgz",
- "integrity": "sha1-/xnt6Kml5XkyQUewwR8PvLq+1jk="
+ "version": "2.2.1",
+ "resolved": "https://registry.npmjs.org/cli-width/-/cli-width-2.2.1.tgz",
+ "integrity": "sha512-GRMWDxpOB6Dgk2E5Uo+3eEBvtOOlimMmpbFiKuLFnQzYDavtLFY3K5ona41jgN/WdRZtG7utuVSVTL4HbZHGkw=="
},
"clipboard": {
"version": "2.0.4",
@@ -4146,6 +8297,60 @@
"tiny-emitter": "^2.0.0"
}
},
+ "clipboardy": {
+ "version": "2.3.0",
+ "resolved": "https://registry.npmjs.org/clipboardy/-/clipboardy-2.3.0.tgz",
+ "integrity": "sha512-mKhiIL2DrQIsuXMgBgnfEHOZOryC7kY7YO//TN6c63wlEm3NG5tz+YgY5rVi29KCmq/QQjKYvM7a19+MDOTHOQ==",
+ "requires": {
+ "arch": "^2.1.1",
+ "execa": "^1.0.0",
+ "is-wsl": "^2.1.1"
+ },
+ "dependencies": {
+ "cross-spawn": {
+ "version": "6.0.5",
+ "resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-6.0.5.tgz",
+ "integrity": "sha512-eTVLrBSt7fjbDygz805pMnstIs2VTBNkRm0qxZd+M7A5XDdxVRWO5MxGBXZhjY4cqLYLdtrGqRf8mBPmzwSpWQ==",
+ "requires": {
+ "nice-try": "^1.0.4",
+ "path-key": "^2.0.1",
+ "semver": "^5.5.0",
+ "shebang-command": "^1.2.0",
+ "which": "^1.2.9"
+ }
+ },
+ "execa": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/execa/-/execa-1.0.0.tgz",
+ "integrity": "sha512-adbxcyWV46qiHyvSp50TKt05tB4tK3HcmF7/nxfAdhnox83seTDbwnaqKO4sXRy7roHAIFqJP/Rw/AuEbX61LA==",
+ "requires": {
+ "cross-spawn": "^6.0.0",
+ "get-stream": "^4.0.0",
+ "is-stream": "^1.1.0",
+ "npm-run-path": "^2.0.0",
+ "p-finally": "^1.0.0",
+ "signal-exit": "^3.0.0",
+ "strip-eof": "^1.0.0"
+ }
+ },
+ "get-stream": {
+ "version": "4.1.0",
+ "resolved": "https://registry.npmjs.org/get-stream/-/get-stream-4.1.0.tgz",
+ "integrity": "sha512-GMat4EJ5161kIy2HevLlr4luNjBgvmj413KaQA7jt4V8B4RDsfpHk7WQ9GVqfYyyx8OS/L66Kox+rJRNklLK7w==",
+ "requires": {
+ "pump": "^3.0.0"
+ }
+ },
+ "is-wsl": {
+ "version": "2.2.0",
+ "resolved": "https://registry.npmjs.org/is-wsl/-/is-wsl-2.2.0.tgz",
+ "integrity": "sha512-fKzAra0rGJUUBwGBgNkHZuToZcn+TtXHpeCgmkMJMMYx1sQDYaCSyjJBSCa2nH1DGm7s3n1oBnohoVTBaN7Lww==",
+ "requires": {
+ "is-docker": "^2.0.0"
+ }
+ }
+ }
+ },
"cliui": {
"version": "3.2.0",
"resolved": "https://registry.npmjs.org/cliui/-/cliui-3.2.0.tgz",
@@ -4276,9 +8481,9 @@
}
},
"colors": {
- "version": "1.3.3",
- "resolved": "https://registry.npmjs.org/colors/-/colors-1.3.3.tgz",
- "integrity": "sha512-mmGt/1pZqYRjMxB1axhTo16/snVZ5krrKkcmMeVKxzECMMXoCgnvTPp10QgHfcbQZw8Dq2jMNG6je4JlWU0gWg==",
+ "version": "1.4.0",
+ "resolved": "https://registry.npmjs.org/colors/-/colors-1.4.0.tgz",
+ "integrity": "sha512-a+UqTh4kgZg/SlGvfbzDHpgRu7AAQOmmqRHJnxhRZICKFUT91brVhNNt58CMWU9PsBbv3PDCZUHbVxuDiH2mtA==",
"optional": true
},
"combined-stream": {
@@ -4298,14 +8503,14 @@
}
},
"command-exists": {
- "version": "1.2.8",
- "resolved": "https://registry.npmjs.org/command-exists/-/command-exists-1.2.8.tgz",
- "integrity": "sha512-PM54PkseWbiiD/mMsbvW351/u+dafwTJ0ye2qB60G1aGQP9j3xK2gmMDc+R34L3nDtx4qMCitXT75mkbkGJDLw=="
+ "version": "1.2.9",
+ "resolved": "https://registry.npmjs.org/command-exists/-/command-exists-1.2.9.tgz",
+ "integrity": "sha512-LTQ/SGc+s0Xc0Fu5WaKnR0YiygZkm9eKFvyS+fRsU7/ZWFF8ykFM6Pc9aCVf1+xasOOZpO3BAVgVrKvsqKHV7w=="
},
"commander": {
- "version": "2.19.0",
- "resolved": "https://registry.npmjs.org/commander/-/commander-2.19.0.tgz",
- "integrity": "sha512-6tvAOO+D6OENvRAh524Dh9jcfKTYDQAqvqezbCW82xj5X0pSrcpxtvRKHLG0yBY6SD7PSDrJaj+0AiOcKVd1Xg=="
+ "version": "2.20.3",
+ "resolved": "https://registry.npmjs.org/commander/-/commander-2.20.3.tgz",
+ "integrity": "sha512-GpVkmM8vF2vQUkj2LvZmD35JxeJOLCwJ9cUkugyk2nuhbv3+mJvpLYYt+0+USMxE+oj+ey/lJEnhZw75x/OMcQ=="
},
"comment-json": {
"version": "1.1.3",
@@ -4341,30 +8546,30 @@
"integrity": "sha1-ZF/ErfWLcrZJ1crmUTVhnbJv8UM="
},
"compressible": {
- "version": "2.0.16",
- "resolved": "https://registry.npmjs.org/compressible/-/compressible-2.0.16.tgz",
- "integrity": "sha512-JQfEOdnI7dASwCuSPWIeVYwc/zMsu/+tRhoUvEfXz2gxOA2DNjmG5vhtFdBlhWPPGo+RdT9S3tgc/uH5qgDiiA==",
+ "version": "2.0.18",
+ "resolved": "https://registry.npmjs.org/compressible/-/compressible-2.0.18.tgz",
+ "integrity": "sha512-AF3r7P5dWxL8MxyITRMlORQNaOA2IkAFaTr4k7BUumjPtRpGDTZpl0Pb1XCO6JeDCBdp126Cgs9sMxqSjgYyRg==",
"requires": {
- "mime-db": ">= 1.38.0 < 2"
+ "mime-db": ">= 1.43.0 < 2"
},
"dependencies": {
"mime-db": {
- "version": "1.38.0",
- "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.38.0.tgz",
- "integrity": "sha512-bqVioMFFzc2awcdJZIzR3HjZFX20QhilVS7hytkKrv7xFAn8bM1gzc/FOX2awLISvWe0PV8ptFKcon+wZ5qYkg=="
+ "version": "1.44.0",
+ "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.44.0.tgz",
+ "integrity": "sha512-/NOTfLrsPBVeH7YtFPgsVWveuL+4SjjYxaQ1xtM1KMFj7HdxlBlxeyNLzhyJVx7r4rZGJAZ/6lkKCitSc/Nmpg=="
}
}
},
"compression": {
- "version": "1.7.3",
- "resolved": "https://registry.npmjs.org/compression/-/compression-1.7.3.tgz",
- "integrity": "sha512-HSjyBG5N1Nnz7tF2+O7A9XUhyjru71/fwgNb7oIsEVHR0WShfs2tIS/EySLgiTe98aOK18YDlMXpzjCXY/n9mg==",
+ "version": "1.7.4",
+ "resolved": "https://registry.npmjs.org/compression/-/compression-1.7.4.tgz",
+ "integrity": "sha512-jaSIDzP9pZVS4ZfQ+TzvtiWhdpFhE2RDHz8QJkpX9SIpLq88VueF5jJw6t+6CUQcAoA6t+x89MLrWAqpfDE8iQ==",
"requires": {
"accepts": "~1.3.5",
"bytes": "3.0.0",
- "compressible": "~2.0.14",
+ "compressible": "~2.0.16",
"debug": "2.6.9",
- "on-headers": "~1.0.1",
+ "on-headers": "~1.0.2",
"safe-buffer": "5.1.2",
"vary": "~1.1.2"
},
@@ -4391,7 +8596,7 @@
},
"concat-stream": {
"version": "1.6.2",
- "resolved": "http://registry.npmjs.org/concat-stream/-/concat-stream-1.6.2.tgz",
+ "resolved": "https://registry.npmjs.org/concat-stream/-/concat-stream-1.6.2.tgz",
"integrity": "sha512-27HBghJxjiZtIk3Ycvn/4kbJk/1uZuJFfuPEns6LaEvpvG1f0hTea8lilrouyo9mVc2GWdcEZ8OLoGmSADlrCw==",
"requires": {
"buffer-from": "^1.0.0",
@@ -4410,22 +8615,37 @@
}
},
"configstore": {
- "version": "3.1.2",
- "resolved": "https://registry.npmjs.org/configstore/-/configstore-3.1.2.tgz",
- "integrity": "sha512-vtv5HtGjcYUgFrXc6Kx747B83MRRVS5R1VTEQoXvuP+kMI+if6uywV0nDGoiydJRy4yk7h9od5Og0kxx4zUXmw==",
+ "version": "5.0.1",
+ "resolved": "https://registry.npmjs.org/configstore/-/configstore-5.0.1.tgz",
+ "integrity": "sha512-aMKprgk5YhBNyH25hj8wGt2+D52Sw1DRRIzqBwLp2Ya9mFmY8KPvvtvmna8SxVR9JMZ4kzMD68N22vlaRpkeFA==",
"requires": {
- "dot-prop": "^4.1.0",
+ "dot-prop": "^5.2.0",
"graceful-fs": "^4.1.2",
- "make-dir": "^1.0.0",
- "unique-string": "^1.0.0",
- "write-file-atomic": "^2.0.0",
- "xdg-basedir": "^3.0.0"
+ "make-dir": "^3.0.0",
+ "unique-string": "^2.0.0",
+ "write-file-atomic": "^3.0.0",
+ "xdg-basedir": "^4.0.0"
+ },
+ "dependencies": {
+ "make-dir": {
+ "version": "3.1.0",
+ "resolved": "https://registry.npmjs.org/make-dir/-/make-dir-3.1.0.tgz",
+ "integrity": "sha512-g3FeP20LNwhALb/6Cz6Dd4F2ngze0jz7tbzrD2wAV+o9FeNHe4rL+yK2md0J/fiSf1sa1ADhXqi5+oVwOM/eGw==",
+ "requires": {
+ "semver": "^6.0.0"
+ }
+ },
+ "semver": {
+ "version": "6.3.0",
+ "resolved": "https://registry.npmjs.org/semver/-/semver-6.3.0.tgz",
+ "integrity": "sha512-b39TBaTSfV6yBrapU89p5fKekE2m/NwnDocOVruQFS1/veMgdzuPcnOM34M6CwxW8jH/lxEa5rBoDeUwu5HHTw=="
+ }
}
},
"confusing-browser-globals": {
- "version": "1.0.5",
- "resolved": "https://registry.npmjs.org/confusing-browser-globals/-/confusing-browser-globals-1.0.5.tgz",
- "integrity": "sha512-tHo1tQL/9Ox5RELbkCAJhnViqWlzBz3MG1bB2czbHjH2mWd4aYUgNCNLfysFL7c4LoDws7pjg2tj48Gmpw4QHA=="
+ "version": "1.0.9",
+ "resolved": "https://registry.npmjs.org/confusing-browser-globals/-/confusing-browser-globals-1.0.9.tgz",
+ "integrity": "sha512-KbS1Y0jMtyPgIxjO7ZzMAuUpAKMt1SzCL9fsrKsX6b0zJPTaT0SiSPmewwVZg9UAO83HVIlEhZF84LIjZ0lmAw=="
},
"connect-history-api-fallback": {
"version": "1.6.0",
@@ -4433,12 +8653,9 @@
"integrity": "sha512-e54B99q/OUoH64zYYRf3HBP5z24G38h5D3qXu23JGRoigpX5Ss4r9ZnDk3g0Z8uQC2x2lPaJ+UlWBc1ZWBWdLg=="
},
"console-browserify": {
- "version": "1.1.0",
- "resolved": "https://registry.npmjs.org/console-browserify/-/console-browserify-1.1.0.tgz",
- "integrity": "sha1-8CQcRXMKn8YyOyBtvzjtx0HQuxA=",
- "requires": {
- "date-now": "^0.1.4"
- }
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/console-browserify/-/console-browserify-1.2.0.tgz",
+ "integrity": "sha512-ZMkYO/LkF17QvCPqM0gxw8yUzigAOZOSWSHg91FH6orS7vcEj5dVZTidN2fQ14yBSdg97RqhSNwLUXInd52OTA=="
},
"console-control-strings": {
"version": "1.1.0",
@@ -4493,9 +8710,9 @@
}
},
"cookie": {
- "version": "0.3.1",
- "resolved": "https://registry.npmjs.org/cookie/-/cookie-0.3.1.tgz",
- "integrity": "sha1-5+Ch+e9DtMi6klxcWpboBtFoc7s="
+ "version": "0.4.0",
+ "resolved": "https://registry.npmjs.org/cookie/-/cookie-0.4.0.tgz",
+ "integrity": "sha512-+Hp8fLp57wnUSt0tY0tHEXh4voZRDnoIrZPqlo3DPiI4y9lwg/jqx+1Om94/W6ZaPDOUbnjOt/99w66zk+l1Xg=="
},
"cookie-signature": {
"version": "1.0.6",
@@ -4538,11 +8755,67 @@
"resolved": "https://registry.npmjs.org/core-js/-/core-js-2.6.1.tgz",
"integrity": "sha512-L72mmmEayPJBejKIWe2pYtGis5r0tQ5NaJekdhyXgeMQTpJoBsH0NL4ElY2LfSoV15xeQWKQ+XTTOZdyero5Xg=="
},
+ "core-js-compat": {
+ "version": "3.6.5",
+ "resolved": "https://registry.npmjs.org/core-js-compat/-/core-js-compat-3.6.5.tgz",
+ "integrity": "sha512-7ItTKOhOZbznhXAQ2g/slGg1PJV5zDO/WdkTwi7UEOJmkvsE32PWvx6mKtDjiMpjnR2CNf6BAD6sSxIlv7ptng==",
+ "requires": {
+ "browserslist": "^4.8.5",
+ "semver": "7.0.0"
+ },
+ "dependencies": {
+ "browserslist": {
+ "version": "4.12.0",
+ "resolved": "https://registry.npmjs.org/browserslist/-/browserslist-4.12.0.tgz",
+ "integrity": "sha512-UH2GkcEDSI0k/lRkuDSzFl9ZZ87skSy9w2XAn1MsZnL+4c4rqbBd3e82UWHbYDpztABrPBhZsTEeuxVfHppqDg==",
+ "requires": {
+ "caniuse-lite": "^1.0.30001043",
+ "electron-to-chromium": "^1.3.413",
+ "node-releases": "^1.1.53",
+ "pkg-up": "^2.0.0"
+ }
+ },
+ "caniuse-lite": {
+ "version": "1.0.30001084",
+ "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001084.tgz",
+ "integrity": "sha512-ftdc5oGmhEbLUuMZ/Qp3mOpzfZLCxPYKcvGv6v2dJJ+8EdqcvZRbAGOiLmkM/PV1QGta/uwBs8/nCl6sokDW6w=="
+ },
+ "electron-to-chromium": {
+ "version": "1.3.474",
+ "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.3.474.tgz",
+ "integrity": "sha512-fPkSgT9IBKmVJz02XioNsIpg0WYmkPrvU1lUJblMMJALxyE7/32NGvbJQKKxpNokozPvqfqkuUqVClYsvetcLw=="
+ },
+ "node-releases": {
+ "version": "1.1.58",
+ "resolved": "https://registry.npmjs.org/node-releases/-/node-releases-1.1.58.tgz",
+ "integrity": "sha512-NxBudgVKiRh/2aPWMgPR7bPTX0VPmGx5QBwCtdHitnqFE5/O8DeBXuIMH1nwNnw/aMo6AjOrpsHzfY3UbUJ7yg=="
+ },
+ "semver": {
+ "version": "7.0.0",
+ "resolved": "https://registry.npmjs.org/semver/-/semver-7.0.0.tgz",
+ "integrity": "sha512-+GB6zVA9LWh6zovYQLALHwv5rb2PHGlJi3lfiqIHxR0uuwCgefcOJc59v9fv1w8GbStwxuuqqAjI9NMAOOgq1A=="
+ }
+ }
+ },
+ "core-js-pure": {
+ "version": "3.6.5",
+ "resolved": "https://registry.npmjs.org/core-js-pure/-/core-js-pure-3.6.5.tgz",
+ "integrity": "sha512-lacdXOimsiD0QyNf9BC/mxivNJ/ybBGJXQFKzRekp1WTHoVUWsUHEn+2T8GJAzzIhyOuXA+gOxCVN3l+5PLPUA=="
+ },
"core-util-is": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/core-util-is/-/core-util-is-1.0.2.tgz",
"integrity": "sha1-tf1UIgqivFq1eqtxQMlAdUUDwac="
},
+ "cors": {
+ "version": "2.8.5",
+ "resolved": "https://registry.npmjs.org/cors/-/cors-2.8.5.tgz",
+ "integrity": "sha512-KIHbLJqu73RGr/hnbrO9uBeixNGuvSQjul/jdFvS/KFSIH1hWVd1ng7zOHx+YrEfInLG7q4n6GHQ9cDtxv/P6g==",
+ "requires": {
+ "object-assign": "^4",
+ "vary": "^1"
+ }
+ },
"cosmiconfig": {
"version": "5.0.7",
"resolved": "https://registry.npmjs.org/cosmiconfig/-/cosmiconfig-5.0.7.tgz",
@@ -4561,19 +8834,18 @@
"requires": {
"bn.js": "^4.1.0",
"elliptic": "^6.0.0"
- }
- },
- "create-error-class": {
- "version": "3.0.2",
- "resolved": "https://registry.npmjs.org/create-error-class/-/create-error-class-3.0.2.tgz",
- "integrity": "sha1-Br56vvlHo/FKMP1hBnHUAbyot7Y=",
- "requires": {
- "capture-stack-trace": "^1.0.0"
+ },
+ "dependencies": {
+ "bn.js": {
+ "version": "4.11.9",
+ "resolved": "https://registry.npmjs.org/bn.js/-/bn.js-4.11.9.tgz",
+ "integrity": "sha512-E6QoYqCKZfgatHTdHzs1RRKP7ip4vvm+EyRUeE2RF0NblwVvb0p6jSVeNTOFxPn26QXN2o6SMfNxKp6kU8zQaw=="
+ }
}
},
"create-hash": {
"version": "1.2.0",
- "resolved": "http://registry.npmjs.org/create-hash/-/create-hash-1.2.0.tgz",
+ "resolved": "https://registry.npmjs.org/create-hash/-/create-hash-1.2.0.tgz",
"integrity": "sha512-z00bCGNHDG8mHAkP7CtT1qVu+bFQUPjYq/4Iv3C3kWjTFV10zIjfSoeqXo9Asws8gwSHDGj/hl2u4OGIjapeCg==",
"requires": {
"cipher-base": "^1.0.1",
@@ -4585,7 +8857,7 @@
},
"create-hmac": {
"version": "1.1.7",
- "resolved": "http://registry.npmjs.org/create-hmac/-/create-hmac-1.1.7.tgz",
+ "resolved": "https://registry.npmjs.org/create-hmac/-/create-hmac-1.1.7.tgz",
"integrity": "sha512-MJG9liiZ+ogc4TzUwuvbER1JRdgvUFSB5+VR/g5h82fGaIRWMWddtKBHi7/sVhfjQZ6SehlyhvQYrcYkaUIpLg==",
"requires": {
"cipher-base": "^1.0.3",
@@ -4597,12 +8869,12 @@
}
},
"create-react-context": {
- "version": "0.2.3",
- "resolved": "https://registry.npmjs.org/create-react-context/-/create-react-context-0.2.3.tgz",
- "integrity": "sha512-CQBmD0+QGgTaxDL3OX1IDXYqjkp2It4RIbcb99jS6AEg27Ga+a9G3JtK6SIu0HBwPLZlmwt9F7UwWA4Bn92Rag==",
+ "version": "0.3.0",
+ "resolved": "https://registry.npmjs.org/create-react-context/-/create-react-context-0.3.0.tgz",
+ "integrity": "sha512-dNldIoSuNSvlTJ7slIKC/ZFGKexBMBrrcc+TTe1NdmROnaASuLPvqpwj9v4XS4uXZ8+YPu0sNmShX2rXI5LNsw==",
"requires": {
- "fbjs": "^0.8.0",
- "gud": "^1.0.0"
+ "gud": "^1.0.0",
+ "warning": "^4.0.3"
}
},
"cross-fetch": {
@@ -4616,12 +8888,12 @@
"dependencies": {
"node-fetch": {
"version": "2.1.2",
- "resolved": "http://registry.npmjs.org/node-fetch/-/node-fetch-2.1.2.tgz",
+ "resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-2.1.2.tgz",
"integrity": "sha1-q4hOjn5X44qUR1POxwb3iNF2i7U="
},
"whatwg-fetch": {
"version": "2.0.4",
- "resolved": "http://registry.npmjs.org/whatwg-fetch/-/whatwg-fetch-2.0.4.tgz",
+ "resolved": "https://registry.npmjs.org/whatwg-fetch/-/whatwg-fetch-2.0.4.tgz",
"integrity": "sha512-dcQ1GWpOD/eEQ97k66aiEVpNnapVj90/+R+SXTPYGHpYBBypfKJEQjLrvMZ7YXbKm21gXd4NcuxUTjiv1YtLng=="
}
}
@@ -4671,9 +8943,9 @@
}
},
"crypto-random-string": {
- "version": "1.0.0",
- "resolved": "https://registry.npmjs.org/crypto-random-string/-/crypto-random-string-1.0.0.tgz",
- "integrity": "sha1-ojD2T1aDEOFJgAmUB5DsmVRbyn4="
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/crypto-random-string/-/crypto-random-string-2.0.0.tgz",
+ "integrity": "sha512-v1plID3y9r/lPhviJ1wrXpLeyUIGAZ2SHNYTEapm7/8A9nLPoyvVp3RK/EPFqn5kEznyWgYZNsRtYYIWbuG8KA=="
},
"css": {
"version": "2.2.4",
@@ -4695,7 +8967,7 @@
},
"css-color-names": {
"version": "0.0.4",
- "resolved": "http://registry.npmjs.org/css-color-names/-/css-color-names-0.0.4.tgz",
+ "resolved": "https://registry.npmjs.org/css-color-names/-/css-color-names-0.0.4.tgz",
"integrity": "sha1-gIrcLnnPhHOAabZGyyDsJ762KeA="
},
"css-declaration-sorter": {
@@ -4728,9 +9000,9 @@
}
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -4793,42 +9065,58 @@
"integrity": "sha1-XxrUPi2O77/cME/NOaUhZklD4+s="
},
"css-selector-tokenizer": {
- "version": "0.7.1",
- "resolved": "https://registry.npmjs.org/css-selector-tokenizer/-/css-selector-tokenizer-0.7.1.tgz",
- "integrity": "sha512-xYL0AMZJ4gFzJQsHUKa5jiWWi2vH77WVNg7JYRyewwj6oPh4yb/y6Y9ZCw9dsj/9UauMhtuxR+ogQd//EdEVNA==",
+ "version": "0.7.2",
+ "resolved": "https://registry.npmjs.org/css-selector-tokenizer/-/css-selector-tokenizer-0.7.2.tgz",
+ "integrity": "sha512-yj856NGuAymN6r8bn8/Jl46pR+OC3eEvAhfGYDUe7YPtTPAYrSSw4oAniZ9Y8T5B92hjhwTBLUen0/vKPxf6pw==",
"requires": {
- "cssesc": "^0.1.0",
- "fastparse": "^1.1.1",
- "regexpu-core": "^1.0.0"
+ "cssesc": "^3.0.0",
+ "fastparse": "^1.1.2",
+ "regexpu-core": "^4.6.0"
},
"dependencies": {
"jsesc": {
"version": "0.5.0",
- "resolved": "http://registry.npmjs.org/jsesc/-/jsesc-0.5.0.tgz",
+ "resolved": "https://registry.npmjs.org/jsesc/-/jsesc-0.5.0.tgz",
"integrity": "sha1-597mbjXW/Bb3EP6R1c9p9w8IkR0="
},
- "regexpu-core": {
- "version": "1.0.0",
- "resolved": "http://registry.npmjs.org/regexpu-core/-/regexpu-core-1.0.0.tgz",
- "integrity": "sha1-hqdj9Y7k18L2sQLkdkBQ3n7ZDGs=",
+ "regenerate-unicode-properties": {
+ "version": "8.2.0",
+ "resolved": "https://registry.npmjs.org/regenerate-unicode-properties/-/regenerate-unicode-properties-8.2.0.tgz",
+ "integrity": "sha512-F9DjY1vKLo/tPePDycuH3dn9H1OTPIkVD9Kz4LODu+F2C75mgjAJ7x/gwy6ZcSNRAAkhNlJSOHRe8k3p+K9WhA==",
"requires": {
- "regenerate": "^1.2.1",
- "regjsgen": "^0.2.0",
- "regjsparser": "^0.1.4"
+ "regenerate": "^1.4.0"
+ }
+ },
+ "regexpu-core": {
+ "version": "4.7.0",
+ "resolved": "https://registry.npmjs.org/regexpu-core/-/regexpu-core-4.7.0.tgz",
+ "integrity": "sha512-TQ4KXRnIn6tz6tjnrXEkD/sshygKH/j5KzK86X8MkeHyZ8qst/LZ89j3X4/8HEIfHANTFIP/AbXakeRhWIl5YQ==",
+ "requires": {
+ "regenerate": "^1.4.0",
+ "regenerate-unicode-properties": "^8.2.0",
+ "regjsgen": "^0.5.1",
+ "regjsparser": "^0.6.4",
+ "unicode-match-property-ecmascript": "^1.0.4",
+ "unicode-match-property-value-ecmascript": "^1.2.0"
}
},
"regjsgen": {
- "version": "0.2.0",
- "resolved": "http://registry.npmjs.org/regjsgen/-/regjsgen-0.2.0.tgz",
- "integrity": "sha1-bAFq3qxVT3WCP+N6wFuS1aTtsfc="
+ "version": "0.5.2",
+ "resolved": "https://registry.npmjs.org/regjsgen/-/regjsgen-0.5.2.tgz",
+ "integrity": "sha512-OFFT3MfrH90xIW8OOSyUrk6QHD5E9JOTeGodiJeBS3J6IwlgzJMNE/1bZklWz5oTg+9dCMyEetclvCVXOPoN3A=="
},
"regjsparser": {
- "version": "0.1.5",
- "resolved": "http://registry.npmjs.org/regjsparser/-/regjsparser-0.1.5.tgz",
- "integrity": "sha1-fuj4Tcb6eS0/0K4ijSS9lJ6tIFw=",
+ "version": "0.6.4",
+ "resolved": "https://registry.npmjs.org/regjsparser/-/regjsparser-0.6.4.tgz",
+ "integrity": "sha512-64O87/dPDgfk8/RQqC4gkZoGyyWFIEUTTh80CU6CWuK5vkCGyekIx+oKcEIYtP/RAxSQltCZHCNu/mdd7fqlJw==",
"requires": {
"jsesc": "~0.5.0"
}
+ },
+ "unicode-match-property-value-ecmascript": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/unicode-match-property-value-ecmascript/-/unicode-match-property-value-ecmascript-1.2.0.tgz",
+ "integrity": "sha512-wjuQHGQVofmSJv1uVISKLE5zO2rNGzM/KCYZch/QQvez7C1hUhBIuZ701fYXExuufJFMPhv2SyL8CyoIfMLbIQ=="
}
}
},
@@ -4841,11 +9129,6 @@
"source-map": "^0.5.3"
}
},
- "css-unit-converter": {
- "version": "1.1.1",
- "resolved": "https://registry.npmjs.org/css-unit-converter/-/css-unit-converter-1.1.1.tgz",
- "integrity": "sha1-2bkoGtz9jO2TW9urqDeGiX9k6ZY="
- },
"css-url-regex": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/css-url-regex/-/css-url-regex-1.1.0.tgz",
@@ -4857,9 +9140,14 @@
"integrity": "sha512-wan8dMWQ0GUeF7DGEPVjhHemVW/vy6xUYmFzRY8RYqgA0JtXC9rJmbScBjqSu6dg9q0lwPQy6ZAmJVr3PPTvqQ=="
},
"cssesc": {
- "version": "0.1.0",
- "resolved": "https://registry.npmjs.org/cssesc/-/cssesc-0.1.0.tgz",
- "integrity": "sha1-yBSQPkViM3GgR3tAEJqq++6t27Q="
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/cssesc/-/cssesc-3.0.0.tgz",
+ "integrity": "sha512-/Tb/JcjK111nNScGob5MNtsntNM1aCNUDipB/TkwZFhyDrrE47SOx/18wF2bbjgc3ZzCSKW1T5nt5EbFoAz/Vg=="
+ },
+ "cssfilter": {
+ "version": "0.0.10",
+ "resolved": "https://registry.npmjs.org/cssfilter/-/cssfilter-0.0.10.tgz",
+ "integrity": "sha1-xtJnJjKi5cg+AT5oZKQs6N79IK4="
},
"cssnano": {
"version": "4.1.10",
@@ -4893,9 +9181,9 @@
}
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -4975,9 +9263,9 @@
}
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -5038,9 +9326,9 @@
}
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -5110,14 +9398,23 @@
}
},
"cyclist": {
- "version": "0.2.2",
- "resolved": "https://registry.npmjs.org/cyclist/-/cyclist-0.2.2.tgz",
- "integrity": "sha1-GzN5LhHpFKL9bW7WRHRkRE5fpkA="
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/cyclist/-/cyclist-1.0.1.tgz",
+ "integrity": "sha1-WW6WmP0MgOEgOMK4LW6xs1tiJNk="
+ },
+ "d": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/d/-/d-1.0.1.tgz",
+ "integrity": "sha512-m62ShEObQ39CfralilEQRjH6oAMtNCV1xJyEx5LpRYUVN+EviphDgUc/F3hnYbADmkiNs67Y+3ylmlG7Lnu+FA==",
+ "requires": {
+ "es5-ext": "^0.10.50",
+ "type": "^1.0.1"
+ }
},
"damerau-levenshtein": {
- "version": "1.0.4",
- "resolved": "https://registry.npmjs.org/damerau-levenshtein/-/damerau-levenshtein-1.0.4.tgz",
- "integrity": "sha1-AxkcQyy27qFou3fzpV/9zLiXhRQ="
+ "version": "1.0.6",
+ "resolved": "https://registry.npmjs.org/damerau-levenshtein/-/damerau-levenshtein-1.0.6.tgz",
+ "integrity": "sha512-JVrozIeElnj3QzfUIt8tB8YMluBJom4Vw9qTPpjGYQ9fYlB3D/rb6OordUxf3xeFB35LKWs0xqcO5U6ySvBtug=="
},
"dashdash": {
"version": "1.14.1",
@@ -5127,10 +9424,11 @@
"assert-plus": "^1.0.0"
}
},
- "date-now": {
- "version": "0.1.4",
- "resolved": "https://registry.npmjs.org/date-now/-/date-now-0.1.4.tgz",
- "integrity": "sha1-6vQ5/U1ISK105cx9vvIAZyueNFs="
+ "de-indent": {
+ "version": "1.0.2",
+ "resolved": "https://registry.npmjs.org/de-indent/-/de-indent-1.0.2.tgz",
+ "integrity": "sha1-sgOOhG3DO6pXlhKNCAS0VbjB4h0=",
+ "optional": true
},
"debug": {
"version": "3.2.6",
@@ -5286,9 +9584,9 @@
"integrity": "sha512-R9hc1Xa/NOBi9WRVUWg19rl1UB7Tt4kuPd+thNJgFZoxXsTz7ncaPaeIm+40oSGuP33DfMb4sZt1QIGiJzC4EA=="
},
"default-gateway": {
- "version": "4.1.2",
- "resolved": "https://registry.npmjs.org/default-gateway/-/default-gateway-4.1.2.tgz",
- "integrity": "sha512-xhJUAp3u02JsBGovj0V6B6uYhKCUOmiNc8xGmReUwGu77NmvcpxPVB0pCielxMFumO7CmXBG02XjM8HB97k8Hw==",
+ "version": "4.2.0",
+ "resolved": "https://registry.npmjs.org/default-gateway/-/default-gateway-4.2.0.tgz",
+ "integrity": "sha512-h6sMrVB1VMWVrW13mSc6ia/DwYYw5MN6+exNu1OaJeFac5aSAvwM7lZ0NVfTABuSkQelr4h5oebg3KB1XPdjgA==",
"requires": {
"execa": "^1.0.0",
"ip-regex": "^2.1.0"
@@ -5330,6 +9628,11 @@
}
}
},
+ "defer-to-connect": {
+ "version": "1.1.3",
+ "resolved": "https://registry.npmjs.org/defer-to-connect/-/defer-to-connect-1.1.3.tgz",
+ "integrity": "sha512-0ISdNousHvZT2EiFlZeZAHBUvSxmKswVCEf8hW7KWgG4a8MVEu/3Vb6uWYozkjylyCxe0JBIiRB1jV45S70WVQ=="
+ },
"define-properties": {
"version": "1.1.3",
"resolved": "https://registry.npmjs.org/define-properties/-/define-properties-1.1.3.tgz",
@@ -5409,15 +9712,10 @@
"resolved": "https://registry.npmjs.org/depd/-/depd-1.1.2.tgz",
"integrity": "sha1-m81S4UwJd2PnSbJ0xDRu0uVgtak="
},
- "deprecated-decorator": {
- "version": "0.1.6",
- "resolved": "https://registry.npmjs.org/deprecated-decorator/-/deprecated-decorator-0.1.6.tgz",
- "integrity": "sha1-AJZjF7ehL+kvPMgx91g68ym4bDc="
- },
"des.js": {
- "version": "1.0.0",
- "resolved": "https://registry.npmjs.org/des.js/-/des.js-1.0.0.tgz",
- "integrity": "sha1-wHTS4qpqipoH29YfmhXCzYPsjsw=",
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/des.js/-/des.js-1.0.1.tgz",
+ "integrity": "sha512-Q0I4pfFrv2VPd34/vfLrFOoRmlYj3OV50i7fskps1jZWK1kApMWWT9G6RRUeYedLcBDIhnSDaUvJMb3AhUlaEA==",
"requires": {
"inherits": "^2.0.1",
"minimalistic-assert": "^1.0.0"
@@ -5437,15 +9735,24 @@
}
},
"detect-indent": {
- "version": "5.0.0",
- "resolved": "https://registry.npmjs.org/detect-indent/-/detect-indent-5.0.0.tgz",
- "integrity": "sha1-OHHMCmoALow+Wzz38zYmRnXwa50="
+ "version": "6.0.0",
+ "resolved": "https://registry.npmjs.org/detect-indent/-/detect-indent-6.0.0.tgz",
+ "integrity": "sha512-oSyFlqaTHCItVRGK5RmrmjB+CmaMOW7IaNA/kdxqhoa6d17j/5ce9O9eWXmV/KEdRwqpQA+Vqe8a8Bsybu4YnA=="
},
"detect-libc": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/detect-libc/-/detect-libc-1.0.3.tgz",
"integrity": "sha1-+hN8S9aY7fVc1c0CrFWfkaTEups="
},
+ "detect-newline": {
+ "version": "1.0.3",
+ "resolved": "https://registry.npmjs.org/detect-newline/-/detect-newline-1.0.3.tgz",
+ "integrity": "sha1-6XsQA4d9cMCa8a81v63/Fo3kkg0=",
+ "requires": {
+ "get-stdin": "^4.0.1",
+ "minimist": "^1.1.0"
+ }
+ },
"detect-node": {
"version": "2.0.4",
"resolved": "https://registry.npmjs.org/detect-node/-/detect-node-2.0.4.tgz",
@@ -5498,6 +9805,24 @@
"tslib": "^1.6.0"
},
"dependencies": {
+ "configstore": {
+ "version": "3.1.2",
+ "resolved": "https://registry.npmjs.org/configstore/-/configstore-3.1.2.tgz",
+ "integrity": "sha512-vtv5HtGjcYUgFrXc6Kx747B83MRRVS5R1VTEQoXvuP+kMI+if6uywV0nDGoiydJRy4yk7h9od5Og0kxx4zUXmw==",
+ "requires": {
+ "dot-prop": "^4.1.0",
+ "graceful-fs": "^4.1.2",
+ "make-dir": "^1.0.0",
+ "unique-string": "^1.0.0",
+ "write-file-atomic": "^2.0.0",
+ "xdg-basedir": "^3.0.0"
+ }
+ },
+ "crypto-random-string": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/crypto-random-string/-/crypto-random-string-1.0.0.tgz",
+ "integrity": "sha1-ojD2T1aDEOFJgAmUB5DsmVRbyn4="
+ },
"debug": {
"version": "2.6.9",
"resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz",
@@ -5506,21 +9831,64 @@
"ms": "2.0.0"
}
},
+ "dot-prop": {
+ "version": "4.2.0",
+ "resolved": "https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz",
+ "integrity": "sha512-tUMXrxlExSW6U2EXiiKGSBVdYgtV8qlHL+C10TsW4PURY/ic+eaysnSkwB4kA/mBlCyy/IKDJ+Lc3wbWeaXtuQ==",
+ "requires": {
+ "is-obj": "^1.0.0"
+ }
+ },
"ms": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
"integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g="
+ },
+ "unique-string": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/unique-string/-/unique-string-1.0.0.tgz",
+ "integrity": "sha1-nhBXzKhRq7kzmPizOuGHuZyuwRo=",
+ "requires": {
+ "crypto-random-string": "^1.0.0"
+ }
+ },
+ "write-file-atomic": {
+ "version": "2.4.3",
+ "resolved": "https://registry.npmjs.org/write-file-atomic/-/write-file-atomic-2.4.3.tgz",
+ "integrity": "sha512-GaETH5wwsX+GcnzhPgKcKjJ6M2Cq3/iZp1WyY/X1CSqrW+jVNM9Y7D8EC2sM4ZG/V8wZlSniJnCKWPmBYAucRQ==",
+ "requires": {
+ "graceful-fs": "^4.1.11",
+ "imurmurhash": "^0.1.4",
+ "signal-exit": "^3.0.2"
+ }
+ },
+ "xdg-basedir": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/xdg-basedir/-/xdg-basedir-3.0.0.tgz",
+ "integrity": "sha1-SWsswQnsqNus/i3HK2A8F8WHCtQ="
}
}
},
+ "diff-sequences": {
+ "version": "25.2.6",
+ "resolved": "https://registry.npmjs.org/diff-sequences/-/diff-sequences-25.2.6.tgz",
+ "integrity": "sha512-Hq8o7+6GaZeoFjtpgvRBUknSXNeJiCx7V9Fr94ZMljNiCr9n9L8H8aJqgWOQiDDGdyn29fRNcDdRVJ5fdyihfg=="
+ },
"diffie-hellman": {
"version": "5.0.3",
- "resolved": "http://registry.npmjs.org/diffie-hellman/-/diffie-hellman-5.0.3.tgz",
+ "resolved": "https://registry.npmjs.org/diffie-hellman/-/diffie-hellman-5.0.3.tgz",
"integrity": "sha512-kqag/Nl+f3GwyK25fhUMYj81BUOrZ9IuJsjIcDE5icNM9FJHAVm3VcUDxdLPoQtTuUylWm6ZIknYJwwaPxsUzg==",
"requires": {
"bn.js": "^4.1.0",
"miller-rabin": "^4.0.0",
"randombytes": "^2.0.0"
+ },
+ "dependencies": {
+ "bn.js": {
+ "version": "4.11.9",
+ "resolved": "https://registry.npmjs.org/bn.js/-/bn.js-4.11.9.tgz",
+ "integrity": "sha512-E6QoYqCKZfgatHTdHzs1RRKP7ip4vvm+EyRUeE2RF0NblwVvb0p6jSVeNTOFxPn26QXN2o6SMfNxKp6kU8zQaw=="
+ }
}
},
"dir-glob": {
@@ -5627,11 +9995,6 @@
"domelementtype": "1"
}
},
- "domready": {
- "version": "1.0.8",
- "resolved": "https://registry.npmjs.org/domready/-/domready-1.0.8.tgz",
- "integrity": "sha1-kfJS5Ze2Wvd+dFriTdAYXV4m1Yw="
- },
"domutils": {
"version": "1.5.1",
"resolved": "https://registry.npmjs.org/domutils/-/domutils-1.5.1.tgz",
@@ -5650,16 +10013,23 @@
}
},
"dot-prop": {
- "version": "4.2.0",
- "resolved": "https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz",
- "integrity": "sha512-tUMXrxlExSW6U2EXiiKGSBVdYgtV8qlHL+C10TsW4PURY/ic+eaysnSkwB4kA/mBlCyy/IKDJ+Lc3wbWeaXtuQ==",
+ "version": "5.2.0",
+ "resolved": "https://registry.npmjs.org/dot-prop/-/dot-prop-5.2.0.tgz",
+ "integrity": "sha512-uEUyaDKoSQ1M4Oq8l45hSE26SnTxL6snNnqvK/VWx5wJhmff5z0FUVJDKDanor/6w3kzE3i7XZOk+7wC0EXr1A==",
"requires": {
- "is-obj": "^1.0.0"
+ "is-obj": "^2.0.0"
+ },
+ "dependencies": {
+ "is-obj": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/is-obj/-/is-obj-2.0.0.tgz",
+ "integrity": "sha512-drqDG3cbczxxEJRoOXcOjtdp1J/lyp1mNn0xaznRs8+muBhgQcrnbspox5X5fOw0HnMnbfDzvnEMEtqDEJEo8w=="
+ }
}
},
"dotenv": {
"version": "4.0.0",
- "resolved": "http://registry.npmjs.org/dotenv/-/dotenv-4.0.0.tgz",
+ "resolved": "https://registry.npmjs.org/dotenv/-/dotenv-4.0.0.tgz",
"integrity": "sha1-hk7xN5rO1Vzm+V3r7NzhefegzR0="
},
"download": {
@@ -5710,7 +10080,7 @@
},
"duplexer": {
"version": "0.1.1",
- "resolved": "http://registry.npmjs.org/duplexer/-/duplexer-0.1.1.tgz",
+ "resolved": "https://registry.npmjs.org/duplexer/-/duplexer-0.1.1.tgz",
"integrity": "sha1-rOb/gIwc5mtX0ev5eXessCM0z8E="
},
"duplexer3": {
@@ -5749,9 +10119,9 @@
"integrity": "sha512-ZUXBUyGLeoJxp4Nt6G/GjBRLnyz8IKQGexZ2ndWaoegThgMGFO1tdDYID5gBV32/1S83osjJHyfzvanE/8HY4Q=="
},
"elliptic": {
- "version": "6.4.1",
- "resolved": "https://registry.npmjs.org/elliptic/-/elliptic-6.4.1.tgz",
- "integrity": "sha512-BsXLz5sqX8OHcsh7CqBMztyXARmGQ3LWPtGjJi6DiJHq5C/qvi9P3OqgswKSDftbu8+IoI/QDTAm2fFnQ9SZSQ==",
+ "version": "6.5.2",
+ "resolved": "https://registry.npmjs.org/elliptic/-/elliptic-6.5.2.tgz",
+ "integrity": "sha512-f4x70okzZbIQl/NSRLkI/+tteV/9WqL98zx+SQ69KbXxmVrmjwsNUPn/gYJJ0sHvEak24cZgHIPegRePAtA/xw==",
"requires": {
"bn.js": "^4.4.0",
"brorand": "^1.0.1",
@@ -5760,6 +10130,13 @@
"inherits": "^2.0.1",
"minimalistic-assert": "^1.0.0",
"minimalistic-crypto-utils": "^1.0.0"
+ },
+ "dependencies": {
+ "bn.js": {
+ "version": "4.11.9",
+ "resolved": "https://registry.npmjs.org/bn.js/-/bn.js-4.11.9.tgz",
+ "integrity": "sha512-E6QoYqCKZfgatHTdHzs1RRKP7ip4vvm+EyRUeE2RF0NblwVvb0p6jSVeNTOFxPn26QXN2o6SMfNxKp6kU8zQaw=="
+ }
}
},
"emoji-regex": {
@@ -5794,42 +10171,42 @@
}
},
"engine.io": {
- "version": "3.3.2",
- "resolved": "https://registry.npmjs.org/engine.io/-/engine.io-3.3.2.tgz",
- "integrity": "sha512-AsaA9KG7cWPXWHp5FvHdDWY3AMWeZ8x+2pUVLcn71qE5AtAzgGbxuclOytygskw8XGmiQafTmnI9Bix3uihu2w==",
+ "version": "3.4.2",
+ "resolved": "https://registry.npmjs.org/engine.io/-/engine.io-3.4.2.tgz",
+ "integrity": "sha512-b4Q85dFkGw+TqgytGPrGgACRUhsdKc9S9ErRAXpPGy/CXKs4tYoHDkvIRdsseAF7NjfVwjRFIn6KTnbw7LwJZg==",
"requires": {
"accepts": "~1.3.4",
- "base64id": "1.0.0",
+ "base64id": "2.0.0",
"cookie": "0.3.1",
- "debug": "~3.1.0",
- "engine.io-parser": "~2.1.0",
- "ws": "~6.1.0"
+ "debug": "~4.1.0",
+ "engine.io-parser": "~2.2.0",
+ "ws": "^7.1.2"
},
"dependencies": {
- "debug": {
- "version": "3.1.0",
- "resolved": "https://registry.npmjs.org/debug/-/debug-3.1.0.tgz",
- "integrity": "sha512-OX8XqP7/1a9cqkxYw2yXss15f26NKWBpDXQd0/uK/KPqdQhxbPa994hnzjcE2VqQpDslf55723cKPUOGSmMY3g==",
- "requires": {
- "ms": "2.0.0"
- }
+ "cookie": {
+ "version": "0.3.1",
+ "resolved": "https://registry.npmjs.org/cookie/-/cookie-0.3.1.tgz",
+ "integrity": "sha1-5+Ch+e9DtMi6klxcWpboBtFoc7s="
},
- "ms": {
- "version": "2.0.0",
- "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
- "integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g="
+ "debug": {
+ "version": "4.1.1",
+ "resolved": "https://registry.npmjs.org/debug/-/debug-4.1.1.tgz",
+ "integrity": "sha512-pYAIzeRo8J6KPEaJ0VWOh5Pzkbw/RetuzehGM7QRRX5he4fPHx2rdKMB256ehJCkX+XRQm16eZLqLNS8RSZXZw==",
+ "requires": {
+ "ms": "^2.1.1"
+ }
}
}
},
"engine.io-client": {
- "version": "3.3.2",
- "resolved": "https://registry.npmjs.org/engine.io-client/-/engine.io-client-3.3.2.tgz",
- "integrity": "sha512-y0CPINnhMvPuwtqXfsGuWE8BB66+B6wTtCofQDRecMQPYX3MYUZXFNKDhdrSe3EVjgOu4V3rxdeqN/Tr91IgbQ==",
+ "version": "3.4.3",
+ "resolved": "https://registry.npmjs.org/engine.io-client/-/engine.io-client-3.4.3.tgz",
+ "integrity": "sha512-0NGY+9hioejTEJCaSJZfWZLk4FPI9dN+1H1C4+wj2iuFba47UgZbJzfWs4aNFajnX/qAaYKbe2lLTfEEWzCmcw==",
"requires": {
- "component-emitter": "1.2.1",
+ "component-emitter": "~1.3.0",
"component-inherit": "0.0.3",
- "debug": "~3.1.0",
- "engine.io-parser": "~2.1.1",
+ "debug": "~4.1.0",
+ "engine.io-parser": "~2.2.0",
"has-cors": "1.1.0",
"indexof": "0.0.1",
"parseqs": "0.0.5",
@@ -5839,25 +10216,33 @@
"yeast": "0.1.2"
},
"dependencies": {
+ "component-emitter": {
+ "version": "1.3.0",
+ "resolved": "https://registry.npmjs.org/component-emitter/-/component-emitter-1.3.0.tgz",
+ "integrity": "sha512-Rd3se6QB+sO1TwqZjscQrurpEPIfO0/yYnSin6Q/rD3mOutHvUrCAhJub3r90uNb+SESBuE0QYoB90YdfatsRg=="
+ },
"debug": {
- "version": "3.1.0",
- "resolved": "https://registry.npmjs.org/debug/-/debug-3.1.0.tgz",
- "integrity": "sha512-OX8XqP7/1a9cqkxYw2yXss15f26NKWBpDXQd0/uK/KPqdQhxbPa994hnzjcE2VqQpDslf55723cKPUOGSmMY3g==",
+ "version": "4.1.1",
+ "resolved": "https://registry.npmjs.org/debug/-/debug-4.1.1.tgz",
+ "integrity": "sha512-pYAIzeRo8J6KPEaJ0VWOh5Pzkbw/RetuzehGM7QRRX5he4fPHx2rdKMB256ehJCkX+XRQm16eZLqLNS8RSZXZw==",
"requires": {
- "ms": "2.0.0"
+ "ms": "^2.1.1"
}
},
- "ms": {
- "version": "2.0.0",
- "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
- "integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g="
+ "ws": {
+ "version": "6.1.4",
+ "resolved": "https://registry.npmjs.org/ws/-/ws-6.1.4.tgz",
+ "integrity": "sha512-eqZfL+NE/YQc1/ZynhojeV8q+H050oR8AZ2uIev7RU10svA9ZnJUddHcOUZTJLinZ9yEfdA2kSATS2qZK5fhJA==",
+ "requires": {
+ "async-limiter": "~1.0.0"
+ }
}
}
},
"engine.io-parser": {
- "version": "2.1.3",
- "resolved": "https://registry.npmjs.org/engine.io-parser/-/engine.io-parser-2.1.3.tgz",
- "integrity": "sha512-6HXPre2O4Houl7c4g7Ic/XzPnHBvaEmN90vtRO9uLmwtRqQmTOw0QMevL1TOfL2Cpu1VzsaTmMotQgMdkzGkVA==",
+ "version": "2.2.0",
+ "resolved": "https://registry.npmjs.org/engine.io-parser/-/engine.io-parser-2.2.0.tgz",
+ "integrity": "sha512-6I3qD9iUxotsC5HEMuuGsKA0cXerGz+4uGcXQEkfBidgKf0amsjrrtwcbwK/nzpZBxclXlV7gGl9dgWvu4LF6w==",
"requires": {
"after": "0.8.2",
"arraybuffer.slice": "~0.0.7",
@@ -5867,13 +10252,24 @@
}
},
"enhanced-resolve": {
- "version": "4.1.0",
- "resolved": "https://registry.npmjs.org/enhanced-resolve/-/enhanced-resolve-4.1.0.tgz",
- "integrity": "sha512-F/7vkyTtyc/llOIn8oWclcB25KdRaiPBpZYDgJHgh/UHtpgT2p2eldQgtQnLtUvfMKPKxbRaQM/hHkvLHt1Vng==",
+ "version": "4.2.0",
+ "resolved": "https://registry.npmjs.org/enhanced-resolve/-/enhanced-resolve-4.2.0.tgz",
+ "integrity": "sha512-S7eiFb/erugyd1rLb6mQ3Vuq+EXHv5cpCkNqqIkYkBgN2QdFnyCZzFBleqwGEx4lgNGYij81BWnCrFNK7vxvjQ==",
"requires": {
"graceful-fs": "^4.1.2",
- "memory-fs": "^0.4.0",
+ "memory-fs": "^0.5.0",
"tapable": "^1.0.0"
+ },
+ "dependencies": {
+ "memory-fs": {
+ "version": "0.5.0",
+ "resolved": "https://registry.npmjs.org/memory-fs/-/memory-fs-0.5.0.tgz",
+ "integrity": "sha512-jA0rdU5KoQMC0e6ppoNRtpp6vjFq6+NY7r8hywnC7V+1Xj/MtHwGIbB1QaK/dunyjWteJzmkpd7ooeWg10T7GA==",
+ "requires": {
+ "errno": "^0.1.3",
+ "readable-stream": "^2.0.1"
+ }
+ }
}
},
"entities": {
@@ -5882,9 +10278,9 @@
"integrity": "sha512-f2LZMYl1Fzu7YSBKg+RoROelpOaNrcGmE9AZubeDfrCEia483oW4MI4VyFd5VNHIgQ/7qm1I0wUHK1eJnn2y2w=="
},
"envinfo": {
- "version": "5.12.1",
- "resolved": "https://registry.npmjs.org/envinfo/-/envinfo-5.12.1.tgz",
- "integrity": "sha512-pwdo0/G3CIkQ0y6PCXq4RdkvId2elvtPCJMG0konqlrfkWQbf1DWeH9K2b/cvu2YgGvPPTOnonZxXM1gikFu1w=="
+ "version": "7.5.1",
+ "resolved": "https://registry.npmjs.org/envinfo/-/envinfo-7.5.1.tgz",
+ "integrity": "sha512-hQBkDf2iO4Nv0CNHpCuSBeaSrveU6nThVxFGTrq/eDlV716UQk09zChaJae4mZRsos1x4YLY2TaH3LHUae3ZmQ=="
},
"eol": {
"version": "0.8.1",
@@ -5908,11 +10304,11 @@
}
},
"error-stack-parser": {
- "version": "2.0.2",
- "resolved": "https://registry.npmjs.org/error-stack-parser/-/error-stack-parser-2.0.2.tgz",
- "integrity": "sha512-E1fPutRDdIj/hohG0UpT5mayXNCxXP9d+snxFsPU9X0XgccOumKraa3juDMwTUyi7+Bu5+mCGagjg4IYeNbOdw==",
+ "version": "2.0.6",
+ "resolved": "https://registry.npmjs.org/error-stack-parser/-/error-stack-parser-2.0.6.tgz",
+ "integrity": "sha512-d51brTeqC+BHlwF0BhPtcYgF5nlzf9ZZ0ZIUQNZpc9ZB9qw5IJ2diTrBY9jlCJkTLITYPjmiX6OWCwH+fuyNgQ==",
"requires": {
- "stackframe": "^1.0.4"
+ "stackframe": "^1.1.1"
}
},
"es-abstract": {
@@ -5937,15 +10333,44 @@
"is-symbol": "^1.0.2"
}
},
+ "es5-ext": {
+ "version": "0.10.53",
+ "resolved": "https://registry.npmjs.org/es5-ext/-/es5-ext-0.10.53.tgz",
+ "integrity": "sha512-Xs2Stw6NiNHWypzRTY1MtaG/uJlwCk8kH81920ma8mvN8Xq1gsfhZvpkImLQArw8AHnv8MT2I45J3c0R8slE+Q==",
+ "requires": {
+ "es6-iterator": "~2.0.3",
+ "es6-symbol": "~3.1.3",
+ "next-tick": "~1.0.0"
+ }
+ },
+ "es6-iterator": {
+ "version": "2.0.3",
+ "resolved": "https://registry.npmjs.org/es6-iterator/-/es6-iterator-2.0.3.tgz",
+ "integrity": "sha1-p96IkUGgWpSwhUQDstCg+/qY87c=",
+ "requires": {
+ "d": "1",
+ "es5-ext": "^0.10.35",
+ "es6-symbol": "^3.1.1"
+ }
+ },
"es6-promise": {
"version": "3.3.1",
"resolved": "http://registry.npmjs.org/es6-promise/-/es6-promise-3.3.1.tgz",
"integrity": "sha1-oIzd6EzNvzTQJ6FFG8kdS80ophM="
},
"es6-promisify": {
- "version": "6.0.1",
- "resolved": "https://registry.npmjs.org/es6-promisify/-/es6-promisify-6.0.1.tgz",
- "integrity": "sha512-J3ZkwbEnnO+fGAKrjVpeUAnZshAdfZvbhQpqfIH9kSAspReRC4nJnu8ewm55b4y9ElyeuhCTzJD0XiH8Tsbhlw=="
+ "version": "6.1.1",
+ "resolved": "https://registry.npmjs.org/es6-promisify/-/es6-promisify-6.1.1.tgz",
+ "integrity": "sha512-HBL8I3mIki5C1Cc9QjKUenHtnG0A5/xA8Q/AllRcfiwl2CZFXGK7ddBiCoRwAix4i2KxcQfjtIVcrVbB3vbmwg=="
+ },
+ "es6-symbol": {
+ "version": "3.1.3",
+ "resolved": "https://registry.npmjs.org/es6-symbol/-/es6-symbol-3.1.3.tgz",
+ "integrity": "sha512-NJ6Yn3FuDinBaBRWl/q5X/s4koRHBrgKAu+yGI6JCBeiu3qrcbJhwT2GeR/EXVfylRk8dpQVJoLEFhK+Mu31NA==",
+ "requires": {
+ "d": "^1.0.1",
+ "ext": "^1.1.2"
+ }
},
"escape-html": {
"version": "1.0.3",
@@ -5958,9 +10383,9 @@
"integrity": "sha1-G2HAViGQqN/2rjuyzwIAyhMLhtQ="
},
"eslint": {
- "version": "5.15.1",
- "resolved": "https://registry.npmjs.org/eslint/-/eslint-5.15.1.tgz",
- "integrity": "sha512-NTcm6vQ+PTgN3UBsALw5BMhgO6i5EpIjQF/Xb5tIh3sk9QhrFafujUOczGz4J24JBlzWclSB9Vmx8d+9Z6bFCg==",
+ "version": "5.16.0",
+ "resolved": "https://registry.npmjs.org/eslint/-/eslint-5.16.0.tgz",
+ "integrity": "sha512-S3Rz11i7c8AA5JPv7xAH+dOyq/Cu/VXHiHXBPOU1k/JAM5dXqQPt3qcrhpHSorXmrpu2g0gkIBVXAqCpzfoZIg==",
"requires": {
"@babel/code-frame": "^7.0.0",
"ajv": "^6.9.1",
@@ -5968,7 +10393,7 @@
"cross-spawn": "^6.0.5",
"debug": "^4.0.1",
"doctrine": "^3.0.0",
- "eslint-scope": "^4.0.2",
+ "eslint-scope": "^4.0.3",
"eslint-utils": "^1.3.1",
"eslint-visitor-keys": "^1.0.0",
"espree": "^5.0.1",
@@ -5982,7 +10407,7 @@
"import-fresh": "^3.0.0",
"imurmurhash": "^0.1.4",
"inquirer": "^6.2.2",
- "js-yaml": "^3.12.0",
+ "js-yaml": "^3.13.0",
"json-stable-stringify-without-jsonify": "^1.0.1",
"levn": "^0.3.0",
"lodash": "^4.17.11",
@@ -6001,11 +10426,11 @@
},
"dependencies": {
"ajv": {
- "version": "6.10.0",
- "resolved": "https://registry.npmjs.org/ajv/-/ajv-6.10.0.tgz",
- "integrity": "sha512-nffhOpkymDECQyR0mnsUtoCE8RlX38G0rYP+wgLWFyZuUyuuojSSvi/+euOiQBIn63whYwYVIIH1TvE3tu4OEg==",
+ "version": "6.12.2",
+ "resolved": "https://registry.npmjs.org/ajv/-/ajv-6.12.2.tgz",
+ "integrity": "sha512-k+V+hzjm5q/Mr8ef/1Y9goCmlsK4I6Sm74teeyGvFk1XrOsbsKLjEdrvny42CZ+a8sXbk8KWpY/bDwS+FLL2UQ==",
"requires": {
- "fast-deep-equal": "^2.0.1",
+ "fast-deep-equal": "^3.1.1",
"fast-json-stable-stringify": "^2.0.0",
"json-schema-traverse": "^0.4.1",
"uri-js": "^4.2.2"
@@ -6037,28 +10462,42 @@
}
},
"eslint-scope": {
- "version": "4.0.2",
- "resolved": "https://registry.npmjs.org/eslint-scope/-/eslint-scope-4.0.2.tgz",
- "integrity": "sha512-5q1+B/ogmHl8+paxtOKx38Z8LtWkVGuNt3+GQNErqwLl6ViNp/gdJGMCjZNxZ8j/VYjDNZ2Fo+eQc1TAVPIzbg==",
+ "version": "4.0.3",
+ "resolved": "https://registry.npmjs.org/eslint-scope/-/eslint-scope-4.0.3.tgz",
+ "integrity": "sha512-p7VutNr1O/QrxysMo3E45FjYDTeXBy0iTltPFNSqKAIfjDSXC+4dj+qfyuD8bfAXrW/y6lW3O76VaYNPKfpKrg==",
"requires": {
"esrecurse": "^4.1.0",
"estraverse": "^4.1.1"
}
},
+ "fast-deep-equal": {
+ "version": "3.1.3",
+ "resolved": "https://registry.npmjs.org/fast-deep-equal/-/fast-deep-equal-3.1.3.tgz",
+ "integrity": "sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q=="
+ },
"ignore": {
"version": "4.0.6",
"resolved": "https://registry.npmjs.org/ignore/-/ignore-4.0.6.tgz",
"integrity": "sha512-cyFDKrqc/YdcWFniJhzI42+AzS+gNwmUzOSFcRCQYwySuBBBy/KjuxWLZ/FHEH6Moq1NizMOBWyTcv8O4OZIMg=="
},
"import-fresh": {
- "version": "3.0.0",
- "resolved": "https://registry.npmjs.org/import-fresh/-/import-fresh-3.0.0.tgz",
- "integrity": "sha512-pOnA9tfM3Uwics+SaBLCNyZZZbK+4PTu0OPZtLlMIrv17EdBoC15S9Kn8ckJ9TZTyKb3ywNE5y1yeDxxGA7nTQ==",
+ "version": "3.2.1",
+ "resolved": "https://registry.npmjs.org/import-fresh/-/import-fresh-3.2.1.tgz",
+ "integrity": "sha512-6e1q1cnWP2RXD9/keSkxHScg508CdXqXWgWBaETNhyuBFz+kUZlKboh+ISK+bU++DmbHimVBrOz/zzPe0sZ3sQ==",
"requires": {
"parent-module": "^1.0.0",
"resolve-from": "^4.0.0"
}
},
+ "js-yaml": {
+ "version": "3.14.0",
+ "resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-3.14.0.tgz",
+ "integrity": "sha512-/4IbIeHcD9VMHFqDR/gQ7EdZdLimOvW2DdcxFjdyyZ9NsbS+ccrXqVWDtab/lRl5AlUqmpBx8EhPaWR+OtY17A==",
+ "requires": {
+ "argparse": "^1.0.7",
+ "esprima": "^4.0.0"
+ }
+ },
"resolve-from": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/resolve-from/-/resolve-from-4.0.0.tgz",
@@ -6075,20 +10514,20 @@
}
},
"eslint-config-react-app": {
- "version": "3.0.7",
- "resolved": "https://registry.npmjs.org/eslint-config-react-app/-/eslint-config-react-app-3.0.7.tgz",
- "integrity": "sha512-Mmmc9lIY/qvX6OEV09+ZLqVTz1aX8VVCrgCjBHXdmMGaC+pldD+87oj3BiJWXMSfcYs5iOo9gy0mGnQ8f/fMsQ==",
+ "version": "3.0.8",
+ "resolved": "https://registry.npmjs.org/eslint-config-react-app/-/eslint-config-react-app-3.0.8.tgz",
+ "integrity": "sha512-Ovi6Bva67OjXrom9Y/SLJRkrGqKhMAL0XCH8BizPhjEVEhYczl2ZKiNZI2CuqO5/CJwAfMwRXAVGY0KToWr1aA==",
"requires": {
- "confusing-browser-globals": "^1.0.5"
+ "confusing-browser-globals": "^1.0.6"
}
},
"eslint-import-resolver-node": {
- "version": "0.3.2",
- "resolved": "https://registry.npmjs.org/eslint-import-resolver-node/-/eslint-import-resolver-node-0.3.2.tgz",
- "integrity": "sha512-sfmTqJfPSizWu4aymbPr4Iidp5yKm8yDkHp+Ir3YiTHiiDfxh69mOUsmiqW6RZ9zRXFaF64GtYmN7e+8GHBv6Q==",
+ "version": "0.3.3",
+ "resolved": "https://registry.npmjs.org/eslint-import-resolver-node/-/eslint-import-resolver-node-0.3.3.tgz",
+ "integrity": "sha512-b8crLDo0M5RSe5YG8Pu2DYBj71tSB6OvXkfzwbJU2w7y8P4/yo0MyF8jU26IEuEuHF2K5/gcAJE3LhQGqBBbVg==",
"requires": {
"debug": "^2.6.9",
- "resolve": "^1.5.0"
+ "resolve": "^1.13.1"
},
"dependencies": {
"debug": {
@@ -6103,13 +10542,21 @@
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
"integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g="
+ },
+ "resolve": {
+ "version": "1.17.0",
+ "resolved": "https://registry.npmjs.org/resolve/-/resolve-1.17.0.tgz",
+ "integrity": "sha512-ic+7JYiV8Vi2yzQGFWOkiZD5Z9z7O2Zhm9XMaTxdJExKasieFCr+yXZ/WmXsckHiKl12ar0y6XiXDx3m4RHn1w==",
+ "requires": {
+ "path-parse": "^1.0.6"
+ }
}
}
},
"eslint-loader": {
- "version": "2.1.2",
- "resolved": "https://registry.npmjs.org/eslint-loader/-/eslint-loader-2.1.2.tgz",
- "integrity": "sha512-rA9XiXEOilLYPOIInvVH5S/hYfyTPyxag6DZhoQOduM+3TkghAEQ3VcFO8VnX4J4qg/UIBzp72aOf/xvYmpmsg==",
+ "version": "2.2.1",
+ "resolved": "https://registry.npmjs.org/eslint-loader/-/eslint-loader-2.2.1.tgz",
+ "integrity": "sha512-RLgV9hoCVsMLvOxCuNjdqOrUqIj9oJg8hF44vzJaYqsAHuY9G2YAeN3joQ9nxP0p5Th9iFSIpKo+SD8KISxXRg==",
"requires": {
"loader-fs-cache": "^1.0.0",
"loader-utils": "^1.0.2",
@@ -6119,11 +10566,11 @@
}
},
"eslint-module-utils": {
- "version": "2.3.0",
- "resolved": "https://registry.npmjs.org/eslint-module-utils/-/eslint-module-utils-2.3.0.tgz",
- "integrity": "sha512-lmDJgeOOjk8hObTysjqH7wyMi+nsHwwvfBykwfhjR1LNdd7C2uFJBvx4OpWYpXOw4df1yE1cDEVd1yLHitk34w==",
+ "version": "2.6.0",
+ "resolved": "https://registry.npmjs.org/eslint-module-utils/-/eslint-module-utils-2.6.0.tgz",
+ "integrity": "sha512-6j9xxegbqe8/kZY8cYpcp0xhbK0EgJlg3g9mib3/miLaExuuwc3n5UEfSnU6hWMbT0FAYVvDbL9RrRgpUeQIvA==",
"requires": {
- "debug": "^2.6.8",
+ "debug": "^2.6.9",
"pkg-dir": "^2.0.0"
},
"dependencies": {
@@ -6159,29 +10606,32 @@
}
},
"eslint-plugin-graphql": {
- "version": "2.1.1",
- "resolved": "https://registry.npmjs.org/eslint-plugin-graphql/-/eslint-plugin-graphql-2.1.1.tgz",
- "integrity": "sha512-JT2paUyu3e9ZDnroSshwUMc6pKcnkfXTsZInX1+/rPotvqOLVLtdrx/cmfb7PTJwjiEAshwcpm3/XPdTpsKJPw==",
+ "version": "3.1.1",
+ "resolved": "https://registry.npmjs.org/eslint-plugin-graphql/-/eslint-plugin-graphql-3.1.1.tgz",
+ "integrity": "sha512-VNu2AipS8P1BAnE/tcJ2EmBWjFlCnG+1jKdUlFNDQjocWZlFiPpMu9xYNXePoEXK+q+jG51M/6PdhOjEgJZEaQ==",
"requires": {
"graphql-config": "^2.0.1",
"lodash": "^4.11.1"
}
},
"eslint-plugin-import": {
- "version": "2.16.0",
- "resolved": "https://registry.npmjs.org/eslint-plugin-import/-/eslint-plugin-import-2.16.0.tgz",
- "integrity": "sha512-z6oqWlf1x5GkHIFgrSvtmudnqM6Q60KM4KvpWi5ubonMjycLjndvd5+8VAZIsTlHC03djdgJuyKG6XO577px6A==",
+ "version": "2.21.2",
+ "resolved": "https://registry.npmjs.org/eslint-plugin-import/-/eslint-plugin-import-2.21.2.tgz",
+ "integrity": "sha512-FEmxeGI6yaz+SnEB6YgNHlQK1Bs2DKLM+YF+vuTk5H8J9CLbJLtlPvRFgZZ2+sXiKAlN5dpdlrWOjK8ZoZJpQA==",
"requires": {
+ "array-includes": "^3.1.1",
+ "array.prototype.flat": "^1.2.3",
"contains-path": "^0.1.0",
"debug": "^2.6.9",
"doctrine": "1.5.0",
- "eslint-import-resolver-node": "^0.3.2",
- "eslint-module-utils": "^2.3.0",
+ "eslint-import-resolver-node": "^0.3.3",
+ "eslint-module-utils": "^2.6.0",
"has": "^1.0.3",
- "lodash": "^4.17.11",
"minimatch": "^3.0.4",
+ "object.values": "^1.1.1",
"read-pkg-up": "^2.0.0",
- "resolve": "^1.9.0"
+ "resolve": "^1.17.0",
+ "tsconfig-paths": "^3.9.0"
},
"dependencies": {
"debug": {
@@ -6194,25 +10644,96 @@
},
"doctrine": {
"version": "1.5.0",
- "resolved": "http://registry.npmjs.org/doctrine/-/doctrine-1.5.0.tgz",
+ "resolved": "https://registry.npmjs.org/doctrine/-/doctrine-1.5.0.tgz",
"integrity": "sha1-N53Ocw9hZvds76TmcHoVmwLFpvo=",
"requires": {
"esutils": "^2.0.2",
"isarray": "^1.0.0"
}
},
+ "es-abstract": {
+ "version": "1.17.6",
+ "resolved": "https://registry.npmjs.org/es-abstract/-/es-abstract-1.17.6.tgz",
+ "integrity": "sha512-Fr89bON3WFyUi5EvAeI48QTWX0AyekGgLA8H+c+7fbfCkJwRWRMLd8CQedNEyJuoYYhmtEqY92pgte1FAhBlhw==",
+ "requires": {
+ "es-to-primitive": "^1.2.1",
+ "function-bind": "^1.1.1",
+ "has": "^1.0.3",
+ "has-symbols": "^1.0.1",
+ "is-callable": "^1.2.0",
+ "is-regex": "^1.1.0",
+ "object-inspect": "^1.7.0",
+ "object-keys": "^1.1.1",
+ "object.assign": "^4.1.0",
+ "string.prototype.trimend": "^1.0.1",
+ "string.prototype.trimstart": "^1.0.1"
+ }
+ },
+ "es-to-primitive": {
+ "version": "1.2.1",
+ "resolved": "https://registry.npmjs.org/es-to-primitive/-/es-to-primitive-1.2.1.tgz",
+ "integrity": "sha512-QCOllgZJtaUo9miYBcLChTUaHNjJF3PYs1VidD7AwiEj1kYxKeQTctLAezAOH5ZKRH0g2IgPn6KwB4IT8iRpvA==",
+ "requires": {
+ "is-callable": "^1.1.4",
+ "is-date-object": "^1.0.1",
+ "is-symbol": "^1.0.2"
+ }
+ },
+ "has-symbols": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.0.1.tgz",
+ "integrity": "sha512-PLcsoqu++dmEIZB+6totNFKq/7Do+Z0u4oT0zKOJNl3lYK6vGwwu2hjHs+68OEZbTjiUE9bgOABXbP/GvrS0Kg=="
+ },
+ "is-callable": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/is-callable/-/is-callable-1.2.0.tgz",
+ "integrity": "sha512-pyVD9AaGLxtg6srb2Ng6ynWJqkHU9bEM087AKck0w8QwDarTfNcpIYoU8x8Hv2Icm8u6kFJM18Dag8lyqGkviw=="
+ },
+ "is-regex": {
+ "version": "1.1.0",
+ "resolved": "https://registry.npmjs.org/is-regex/-/is-regex-1.1.0.tgz",
+ "integrity": "sha512-iI97M8KTWID2la5uYXlkbSDQIg4F6o1sYboZKKTDpnDQMLtUL86zxhgDet3Q2SriaYsyGqZ6Mn2SjbRKeLHdqw==",
+ "requires": {
+ "has-symbols": "^1.0.1"
+ }
+ },
"ms": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
"integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g="
+ },
+ "object-keys": {
+ "version": "1.1.1",
+ "resolved": "https://registry.npmjs.org/object-keys/-/object-keys-1.1.1.tgz",
+ "integrity": "sha512-NuAESUOUMrlIXOfHKzD6bpPu3tYt3xvjNdRIQ+FeT0lNb4K8WR70CaDxhuNguS2XG+GjkyMwOzsN5ZktImfhLA=="
+ },
+ "object.values": {
+ "version": "1.1.1",
+ "resolved": "https://registry.npmjs.org/object.values/-/object.values-1.1.1.tgz",
+ "integrity": "sha512-WTa54g2K8iu0kmS/us18jEmdv1a4Wi//BZ/DTVYEcH0XhLM5NYdpDHja3gt57VrZLcNAO2WGA+KpWsDBaHt6eA==",
+ "requires": {
+ "define-properties": "^1.1.3",
+ "es-abstract": "^1.17.0-next.1",
+ "function-bind": "^1.1.1",
+ "has": "^1.0.3"
+ }
+ },
+ "resolve": {
+ "version": "1.17.0",
+ "resolved": "https://registry.npmjs.org/resolve/-/resolve-1.17.0.tgz",
+ "integrity": "sha512-ic+7JYiV8Vi2yzQGFWOkiZD5Z9z7O2Zhm9XMaTxdJExKasieFCr+yXZ/WmXsckHiKl12ar0y6XiXDx3m4RHn1w==",
+ "requires": {
+ "path-parse": "^1.0.6"
+ }
}
}
},
"eslint-plugin-jsx-a11y": {
- "version": "6.2.1",
- "resolved": "https://registry.npmjs.org/eslint-plugin-jsx-a11y/-/eslint-plugin-jsx-a11y-6.2.1.tgz",
- "integrity": "sha512-cjN2ObWrRz0TTw7vEcGQrx+YltMvZoOEx4hWU8eEERDnBIU00OTq7Vr+jA7DFKxiwLNv4tTh5Pq2GUNEa8b6+w==",
+ "version": "6.2.3",
+ "resolved": "https://registry.npmjs.org/eslint-plugin-jsx-a11y/-/eslint-plugin-jsx-a11y-6.2.3.tgz",
+ "integrity": "sha512-CawzfGt9w83tyuVekn0GDPU9ytYtxyxyFZ3aSWROmnRRFQFT2BiPJd7jvRdzNDi6oLWaS2asMeYSNMjWTV4eNg==",
"requires": {
+ "@babel/runtime": "^7.4.5",
"aria-query": "^3.0.0",
"array-includes": "^3.0.3",
"ast-types-flow": "^0.0.7",
@@ -6220,21 +10741,40 @@
"damerau-levenshtein": "^1.0.4",
"emoji-regex": "^7.0.2",
"has": "^1.0.3",
- "jsx-ast-utils": "^2.0.1"
+ "jsx-ast-utils": "^2.2.1"
+ },
+ "dependencies": {
+ "@babel/runtime": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.10.2.tgz",
+ "integrity": "sha512-6sF3uQw2ivImfVIl62RZ7MXhO2tap69WeWK57vAaimT6AZbE4FbqjdEJIN1UqoD6wI6B+1n9UiagafH1sxjOtg==",
+ "requires": {
+ "regenerator-runtime": "^0.13.4"
+ }
+ },
+ "regenerator-runtime": {
+ "version": "0.13.5",
+ "resolved": "https://registry.npmjs.org/regenerator-runtime/-/regenerator-runtime-0.13.5.tgz",
+ "integrity": "sha512-ZS5w8CpKFinUzOwW3c83oPeVXoNsrLsaCoLtJvAClH135j/R77RuymhiSErhm2lKcwSCIpmvIWSbDkIfAqKQlA=="
+ }
}
},
"eslint-plugin-react": {
- "version": "7.12.4",
- "resolved": "https://registry.npmjs.org/eslint-plugin-react/-/eslint-plugin-react-7.12.4.tgz",
- "integrity": "sha512-1puHJkXJY+oS1t467MjbqjvX53uQ05HXwjqDgdbGBqf5j9eeydI54G3KwiJmWciQ0HTBacIKw2jgwSBSH3yfgQ==",
+ "version": "7.20.0",
+ "resolved": "https://registry.npmjs.org/eslint-plugin-react/-/eslint-plugin-react-7.20.0.tgz",
+ "integrity": "sha512-rqe1abd0vxMjmbPngo4NaYxTcR3Y4Hrmc/jg4T+sYz63yqlmJRknpEQfmWY+eDWPuMmix6iUIK+mv0zExjeLgA==",
"requires": {
- "array-includes": "^3.0.3",
+ "array-includes": "^3.1.1",
"doctrine": "^2.1.0",
"has": "^1.0.3",
- "jsx-ast-utils": "^2.0.1",
- "object.fromentries": "^2.0.0",
- "prop-types": "^15.6.2",
- "resolve": "^1.9.0"
+ "jsx-ast-utils": "^2.2.3",
+ "object.entries": "^1.1.1",
+ "object.fromentries": "^2.0.2",
+ "object.values": "^1.1.1",
+ "prop-types": "^15.7.2",
+ "resolve": "^1.15.1",
+ "string.prototype.matchall": "^4.0.2",
+ "xregexp": "^4.3.0"
},
"dependencies": {
"doctrine": {
@@ -6244,6 +10784,76 @@
"requires": {
"esutils": "^2.0.2"
}
+ },
+ "es-abstract": {
+ "version": "1.17.6",
+ "resolved": "https://registry.npmjs.org/es-abstract/-/es-abstract-1.17.6.tgz",
+ "integrity": "sha512-Fr89bON3WFyUi5EvAeI48QTWX0AyekGgLA8H+c+7fbfCkJwRWRMLd8CQedNEyJuoYYhmtEqY92pgte1FAhBlhw==",
+ "requires": {
+ "es-to-primitive": "^1.2.1",
+ "function-bind": "^1.1.1",
+ "has": "^1.0.3",
+ "has-symbols": "^1.0.1",
+ "is-callable": "^1.2.0",
+ "is-regex": "^1.1.0",
+ "object-inspect": "^1.7.0",
+ "object-keys": "^1.1.1",
+ "object.assign": "^4.1.0",
+ "string.prototype.trimend": "^1.0.1",
+ "string.prototype.trimstart": "^1.0.1"
+ }
+ },
+ "es-to-primitive": {
+ "version": "1.2.1",
+ "resolved": "https://registry.npmjs.org/es-to-primitive/-/es-to-primitive-1.2.1.tgz",
+ "integrity": "sha512-QCOllgZJtaUo9miYBcLChTUaHNjJF3PYs1VidD7AwiEj1kYxKeQTctLAezAOH5ZKRH0g2IgPn6KwB4IT8iRpvA==",
+ "requires": {
+ "is-callable": "^1.1.4",
+ "is-date-object": "^1.0.1",
+ "is-symbol": "^1.0.2"
+ }
+ },
+ "has-symbols": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.0.1.tgz",
+ "integrity": "sha512-PLcsoqu++dmEIZB+6totNFKq/7Do+Z0u4oT0zKOJNl3lYK6vGwwu2hjHs+68OEZbTjiUE9bgOABXbP/GvrS0Kg=="
+ },
+ "is-callable": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/is-callable/-/is-callable-1.2.0.tgz",
+ "integrity": "sha512-pyVD9AaGLxtg6srb2Ng6ynWJqkHU9bEM087AKck0w8QwDarTfNcpIYoU8x8Hv2Icm8u6kFJM18Dag8lyqGkviw=="
+ },
+ "is-regex": {
+ "version": "1.1.0",
+ "resolved": "https://registry.npmjs.org/is-regex/-/is-regex-1.1.0.tgz",
+ "integrity": "sha512-iI97M8KTWID2la5uYXlkbSDQIg4F6o1sYboZKKTDpnDQMLtUL86zxhgDet3Q2SriaYsyGqZ6Mn2SjbRKeLHdqw==",
+ "requires": {
+ "has-symbols": "^1.0.1"
+ }
+ },
+ "object-keys": {
+ "version": "1.1.1",
+ "resolved": "https://registry.npmjs.org/object-keys/-/object-keys-1.1.1.tgz",
+ "integrity": "sha512-NuAESUOUMrlIXOfHKzD6bpPu3tYt3xvjNdRIQ+FeT0lNb4K8WR70CaDxhuNguS2XG+GjkyMwOzsN5ZktImfhLA=="
+ },
+ "object.values": {
+ "version": "1.1.1",
+ "resolved": "https://registry.npmjs.org/object.values/-/object.values-1.1.1.tgz",
+ "integrity": "sha512-WTa54g2K8iu0kmS/us18jEmdv1a4Wi//BZ/DTVYEcH0XhLM5NYdpDHja3gt57VrZLcNAO2WGA+KpWsDBaHt6eA==",
+ "requires": {
+ "define-properties": "^1.1.3",
+ "es-abstract": "^1.17.0-next.1",
+ "function-bind": "^1.1.1",
+ "has": "^1.0.3"
+ }
+ },
+ "resolve": {
+ "version": "1.17.0",
+ "resolved": "https://registry.npmjs.org/resolve/-/resolve-1.17.0.tgz",
+ "integrity": "sha512-ic+7JYiV8Vi2yzQGFWOkiZD5Z9z7O2Zhm9XMaTxdJExKasieFCr+yXZ/WmXsckHiKl12ar0y6XiXDx3m4RHn1w==",
+ "requires": {
+ "path-parse": "^1.0.6"
+ }
}
}
},
@@ -6257,14 +10867,17 @@
}
},
"eslint-utils": {
- "version": "1.3.1",
- "resolved": "https://registry.npmjs.org/eslint-utils/-/eslint-utils-1.3.1.tgz",
- "integrity": "sha512-Z7YjnIldX+2XMcjr7ZkgEsOj/bREONV60qYeB/bjMAqqqZ4zxKyWX+BOUkdmRmA9riiIPVvo5x86m5elviOk0Q=="
+ "version": "1.4.3",
+ "resolved": "https://registry.npmjs.org/eslint-utils/-/eslint-utils-1.4.3.tgz",
+ "integrity": "sha512-fbBN5W2xdY45KulGXmLHZ3c3FHfVYmKg0IrAKGOkT/464PQsx2UeIzfz1RmEci+KLm1bBaAzZAh8+/E+XAeZ8Q==",
+ "requires": {
+ "eslint-visitor-keys": "^1.1.0"
+ }
},
"eslint-visitor-keys": {
- "version": "1.0.0",
- "resolved": "https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-1.0.0.tgz",
- "integrity": "sha512-qzm/XxIbxm/FHyH341ZrbnMUpe+5Bocte9xkmFMzPMjRaZMcXww+MpBptFvtU+79L362nqiLhekCxCxDPaUMBQ=="
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-1.2.0.tgz",
+ "integrity": "sha512-WFb4ihckKil6hu3Dp798xdzSfddwKKU3+nGniKF6HfeW6OLd2OUDEPP7TcHtB5+QXOKg2s6B2DaMPE1Nn/kxKQ=="
},
"espree": {
"version": "5.0.1",
@@ -6282,11 +10895,18 @@
"integrity": "sha512-eGuFFw7Upda+g4p+QHvnW0RyTX/SVeJBDM/gCtMARO0cLuT2HcEKnTPvhjV6aGeqrCB/sbNop0Kszm0jsaWU4A=="
},
"esquery": {
- "version": "1.0.1",
- "resolved": "https://registry.npmjs.org/esquery/-/esquery-1.0.1.tgz",
- "integrity": "sha512-SmiyZ5zIWH9VM+SRUReLS5Q8a7GxtRdxEBVZpm98rJM7Sb+A9DVCndXfkeFUd3byderg+EbDkfnevfCwynWaNA==",
+ "version": "1.3.1",
+ "resolved": "https://registry.npmjs.org/esquery/-/esquery-1.3.1.tgz",
+ "integrity": "sha512-olpvt9QG0vniUBZspVRN6lwB7hOZoTRtT+jzR+tS4ffYx2mzbw+z0XCOk44aaLYKApNX5nMm+E+P6o25ip/DHQ==",
"requires": {
- "estraverse": "^4.0.0"
+ "estraverse": "^5.1.0"
+ },
+ "dependencies": {
+ "estraverse": {
+ "version": "5.1.0",
+ "resolved": "https://registry.npmjs.org/estraverse/-/estraverse-5.1.0.tgz",
+ "integrity": "sha512-FyohXK+R0vE+y1nHLoBM7ZTyqRpqAlhdZHCWIWEviFLiGB8b04H6bQs8G+XTthacvT8VuwvteiP7RJSxMs8UEw=="
+ }
}
},
"esrecurse": {
@@ -6298,9 +10918,9 @@
}
},
"estraverse": {
- "version": "4.2.0",
- "resolved": "https://registry.npmjs.org/estraverse/-/estraverse-4.2.0.tgz",
- "integrity": "sha1-De4/7TH81GlhjOc0IJn8GvoL2xM="
+ "version": "4.3.0",
+ "resolved": "https://registry.npmjs.org/estraverse/-/estraverse-4.3.0.tgz",
+ "integrity": "sha512-39nnKffWz8xN1BU/2c79n9nB9HDzo0niYUqx6xyqUnyoAnQyyWpOTdZEeiCch8BBu515t4wp9ZmgVfVhn9EBpw=="
},
"esutils": {
"version": "2.0.2",
@@ -6321,19 +10941,19 @@
}
},
"event-source-polyfill": {
- "version": "1.0.5",
- "resolved": "https://registry.npmjs.org/event-source-polyfill/-/event-source-polyfill-1.0.5.tgz",
- "integrity": "sha512-PdStgZ3+G2o2gjqsBYbV4931ByVmwLwSrX7mFgawCL+9I1npo9dwAQTnWtNWXe5IY2P8+AbbPteeOueiEtRCUA=="
+ "version": "1.0.15",
+ "resolved": "https://registry.npmjs.org/event-source-polyfill/-/event-source-polyfill-1.0.15.tgz",
+ "integrity": "sha512-IVmd8jWwX6ag5rXIdVCPBjBChiHBceLb1/7aKPIK7CUeJ5Br7alx029+ZpQlK4jW4Hk2qncy3ClJP97S8ltvmg=="
},
"eventemitter3": {
- "version": "3.1.0",
- "resolved": "https://registry.npmjs.org/eventemitter3/-/eventemitter3-3.1.0.tgz",
- "integrity": "sha512-ivIvhpq/Y0uSjcHDcOIccjmYjGLcP09MFGE7ysAwkAvkXfpZlC985pH2/ui64DKazbTW/4kN3yqozUxlXzI6cA=="
+ "version": "3.1.2",
+ "resolved": "https://registry.npmjs.org/eventemitter3/-/eventemitter3-3.1.2.tgz",
+ "integrity": "sha512-tvtQIeLVHjDkJYnzf2dgVMxfuSGJeM/7UCG17TT4EumTfNtF+0nebF/4zWOIkCreAbtNqhGEboB6BWrwqNaw4Q=="
},
"events": {
- "version": "3.0.0",
- "resolved": "https://registry.npmjs.org/events/-/events-3.0.0.tgz",
- "integrity": "sha512-Dc381HFWJzEOhQ+d8pkNon++bk9h6cdAoAj4iE6Q4y6xgTzySWXlKn05/TVNpjnfRqi/X0EpJEJohPjNI3zpVA=="
+ "version": "3.1.0",
+ "resolved": "https://registry.npmjs.org/events/-/events-3.1.0.tgz",
+ "integrity": "sha512-Rv+u8MLHNOdMjTAFeT3nCjHn2aGlx435FP/sDHNaRhDEMwyI/aB22Kj2qIN8R0cw3z28psEQLYwxVKLsKrMgWg=="
},
"eventsource": {
"version": "0.1.6",
@@ -6462,42 +11082,50 @@
}
},
"express": {
- "version": "4.16.4",
- "resolved": "https://registry.npmjs.org/express/-/express-4.16.4.tgz",
- "integrity": "sha512-j12Uuyb4FMrd/qQAm6uCHAkPtO8FDTRJZBDd5D2KOL2eLaz1yUNdUB/NOIyq0iU4q4cFarsUCrnFDPBcnksuOg==",
+ "version": "4.17.1",
+ "resolved": "https://registry.npmjs.org/express/-/express-4.17.1.tgz",
+ "integrity": "sha512-mHJ9O79RqluphRrcw2X/GTh3k9tVv8YcoyY4Kkh4WDMUYKRZUq0h1o0w2rrrxBqM7VoeUVqgb27xlEMXTnYt4g==",
"requires": {
- "accepts": "~1.3.5",
+ "accepts": "~1.3.7",
"array-flatten": "1.1.1",
- "body-parser": "1.18.3",
- "content-disposition": "0.5.2",
+ "body-parser": "1.19.0",
+ "content-disposition": "0.5.3",
"content-type": "~1.0.4",
- "cookie": "0.3.1",
+ "cookie": "0.4.0",
"cookie-signature": "1.0.6",
"debug": "2.6.9",
"depd": "~1.1.2",
"encodeurl": "~1.0.2",
"escape-html": "~1.0.3",
"etag": "~1.8.1",
- "finalhandler": "1.1.1",
+ "finalhandler": "~1.1.2",
"fresh": "0.5.2",
"merge-descriptors": "1.0.1",
"methods": "~1.1.2",
"on-finished": "~2.3.0",
- "parseurl": "~1.3.2",
+ "parseurl": "~1.3.3",
"path-to-regexp": "0.1.7",
- "proxy-addr": "~2.0.4",
- "qs": "6.5.2",
- "range-parser": "~1.2.0",
+ "proxy-addr": "~2.0.5",
+ "qs": "6.7.0",
+ "range-parser": "~1.2.1",
"safe-buffer": "5.1.2",
- "send": "0.16.2",
- "serve-static": "1.13.2",
- "setprototypeof": "1.1.0",
- "statuses": "~1.4.0",
- "type-is": "~1.6.16",
+ "send": "0.17.1",
+ "serve-static": "1.14.1",
+ "setprototypeof": "1.1.1",
+ "statuses": "~1.5.0",
+ "type-is": "~1.6.18",
"utils-merge": "1.0.1",
"vary": "~1.1.2"
},
"dependencies": {
+ "content-disposition": {
+ "version": "0.5.3",
+ "resolved": "https://registry.npmjs.org/content-disposition/-/content-disposition-0.5.3.tgz",
+ "integrity": "sha512-ExO0774ikEObIAEV9kDo50o+79VCUdEB6n6lzKgGwupcVeRlhrj3qGAfwq8G6uBJjkqLrhT0qEYFcWng8z1z0g==",
+ "requires": {
+ "safe-buffer": "5.1.2"
+ }
+ },
"debug": {
"version": "2.6.9",
"resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz",
@@ -6510,18 +11138,38 @@
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
"integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g="
+ },
+ "qs": {
+ "version": "6.7.0",
+ "resolved": "https://registry.npmjs.org/qs/-/qs-6.7.0.tgz",
+ "integrity": "sha512-VCdBRNFTX1fyE7Nb6FYoURo/SPe62QCaAyzJvUjwRaIsc+NePBEniHlvxFmmX56+HZphIGtV0XeCirBtpDrTyQ=="
}
}
},
"express-graphql": {
- "version": "0.6.12",
- "resolved": "http://registry.npmjs.org/express-graphql/-/express-graphql-0.6.12.tgz",
- "integrity": "sha512-ouLWV0hRw4hnaLtXzzwhdC79ewxKbY2PRvm05mPc/zOH5W5WVCHDQ1SmNxEPBQdUeeSNh29aIqW9zEQkA3kMuA==",
+ "version": "0.7.1",
+ "resolved": "https://registry.npmjs.org/express-graphql/-/express-graphql-0.7.1.tgz",
+ "integrity": "sha512-YpheAqTbSKpb5h57rV2yu2dPNUBi4FvZDspZ5iEV3ov34PBRgnM4lEBkv60+vZRJ6SweYL14N8AGYdov7g6ooQ==",
"requires": {
- "accepts": "^1.3.0",
+ "accepts": "^1.3.5",
"content-type": "^1.0.4",
- "http-errors": "^1.3.0",
- "raw-body": "^2.3.2"
+ "http-errors": "^1.7.1",
+ "raw-body": "^2.3.3"
+ }
+ },
+ "ext": {
+ "version": "1.4.0",
+ "resolved": "https://registry.npmjs.org/ext/-/ext-1.4.0.tgz",
+ "integrity": "sha512-Key5NIsUxdqKg3vIsdw9dSuXpPCQ297y6wBjL30edxwPgt2E44WcWBZey/ZvUc6sERLTxKdyCu4gZFmUbk1Q7A==",
+ "requires": {
+ "type": "^2.0.0"
+ },
+ "dependencies": {
+ "type": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/type/-/type-2.0.0.tgz",
+ "integrity": "sha512-KBt58xCHry4Cejnc2ISQAF7QY+ORngsWfxezO68+12hKV6lQY8P/psIkcbjeHWn7MqcgciWJyCCevFMJdIXpow=="
+ }
}
},
"ext-list": {
@@ -6566,9 +11214,9 @@
}
},
"external-editor": {
- "version": "3.0.3",
- "resolved": "https://registry.npmjs.org/external-editor/-/external-editor-3.0.3.tgz",
- "integrity": "sha512-bn71H9+qWoOQKyZDo25mOMVpSmXROAsTJVVVYzrrtol3d4y+AsKjf4Iwl2Q+IuT0kFSQ1qo166UuIwqYq7mGnA==",
+ "version": "3.1.0",
+ "resolved": "https://registry.npmjs.org/external-editor/-/external-editor-3.1.0.tgz",
+ "integrity": "sha512-hMQ4CX1p1izmuLYyZqLMO/qGNw10wSv9QDCPfzXfyFrOaCSSoRfqE1Kf1s5an66J5JZC62NewG+mK49jOCtQew==",
"requires": {
"chardet": "^0.7.0",
"iconv-lite": "^0.4.24",
@@ -6644,6 +11292,11 @@
}
}
},
+ "extract-files": {
+ "version": "8.1.0",
+ "resolved": "https://registry.npmjs.org/extract-files/-/extract-files-8.1.0.tgz",
+ "integrity": "sha512-PTGtfthZK79WUMk+avLmwx3NGdU8+iVFXC2NMGxKsn0MnihOG2lvumj+AZo8CTwTrwjXDgZ5tztbRlEdRjBonQ=="
+ },
"extsprintf": {
"version": "1.3.0",
"resolved": "https://registry.npmjs.org/extsprintf/-/extsprintf-1.3.0.tgz",
@@ -6682,20 +11335,28 @@
"resolved": "https://registry.npmjs.org/fastparse/-/fastparse-1.1.2.tgz",
"integrity": "sha512-483XLLxTVIwWK3QTrMGRqUfUpoOs/0hbQrl2oz4J0pAcm3A3bu84wxTFqGqkJzewCLdME38xJLJAxBABfQT8sQ=="
},
+ "fastq": {
+ "version": "1.8.0",
+ "resolved": "https://registry.npmjs.org/fastq/-/fastq-1.8.0.tgz",
+ "integrity": "sha512-SMIZoZdLh/fgofivvIkmknUXyPnvxRE3DhtZ5Me3Mrsk5gyPL42F0xr51TdRXskBxHfMp+07bcYzfsYEsSQA9Q==",
+ "requires": {
+ "reusify": "^1.0.4"
+ }
+ },
"faye-websocket": {
- "version": "0.11.1",
- "resolved": "https://registry.npmjs.org/faye-websocket/-/faye-websocket-0.11.1.tgz",
- "integrity": "sha1-8O/hjE9W5PQK/H4Gxxn9XuYYjzg=",
+ "version": "0.11.3",
+ "resolved": "https://registry.npmjs.org/faye-websocket/-/faye-websocket-0.11.3.tgz",
+ "integrity": "sha512-D2y4bovYpzziGgbHYtGCMjlJM36vAl/y+xUyn1C+FVx8szd1E+86KwVw6XvYSzOP8iMpm1X0I4xJD+QtUb36OA==",
"requires": {
"websocket-driver": ">=0.5.1"
}
},
"fb-watchman": {
- "version": "2.0.0",
- "resolved": "https://registry.npmjs.org/fb-watchman/-/fb-watchman-2.0.0.tgz",
- "integrity": "sha1-VOmr99+i8mzZsWNsWIwa/AXeXVg=",
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/fb-watchman/-/fb-watchman-2.0.1.tgz",
+ "integrity": "sha512-DkPJKQeY6kKwmuMretBhr7G6Vodr7bFwDYTXIkfG1gjvNpaxBTQV3PbXg6bR1c1UP4jPOX0jHUbbHANL9vRjVg==",
"requires": {
- "bser": "^2.0.0"
+ "bser": "2.1.1"
}
},
"fbjs": {
@@ -6733,9 +11394,9 @@
}
},
"figgy-pudding": {
- "version": "3.5.1",
- "resolved": "https://registry.npmjs.org/figgy-pudding/-/figgy-pudding-3.5.1.tgz",
- "integrity": "sha512-vNKxJHTEKNThjfrdJwHc7brvM6eVevuO5nTj6ez8ZQ1qbXTvGthucRF7S4vf2cr71QVnT70V34v0S1DyQsti0w=="
+ "version": "3.5.2",
+ "resolved": "https://registry.npmjs.org/figgy-pudding/-/figgy-pudding-3.5.2.tgz",
+ "integrity": "sha512-0btnI/H8f2pavGMN8w40mlSKOfTK2SVJmBfBeVIj3kNw0swwgzyRq0d5TJVOwodFmtvpPeWPN/MCcfuWF0Ezbw=="
},
"figures": {
"version": "2.0.0",
@@ -6755,11 +11416,22 @@
},
"file-loader": {
"version": "1.1.11",
- "resolved": "http://registry.npmjs.org/file-loader/-/file-loader-1.1.11.tgz",
+ "resolved": "https://registry.npmjs.org/file-loader/-/file-loader-1.1.11.tgz",
"integrity": "sha512-TGR4HU7HUsGg6GCOPJnFk06RhWgEWFLAGWiT6rcD+GRC2keU3s9RGJ+b3Z6/U73jwwNb2gKLJ7YCrp+jvU4ALg==",
"requires": {
"loader-utils": "^1.0.2",
"schema-utils": "^0.4.5"
+ },
+ "dependencies": {
+ "schema-utils": {
+ "version": "0.4.7",
+ "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-0.4.7.tgz",
+ "integrity": "sha512-v/iwU6wvwGK8HbU9yi3/nhGzP0yGSuhQMzL6ySiec1FSrZZDkhm4noOSWzrNFo/jEc+SJY6jRTwuwbSXJPDUnQ==",
+ "requires": {
+ "ajv": "^6.1.0",
+ "ajv-keywords": "^3.1.0"
+ }
+ }
}
},
"file-type": {
@@ -6814,16 +11486,16 @@
}
},
"finalhandler": {
- "version": "1.1.1",
- "resolved": "http://registry.npmjs.org/finalhandler/-/finalhandler-1.1.1.tgz",
- "integrity": "sha512-Y1GUDo39ez4aHAw7MysnUD5JzYX+WaIj8I57kO3aEPT1fFRL4sr7mjei97FgnwhAyyzRYmQZaTHb2+9uZ1dPtg==",
+ "version": "1.1.2",
+ "resolved": "https://registry.npmjs.org/finalhandler/-/finalhandler-1.1.2.tgz",
+ "integrity": "sha512-aAWcW57uxVNrQZqFXjITpW3sIUQmHGG3qSb9mUah9MgMC4NeWhNOlNjXEYq3HjRAvL6arUviZGGJsBg6z0zsWA==",
"requires": {
"debug": "2.6.9",
"encodeurl": "~1.0.2",
"escape-html": "~1.0.3",
"on-finished": "~2.3.0",
- "parseurl": "~1.3.2",
- "statuses": "~1.4.0",
+ "parseurl": "~1.3.3",
+ "statuses": "~1.5.0",
"unpipe": "~1.0.0"
},
"dependencies": {
@@ -6843,13 +11515,29 @@
}
},
"find-cache-dir": {
- "version": "2.0.0",
- "resolved": "https://registry.npmjs.org/find-cache-dir/-/find-cache-dir-2.0.0.tgz",
- "integrity": "sha512-LDUY6V1Xs5eFskUVYtIwatojt6+9xC9Chnlk/jYOOvn3FAFfSaWddxahDGyNHh0b2dMXa6YW2m0tk8TdVaXHlA==",
+ "version": "2.1.0",
+ "resolved": "https://registry.npmjs.org/find-cache-dir/-/find-cache-dir-2.1.0.tgz",
+ "integrity": "sha512-Tq6PixE0w/VMFfCgbONnkiQIVol/JJL7nRMi20fqzA4NRs9AfeqMGeRdPi3wIhYkxjeBaWh2rxwapn5Tu3IqOQ==",
"requires": {
"commondir": "^1.0.1",
- "make-dir": "^1.0.0",
+ "make-dir": "^2.0.0",
"pkg-dir": "^3.0.0"
+ },
+ "dependencies": {
+ "make-dir": {
+ "version": "2.1.0",
+ "resolved": "https://registry.npmjs.org/make-dir/-/make-dir-2.1.0.tgz",
+ "integrity": "sha512-LS9X+dc8KLxXCb8dni79fLIIUA5VyZoyjSMCwTluaXA0o27cCK0bhXkpgw+sTXVpPy/lSO57ilRixqk0vDmtRA==",
+ "requires": {
+ "pify": "^4.0.1",
+ "semver": "^5.6.0"
+ }
+ },
+ "pify": {
+ "version": "4.0.1",
+ "resolved": "https://registry.npmjs.org/pify/-/pify-4.0.1.tgz",
+ "integrity": "sha512-uB80kBFb/tfd68bVleG9T5GGsGPjJrLAUpR5PZIrhBnIaRTQRjqdJSsIKkOP6OAIFbj7GOrcudc5pNjZ+geV2g=="
+ }
}
},
"find-up": {
@@ -6885,9 +11573,9 @@
},
"dependencies": {
"is-buffer": {
- "version": "2.0.3",
- "resolved": "https://registry.npmjs.org/is-buffer/-/is-buffer-2.0.3.tgz",
- "integrity": "sha512-U15Q7MXTuZlrbymiz95PJpZxu8IlipAp4dtS3wOdgPXx3mqBnslrWU14kxfHB+Py/+2PVKSr37dMAgM2A4uArw=="
+ "version": "2.0.4",
+ "resolved": "https://registry.npmjs.org/is-buffer/-/is-buffer-2.0.4.tgz",
+ "integrity": "sha512-Kq1rokWXOPXWuaMAqZiJW4XxsmD9zGx9q4aePabbn3qCRGedtH7Cm+zV8WETitMfu1wdh+Rvd6w5egwSngUX2A=="
}
}
},
@@ -6912,9 +11600,9 @@
}
},
"flatted": {
- "version": "2.0.0",
- "resolved": "https://registry.npmjs.org/flatted/-/flatted-2.0.0.tgz",
- "integrity": "sha512-R+H8IZclI8AAkSBRQJLVOsxwAoHd6WC40b4QTNWIjzAa6BXOBfQcM587MXDTVPeYaopFNWHUFLx7eNmHDSxMWg=="
+ "version": "2.0.2",
+ "resolved": "https://registry.npmjs.org/flatted/-/flatted-2.0.2.tgz",
+ "integrity": "sha512-r5wGx7YeOwNWNlCA0wQ86zKyDLMQr+/RB8xy74M4hTphfmjlijTSSXGuH8rnvKZnfT9i+75zmd8jcKdMR4O6jA=="
},
"flush-write-stream": {
"version": "1.1.1",
@@ -6926,11 +11614,11 @@
}
},
"follow-redirects": {
- "version": "1.7.0",
- "resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.7.0.tgz",
- "integrity": "sha512-m/pZQy4Gj287eNy94nivy5wchN3Kp+Q5WgUPNy5lJSZ3sgkVKSYV/ZChMAQVIgx1SqfZ2zBZtPA2YlXIWxxJOQ==",
+ "version": "1.11.0",
+ "resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.11.0.tgz",
+ "integrity": "sha512-KZm0V+ll8PfBrKwMzdo5D13b1bur9Iq9Zd/RMmAoQQcl2PxxFml8cxXPaaPYVbV0RjNjq1CU7zIzAOqtUPudmA==",
"requires": {
- "debug": "^3.2.6"
+ "debug": "^3.0.0"
}
},
"for-each": {
@@ -6987,40 +11675,6 @@
"resolved": "https://registry.npmjs.org/fresh/-/fresh-0.5.2.tgz",
"integrity": "sha1-PYyt2Q2XZWn6g1qx+OSyOhBWBac="
},
- "friendly-errors-webpack-plugin": {
- "version": "1.7.0",
- "resolved": "https://registry.npmjs.org/friendly-errors-webpack-plugin/-/friendly-errors-webpack-plugin-1.7.0.tgz",
- "integrity": "sha512-K27M3VK30wVoOarP651zDmb93R9zF28usW4ocaK3mfQeIEI5BPht/EzZs5E8QLLwbLRJQMwscAjDxYPb1FuNiw==",
- "requires": {
- "chalk": "^1.1.3",
- "error-stack-parser": "^2.0.0",
- "string-width": "^2.0.0"
- },
- "dependencies": {
- "ansi-styles": {
- "version": "2.2.1",
- "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-2.2.1.tgz",
- "integrity": "sha1-tDLdM1i2NM914eRmQ2gkBTPB3b4="
- },
- "chalk": {
- "version": "1.1.3",
- "resolved": "http://registry.npmjs.org/chalk/-/chalk-1.1.3.tgz",
- "integrity": "sha1-qBFcVeSnAv5NFQq9OHKCKn4J/Jg=",
- "requires": {
- "ansi-styles": "^2.2.1",
- "escape-string-regexp": "^1.0.2",
- "has-ansi": "^2.0.0",
- "strip-ansi": "^3.0.0",
- "supports-color": "^2.0.0"
- }
- },
- "supports-color": {
- "version": "2.0.0",
- "resolved": "http://registry.npmjs.org/supports-color/-/supports-color-2.0.0.tgz",
- "integrity": "sha1-U10EXOa2Nj+kARcIRimZXp3zJMc="
- }
- }
- },
"from2": {
"version": "2.3.0",
"resolved": "https://registry.npmjs.org/from2/-/from2-2.3.0.tgz",
@@ -7079,6 +11733,12 @@
"resolved": "https://registry.npmjs.org/fs.realpath/-/fs.realpath-1.0.0.tgz",
"integrity": "sha1-FQStJSMVjKpA20onh8sBQRmU6k8="
},
+ "fsevents": {
+ "version": "2.1.3",
+ "resolved": "https://registry.npmjs.org/fsevents/-/fsevents-2.1.3.tgz",
+ "integrity": "sha512-Auw9a4AxqWpa9GUfj370BMPzzyncfBABW8Mab7BGWBYDj4Isgq+cDKtx0i6u9jcX9pQDnswsaaOTgTmA5pEjuQ==",
+ "optional": true
+ },
"fstream": {
"version": "1.0.11",
"resolved": "https://registry.npmjs.org/fstream/-/fstream-1.0.11.tgz",
@@ -7101,9 +11761,9 @@
"integrity": "sha1-GwqzvVU7Kg1jmdKcDj6gslIHgyc="
},
"gatsby": {
- "version": "2.1.18",
- "resolved": "https://registry.npmjs.org/gatsby/-/gatsby-2.1.18.tgz",
- "integrity": "sha512-4PRaM9VEtAJ7MFwu6RuMdcWcewSL97NFp5CrIyNotZ/84HPDc7JXAxyldtQIb3Nme3u+1D2gWNiXvYdJkkWvWw==",
+ "version": "2.11.1",
+ "resolved": "https://registry.npmjs.org/gatsby/-/gatsby-2.11.1.tgz",
+ "integrity": "sha512-QqtnLD13xS8ptBgoF39sRoxZjDz+Wvl8V8mNi3V0Ifzli8EPC+9YG+9TbqPIU5DfRnGr8r/ZtH54iuVem2keGw==",
"requires": {
"@babel/code-frame": "^7.0.0",
"@babel/core": "^7.0.0",
@@ -7112,16 +11772,20 @@
"@babel/runtime": "^7.0.0",
"@babel/traverse": "^7.0.0",
"@gatsbyjs/relay-compiler": "2.0.0-printer-fix.2",
+ "@hapi/joi": "^14.0.0",
+ "@mikaelkristiansson/domready": "^1.0.9",
+ "@pieh/friendly-errors-webpack-plugin": "1.7.0-chalk-2",
"@reach/router": "^1.1.1",
+ "@stefanprobst/lokijs": "^1.5.6-b",
"address": "1.0.3",
- "autoprefixer": "^9.4.3",
+ "autoprefixer": "^9.6.0",
"babel-core": "7.0.0-bridge.0",
"babel-eslint": "^9.0.0",
"babel-loader": "^8.0.0",
"babel-plugin-add-module-exports": "^0.2.1",
"babel-plugin-dynamic-import-node": "^1.2.0",
- "babel-plugin-remove-graphql-queries": "^2.6.1",
- "babel-preset-gatsby": "^0.1.8",
+ "babel-plugin-remove-graphql-queries": "^2.7.0",
+ "babel-preset-gatsby": "^0.2.0",
"better-opn": "0.1.4",
"better-queue": "^3.8.6",
"bluebird": "^3.5.0",
@@ -7129,58 +11793,54 @@
"cache-manager": "^2.9.0",
"cache-manager-fs-hash": "^0.0.6",
"chalk": "^2.3.2",
- "chokidar": "^2.0.2",
+ "chokidar": "2.1.2",
"common-tags": "^1.4.0",
"compression": "^1.7.3",
"convert-hrtime": "^2.0.0",
"copyfiles": "^1.2.0",
"core-js": "^2.5.0",
+ "cors": "^2.8.5",
"css-loader": "^1.0.0",
"debug": "^3.1.0",
"del": "^3.0.0",
"detect-port": "^1.2.1",
"devcert-san": "^0.3.3",
- "domready": "^1.0.8",
"dotenv": "^4.0.0",
"eslint": "^5.6.0",
"eslint-config-react-app": "^3.0.0",
"eslint-loader": "^2.1.0",
"eslint-plugin-flowtype": "^2.46.1",
- "eslint-plugin-graphql": "^2.0.0",
+ "eslint-plugin-graphql": "^3.0.3",
"eslint-plugin-import": "^2.9.0",
"eslint-plugin-jsx-a11y": "^6.0.3",
"eslint-plugin-react": "^7.8.2",
"event-source-polyfill": "^1.0.5",
"express": "^4.16.3",
- "express-graphql": "^0.6.12",
+ "express-graphql": "^0.7.1",
"fast-levenshtein": "~2.0.4",
"file-loader": "^1.1.11",
"flat": "^4.0.0",
- "friendly-errors-webpack-plugin": "^1.6.1",
"fs-exists-cached": "1.0.0",
"fs-extra": "^5.0.0",
- "gatsby-cli": "^2.4.11",
- "gatsby-link": "^2.0.12",
- "gatsby-plugin-page-creator": "^2.0.8",
- "gatsby-react-router-scroll": "^2.0.4",
+ "gatsby-cli": "^2.7.4",
+ "gatsby-graphiql-explorer": "^0.2.0",
+ "gatsby-link": "^2.2.0",
+ "gatsby-plugin-page-creator": "^2.1.1",
+ "gatsby-react-router-scroll": "^2.1.0",
+ "gatsby-telemetry": "^1.1.1",
"glob": "^7.1.1",
+ "got": "8.0.0",
"graphql": "^14.1.1",
+ "graphql-compose": "^6.3.2",
"graphql-playground-middleware-express": "^1.7.10",
- "graphql-relay": "^0.6.0",
- "graphql-skip-limit": "^2.0.5",
- "graphql-tools": "^3.0.4",
- "graphql-type-json": "^0.2.1",
- "hash-mod": "^0.0.5",
"invariant": "^2.2.4",
"is-relative": "^1.0.0",
"is-relative-url": "^2.0.0",
+ "is-wsl": "^1.1.0",
"jest-worker": "^23.2.0",
- "joi": "12.x.x",
"json-loader": "^0.5.7",
"json-stringify-safe": "^5.0.1",
- "kebab-hash": "^0.1.2",
"lodash": "^4.17.10",
- "lokijs": "^1.5.6",
"md5": "^2.2.1",
"md5-file": "^3.1.1",
"mime": "^2.2.0",
@@ -7193,15 +11853,18 @@
"null-loader": "^0.1.1",
"opentracing": "^0.14.3",
"optimize-css-assets-webpack-plugin": "^5.0.1",
+ "parseurl": "^1.3.2",
"physical-cpu-count": "^2.0.0",
+ "pnp-webpack-plugin": "^1.4.1",
"postcss-flexbugs-fixes": "^3.0.0",
"postcss-loader": "^2.1.3",
+ "prop-types": "^15.6.1",
"raw-loader": "^0.5.1",
- "react-dev-utils": "^4.2.1",
+ "react-dev-utils": "^4.2.3",
"react-error-overlay": "^3.0.0",
- "react-hot-loader": "^4.6.2",
+ "react-hot-loader": "^4.8.4",
"redux": "^4.0.0",
- "request": "^2.85.0",
+ "redux-thunk": "^2.3.0",
"semver": "^5.6.0",
"shallow-compare": "^1.2.2",
"sift": "^5.1.0",
@@ -7215,6 +11878,7 @@
"true-case-path": "^1.0.3",
"type-of": "^2.0.1",
"url-loader": "^1.0.1",
+ "util.promisify": "^1.0.0",
"uuid": "^3.1.0",
"v8-compile-cache": "^1.1.0",
"webpack": "~4.28.4",
@@ -7223,119 +11887,671 @@
"webpack-hot-middleware": "^2.21.0",
"webpack-merge": "^4.1.0",
"webpack-stats-plugin": "^0.1.5",
+ "xstate": "^4.3.2",
"yaml-loader": "^0.5.0"
},
"dependencies": {
+ "@babel/highlight": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/highlight/-/highlight-7.10.1.tgz",
+ "integrity": "sha512-8rMof+gVP8mxYZApLF/JgNDAkdKa+aJt3ZYxF8z6+j/hpeXL7iMsKCPHa2jNMHu/qqBwzQF4OHNoYi8dMA/rYg==",
+ "requires": {
+ "@babel/helper-validator-identifier": "^7.10.1",
+ "chalk": "^2.0.0",
+ "js-tokens": "^4.0.0"
+ }
+ },
+ "@hapi/hoek": {
+ "version": "8.5.1",
+ "resolved": "https://registry.npmjs.org/@hapi/hoek/-/hoek-8.5.1.tgz",
+ "integrity": "sha512-yN7kbciD87WzLGc5539Tn0sApjyiGHAJgKvG9W8C7O+6c7qmoQMfVs0W4bX17eqz6C78QJqqFrtgdK5EWf6Qow=="
+ },
"ansi-regex": {
- "version": "3.0.0",
- "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz",
- "integrity": "sha1-7QMXwyIGT3lGbAKWa922Bas32Zg="
- },
- "cliui": {
"version": "4.1.0",
- "resolved": "https://registry.npmjs.org/cliui/-/cliui-4.1.0.tgz",
- "integrity": "sha512-4FG+RSG9DL7uEwRUZXZn3SS34DiDPfzP0VOiEwtUWlE+AR2EIg+hSyvrIgUUfhdgR/UkAeW2QHgeP+hWrXs7jQ==",
- "requires": {
- "string-width": "^2.1.1",
- "strip-ansi": "^4.0.0",
- "wrap-ansi": "^2.0.0"
- }
+ "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz",
+ "integrity": "sha512-1apePfXM1UOSqw0o9IiFAovVz9M5S1Dg+4TrDwfMewQ6p/rmMueb7tWZjQ1rx4Loy1ArBggoqGpfqqdI4rondg=="
},
- "execa": {
- "version": "0.8.0",
- "resolved": "https://registry.npmjs.org/execa/-/execa-0.8.0.tgz",
- "integrity": "sha1-2NdrvBtVIX7RkP1t1J08d07PyNo=",
+ "autoprefixer": {
+ "version": "9.8.0",
+ "resolved": "https://registry.npmjs.org/autoprefixer/-/autoprefixer-9.8.0.tgz",
+ "integrity": "sha512-D96ZiIHXbDmU02dBaemyAg53ez+6F5yZmapmgKcjm35yEe1uVDYI8hGW3VYoGRaG290ZFf91YxHrR518vC0u/A==",
"requires": {
- "cross-spawn": "^5.0.1",
- "get-stream": "^3.0.0",
- "is-stream": "^1.1.0",
- "npm-run-path": "^2.0.0",
- "p-finally": "^1.0.0",
- "signal-exit": "^3.0.0",
- "strip-eof": "^1.0.0"
- }
- },
- "gatsby-cli": {
- "version": "2.4.11",
- "resolved": "https://registry.npmjs.org/gatsby-cli/-/gatsby-cli-2.4.11.tgz",
- "integrity": "sha512-bN09Avwx8cDX17FPjHjOwnDBmD9g1WVyP1LrRoYcOeGUPCo3Qrg97znYW687BCYwetZ5UBF+Bb5KgJNKj4Vrqw==",
- "requires": {
- "@babel/code-frame": "^7.0.0",
- "@babel/runtime": "^7.0.0",
- "bluebird": "^3.5.0",
- "common-tags": "^1.4.0",
- "convert-hrtime": "^2.0.0",
- "core-js": "^2.5.0",
- "envinfo": "^5.8.1",
- "execa": "^0.8.0",
- "fs-exists-cached": "^1.0.0",
- "fs-extra": "^4.0.1",
- "hosted-git-info": "^2.6.0",
- "lodash": "^4.17.10",
- "opentracing": "^0.14.3",
- "pretty-error": "^2.1.1",
- "resolve-cwd": "^2.0.0",
- "source-map": "^0.5.7",
- "stack-trace": "^0.0.10",
- "update-notifier": "^2.3.0",
- "yargs": "^11.1.0",
- "yurnalist": "^1.0.2"
+ "browserslist": "^4.12.0",
+ "caniuse-lite": "^1.0.30001061",
+ "chalk": "^2.4.2",
+ "normalize-range": "^0.1.2",
+ "num2fraction": "^1.2.2",
+ "postcss": "^7.0.30",
+ "postcss-value-parser": "^4.1.0"
},
"dependencies": {
- "fs-extra": {
- "version": "4.0.3",
- "resolved": "https://registry.npmjs.org/fs-extra/-/fs-extra-4.0.3.tgz",
- "integrity": "sha512-q6rbdDd1o2mAnQreO7YADIxf/Whx4AHBiRf6d+/cVT8h44ss+lHgxf1FemcqDnQt9X3ct4McHr+JMGlYSsK7Cg==",
+ "browserslist": {
+ "version": "4.12.0",
+ "resolved": "https://registry.npmjs.org/browserslist/-/browserslist-4.12.0.tgz",
+ "integrity": "sha512-UH2GkcEDSI0k/lRkuDSzFl9ZZ87skSy9w2XAn1MsZnL+4c4rqbBd3e82UWHbYDpztABrPBhZsTEeuxVfHppqDg==",
"requires": {
- "graceful-fs": "^4.1.2",
- "jsonfile": "^4.0.0",
- "universalify": "^0.1.0"
+ "caniuse-lite": "^1.0.30001043",
+ "electron-to-chromium": "^1.3.413",
+ "node-releases": "^1.1.53",
+ "pkg-up": "^2.0.0"
+ }
+ },
+ "chalk": {
+ "version": "2.4.2",
+ "resolved": "https://registry.npmjs.org/chalk/-/chalk-2.4.2.tgz",
+ "integrity": "sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ==",
+ "requires": {
+ "ansi-styles": "^3.2.1",
+ "escape-string-regexp": "^1.0.5",
+ "supports-color": "^5.3.0"
}
}
}
},
+ "camelcase": {
+ "version": "5.3.1",
+ "resolved": "https://registry.npmjs.org/camelcase/-/camelcase-5.3.1.tgz",
+ "integrity": "sha512-L28STB170nwWS63UjtlEOE3dldQApaJXZkOI1uMFfzf3rRuPegHaHesyee+YxQ+W6SvRDQV6UrdOdRiR153wJg=="
+ },
+ "caniuse-lite": {
+ "version": "1.0.30001084",
+ "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001084.tgz",
+ "integrity": "sha512-ftdc5oGmhEbLUuMZ/Qp3mOpzfZLCxPYKcvGv6v2dJJ+8EdqcvZRbAGOiLmkM/PV1QGta/uwBs8/nCl6sokDW6w=="
+ },
+ "cliui": {
+ "version": "6.0.0",
+ "resolved": "https://registry.npmjs.org/cliui/-/cliui-6.0.0.tgz",
+ "integrity": "sha512-t6wbgtoCXvAzst7QgXxJYqPt0usEfbgQdftEPbLL/cvv6HPE5VgvqCuAIDR0NgU52ds6rFwqrgakNLrHEjCbrQ==",
+ "requires": {
+ "string-width": "^4.2.0",
+ "strip-ansi": "^6.0.0",
+ "wrap-ansi": "^6.2.0"
+ },
+ "dependencies": {
+ "ansi-regex": {
+ "version": "5.0.0",
+ "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz",
+ "integrity": "sha512-bY6fj56OUQ0hU1KjFNDQuJFezqKdrAyFdIevADiqrWHwSlbmBNMHp5ak2f40Pm8JTFyM2mqxkG6ngkHO11f/lg=="
+ },
+ "strip-ansi": {
+ "version": "6.0.0",
+ "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.0.tgz",
+ "integrity": "sha512-AuvKTrTfQNYNIctbR1K/YGTR1756GycPsg7b9bdV9Duqur4gv6aKqHXah67Z8ImS7WEz5QVcOtlfW2rZEugt6w==",
+ "requires": {
+ "ansi-regex": "^5.0.0"
+ }
+ }
+ }
+ },
+ "color-convert": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz",
+ "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==",
+ "requires": {
+ "color-name": "~1.1.4"
+ }
+ },
+ "color-name": {
+ "version": "1.1.4",
+ "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz",
+ "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA=="
+ },
+ "cross-spawn": {
+ "version": "7.0.3",
+ "resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-7.0.3.tgz",
+ "integrity": "sha512-iRDPJKUPVEND7dHPO8rkbOnPpyDygcDFtWjpeWNCgy8WP2rXcxXL8TskReQl6OrB2G7+UJrags1q15Fudc7G6w==",
+ "requires": {
+ "path-key": "^3.1.0",
+ "shebang-command": "^2.0.0",
+ "which": "^2.0.1"
+ }
+ },
+ "electron-to-chromium": {
+ "version": "1.3.474",
+ "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.3.474.tgz",
+ "integrity": "sha512-fPkSgT9IBKmVJz02XioNsIpg0WYmkPrvU1lUJblMMJALxyE7/32NGvbJQKKxpNokozPvqfqkuUqVClYsvetcLw=="
+ },
+ "emoji-regex": {
+ "version": "8.0.0",
+ "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz",
+ "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A=="
+ },
+ "execa": {
+ "version": "3.4.0",
+ "resolved": "https://registry.npmjs.org/execa/-/execa-3.4.0.tgz",
+ "integrity": "sha512-r9vdGQk4bmCuK1yKQu1KTwcT2zwfWdbdaXfCtAh+5nU/4fSX+JAb7vZGvI5naJrQlvONrEB20jeruESI69530g==",
+ "requires": {
+ "cross-spawn": "^7.0.0",
+ "get-stream": "^5.0.0",
+ "human-signals": "^1.1.1",
+ "is-stream": "^2.0.0",
+ "merge-stream": "^2.0.0",
+ "npm-run-path": "^4.0.0",
+ "onetime": "^5.1.0",
+ "p-finally": "^2.0.0",
+ "signal-exit": "^3.0.2",
+ "strip-final-newline": "^2.0.0"
+ }
+ },
+ "find-up": {
+ "version": "4.1.0",
+ "resolved": "https://registry.npmjs.org/find-up/-/find-up-4.1.0.tgz",
+ "integrity": "sha512-PpOwAdQ/YlXQ2vj8a3h8IipDuYRi3wceVQQGYWxNINccq40Anw7BlsEXCMbt1Zt+OLA6Fq9suIpIWD0OsnISlw==",
+ "requires": {
+ "locate-path": "^5.0.0",
+ "path-exists": "^4.0.0"
+ }
+ },
+ "gatsby-cli": {
+ "version": "2.12.45",
+ "resolved": "https://registry.npmjs.org/gatsby-cli/-/gatsby-cli-2.12.45.tgz",
+ "integrity": "sha512-CY0ltZ5DvrSo30MkPcU2FFGiCrQUjirXAY7TdFEhc8IeV5Gj9E2AFmUF1FO4Dna6yv47WBBaHKFpsiKc5Iq0XA==",
+ "requires": {
+ "@babel/code-frame": "^7.10.1",
+ "@babel/runtime": "^7.10.2",
+ "@hapi/joi": "^15.1.1",
+ "better-opn": "^1.0.0",
+ "bluebird": "^3.7.2",
+ "chalk": "^2.4.2",
+ "clipboardy": "^2.3.0",
+ "common-tags": "^1.8.0",
+ "configstore": "^5.0.1",
+ "convert-hrtime": "^3.0.0",
+ "core-js": "^2.6.11",
+ "envinfo": "^7.5.1",
+ "execa": "^3.4.0",
+ "fs-exists-cached": "^1.0.0",
+ "fs-extra": "^8.1.0",
+ "gatsby-core-utils": "^1.3.5",
+ "gatsby-recipes": "^0.1.39",
+ "gatsby-telemetry": "^1.3.11",
+ "hosted-git-info": "^3.0.4",
+ "ink": "^2.7.1",
+ "ink-spinner": "^3.0.1",
+ "is-valid-path": "^0.1.1",
+ "lodash": "^4.17.15",
+ "meant": "^1.0.1",
+ "node-fetch": "^2.6.0",
+ "object.entries": "^1.1.2",
+ "opentracing": "^0.14.4",
+ "pretty-error": "^2.1.1",
+ "progress": "^2.0.3",
+ "prompts": "^2.3.2",
+ "react": "^16.8.0",
+ "redux": "^4.0.5",
+ "resolve-cwd": "^3.0.0",
+ "semver": "^6.3.0",
+ "signal-exit": "^3.0.3",
+ "source-map": "0.7.3",
+ "stack-trace": "^0.0.10",
+ "strip-ansi": "^5.2.0",
+ "update-notifier": "^3.0.1",
+ "uuid": "3.4.0",
+ "yargs": "^15.3.1",
+ "yurnalist": "^1.1.2"
+ },
+ "dependencies": {
+ "@babel/code-frame": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.10.1.tgz",
+ "integrity": "sha512-IGhtTmpjGbYzcEDOw7DcQtbQSXcG9ftmAXtWTu9V936vDye4xjjekktFAtgZsWpzTj/X01jocB46mTywm/4SZw==",
+ "requires": {
+ "@babel/highlight": "^7.10.1"
+ }
+ },
+ "@babel/runtime": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.10.2.tgz",
+ "integrity": "sha512-6sF3uQw2ivImfVIl62RZ7MXhO2tap69WeWK57vAaimT6AZbE4FbqjdEJIN1UqoD6wI6B+1n9UiagafH1sxjOtg==",
+ "requires": {
+ "regenerator-runtime": "^0.13.4"
+ }
+ },
+ "@hapi/joi": {
+ "version": "15.1.1",
+ "resolved": "https://registry.npmjs.org/@hapi/joi/-/joi-15.1.1.tgz",
+ "integrity": "sha512-entf8ZMOK8sc+8YfeOlM8pCfg3b5+WZIKBfUaaJT8UsjAAPjartzxIYm3TIbjvA4u+u++KbcXD38k682nVHDAQ==",
+ "requires": {
+ "@hapi/address": "2.x.x",
+ "@hapi/bourne": "1.x.x",
+ "@hapi/hoek": "8.x.x",
+ "@hapi/topo": "3.x.x"
+ }
+ },
+ "better-opn": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/better-opn/-/better-opn-1.0.0.tgz",
+ "integrity": "sha512-q3eO2se4sFbTERB1dFBDdjTiIIpRohMErpwBX21lhPvmgmQNNrcQj0zbWRhMREDesJvyod9kxBS3kOtdAvkB/A==",
+ "requires": {
+ "open": "^6.4.0"
+ }
+ },
+ "bluebird": {
+ "version": "3.7.2",
+ "resolved": "https://registry.npmjs.org/bluebird/-/bluebird-3.7.2.tgz",
+ "integrity": "sha512-XpNj6GDQzdfW+r2Wnn7xiSAd7TM3jzkxGXBGTtWKuSXv1xUV+azxAm8jdWZN06QTQk+2N2XB9jRDkvbmQmcRtg=="
+ },
+ "chalk": {
+ "version": "2.4.2",
+ "resolved": "https://registry.npmjs.org/chalk/-/chalk-2.4.2.tgz",
+ "integrity": "sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ==",
+ "requires": {
+ "ansi-styles": "^3.2.1",
+ "escape-string-regexp": "^1.0.5",
+ "supports-color": "^5.3.0"
+ }
+ },
+ "convert-hrtime": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/convert-hrtime/-/convert-hrtime-3.0.0.tgz",
+ "integrity": "sha512-7V+KqSvMiHp8yWDuwfww06XleMWVVB9b9tURBx+G7UTADuo5hYPuowKloz4OzOqbPezxgo+fdQ1522WzPG4OeA=="
+ },
+ "core-js": {
+ "version": "2.6.11",
+ "resolved": "https://registry.npmjs.org/core-js/-/core-js-2.6.11.tgz",
+ "integrity": "sha512-5wjnpaT/3dV+XB4borEsnAYQchn00XSgTAWKDkEqv+K8KevjbzmofK6hfJ9TZIlpj2N0xQpazy7PiRQiWHqzWg=="
+ },
+ "fs-extra": {
+ "version": "8.1.0",
+ "resolved": "https://registry.npmjs.org/fs-extra/-/fs-extra-8.1.0.tgz",
+ "integrity": "sha512-yhlQgA6mnOJUKOsRUFsgJdQCvkKhcz8tlZG5HBQfReYZy46OwLcY+Zia0mtdHsOo9y/hP+CxMN0TU9QxoOtG4g==",
+ "requires": {
+ "graceful-fs": "^4.2.0",
+ "jsonfile": "^4.0.0",
+ "universalify": "^0.1.0"
+ }
+ },
+ "lodash": {
+ "version": "4.17.15",
+ "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz",
+ "integrity": "sha512-8xOcRHvCjnocdS5cpwXQXVzmmh5e5+saE2QGoeQmbKmRS6J3VQppPOIt0MnmE+4xlZoumy0GPG0D0MVIQbNA1A=="
+ },
+ "semver": {
+ "version": "6.3.0",
+ "resolved": "https://registry.npmjs.org/semver/-/semver-6.3.0.tgz",
+ "integrity": "sha512-b39TBaTSfV6yBrapU89p5fKekE2m/NwnDocOVruQFS1/veMgdzuPcnOM34M6CwxW8jH/lxEa5rBoDeUwu5HHTw=="
+ },
+ "signal-exit": {
+ "version": "3.0.3",
+ "resolved": "https://registry.npmjs.org/signal-exit/-/signal-exit-3.0.3.tgz",
+ "integrity": "sha512-VUJ49FC8U1OxwZLxIbTTrDvLnf/6TDgxZcK8wxR8zs13xpx7xbG60ndBlhNrFi2EMuFRoeDoJO7wthSLq42EjA=="
+ },
+ "source-map": {
+ "version": "0.7.3",
+ "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.7.3.tgz",
+ "integrity": "sha512-CkCj6giN3S+n9qrYiBTX5gystlENnRW5jZeNLHpe6aue+SrHcG5VYwujhW9s4dY31mEGsxBDrHR6oI69fTXsaQ=="
+ },
+ "uuid": {
+ "version": "3.4.0",
+ "resolved": "https://registry.npmjs.org/uuid/-/uuid-3.4.0.tgz",
+ "integrity": "sha512-HjSDRw6gZE5JMggctHBcjVak08+KEVhSIiDzFnT9S9aegmp85S/bReBVTb4QTFaRNptJ9kuYaNhnbNEOkbKb/A=="
+ }
+ }
+ },
+ "get-caller-file": {
+ "version": "2.0.5",
+ "resolved": "https://registry.npmjs.org/get-caller-file/-/get-caller-file-2.0.5.tgz",
+ "integrity": "sha512-DyFP3BM/3YHTQOCUL/w0OZHR0lpKeGrxotcHWcqNEdnltqFwXVfhEBQ94eIo34AfQpo0rGki4cyIiftY06h2Fg=="
+ },
+ "get-stream": {
+ "version": "5.1.0",
+ "resolved": "https://registry.npmjs.org/get-stream/-/get-stream-5.1.0.tgz",
+ "integrity": "sha512-EXr1FOzrzTfGeL0gQdeFEvOMm2mzMOglyiOXSTpPC+iAjAKftbr3jpCMWynogwYnM+eSj9sHGc6wjIcDvYiygw==",
+ "requires": {
+ "pump": "^3.0.0"
+ }
+ },
+ "graceful-fs": {
+ "version": "4.2.4",
+ "resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.4.tgz",
+ "integrity": "sha512-WjKPNJF79dtJAVniUlGGWHYGz2jWxT6VhN/4m1NdkbZ2nOsEF+cI1Edgql5zCRhs/VsQYRvrXctxktVXZUkixw=="
+ },
+ "hosted-git-info": {
+ "version": "3.0.4",
+ "resolved": "https://registry.npmjs.org/hosted-git-info/-/hosted-git-info-3.0.4.tgz",
+ "integrity": "sha512-4oT62d2jwSDBbLLFLZE+1vPuQ1h8p9wjrJ8Mqx5TjsyWmBMV5B13eJqn8pvluqubLf3cJPTfiYCIwNwDNmzScQ==",
+ "requires": {
+ "lru-cache": "^5.1.1"
+ }
+ },
+ "is-fullwidth-code-point": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-3.0.0.tgz",
+ "integrity": "sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg=="
+ },
+ "is-stream": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/is-stream/-/is-stream-2.0.0.tgz",
+ "integrity": "sha512-XCoy+WlUr7d1+Z8GgSuXmpuUFC9fOhRXglJMx+dwLKTkL44Cjd4W1Z5P+BQZpr+cR93aGP4S/s7Ftw6Nd/kiEw=="
+ },
+ "locate-path": {
+ "version": "5.0.0",
+ "resolved": "https://registry.npmjs.org/locate-path/-/locate-path-5.0.0.tgz",
+ "integrity": "sha512-t7hw9pI+WvuwNJXwk5zVHpyhIqzg2qTlklJOf0mVxGSbe3Fp2VieZcduNYjaLDoy6p9uGpQEGWG87WpMKlNq8g==",
+ "requires": {
+ "p-locate": "^4.1.0"
+ }
+ },
+ "lru-cache": {
+ "version": "5.1.1",
+ "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-5.1.1.tgz",
+ "integrity": "sha512-KpNARQA3Iwv+jTA0utUVVbrh+Jlrr1Fv0e56GGzAFOXN7dk/FviaDW8LHmK52DlcH4WP2n6gI8vN1aesBFgo9w==",
+ "requires": {
+ "yallist": "^3.0.2"
+ }
+ },
+ "mimic-fn": {
+ "version": "2.1.0",
+ "resolved": "https://registry.npmjs.org/mimic-fn/-/mimic-fn-2.1.0.tgz",
+ "integrity": "sha512-OqbOk5oEQeAZ8WXWydlu9HJjz9WVdEIvamMCcXmuqUYjTknH/sqsWvhQ3vgwKFRR1HpjvNBKQ37nbJgYzGqGcg=="
+ },
+ "node-fetch": {
+ "version": "2.6.0",
+ "resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-2.6.0.tgz",
+ "integrity": "sha512-8dG4H5ujfvFiqDmVu9fQ5bOHUC15JMjMY/Zumv26oOvvVJjM67KF8koCWIabKQ1GJIa9r2mMZscBq/TbdOcmNA=="
+ },
+ "node-releases": {
+ "version": "1.1.58",
+ "resolved": "https://registry.npmjs.org/node-releases/-/node-releases-1.1.58.tgz",
+ "integrity": "sha512-NxBudgVKiRh/2aPWMgPR7bPTX0VPmGx5QBwCtdHitnqFE5/O8DeBXuIMH1nwNnw/aMo6AjOrpsHzfY3UbUJ7yg=="
+ },
+ "npm-run-path": {
+ "version": "4.0.1",
+ "resolved": "https://registry.npmjs.org/npm-run-path/-/npm-run-path-4.0.1.tgz",
+ "integrity": "sha512-S48WzZW777zhNIrn7gxOlISNAqi9ZC/uQFnRdbeIHhZhCA6UqpkOT8T1G7BvfdgP4Er8gF4sUbaS0i7QvIfCWw==",
+ "requires": {
+ "path-key": "^3.0.0"
+ }
+ },
+ "onetime": {
+ "version": "5.1.0",
+ "resolved": "https://registry.npmjs.org/onetime/-/onetime-5.1.0.tgz",
+ "integrity": "sha512-5NcSkPHhwTVFIQN+TUqXoS5+dlElHXdpAWu9I0HP20YOtIi+aZ0Ct82jdlILDxjLEAWwvm+qj1m6aEtsDVmm6Q==",
+ "requires": {
+ "mimic-fn": "^2.1.0"
+ }
+ },
+ "p-finally": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/p-finally/-/p-finally-2.0.1.tgz",
+ "integrity": "sha512-vpm09aKwq6H9phqRQzecoDpD8TmVyGw70qmWlyq5onxY7tqyTTFVvxMykxQSQKILBSFlbXpypIw2T1Ml7+DDtw=="
+ },
+ "p-limit": {
+ "version": "2.3.0",
+ "resolved": "https://registry.npmjs.org/p-limit/-/p-limit-2.3.0.tgz",
+ "integrity": "sha512-//88mFWSJx8lxCzwdAABTJL2MyWB12+eIY7MDL2SqLmAkeKU9qxRvWuSyTjm3FUmpBEMuFfckAIqEaVGUDxb6w==",
+ "requires": {
+ "p-try": "^2.0.0"
+ }
+ },
+ "p-locate": {
+ "version": "4.1.0",
+ "resolved": "https://registry.npmjs.org/p-locate/-/p-locate-4.1.0.tgz",
+ "integrity": "sha512-R79ZZ/0wAxKGu3oYMlz8jy/kbhsNrS7SKZ7PxEHBgJ5+F2mtFW2fK2cOtBh1cHYkQsbzFV7I+EoRKe6Yt0oK7A==",
+ "requires": {
+ "p-limit": "^2.2.0"
+ }
+ },
+ "path-exists": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/path-exists/-/path-exists-4.0.0.tgz",
+ "integrity": "sha512-ak9Qy5Q7jYb2Wwcey5Fpvg2KoAc/ZIhLSLOSBmRmygPsGwkVVt0fZa0qrtMz+m6tJTAHfZQ8FnmB4MG4LWy7/w=="
+ },
+ "path-key": {
+ "version": "3.1.1",
+ "resolved": "https://registry.npmjs.org/path-key/-/path-key-3.1.1.tgz",
+ "integrity": "sha512-ojmeN0qd+y0jszEtoY48r0Peq5dwMEkIlCOu6Q5f41lfkswXuKtYrhgoTpLnyIcHm24Uhqx+5Tqm2InSwLhE6Q=="
+ },
+ "postcss": {
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
+ "requires": {
+ "chalk": "^2.4.2",
+ "source-map": "^0.6.1",
+ "supports-color": "^6.1.0"
+ },
+ "dependencies": {
+ "chalk": {
+ "version": "2.4.2",
+ "resolved": "https://registry.npmjs.org/chalk/-/chalk-2.4.2.tgz",
+ "integrity": "sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ==",
+ "requires": {
+ "ansi-styles": "^3.2.1",
+ "escape-string-regexp": "^1.0.5",
+ "supports-color": "^5.3.0"
+ },
+ "dependencies": {
+ "supports-color": {
+ "version": "5.5.0",
+ "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-5.5.0.tgz",
+ "integrity": "sha512-QjVjwdXIt408MIiAqCX4oUKsgU2EqAGzs2Ppkm4aQYbjm+ZEWEcW4SfFNTr4uMNZma0ey4f5lgLrkB0aX0QMow==",
+ "requires": {
+ "has-flag": "^3.0.0"
+ }
+ }
+ }
+ },
+ "supports-color": {
+ "version": "6.1.0",
+ "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-6.1.0.tgz",
+ "integrity": "sha512-qe1jfm1Mg7Nq/NSh6XE24gPXROEVsWHxC1LIx//XNlD9iw7YZQGjZNjYN7xGaEG6iKdA8EtNFW6R0gjnVXp+wQ==",
+ "requires": {
+ "has-flag": "^3.0.0"
+ }
+ }
+ }
+ },
+ "postcss-value-parser": {
+ "version": "4.1.0",
+ "resolved": "https://registry.npmjs.org/postcss-value-parser/-/postcss-value-parser-4.1.0.tgz",
+ "integrity": "sha512-97DXOFbQJhk71ne5/Mt6cOu6yxsSfM0QGQyl0L25Gca4yGWEGJaig7l7gbCX623VqTBNGLRLaVUCnNkcedlRSQ=="
+ },
"raw-loader": {
"version": "0.5.1",
"resolved": "https://registry.npmjs.org/raw-loader/-/raw-loader-0.5.1.tgz",
"integrity": "sha1-DD0L6u2KAclm2Xh793goElKpeao="
},
- "strip-ansi": {
- "version": "4.0.0",
- "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-4.0.0.tgz",
- "integrity": "sha1-qEeQIusaw2iocTibY1JixQXuNo8=",
+ "regenerator-runtime": {
+ "version": "0.13.5",
+ "resolved": "https://registry.npmjs.org/regenerator-runtime/-/regenerator-runtime-0.13.5.tgz",
+ "integrity": "sha512-ZS5w8CpKFinUzOwW3c83oPeVXoNsrLsaCoLtJvAClH135j/R77RuymhiSErhm2lKcwSCIpmvIWSbDkIfAqKQlA=="
+ },
+ "require-main-filename": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/require-main-filename/-/require-main-filename-2.0.0.tgz",
+ "integrity": "sha512-NKN5kMDylKuldxYLSUfrbo5Tuzh4hd+2E8NPPX02mZtn1VuREQToYe/ZdlJy+J3uCpfaiGF05e7B8W0iXbQHmg=="
+ },
+ "shebang-command": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/shebang-command/-/shebang-command-2.0.0.tgz",
+ "integrity": "sha512-kHxr2zZpYtdmrN1qDjrrX/Z1rR1kG8Dx+gkpK1G4eXmvXswmcE1hTWBWYUzlraYw1/yZp6YuDY77YtvbN0dmDA==",
"requires": {
- "ansi-regex": "^3.0.0"
+ "shebang-regex": "^3.0.0"
}
},
- "yargs": {
- "version": "11.1.0",
- "resolved": "https://registry.npmjs.org/yargs/-/yargs-11.1.0.tgz",
- "integrity": "sha512-NwW69J42EsCSanF8kyn5upxvjp5ds+t3+udGBeTbFnERA+lF541DDpMawzo4z6W/QrzNM18D+BPMiOBibnFV5A==",
+ "shebang-regex": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/shebang-regex/-/shebang-regex-3.0.0.tgz",
+ "integrity": "sha512-7++dFhtcx3353uBaq8DDR4NuxBetBzC7ZQOhmTQInHEd6bSrXdiEyzCvG07Z44UYdLShWUyXt5M/yhz8ekcb1A=="
+ },
+ "source-map": {
+ "version": "0.6.1",
+ "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz",
+ "integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g=="
+ },
+ "string-width": {
+ "version": "4.2.0",
+ "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.0.tgz",
+ "integrity": "sha512-zUz5JD+tgqtuDjMhwIg5uFVV3dtqZ9yQJlZVfq4I01/K5Paj5UHj7VyrQOJvzawSVlKpObApbfD0Ed6yJc+1eg==",
"requires": {
- "cliui": "^4.0.0",
- "decamelize": "^1.1.1",
- "find-up": "^2.1.0",
- "get-caller-file": "^1.0.1",
- "os-locale": "^2.0.0",
+ "emoji-regex": "^8.0.0",
+ "is-fullwidth-code-point": "^3.0.0",
+ "strip-ansi": "^6.0.0"
+ },
+ "dependencies": {
+ "ansi-regex": {
+ "version": "5.0.0",
+ "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz",
+ "integrity": "sha512-bY6fj56OUQ0hU1KjFNDQuJFezqKdrAyFdIevADiqrWHwSlbmBNMHp5ak2f40Pm8JTFyM2mqxkG6ngkHO11f/lg=="
+ },
+ "strip-ansi": {
+ "version": "6.0.0",
+ "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.0.tgz",
+ "integrity": "sha512-AuvKTrTfQNYNIctbR1K/YGTR1756GycPsg7b9bdV9Duqur4gv6aKqHXah67Z8ImS7WEz5QVcOtlfW2rZEugt6w==",
+ "requires": {
+ "ansi-regex": "^5.0.0"
+ }
+ }
+ }
+ },
+ "strip-ansi": {
+ "version": "5.2.0",
+ "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-5.2.0.tgz",
+ "integrity": "sha512-DuRs1gKbBqsMKIZlrffwlug8MHkcnpjs5VPmL1PAh+mA30U0DTotfDZ0d2UUsXpPmPmMMJ6W773MaA3J+lbiWA==",
+ "requires": {
+ "ansi-regex": "^4.1.0"
+ }
+ },
+ "which": {
+ "version": "2.0.2",
+ "resolved": "https://registry.npmjs.org/which/-/which-2.0.2.tgz",
+ "integrity": "sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA==",
+ "requires": {
+ "isexe": "^2.0.0"
+ }
+ },
+ "wrap-ansi": {
+ "version": "6.2.0",
+ "resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-6.2.0.tgz",
+ "integrity": "sha512-r6lPcBGxZXlIcymEu7InxDMhdW0KDxpLgoFLcguasxCaJ/SOIZwINatK9KY/tf+ZrlywOKU0UDj3ATXUBfxJXA==",
+ "requires": {
+ "ansi-styles": "^4.0.0",
+ "string-width": "^4.1.0",
+ "strip-ansi": "^6.0.0"
+ },
+ "dependencies": {
+ "ansi-regex": {
+ "version": "5.0.0",
+ "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz",
+ "integrity": "sha512-bY6fj56OUQ0hU1KjFNDQuJFezqKdrAyFdIevADiqrWHwSlbmBNMHp5ak2f40Pm8JTFyM2mqxkG6ngkHO11f/lg=="
+ },
+ "ansi-styles": {
+ "version": "4.2.1",
+ "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.2.1.tgz",
+ "integrity": "sha512-9VGjrMsG1vePxcSweQsN20KY/c4zN0h9fLjqAbwbPfahM3t+NL+M9HC8xeXG2I8pX5NoamTGNuomEUFI7fcUjA==",
+ "requires": {
+ "@types/color-name": "^1.1.1",
+ "color-convert": "^2.0.1"
+ }
+ },
+ "strip-ansi": {
+ "version": "6.0.0",
+ "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.0.tgz",
+ "integrity": "sha512-AuvKTrTfQNYNIctbR1K/YGTR1756GycPsg7b9bdV9Duqur4gv6aKqHXah67Z8ImS7WEz5QVcOtlfW2rZEugt6w==",
+ "requires": {
+ "ansi-regex": "^5.0.0"
+ }
+ }
+ }
+ },
+ "xstate": {
+ "version": "4.10.0",
+ "resolved": "https://registry.npmjs.org/xstate/-/xstate-4.10.0.tgz",
+ "integrity": "sha512-nncQ9gW+xgk5iUEvpBOXhbzSCS0uwzzT4bOAXxo6oUoALgbxzqEyMmaMYwuvOHrabDTdMJYnF+xe2XD8RRgWmA=="
+ },
+ "y18n": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz",
+ "integrity": "sha512-r9S/ZyXu/Xu9q1tYlpsLIsa3EeLXXk0VwlxqTcFRfg9EhMW+17kbt9G0NrgCmhGb5vT2hyhJZLfDGx+7+5Uj/w=="
+ },
+ "yallist": {
+ "version": "3.1.1",
+ "resolved": "https://registry.npmjs.org/yallist/-/yallist-3.1.1.tgz",
+ "integrity": "sha512-a4UGQaWPH59mOXUYnAG2ewncQS4i4F43Tv3JoAM+s2VDAmS9NsK8GpDMLrCHPksFT7h3K6TOoUNn2pb7RoXx4g=="
+ },
+ "yargs": {
+ "version": "15.3.1",
+ "resolved": "https://registry.npmjs.org/yargs/-/yargs-15.3.1.tgz",
+ "integrity": "sha512-92O1HWEjw27sBfgmXiixJWT5hRBp2eobqXicLtPBIDBhYB+1HpwZlXmbW2luivBJHBzki+7VyCLRtAkScbTBQA==",
+ "requires": {
+ "cliui": "^6.0.0",
+ "decamelize": "^1.2.0",
+ "find-up": "^4.1.0",
+ "get-caller-file": "^2.0.1",
"require-directory": "^2.1.1",
- "require-main-filename": "^1.0.1",
+ "require-main-filename": "^2.0.0",
"set-blocking": "^2.0.0",
- "string-width": "^2.0.0",
+ "string-width": "^4.2.0",
"which-module": "^2.0.0",
- "y18n": "^3.2.1",
- "yargs-parser": "^9.0.2"
+ "y18n": "^4.0.0",
+ "yargs-parser": "^18.1.1"
}
},
"yargs-parser": {
- "version": "9.0.2",
- "resolved": "https://registry.npmjs.org/yargs-parser/-/yargs-parser-9.0.2.tgz",
- "integrity": "sha1-nM9qQ0YP5O1Aqbto9I1DuKaMwHc=",
+ "version": "18.1.3",
+ "resolved": "https://registry.npmjs.org/yargs-parser/-/yargs-parser-18.1.3.tgz",
+ "integrity": "sha512-o50j0JeToy/4K6OZcaQmW6lyXXKhq7csREXcDwk2omFPJEwUNOVtJKvmDr9EI1fAJZUyZcRF7kxGBWmRXudrCQ==",
"requires": {
- "camelcase": "^4.1.0"
+ "camelcase": "^5.0.0",
+ "decamelize": "^1.2.0"
}
}
}
},
+ "gatsby-core-utils": {
+ "version": "1.3.5",
+ "resolved": "https://registry.npmjs.org/gatsby-core-utils/-/gatsby-core-utils-1.3.5.tgz",
+ "integrity": "sha512-kbwJ5BeQ8OixJVuBb1AGRL6vdkFz9nFBa6gXqjQ6AAXHhYDrjOYrRMIENT1QLoabWo6tlh0Hyl1agfWaQwW8lg==",
+ "requires": {
+ "ci-info": "2.0.0",
+ "configstore": "^5.0.1",
+ "fs-extra": "^8.1.0",
+ "node-object-hash": "^2.0.0",
+ "proper-lockfile": "^4.1.1",
+ "xdg-basedir": "^4.0.0"
+ },
+ "dependencies": {
+ "fs-extra": {
+ "version": "8.1.0",
+ "resolved": "https://registry.npmjs.org/fs-extra/-/fs-extra-8.1.0.tgz",
+ "integrity": "sha512-yhlQgA6mnOJUKOsRUFsgJdQCvkKhcz8tlZG5HBQfReYZy46OwLcY+Zia0mtdHsOo9y/hP+CxMN0TU9QxoOtG4g==",
+ "requires": {
+ "graceful-fs": "^4.2.0",
+ "jsonfile": "^4.0.0",
+ "universalify": "^0.1.0"
+ }
+ },
+ "graceful-fs": {
+ "version": "4.2.4",
+ "resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.4.tgz",
+ "integrity": "sha512-WjKPNJF79dtJAVniUlGGWHYGz2jWxT6VhN/4m1NdkbZ2nOsEF+cI1Edgql5zCRhs/VsQYRvrXctxktVXZUkixw=="
+ }
+ }
+ },
+ "gatsby-graphiql-explorer": {
+ "version": "0.2.37",
+ "resolved": "https://registry.npmjs.org/gatsby-graphiql-explorer/-/gatsby-graphiql-explorer-0.2.37.tgz",
+ "integrity": "sha512-dWfNA/CDjKO86DZLgxhYFSmK7DTCxwGvKm0HeMBYxcSyLP/WFAOoJjV2DCE2gMge28Sqmsz8ueOMZXM2YH8rIA==",
+ "requires": {
+ "@babel/runtime": "^7.8.7"
+ },
+ "dependencies": {
+ "@babel/runtime": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.10.2.tgz",
+ "integrity": "sha512-6sF3uQw2ivImfVIl62RZ7MXhO2tap69WeWK57vAaimT6AZbE4FbqjdEJIN1UqoD6wI6B+1n9UiagafH1sxjOtg==",
+ "requires": {
+ "regenerator-runtime": "^0.13.4"
+ }
+ },
+ "regenerator-runtime": {
+ "version": "0.13.5",
+ "resolved": "https://registry.npmjs.org/regenerator-runtime/-/regenerator-runtime-0.13.5.tgz",
+ "integrity": "sha512-ZS5w8CpKFinUzOwW3c83oPeVXoNsrLsaCoLtJvAClH135j/R77RuymhiSErhm2lKcwSCIpmvIWSbDkIfAqKQlA=="
+ }
+ }
+ },
"gatsby-image": {
"version": "2.0.29",
"resolved": "https://registry.npmjs.org/gatsby-image/-/gatsby-image-2.0.29.tgz",
@@ -7346,14 +12562,28 @@
}
},
"gatsby-link": {
- "version": "2.0.12",
- "resolved": "https://registry.npmjs.org/gatsby-link/-/gatsby-link-2.0.12.tgz",
- "integrity": "sha512-hyDQquhr6RJLNx1WN7D+1b6a4DApsYABQzrMZxaaFTfTp0+UIdyKBEKbN6nVA96qjzD1iq5fiWX3mBhpuhYeBw==",
+ "version": "2.4.6",
+ "resolved": "https://registry.npmjs.org/gatsby-link/-/gatsby-link-2.4.6.tgz",
+ "integrity": "sha512-jOYEJa860KHcVOZ/6gjMv2EnCG7StdD4mLEGEMcEC8mzn4PWHQXHYsGdXcOvjn6SaqJ888hWuYjik5Jm8xW+cg==",
"requires": {
- "@babel/runtime": "^7.0.0",
- "@reach/router": "^1.1.1",
- "@types/reach__router": "^1.0.0",
- "prop-types": "^15.6.1"
+ "@babel/runtime": "^7.10.2",
+ "@types/reach__router": "^1.3.3",
+ "prop-types": "^15.7.2"
+ },
+ "dependencies": {
+ "@babel/runtime": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.10.2.tgz",
+ "integrity": "sha512-6sF3uQw2ivImfVIl62RZ7MXhO2tap69WeWK57vAaimT6AZbE4FbqjdEJIN1UqoD6wI6B+1n9UiagafH1sxjOtg==",
+ "requires": {
+ "regenerator-runtime": "^0.13.4"
+ }
+ },
+ "regenerator-runtime": {
+ "version": "0.13.5",
+ "resolved": "https://registry.npmjs.org/regenerator-runtime/-/regenerator-runtime-0.13.5.tgz",
+ "integrity": "sha512-ZS5w8CpKFinUzOwW3c83oPeVXoNsrLsaCoLtJvAClH135j/R77RuymhiSErhm2lKcwSCIpmvIWSbDkIfAqKQlA=="
+ }
}
},
"gatsby-mdx": {
@@ -7412,6 +12642,154 @@
}
}
},
+ "gatsby-page-utils": {
+ "version": "0.2.9",
+ "resolved": "https://registry.npmjs.org/gatsby-page-utils/-/gatsby-page-utils-0.2.9.tgz",
+ "integrity": "sha512-Mh3QbDdKKrvbJRHtMsBvo+sDTaGfcTiXCFGTkFu2VbL3P6mZySFJ8fDLb9SbQvwvMVw/vD5IZT1KJerfmgfvGQ==",
+ "requires": {
+ "@babel/runtime": "^7.10.2",
+ "bluebird": "^3.7.2",
+ "chokidar": "3.4.0",
+ "fs-exists-cached": "^1.0.0",
+ "gatsby-core-utils": "^1.3.5",
+ "glob": "^7.1.6",
+ "lodash": "^4.17.15",
+ "micromatch": "^3.1.10"
+ },
+ "dependencies": {
+ "@babel/runtime": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.10.2.tgz",
+ "integrity": "sha512-6sF3uQw2ivImfVIl62RZ7MXhO2tap69WeWK57vAaimT6AZbE4FbqjdEJIN1UqoD6wI6B+1n9UiagafH1sxjOtg==",
+ "requires": {
+ "regenerator-runtime": "^0.13.4"
+ }
+ },
+ "anymatch": {
+ "version": "3.1.1",
+ "resolved": "https://registry.npmjs.org/anymatch/-/anymatch-3.1.1.tgz",
+ "integrity": "sha512-mM8522psRCqzV+6LhomX5wgp25YVibjh8Wj23I5RPkPppSVSjyKD2A2mBJmWGa+KN7f2D6LNh9jkBCeyLktzjg==",
+ "requires": {
+ "normalize-path": "^3.0.0",
+ "picomatch": "^2.0.4"
+ }
+ },
+ "binary-extensions": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/binary-extensions/-/binary-extensions-2.0.0.tgz",
+ "integrity": "sha512-Phlt0plgpIIBOGTT/ehfFnbNlfsDEiqmzE2KRXoX1bLIlir4X/MR+zSyBEkL05ffWgnRSf/DXv+WrUAVr93/ow=="
+ },
+ "bluebird": {
+ "version": "3.7.2",
+ "resolved": "https://registry.npmjs.org/bluebird/-/bluebird-3.7.2.tgz",
+ "integrity": "sha512-XpNj6GDQzdfW+r2Wnn7xiSAd7TM3jzkxGXBGTtWKuSXv1xUV+azxAm8jdWZN06QTQk+2N2XB9jRDkvbmQmcRtg=="
+ },
+ "braces": {
+ "version": "3.0.2",
+ "resolved": "https://registry.npmjs.org/braces/-/braces-3.0.2.tgz",
+ "integrity": "sha512-b8um+L1RzM3WDSzvhm6gIz1yfTbBt6YTlcEKAvsmqCZZFw46z626lVj9j1yEPW33H5H+lBQpZMP1k8l+78Ha0A==",
+ "requires": {
+ "fill-range": "^7.0.1"
+ }
+ },
+ "chokidar": {
+ "version": "3.4.0",
+ "resolved": "https://registry.npmjs.org/chokidar/-/chokidar-3.4.0.tgz",
+ "integrity": "sha512-aXAaho2VJtisB/1fg1+3nlLJqGOuewTzQpd/Tz0yTg2R0e4IGtshYvtjowyEumcBv2z+y4+kc75Mz7j5xJskcQ==",
+ "requires": {
+ "anymatch": "~3.1.1",
+ "braces": "~3.0.2",
+ "fsevents": "~2.1.2",
+ "glob-parent": "~5.1.0",
+ "is-binary-path": "~2.1.0",
+ "is-glob": "~4.0.1",
+ "normalize-path": "~3.0.0",
+ "readdirp": "~3.4.0"
+ }
+ },
+ "fill-range": {
+ "version": "7.0.1",
+ "resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.0.1.tgz",
+ "integrity": "sha512-qOo9F+dMUmC2Lcb4BbVvnKJxTPjCm+RRpe4gDuGrzkL7mEVl/djYSu2OdQ2Pa302N4oqkSg9ir6jaLWJ2USVpQ==",
+ "requires": {
+ "to-regex-range": "^5.0.1"
+ }
+ },
+ "glob": {
+ "version": "7.1.6",
+ "resolved": "https://registry.npmjs.org/glob/-/glob-7.1.6.tgz",
+ "integrity": "sha512-LwaxwyZ72Lk7vZINtNNrywX0ZuLyStrdDtabefZKAY5ZGJhVtgdznluResxNmPitE0SAO+O26sWTHeKSI2wMBA==",
+ "requires": {
+ "fs.realpath": "^1.0.0",
+ "inflight": "^1.0.4",
+ "inherits": "2",
+ "minimatch": "^3.0.4",
+ "once": "^1.3.0",
+ "path-is-absolute": "^1.0.0"
+ }
+ },
+ "glob-parent": {
+ "version": "5.1.1",
+ "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz",
+ "integrity": "sha512-FnI+VGOpnlGHWZxthPGR+QhR78fuiK0sNLkHQv+bL9fQi57lNNdquIbna/WrfROrolq8GK5Ek6BiMwqL/voRYQ==",
+ "requires": {
+ "is-glob": "^4.0.1"
+ }
+ },
+ "is-binary-path": {
+ "version": "2.1.0",
+ "resolved": "https://registry.npmjs.org/is-binary-path/-/is-binary-path-2.1.0.tgz",
+ "integrity": "sha512-ZMERYes6pDydyuGidse7OsHxtbI7WVeUEozgR/g7rd0xUimYNlvZRE/K2MgZTjWy725IfelLeVcEM97mmtRGXw==",
+ "requires": {
+ "binary-extensions": "^2.0.0"
+ }
+ },
+ "is-glob": {
+ "version": "4.0.1",
+ "resolved": "https://registry.npmjs.org/is-glob/-/is-glob-4.0.1.tgz",
+ "integrity": "sha512-5G0tKtBTFImOqDnLB2hG6Bp2qcKEFduo4tZu9MT/H6NQv/ghhy30o55ufafxJ/LdH79LLs2Kfrn85TLKyA7BUg==",
+ "requires": {
+ "is-extglob": "^2.1.1"
+ }
+ },
+ "is-number": {
+ "version": "7.0.0",
+ "resolved": "https://registry.npmjs.org/is-number/-/is-number-7.0.0.tgz",
+ "integrity": "sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng=="
+ },
+ "lodash": {
+ "version": "4.17.15",
+ "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz",
+ "integrity": "sha512-8xOcRHvCjnocdS5cpwXQXVzmmh5e5+saE2QGoeQmbKmRS6J3VQppPOIt0MnmE+4xlZoumy0GPG0D0MVIQbNA1A=="
+ },
+ "normalize-path": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/normalize-path/-/normalize-path-3.0.0.tgz",
+ "integrity": "sha512-6eZs5Ls3WtCisHWp9S2GUy8dqkpGi4BVSz3GaqiE6ezub0512ESztXUwUB6C6IKbQkY2Pnb/mD4WYojCRwcwLA=="
+ },
+ "readdirp": {
+ "version": "3.4.0",
+ "resolved": "https://registry.npmjs.org/readdirp/-/readdirp-3.4.0.tgz",
+ "integrity": "sha512-0xe001vZBnJEK+uKcj8qOhyAKPzIT+gStxWr3LCB0DwcXR5NZJ3IaC+yGnHCYzB/S7ov3m3EEbZI2zeNvX+hGQ==",
+ "requires": {
+ "picomatch": "^2.2.1"
+ }
+ },
+ "regenerator-runtime": {
+ "version": "0.13.5",
+ "resolved": "https://registry.npmjs.org/regenerator-runtime/-/regenerator-runtime-0.13.5.tgz",
+ "integrity": "sha512-ZS5w8CpKFinUzOwW3c83oPeVXoNsrLsaCoLtJvAClH135j/R77RuymhiSErhm2lKcwSCIpmvIWSbDkIfAqKQlA=="
+ },
+ "to-regex-range": {
+ "version": "5.0.1",
+ "resolved": "https://registry.npmjs.org/to-regex-range/-/to-regex-range-5.0.1.tgz",
+ "integrity": "sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ==",
+ "requires": {
+ "is-number": "^7.0.0"
+ }
+ }
+ }
+ },
"gatsby-plugin-catch-links": {
"version": "2.0.11",
"resolved": "https://registry.npmjs.org/gatsby-plugin-catch-links/-/gatsby-plugin-catch-links-2.0.11.tgz",
@@ -7467,19 +12845,55 @@
}
},
"gatsby-plugin-page-creator": {
- "version": "2.0.8",
- "resolved": "https://registry.npmjs.org/gatsby-plugin-page-creator/-/gatsby-plugin-page-creator-2.0.8.tgz",
- "integrity": "sha512-F8OBN4R4TzYS5rOGPkoqZnZJef0wL7W1UBTacKgRQDB0C5wozWKTBMTYTWOFaGSt09rApKQzRdJdIYiStsDGyA==",
+ "version": "2.3.9",
+ "resolved": "https://registry.npmjs.org/gatsby-plugin-page-creator/-/gatsby-plugin-page-creator-2.3.9.tgz",
+ "integrity": "sha512-ZNfjSoJ3AyACP5FWo0rwoeuIoZdD58le7oCmcVHVks/KOS/pJVGn8GwcrHE6xxCNM4KzqdfNBGZVyM+7RUASyA==",
"requires": {
- "@babel/runtime": "^7.0.0",
- "bluebird": "^3.5.0",
- "chokidar": "^2.1.2",
+ "@babel/runtime": "^7.10.2",
+ "bluebird": "^3.7.2",
"fs-exists-cached": "^1.0.0",
- "glob": "^7.1.1",
- "lodash": "^4.17.10",
- "micromatch": "^3.1.10",
- "parse-filepath": "^1.0.1",
- "slash": "^1.0.0"
+ "gatsby-page-utils": "^0.2.9",
+ "glob": "^7.1.6",
+ "lodash": "^4.17.15",
+ "micromatch": "^3.1.10"
+ },
+ "dependencies": {
+ "@babel/runtime": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.10.2.tgz",
+ "integrity": "sha512-6sF3uQw2ivImfVIl62RZ7MXhO2tap69WeWK57vAaimT6AZbE4FbqjdEJIN1UqoD6wI6B+1n9UiagafH1sxjOtg==",
+ "requires": {
+ "regenerator-runtime": "^0.13.4"
+ }
+ },
+ "bluebird": {
+ "version": "3.7.2",
+ "resolved": "https://registry.npmjs.org/bluebird/-/bluebird-3.7.2.tgz",
+ "integrity": "sha512-XpNj6GDQzdfW+r2Wnn7xiSAd7TM3jzkxGXBGTtWKuSXv1xUV+azxAm8jdWZN06QTQk+2N2XB9jRDkvbmQmcRtg=="
+ },
+ "glob": {
+ "version": "7.1.6",
+ "resolved": "https://registry.npmjs.org/glob/-/glob-7.1.6.tgz",
+ "integrity": "sha512-LwaxwyZ72Lk7vZINtNNrywX0ZuLyStrdDtabefZKAY5ZGJhVtgdznluResxNmPitE0SAO+O26sWTHeKSI2wMBA==",
+ "requires": {
+ "fs.realpath": "^1.0.0",
+ "inflight": "^1.0.4",
+ "inherits": "2",
+ "minimatch": "^3.0.4",
+ "once": "^1.3.0",
+ "path-is-absolute": "^1.0.0"
+ }
+ },
+ "lodash": {
+ "version": "4.17.15",
+ "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz",
+ "integrity": "sha512-8xOcRHvCjnocdS5cpwXQXVzmmh5e5+saE2QGoeQmbKmRS6J3VQppPOIt0MnmE+4xlZoumy0GPG0D0MVIQbNA1A=="
+ },
+ "regenerator-runtime": {
+ "version": "0.13.5",
+ "resolved": "https://registry.npmjs.org/regenerator-runtime/-/regenerator-runtime-0.13.5.tgz",
+ "integrity": "sha512-ZS5w8CpKFinUzOwW3c83oPeVXoNsrLsaCoLtJvAClH135j/R77RuymhiSErhm2lKcwSCIpmvIWSbDkIfAqKQlA=="
+ }
}
},
"gatsby-plugin-react-helmet": {
@@ -7622,13 +13036,1065 @@
"integrity": "sha512-iNhS3e8TrWkrPY5EgrWpKoxh13DEcP3f8DrVGQ0mz8qafCHxcxjLUKaCUO6WZlgxsANIhm3dMXiABvqAebApzw=="
},
"gatsby-react-router-scroll": {
- "version": "2.0.4",
- "resolved": "https://registry.npmjs.org/gatsby-react-router-scroll/-/gatsby-react-router-scroll-2.0.4.tgz",
- "integrity": "sha512-PI/XAzIqlbmjrM4QuAd/1IYMRHvCJljOBbd5tleawgwihRegw2CA5yKBU126hvW6RmFo4p6qRxxuaPjNPE0ohA==",
+ "version": "2.3.1",
+ "resolved": "https://registry.npmjs.org/gatsby-react-router-scroll/-/gatsby-react-router-scroll-2.3.1.tgz",
+ "integrity": "sha512-l1yTR43T05k+7rhiE2O/S4kNsjacveyLzai7BGGo1N7qIvd/E3p2bWNEtr4BUbzel76aZJRQ3BXENbHyDDAuJw==",
"requires": {
- "@babel/runtime": "^7.0.0",
- "scroll-behavior": "^0.9.9",
+ "@babel/runtime": "^7.9.6",
+ "scroll-behavior": "^0.9.12",
"warning": "^3.0.0"
+ },
+ "dependencies": {
+ "@babel/runtime": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.10.2.tgz",
+ "integrity": "sha512-6sF3uQw2ivImfVIl62RZ7MXhO2tap69WeWK57vAaimT6AZbE4FbqjdEJIN1UqoD6wI6B+1n9UiagafH1sxjOtg==",
+ "requires": {
+ "regenerator-runtime": "^0.13.4"
+ }
+ },
+ "regenerator-runtime": {
+ "version": "0.13.5",
+ "resolved": "https://registry.npmjs.org/regenerator-runtime/-/regenerator-runtime-0.13.5.tgz",
+ "integrity": "sha512-ZS5w8CpKFinUzOwW3c83oPeVXoNsrLsaCoLtJvAClH135j/R77RuymhiSErhm2lKcwSCIpmvIWSbDkIfAqKQlA=="
+ },
+ "warning": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/warning/-/warning-3.0.0.tgz",
+ "integrity": "sha1-MuU3fLVy3kqwR1O9+IIcAe1gW3w=",
+ "requires": {
+ "loose-envify": "^1.0.0"
+ }
+ }
+ }
+ },
+ "gatsby-recipes": {
+ "version": "0.1.39",
+ "resolved": "https://registry.npmjs.org/gatsby-recipes/-/gatsby-recipes-0.1.39.tgz",
+ "integrity": "sha512-Pf8EGKhCAv4E1rU0NL6pKH9mC8QB/0pW/9oAAb9Rs2N3TeBYcQ36hQP95ana63GZwY35eKoFzjdWGHmegQw90Q==",
+ "requires": {
+ "@babel/core": "^7.10.2",
+ "@babel/generator": "^7.10.2",
+ "@babel/helper-plugin-utils": "^7.10.1",
+ "@babel/plugin-transform-react-jsx": "^7.10.1",
+ "@babel/standalone": "^7.10.2",
+ "@babel/template": "^7.10.1",
+ "@babel/types": "^7.10.2",
+ "@hapi/hoek": "8.x",
+ "@hapi/joi": "^15.1.1",
+ "@mdx-js/mdx": "^1.6.5",
+ "@mdx-js/react": "^1.6.5",
+ "@mdx-js/runtime": "^1.6.5",
+ "acorn": "^7.2.0",
+ "acorn-jsx": "^5.2.0",
+ "cors": "^2.8.5",
+ "debug": "^4.1.1",
+ "detect-port": "^1.3.0",
+ "execa": "^4.0.2",
+ "express": "^4.17.1",
+ "express-graphql": "^0.9.0",
+ "fs-extra": "^8.1.0",
+ "gatsby-core-utils": "^1.3.5",
+ "gatsby-telemetry": "^1.3.11",
+ "glob": "^7.1.6",
+ "graphql": "^14.6.0",
+ "graphql-compose": "^6.3.8",
+ "graphql-subscriptions": "^1.1.0",
+ "graphql-tools": "^6.0.5",
+ "graphql-type-json": "^0.3.1",
+ "hicat": "^0.7.0",
+ "html-tag-names": "^1.1.5",
+ "ink": "^2.7.1",
+ "ink-box": "^1.0.0",
+ "ink-link": "^1.1.0",
+ "ink-select-input": "^3.1.2",
+ "ink-spinner": "^3.0.1",
+ "is-binary-path": "^2.1.0",
+ "is-blank": "^2.1.0",
+ "is-string": "^1.0.5",
+ "is-url": "^1.2.4",
+ "jest-diff": "^25.5.0",
+ "lodash": "^4.17.15",
+ "mkdirp": "^0.5.1",
+ "node-fetch": "^2.6.0",
+ "pkg-dir": "^4.2.0",
+ "prettier": "^2.0.5",
+ "react-reconciler": "^0.25.1",
+ "remark-mdx": "^1.6.5",
+ "remark-parse": "^6.0.3",
+ "remark-stringify": "^8.0.0",
+ "resolve-cwd": "^3.0.0",
+ "semver": "^7.3.2",
+ "single-trailing-newline": "^1.0.0",
+ "strip-ansi": "^6.0.0",
+ "style-to-object": "^0.3.0",
+ "subscriptions-transport-ws": "^0.9.16",
+ "svg-tag-names": "^2.0.1",
+ "unified": "^8.4.2",
+ "unist-util-visit": "^2.0.2",
+ "urql": "^1.9.7",
+ "ws": "^7.3.0",
+ "xstate": "^4.10.0"
+ },
+ "dependencies": {
+ "@babel/code-frame": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.10.1.tgz",
+ "integrity": "sha512-IGhtTmpjGbYzcEDOw7DcQtbQSXcG9ftmAXtWTu9V936vDye4xjjekktFAtgZsWpzTj/X01jocB46mTywm/4SZw==",
+ "requires": {
+ "@babel/highlight": "^7.10.1"
+ }
+ },
+ "@babel/core": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/core/-/core-7.10.2.tgz",
+ "integrity": "sha512-KQmV9yguEjQsXqyOUGKjS4+3K8/DlOCE2pZcq4augdQmtTy5iv5EHtmMSJ7V4c1BIPjuwtZYqYLCq9Ga+hGBRQ==",
+ "requires": {
+ "@babel/code-frame": "^7.10.1",
+ "@babel/generator": "^7.10.2",
+ "@babel/helper-module-transforms": "^7.10.1",
+ "@babel/helpers": "^7.10.1",
+ "@babel/parser": "^7.10.2",
+ "@babel/template": "^7.10.1",
+ "@babel/traverse": "^7.10.1",
+ "@babel/types": "^7.10.2",
+ "convert-source-map": "^1.7.0",
+ "debug": "^4.1.0",
+ "gensync": "^1.0.0-beta.1",
+ "json5": "^2.1.2",
+ "lodash": "^4.17.13",
+ "resolve": "^1.3.2",
+ "semver": "^5.4.1",
+ "source-map": "^0.5.0"
+ },
+ "dependencies": {
+ "semver": {
+ "version": "5.7.1",
+ "resolved": "https://registry.npmjs.org/semver/-/semver-5.7.1.tgz",
+ "integrity": "sha512-sauaDf/PZdVgrLTNYHRtpXa1iRiKcaebiKQ1BJdpQlWH2lCvexQdX55snPFyK7QzpudqbCI0qXFfOasHdyNDGQ=="
+ }
+ }
+ },
+ "@babel/generator": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.10.2.tgz",
+ "integrity": "sha512-AxfBNHNu99DTMvlUPlt1h2+Hn7knPpH5ayJ8OqDWSeLld+Fi2AYBTC/IejWDM9Edcii4UzZRCsbUt0WlSDsDsA==",
+ "requires": {
+ "@babel/types": "^7.10.2",
+ "jsesc": "^2.5.1",
+ "lodash": "^4.17.13",
+ "source-map": "^0.5.0"
+ }
+ },
+ "@babel/helper-annotate-as-pure": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-annotate-as-pure/-/helper-annotate-as-pure-7.10.1.tgz",
+ "integrity": "sha512-ewp3rvJEwLaHgyWGe4wQssC2vjks3E80WiUe2BpMb0KhreTjMROCbxXcEovTrbeGVdQct5VjQfrv9EgC+xMzCw==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-builder-react-jsx": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-builder-react-jsx/-/helper-builder-react-jsx-7.10.1.tgz",
+ "integrity": "sha512-KXzzpyWhXgzjXIlJU1ZjIXzUPdej1suE6vzqgImZ/cpAsR/CC8gUcX4EWRmDfWz/cs6HOCPMBIJ3nKoXt3BFuw==",
+ "requires": {
+ "@babel/helper-annotate-as-pure": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-function-name": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-function-name/-/helper-function-name-7.10.1.tgz",
+ "integrity": "sha512-fcpumwhs3YyZ/ttd5Rz0xn0TpIwVkN7X0V38B9TWNfVF42KEkhkAAuPCQ3oXmtTRtiPJrmZ0TrfS0GKF0eMaRQ==",
+ "requires": {
+ "@babel/helper-get-function-arity": "^7.10.1",
+ "@babel/template": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-get-function-arity": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-get-function-arity/-/helper-get-function-arity-7.10.1.tgz",
+ "integrity": "sha512-F5qdXkYGOQUb0hpRaPoetF9AnsXknKjWMZ+wmsIRsp5ge5sFh4c3h1eH2pRTTuy9KKAA2+TTYomGXAtEL2fQEw==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-member-expression-to-functions": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-member-expression-to-functions/-/helper-member-expression-to-functions-7.10.1.tgz",
+ "integrity": "sha512-u7XLXeM2n50gb6PWJ9hoO5oO7JFPaZtrh35t8RqKLT1jFKj9IWeD1zrcrYp1q1qiZTdEarfDWfTIP8nGsu0h5g==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-module-imports": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-module-imports/-/helper-module-imports-7.10.1.tgz",
+ "integrity": "sha512-SFxgwYmZ3HZPyZwJRiVNLRHWuW2OgE5k2nrVs6D9Iv4PPnXVffuEHy83Sfx/l4SqF+5kyJXjAyUmrG7tNm+qVg==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-module-transforms": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-module-transforms/-/helper-module-transforms-7.10.1.tgz",
+ "integrity": "sha512-RLHRCAzyJe7Q7sF4oy2cB+kRnU4wDZY/H2xJFGof+M+SJEGhZsb+GFj5j1AD8NiSaVBJ+Pf0/WObiXu/zxWpFg==",
+ "requires": {
+ "@babel/helper-module-imports": "^7.10.1",
+ "@babel/helper-replace-supers": "^7.10.1",
+ "@babel/helper-simple-access": "^7.10.1",
+ "@babel/helper-split-export-declaration": "^7.10.1",
+ "@babel/template": "^7.10.1",
+ "@babel/types": "^7.10.1",
+ "lodash": "^4.17.13"
+ }
+ },
+ "@babel/helper-optimise-call-expression": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-optimise-call-expression/-/helper-optimise-call-expression-7.10.1.tgz",
+ "integrity": "sha512-a0DjNS1prnBsoKx83dP2falChcs7p3i8VMzdrSbfLhuQra/2ENC4sbri34dz/rWmDADsmF1q5GbfaXydh0Jbjg==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-plugin-utils": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.1.tgz",
+ "integrity": "sha512-fvoGeXt0bJc7VMWZGCAEBEMo/HAjW2mP8apF5eXK0wSqwLAVHAISCWRoLMBMUs2kqeaG77jltVqu4Hn8Egl3nA=="
+ },
+ "@babel/helper-replace-supers": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-replace-supers/-/helper-replace-supers-7.10.1.tgz",
+ "integrity": "sha512-SOwJzEfpuQwInzzQJGjGaiG578UYmyi2Xw668klPWV5n07B73S0a9btjLk/52Mlcxa+5AdIYqws1KyXRfMoB7A==",
+ "requires": {
+ "@babel/helper-member-expression-to-functions": "^7.10.1",
+ "@babel/helper-optimise-call-expression": "^7.10.1",
+ "@babel/traverse": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-simple-access": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-simple-access/-/helper-simple-access-7.10.1.tgz",
+ "integrity": "sha512-VSWpWzRzn9VtgMJBIWTZ+GP107kZdQ4YplJlCmIrjoLVSi/0upixezHCDG8kpPVTBJpKfxTH01wDhh+jS2zKbw==",
+ "requires": {
+ "@babel/template": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-split-export-declaration": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-split-export-declaration/-/helper-split-export-declaration-7.10.1.tgz",
+ "integrity": "sha512-UQ1LVBPrYdbchNhLwj6fetj46BcFwfS4NllJo/1aJsT+1dLTEnXJL0qHqtY7gPzF8S2fXBJamf1biAXV3X077g==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helpers": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helpers/-/helpers-7.10.1.tgz",
+ "integrity": "sha512-muQNHF+IdU6wGgkaJyhhEmI54MOZBKsFfsXFhboz1ybwJ1Kl7IHlbm2a++4jwrmY5UYsgitt5lfqo1wMFcHmyw==",
+ "requires": {
+ "@babel/template": "^7.10.1",
+ "@babel/traverse": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/highlight": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/highlight/-/highlight-7.10.1.tgz",
+ "integrity": "sha512-8rMof+gVP8mxYZApLF/JgNDAkdKa+aJt3ZYxF8z6+j/hpeXL7iMsKCPHa2jNMHu/qqBwzQF4OHNoYi8dMA/rYg==",
+ "requires": {
+ "@babel/helper-validator-identifier": "^7.10.1",
+ "chalk": "^2.0.0",
+ "js-tokens": "^4.0.0"
+ }
+ },
+ "@babel/parser": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.10.2.tgz",
+ "integrity": "sha512-PApSXlNMJyB4JiGVhCOlzKIif+TKFTvu0aQAhnTvfP/z3vVSN6ZypH5bfUNwFXXjRQtUEBNFd2PtmCmG2Py3qQ=="
+ },
+ "@babel/plugin-syntax-jsx": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-jsx/-/plugin-syntax-jsx-7.10.1.tgz",
+ "integrity": "sha512-+OxyOArpVFXQeXKLO9o+r2I4dIoVoy6+Uu0vKELrlweDM3QJADZj+Z+5ERansZqIZBcLj42vHnDI8Rz9BnRIuQ==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.10.1"
+ }
+ },
+ "@babel/plugin-syntax-object-rest-spread": {
+ "version": "7.8.3",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-object-rest-spread/-/plugin-syntax-object-rest-spread-7.8.3.tgz",
+ "integrity": "sha512-XoqMijGZb9y3y2XskN+P1wUGiVwWZ5JmoDRwx5+3GmEplNyVM2s2Dg8ILFQm8rWM48orGy5YpI5Bl8U1y7ydlA==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.8.0"
+ }
+ },
+ "@babel/plugin-transform-react-jsx": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-react-jsx/-/plugin-transform-react-jsx-7.10.1.tgz",
+ "integrity": "sha512-MBVworWiSRBap3Vs39eHt+6pJuLUAaK4oxGc8g+wY+vuSJvLiEQjW1LSTqKb8OUPtDvHCkdPhk7d6sjC19xyFw==",
+ "requires": {
+ "@babel/helper-builder-react-jsx": "^7.10.1",
+ "@babel/helper-builder-react-jsx-experimental": "^7.10.1",
+ "@babel/helper-plugin-utils": "^7.10.1",
+ "@babel/plugin-syntax-jsx": "^7.10.1"
+ }
+ },
+ "@babel/template": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/template/-/template-7.10.1.tgz",
+ "integrity": "sha512-OQDg6SqvFSsc9A0ej6SKINWrpJiNonRIniYondK2ViKhB06i3c0s+76XUft71iqBEe9S1OKsHwPAjfHnuvnCig==",
+ "requires": {
+ "@babel/code-frame": "^7.10.1",
+ "@babel/parser": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/traverse": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.10.1.tgz",
+ "integrity": "sha512-C/cTuXeKt85K+p08jN6vMDz8vSV0vZcI0wmQ36o6mjbuo++kPMdpOYw23W2XH04dbRt9/nMEfA4W3eR21CD+TQ==",
+ "requires": {
+ "@babel/code-frame": "^7.10.1",
+ "@babel/generator": "^7.10.1",
+ "@babel/helper-function-name": "^7.10.1",
+ "@babel/helper-split-export-declaration": "^7.10.1",
+ "@babel/parser": "^7.10.1",
+ "@babel/types": "^7.10.1",
+ "debug": "^4.1.0",
+ "globals": "^11.1.0",
+ "lodash": "^4.17.13"
+ }
+ },
+ "@babel/types": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.10.2.tgz",
+ "integrity": "sha512-AD3AwWBSz0AWF0AkCN9VPiWrvldXq+/e3cHa4J89vo4ymjz1XwrBFFVZmkJTsQIPNk+ZVomPSXUJqq8yyjZsng==",
+ "requires": {
+ "@babel/helper-validator-identifier": "^7.10.1",
+ "lodash": "^4.17.13",
+ "to-fast-properties": "^2.0.0"
+ }
+ },
+ "@hapi/hoek": {
+ "version": "8.5.1",
+ "resolved": "https://registry.npmjs.org/@hapi/hoek/-/hoek-8.5.1.tgz",
+ "integrity": "sha512-yN7kbciD87WzLGc5539Tn0sApjyiGHAJgKvG9W8C7O+6c7qmoQMfVs0W4bX17eqz6C78QJqqFrtgdK5EWf6Qow=="
+ },
+ "@hapi/joi": {
+ "version": "15.1.1",
+ "resolved": "https://registry.npmjs.org/@hapi/joi/-/joi-15.1.1.tgz",
+ "integrity": "sha512-entf8ZMOK8sc+8YfeOlM8pCfg3b5+WZIKBfUaaJT8UsjAAPjartzxIYm3TIbjvA4u+u++KbcXD38k682nVHDAQ==",
+ "requires": {
+ "@hapi/address": "2.x.x",
+ "@hapi/bourne": "1.x.x",
+ "@hapi/hoek": "8.x.x",
+ "@hapi/topo": "3.x.x"
+ }
+ },
+ "@mdx-js/mdx": {
+ "version": "1.6.5",
+ "resolved": "https://registry.npmjs.org/@mdx-js/mdx/-/mdx-1.6.5.tgz",
+ "integrity": "sha512-DC13eeEM0Dv9OD+UVhyB69BlV29d2eoAmfiR/XdgNl4R7YmRNEPGRD3QvGUdRUDxYdJBHauMz5ZIV507cNXXaA==",
+ "requires": {
+ "@babel/core": "7.9.6",
+ "@babel/plugin-syntax-jsx": "7.8.3",
+ "@babel/plugin-syntax-object-rest-spread": "7.8.3",
+ "@mdx-js/util": "^1.6.5",
+ "babel-plugin-apply-mdx-type-prop": "^1.6.5",
+ "babel-plugin-extract-import-names": "^1.6.5",
+ "camelcase-css": "2.0.1",
+ "detab": "2.0.3",
+ "hast-util-raw": "5.0.2",
+ "lodash.uniq": "4.5.0",
+ "mdast-util-to-hast": "9.1.0",
+ "remark-footnotes": "1.0.0",
+ "remark-mdx": "^1.6.5",
+ "remark-parse": "8.0.2",
+ "remark-squeeze-paragraphs": "4.0.0",
+ "style-to-object": "0.3.0",
+ "unified": "9.0.0",
+ "unist-builder": "2.0.3",
+ "unist-util-visit": "2.0.2"
+ },
+ "dependencies": {
+ "@babel/core": {
+ "version": "7.9.6",
+ "resolved": "https://registry.npmjs.org/@babel/core/-/core-7.9.6.tgz",
+ "integrity": "sha512-nD3deLvbsApbHAHttzIssYqgb883yU/d9roe4RZymBCDaZryMJDbptVpEpeQuRh4BJ+SYI8le9YGxKvFEvl1Wg==",
+ "requires": {
+ "@babel/code-frame": "^7.8.3",
+ "@babel/generator": "^7.9.6",
+ "@babel/helper-module-transforms": "^7.9.0",
+ "@babel/helpers": "^7.9.6",
+ "@babel/parser": "^7.9.6",
+ "@babel/template": "^7.8.6",
+ "@babel/traverse": "^7.9.6",
+ "@babel/types": "^7.9.6",
+ "convert-source-map": "^1.7.0",
+ "debug": "^4.1.0",
+ "gensync": "^1.0.0-beta.1",
+ "json5": "^2.1.2",
+ "lodash": "^4.17.13",
+ "resolve": "^1.3.2",
+ "semver": "^5.4.1",
+ "source-map": "^0.5.0"
+ }
+ },
+ "@babel/plugin-syntax-jsx": {
+ "version": "7.8.3",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-jsx/-/plugin-syntax-jsx-7.8.3.tgz",
+ "integrity": "sha512-WxdW9xyLgBdefoo0Ynn3MRSkhe5tFVxxKNVdnZSh318WrG2e2jH+E9wd/++JsqcLJZPfz87njQJ8j2Upjm0M0A==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.8.3"
+ }
+ },
+ "remark-parse": {
+ "version": "8.0.2",
+ "resolved": "https://registry.npmjs.org/remark-parse/-/remark-parse-8.0.2.tgz",
+ "integrity": "sha512-eMI6kMRjsAGpMXXBAywJwiwAse+KNpmt+BK55Oofy4KvBZEqUDj6mWbGLJZrujoPIPPxDXzn3T9baRlpsm2jnQ==",
+ "requires": {
+ "ccount": "^1.0.0",
+ "collapse-white-space": "^1.0.2",
+ "is-alphabetical": "^1.0.0",
+ "is-decimal": "^1.0.0",
+ "is-whitespace-character": "^1.0.0",
+ "is-word-character": "^1.0.0",
+ "markdown-escapes": "^1.0.0",
+ "parse-entities": "^2.0.0",
+ "repeat-string": "^1.5.4",
+ "state-toggle": "^1.0.0",
+ "trim": "0.0.1",
+ "trim-trailing-lines": "^1.0.0",
+ "unherit": "^1.0.4",
+ "unist-util-remove-position": "^2.0.0",
+ "vfile-location": "^3.0.0",
+ "xtend": "^4.0.1"
+ }
+ },
+ "semver": {
+ "version": "5.7.1",
+ "resolved": "https://registry.npmjs.org/semver/-/semver-5.7.1.tgz",
+ "integrity": "sha512-sauaDf/PZdVgrLTNYHRtpXa1iRiKcaebiKQ1BJdpQlWH2lCvexQdX55snPFyK7QzpudqbCI0qXFfOasHdyNDGQ=="
+ },
+ "unified": {
+ "version": "9.0.0",
+ "resolved": "https://registry.npmjs.org/unified/-/unified-9.0.0.tgz",
+ "integrity": "sha512-ssFo33gljU3PdlWLjNp15Inqb77d6JnJSfyplGJPT/a+fNRNyCBeveBAYJdO5khKdF6WVHa/yYCC7Xl6BDwZUQ==",
+ "requires": {
+ "bail": "^1.0.0",
+ "extend": "^3.0.0",
+ "is-buffer": "^2.0.0",
+ "is-plain-obj": "^2.0.0",
+ "trough": "^1.0.0",
+ "vfile": "^4.0.0"
+ }
+ }
+ }
+ },
+ "@types/unist": {
+ "version": "2.0.3",
+ "resolved": "https://registry.npmjs.org/@types/unist/-/unist-2.0.3.tgz",
+ "integrity": "sha512-FvUupuM3rlRsRtCN+fDudtmytGO6iHJuuRKS1Ss0pG5z8oX0diNEw94UEL7hgDbpN94rgaK5R7sWm6RrSkZuAQ=="
+ },
+ "acorn": {
+ "version": "7.3.1",
+ "resolved": "https://registry.npmjs.org/acorn/-/acorn-7.3.1.tgz",
+ "integrity": "sha512-tLc0wSnatxAQHVHUapaHdz72pi9KUyHjq5KyHjGg9Y8Ifdc79pTh2XvI6I1/chZbnM7QtNKzh66ooDogPZSleA=="
+ },
+ "ansi-regex": {
+ "version": "5.0.0",
+ "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz",
+ "integrity": "sha512-bY6fj56OUQ0hU1KjFNDQuJFezqKdrAyFdIevADiqrWHwSlbmBNMHp5ak2f40Pm8JTFyM2mqxkG6ngkHO11f/lg=="
+ },
+ "binary-extensions": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/binary-extensions/-/binary-extensions-2.0.0.tgz",
+ "integrity": "sha512-Phlt0plgpIIBOGTT/ehfFnbNlfsDEiqmzE2KRXoX1bLIlir4X/MR+zSyBEkL05ffWgnRSf/DXv+WrUAVr93/ow=="
+ },
+ "bytes": {
+ "version": "3.1.0",
+ "resolved": "https://registry.npmjs.org/bytes/-/bytes-3.1.0.tgz",
+ "integrity": "sha512-zauLjrfCG+xvoyaqLoV8bLVXXNGC4JqlxFCutSDWA6fJrTo2ZuvLYTqZ7aHBLZSMOopbzwv8f+wZcVzfVTI2Dg=="
+ },
+ "convert-source-map": {
+ "version": "1.7.0",
+ "resolved": "https://registry.npmjs.org/convert-source-map/-/convert-source-map-1.7.0.tgz",
+ "integrity": "sha512-4FJkXzKXEDB1snCFZlLP4gpC3JILicCpGbzG9f9G7tGqGCzETQ2hWPrcinA9oU4wtf2biUaEH5065UnMeR33oA==",
+ "requires": {
+ "safe-buffer": "~5.1.1"
+ }
+ },
+ "cross-spawn": {
+ "version": "7.0.3",
+ "resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-7.0.3.tgz",
+ "integrity": "sha512-iRDPJKUPVEND7dHPO8rkbOnPpyDygcDFtWjpeWNCgy8WP2rXcxXL8TskReQl6OrB2G7+UJrags1q15Fudc7G6w==",
+ "requires": {
+ "path-key": "^3.1.0",
+ "shebang-command": "^2.0.0",
+ "which": "^2.0.1"
+ }
+ },
+ "debug": {
+ "version": "4.1.1",
+ "resolved": "https://registry.npmjs.org/debug/-/debug-4.1.1.tgz",
+ "integrity": "sha512-pYAIzeRo8J6KPEaJ0VWOh5Pzkbw/RetuzehGM7QRRX5he4fPHx2rdKMB256ehJCkX+XRQm16eZLqLNS8RSZXZw==",
+ "requires": {
+ "ms": "^2.1.1"
+ }
+ },
+ "detab": {
+ "version": "2.0.3",
+ "resolved": "https://registry.npmjs.org/detab/-/detab-2.0.3.tgz",
+ "integrity": "sha512-Up8P0clUVwq0FnFjDclzZsy9PadzRn5FFxrr47tQQvMHqyiFYVbpH8oXDzWtF0Q7pYy3l+RPmtBl+BsFF6wH0A==",
+ "requires": {
+ "repeat-string": "^1.5.4"
+ }
+ },
+ "execa": {
+ "version": "4.0.2",
+ "resolved": "https://registry.npmjs.org/execa/-/execa-4.0.2.tgz",
+ "integrity": "sha512-QI2zLa6CjGWdiQsmSkZoGtDx2N+cQIGb3yNolGTdjSQzydzLgYYf8LRuagp7S7fPimjcrzUDSUFd/MgzELMi4Q==",
+ "requires": {
+ "cross-spawn": "^7.0.0",
+ "get-stream": "^5.0.0",
+ "human-signals": "^1.1.1",
+ "is-stream": "^2.0.0",
+ "merge-stream": "^2.0.0",
+ "npm-run-path": "^4.0.0",
+ "onetime": "^5.1.0",
+ "signal-exit": "^3.0.2",
+ "strip-final-newline": "^2.0.0"
+ }
+ },
+ "express-graphql": {
+ "version": "0.9.0",
+ "resolved": "https://registry.npmjs.org/express-graphql/-/express-graphql-0.9.0.tgz",
+ "integrity": "sha512-wccd9Lb6oeJ8yHpUs/8LcnGjFUUQYmOG9A5BNLybRdCzGw0PeUrtBxsIR8bfiur6uSW4OvPkVDoYH06z6/N9+w==",
+ "requires": {
+ "accepts": "^1.3.7",
+ "content-type": "^1.0.4",
+ "http-errors": "^1.7.3",
+ "raw-body": "^2.4.1"
+ }
+ },
+ "find-up": {
+ "version": "4.1.0",
+ "resolved": "https://registry.npmjs.org/find-up/-/find-up-4.1.0.tgz",
+ "integrity": "sha512-PpOwAdQ/YlXQ2vj8a3h8IipDuYRi3wceVQQGYWxNINccq40Anw7BlsEXCMbt1Zt+OLA6Fq9suIpIWD0OsnISlw==",
+ "requires": {
+ "locate-path": "^5.0.0",
+ "path-exists": "^4.0.0"
+ }
+ },
+ "fs-extra": {
+ "version": "8.1.0",
+ "resolved": "https://registry.npmjs.org/fs-extra/-/fs-extra-8.1.0.tgz",
+ "integrity": "sha512-yhlQgA6mnOJUKOsRUFsgJdQCvkKhcz8tlZG5HBQfReYZy46OwLcY+Zia0mtdHsOo9y/hP+CxMN0TU9QxoOtG4g==",
+ "requires": {
+ "graceful-fs": "^4.2.0",
+ "jsonfile": "^4.0.0",
+ "universalify": "^0.1.0"
+ }
+ },
+ "get-stream": {
+ "version": "5.1.0",
+ "resolved": "https://registry.npmjs.org/get-stream/-/get-stream-5.1.0.tgz",
+ "integrity": "sha512-EXr1FOzrzTfGeL0gQdeFEvOMm2mzMOglyiOXSTpPC+iAjAKftbr3jpCMWynogwYnM+eSj9sHGc6wjIcDvYiygw==",
+ "requires": {
+ "pump": "^3.0.0"
+ }
+ },
+ "glob": {
+ "version": "7.1.6",
+ "resolved": "https://registry.npmjs.org/glob/-/glob-7.1.6.tgz",
+ "integrity": "sha512-LwaxwyZ72Lk7vZINtNNrywX0ZuLyStrdDtabefZKAY5ZGJhVtgdznluResxNmPitE0SAO+O26sWTHeKSI2wMBA==",
+ "requires": {
+ "fs.realpath": "^1.0.0",
+ "inflight": "^1.0.4",
+ "inherits": "2",
+ "minimatch": "^3.0.4",
+ "once": "^1.3.0",
+ "path-is-absolute": "^1.0.0"
+ }
+ },
+ "graceful-fs": {
+ "version": "4.2.4",
+ "resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.4.tgz",
+ "integrity": "sha512-WjKPNJF79dtJAVniUlGGWHYGz2jWxT6VhN/4m1NdkbZ2nOsEF+cI1Edgql5zCRhs/VsQYRvrXctxktVXZUkixw=="
+ },
+ "hast-to-hyperscript": {
+ "version": "7.0.4",
+ "resolved": "https://registry.npmjs.org/hast-to-hyperscript/-/hast-to-hyperscript-7.0.4.tgz",
+ "integrity": "sha512-vmwriQ2H0RPS9ho4Kkbf3n3lY436QKLq6VaGA1pzBh36hBi3tm1DO9bR+kaJIbpT10UqaANDkMjxvjVfr+cnOA==",
+ "requires": {
+ "comma-separated-tokens": "^1.0.0",
+ "property-information": "^5.3.0",
+ "space-separated-tokens": "^1.0.0",
+ "style-to-object": "^0.2.1",
+ "unist-util-is": "^3.0.0",
+ "web-namespaces": "^1.1.2"
+ },
+ "dependencies": {
+ "property-information": {
+ "version": "5.5.0",
+ "resolved": "https://registry.npmjs.org/property-information/-/property-information-5.5.0.tgz",
+ "integrity": "sha512-RgEbCx2HLa1chNgvChcx+rrCWD0ctBmGSE0M7lVm1yyv4UbvbrWoXp/BkVLZefzjrRBGW8/Js6uh/BnlHXFyjA==",
+ "requires": {
+ "xtend": "^4.0.0"
+ }
+ },
+ "style-to-object": {
+ "version": "0.2.3",
+ "resolved": "https://registry.npmjs.org/style-to-object/-/style-to-object-0.2.3.tgz",
+ "integrity": "sha512-1d/k4EY2N7jVLOqf2j04dTc37TPOv/hHxZmvpg8Pdh8UYydxeu/C1W1U4vD8alzf5V2Gt7rLsmkr4dxAlDm9ng==",
+ "requires": {
+ "inline-style-parser": "0.1.1"
+ }
+ }
+ }
+ },
+ "hast-util-raw": {
+ "version": "5.0.2",
+ "resolved": "https://registry.npmjs.org/hast-util-raw/-/hast-util-raw-5.0.2.tgz",
+ "integrity": "sha512-3ReYQcIHmzSgMq8UrDZHFL0oGlbuVGdLKs8s/Fe8BfHFAyZDrdv1fy/AGn+Fim8ZuvAHcJ61NQhVMtyfHviT/g==",
+ "requires": {
+ "hast-util-from-parse5": "^5.0.0",
+ "hast-util-to-parse5": "^5.0.0",
+ "html-void-elements": "^1.0.0",
+ "parse5": "^5.0.0",
+ "unist-util-position": "^3.0.0",
+ "web-namespaces": "^1.0.0",
+ "xtend": "^4.0.0",
+ "zwitch": "^1.0.0"
+ }
+ },
+ "hast-util-to-parse5": {
+ "version": "5.1.2",
+ "resolved": "https://registry.npmjs.org/hast-util-to-parse5/-/hast-util-to-parse5-5.1.2.tgz",
+ "integrity": "sha512-ZgYLJu9lYknMfsBY0rBV4TJn2xiwF1fXFFjbP6EE7S0s5mS8LIKBVWzhA1MeIs1SWW6GnnE4In6c3kPb+CWhog==",
+ "requires": {
+ "hast-to-hyperscript": "^7.0.0",
+ "property-information": "^5.0.0",
+ "web-namespaces": "^1.0.0",
+ "xtend": "^4.0.0",
+ "zwitch": "^1.0.0"
+ }
+ },
+ "http-errors": {
+ "version": "1.7.3",
+ "resolved": "https://registry.npmjs.org/http-errors/-/http-errors-1.7.3.tgz",
+ "integrity": "sha512-ZTTX0MWrsQ2ZAhA1cejAwDLycFsd7I7nVtnkT3Ol0aqodaKW+0CTZDQ1uBv5whptCnc8e8HeRRJxRs0kmm/Qfw==",
+ "requires": {
+ "depd": "~1.1.2",
+ "inherits": "2.0.4",
+ "setprototypeof": "1.1.1",
+ "statuses": ">= 1.5.0 < 2",
+ "toidentifier": "1.0.0"
+ }
+ },
+ "inherits": {
+ "version": "2.0.4",
+ "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.4.tgz",
+ "integrity": "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ=="
+ },
+ "is-binary-path": {
+ "version": "2.1.0",
+ "resolved": "https://registry.npmjs.org/is-binary-path/-/is-binary-path-2.1.0.tgz",
+ "integrity": "sha512-ZMERYes6pDydyuGidse7OsHxtbI7WVeUEozgR/g7rd0xUimYNlvZRE/K2MgZTjWy725IfelLeVcEM97mmtRGXw==",
+ "requires": {
+ "binary-extensions": "^2.0.0"
+ }
+ },
+ "is-buffer": {
+ "version": "2.0.4",
+ "resolved": "https://registry.npmjs.org/is-buffer/-/is-buffer-2.0.4.tgz",
+ "integrity": "sha512-Kq1rokWXOPXWuaMAqZiJW4XxsmD9zGx9q4aePabbn3qCRGedtH7Cm+zV8WETitMfu1wdh+Rvd6w5egwSngUX2A=="
+ },
+ "is-plain-obj": {
+ "version": "2.1.0",
+ "resolved": "https://registry.npmjs.org/is-plain-obj/-/is-plain-obj-2.1.0.tgz",
+ "integrity": "sha512-YWnfyRwxL/+SsrWYfOpUtz5b3YD+nyfkHvjbcanzk8zgyO4ASD67uVMRt8k5bM4lLMDnXfriRhOpemw+NfT1eA=="
+ },
+ "is-stream": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/is-stream/-/is-stream-2.0.0.tgz",
+ "integrity": "sha512-XCoy+WlUr7d1+Z8GgSuXmpuUFC9fOhRXglJMx+dwLKTkL44Cjd4W1Z5P+BQZpr+cR93aGP4S/s7Ftw6Nd/kiEw=="
+ },
+ "json5": {
+ "version": "2.1.3",
+ "resolved": "https://registry.npmjs.org/json5/-/json5-2.1.3.tgz",
+ "integrity": "sha512-KXPvOm8K9IJKFM0bmdn8QXh7udDh1g/giieX0NLCaMnb4hEiVFqnop2ImTXCc5e0/oHz3LTqmHGtExn5hfMkOA==",
+ "requires": {
+ "minimist": "^1.2.5"
+ }
+ },
+ "locate-path": {
+ "version": "5.0.0",
+ "resolved": "https://registry.npmjs.org/locate-path/-/locate-path-5.0.0.tgz",
+ "integrity": "sha512-t7hw9pI+WvuwNJXwk5zVHpyhIqzg2qTlklJOf0mVxGSbe3Fp2VieZcduNYjaLDoy6p9uGpQEGWG87WpMKlNq8g==",
+ "requires": {
+ "p-locate": "^4.1.0"
+ }
+ },
+ "lodash": {
+ "version": "4.17.15",
+ "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz",
+ "integrity": "sha512-8xOcRHvCjnocdS5cpwXQXVzmmh5e5+saE2QGoeQmbKmRS6J3VQppPOIt0MnmE+4xlZoumy0GPG0D0MVIQbNA1A=="
+ },
+ "markdown-table": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/markdown-table/-/markdown-table-2.0.0.tgz",
+ "integrity": "sha512-Ezda85ToJUBhM6WGaG6veasyym+Tbs3cMAw/ZhOPqXiYsr0jgocBV3j3nx+4lk47plLlIqjwuTm/ywVI+zjJ/A==",
+ "requires": {
+ "repeat-string": "^1.0.0"
+ }
+ },
+ "mdast-squeeze-paragraphs": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/mdast-squeeze-paragraphs/-/mdast-squeeze-paragraphs-4.0.0.tgz",
+ "integrity": "sha512-zxdPn69hkQ1rm4J+2Cs2j6wDEv7O17TfXTJ33tl/+JPIoEmtV9t2ZzBM5LPHE8QlHsmVD8t3vPKCyY3oH+H8MQ==",
+ "requires": {
+ "unist-util-remove": "^2.0.0"
+ }
+ },
+ "mdast-util-compact": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/mdast-util-compact/-/mdast-util-compact-2.0.1.tgz",
+ "integrity": "sha512-7GlnT24gEwDrdAwEHrU4Vv5lLWrEer4KOkAiKT9nYstsTad7Oc1TwqT2zIMKRdZF7cTuaf+GA1E4Kv7jJh8mPA==",
+ "requires": {
+ "unist-util-visit": "^2.0.0"
+ }
+ },
+ "mdast-util-definitions": {
+ "version": "3.0.1",
+ "resolved": "https://registry.npmjs.org/mdast-util-definitions/-/mdast-util-definitions-3.0.1.tgz",
+ "integrity": "sha512-BAv2iUm/e6IK/b2/t+Fx69EL/AGcq/IG2S+HxHjDJGfLJtd6i9SZUS76aC9cig+IEucsqxKTR0ot3m933R3iuA==",
+ "requires": {
+ "unist-util-visit": "^2.0.0"
+ }
+ },
+ "mdast-util-to-hast": {
+ "version": "9.1.0",
+ "resolved": "https://registry.npmjs.org/mdast-util-to-hast/-/mdast-util-to-hast-9.1.0.tgz",
+ "integrity": "sha512-Akl2Vi9y9cSdr19/Dfu58PVwifPXuFt1IrHe7l+Crme1KvgUT+5z+cHLVcQVGCiNTZZcdqjnuv9vPkGsqWytWA==",
+ "requires": {
+ "@types/mdast": "^3.0.0",
+ "@types/unist": "^2.0.3",
+ "collapse-white-space": "^1.0.0",
+ "detab": "^2.0.0",
+ "mdast-util-definitions": "^3.0.0",
+ "mdurl": "^1.0.0",
+ "trim-lines": "^1.0.0",
+ "unist-builder": "^2.0.0",
+ "unist-util-generated": "^1.0.0",
+ "unist-util-position": "^3.0.0",
+ "unist-util-visit": "^2.0.0"
+ }
+ },
+ "mimic-fn": {
+ "version": "2.1.0",
+ "resolved": "https://registry.npmjs.org/mimic-fn/-/mimic-fn-2.1.0.tgz",
+ "integrity": "sha512-OqbOk5oEQeAZ8WXWydlu9HJjz9WVdEIvamMCcXmuqUYjTknH/sqsWvhQ3vgwKFRR1HpjvNBKQ37nbJgYzGqGcg=="
+ },
+ "minimist": {
+ "version": "1.2.5",
+ "resolved": "https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz",
+ "integrity": "sha512-FM9nNUYrRBAELZQT3xeZQ7fmMOBg6nWNmJKTcgsJeaLstP/UODVpGsr5OhXhhXg6f+qtJ8uiZ+PUxkDWcgIXLw=="
+ },
+ "node-fetch": {
+ "version": "2.6.0",
+ "resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-2.6.0.tgz",
+ "integrity": "sha512-8dG4H5ujfvFiqDmVu9fQ5bOHUC15JMjMY/Zumv26oOvvVJjM67KF8koCWIabKQ1GJIa9r2mMZscBq/TbdOcmNA=="
+ },
+ "npm-run-path": {
+ "version": "4.0.1",
+ "resolved": "https://registry.npmjs.org/npm-run-path/-/npm-run-path-4.0.1.tgz",
+ "integrity": "sha512-S48WzZW777zhNIrn7gxOlISNAqi9ZC/uQFnRdbeIHhZhCA6UqpkOT8T1G7BvfdgP4Er8gF4sUbaS0i7QvIfCWw==",
+ "requires": {
+ "path-key": "^3.0.0"
+ }
+ },
+ "onetime": {
+ "version": "5.1.0",
+ "resolved": "https://registry.npmjs.org/onetime/-/onetime-5.1.0.tgz",
+ "integrity": "sha512-5NcSkPHhwTVFIQN+TUqXoS5+dlElHXdpAWu9I0HP20YOtIi+aZ0Ct82jdlILDxjLEAWwvm+qj1m6aEtsDVmm6Q==",
+ "requires": {
+ "mimic-fn": "^2.1.0"
+ }
+ },
+ "p-limit": {
+ "version": "2.3.0",
+ "resolved": "https://registry.npmjs.org/p-limit/-/p-limit-2.3.0.tgz",
+ "integrity": "sha512-//88mFWSJx8lxCzwdAABTJL2MyWB12+eIY7MDL2SqLmAkeKU9qxRvWuSyTjm3FUmpBEMuFfckAIqEaVGUDxb6w==",
+ "requires": {
+ "p-try": "^2.0.0"
+ }
+ },
+ "p-locate": {
+ "version": "4.1.0",
+ "resolved": "https://registry.npmjs.org/p-locate/-/p-locate-4.1.0.tgz",
+ "integrity": "sha512-R79ZZ/0wAxKGu3oYMlz8jy/kbhsNrS7SKZ7PxEHBgJ5+F2mtFW2fK2cOtBh1cHYkQsbzFV7I+EoRKe6Yt0oK7A==",
+ "requires": {
+ "p-limit": "^2.2.0"
+ }
+ },
+ "parse-entities": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/parse-entities/-/parse-entities-2.0.0.tgz",
+ "integrity": "sha512-kkywGpCcRYhqQIchaWqZ875wzpS/bMKhz5HnN3p7wveJTkTtyAB/AlnS0f8DFSqYW1T82t6yEAkEcB+A1I3MbQ==",
+ "requires": {
+ "character-entities": "^1.0.0",
+ "character-entities-legacy": "^1.0.0",
+ "character-reference-invalid": "^1.0.0",
+ "is-alphanumerical": "^1.0.0",
+ "is-decimal": "^1.0.0",
+ "is-hexadecimal": "^1.0.0"
+ }
+ },
+ "parse5": {
+ "version": "5.1.1",
+ "resolved": "https://registry.npmjs.org/parse5/-/parse5-5.1.1.tgz",
+ "integrity": "sha512-ugq4DFI0Ptb+WWjAdOK16+u/nHfiIrcE+sh8kZMaM0WllQKLI9rOUq6c2b7cwPkXdzfQESqvoqK6ug7U/Yyzug=="
+ },
+ "path-exists": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/path-exists/-/path-exists-4.0.0.tgz",
+ "integrity": "sha512-ak9Qy5Q7jYb2Wwcey5Fpvg2KoAc/ZIhLSLOSBmRmygPsGwkVVt0fZa0qrtMz+m6tJTAHfZQ8FnmB4MG4LWy7/w=="
+ },
+ "path-key": {
+ "version": "3.1.1",
+ "resolved": "https://registry.npmjs.org/path-key/-/path-key-3.1.1.tgz",
+ "integrity": "sha512-ojmeN0qd+y0jszEtoY48r0Peq5dwMEkIlCOu6Q5f41lfkswXuKtYrhgoTpLnyIcHm24Uhqx+5Tqm2InSwLhE6Q=="
+ },
+ "pkg-dir": {
+ "version": "4.2.0",
+ "resolved": "https://registry.npmjs.org/pkg-dir/-/pkg-dir-4.2.0.tgz",
+ "integrity": "sha512-HRDzbaKjC+AOWVXxAU/x54COGeIv9eb+6CkDSQoNTt4XyWoIJvuPsXizxu/Fr23EiekbtZwmh1IcIG/l/a10GQ==",
+ "requires": {
+ "find-up": "^4.0.0"
+ }
+ },
+ "prettier": {
+ "version": "2.0.5",
+ "resolved": "https://registry.npmjs.org/prettier/-/prettier-2.0.5.tgz",
+ "integrity": "sha512-7PtVymN48hGcO4fGjybyBSIWDsLU4H4XlvOHfq91pz9kkGlonzwTfYkaIEwiRg/dAJF9YlbsduBAgtYLi+8cFg=="
+ },
+ "raw-body": {
+ "version": "2.4.1",
+ "resolved": "https://registry.npmjs.org/raw-body/-/raw-body-2.4.1.tgz",
+ "integrity": "sha512-9WmIKF6mkvA0SLmA2Knm9+qj89e+j1zqgyn8aXGd7+nAduPoqgI9lO57SAZNn/Byzo5P7JhXTyg9PzaJbH73bA==",
+ "requires": {
+ "bytes": "3.1.0",
+ "http-errors": "1.7.3",
+ "iconv-lite": "0.4.24",
+ "unpipe": "1.0.0"
+ }
+ },
+ "remark-squeeze-paragraphs": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/remark-squeeze-paragraphs/-/remark-squeeze-paragraphs-4.0.0.tgz",
+ "integrity": "sha512-8qRqmL9F4nuLPIgl92XUuxI3pFxize+F1H0e/W3llTk0UsjJaj01+RrirkMw7P21RKe4X6goQhYRSvNWX+70Rw==",
+ "requires": {
+ "mdast-squeeze-paragraphs": "^4.0.0"
+ }
+ },
+ "remark-stringify": {
+ "version": "8.1.0",
+ "resolved": "https://registry.npmjs.org/remark-stringify/-/remark-stringify-8.1.0.tgz",
+ "integrity": "sha512-FSPZv1ds76oAZjurhhuV5qXSUSoz6QRPuwYK38S41sLHwg4oB7ejnmZshj7qwjgYLf93kdz6BOX9j5aidNE7rA==",
+ "requires": {
+ "ccount": "^1.0.0",
+ "is-alphanumeric": "^1.0.0",
+ "is-decimal": "^1.0.0",
+ "is-whitespace-character": "^1.0.0",
+ "longest-streak": "^2.0.1",
+ "markdown-escapes": "^1.0.0",
+ "markdown-table": "^2.0.0",
+ "mdast-util-compact": "^2.0.0",
+ "parse-entities": "^2.0.0",
+ "repeat-string": "^1.5.4",
+ "state-toggle": "^1.0.0",
+ "stringify-entities": "^3.0.0",
+ "unherit": "^1.0.4",
+ "xtend": "^4.0.1"
+ }
+ },
+ "semver": {
+ "version": "7.3.2",
+ "resolved": "https://registry.npmjs.org/semver/-/semver-7.3.2.tgz",
+ "integrity": "sha512-OrOb32TeeambH6UrhtShmF7CRDqhL6/5XpPNp2DuRH6+9QLw/orhp72j87v8Qa1ScDkvrrBNpZcDejAirJmfXQ=="
+ },
+ "shebang-command": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/shebang-command/-/shebang-command-2.0.0.tgz",
+ "integrity": "sha512-kHxr2zZpYtdmrN1qDjrrX/Z1rR1kG8Dx+gkpK1G4eXmvXswmcE1hTWBWYUzlraYw1/yZp6YuDY77YtvbN0dmDA==",
+ "requires": {
+ "shebang-regex": "^3.0.0"
+ }
+ },
+ "shebang-regex": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/shebang-regex/-/shebang-regex-3.0.0.tgz",
+ "integrity": "sha512-7++dFhtcx3353uBaq8DDR4NuxBetBzC7ZQOhmTQInHEd6bSrXdiEyzCvG07Z44UYdLShWUyXt5M/yhz8ekcb1A=="
+ },
+ "stringify-entities": {
+ "version": "3.0.1",
+ "resolved": "https://registry.npmjs.org/stringify-entities/-/stringify-entities-3.0.1.tgz",
+ "integrity": "sha512-Lsk3ISA2++eJYqBMPKcr/8eby1I6L0gP0NlxF8Zja6c05yr/yCYyb2c9PwXjd08Ib3If1vn1rbs1H5ZtVuOfvQ==",
+ "requires": {
+ "character-entities-html4": "^1.0.0",
+ "character-entities-legacy": "^1.0.0",
+ "is-alphanumerical": "^1.0.0",
+ "is-decimal": "^1.0.2",
+ "is-hexadecimal": "^1.0.0"
+ }
+ },
+ "strip-ansi": {
+ "version": "6.0.0",
+ "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.0.tgz",
+ "integrity": "sha512-AuvKTrTfQNYNIctbR1K/YGTR1756GycPsg7b9bdV9Duqur4gv6aKqHXah67Z8ImS7WEz5QVcOtlfW2rZEugt6w==",
+ "requires": {
+ "ansi-regex": "^5.0.0"
+ }
+ },
+ "style-to-object": {
+ "version": "0.3.0",
+ "resolved": "https://registry.npmjs.org/style-to-object/-/style-to-object-0.3.0.tgz",
+ "integrity": "sha512-CzFnRRXhzWIdItT3OmF8SQfWyahHhjq3HwcMNCNLn+N7klOOqPjMeG/4JSu77D7ypZdGvSzvkrbyeTMizz2VrA==",
+ "requires": {
+ "inline-style-parser": "0.1.1"
+ }
+ },
+ "unified": {
+ "version": "8.4.2",
+ "resolved": "https://registry.npmjs.org/unified/-/unified-8.4.2.tgz",
+ "integrity": "sha512-JCrmN13jI4+h9UAyKEoGcDZV+i1E7BLFuG7OsaDvTXI5P0qhHX+vZO/kOhz9jn8HGENDKbwSeB0nVOg4gVStGA==",
+ "requires": {
+ "bail": "^1.0.0",
+ "extend": "^3.0.0",
+ "is-plain-obj": "^2.0.0",
+ "trough": "^1.0.0",
+ "vfile": "^4.0.0"
+ }
+ },
+ "unist-builder": {
+ "version": "2.0.3",
+ "resolved": "https://registry.npmjs.org/unist-builder/-/unist-builder-2.0.3.tgz",
+ "integrity": "sha512-f98yt5pnlMWlzP539tPc4grGMsFaQQlP/vM396b00jngsiINumNmsY8rkXjfoi1c6QaM8nQ3vaGDuoKWbe/1Uw=="
+ },
+ "unist-util-is": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/unist-util-is/-/unist-util-is-3.0.0.tgz",
+ "integrity": "sha512-sVZZX3+kspVNmLWBPAB6r+7D9ZgAFPNWm66f7YNb420RlQSbn+n8rG8dGZSkrER7ZIXGQYNm5pqC3v3HopH24A=="
+ },
+ "unist-util-remove": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/unist-util-remove/-/unist-util-remove-2.0.0.tgz",
+ "integrity": "sha512-HwwWyNHKkeg/eXRnE11IpzY8JT55JNM1YCwwU9YNCnfzk6s8GhPXrVBBZWiwLeATJbI7euvoGSzcy9M29UeW3g==",
+ "requires": {
+ "unist-util-is": "^4.0.0"
+ },
+ "dependencies": {
+ "unist-util-is": {
+ "version": "4.0.2",
+ "resolved": "https://registry.npmjs.org/unist-util-is/-/unist-util-is-4.0.2.tgz",
+ "integrity": "sha512-Ofx8uf6haexJwI1gxWMGg6I/dLnF2yE+KibhD3/diOqY2TinLcqHXCV6OI5gFVn3xQqDH+u0M625pfKwIwgBKQ=="
+ }
+ }
+ },
+ "unist-util-remove-position": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/unist-util-remove-position/-/unist-util-remove-position-2.0.1.tgz",
+ "integrity": "sha512-fDZsLYIe2uT+oGFnuZmy73K6ZxOPG/Qcm+w7jbEjaFcJgbQ6cqjs/eSPzXhsmGpAsWPkqZM9pYjww5QTn3LHMA==",
+ "requires": {
+ "unist-util-visit": "^2.0.0"
+ }
+ },
+ "unist-util-stringify-position": {
+ "version": "2.0.3",
+ "resolved": "https://registry.npmjs.org/unist-util-stringify-position/-/unist-util-stringify-position-2.0.3.tgz",
+ "integrity": "sha512-3faScn5I+hy9VleOq/qNbAd6pAx7iH5jYBMS9I1HgQVijz/4mv5Bvw5iw1sC/90CODiKo81G/ps8AJrISn687g==",
+ "requires": {
+ "@types/unist": "^2.0.2"
+ }
+ },
+ "unist-util-visit": {
+ "version": "2.0.2",
+ "resolved": "https://registry.npmjs.org/unist-util-visit/-/unist-util-visit-2.0.2.tgz",
+ "integrity": "sha512-HoHNhGnKj6y+Sq+7ASo2zpVdfdRifhTgX2KTU3B/sO/TTlZchp7E3S4vjRzDJ7L60KmrCPsQkVK3lEF3cz36XQ==",
+ "requires": {
+ "@types/unist": "^2.0.0",
+ "unist-util-is": "^4.0.0",
+ "unist-util-visit-parents": "^3.0.0"
+ },
+ "dependencies": {
+ "unist-util-is": {
+ "version": "4.0.2",
+ "resolved": "https://registry.npmjs.org/unist-util-is/-/unist-util-is-4.0.2.tgz",
+ "integrity": "sha512-Ofx8uf6haexJwI1gxWMGg6I/dLnF2yE+KibhD3/diOqY2TinLcqHXCV6OI5gFVn3xQqDH+u0M625pfKwIwgBKQ=="
+ }
+ }
+ },
+ "unist-util-visit-parents": {
+ "version": "3.0.2",
+ "resolved": "https://registry.npmjs.org/unist-util-visit-parents/-/unist-util-visit-parents-3.0.2.tgz",
+ "integrity": "sha512-yJEfuZtzFpQmg1OSCyS9M5NJRrln/9FbYosH3iW0MG402QbdbaB8ZESwUv9RO6nRfLAKvWcMxCwdLWOov36x/g==",
+ "requires": {
+ "@types/unist": "^2.0.0",
+ "unist-util-is": "^4.0.0"
+ },
+ "dependencies": {
+ "unist-util-is": {
+ "version": "4.0.2",
+ "resolved": "https://registry.npmjs.org/unist-util-is/-/unist-util-is-4.0.2.tgz",
+ "integrity": "sha512-Ofx8uf6haexJwI1gxWMGg6I/dLnF2yE+KibhD3/diOqY2TinLcqHXCV6OI5gFVn3xQqDH+u0M625pfKwIwgBKQ=="
+ }
+ }
+ },
+ "vfile": {
+ "version": "4.1.1",
+ "resolved": "https://registry.npmjs.org/vfile/-/vfile-4.1.1.tgz",
+ "integrity": "sha512-lRjkpyDGjVlBA7cDQhQ+gNcvB1BGaTHYuSOcY3S7OhDmBtnzX95FhtZZDecSTDm6aajFymyve6S5DN4ZHGezdQ==",
+ "requires": {
+ "@types/unist": "^2.0.0",
+ "is-buffer": "^2.0.0",
+ "replace-ext": "1.0.0",
+ "unist-util-stringify-position": "^2.0.0",
+ "vfile-message": "^2.0.0"
+ }
+ },
+ "vfile-location": {
+ "version": "3.0.1",
+ "resolved": "https://registry.npmjs.org/vfile-location/-/vfile-location-3.0.1.tgz",
+ "integrity": "sha512-yYBO06eeN/Ki6Kh1QAkgzYpWT1d3Qln+ZCtSbJqFExPl1S3y2qqotJQXoh6qEvl/jDlgpUJolBn3PItVnnZRqQ=="
+ },
+ "vfile-message": {
+ "version": "2.0.4",
+ "resolved": "https://registry.npmjs.org/vfile-message/-/vfile-message-2.0.4.tgz",
+ "integrity": "sha512-DjssxRGkMvifUOJre00juHoP9DPWuzjxKuMDrhNbk2TdaYYBNMStsNhEOt3idrtI12VQYM/1+iM0KOzXi4pxwQ==",
+ "requires": {
+ "@types/unist": "^2.0.0",
+ "unist-util-stringify-position": "^2.0.0"
+ }
+ },
+ "which": {
+ "version": "2.0.2",
+ "resolved": "https://registry.npmjs.org/which/-/which-2.0.2.tgz",
+ "integrity": "sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA==",
+ "requires": {
+ "isexe": "^2.0.0"
+ }
+ },
+ "xstate": {
+ "version": "4.10.0",
+ "resolved": "https://registry.npmjs.org/xstate/-/xstate-4.10.0.tgz",
+ "integrity": "sha512-nncQ9gW+xgk5iUEvpBOXhbzSCS0uwzzT4bOAXxo6oUoALgbxzqEyMmaMYwuvOHrabDTdMJYnF+xe2XD8RRgWmA=="
+ }
}
},
"gatsby-remark-copy-linked-files": {
@@ -7872,6 +14338,111 @@
}
}
},
+ "gatsby-telemetry": {
+ "version": "1.3.11",
+ "resolved": "https://registry.npmjs.org/gatsby-telemetry/-/gatsby-telemetry-1.3.11.tgz",
+ "integrity": "sha512-k5bzy0G0Me0aQYaW1cOWp0PQ9+wRXHU0lbztdinnRAWlqqb3EGMVPtfUhP7aMJvXtj3UfLy3pk0xBfsX8BHvfA==",
+ "requires": {
+ "@babel/code-frame": "^7.10.1",
+ "@babel/runtime": "^7.10.2",
+ "bluebird": "^3.7.2",
+ "boxen": "^4.2.0",
+ "configstore": "^5.0.1",
+ "envinfo": "^7.5.1",
+ "fs-extra": "^8.1.0",
+ "gatsby-core-utils": "^1.3.5",
+ "git-up": "4.0.1",
+ "is-docker": "2.0.0",
+ "lodash": "^4.17.15",
+ "node-fetch": "2.6.0",
+ "resolve-cwd": "^2.0.0",
+ "source-map": "^0.7.3",
+ "stack-trace": "^0.0.10",
+ "stack-utils": "1.0.2",
+ "uuid": "3.4.0"
+ },
+ "dependencies": {
+ "@babel/code-frame": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.10.1.tgz",
+ "integrity": "sha512-IGhtTmpjGbYzcEDOw7DcQtbQSXcG9ftmAXtWTu9V936vDye4xjjekktFAtgZsWpzTj/X01jocB46mTywm/4SZw==",
+ "requires": {
+ "@babel/highlight": "^7.10.1"
+ }
+ },
+ "@babel/highlight": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/highlight/-/highlight-7.10.1.tgz",
+ "integrity": "sha512-8rMof+gVP8mxYZApLF/JgNDAkdKa+aJt3ZYxF8z6+j/hpeXL7iMsKCPHa2jNMHu/qqBwzQF4OHNoYi8dMA/rYg==",
+ "requires": {
+ "@babel/helper-validator-identifier": "^7.10.1",
+ "chalk": "^2.0.0",
+ "js-tokens": "^4.0.0"
+ }
+ },
+ "@babel/runtime": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.10.2.tgz",
+ "integrity": "sha512-6sF3uQw2ivImfVIl62RZ7MXhO2tap69WeWK57vAaimT6AZbE4FbqjdEJIN1UqoD6wI6B+1n9UiagafH1sxjOtg==",
+ "requires": {
+ "regenerator-runtime": "^0.13.4"
+ }
+ },
+ "bluebird": {
+ "version": "3.7.2",
+ "resolved": "https://registry.npmjs.org/bluebird/-/bluebird-3.7.2.tgz",
+ "integrity": "sha512-XpNj6GDQzdfW+r2Wnn7xiSAd7TM3jzkxGXBGTtWKuSXv1xUV+azxAm8jdWZN06QTQk+2N2XB9jRDkvbmQmcRtg=="
+ },
+ "fs-extra": {
+ "version": "8.1.0",
+ "resolved": "https://registry.npmjs.org/fs-extra/-/fs-extra-8.1.0.tgz",
+ "integrity": "sha512-yhlQgA6mnOJUKOsRUFsgJdQCvkKhcz8tlZG5HBQfReYZy46OwLcY+Zia0mtdHsOo9y/hP+CxMN0TU9QxoOtG4g==",
+ "requires": {
+ "graceful-fs": "^4.2.0",
+ "jsonfile": "^4.0.0",
+ "universalify": "^0.1.0"
+ }
+ },
+ "graceful-fs": {
+ "version": "4.2.4",
+ "resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.4.tgz",
+ "integrity": "sha512-WjKPNJF79dtJAVniUlGGWHYGz2jWxT6VhN/4m1NdkbZ2nOsEF+cI1Edgql5zCRhs/VsQYRvrXctxktVXZUkixw=="
+ },
+ "lodash": {
+ "version": "4.17.15",
+ "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz",
+ "integrity": "sha512-8xOcRHvCjnocdS5cpwXQXVzmmh5e5+saE2QGoeQmbKmRS6J3VQppPOIt0MnmE+4xlZoumy0GPG0D0MVIQbNA1A=="
+ },
+ "node-fetch": {
+ "version": "2.6.0",
+ "resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-2.6.0.tgz",
+ "integrity": "sha512-8dG4H5ujfvFiqDmVu9fQ5bOHUC15JMjMY/Zumv26oOvvVJjM67KF8koCWIabKQ1GJIa9r2mMZscBq/TbdOcmNA=="
+ },
+ "regenerator-runtime": {
+ "version": "0.13.5",
+ "resolved": "https://registry.npmjs.org/regenerator-runtime/-/regenerator-runtime-0.13.5.tgz",
+ "integrity": "sha512-ZS5w8CpKFinUzOwW3c83oPeVXoNsrLsaCoLtJvAClH135j/R77RuymhiSErhm2lKcwSCIpmvIWSbDkIfAqKQlA=="
+ },
+ "resolve-cwd": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/resolve-cwd/-/resolve-cwd-2.0.0.tgz",
+ "integrity": "sha1-AKn3OHVW4nA46uIyyqNypqWbZlo=",
+ "requires": {
+ "resolve-from": "^3.0.0"
+ }
+ },
+ "source-map": {
+ "version": "0.7.3",
+ "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.7.3.tgz",
+ "integrity": "sha512-CkCj6giN3S+n9qrYiBTX5gystlENnRW5jZeNLHpe6aue+SrHcG5VYwujhW9s4dY31mEGsxBDrHR6oI69fTXsaQ=="
+ },
+ "uuid": {
+ "version": "3.4.0",
+ "resolved": "https://registry.npmjs.org/uuid/-/uuid-3.4.0.tgz",
+ "integrity": "sha512-HjSDRw6gZE5JMggctHBcjVak08+KEVhSIiDzFnT9S9aegmp85S/bReBVTb4QTFaRNptJ9kuYaNhnbNEOkbKb/A=="
+ }
+ }
+ },
"gatsby-transformer-remark": {
"version": "2.2.5",
"resolved": "https://registry.npmjs.org/gatsby-transformer-remark/-/gatsby-transformer-remark-2.2.5.tgz",
@@ -8032,6 +14603,11 @@
"globule": "^1.0.0"
}
},
+ "gensync": {
+ "version": "1.0.0-beta.1",
+ "resolved": "https://registry.npmjs.org/gensync/-/gensync-1.0.0-beta.1.tgz",
+ "integrity": "sha512-r8EC6NO1sngH/zdD9fiRDLdcgnbayXah+mLgManTaIZJqEC1MZstmnox8KpnI2/fxQwrp5OpCOYWLp4rBl4Jcg=="
+ },
"get-caller-file": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/get-caller-file/-/get-caller-file-1.0.3.tgz",
@@ -8078,6 +14654,15 @@
"assert-plus": "^1.0.0"
}
},
+ "git-up": {
+ "version": "4.0.1",
+ "resolved": "https://registry.npmjs.org/git-up/-/git-up-4.0.1.tgz",
+ "integrity": "sha512-LFTZZrBlrCrGCG07/dm1aCjjpL1z9L3+5aEeI9SBhAqSc+kiA9Or1bgZhQFNppJX6h/f5McrvJt1mQXTFm6Qrw==",
+ "requires": {
+ "is-ssh": "^1.3.0",
+ "parse-url": "^5.0.0"
+ }
+ },
"github-from-package": {
"version": "0.0.0",
"resolved": "https://registry.npmjs.org/github-from-package/-/github-from-package-0.0.0.tgz",
@@ -8181,7 +14766,7 @@
},
"globby": {
"version": "6.1.0",
- "resolved": "http://registry.npmjs.org/globby/-/globby-6.1.0.tgz",
+ "resolved": "https://registry.npmjs.org/globby/-/globby-6.1.0.tgz",
"integrity": "sha1-9abXDoOV4hyFj7BInWTfAkJNUGw=",
"requires": {
"array-union": "^1.0.1",
@@ -8193,7 +14778,7 @@
"dependencies": {
"pify": {
"version": "2.3.0",
- "resolved": "http://registry.npmjs.org/pify/-/pify-2.3.0.tgz",
+ "resolved": "https://registry.npmjs.org/pify/-/pify-2.3.0.tgz",
"integrity": "sha1-7RQaasBDqEnqWISY59yosVMw6Qw="
}
}
@@ -8218,21 +14803,43 @@
}
},
"got": {
- "version": "6.7.1",
- "resolved": "http://registry.npmjs.org/got/-/got-6.7.1.tgz",
- "integrity": "sha1-JAzQV4WpoY5WHcG0S0HHY+8ejbA=",
+ "version": "8.0.0",
+ "resolved": "https://registry.npmjs.org/got/-/got-8.0.0.tgz",
+ "integrity": "sha512-lqVA9ORcSGfJPHfMXh1RW451aYMP1NyXivpGqGggnfDqNz3QVfMl7MkuEz+dr70gK2X8dhLiS5YzHhCV3/3yOQ==",
"requires": {
- "create-error-class": "^3.0.0",
+ "cacheable-request": "^2.1.1",
+ "decompress-response": "^3.3.0",
"duplexer3": "^0.1.4",
"get-stream": "^3.0.0",
- "is-redirect": "^1.0.0",
- "is-retry-allowed": "^1.0.0",
- "is-stream": "^1.0.0",
+ "into-stream": "^3.1.0",
+ "is-plain-obj": "^1.1.0",
+ "is-retry-allowed": "^1.1.0",
+ "is-stream": "^1.1.0",
+ "isurl": "^1.0.0-alpha5",
"lowercase-keys": "^1.0.0",
- "safe-buffer": "^5.0.1",
- "timed-out": "^4.0.0",
- "unzip-response": "^2.0.1",
- "url-parse-lax": "^1.0.0"
+ "mimic-response": "^1.0.0",
+ "p-cancelable": "^0.3.0",
+ "p-timeout": "^1.2.0",
+ "pify": "^3.0.0",
+ "safe-buffer": "^5.1.1",
+ "timed-out": "^4.0.1",
+ "url-parse-lax": "^3.0.0",
+ "url-to-options": "^1.0.1"
+ },
+ "dependencies": {
+ "prepend-http": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/prepend-http/-/prepend-http-2.0.0.tgz",
+ "integrity": "sha1-6SQ0v6XqjBn0HN/UAddBo8gZ2Jc="
+ },
+ "url-parse-lax": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/url-parse-lax/-/url-parse-lax-3.0.0.tgz",
+ "integrity": "sha1-FrXK/Afb42dsGxmZF3gj1lA6yww=",
+ "requires": {
+ "prepend-http": "^2.0.0"
+ }
+ }
}
},
"graceful-fs": {
@@ -8246,17 +14853,33 @@
"integrity": "sha1-TK+tdrxi8C+gObL5Tpo906ORpyU="
},
"graphql": {
- "version": "14.1.1",
- "resolved": "https://registry.npmjs.org/graphql/-/graphql-14.1.1.tgz",
- "integrity": "sha512-C5zDzLqvfPAgTtP8AUPIt9keDabrdRAqSWjj2OPRKrKxI9Fb65I36s1uCs1UUBFnSWTdO7hyHi7z1ZbwKMKF6Q==",
+ "version": "14.6.0",
+ "resolved": "https://registry.npmjs.org/graphql/-/graphql-14.6.0.tgz",
+ "integrity": "sha512-VKzfvHEKybTKjQVpTFrA5yUq2S9ihcZvfJAtsDBBCuV6wauPu1xl/f9ehgVf0FcEJJs4vz6ysb/ZMkGigQZseg==",
"requires": {
"iterall": "^1.2.2"
}
},
+ "graphql-compose": {
+ "version": "6.3.8",
+ "resolved": "https://registry.npmjs.org/graphql-compose/-/graphql-compose-6.3.8.tgz",
+ "integrity": "sha512-o0/jzQEMIpSjryLKwmD1vGrCubiPxD0LxlGTgWDSu38TBepu2GhugC9gYgTEbtiCZAHPtvkZ90SzzABOWZyQLA==",
+ "requires": {
+ "graphql-type-json": "^0.2.4",
+ "object-path": "^0.11.4"
+ },
+ "dependencies": {
+ "graphql-type-json": {
+ "version": "0.2.4",
+ "resolved": "https://registry.npmjs.org/graphql-type-json/-/graphql-type-json-0.2.4.tgz",
+ "integrity": "sha512-/tq02ayMQjrG4oDFDRLLrPk0KvJXue0nVXoItBe7uAdbNXjQUu+HYCBdAmPLQoseVzUKKMzrhq2P/sfI76ON6w=="
+ }
+ }
+ },
"graphql-config": {
- "version": "2.2.1",
- "resolved": "https://registry.npmjs.org/graphql-config/-/graphql-config-2.2.1.tgz",
- "integrity": "sha512-U8+1IAhw9m6WkZRRcyj8ZarK96R6lQBQ0an4lp76Ps9FyhOXENC5YQOxOFGm5CxPrX2rD0g3Je4zG5xdNJjwzQ==",
+ "version": "2.2.2",
+ "resolved": "https://registry.npmjs.org/graphql-config/-/graphql-config-2.2.2.tgz",
+ "integrity": "sha512-mtv1ejPyyR2mJUUZNhljggU+B/Xl8tJJWf+h145hB+1Y48acSghFalhNtXfPBcYl2tJzpb+lGxfj3O7OjaiMgw==",
"requires": {
"graphql-import": "^0.7.1",
"graphql-request": "^1.5.0",
@@ -8282,24 +14905,19 @@
}
},
"graphql-playground-html": {
- "version": "1.6.12",
- "resolved": "https://registry.npmjs.org/graphql-playground-html/-/graphql-playground-html-1.6.12.tgz",
- "integrity": "sha512-yOYFwwSMBL0MwufeL8bkrNDgRE7eF/kTHiwrqn9FiR9KLcNIl1xw9l9a+6yIRZM56JReQOHpbQFXTZn1IuSKRg=="
- },
- "graphql-playground-middleware-express": {
- "version": "1.7.12",
- "resolved": "https://registry.npmjs.org/graphql-playground-middleware-express/-/graphql-playground-middleware-express-1.7.12.tgz",
- "integrity": "sha512-17szgonnVSxWVrgblLRHHLjWnMUONfkULIwSunaMvYx8k5oG3yL86cyGCbHuDFUFkyr2swLhdfYl4mDfDXuvOA==",
+ "version": "1.6.25",
+ "resolved": "https://registry.npmjs.org/graphql-playground-html/-/graphql-playground-html-1.6.25.tgz",
+ "integrity": "sha512-wMNvGsQ0OwBVhn72VVi7OdpI85IxiIZT43glRx7gQIwQ6NvhFnzMYBIVmcJAJ4UlXRYiWtrQhuOItDXObiR3kg==",
"requires": {
- "graphql-playground-html": "1.6.12"
+ "xss": "^1.0.6"
}
},
- "graphql-relay": {
- "version": "0.6.0",
- "resolved": "https://registry.npmjs.org/graphql-relay/-/graphql-relay-0.6.0.tgz",
- "integrity": "sha512-OVDi6C9/qOT542Q3KxZdXja3NrDvqzbihn1B44PH8P/c5s0Q90RyQwT6guhGqXqbYEH6zbeLJWjQqiYvcg2vVw==",
+ "graphql-playground-middleware-express": {
+ "version": "1.7.18",
+ "resolved": "https://registry.npmjs.org/graphql-playground-middleware-express/-/graphql-playground-middleware-express-1.7.18.tgz",
+ "integrity": "sha512-EywRL+iBa4u//5YbY1iJxrl0n4IKyomBKgLXrMbG8gHJUwxmFs5FCWJJ4Q6moSn5Q3RgMZvrWzXB27lKwN8Kgw==",
"requires": {
- "prettier": "^1.16.0"
+ "graphql-playground-html": "1.6.25"
}
},
"graphql-request": {
@@ -8310,30 +14928,46 @@
"cross-fetch": "2.2.2"
}
},
- "graphql-skip-limit": {
- "version": "2.0.5",
- "resolved": "https://registry.npmjs.org/graphql-skip-limit/-/graphql-skip-limit-2.0.5.tgz",
- "integrity": "sha512-eh3S+V6qpR8I4GHmMEsJDETa9FnS/tqjuV/1RUBx/OI05AiSz1OiKDWACIWJc3Pca8YcBvviILx7iGQLtnICFw==",
+ "graphql-subscriptions": {
+ "version": "1.1.0",
+ "resolved": "https://registry.npmjs.org/graphql-subscriptions/-/graphql-subscriptions-1.1.0.tgz",
+ "integrity": "sha512-6WzlBFC0lWmXJbIVE8OgFgXIP4RJi3OQgTPa0DVMsDXdpRDjTsM1K9wfl5HSYX7R87QAGlvcv2Y4BIZa/ItonA==",
"requires": {
- "@babel/runtime": "^7.0.0"
+ "iterall": "^1.2.1"
}
},
"graphql-tools": {
- "version": "3.1.1",
- "resolved": "https://registry.npmjs.org/graphql-tools/-/graphql-tools-3.1.1.tgz",
- "integrity": "sha512-yHvPkweUB0+Q/GWH5wIG60bpt8CTwBklCSzQdEHmRUgAdEQKxw+9B7zB3dG7wB3Ym7M7lfrS4Ej+jtDZfA2UXg==",
+ "version": "6.0.10",
+ "resolved": "https://registry.npmjs.org/graphql-tools/-/graphql-tools-6.0.10.tgz",
+ "integrity": "sha512-2cKAl+jMvLYfsYlmbVasbRZTvNNSBum1JFM/fYYoiKJfDbTzSceJerzxGPidfKFfThWe+msa5w7OWlXOCCnP8g==",
"requires": {
- "apollo-link": "^1.2.2",
- "apollo-utilities": "^1.0.1",
- "deprecated-decorator": "^0.1.6",
- "iterall": "^1.1.3",
- "uuid": "^3.1.0"
+ "@graphql-tools/code-file-loader": "6.0.10",
+ "@graphql-tools/delegate": "6.0.10",
+ "@graphql-tools/git-loader": "6.0.10",
+ "@graphql-tools/github-loader": "6.0.10",
+ "@graphql-tools/graphql-file-loader": "6.0.10",
+ "@graphql-tools/graphql-tag-pluck": "6.0.10",
+ "@graphql-tools/import": "6.0.10",
+ "@graphql-tools/json-file-loader": "6.0.10",
+ "@graphql-tools/links": "6.0.10",
+ "@graphql-tools/load": "6.0.10",
+ "@graphql-tools/load-files": "6.0.10",
+ "@graphql-tools/merge": "6.0.10",
+ "@graphql-tools/mock": "6.0.10",
+ "@graphql-tools/module-loader": "6.0.10",
+ "@graphql-tools/relay-operation-optimizer": "6.0.10",
+ "@graphql-tools/resolvers-composition": "6.0.10",
+ "@graphql-tools/schema": "6.0.10",
+ "@graphql-tools/stitch": "6.0.10",
+ "@graphql-tools/url-loader": "6.0.10",
+ "@graphql-tools/utils": "6.0.10",
+ "@graphql-tools/wrap": "6.0.10"
}
},
"graphql-type-json": {
- "version": "0.2.1",
- "resolved": "https://registry.npmjs.org/graphql-type-json/-/graphql-type-json-0.2.1.tgz",
- "integrity": "sha1-0sF34vGxfYf4EHLNBTEcB1S6pCA="
+ "version": "0.3.2",
+ "resolved": "https://registry.npmjs.org/graphql-type-json/-/graphql-type-json-0.3.2.tgz",
+ "integrity": "sha512-J+vjof74oMlCWXSvt0DOf2APEdZOCdubEvGDUAlqH//VBYcOYsGgRW7Xzorr44LvkjiuvecWc8fChxuZZbChtg=="
},
"gray-matter": {
"version": "4.0.2",
@@ -8360,9 +14994,9 @@
}
},
"handle-thing": {
- "version": "2.0.0",
- "resolved": "https://registry.npmjs.org/handle-thing/-/handle-thing-2.0.0.tgz",
- "integrity": "sha512-d4sze1JNC454Wdo2fkuyzCr6aHcbL6PGGuFAz0Li/NcOm1tCHGnWDRmJP85dh9IhQErTc2svWFEX5xHIOo//kQ=="
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/handle-thing/-/handle-thing-2.0.1.tgz",
+ "integrity": "sha512-9Qn4yBxelxoh2Ow62nP+Ka/kMnOXRi8BXnRaUwezLNhqelnN49xKz4F/dPP8OYLxLxq6JDtZb2i9XznUQbNPTg=="
},
"har-schema": {
"version": "2.0.0",
@@ -8494,19 +15128,42 @@
}
}
},
- "hash-base": {
- "version": "3.0.4",
- "resolved": "https://registry.npmjs.org/hash-base/-/hash-base-3.0.4.tgz",
- "integrity": "sha1-X8hoaEfs1zSZQDMZprCj8/auSRg=",
- "requires": {
- "inherits": "^2.0.1",
- "safe-buffer": "^5.0.1"
- }
+ "has-yarn": {
+ "version": "2.1.0",
+ "resolved": "https://registry.npmjs.org/has-yarn/-/has-yarn-2.1.0.tgz",
+ "integrity": "sha512-UqBRqi4ju7T+TqGNdqAO0PaSVGsDGJUBQvk9eUWNGRY1CFGDzYhLWoM7JQEemnlvVcv/YEmc2wNW8BC24EnUsw=="
},
- "hash-mod": {
- "version": "0.0.5",
- "resolved": "https://registry.npmjs.org/hash-mod/-/hash-mod-0.0.5.tgz",
- "integrity": "sha1-2vHklzqRFmQ0Z9VO52kLQ++ALsw="
+ "hash-base": {
+ "version": "3.1.0",
+ "resolved": "https://registry.npmjs.org/hash-base/-/hash-base-3.1.0.tgz",
+ "integrity": "sha512-1nmYp/rhMDiE7AYkDw+lLwlAzz0AntGIe51F3RfFfEqyQ3feY2eI/NcwC6umIQVOASPMsWJLJScWKSSvzL9IVA==",
+ "requires": {
+ "inherits": "^2.0.4",
+ "readable-stream": "^3.6.0",
+ "safe-buffer": "^5.2.0"
+ },
+ "dependencies": {
+ "inherits": {
+ "version": "2.0.4",
+ "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.4.tgz",
+ "integrity": "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ=="
+ },
+ "readable-stream": {
+ "version": "3.6.0",
+ "resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-3.6.0.tgz",
+ "integrity": "sha512-BViHy7LKeTz4oNnkcLJ+lVSL6vpiFeX6/d3oSH8zCW7UxP2onchk+vTGB143xuFjHS3deTgkKoXXymXqymiIdA==",
+ "requires": {
+ "inherits": "^2.0.3",
+ "string_decoder": "^1.1.1",
+ "util-deprecate": "^1.0.1"
+ }
+ },
+ "safe-buffer": {
+ "version": "5.2.1",
+ "resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz",
+ "integrity": "sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ=="
+ }
+ }
},
"hash.js": {
"version": "1.1.7",
@@ -8688,6 +15345,12 @@
"space-separated-tokens": "^1.0.0"
}
},
+ "he": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/he/-/he-1.2.0.tgz",
+ "integrity": "sha512-F/1DnUGPopORZi0ni+CvrCgHQ5FyEAHRLSApuYWMmrbSwoN2Mn/7k+Gl38gJnR7yyDZk6WLXwiGod1JOWNDKGw==",
+ "optional": true
+ },
"header-case": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/header-case/-/header-case-1.0.1.tgz",
@@ -8702,6 +15365,27 @@
"resolved": "https://registry.npmjs.org/hex-color-regex/-/hex-color-regex-1.1.0.tgz",
"integrity": "sha512-l9sfDFsuqtOqKDsQdqrMRk0U85RZc0RtOR9yPI7mRVOa4FsR/BVnZ0shmQRM96Ji99kYZP/7hn1cedc1+ApsTQ=="
},
+ "hicat": {
+ "version": "0.7.0",
+ "resolved": "https://registry.npmjs.org/hicat/-/hicat-0.7.0.tgz",
+ "integrity": "sha1-pwTLP1fkn719OMLt16ujj/CzUmM=",
+ "requires": {
+ "highlight.js": "^8.1.0",
+ "minimist": "^0.2.0"
+ },
+ "dependencies": {
+ "minimist": {
+ "version": "0.2.1",
+ "resolved": "https://registry.npmjs.org/minimist/-/minimist-0.2.1.tgz",
+ "integrity": "sha512-GY8fANSrTMfBVfInqJAY41QkOM+upUTytK1jZ0c8+3HdHrJxBJ3rF5i9moClXTE8uUSnUo8cAsCoxDXvSY4DHg=="
+ }
+ }
+ },
+ "highlight.js": {
+ "version": "8.9.1",
+ "resolved": "https://registry.npmjs.org/highlight.js/-/highlight.js-8.9.1.tgz",
+ "integrity": "sha1-uKnFSTISqTkvAiK2SclhFJfr+4g="
+ },
"hmac-drbg": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/hmac-drbg/-/hmac-drbg-1.0.1.tgz",
@@ -8718,9 +15402,12 @@
"integrity": "sha512-QLg82fGkfnJ/4iy1xZ81/9SIJiq1NGFUMGs6ParyjBZr6jW2Ufj/snDqTHixNlHdPNwN2RLVD0Pi3igeK9+JfA=="
},
"hoist-non-react-statics": {
- "version": "2.5.5",
- "resolved": "https://registry.npmjs.org/hoist-non-react-statics/-/hoist-non-react-statics-2.5.5.tgz",
- "integrity": "sha512-rqcy4pJo55FTTLWt+bU8ukscqHeE/e9KWvsOW2b/a3afxQZhwkQdT1rPPCJ0rYXdj4vNcasY8zHTH+jF/qStxw=="
+ "version": "3.3.2",
+ "resolved": "https://registry.npmjs.org/hoist-non-react-statics/-/hoist-non-react-statics-3.3.2.tgz",
+ "integrity": "sha512-/gGivxi8JPKWNm/W0jSmzcMPpfpPLc3dY/6GxhX2hQ9iGj3aDfklV4ET7NjKpSinLpJ5vafa9iiGIEZg10SfBw==",
+ "requires": {
+ "react-is": "^16.7.0"
+ }
},
"homedir-polyfill": {
"version": "1.0.3",
@@ -8762,9 +15449,14 @@
"integrity": "sha512-P+M65QY2JQ5Y0G9KKdlDpo0zK+/OHptU5AaBwUfAIDJZk1MYf32Frm84EcOytfJE0t5JvkAnKlmjsXDnWzCJmQ=="
},
"html-entities": {
- "version": "1.2.1",
- "resolved": "https://registry.npmjs.org/html-entities/-/html-entities-1.2.1.tgz",
- "integrity": "sha1-DfKTUfByEWNRXfueVUPl9u7VFi8="
+ "version": "1.3.1",
+ "resolved": "https://registry.npmjs.org/html-entities/-/html-entities-1.3.1.tgz",
+ "integrity": "sha512-rhE/4Z3hIhzHAUKbW8jVcCyuT5oJCXXqhN/6mXXVCpzTmvJnoH2HL/bt3EZ6p55jbFJBeAe1ZNpL5BugLujxNA=="
+ },
+ "html-tag-names": {
+ "version": "1.1.5",
+ "resolved": "https://registry.npmjs.org/html-tag-names/-/html-tag-names-1.1.5.tgz",
+ "integrity": "sha512-aI5tKwNTBzOZApHIynaAwecLBv8TlZTEy/P4Sj2SzzAhBrGuI8yGZ0UIXVPQzOHGS+to2mjb04iy6VWt/8+d8A=="
},
"html-to-react": {
"version": "1.3.4",
@@ -8857,29 +15549,37 @@
"integrity": "sha1-+nFolEq5pRnTN8sL7HKE3D5yPYc="
},
"http-errors": {
- "version": "1.6.3",
- "resolved": "http://registry.npmjs.org/http-errors/-/http-errors-1.6.3.tgz",
- "integrity": "sha1-i1VoC7S+KDoLW/TqLjhYC+HZMg0=",
+ "version": "1.7.2",
+ "resolved": "https://registry.npmjs.org/http-errors/-/http-errors-1.7.2.tgz",
+ "integrity": "sha512-uUQBt3H/cSIVfch6i1EuPNy/YsRSOUBXTVfZ+yR7Zjez3qjBz6i9+i4zjNaoqcoFVI4lQJ5plg63TvGfRSDCRg==",
"requires": {
"depd": "~1.1.2",
"inherits": "2.0.3",
- "setprototypeof": "1.1.0",
- "statuses": ">= 1.4.0 < 2"
+ "setprototypeof": "1.1.1",
+ "statuses": ">= 1.5.0 < 2",
+ "toidentifier": "1.0.0"
}
},
"http-parser-js": {
- "version": "0.5.0",
- "resolved": "https://registry.npmjs.org/http-parser-js/-/http-parser-js-0.5.0.tgz",
- "integrity": "sha512-cZdEF7r4gfRIq7ezX9J0T+kQmJNOub71dWbgAXVHDct80TKP4MCETtZQ31xyv38UwgzkWPYF/Xc0ge55dW9Z9w=="
+ "version": "0.5.2",
+ "resolved": "https://registry.npmjs.org/http-parser-js/-/http-parser-js-0.5.2.tgz",
+ "integrity": "sha512-opCO9ASqg5Wy2FNo7A0sxy71yGbbkJJXLdgMK04Tcypw9jr2MgWbyubb0+WdmDmGnFflO7fRbqbaihh/ENDlRQ=="
},
"http-proxy": {
- "version": "1.17.0",
- "resolved": "https://registry.npmjs.org/http-proxy/-/http-proxy-1.17.0.tgz",
- "integrity": "sha512-Taqn+3nNvYRfJ3bGvKfBSRwy1v6eePlm3oc/aWVxZp57DQr5Eq3xhKJi7Z4hZpS8PC3H4qI+Yly5EmFacGuA/g==",
+ "version": "1.18.1",
+ "resolved": "https://registry.npmjs.org/http-proxy/-/http-proxy-1.18.1.tgz",
+ "integrity": "sha512-7mz/721AbnJwIVbnaSv1Cz3Am0ZLT/UBwkC92VlxhXv/k/BBQfM2fXElQNC27BVGr0uwUpplYPQM9LnaBMR5NQ==",
"requires": {
- "eventemitter3": "^3.0.0",
+ "eventemitter3": "^4.0.0",
"follow-redirects": "^1.0.0",
"requires-port": "^1.0.0"
+ },
+ "dependencies": {
+ "eventemitter3": {
+ "version": "4.0.4",
+ "resolved": "https://registry.npmjs.org/eventemitter3/-/eventemitter3-4.0.4.tgz",
+ "integrity": "sha512-rlaVLnVxtxvoyLsQQFBx53YmXHDxRIzzTLbdfxqi4yocpSjAxXwkU0cScM5JgSKMqEhrZpnvQ2D9gjylR0AimQ=="
+ }
}
},
"http-proxy-middleware": {
@@ -8908,6 +15608,11 @@
"resolved": "https://registry.npmjs.org/https-browserify/-/https-browserify-1.0.0.tgz",
"integrity": "sha1-7AbBDgo0wPL68Zn3/X/Hj//QPHM="
},
+ "human-signals": {
+ "version": "1.1.1",
+ "resolved": "https://registry.npmjs.org/human-signals/-/human-signals-1.1.1.tgz",
+ "integrity": "sha512-SEQu7vl8KjNL2eoGBLF3+wAjpsNfA9XMlXAYj/3EdaNfAlxKthD1xjEQfGOUhllCGGJVNY34bRr6lPINhNjyZw=="
+ },
"iconv-lite": {
"version": "0.4.24",
"resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.4.24.tgz",
@@ -9087,7 +15792,7 @@
},
"immutable": {
"version": "3.7.6",
- "resolved": "http://registry.npmjs.org/immutable/-/immutable-3.7.6.tgz",
+ "resolved": "https://registry.npmjs.org/immutable/-/immutable-3.7.6.tgz",
"integrity": "sha1-E7TTyxK++hVIKib+Gy665kAHHks="
},
"import-cwd": {
@@ -9096,6 +15801,16 @@
"integrity": "sha1-qmzzbnInYShcs3HsZRn1PiQ1sKk=",
"requires": {
"import-from": "^2.1.0"
+ },
+ "dependencies": {
+ "import-from": {
+ "version": "2.1.0",
+ "resolved": "https://registry.npmjs.org/import-from/-/import-from-2.1.0.tgz",
+ "integrity": "sha1-M1238qev/VOqpHHUuAId7ja387E=",
+ "requires": {
+ "resolve-from": "^3.0.0"
+ }
+ }
}
},
"import-fresh": {
@@ -9108,11 +15823,18 @@
}
},
"import-from": {
- "version": "2.1.0",
- "resolved": "https://registry.npmjs.org/import-from/-/import-from-2.1.0.tgz",
- "integrity": "sha1-M1238qev/VOqpHHUuAId7ja387E=",
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/import-from/-/import-from-3.0.0.tgz",
+ "integrity": "sha512-CiuXOFFSzkU5x/CR0+z7T91Iht4CXgfCxVOFRhh2Zyhg5wOpWvvDLQUsWl+gcN+QscYBjez8hDCt85O7RLDttQ==",
"requires": {
- "resolve-from": "^3.0.0"
+ "resolve-from": "^5.0.0"
+ },
+ "dependencies": {
+ "resolve-from": {
+ "version": "5.0.0",
+ "resolved": "https://registry.npmjs.org/resolve-from/-/resolve-from-5.0.0.tgz",
+ "integrity": "sha512-qYg9KP24dD5qka9J47d0aVky0N+b4fTU89LN9iDnjB5waksiC49rvMB0PrUJQGoTmH50XPiqOvAjDfaijGxYZw=="
+ }
}
},
"import-lazy": {
@@ -9127,6 +15849,16 @@
"requires": {
"pkg-dir": "^3.0.0",
"resolve-cwd": "^2.0.0"
+ },
+ "dependencies": {
+ "resolve-cwd": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/resolve-cwd/-/resolve-cwd-2.0.0.tgz",
+ "integrity": "sha1-AKn3OHVW4nA46uIyyqNypqWbZlo=",
+ "requires": {
+ "resolve-from": "^3.0.0"
+ }
+ }
}
},
"imurmurhash": {
@@ -9157,6 +15889,11 @@
"resolved": "https://registry.npmjs.org/indexof/-/indexof-0.0.1.tgz",
"integrity": "sha1-gtwzbSMrkGIXnQWrMpOmYFn9Q10="
},
+ "infer-owner": {
+ "version": "1.0.4",
+ "resolved": "https://registry.npmjs.org/infer-owner/-/infer-owner-1.0.4.tgz",
+ "integrity": "sha512-IClj+Xz94+d7irH5qRyfJonOdfTzuDaifE6ZPWfx0N0+/ATZCbuTPq2prFl526urkQd90WyUKIh1DfBQ2hMz9A=="
+ },
"inflight": {
"version": "1.0.6",
"resolved": "https://registry.npmjs.org/inflight/-/inflight-1.0.6.tgz",
@@ -9176,30 +15913,236 @@
"resolved": "https://registry.npmjs.org/ini/-/ini-1.3.5.tgz",
"integrity": "sha512-RZY5huIKCMRWDUqZlEi72f/lmXKMvuszcMBduliQ3nnWbx9X/ZBQO7DijMEYS9EhHBb2qacRUMtC7svLwe0lcw=="
},
- "inquirer": {
- "version": "6.2.2",
- "resolved": "https://registry.npmjs.org/inquirer/-/inquirer-6.2.2.tgz",
- "integrity": "sha512-Z2rREiXA6cHRR9KBOarR3WuLlFzlIfAEIiB45ll5SSadMg7WqOh1MKEjjndfuH5ewXdixWCxqnVfGOQzPeiztA==",
+ "ink": {
+ "version": "2.7.1",
+ "resolved": "https://registry.npmjs.org/ink/-/ink-2.7.1.tgz",
+ "integrity": "sha512-s7lJuQDJEdjqtaIWhp3KYHl6WV3J04U9zoQ6wVc+Xoa06XM27SXUY57qC5DO46xkF0CfgXMKkKNcgvSu/SAEpA==",
"requires": {
- "ansi-escapes": "^3.2.0",
- "chalk": "^2.4.2",
- "cli-cursor": "^2.1.0",
- "cli-width": "^2.0.0",
- "external-editor": "^3.0.3",
- "figures": "^2.0.0",
- "lodash": "^4.17.11",
- "mute-stream": "0.0.7",
- "run-async": "^2.2.0",
- "rxjs": "^6.4.0",
- "string-width": "^2.1.0",
- "strip-ansi": "^5.0.0",
- "through": "^2.3.6"
+ "ansi-escapes": "^4.2.1",
+ "arrify": "^2.0.1",
+ "auto-bind": "^4.0.0",
+ "chalk": "^3.0.0",
+ "cli-cursor": "^3.1.0",
+ "cli-truncate": "^2.1.0",
+ "is-ci": "^2.0.0",
+ "lodash.throttle": "^4.1.1",
+ "log-update": "^3.0.0",
+ "prop-types": "^15.6.2",
+ "react-reconciler": "^0.24.0",
+ "scheduler": "^0.18.0",
+ "signal-exit": "^3.0.2",
+ "slice-ansi": "^3.0.0",
+ "string-length": "^3.1.0",
+ "widest-line": "^3.1.0",
+ "wrap-ansi": "^6.2.0",
+ "yoga-layout-prebuilt": "^1.9.3"
+ },
+ "dependencies": {
+ "ansi-escapes": {
+ "version": "4.3.1",
+ "resolved": "https://registry.npmjs.org/ansi-escapes/-/ansi-escapes-4.3.1.tgz",
+ "integrity": "sha512-JWF7ocqNrp8u9oqpgV+wH5ftbt+cfvv+PTjOvKLT3AdYly/LmORARfEVT1iyjwN+4MqE5UmVKoAdIBqeoCHgLA==",
+ "requires": {
+ "type-fest": "^0.11.0"
+ }
+ },
+ "ansi-regex": {
+ "version": "5.0.0",
+ "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz",
+ "integrity": "sha512-bY6fj56OUQ0hU1KjFNDQuJFezqKdrAyFdIevADiqrWHwSlbmBNMHp5ak2f40Pm8JTFyM2mqxkG6ngkHO11f/lg=="
+ },
+ "ansi-styles": {
+ "version": "4.2.1",
+ "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.2.1.tgz",
+ "integrity": "sha512-9VGjrMsG1vePxcSweQsN20KY/c4zN0h9fLjqAbwbPfahM3t+NL+M9HC8xeXG2I8pX5NoamTGNuomEUFI7fcUjA==",
+ "requires": {
+ "@types/color-name": "^1.1.1",
+ "color-convert": "^2.0.1"
+ }
+ },
+ "arrify": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/arrify/-/arrify-2.0.1.tgz",
+ "integrity": "sha512-3duEwti880xqi4eAMN8AyR4a0ByT90zoYdLlevfrvU43vb0YZwZVfxOgxWrLXXXpyugL0hNZc9G6BiB5B3nUug=="
+ },
+ "astral-regex": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/astral-regex/-/astral-regex-2.0.0.tgz",
+ "integrity": "sha512-Z7tMw1ytTXt5jqMcOP+OQteU1VuNK9Y02uuJtKQ1Sv69jXQKKg5cibLwGJow8yzZP+eAc18EmLGPal0bp36rvQ=="
+ },
+ "chalk": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/chalk/-/chalk-3.0.0.tgz",
+ "integrity": "sha512-4D3B6Wf41KOYRFdszmDqMCGq5VV/uMAB273JILmO+3jAlh8X4qDtdtgCR3fxtbLEMzSx22QdhnDcJvu2u1fVwg==",
+ "requires": {
+ "ansi-styles": "^4.1.0",
+ "supports-color": "^7.1.0"
+ }
+ },
+ "cli-cursor": {
+ "version": "3.1.0",
+ "resolved": "https://registry.npmjs.org/cli-cursor/-/cli-cursor-3.1.0.tgz",
+ "integrity": "sha512-I/zHAwsKf9FqGoXM4WWRACob9+SNukZTd94DWF57E4toouRulbCxcUh6RKUEOQlYTHJnzkPMySvPNaaSLNfLZw==",
+ "requires": {
+ "restore-cursor": "^3.1.0"
+ }
+ },
+ "color-convert": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz",
+ "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==",
+ "requires": {
+ "color-name": "~1.1.4"
+ }
+ },
+ "color-name": {
+ "version": "1.1.4",
+ "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz",
+ "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA=="
+ },
+ "emoji-regex": {
+ "version": "8.0.0",
+ "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz",
+ "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A=="
+ },
+ "has-flag": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz",
+ "integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ=="
+ },
+ "is-fullwidth-code-point": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-3.0.0.tgz",
+ "integrity": "sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg=="
+ },
+ "mimic-fn": {
+ "version": "2.1.0",
+ "resolved": "https://registry.npmjs.org/mimic-fn/-/mimic-fn-2.1.0.tgz",
+ "integrity": "sha512-OqbOk5oEQeAZ8WXWydlu9HJjz9WVdEIvamMCcXmuqUYjTknH/sqsWvhQ3vgwKFRR1HpjvNBKQ37nbJgYzGqGcg=="
+ },
+ "onetime": {
+ "version": "5.1.0",
+ "resolved": "https://registry.npmjs.org/onetime/-/onetime-5.1.0.tgz",
+ "integrity": "sha512-5NcSkPHhwTVFIQN+TUqXoS5+dlElHXdpAWu9I0HP20YOtIi+aZ0Ct82jdlILDxjLEAWwvm+qj1m6aEtsDVmm6Q==",
+ "requires": {
+ "mimic-fn": "^2.1.0"
+ }
+ },
+ "react-reconciler": {
+ "version": "0.24.0",
+ "resolved": "https://registry.npmjs.org/react-reconciler/-/react-reconciler-0.24.0.tgz",
+ "integrity": "sha512-gAGnwWkf+NOTig9oOowqid9O0HjTDC+XVGBCAmJYYJ2A2cN/O4gDdIuuUQjv8A4v6GDwVfJkagpBBLW5OW9HSw==",
+ "requires": {
+ "loose-envify": "^1.1.0",
+ "object-assign": "^4.1.1",
+ "prop-types": "^15.6.2",
+ "scheduler": "^0.18.0"
+ }
+ },
+ "restore-cursor": {
+ "version": "3.1.0",
+ "resolved": "https://registry.npmjs.org/restore-cursor/-/restore-cursor-3.1.0.tgz",
+ "integrity": "sha512-l+sSefzHpj5qimhFSE5a8nufZYAM3sBSVMAPtYkmC+4EH2anSGaEMXSD0izRQbu9nfyQ9y5JrVmp7E8oZrUjvA==",
+ "requires": {
+ "onetime": "^5.1.0",
+ "signal-exit": "^3.0.2"
+ }
+ },
+ "scheduler": {
+ "version": "0.18.0",
+ "resolved": "https://registry.npmjs.org/scheduler/-/scheduler-0.18.0.tgz",
+ "integrity": "sha512-agTSHR1Nbfi6ulI0kYNK0203joW2Y5W4po4l+v03tOoiJKpTBbxpNhWDvqc/4IcOw+KLmSiQLTasZ4cab2/UWQ==",
+ "requires": {
+ "loose-envify": "^1.1.0",
+ "object-assign": "^4.1.1"
+ }
+ },
+ "slice-ansi": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/slice-ansi/-/slice-ansi-3.0.0.tgz",
+ "integrity": "sha512-pSyv7bSTC7ig9Dcgbw9AuRNUb5k5V6oDudjZoMBSr13qpLBG7tB+zgCkARjq7xIUgdz5P1Qe8u+rSGdouOOIyQ==",
+ "requires": {
+ "ansi-styles": "^4.0.0",
+ "astral-regex": "^2.0.0",
+ "is-fullwidth-code-point": "^3.0.0"
+ }
+ },
+ "string-width": {
+ "version": "4.2.0",
+ "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.0.tgz",
+ "integrity": "sha512-zUz5JD+tgqtuDjMhwIg5uFVV3dtqZ9yQJlZVfq4I01/K5Paj5UHj7VyrQOJvzawSVlKpObApbfD0Ed6yJc+1eg==",
+ "requires": {
+ "emoji-regex": "^8.0.0",
+ "is-fullwidth-code-point": "^3.0.0",
+ "strip-ansi": "^6.0.0"
+ }
+ },
+ "strip-ansi": {
+ "version": "6.0.0",
+ "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.0.tgz",
+ "integrity": "sha512-AuvKTrTfQNYNIctbR1K/YGTR1756GycPsg7b9bdV9Duqur4gv6aKqHXah67Z8ImS7WEz5QVcOtlfW2rZEugt6w==",
+ "requires": {
+ "ansi-regex": "^5.0.0"
+ }
+ },
+ "supports-color": {
+ "version": "7.1.0",
+ "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.1.0.tgz",
+ "integrity": "sha512-oRSIpR8pxT1Wr2FquTNnGet79b3BWljqOuoW/h4oBhxJ/HUbX5nX6JSruTkvXDCFMwDPvsaTTbvMLKZWSy0R5g==",
+ "requires": {
+ "has-flag": "^4.0.0"
+ }
+ },
+ "type-fest": {
+ "version": "0.11.0",
+ "resolved": "https://registry.npmjs.org/type-fest/-/type-fest-0.11.0.tgz",
+ "integrity": "sha512-OdjXJxnCN1AvyLSzeKIgXTXxV+99ZuXl3Hpo9XpJAv9MBcHrrJOQ5kV7ypXOuQie+AmWG25hLbiKdwYTifzcfQ=="
+ },
+ "wrap-ansi": {
+ "version": "6.2.0",
+ "resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-6.2.0.tgz",
+ "integrity": "sha512-r6lPcBGxZXlIcymEu7InxDMhdW0KDxpLgoFLcguasxCaJ/SOIZwINatK9KY/tf+ZrlywOKU0UDj3ATXUBfxJXA==",
+ "requires": {
+ "ansi-styles": "^4.0.0",
+ "string-width": "^4.1.0",
+ "strip-ansi": "^6.0.0"
+ }
+ }
+ }
+ },
+ "ink-box": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/ink-box/-/ink-box-1.0.0.tgz",
+ "integrity": "sha512-wD2ldWX9lcE/6+flKbAJ0TZF7gKbTH8CRdhEor6DD8d+V0hPITrrGeST2reDBpCia8wiqHrdxrqTyafwtmVanA==",
+ "requires": {
+ "boxen": "^3.0.0",
+ "prop-types": "^15.7.2"
},
"dependencies": {
"ansi-regex": {
- "version": "4.0.0",
- "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.0.0.tgz",
- "integrity": "sha512-iB5Dda8t/UqpPI/IjsejXu5jOGDrzn41wJyljwPH65VCIbk6+1BzFIMJGFwTNrYXT1CrD+B4l19U7awiQ8rk7w=="
+ "version": "4.1.0",
+ "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz",
+ "integrity": "sha512-1apePfXM1UOSqw0o9IiFAovVz9M5S1Dg+4TrDwfMewQ6p/rmMueb7tWZjQ1rx4Loy1ArBggoqGpfqqdI4rondg=="
+ },
+ "boxen": {
+ "version": "3.2.0",
+ "resolved": "https://registry.npmjs.org/boxen/-/boxen-3.2.0.tgz",
+ "integrity": "sha512-cU4J/+NodM3IHdSL2yN8bqYqnmlBTidDR4RC7nJs61ZmtGz8VZzM3HLQX0zY5mrSmPtR3xWwsq2jOUQqFZN8+A==",
+ "requires": {
+ "ansi-align": "^3.0.0",
+ "camelcase": "^5.3.1",
+ "chalk": "^2.4.2",
+ "cli-boxes": "^2.2.0",
+ "string-width": "^3.0.0",
+ "term-size": "^1.2.0",
+ "type-fest": "^0.3.0",
+ "widest-line": "^2.0.0"
+ }
+ },
+ "camelcase": {
+ "version": "5.3.1",
+ "resolved": "https://registry.npmjs.org/camelcase/-/camelcase-5.3.1.tgz",
+ "integrity": "sha512-L28STB170nwWS63UjtlEOE3dldQApaJXZkOI1uMFfzf3rRuPegHaHesyee+YxQ+W6SvRDQV6UrdOdRiR153wJg=="
},
"chalk": {
"version": "2.4.2",
@@ -9211,29 +16154,224 @@
"supports-color": "^5.3.0"
}
},
- "strip-ansi": {
- "version": "5.0.0",
- "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-5.0.0.tgz",
- "integrity": "sha512-Uu7gQyZI7J7gn5qLn1Np3G9vcYGTVqB+lFTytnDJv83dd8T22aGH451P3jueT2/QemInJDfxHB5Tde5OzgG1Ow==",
+ "string-width": {
+ "version": "3.1.0",
+ "resolved": "https://registry.npmjs.org/string-width/-/string-width-3.1.0.tgz",
+ "integrity": "sha512-vafcv6KjVZKSgz06oM/H6GDBrAtz8vdhQakGjFIvNrHA6y3HCF1CInLy+QLq8dTJPQ1b+KDUqDFctkdRW44e1w==",
"requires": {
- "ansi-regex": "^4.0.0"
+ "emoji-regex": "^7.0.1",
+ "is-fullwidth-code-point": "^2.0.0",
+ "strip-ansi": "^5.1.0"
+ }
+ },
+ "strip-ansi": {
+ "version": "5.2.0",
+ "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-5.2.0.tgz",
+ "integrity": "sha512-DuRs1gKbBqsMKIZlrffwlug8MHkcnpjs5VPmL1PAh+mA30U0DTotfDZ0d2UUsXpPmPmMMJ6W773MaA3J+lbiWA==",
+ "requires": {
+ "ansi-regex": "^4.1.0"
+ }
+ },
+ "term-size": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/term-size/-/term-size-1.2.0.tgz",
+ "integrity": "sha1-RYuDiH8oj8Vtb/+/rSYuJmOO+mk=",
+ "requires": {
+ "execa": "^0.7.0"
+ }
+ },
+ "type-fest": {
+ "version": "0.3.1",
+ "resolved": "https://registry.npmjs.org/type-fest/-/type-fest-0.3.1.tgz",
+ "integrity": "sha512-cUGJnCdr4STbePCgqNFbpVNCepa+kAVohJs1sLhxzdH+gnEoOd8VhbYa7pD3zZYGiURWM2xzEII3fQcRizDkYQ=="
+ },
+ "widest-line": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/widest-line/-/widest-line-2.0.1.tgz",
+ "integrity": "sha512-Ba5m9/Fa4Xt9eb2ELXt77JxVDV8w7qQrH0zS/TWSJdLyAwQjWoOzpzj5lwVftDz6n/EOu3tNACS84v509qwnJA==",
+ "requires": {
+ "string-width": "^2.1.1"
+ },
+ "dependencies": {
+ "ansi-regex": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz",
+ "integrity": "sha1-7QMXwyIGT3lGbAKWa922Bas32Zg="
+ },
+ "string-width": {
+ "version": "2.1.1",
+ "resolved": "https://registry.npmjs.org/string-width/-/string-width-2.1.1.tgz",
+ "integrity": "sha512-nOqH59deCq9SRHlxq1Aw85Jnt4w6KvLKqWVik6oA9ZklXLNIOlqg4F2yrT1MVaTjAqvVwdfeZ7w7aCvJD7ugkw==",
+ "requires": {
+ "is-fullwidth-code-point": "^2.0.0",
+ "strip-ansi": "^4.0.0"
+ }
+ },
+ "strip-ansi": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-4.0.0.tgz",
+ "integrity": "sha1-qEeQIusaw2iocTibY1JixQXuNo8=",
+ "requires": {
+ "ansi-regex": "^3.0.0"
+ }
+ }
+ }
+ }
+ }
+ },
+ "ink-link": {
+ "version": "1.1.0",
+ "resolved": "https://registry.npmjs.org/ink-link/-/ink-link-1.1.0.tgz",
+ "integrity": "sha512-a716nYz4YDPu8UOA2PwabTZgTvZa3SYB/70yeXVmTOKFAEdMbJyGSVeNuB7P+aM2olzDj9AGVchA7W5QytF9uA==",
+ "requires": {
+ "prop-types": "^15.7.2",
+ "terminal-link": "^2.1.1"
+ }
+ },
+ "ink-select-input": {
+ "version": "3.1.2",
+ "resolved": "https://registry.npmjs.org/ink-select-input/-/ink-select-input-3.1.2.tgz",
+ "integrity": "sha512-PaLraGx8A54GhSkTNzZI8bgY0elAoa1jSPPe5Q52B5VutcBoJc4HE3ICDwsEGJ88l1Hw6AWjpeoqrq82a8uQPA==",
+ "requires": {
+ "arr-rotate": "^1.0.0",
+ "figures": "^2.0.0",
+ "lodash.isequal": "^4.5.0",
+ "prop-types": "^15.5.10"
+ }
+ },
+ "ink-spinner": {
+ "version": "3.0.1",
+ "resolved": "https://registry.npmjs.org/ink-spinner/-/ink-spinner-3.0.1.tgz",
+ "integrity": "sha512-AVR4Z/NXDQ7dT5ltWcCzFS9Dd4T8eaO//E2UO8VYNiJcZpPCSJ11o5A0UVPcMlZxGbGD6ikUFDR3ZgPUQk5haQ==",
+ "requires": {
+ "cli-spinners": "^1.0.0",
+ "prop-types": "^15.5.10"
+ }
+ },
+ "inline-style-parser": {
+ "version": "0.1.1",
+ "resolved": "https://registry.npmjs.org/inline-style-parser/-/inline-style-parser-0.1.1.tgz",
+ "integrity": "sha512-7NXolsK4CAS5+xvdj5OMMbI962hU/wvwoxk+LWR9Ek9bVtyuuYScDN6eS0rUm6TxApFpw7CX1o4uJzcd4AyD3Q=="
+ },
+ "inquirer": {
+ "version": "6.5.2",
+ "resolved": "https://registry.npmjs.org/inquirer/-/inquirer-6.5.2.tgz",
+ "integrity": "sha512-cntlB5ghuB0iuO65Ovoi8ogLHiWGs/5yNrtUcKjFhSSiVeAIVpD7koaSU9RM8mpXw5YDi9RdYXGQMaOURB7ycQ==",
+ "requires": {
+ "ansi-escapes": "^3.2.0",
+ "chalk": "^2.4.2",
+ "cli-cursor": "^2.1.0",
+ "cli-width": "^2.0.0",
+ "external-editor": "^3.0.3",
+ "figures": "^2.0.0",
+ "lodash": "^4.17.12",
+ "mute-stream": "0.0.7",
+ "run-async": "^2.2.0",
+ "rxjs": "^6.4.0",
+ "string-width": "^2.1.0",
+ "strip-ansi": "^5.1.0",
+ "through": "^2.3.6"
+ },
+ "dependencies": {
+ "ansi-regex": {
+ "version": "4.1.0",
+ "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz",
+ "integrity": "sha512-1apePfXM1UOSqw0o9IiFAovVz9M5S1Dg+4TrDwfMewQ6p/rmMueb7tWZjQ1rx4Loy1ArBggoqGpfqqdI4rondg=="
+ },
+ "chalk": {
+ "version": "2.4.2",
+ "resolved": "https://registry.npmjs.org/chalk/-/chalk-2.4.2.tgz",
+ "integrity": "sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ==",
+ "requires": {
+ "ansi-styles": "^3.2.1",
+ "escape-string-regexp": "^1.0.5",
+ "supports-color": "^5.3.0"
+ }
+ },
+ "lodash": {
+ "version": "4.17.15",
+ "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz",
+ "integrity": "sha512-8xOcRHvCjnocdS5cpwXQXVzmmh5e5+saE2QGoeQmbKmRS6J3VQppPOIt0MnmE+4xlZoumy0GPG0D0MVIQbNA1A=="
+ },
+ "strip-ansi": {
+ "version": "5.2.0",
+ "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-5.2.0.tgz",
+ "integrity": "sha512-DuRs1gKbBqsMKIZlrffwlug8MHkcnpjs5VPmL1PAh+mA30U0DTotfDZ0d2UUsXpPmPmMMJ6W773MaA3J+lbiWA==",
+ "requires": {
+ "ansi-regex": "^4.1.0"
}
}
}
},
"internal-ip": {
- "version": "4.2.0",
- "resolved": "https://registry.npmjs.org/internal-ip/-/internal-ip-4.2.0.tgz",
- "integrity": "sha512-ZY8Rk+hlvFeuMmG5uH1MXhhdeMntmIaxaInvAmzMq/SHV8rv4Kh+6GiQNNDQd0wZFrcO+FiTBo8lui/osKOyJw==",
+ "version": "4.3.0",
+ "resolved": "https://registry.npmjs.org/internal-ip/-/internal-ip-4.3.0.tgz",
+ "integrity": "sha512-S1zBo1D6zcsyuC6PMmY5+55YMILQ9av8lotMx447Bq6SAgo/sDK6y6uUKmuYhW7eacnIhFfsPmCNYdDzsnnDCg==",
"requires": {
- "default-gateway": "^4.0.1",
+ "default-gateway": "^4.2.0",
"ipaddr.js": "^1.9.0"
+ }
+ },
+ "internal-slot": {
+ "version": "1.0.2",
+ "resolved": "https://registry.npmjs.org/internal-slot/-/internal-slot-1.0.2.tgz",
+ "integrity": "sha512-2cQNfwhAfJIkU4KZPkDI+Gj5yNNnbqi40W9Gge6dfnk4TocEVm00B3bdiL+JINrbGJil2TeHvM4rETGzk/f/0g==",
+ "requires": {
+ "es-abstract": "^1.17.0-next.1",
+ "has": "^1.0.3",
+ "side-channel": "^1.0.2"
},
"dependencies": {
- "ipaddr.js": {
- "version": "1.9.0",
- "resolved": "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-1.9.0.tgz",
- "integrity": "sha512-M4Sjn6N/+O6/IXSJseKqHoFc+5FdGJ22sXqnjTpdZweHK64MzEPAyQZyEU3R/KRv2GLoa7nNtg/C2Ev6m7z+eA=="
+ "es-abstract": {
+ "version": "1.17.6",
+ "resolved": "https://registry.npmjs.org/es-abstract/-/es-abstract-1.17.6.tgz",
+ "integrity": "sha512-Fr89bON3WFyUi5EvAeI48QTWX0AyekGgLA8H+c+7fbfCkJwRWRMLd8CQedNEyJuoYYhmtEqY92pgte1FAhBlhw==",
+ "requires": {
+ "es-to-primitive": "^1.2.1",
+ "function-bind": "^1.1.1",
+ "has": "^1.0.3",
+ "has-symbols": "^1.0.1",
+ "is-callable": "^1.2.0",
+ "is-regex": "^1.1.0",
+ "object-inspect": "^1.7.0",
+ "object-keys": "^1.1.1",
+ "object.assign": "^4.1.0",
+ "string.prototype.trimend": "^1.0.1",
+ "string.prototype.trimstart": "^1.0.1"
+ }
+ },
+ "es-to-primitive": {
+ "version": "1.2.1",
+ "resolved": "https://registry.npmjs.org/es-to-primitive/-/es-to-primitive-1.2.1.tgz",
+ "integrity": "sha512-QCOllgZJtaUo9miYBcLChTUaHNjJF3PYs1VidD7AwiEj1kYxKeQTctLAezAOH5ZKRH0g2IgPn6KwB4IT8iRpvA==",
+ "requires": {
+ "is-callable": "^1.1.4",
+ "is-date-object": "^1.0.1",
+ "is-symbol": "^1.0.2"
+ }
+ },
+ "has-symbols": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.0.1.tgz",
+ "integrity": "sha512-PLcsoqu++dmEIZB+6totNFKq/7Do+Z0u4oT0zKOJNl3lYK6vGwwu2hjHs+68OEZbTjiUE9bgOABXbP/GvrS0Kg=="
+ },
+ "is-callable": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/is-callable/-/is-callable-1.2.0.tgz",
+ "integrity": "sha512-pyVD9AaGLxtg6srb2Ng6ynWJqkHU9bEM087AKck0w8QwDarTfNcpIYoU8x8Hv2Icm8u6kFJM18Dag8lyqGkviw=="
+ },
+ "is-regex": {
+ "version": "1.1.0",
+ "resolved": "https://registry.npmjs.org/is-regex/-/is-regex-1.1.0.tgz",
+ "integrity": "sha512-iI97M8KTWID2la5uYXlkbSDQIg4F6o1sYboZKKTDpnDQMLtUL86zxhgDet3Q2SriaYsyGqZ6Mn2SjbRKeLHdqw==",
+ "requires": {
+ "has-symbols": "^1.0.1"
+ }
+ },
+ "object-keys": {
+ "version": "1.1.1",
+ "resolved": "https://registry.npmjs.org/object-keys/-/object-keys-1.1.1.tgz",
+ "integrity": "sha512-NuAESUOUMrlIXOfHKzD6bpPu3tYt3xvjNdRIQ+FeT0lNb4K8WR70CaDxhuNguS2XG+GjkyMwOzsN5ZktImfhLA=="
}
}
},
@@ -9275,18 +16413,9 @@
"integrity": "sha1-+ni/XS5pE8kRzp+BnuUUa7bYROk="
},
"ipaddr.js": {
- "version": "1.8.0",
- "resolved": "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-1.8.0.tgz",
- "integrity": "sha1-6qM9bd16zo9/b+DJygRA5wZzix4="
- },
- "is-absolute": {
- "version": "1.0.0",
- "resolved": "https://registry.npmjs.org/is-absolute/-/is-absolute-1.0.0.tgz",
- "integrity": "sha512-dOWoqflvcydARa360Gvv18DZ/gRuHKi2NU/wU5X1ZFzdYfH29nkiNZsF3mp4OJ3H4yo9Mx8A/uAGNzpzPN3yBA==",
- "requires": {
- "is-relative": "^1.0.0",
- "is-windows": "^1.0.1"
- }
+ "version": "1.9.1",
+ "resolved": "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-1.9.1.tgz",
+ "integrity": "sha512-0KI/607xoxSToH7GjN1FfSbLoU0+btTicjsQSWQlh/hZykN8KpmMf7uYwPW3R+akZ6R/w18ZlXSHBYXiYUPO3g=="
},
"is-absolute-url": {
"version": "2.1.0",
@@ -9330,6 +16459,11 @@
"is-decimal": "^1.0.0"
}
},
+ "is-arguments": {
+ "version": "1.0.4",
+ "resolved": "https://registry.npmjs.org/is-arguments/-/is-arguments-1.0.4.tgz",
+ "integrity": "sha512-xPh0Rmt8NE65sNzvyUmWgI1tz3mKq74lGA0mL8LYZcoIzKOzDh6HmrYm3d18k60nHerC8A9Km8kYu87zfSFnLA=="
+ },
"is-arrayish": {
"version": "0.2.1",
"resolved": "https://registry.npmjs.org/is-arrayish/-/is-arrayish-0.2.1.tgz",
@@ -9343,6 +16477,15 @@
"binary-extensions": "^1.0.0"
}
},
+ "is-blank": {
+ "version": "2.1.0",
+ "resolved": "https://registry.npmjs.org/is-blank/-/is-blank-2.1.0.tgz",
+ "integrity": "sha1-aac9PA1PQX3/+yB6J5XA8OV23gQ=",
+ "requires": {
+ "is-empty": "^1.2.0",
+ "is-whitespace": "^0.3.0"
+ }
+ },
"is-buffer": {
"version": "1.1.6",
"resolved": "https://registry.npmjs.org/is-buffer/-/is-buffer-1.1.6.tgz",
@@ -9362,11 +16505,11 @@
"integrity": "sha512-r5p9sxJjYnArLjObpjA4xu5EKI3CuKHkJXMhT7kwbpUyIFD1n5PMAsoPvWnvtZiNz7LjkYDRZhd7FlI0eMijEA=="
},
"is-ci": {
- "version": "1.2.1",
- "resolved": "https://registry.npmjs.org/is-ci/-/is-ci-1.2.1.tgz",
- "integrity": "sha512-s6tfsaQaQi3JNciBH6shVqEDvhGut0SUXr31ag8Pd8BBbVVlcGfWhpPmEOoM6RJ5TFhbypvf5yyRw/VXW1IiWg==",
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/is-ci/-/is-ci-2.0.0.tgz",
+ "integrity": "sha512-YfJT7rkpQB0updsdHLGWrvhBJfcfzNNawYDNIyQXJz0IViGf75O8EBPKSdvw2rF+LGCsX4FZ8tcr3b19LcZq4w==",
"requires": {
- "ci-info": "^1.5.0"
+ "ci-info": "^2.0.0"
}
},
"is-color-stop": {
@@ -9447,6 +16590,16 @@
"resolved": "https://registry.npmjs.org/is-directory/-/is-directory-0.3.1.tgz",
"integrity": "sha1-YTObbyR1/Hcv2cnYP1yFddwVSuE="
},
+ "is-docker": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/is-docker/-/is-docker-2.0.0.tgz",
+ "integrity": "sha512-pJEdRugimx4fBMra5z2/5iRdZ63OhYV0vr0Dwm5+xtW4D1FvRkB8hamMIhnWfyJeDdyr/aa7BDyNbtG38VxgoQ=="
+ },
+ "is-empty": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/is-empty/-/is-empty-1.2.0.tgz",
+ "integrity": "sha1-3pu1snhzigWgsJpX4ftNSjQan2s="
+ },
"is-extendable": {
"version": "0.1.1",
"resolved": "https://registry.npmjs.org/is-extendable/-/is-extendable-0.1.1.tgz",
@@ -9497,6 +16650,29 @@
"is-path-inside": "^1.0.0"
}
},
+ "is-invalid-path": {
+ "version": "0.1.0",
+ "resolved": "https://registry.npmjs.org/is-invalid-path/-/is-invalid-path-0.1.0.tgz",
+ "integrity": "sha1-MHqFWzzxqTi0TqcNLGEQYFNxTzQ=",
+ "requires": {
+ "is-glob": "^2.0.0"
+ },
+ "dependencies": {
+ "is-extglob": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/is-extglob/-/is-extglob-1.0.0.tgz",
+ "integrity": "sha1-rEaBd8SUNAWgkvyPKXYMb/xiBsA="
+ },
+ "is-glob": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/is-glob/-/is-glob-2.0.1.tgz",
+ "integrity": "sha1-0Jb5JqPe1WAPP9/ZEZjLCIjC2GM=",
+ "requires": {
+ "is-extglob": "^1.0.0"
+ }
+ }
+ }
+ },
"is-jpg": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/is-jpg/-/is-jpg-2.0.0.tgz",
@@ -9516,9 +16692,9 @@
"integrity": "sha1-q5124dtM7VHjXeDHLr7PCfc0zeg="
},
"is-npm": {
- "version": "1.0.0",
- "resolved": "https://registry.npmjs.org/is-npm/-/is-npm-1.0.0.tgz",
- "integrity": "sha1-8vtjpl5JBbQGyGBydloaTceTufQ="
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/is-npm/-/is-npm-3.0.0.tgz",
+ "integrity": "sha512-wsigDr1Kkschp2opC4G3yA6r9EgVA6NjRpWzIi9axXqeIaAATPRJc4uLujXe3Nd9uO8KoDyA4MD6aZSeXTADhA=="
},
"is-number": {
"version": "3.0.0",
@@ -9587,16 +16763,6 @@
"resolved": "https://registry.npmjs.org/is-png/-/is-png-1.1.0.tgz",
"integrity": "sha1-1XSxK/J1wDUEVVcLDltXqwYgd84="
},
- "is-promise": {
- "version": "2.1.0",
- "resolved": "https://registry.npmjs.org/is-promise/-/is-promise-2.1.0.tgz",
- "integrity": "sha1-eaKp7OfwlugPNtKy87wWwf9L8/o="
- },
- "is-redirect": {
- "version": "1.0.0",
- "resolved": "https://registry.npmjs.org/is-redirect/-/is-redirect-1.0.0.tgz",
- "integrity": "sha1-HQPd7VO9jbDzDCbk+V02/HyH3CQ="
- },
"is-regex": {
"version": "1.0.4",
"resolved": "https://registry.npmjs.org/is-regex/-/is-regex-1.0.4.tgz",
@@ -9641,11 +16807,24 @@
"resolved": "https://registry.npmjs.org/is-root/-/is-root-1.0.0.tgz",
"integrity": "sha1-B7bCM7w5TNnQK6FclmvWZg1jQtU="
},
+ "is-ssh": {
+ "version": "1.3.1",
+ "resolved": "https://registry.npmjs.org/is-ssh/-/is-ssh-1.3.1.tgz",
+ "integrity": "sha512-0eRIASHZt1E68/ixClI8bp2YK2wmBPVWEismTs6M+M099jKgrzl/3E976zIbImSIob48N2/XGe9y7ZiYdImSlg==",
+ "requires": {
+ "protocols": "^1.1.0"
+ }
+ },
"is-stream": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/is-stream/-/is-stream-1.1.0.tgz",
"integrity": "sha1-EtSj3U5o4Lec6428hBc66A2RykQ="
},
+ "is-string": {
+ "version": "1.0.5",
+ "resolved": "https://registry.npmjs.org/is-string/-/is-string-1.0.5.tgz",
+ "integrity": "sha512-buY6VNRjhQMiF1qWDouloZlQbRhDPCebwxSjxMjxgemYT46YMd2NR0/H+fBhEfWX4A/w9TBJ+ol+okqJKFE6vQ=="
+ },
"is-svg": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/is-svg/-/is-svg-3.0.0.tgz",
@@ -9683,11 +16862,29 @@
"upper-case": "^1.1.0"
}
},
+ "is-url": {
+ "version": "1.2.4",
+ "resolved": "https://registry.npmjs.org/is-url/-/is-url-1.2.4.tgz",
+ "integrity": "sha512-ITvGim8FhRiYe4IQ5uHSkj7pVaPDrCTkNd3yq3cV7iZAcJdHTUMPMEHcqSOy9xZ9qFenQCvi+2wjH9a1nXqHww=="
+ },
"is-utf8": {
"version": "0.2.1",
"resolved": "https://registry.npmjs.org/is-utf8/-/is-utf8-0.2.1.tgz",
"integrity": "sha1-Sw2hRCEE0bM2NA6AeX6GXPOffXI="
},
+ "is-valid-path": {
+ "version": "0.1.1",
+ "resolved": "https://registry.npmjs.org/is-valid-path/-/is-valid-path-0.1.1.tgz",
+ "integrity": "sha1-EQ+f90w39mPh7HkV60UfLbk6yd8=",
+ "requires": {
+ "is-invalid-path": "^0.1.0"
+ }
+ },
+ "is-whitespace": {
+ "version": "0.3.0",
+ "resolved": "https://registry.npmjs.org/is-whitespace/-/is-whitespace-0.3.0.tgz",
+ "integrity": "sha1-Fjnssb4DauxppUy7QBz77XEUq38="
+ },
"is-whitespace-character": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/is-whitespace-character/-/is-whitespace-character-1.0.2.tgz",
@@ -9708,6 +16905,11 @@
"resolved": "https://registry.npmjs.org/is-wsl/-/is-wsl-1.1.0.tgz",
"integrity": "sha1-HxbkqiKwTRM2tmGIpmrzxgDDpm0="
},
+ "is-yarn-global": {
+ "version": "0.3.0",
+ "resolved": "https://registry.npmjs.org/is-yarn-global/-/is-yarn-global-0.3.0.tgz",
+ "integrity": "sha512-VjSeb/lHmkoyd8ryPVIKvOCn4D1koMqY+vqyjjUfc3xyKtP4dYOxM44sZrnqQSzSds3xyOrUTLTC9LVCVgLngw=="
+ },
"isarray": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/isarray/-/isarray-1.0.0.tgz",
@@ -9755,9 +16957,71 @@
}
},
"iterall": {
- "version": "1.2.2",
- "resolved": "https://registry.npmjs.org/iterall/-/iterall-1.2.2.tgz",
- "integrity": "sha512-yynBb1g+RFUPY64fTrFv7nsjRrENBQJaX2UL+2Szc9REFrSNm1rpSXHGzhmAy7a9uv3vlvgBlXnf9RqmPH1/DA=="
+ "version": "1.3.0",
+ "resolved": "https://registry.npmjs.org/iterall/-/iterall-1.3.0.tgz",
+ "integrity": "sha512-QZ9qOMdF+QLHxy1QIpUHUU1D5pS2CG2P69LF6L6CPjPYA/XMOmKV3PZpawHoAjHNyB0swdVTRxdYT4tbBbxqwg=="
+ },
+ "jest-diff": {
+ "version": "25.5.0",
+ "resolved": "https://registry.npmjs.org/jest-diff/-/jest-diff-25.5.0.tgz",
+ "integrity": "sha512-z1kygetuPiREYdNIumRpAHY6RXiGmp70YHptjdaxTWGmA085W3iCnXNx0DhflK3vwrKmrRWyY1wUpkPMVxMK7A==",
+ "requires": {
+ "chalk": "^3.0.0",
+ "diff-sequences": "^25.2.6",
+ "jest-get-type": "^25.2.6",
+ "pretty-format": "^25.5.0"
+ },
+ "dependencies": {
+ "ansi-styles": {
+ "version": "4.2.1",
+ "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.2.1.tgz",
+ "integrity": "sha512-9VGjrMsG1vePxcSweQsN20KY/c4zN0h9fLjqAbwbPfahM3t+NL+M9HC8xeXG2I8pX5NoamTGNuomEUFI7fcUjA==",
+ "requires": {
+ "@types/color-name": "^1.1.1",
+ "color-convert": "^2.0.1"
+ }
+ },
+ "chalk": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/chalk/-/chalk-3.0.0.tgz",
+ "integrity": "sha512-4D3B6Wf41KOYRFdszmDqMCGq5VV/uMAB273JILmO+3jAlh8X4qDtdtgCR3fxtbLEMzSx22QdhnDcJvu2u1fVwg==",
+ "requires": {
+ "ansi-styles": "^4.1.0",
+ "supports-color": "^7.1.0"
+ }
+ },
+ "color-convert": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz",
+ "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==",
+ "requires": {
+ "color-name": "~1.1.4"
+ }
+ },
+ "color-name": {
+ "version": "1.1.4",
+ "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz",
+ "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA=="
+ },
+ "has-flag": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz",
+ "integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ=="
+ },
+ "supports-color": {
+ "version": "7.1.0",
+ "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.1.0.tgz",
+ "integrity": "sha512-oRSIpR8pxT1Wr2FquTNnGet79b3BWljqOuoW/h4oBhxJ/HUbX5nX6JSruTkvXDCFMwDPvsaTTbvMLKZWSy0R5g==",
+ "requires": {
+ "has-flag": "^4.0.0"
+ }
+ }
+ }
+ },
+ "jest-get-type": {
+ "version": "25.2.6",
+ "resolved": "https://registry.npmjs.org/jest-get-type/-/jest-get-type-25.2.6.tgz",
+ "integrity": "sha512-DxjtyzOHjObRM+sM1knti6or+eOgcGU4xVSb2HNP1TqO4ahsT+rqZg+nyqHWJSvWgKC5cG3QjGFBqxLghiF/Ig=="
},
"jest-worker": {
"version": "23.2.0",
@@ -9765,6 +17029,16 @@
"integrity": "sha1-+vcGqNo2+uYOsmlXJX+ntdjqArk=",
"requires": {
"merge-stream": "^1.0.1"
+ },
+ "dependencies": {
+ "merge-stream": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/merge-stream/-/merge-stream-1.0.1.tgz",
+ "integrity": "sha1-QEEgLVCKNCugAXQAjfDCUbjBNeE=",
+ "requires": {
+ "readable-stream": "^2.0.1"
+ }
+ }
}
},
"jimp": {
@@ -9802,16 +17076,6 @@
}
}
},
- "joi": {
- "version": "12.0.0",
- "resolved": "https://registry.npmjs.org/joi/-/joi-12.0.0.tgz",
- "integrity": "sha512-z0FNlV4NGgjQN1fdtHYXf5kmgludM65fG/JlXzU6+rwkt9U5UWuXVYnXa2FpK0u6+qBuCmrm5byPNuiiddAHvQ==",
- "requires": {
- "hoek": "4.x.x",
- "isemail": "3.x.x",
- "topo": "2.x.x"
- }
- },
"jpeg-js": {
"version": "0.2.0",
"resolved": "https://registry.npmjs.org/jpeg-js/-/jpeg-js-0.2.0.tgz",
@@ -9910,9 +17174,9 @@
"integrity": "sha1-Epai1Y/UXxmg9s4B1lcB4sc1tus="
},
"json3": {
- "version": "3.3.2",
- "resolved": "https://registry.npmjs.org/json3/-/json3-3.3.2.tgz",
- "integrity": "sha1-PAQ0dD35Pi9cQq7nsZvLSDV19OE="
+ "version": "3.3.3",
+ "resolved": "https://registry.npmjs.org/json3/-/json3-3.3.3.tgz",
+ "integrity": "sha512-c7/8mbUsKigAbLkD5B010BK4D9LZm7A1pNItkEwiUZRpIN66exu/e7YQWysGun+TRKaJp8MhemM+VkfWv42aCA=="
},
"json5": {
"version": "2.1.0",
@@ -9947,19 +17211,12 @@
}
},
"jsx-ast-utils": {
- "version": "2.0.1",
- "resolved": "https://registry.npmjs.org/jsx-ast-utils/-/jsx-ast-utils-2.0.1.tgz",
- "integrity": "sha1-6AGxs5mF4g//yHtA43SAgOLcrH8=",
+ "version": "2.4.1",
+ "resolved": "https://registry.npmjs.org/jsx-ast-utils/-/jsx-ast-utils-2.4.1.tgz",
+ "integrity": "sha512-z1xSldJ6imESSzOjd3NNkieVJKRlKYSOtMG8SFyCj2FIrvSaSuli/WjpBkEzCBoR9bYYYFgqJw61Xhu7Lcgk+w==",
"requires": {
- "array-includes": "^3.0.3"
- }
- },
- "kebab-hash": {
- "version": "0.1.2",
- "resolved": "https://registry.npmjs.org/kebab-hash/-/kebab-hash-0.1.2.tgz",
- "integrity": "sha512-BTZpq3xgISmQmAVzkISy4eUutsUA7s4IEFlCwOBJjvSFOwyR7I+fza+tBc/rzYWK/NrmFHjfU1IhO3lu29Ib/w==",
- "requires": {
- "lodash.kebabcase": "^4.1.1"
+ "array-includes": "^3.1.1",
+ "object.assign": "^4.1.0"
}
},
"keyv": {
@@ -9980,6 +17237,11 @@
"resolved": "https://registry.npmjs.org/kind-of/-/kind-of-6.0.2.tgz",
"integrity": "sha512-s5kLOcnH0XqDO+FvuaLX8DDjZ18CGFk7VygH40QoKPUQhW4e2rvM0rwUq0t8IQDOwYSeLK01U90OjzBTme2QqA=="
},
+ "kleur": {
+ "version": "3.0.3",
+ "resolved": "https://registry.npmjs.org/kleur/-/kleur-3.0.3.tgz",
+ "integrity": "sha512-eTIzlVOSUR+JxdDFepEYcBMtZ9Qqdef+rnzWdRZuMbOywu5tO2w2N7rqjoANZ5k9vywhL6Br1VRjUIgTQx4E8w=="
+ },
"last-call-webpack-plugin": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/last-call-webpack-plugin/-/last-call-webpack-plugin-3.0.0.tgz",
@@ -9990,11 +17252,11 @@
}
},
"latest-version": {
- "version": "3.1.0",
- "resolved": "https://registry.npmjs.org/latest-version/-/latest-version-3.1.0.tgz",
- "integrity": "sha1-ogU4P+oyKzO1rjsYq+4NwvNW7hU=",
+ "version": "5.1.0",
+ "resolved": "https://registry.npmjs.org/latest-version/-/latest-version-5.1.0.tgz",
+ "integrity": "sha512-weT+r0kTkRQdCdYCNtkMwWXQTMEswKrFBkm4ckQOMVhhqhIMI1UT2hMj+1iigIhgSZm5gTmrRXBNoGUgaTY1xA==",
"requires": {
- "package-json": "^4.0.0"
+ "package-json": "^6.3.0"
}
},
"lazy-cache": {
@@ -10011,9 +17273,17 @@
}
},
"leven": {
- "version": "2.1.0",
- "resolved": "https://registry.npmjs.org/leven/-/leven-2.1.0.tgz",
- "integrity": "sha1-wuep93IJTe6dNCAq6KzORoeHVYA="
+ "version": "3.1.0",
+ "resolved": "https://registry.npmjs.org/leven/-/leven-3.1.0.tgz",
+ "integrity": "sha512-qsda+H8jTaUaN/x5vzW2rzc+8Rw4TAQ/4KjB46IwK5VH+IlVeeeje/EoZRpiXvIqjFgK84QffqPztGI3VBLG1A=="
+ },
+ "levenary": {
+ "version": "1.1.1",
+ "resolved": "https://registry.npmjs.org/levenary/-/levenary-1.1.1.tgz",
+ "integrity": "sha512-mkAdOIt79FD6irqjYSs4rdbnlT5vRonMEvBVPVb3XmevfS8kgRXwfes0dhPdEtzTWD/1eNE/Bm/G1iRt6DcnQQ==",
+ "requires": {
+ "leven": "^3.1.0"
+ }
},
"levn": {
"version": "0.3.0",
@@ -10024,6 +17294,11 @@
"type-check": "~0.3.2"
}
},
+ "lines-and-columns": {
+ "version": "1.1.6",
+ "resolved": "https://registry.npmjs.org/lines-and-columns/-/lines-and-columns-1.1.6.tgz",
+ "integrity": "sha1-HADHQ7QzzQpOgHWPe2SldEDZ/wA="
+ },
"load-bmfont": {
"version": "1.4.0",
"resolved": "https://registry.npmjs.org/load-bmfont/-/load-bmfont-1.4.0.tgz",
@@ -10048,7 +17323,7 @@
},
"load-json-file": {
"version": "2.0.0",
- "resolved": "http://registry.npmjs.org/load-json-file/-/load-json-file-2.0.0.tgz",
+ "resolved": "https://registry.npmjs.org/load-json-file/-/load-json-file-2.0.0.tgz",
"integrity": "sha1-eUfkIUmvgNaWy/eXvKq8/h/inKg=",
"requires": {
"graceful-fs": "^4.1.2",
@@ -10067,18 +17342,18 @@
},
"pify": {
"version": "2.3.0",
- "resolved": "http://registry.npmjs.org/pify/-/pify-2.3.0.tgz",
+ "resolved": "https://registry.npmjs.org/pify/-/pify-2.3.0.tgz",
"integrity": "sha1-7RQaasBDqEnqWISY59yosVMw6Qw="
}
}
},
"loader-fs-cache": {
- "version": "1.0.1",
- "resolved": "https://registry.npmjs.org/loader-fs-cache/-/loader-fs-cache-1.0.1.tgz",
- "integrity": "sha1-VuC/CL2XCLJqdltoUJhAyN7J/bw=",
+ "version": "1.0.3",
+ "resolved": "https://registry.npmjs.org/loader-fs-cache/-/loader-fs-cache-1.0.3.tgz",
+ "integrity": "sha512-ldcgZpjNJj71n+2Mf6yetz+c9bM4xpKtNds4LbqXzU/PTdeAX0g3ytnU1AJMEcTk2Lex4Smpe3Q/eCTsvUBxbA==",
"requires": {
"find-cache-dir": "^0.1.1",
- "mkdirp": "0.5.1"
+ "mkdirp": "^0.5.1"
},
"dependencies": {
"find-cache-dir": {
@@ -10198,7 +17473,8 @@
"lodash.deburr": {
"version": "4.1.0",
"resolved": "https://registry.npmjs.org/lodash.deburr/-/lodash.deburr-4.1.0.tgz",
- "integrity": "sha1-3bG7s+8HRYwBd7oH3hRCLLAz/5s="
+ "integrity": "sha1-3bG7s+8HRYwBd7oH3hRCLLAz/5s=",
+ "dev": true
},
"lodash.defaults": {
"version": "4.2.0",
@@ -10235,6 +17511,11 @@
"resolved": "https://registry.npmjs.org/lodash.foreach/-/lodash.foreach-4.5.0.tgz",
"integrity": "sha1-Gmo16s5AEoDH8G3d7DUWWrJ+PlM="
},
+ "lodash.isequal": {
+ "version": "4.5.0",
+ "resolved": "https://registry.npmjs.org/lodash.isequal/-/lodash.isequal-4.5.0.tgz",
+ "integrity": "sha1-QVxEePK8wwEgwizhDtMib30+GOA="
+ },
"lodash.isplainobject": {
"version": "4.0.6",
"resolved": "https://registry.npmjs.org/lodash.isplainobject/-/lodash.isplainobject-4.0.6.tgz",
@@ -10245,11 +17526,6 @@
"resolved": "https://registry.npmjs.org/lodash.isstring/-/lodash.isstring-4.0.1.tgz",
"integrity": "sha1-1SfftUVuynzJu5XV2ur4i6VKVFE="
},
- "lodash.kebabcase": {
- "version": "4.1.1",
- "resolved": "https://registry.npmjs.org/lodash.kebabcase/-/lodash.kebabcase-4.1.1.tgz",
- "integrity": "sha1-hImxyw0p/4gZXM7KRI/21swpXDY="
- },
"lodash.map": {
"version": "4.6.0",
"resolved": "https://registry.npmjs.org/lodash.map/-/lodash.map-4.6.0.tgz",
@@ -10317,6 +17593,11 @@
"lodash._reinterpolate": "~3.0.0"
}
},
+ "lodash.throttle": {
+ "version": "4.1.1",
+ "resolved": "https://registry.npmjs.org/lodash.throttle/-/lodash.throttle-4.1.1.tgz",
+ "integrity": "sha1-wj6RtxAkKscMN/HhzaknTMOb8vQ="
+ },
"lodash.toarray": {
"version": "4.4.0",
"resolved": "https://registry.npmjs.org/lodash.toarray/-/lodash.toarray-4.4.0.tgz",
@@ -10327,6 +17608,51 @@
"resolved": "https://registry.npmjs.org/lodash.uniq/-/lodash.uniq-4.5.0.tgz",
"integrity": "sha1-0CJTc662Uq3BvILklFM5qEJ1R3M="
},
+ "log-update": {
+ "version": "3.4.0",
+ "resolved": "https://registry.npmjs.org/log-update/-/log-update-3.4.0.tgz",
+ "integrity": "sha512-ILKe88NeMt4gmDvk/eb615U/IVn7K9KWGkoYbdatQ69Z65nj1ZzjM6fHXfcs0Uge+e+EGnMW7DY4T9yko8vWFg==",
+ "requires": {
+ "ansi-escapes": "^3.2.0",
+ "cli-cursor": "^2.1.0",
+ "wrap-ansi": "^5.0.0"
+ },
+ "dependencies": {
+ "ansi-regex": {
+ "version": "4.1.0",
+ "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz",
+ "integrity": "sha512-1apePfXM1UOSqw0o9IiFAovVz9M5S1Dg+4TrDwfMewQ6p/rmMueb7tWZjQ1rx4Loy1ArBggoqGpfqqdI4rondg=="
+ },
+ "string-width": {
+ "version": "3.1.0",
+ "resolved": "https://registry.npmjs.org/string-width/-/string-width-3.1.0.tgz",
+ "integrity": "sha512-vafcv6KjVZKSgz06oM/H6GDBrAtz8vdhQakGjFIvNrHA6y3HCF1CInLy+QLq8dTJPQ1b+KDUqDFctkdRW44e1w==",
+ "requires": {
+ "emoji-regex": "^7.0.1",
+ "is-fullwidth-code-point": "^2.0.0",
+ "strip-ansi": "^5.1.0"
+ }
+ },
+ "strip-ansi": {
+ "version": "5.2.0",
+ "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-5.2.0.tgz",
+ "integrity": "sha512-DuRs1gKbBqsMKIZlrffwlug8MHkcnpjs5VPmL1PAh+mA30U0DTotfDZ0d2UUsXpPmPmMMJ6W773MaA3J+lbiWA==",
+ "requires": {
+ "ansi-regex": "^4.1.0"
+ }
+ },
+ "wrap-ansi": {
+ "version": "5.1.0",
+ "resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-5.1.0.tgz",
+ "integrity": "sha512-QC1/iN/2/RPVJ5jYK8BGttj5z83LmSKmvbvrXPNCLZSEb32KKVDJDl/MOt2N01qU2H/FkzEa9PKto1BqDjtd7Q==",
+ "requires": {
+ "ansi-styles": "^3.2.0",
+ "string-width": "^3.0.0",
+ "strip-ansi": "^5.0.0"
+ }
+ }
+ }
+ },
"logalot": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/logalot/-/logalot-2.1.0.tgz",
@@ -10348,14 +17674,9 @@
}
},
"loglevel": {
- "version": "1.6.1",
- "resolved": "https://registry.npmjs.org/loglevel/-/loglevel-1.6.1.tgz",
- "integrity": "sha1-4PyVEztu8nbNyIh82vJKpvFW+Po="
- },
- "lokijs": {
- "version": "1.5.6",
- "resolved": "https://registry.npmjs.org/lokijs/-/lokijs-1.5.6.tgz",
- "integrity": "sha512-xJoDXy8TASTjmXMKr4F8vvNUCu4dqlwY5gmn0g5BajGt1GM3goDCafNiGAh/sfrWgkfWu1J4OfsxWm8yrWweJA=="
+ "version": "1.6.8",
+ "resolved": "https://registry.npmjs.org/loglevel/-/loglevel-1.6.8.tgz",
+ "integrity": "sha512-bsU7+gc9AJ2SqpzxwU3+1fedl8zAntbtC5XYlt3s2j1hJcn2PsXSmgN8TaLG/J1/2mod4+cE/3vNL70/c1RNCA=="
},
"longest": {
"version": "1.0.1",
@@ -10415,7 +17736,7 @@
},
"lru-cache": {
"version": "4.0.0",
- "resolved": "http://registry.npmjs.org/lru-cache/-/lru-cache-4.0.0.tgz",
+ "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-4.0.0.tgz",
"integrity": "sha1-tcvwFVbBaWb+vlTO7A+03JDfbCg=",
"requires": {
"pseudomap": "^1.0.1",
@@ -10427,6 +17748,14 @@
"resolved": "https://registry.npmjs.org/ltcdr/-/ltcdr-2.2.1.tgz",
"integrity": "sha1-Wrh60dTB2rjowIu/A37gwZAih88="
},
+ "magic-string": {
+ "version": "0.25.7",
+ "resolved": "https://registry.npmjs.org/magic-string/-/magic-string-0.25.7.tgz",
+ "integrity": "sha512-4CrMT5DOHTDk4HYDlzmwu4FVCcIYI8gauveasrdCu2IKIFOJ3f0v/8MDGJCDL9oD2ppz/Av1b0Nj345H9M+XIA==",
+ "requires": {
+ "sourcemap-codec": "^1.4.4"
+ }
+ },
"make-dir": {
"version": "1.3.0",
"resolved": "https://registry.npmjs.org/make-dir/-/make-dir-1.3.0.tgz",
@@ -10435,14 +17764,6 @@
"pify": "^3.0.0"
}
},
- "map-age-cleaner": {
- "version": "0.1.3",
- "resolved": "https://registry.npmjs.org/map-age-cleaner/-/map-age-cleaner-0.1.3.tgz",
- "integrity": "sha512-bJzx6nMoP6PDLPBFmg7+xRKeFZvFboMrGlxmNj9ClvX53KrmvM5bXFXEWjbz4cz1AFn+jWJ9z/DJSz7hrs0w3w==",
- "requires": {
- "p-defer": "^1.0.0"
- }
- },
"map-cache": {
"version": "0.2.2",
"resolved": "https://registry.npmjs.org/map-cache/-/map-cache-0.2.2.tgz",
@@ -10588,9 +17909,14 @@
"resolved": "https://registry.npmjs.org/mdurl/-/mdurl-1.0.1.tgz",
"integrity": "sha1-/oWy7HWlkDfyrf7BAP1sYBdhFS4="
},
+ "meant": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/meant/-/meant-1.0.1.tgz",
+ "integrity": "sha512-UakVLFjKkbbUwNWJ2frVLnnAtbb7D7DsloxRd3s/gDpI8rdv8W5Hp3NaDb+POBI1fQdeussER6NB8vpcRURvlg=="
+ },
"media-typer": {
"version": "0.3.0",
- "resolved": "http://registry.npmjs.org/media-typer/-/media-typer-0.3.0.tgz",
+ "resolved": "https://registry.npmjs.org/media-typer/-/media-typer-0.3.0.tgz",
"integrity": "sha1-hxDXrwqmJvj/+hzgAWhUUmMlV0g="
},
"mem": {
@@ -10772,12 +18098,9 @@
"integrity": "sha1-sAqqVW3YtEVoFQ7J0blT8/kMu2E="
},
"merge-stream": {
- "version": "1.0.1",
- "resolved": "https://registry.npmjs.org/merge-stream/-/merge-stream-1.0.1.tgz",
- "integrity": "sha1-QEEgLVCKNCugAXQAjfDCUbjBNeE=",
- "requires": {
- "readable-stream": "^2.0.1"
- }
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/merge-stream/-/merge-stream-2.0.0.tgz",
+ "integrity": "sha512-abv/qOcuPfk3URPfDzmZU1LKmuw8kT+0nIHvKrKgFrwifol/doWcdA4ZqsWQ8ENrFKkd67Mfpo/LovbIUsbt3w=="
},
"merge2": {
"version": "1.2.3",
@@ -10816,6 +18139,13 @@
"requires": {
"bn.js": "^4.0.0",
"brorand": "^1.0.1"
+ },
+ "dependencies": {
+ "bn.js": {
+ "version": "4.11.9",
+ "resolved": "https://registry.npmjs.org/bn.js/-/bn.js-4.11.9.tgz",
+ "integrity": "sha512-E6QoYqCKZfgatHTdHzs1RRKP7ip4vvm+EyRUeE2RF0NblwVvb0p6jSVeNTOFxPn26QXN2o6SMfNxKp6kU8zQaw=="
+ }
}
},
"mime": {
@@ -10946,9 +18276,9 @@
}
},
"mitt": {
- "version": "1.1.3",
- "resolved": "https://registry.npmjs.org/mitt/-/mitt-1.1.3.tgz",
- "integrity": "sha512-mUDCnVNsAi+eD6qA0HkRkwYczbLHJ49z17BGe2PYRhZL4wpZUFZGJHU7/5tmvohoma+Hdn0Vh/oJTiPEmgSruA=="
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/mitt/-/mitt-1.2.0.tgz",
+ "integrity": "sha512-r6lj77KlwqLhIUku9UWYes7KJtsczvolZkzp8hbaDPPaE24OmWl5s539Mytlj22siEQKosZ26qCBgda2PKwoJw=="
},
"mixin-deep": {
"version": "1.3.1",
@@ -11001,9 +18331,9 @@
}
},
"moment": {
- "version": "2.24.0",
- "resolved": "https://registry.npmjs.org/moment/-/moment-2.24.0.tgz",
- "integrity": "sha512-bV7f+6l2QigeBBZSM/6yTNq4P2fNpSWj/0e7jQcy87A8e7o2nAfP/34/2ky5Vw4B9S446EtIhodAzkFCcR4dQg=="
+ "version": "2.26.0",
+ "resolved": "https://registry.npmjs.org/moment/-/moment-2.26.0.tgz",
+ "integrity": "sha512-oIixUO+OamkUkwjhAVE18rAMfRJNsNe/Stid/gwHSOfHrOtw9EhAY2AHvdKZ/k/MggcYELFCJz/Sn2pL8b8JMw=="
},
"move-concurrently": {
"version": "1.0.1",
@@ -11091,9 +18421,9 @@
"integrity": "sha1-Sr6/7tdUHywnrPspvbvRXI1bpPc="
},
"negotiator": {
- "version": "0.6.1",
- "resolved": "https://registry.npmjs.org/negotiator/-/negotiator-0.6.1.tgz",
- "integrity": "sha1-KzJxhOiZIQEXeyhWP7XnECrNDKk="
+ "version": "0.6.2",
+ "resolved": "https://registry.npmjs.org/negotiator/-/negotiator-0.6.2.tgz",
+ "integrity": "sha512-hZXc7K2e+PgeI1eDBe/10Ard4ekbfrrqG8Ep+8Jmf4JID2bNg7NvCPOZN+kfF574pFQI7mum2AUqDidoKqcTOw=="
},
"neo-async": {
"version": "2.6.0",
@@ -11154,9 +18484,9 @@
}
},
"node-forge": {
- "version": "0.7.5",
- "resolved": "https://registry.npmjs.org/node-forge/-/node-forge-0.7.5.tgz",
- "integrity": "sha512-MmbQJ2MTESTjt3Gi/3yG1wGpIMhUfcIypUCGtTizFR9IiccFwxSpfp0vtIZlkFclEqERemxfnSdZEMR9VqqEFQ=="
+ "version": "0.9.0",
+ "resolved": "https://registry.npmjs.org/node-forge/-/node-forge-0.9.0.tgz",
+ "integrity": "sha512-7ASaDa3pD+lJ3WvXFsxekJQelBKRpne+GOVbLbtHYdd7pFspyeuJHnWfLplGf3SwKGbfs/aYl5V/JCIaHVUKKQ=="
},
"node-gyp": {
"version": "3.8.0",
@@ -11200,9 +18530,9 @@
"integrity": "sha1-h6kGXNs1XTGC2PlM4RGIuCXGijs="
},
"node-libs-browser": {
- "version": "2.2.0",
- "resolved": "https://registry.npmjs.org/node-libs-browser/-/node-libs-browser-2.2.0.tgz",
- "integrity": "sha512-5MQunG/oyOaBdttrL40dA7bUfPORLRWMUJLQtMg7nluxUvk5XwnLdL9twQHFAjRx/y7mIMkLKT9++qPbbk6BZA==",
+ "version": "2.2.1",
+ "resolved": "https://registry.npmjs.org/node-libs-browser/-/node-libs-browser-2.2.1.tgz",
+ "integrity": "sha512-h/zcD8H9kaDZ9ALUWwlBUDo6TKF8a7qBSCSEGfjTVIYeqsioSKaAX+BN7NgiMGp6iSIXZ3PxgCu8KS3b71YK5Q==",
"requires": {
"assert": "^1.1.1",
"browserify-zlib": "^0.2.0",
@@ -11214,7 +18544,7 @@
"events": "^3.0.0",
"https-browserify": "^1.0.0",
"os-browserify": "^0.3.0",
- "path-browserify": "0.0.0",
+ "path-browserify": "0.0.1",
"process": "^0.11.10",
"punycode": "^1.2.4",
"querystring-es3": "^0.2.0",
@@ -11226,7 +18556,7 @@
"tty-browserify": "0.0.0",
"url": "^0.11.0",
"util": "^0.11.0",
- "vm-browserify": "0.0.4"
+ "vm-browserify": "^1.0.1"
},
"dependencies": {
"process": {
@@ -11241,6 +18571,11 @@
}
}
},
+ "node-object-hash": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/node-object-hash/-/node-object-hash-2.0.0.tgz",
+ "integrity": "sha512-VZR0zroAusy1ETZMZiGeLkdu50LGjG5U1KHZqTruqtTyQ2wfWhHG2Ow4nsUbfTFGlaREgNHcCWoM/OzEm6p+NQ=="
+ },
"node-releases": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/node-releases/-/node-releases-1.1.2.tgz",
@@ -11333,7 +18668,7 @@
},
"readable-stream": {
"version": "1.0.34",
- "resolved": "http://registry.npmjs.org/readable-stream/-/readable-stream-1.0.34.tgz",
+ "resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-1.0.34.tgz",
"integrity": "sha1-Elgg40vIQtLyqq+v5MKRbuMsFXw=",
"requires": {
"core-util-is": "~1.0.0",
@@ -11344,7 +18679,7 @@
},
"string_decoder": {
"version": "0.10.31",
- "resolved": "http://registry.npmjs.org/string_decoder/-/string_decoder-0.10.31.tgz",
+ "resolved": "https://registry.npmjs.org/string_decoder/-/string_decoder-0.10.31.tgz",
"integrity": "sha1-YuIDvEF2bGwoyfyEMB2rHFMQ+pQ="
}
}
@@ -11501,6 +18836,73 @@
"resolved": "https://registry.npmjs.org/object-hash/-/object-hash-1.3.1.tgz",
"integrity": "sha512-OSuu/pU4ENM9kmREg0BdNrUDIl1heYa4mBZacJc+vVWz4GtAwu7jO8s4AIt2aGRUTqxykpWzI3Oqnsm13tTMDA=="
},
+ "object-inspect": {
+ "version": "1.7.0",
+ "resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.7.0.tgz",
+ "integrity": "sha512-a7pEHdh1xKIAgTySUGgLMx/xwDZskN1Ud6egYYN3EdRW4ZMPNEDUTF+hwy2LUC+Bl+SyLXANnwz/jyh/qutKUw=="
+ },
+ "object-is": {
+ "version": "1.1.2",
+ "resolved": "https://registry.npmjs.org/object-is/-/object-is-1.1.2.tgz",
+ "integrity": "sha512-5lHCz+0uufF6wZ7CRFWJN3hp8Jqblpgve06U5CMQ3f//6iDjPr2PEo9MWCjEssDsa+UZEL4PkFpr+BMop6aKzQ==",
+ "requires": {
+ "define-properties": "^1.1.3",
+ "es-abstract": "^1.17.5"
+ },
+ "dependencies": {
+ "es-abstract": {
+ "version": "1.17.6",
+ "resolved": "https://registry.npmjs.org/es-abstract/-/es-abstract-1.17.6.tgz",
+ "integrity": "sha512-Fr89bON3WFyUi5EvAeI48QTWX0AyekGgLA8H+c+7fbfCkJwRWRMLd8CQedNEyJuoYYhmtEqY92pgte1FAhBlhw==",
+ "requires": {
+ "es-to-primitive": "^1.2.1",
+ "function-bind": "^1.1.1",
+ "has": "^1.0.3",
+ "has-symbols": "^1.0.1",
+ "is-callable": "^1.2.0",
+ "is-regex": "^1.1.0",
+ "object-inspect": "^1.7.0",
+ "object-keys": "^1.1.1",
+ "object.assign": "^4.1.0",
+ "string.prototype.trimend": "^1.0.1",
+ "string.prototype.trimstart": "^1.0.1"
+ }
+ },
+ "es-to-primitive": {
+ "version": "1.2.1",
+ "resolved": "https://registry.npmjs.org/es-to-primitive/-/es-to-primitive-1.2.1.tgz",
+ "integrity": "sha512-QCOllgZJtaUo9miYBcLChTUaHNjJF3PYs1VidD7AwiEj1kYxKeQTctLAezAOH5ZKRH0g2IgPn6KwB4IT8iRpvA==",
+ "requires": {
+ "is-callable": "^1.1.4",
+ "is-date-object": "^1.0.1",
+ "is-symbol": "^1.0.2"
+ }
+ },
+ "has-symbols": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.0.1.tgz",
+ "integrity": "sha512-PLcsoqu++dmEIZB+6totNFKq/7Do+Z0u4oT0zKOJNl3lYK6vGwwu2hjHs+68OEZbTjiUE9bgOABXbP/GvrS0Kg=="
+ },
+ "is-callable": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/is-callable/-/is-callable-1.2.0.tgz",
+ "integrity": "sha512-pyVD9AaGLxtg6srb2Ng6ynWJqkHU9bEM087AKck0w8QwDarTfNcpIYoU8x8Hv2Icm8u6kFJM18Dag8lyqGkviw=="
+ },
+ "is-regex": {
+ "version": "1.1.0",
+ "resolved": "https://registry.npmjs.org/is-regex/-/is-regex-1.1.0.tgz",
+ "integrity": "sha512-iI97M8KTWID2la5uYXlkbSDQIg4F6o1sYboZKKTDpnDQMLtUL86zxhgDet3Q2SriaYsyGqZ6Mn2SjbRKeLHdqw==",
+ "requires": {
+ "has-symbols": "^1.0.1"
+ }
+ },
+ "object-keys": {
+ "version": "1.1.1",
+ "resolved": "https://registry.npmjs.org/object-keys/-/object-keys-1.1.1.tgz",
+ "integrity": "sha512-NuAESUOUMrlIXOfHKzD6bpPu3tYt3xvjNdRIQ+FeT0lNb4K8WR70CaDxhuNguS2XG+GjkyMwOzsN5ZktImfhLA=="
+ }
+ }
+ },
"object-keys": {
"version": "1.0.12",
"resolved": "https://registry.npmjs.org/object-keys/-/object-keys-1.0.12.tgz",
@@ -11519,15 +18921,142 @@
"isobject": "^3.0.0"
}
},
- "object.fromentries": {
- "version": "2.0.0",
- "resolved": "https://registry.npmjs.org/object.fromentries/-/object.fromentries-2.0.0.tgz",
- "integrity": "sha512-9iLiI6H083uiqUuvzyY6qrlmc/Gz8hLQFOcb/Ri/0xXFkSNS3ctV+CbE6yM2+AnkYfOB3dGjdzC0wrMLIhQICA==",
+ "object.assign": {
+ "version": "4.1.0",
+ "resolved": "https://registry.npmjs.org/object.assign/-/object.assign-4.1.0.tgz",
+ "integrity": "sha512-exHJeq6kBKj58mqGyTQ9DFvrZC/eR6OwxzoM9YRoGBqrXYonaFyGiFMuc9VZrXf7DarreEwMpurG3dd+CNyW5w==",
"requires": {
"define-properties": "^1.1.2",
- "es-abstract": "^1.11.0",
"function-bind": "^1.1.1",
- "has": "^1.0.1"
+ "has-symbols": "^1.0.0",
+ "object-keys": "^1.0.11"
+ }
+ },
+ "object.entries": {
+ "version": "1.1.2",
+ "resolved": "https://registry.npmjs.org/object.entries/-/object.entries-1.1.2.tgz",
+ "integrity": "sha512-BQdB9qKmb/HyNdMNWVr7O3+z5MUIx3aiegEIJqjMBbBf0YT9RRxTJSim4mzFqtyr7PDAHigq0N9dO0m0tRakQA==",
+ "requires": {
+ "define-properties": "^1.1.3",
+ "es-abstract": "^1.17.5",
+ "has": "^1.0.3"
+ },
+ "dependencies": {
+ "es-abstract": {
+ "version": "1.17.6",
+ "resolved": "https://registry.npmjs.org/es-abstract/-/es-abstract-1.17.6.tgz",
+ "integrity": "sha512-Fr89bON3WFyUi5EvAeI48QTWX0AyekGgLA8H+c+7fbfCkJwRWRMLd8CQedNEyJuoYYhmtEqY92pgte1FAhBlhw==",
+ "requires": {
+ "es-to-primitive": "^1.2.1",
+ "function-bind": "^1.1.1",
+ "has": "^1.0.3",
+ "has-symbols": "^1.0.1",
+ "is-callable": "^1.2.0",
+ "is-regex": "^1.1.0",
+ "object-inspect": "^1.7.0",
+ "object-keys": "^1.1.1",
+ "object.assign": "^4.1.0",
+ "string.prototype.trimend": "^1.0.1",
+ "string.prototype.trimstart": "^1.0.1"
+ }
+ },
+ "es-to-primitive": {
+ "version": "1.2.1",
+ "resolved": "https://registry.npmjs.org/es-to-primitive/-/es-to-primitive-1.2.1.tgz",
+ "integrity": "sha512-QCOllgZJtaUo9miYBcLChTUaHNjJF3PYs1VidD7AwiEj1kYxKeQTctLAezAOH5ZKRH0g2IgPn6KwB4IT8iRpvA==",
+ "requires": {
+ "is-callable": "^1.1.4",
+ "is-date-object": "^1.0.1",
+ "is-symbol": "^1.0.2"
+ }
+ },
+ "has-symbols": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.0.1.tgz",
+ "integrity": "sha512-PLcsoqu++dmEIZB+6totNFKq/7Do+Z0u4oT0zKOJNl3lYK6vGwwu2hjHs+68OEZbTjiUE9bgOABXbP/GvrS0Kg=="
+ },
+ "is-callable": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/is-callable/-/is-callable-1.2.0.tgz",
+ "integrity": "sha512-pyVD9AaGLxtg6srb2Ng6ynWJqkHU9bEM087AKck0w8QwDarTfNcpIYoU8x8Hv2Icm8u6kFJM18Dag8lyqGkviw=="
+ },
+ "is-regex": {
+ "version": "1.1.0",
+ "resolved": "https://registry.npmjs.org/is-regex/-/is-regex-1.1.0.tgz",
+ "integrity": "sha512-iI97M8KTWID2la5uYXlkbSDQIg4F6o1sYboZKKTDpnDQMLtUL86zxhgDet3Q2SriaYsyGqZ6Mn2SjbRKeLHdqw==",
+ "requires": {
+ "has-symbols": "^1.0.1"
+ }
+ },
+ "object-keys": {
+ "version": "1.1.1",
+ "resolved": "https://registry.npmjs.org/object-keys/-/object-keys-1.1.1.tgz",
+ "integrity": "sha512-NuAESUOUMrlIXOfHKzD6bpPu3tYt3xvjNdRIQ+FeT0lNb4K8WR70CaDxhuNguS2XG+GjkyMwOzsN5ZktImfhLA=="
+ }
+ }
+ },
+ "object.fromentries": {
+ "version": "2.0.2",
+ "resolved": "https://registry.npmjs.org/object.fromentries/-/object.fromentries-2.0.2.tgz",
+ "integrity": "sha512-r3ZiBH7MQppDJVLx6fhD618GKNG40CZYH9wgwdhKxBDDbQgjeWGGd4AtkZad84d291YxvWe7bJGuE65Anh0dxQ==",
+ "requires": {
+ "define-properties": "^1.1.3",
+ "es-abstract": "^1.17.0-next.1",
+ "function-bind": "^1.1.1",
+ "has": "^1.0.3"
+ },
+ "dependencies": {
+ "es-abstract": {
+ "version": "1.17.6",
+ "resolved": "https://registry.npmjs.org/es-abstract/-/es-abstract-1.17.6.tgz",
+ "integrity": "sha512-Fr89bON3WFyUi5EvAeI48QTWX0AyekGgLA8H+c+7fbfCkJwRWRMLd8CQedNEyJuoYYhmtEqY92pgte1FAhBlhw==",
+ "requires": {
+ "es-to-primitive": "^1.2.1",
+ "function-bind": "^1.1.1",
+ "has": "^1.0.3",
+ "has-symbols": "^1.0.1",
+ "is-callable": "^1.2.0",
+ "is-regex": "^1.1.0",
+ "object-inspect": "^1.7.0",
+ "object-keys": "^1.1.1",
+ "object.assign": "^4.1.0",
+ "string.prototype.trimend": "^1.0.1",
+ "string.prototype.trimstart": "^1.0.1"
+ }
+ },
+ "es-to-primitive": {
+ "version": "1.2.1",
+ "resolved": "https://registry.npmjs.org/es-to-primitive/-/es-to-primitive-1.2.1.tgz",
+ "integrity": "sha512-QCOllgZJtaUo9miYBcLChTUaHNjJF3PYs1VidD7AwiEj1kYxKeQTctLAezAOH5ZKRH0g2IgPn6KwB4IT8iRpvA==",
+ "requires": {
+ "is-callable": "^1.1.4",
+ "is-date-object": "^1.0.1",
+ "is-symbol": "^1.0.2"
+ }
+ },
+ "has-symbols": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.0.1.tgz",
+ "integrity": "sha512-PLcsoqu++dmEIZB+6totNFKq/7Do+Z0u4oT0zKOJNl3lYK6vGwwu2hjHs+68OEZbTjiUE9bgOABXbP/GvrS0Kg=="
+ },
+ "is-callable": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/is-callable/-/is-callable-1.2.0.tgz",
+ "integrity": "sha512-pyVD9AaGLxtg6srb2Ng6ynWJqkHU9bEM087AKck0w8QwDarTfNcpIYoU8x8Hv2Icm8u6kFJM18Dag8lyqGkviw=="
+ },
+ "is-regex": {
+ "version": "1.1.0",
+ "resolved": "https://registry.npmjs.org/is-regex/-/is-regex-1.1.0.tgz",
+ "integrity": "sha512-iI97M8KTWID2la5uYXlkbSDQIg4F6o1sYboZKKTDpnDQMLtUL86zxhgDet3Q2SriaYsyGqZ6Mn2SjbRKeLHdqw==",
+ "requires": {
+ "has-symbols": "^1.0.1"
+ }
+ },
+ "object-keys": {
+ "version": "1.1.1",
+ "resolved": "https://registry.npmjs.org/object-keys/-/object-keys-1.1.1.tgz",
+ "integrity": "sha512-NuAESUOUMrlIXOfHKzD6bpPu3tYt3xvjNdRIQ+FeT0lNb4K8WR70CaDxhuNguS2XG+GjkyMwOzsN5ZktImfhLA=="
+ }
}
},
"object.getownpropertydescriptors": {
@@ -11592,39 +19121,47 @@
"mimic-fn": "^1.0.0"
}
},
+ "open": {
+ "version": "6.4.0",
+ "resolved": "https://registry.npmjs.org/open/-/open-6.4.0.tgz",
+ "integrity": "sha512-IFenVPgF70fSm1keSd2iDBIDIBZkroLeuffXq+wKTzTJlBpesFWojV9lb8mzOfaAzM1sr7HQHuO0vtV0zYekGg==",
+ "requires": {
+ "is-wsl": "^1.1.0"
+ }
+ },
"opentracing": {
- "version": "0.14.3",
- "resolved": "https://registry.npmjs.org/opentracing/-/opentracing-0.14.3.tgz",
- "integrity": "sha1-I+OtAp+mamU5Jq2+V+g0Rp+FUKo="
+ "version": "0.14.4",
+ "resolved": "https://registry.npmjs.org/opentracing/-/opentracing-0.14.4.tgz",
+ "integrity": "sha512-nNnZDkUNExBwEpb7LZaeMeQgvrlO8l4bgY/LvGNZCR0xG/dGWqHqjKrAmR5GUoYo0FIz38kxasvA1aevxWs2CA=="
},
"opn": {
- "version": "5.4.0",
- "resolved": "https://registry.npmjs.org/opn/-/opn-5.4.0.tgz",
- "integrity": "sha512-YF9MNdVy/0qvJvDtunAOzFw9iasOQHpVthTCvGzxt61Il64AYSGdK+rYwld7NAfk9qJ7dt+hymBNSc9LNYS+Sw==",
+ "version": "5.5.0",
+ "resolved": "https://registry.npmjs.org/opn/-/opn-5.5.0.tgz",
+ "integrity": "sha512-PqHpggC9bLV0VeWcdKhkpxY+3JTzetLSqTCWL/z/tFIbI6G8JCjondXklT1JinczLz2Xib62sSp0T/gKT4KksA==",
"requires": {
"is-wsl": "^1.1.0"
}
},
"optimize-css-assets-webpack-plugin": {
- "version": "5.0.1",
- "resolved": "https://registry.npmjs.org/optimize-css-assets-webpack-plugin/-/optimize-css-assets-webpack-plugin-5.0.1.tgz",
- "integrity": "sha512-Rqm6sSjWtx9FchdP0uzTQDc7GXDKnwVEGoSxjezPkzMewx7gEWE9IMUYKmigTRC4U3RaNSwYVnUDLuIdtTpm0A==",
+ "version": "5.0.3",
+ "resolved": "https://registry.npmjs.org/optimize-css-assets-webpack-plugin/-/optimize-css-assets-webpack-plugin-5.0.3.tgz",
+ "integrity": "sha512-q9fbvCRS6EYtUKKSwI87qm2IxlyJK5b4dygW1rKUBT6mMDhdG5e5bZT63v6tnJR9F9FB/H5a0HTmtw+laUBxKA==",
"requires": {
- "cssnano": "^4.1.0",
+ "cssnano": "^4.1.10",
"last-call-webpack-plugin": "^3.0.0"
}
},
"optionator": {
- "version": "0.8.2",
- "resolved": "https://registry.npmjs.org/optionator/-/optionator-0.8.2.tgz",
- "integrity": "sha1-NkxeQJ0/TWMB1sC0wFu6UBgK62Q=",
+ "version": "0.8.3",
+ "resolved": "https://registry.npmjs.org/optionator/-/optionator-0.8.3.tgz",
+ "integrity": "sha512-+IW9pACdk3XWmmTXG8m3upGUJst5XRGzxMRjXzAuJ1XnIFNvfhjjIuYkDvysnPQ7qzqVzLt78BCruntqRhWQbA==",
"requires": {
"deep-is": "~0.1.3",
- "fast-levenshtein": "~2.0.4",
+ "fast-levenshtein": "~2.0.6",
"levn": "~0.3.0",
"prelude-ls": "~1.1.2",
"type-check": "~0.3.2",
- "wordwrap": "~1.0.0"
+ "word-wrap": "~1.2.3"
}
},
"original": {
@@ -11682,11 +19219,6 @@
"resolved": "https://registry.npmjs.org/p-cancelable/-/p-cancelable-0.3.0.tgz",
"integrity": "sha512-RVbZPLso8+jFeq1MfNvgXtCRED2raz/dKpacfTNxsx6pLEpEomM7gah6VeHSYV3+vo0OAi4MkArtQcWWXuQoyw=="
},
- "p-defer": {
- "version": "1.0.0",
- "resolved": "https://registry.npmjs.org/p-defer/-/p-defer-1.0.0.tgz",
- "integrity": "sha1-n26xgvbJqozXQwBKfU+WsZaw+ww="
- },
"p-event": {
"version": "1.3.0",
"resolved": "https://registry.npmjs.org/p-event/-/p-event-1.3.0.tgz",
@@ -11751,6 +19283,14 @@
"resolved": "https://registry.npmjs.org/p-reduce/-/p-reduce-1.0.0.tgz",
"integrity": "sha1-GMKw3ZNqRpClKfgjH1ig/bakffo="
},
+ "p-retry": {
+ "version": "3.0.1",
+ "resolved": "https://registry.npmjs.org/p-retry/-/p-retry-3.0.1.tgz",
+ "integrity": "sha512-XE6G4+YTTkT2a0UWb2kjZe8xNwf8bIbnqpc/IS/idOBVhyves0mK5OJgeocjx7q5pvX/6m23xuzVPYT1uGM73w==",
+ "requires": {
+ "retry": "^0.12.0"
+ }
+ },
"p-timeout": {
"version": "1.2.1",
"resolved": "https://registry.npmjs.org/p-timeout/-/p-timeout-1.2.1.tgz",
@@ -11765,27 +19305,122 @@
"integrity": "sha512-hMp0onDKIajHfIkdRk3P4CdCmErkYAxxDtP3Wx/4nZ3aGlau2VKh3mZpcuFkH27WQkL/3WBCPOktzA9ZOAnMQQ=="
},
"package-json": {
- "version": "4.0.1",
- "resolved": "https://registry.npmjs.org/package-json/-/package-json-4.0.1.tgz",
- "integrity": "sha1-iGmgQBJTZhxMTKPabCEh7VVfXu0=",
+ "version": "6.5.0",
+ "resolved": "https://registry.npmjs.org/package-json/-/package-json-6.5.0.tgz",
+ "integrity": "sha512-k3bdm2n25tkyxcjSKzB5x8kfVxlMdgsbPr0GkZcwHsLpba6cBjqCt1KlcChKEvxHIcTB1FVMuwoijZ26xex5MQ==",
"requires": {
- "got": "^6.7.1",
- "registry-auth-token": "^3.0.1",
- "registry-url": "^3.0.3",
- "semver": "^5.1.0"
+ "got": "^9.6.0",
+ "registry-auth-token": "^4.0.0",
+ "registry-url": "^5.0.0",
+ "semver": "^6.2.0"
+ },
+ "dependencies": {
+ "@sindresorhus/is": {
+ "version": "0.14.0",
+ "resolved": "https://registry.npmjs.org/@sindresorhus/is/-/is-0.14.0.tgz",
+ "integrity": "sha512-9NET910DNaIPngYnLLPeg+Ogzqsi9uM4mSboU5y6p8S5DzMTVEsJZrawi+BoDNUVBa2DhJqQYUFvMDfgU062LQ=="
+ },
+ "cacheable-request": {
+ "version": "6.1.0",
+ "resolved": "https://registry.npmjs.org/cacheable-request/-/cacheable-request-6.1.0.tgz",
+ "integrity": "sha512-Oj3cAGPCqOZX7Rz64Uny2GYAZNliQSqfbePrgAQ1wKAihYmCUnraBtJtKcGR4xz7wF+LoJC+ssFZvv5BgF9Igg==",
+ "requires": {
+ "clone-response": "^1.0.2",
+ "get-stream": "^5.1.0",
+ "http-cache-semantics": "^4.0.0",
+ "keyv": "^3.0.0",
+ "lowercase-keys": "^2.0.0",
+ "normalize-url": "^4.1.0",
+ "responselike": "^1.0.2"
+ },
+ "dependencies": {
+ "get-stream": {
+ "version": "5.1.0",
+ "resolved": "https://registry.npmjs.org/get-stream/-/get-stream-5.1.0.tgz",
+ "integrity": "sha512-EXr1FOzrzTfGeL0gQdeFEvOMm2mzMOglyiOXSTpPC+iAjAKftbr3jpCMWynogwYnM+eSj9sHGc6wjIcDvYiygw==",
+ "requires": {
+ "pump": "^3.0.0"
+ }
+ },
+ "lowercase-keys": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/lowercase-keys/-/lowercase-keys-2.0.0.tgz",
+ "integrity": "sha512-tqNXrS78oMOE73NMxK4EMLQsQowWf8jKooH9g7xPavRT706R6bkQJ6DY2Te7QukaZsulxa30wQ7bk0pm4XiHmA=="
+ }
+ }
+ },
+ "get-stream": {
+ "version": "4.1.0",
+ "resolved": "https://registry.npmjs.org/get-stream/-/get-stream-4.1.0.tgz",
+ "integrity": "sha512-GMat4EJ5161kIy2HevLlr4luNjBgvmj413KaQA7jt4V8B4RDsfpHk7WQ9GVqfYyyx8OS/L66Kox+rJRNklLK7w==",
+ "requires": {
+ "pump": "^3.0.0"
+ }
+ },
+ "got": {
+ "version": "9.6.0",
+ "resolved": "https://registry.npmjs.org/got/-/got-9.6.0.tgz",
+ "integrity": "sha512-R7eWptXuGYxwijs0eV+v3o6+XH1IqVK8dJOEecQfTmkncw9AV4dcw/Dhxi8MdlqPthxxpZyizMzyg8RTmEsG+Q==",
+ "requires": {
+ "@sindresorhus/is": "^0.14.0",
+ "@szmarczak/http-timer": "^1.1.2",
+ "cacheable-request": "^6.0.0",
+ "decompress-response": "^3.3.0",
+ "duplexer3": "^0.1.4",
+ "get-stream": "^4.1.0",
+ "lowercase-keys": "^1.0.1",
+ "mimic-response": "^1.0.1",
+ "p-cancelable": "^1.0.0",
+ "to-readable-stream": "^1.0.0",
+ "url-parse-lax": "^3.0.0"
+ }
+ },
+ "http-cache-semantics": {
+ "version": "4.1.0",
+ "resolved": "https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-4.1.0.tgz",
+ "integrity": "sha512-carPklcUh7ROWRK7Cv27RPtdhYhUsela/ue5/jKzjegVvXDqM2ILE9Q2BGn9JZJh1g87cp56su/FgQSzcWS8cQ=="
+ },
+ "normalize-url": {
+ "version": "4.5.0",
+ "resolved": "https://registry.npmjs.org/normalize-url/-/normalize-url-4.5.0.tgz",
+ "integrity": "sha512-2s47yzUxdexf1OhyRi4Em83iQk0aPvwTddtFz4hnSSw9dCEsLEGf6SwIO8ss/19S9iBb5sJaOuTvTGDeZI00BQ=="
+ },
+ "p-cancelable": {
+ "version": "1.1.0",
+ "resolved": "https://registry.npmjs.org/p-cancelable/-/p-cancelable-1.1.0.tgz",
+ "integrity": "sha512-s73XxOZ4zpt1edZYZzvhqFa6uvQc1vwUa0K0BdtIZgQMAJj9IbebH+JkgKZc9h+B05PKHLOTl4ajG1BmNrVZlw=="
+ },
+ "prepend-http": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/prepend-http/-/prepend-http-2.0.0.tgz",
+ "integrity": "sha1-6SQ0v6XqjBn0HN/UAddBo8gZ2Jc="
+ },
+ "semver": {
+ "version": "6.3.0",
+ "resolved": "https://registry.npmjs.org/semver/-/semver-6.3.0.tgz",
+ "integrity": "sha512-b39TBaTSfV6yBrapU89p5fKekE2m/NwnDocOVruQFS1/veMgdzuPcnOM34M6CwxW8jH/lxEa5rBoDeUwu5HHTw=="
+ },
+ "url-parse-lax": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/url-parse-lax/-/url-parse-lax-3.0.0.tgz",
+ "integrity": "sha1-FrXK/Afb42dsGxmZF3gj1lA6yww=",
+ "requires": {
+ "prepend-http": "^2.0.0"
+ }
+ }
}
},
"pako": {
- "version": "1.0.8",
- "resolved": "https://registry.npmjs.org/pako/-/pako-1.0.8.tgz",
- "integrity": "sha512-6i0HVbUfcKaTv+EG8ZTr75az7GFXcLYk9UyLEg7Notv/Ma+z/UG3TCoz6GiNeOrn1E/e63I0X/Hpw18jHOTUnA=="
+ "version": "1.0.11",
+ "resolved": "https://registry.npmjs.org/pako/-/pako-1.0.11.tgz",
+ "integrity": "sha512-4hLB8Py4zZce5s4yd9XzopqwVv/yGNhV1Bl8NTmCq1763HeK2+EwVTv+leGeL13Dnh2wfbqowVPXCIO0z4taYw=="
},
"parallel-transform": {
- "version": "1.1.0",
- "resolved": "https://registry.npmjs.org/parallel-transform/-/parallel-transform-1.1.0.tgz",
- "integrity": "sha1-1BDwZbBdojCB/NEPKIVMKb2jOwY=",
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/parallel-transform/-/parallel-transform-1.2.0.tgz",
+ "integrity": "sha512-P2vSmIu38uIlvdcU7fDkyrxj33gTUy/ABO5ZUbGowxNCopBq/OoD42bP4UmMrJoPyk4Uqf0mu3mtWBhHCZD8yg==",
"requires": {
- "cyclist": "~0.2.2",
+ "cyclist": "^1.0.1",
"inherits": "^2.0.3",
"readable-stream": "^2.1.5"
}
@@ -11799,24 +19434,24 @@
}
},
"parent-module": {
- "version": "1.0.0",
- "resolved": "https://registry.npmjs.org/parent-module/-/parent-module-1.0.0.tgz",
- "integrity": "sha512-8Mf5juOMmiE4FcmzYc4IaiS9L3+9paz2KOiXzkRviCP6aDmN49Hz6EMWz0lGNp9pX80GvvAuLADtyGfW/Em3TA==",
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/parent-module/-/parent-module-1.0.1.tgz",
+ "integrity": "sha512-GQ2EWRpQV8/o+Aw8YqtfZZPfNRWZYkbidE9k5rpl/hC3vtHHBfGm2Ifi6qWV+coDGkrUKZAxE3Lot5kcsRlh+g==",
"requires": {
"callsites": "^3.0.0"
},
"dependencies": {
"callsites": {
- "version": "3.0.0",
- "resolved": "https://registry.npmjs.org/callsites/-/callsites-3.0.0.tgz",
- "integrity": "sha512-tWnkwu9YEq2uzlBDI4RcLn8jrFvF9AOi8PxDNU3hZZjJcjkcRAq3vCI+vZcg1SuxISDYe86k9VZFwAxDiJGoAw=="
+ "version": "3.1.0",
+ "resolved": "https://registry.npmjs.org/callsites/-/callsites-3.1.0.tgz",
+ "integrity": "sha512-P8BjAsXvZS+VIDUI11hHCQEv74YT67YUi5JJFNWIqL235sBmjX4+qx9Muvls5ivyNENctx46xQLQ3aTuE7ssaQ=="
}
}
},
"parse-asn1": {
- "version": "5.1.4",
- "resolved": "https://registry.npmjs.org/parse-asn1/-/parse-asn1-5.1.4.tgz",
- "integrity": "sha512-Qs5duJcuvNExRfFZ99HDD3z4mAi3r9Wl/FOjEOijlxwCZs7E7mW2vjTpgQ4J8LpTF8x5v+1Vn5UQFejmWT11aw==",
+ "version": "5.1.5",
+ "resolved": "https://registry.npmjs.org/parse-asn1/-/parse-asn1-5.1.5.tgz",
+ "integrity": "sha512-jkMYn1dcJqF6d5CpU689bq7w/b5ALS9ROVSpQDPrZsqqesUJii9qutvoT5ltGedNXMO2e16YUWIghG9KxaViTQ==",
"requires": {
"asn1.js": "^4.0.0",
"browserify-aes": "^1.0.0",
@@ -11869,16 +19504,6 @@
"is-hexadecimal": "^1.0.0"
}
},
- "parse-filepath": {
- "version": "1.0.2",
- "resolved": "https://registry.npmjs.org/parse-filepath/-/parse-filepath-1.0.2.tgz",
- "integrity": "sha1-pjISf1Oq89FYdvWHLz/6x2PWyJE=",
- "requires": {
- "is-absolute": "^1.0.0",
- "map-cache": "^0.2.0",
- "path-root": "^0.1.1"
- }
- },
"parse-headers": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/parse-headers/-/parse-headers-2.0.1.tgz",
@@ -11917,6 +19542,26 @@
"resolved": "https://registry.npmjs.org/parse-passwd/-/parse-passwd-1.0.0.tgz",
"integrity": "sha1-bVuTSkVpk7I9N/QKOC1vFmao5cY="
},
+ "parse-path": {
+ "version": "4.0.1",
+ "resolved": "https://registry.npmjs.org/parse-path/-/parse-path-4.0.1.tgz",
+ "integrity": "sha512-d7yhga0Oc+PwNXDvQ0Jv1BuWkLVPXcAoQ/WREgd6vNNoKYaW52KI+RdOFjI63wjkmps9yUE8VS4veP+AgpQ/hA==",
+ "requires": {
+ "is-ssh": "^1.3.0",
+ "protocols": "^1.4.0"
+ }
+ },
+ "parse-url": {
+ "version": "5.0.1",
+ "resolved": "https://registry.npmjs.org/parse-url/-/parse-url-5.0.1.tgz",
+ "integrity": "sha512-flNUPP27r3vJpROi0/R3/2efgKkyXqnXwyP1KQ2U0SfFRgdizOdWfvrrvJg1LuOoxs7GQhmxJlq23IpQ/BkByg==",
+ "requires": {
+ "is-ssh": "^1.3.0",
+ "normalize-url": "^3.3.0",
+ "parse-path": "^4.0.0",
+ "protocols": "^1.4.0"
+ }
+ },
"parse5": {
"version": "3.0.3",
"resolved": "https://registry.npmjs.org/parse5/-/parse5-3.0.3.tgz",
@@ -11942,9 +19587,9 @@
}
},
"parseurl": {
- "version": "1.3.2",
- "resolved": "https://registry.npmjs.org/parseurl/-/parseurl-1.3.2.tgz",
- "integrity": "sha1-/CidTtiZMRlGDBViUyYs3I3mW/M="
+ "version": "1.3.3",
+ "resolved": "https://registry.npmjs.org/parseurl/-/parseurl-1.3.3.tgz",
+ "integrity": "sha512-CiyeOxFT/JZyN5m0z9PfXw4SCBJ6Sygz1Dpl0wqjlhDEGGBP1GnsUVEL0p63hoG1fcj3fHynXi9NYO4nWOL+qQ=="
},
"pascal-case": {
"version": "2.0.1",
@@ -11961,9 +19606,9 @@
"integrity": "sha1-s2PlXoAGym/iF4TS2yK9FdeRfxQ="
},
"path-browserify": {
- "version": "0.0.0",
- "resolved": "http://registry.npmjs.org/path-browserify/-/path-browserify-0.0.0.tgz",
- "integrity": "sha1-oLhwcpquIUAFt9UDLsLLuw+0RRo="
+ "version": "0.0.1",
+ "resolved": "https://registry.npmjs.org/path-browserify/-/path-browserify-0.0.1.tgz",
+ "integrity": "sha512-BapA40NHICOS+USX9SN4tyhq+A2RrN/Ws5F0Z5aMHDp98Fl86lX8Oti8B7uN93L4Ifv4fHOEA+pQw87gmMO/lQ=="
},
"path-case": {
"version": "2.1.1",
@@ -12008,19 +19653,6 @@
"resolved": "https://registry.npmjs.org/path-posix/-/path-posix-1.0.0.tgz",
"integrity": "sha1-BrJhE/Vr6rBCVFojv6iAA8ysJg8="
},
- "path-root": {
- "version": "0.1.1",
- "resolved": "https://registry.npmjs.org/path-root/-/path-root-0.1.1.tgz",
- "integrity": "sha1-mkpoFMrBwM1zNgqV8yCDyOpHRbc=",
- "requires": {
- "path-root-regex": "^0.1.0"
- }
- },
- "path-root-regex": {
- "version": "0.1.2",
- "resolved": "https://registry.npmjs.org/path-root-regex/-/path-root-regex-0.1.2.tgz",
- "integrity": "sha1-v8zcjfWxLcUsi0PsONGNcsBLqW0="
- },
"path-to-regexp": {
"version": "0.1.7",
"resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.7.tgz",
@@ -12036,15 +19668,15 @@
"dependencies": {
"pify": {
"version": "2.3.0",
- "resolved": "http://registry.npmjs.org/pify/-/pify-2.3.0.tgz",
+ "resolved": "https://registry.npmjs.org/pify/-/pify-2.3.0.tgz",
"integrity": "sha1-7RQaasBDqEnqWISY59yosVMw6Qw="
}
}
},
"pbkdf2": {
- "version": "3.0.17",
- "resolved": "https://registry.npmjs.org/pbkdf2/-/pbkdf2-3.0.17.tgz",
- "integrity": "sha512-U/il5MsrZp7mGg3mSQfn742na2T+1/vHDCG5/iTI3X9MKUuYUZVLQhyRsg06mCgDBTd57TxzgZt7P+fYfjRLtA==",
+ "version": "3.1.1",
+ "resolved": "https://registry.npmjs.org/pbkdf2/-/pbkdf2-3.1.1.tgz",
+ "integrity": "sha512-4Ejy1OPxi9f2tt1rRV7Go7zmfDQ+ZectEQz3VGUQhgq62HtIRPDyG/JtnwIxs6x3uNMwo2V7q1fMvKjb+Tnpqg==",
"requires": {
"create-hash": "^1.1.2",
"create-hmac": "^1.1.4",
@@ -12073,6 +19705,11 @@
"resolved": "https://registry.npmjs.org/physical-cpu-count/-/physical-cpu-count-2.0.0.tgz",
"integrity": "sha1-GN4vl+S/epVRrXURlCtUlverpmA="
},
+ "picomatch": {
+ "version": "2.2.2",
+ "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-2.2.2.tgz",
+ "integrity": "sha512-q0M/9eZHzmr0AulXyPwNfZjtwZ/RBZlbN3K3CErVrk50T2ASYI7Bye0EvekFY3IP1Nt2DHu0re+V2ZHIpMkuWg=="
+ },
"pify": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/pify/-/pify-3.0.0.tgz",
@@ -12125,9 +19762,9 @@
}
},
"p-limit": {
- "version": "2.1.0",
- "resolved": "https://registry.npmjs.org/p-limit/-/p-limit-2.1.0.tgz",
- "integrity": "sha512-NhURkNcrVB+8hNfLuysU8enY5xn2KXphsHBaC2YmRNTZRc7RWusw6apSpdEj3jo4CMb6W9nrF6tTnsJsJeyu6g==",
+ "version": "2.3.0",
+ "resolved": "https://registry.npmjs.org/p-limit/-/p-limit-2.3.0.tgz",
+ "integrity": "sha512-//88mFWSJx8lxCzwdAABTJL2MyWB12+eIY7MDL2SqLmAkeKU9qxRvWuSyTjm3FUmpBEMuFfckAIqEaVGUDxb6w==",
"requires": {
"p-try": "^2.0.0"
}
@@ -12142,6 +19779,14 @@
}
}
},
+ "pkg-up": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/pkg-up/-/pkg-up-2.0.0.tgz",
+ "integrity": "sha1-yBmscoBZpGHKscOImivjxJoATX8=",
+ "requires": {
+ "find-up": "^2.1.0"
+ }
+ },
"pngjs": {
"version": "3.3.3",
"resolved": "https://registry.npmjs.org/pngjs/-/pngjs-3.3.3.tgz",
@@ -12186,28 +19831,36 @@
}
}
},
- "portfinder": {
- "version": "1.0.20",
- "resolved": "https://registry.npmjs.org/portfinder/-/portfinder-1.0.20.tgz",
- "integrity": "sha512-Yxe4mTyDzTd59PZJY4ojZR8F+E5e97iq2ZOHPz3HDgSvYC5siNad2tLooQ5y5QHyQhc3xVqvyk/eNA3wuoa7Sw==",
+ "pnp-webpack-plugin": {
+ "version": "1.6.4",
+ "resolved": "https://registry.npmjs.org/pnp-webpack-plugin/-/pnp-webpack-plugin-1.6.4.tgz",
+ "integrity": "sha512-7Wjy+9E3WwLOEL30D+m8TSTF7qJJUJLONBnwQp0518siuMxUQUbgZwssaFX+QKlZkjHZcw/IpZCt/H0srrntSg==",
"requires": {
- "async": "^1.5.2",
- "debug": "^2.2.0",
- "mkdirp": "0.5.x"
+ "ts-pnp": "^1.1.6"
+ }
+ },
+ "portfinder": {
+ "version": "1.0.26",
+ "resolved": "https://registry.npmjs.org/portfinder/-/portfinder-1.0.26.tgz",
+ "integrity": "sha512-Xi7mKxJHHMI3rIUrnm/jjUgwhbYMkp/XKEcZX3aG4BrumLpq3nmoQMX+ClYnDZnZ/New7IatC1no5RX0zo1vXQ==",
+ "requires": {
+ "async": "^2.6.2",
+ "debug": "^3.1.1",
+ "mkdirp": "^0.5.1"
},
"dependencies": {
- "debug": {
- "version": "2.6.9",
- "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz",
- "integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==",
+ "async": {
+ "version": "2.6.3",
+ "resolved": "https://registry.npmjs.org/async/-/async-2.6.3.tgz",
+ "integrity": "sha512-zflvls11DCy+dQWzTW2dzuilv8Z5X/pjfmZOWba6TNIVDm+2UDaJmXSOXlasHKfNBs8oo3M0aT50fDEWfKZjXg==",
"requires": {
- "ms": "2.0.0"
+ "lodash": "^4.17.14"
}
},
- "ms": {
- "version": "2.0.0",
- "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
- "integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g="
+ "lodash": {
+ "version": "4.17.15",
+ "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz",
+ "integrity": "sha512-8xOcRHvCjnocdS5cpwXQXVzmmh5e5+saE2QGoeQmbKmRS6J3VQppPOIt0MnmE+4xlZoumy0GPG0D0MVIQbNA1A=="
}
}
},
@@ -12234,14 +19887,13 @@
}
},
"postcss-calc": {
- "version": "7.0.1",
- "resolved": "https://registry.npmjs.org/postcss-calc/-/postcss-calc-7.0.1.tgz",
- "integrity": "sha512-oXqx0m6tb4N3JGdmeMSc/i91KppbYsFZKdH0xMOqK8V1rJlzrKlTdokz8ozUXLVejydRN6u2IddxpcijRj2FqQ==",
+ "version": "7.0.2",
+ "resolved": "https://registry.npmjs.org/postcss-calc/-/postcss-calc-7.0.2.tgz",
+ "integrity": "sha512-rofZFHUg6ZIrvRwPeFktv06GdbDYLcGqh9EwiMutZg+a0oePCCw1zHOEiji6LCpyRcjTREtPASuUqeAvYlEVvQ==",
"requires": {
- "css-unit-converter": "^1.1.1",
- "postcss": "^7.0.5",
- "postcss-selector-parser": "^5.0.0-rc.4",
- "postcss-value-parser": "^3.3.1"
+ "postcss": "^7.0.27",
+ "postcss-selector-parser": "^6.0.2",
+ "postcss-value-parser": "^4.0.2"
},
"dependencies": {
"chalk": {
@@ -12265,15 +19917,20 @@
}
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
"supports-color": "^6.1.0"
}
},
+ "postcss-value-parser": {
+ "version": "4.1.0",
+ "resolved": "https://registry.npmjs.org/postcss-value-parser/-/postcss-value-parser-4.1.0.tgz",
+ "integrity": "sha512-97DXOFbQJhk71ne5/Mt6cOu6yxsSfM0QGQyl0L25Gca4yGWEGJaig7l7gbCX623VqTBNGLRLaVUCnNkcedlRSQ=="
+ },
"source-map": {
"version": "0.6.1",
"resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz",
@@ -12302,19 +19959,20 @@
},
"dependencies": {
"browserslist": {
- "version": "4.4.2",
- "resolved": "https://registry.npmjs.org/browserslist/-/browserslist-4.4.2.tgz",
- "integrity": "sha512-ISS/AIAiHERJ3d45Fz0AVYKkgcy+F/eJHzKEvv1j0wwKGKD9T3BrwKr/5g45L+Y4XIK5PlTqefHciRFcfE1Jxg==",
+ "version": "4.12.0",
+ "resolved": "https://registry.npmjs.org/browserslist/-/browserslist-4.12.0.tgz",
+ "integrity": "sha512-UH2GkcEDSI0k/lRkuDSzFl9ZZ87skSy9w2XAn1MsZnL+4c4rqbBd3e82UWHbYDpztABrPBhZsTEeuxVfHppqDg==",
"requires": {
- "caniuse-lite": "^1.0.30000939",
- "electron-to-chromium": "^1.3.113",
- "node-releases": "^1.1.8"
+ "caniuse-lite": "^1.0.30001043",
+ "electron-to-chromium": "^1.3.413",
+ "node-releases": "^1.1.53",
+ "pkg-up": "^2.0.0"
}
},
"caniuse-lite": {
- "version": "1.0.30000939",
- "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30000939.tgz",
- "integrity": "sha512-oXB23ImDJOgQpGjRv1tCtzAvJr4/OvrHi5SO2vUgB0g0xpdZZoA/BxfImiWfdwoYdUTtQrPsXsvYU/dmCSM8gg=="
+ "version": "1.0.30001084",
+ "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001084.tgz",
+ "integrity": "sha512-ftdc5oGmhEbLUuMZ/Qp3mOpzfZLCxPYKcvGv6v2dJJ+8EdqcvZRbAGOiLmkM/PV1QGta/uwBs8/nCl6sokDW6w=="
},
"chalk": {
"version": "2.4.2",
@@ -12337,22 +19995,19 @@
}
},
"electron-to-chromium": {
- "version": "1.3.113",
- "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.3.113.tgz",
- "integrity": "sha512-De+lPAxEcpxvqPTyZAXELNpRZXABRxf+uL/rSykstQhzj/B0l1150G/ExIIxKc16lI89Hgz81J0BHAcbTqK49g=="
+ "version": "1.3.474",
+ "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.3.474.tgz",
+ "integrity": "sha512-fPkSgT9IBKmVJz02XioNsIpg0WYmkPrvU1lUJblMMJALxyE7/32NGvbJQKKxpNokozPvqfqkuUqVClYsvetcLw=="
},
"node-releases": {
- "version": "1.1.8",
- "resolved": "https://registry.npmjs.org/node-releases/-/node-releases-1.1.8.tgz",
- "integrity": "sha512-gQm+K9mGCiT/NXHy+V/ZZS1N/LOaGGqRAAJJs3X9Ah1g+CIbRcBgNyoNYQ+SEtcyAtB9KqDruu+fF7nWjsqRaA==",
- "requires": {
- "semver": "^5.3.0"
- }
+ "version": "1.1.58",
+ "resolved": "https://registry.npmjs.org/node-releases/-/node-releases-1.1.58.tgz",
+ "integrity": "sha512-NxBudgVKiRh/2aPWMgPR7bPTX0VPmGx5QBwCtdHitnqFE5/O8DeBXuIMH1nwNnw/aMo6AjOrpsHzfY3UbUJ7yg=="
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -12404,9 +20059,9 @@
}
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -12457,9 +20112,9 @@
}
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -12510,9 +20165,9 @@
}
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -12563,9 +20218,9 @@
}
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -12616,9 +20271,9 @@
}
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -12649,25 +20304,12 @@
}
},
"postcss-load-config": {
- "version": "2.0.0",
- "resolved": "https://registry.npmjs.org/postcss-load-config/-/postcss-load-config-2.0.0.tgz",
- "integrity": "sha512-V5JBLzw406BB8UIfsAWSK2KSwIJ5yoEIVFb4gVkXci0QdKgA24jLmHZ/ghe/GgX0lJ0/D1uUK1ejhzEY94MChQ==",
+ "version": "2.1.0",
+ "resolved": "https://registry.npmjs.org/postcss-load-config/-/postcss-load-config-2.1.0.tgz",
+ "integrity": "sha512-4pV3JJVPLd5+RueiVVB+gFOAa7GWc25XQcMp86Zexzke69mKf6Nx9LRcQywdz7yZI9n1udOxmLuAwTBypypF8Q==",
"requires": {
- "cosmiconfig": "^4.0.0",
+ "cosmiconfig": "^5.0.0",
"import-cwd": "^2.0.0"
- },
- "dependencies": {
- "cosmiconfig": {
- "version": "4.0.0",
- "resolved": "https://registry.npmjs.org/cosmiconfig/-/cosmiconfig-4.0.0.tgz",
- "integrity": "sha512-6e5vDdrXZD+t5v0L8CrurPeybg4Fmf+FCSYxXKYVAqLUtyCSbuyqE059d0kDthTNRzKVjL7QMgNpEUlsoYH3iQ==",
- "requires": {
- "is-directory": "^0.3.1",
- "js-yaml": "^3.9.0",
- "parse-json": "^4.0.0",
- "require-from-string": "^2.0.1"
- }
- }
}
},
"postcss-loader": {
@@ -12679,6 +20321,17 @@
"postcss": "^6.0.0",
"postcss-load-config": "^2.0.0",
"schema-utils": "^0.4.0"
+ },
+ "dependencies": {
+ "schema-utils": {
+ "version": "0.4.7",
+ "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-0.4.7.tgz",
+ "integrity": "sha512-v/iwU6wvwGK8HbU9yi3/nhGzP0yGSuhQMzL6ySiec1FSrZZDkhm4noOSWzrNFo/jEc+SJY6jRTwuwbSXJPDUnQ==",
+ "requires": {
+ "ajv": "^6.1.0",
+ "ajv-keywords": "^3.1.0"
+ }
+ }
}
},
"postcss-merge-longhand": {
@@ -12713,9 +20366,9 @@
}
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -12751,19 +20404,20 @@
},
"dependencies": {
"browserslist": {
- "version": "4.4.2",
- "resolved": "https://registry.npmjs.org/browserslist/-/browserslist-4.4.2.tgz",
- "integrity": "sha512-ISS/AIAiHERJ3d45Fz0AVYKkgcy+F/eJHzKEvv1j0wwKGKD9T3BrwKr/5g45L+Y4XIK5PlTqefHciRFcfE1Jxg==",
+ "version": "4.12.0",
+ "resolved": "https://registry.npmjs.org/browserslist/-/browserslist-4.12.0.tgz",
+ "integrity": "sha512-UH2GkcEDSI0k/lRkuDSzFl9ZZ87skSy9w2XAn1MsZnL+4c4rqbBd3e82UWHbYDpztABrPBhZsTEeuxVfHppqDg==",
"requires": {
- "caniuse-lite": "^1.0.30000939",
- "electron-to-chromium": "^1.3.113",
- "node-releases": "^1.1.8"
+ "caniuse-lite": "^1.0.30001043",
+ "electron-to-chromium": "^1.3.413",
+ "node-releases": "^1.1.53",
+ "pkg-up": "^2.0.0"
}
},
"caniuse-lite": {
- "version": "1.0.30000939",
- "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30000939.tgz",
- "integrity": "sha512-oXB23ImDJOgQpGjRv1tCtzAvJr4/OvrHi5SO2vUgB0g0xpdZZoA/BxfImiWfdwoYdUTtQrPsXsvYU/dmCSM8gg=="
+ "version": "1.0.30001084",
+ "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001084.tgz",
+ "integrity": "sha512-ftdc5oGmhEbLUuMZ/Qp3mOpzfZLCxPYKcvGv6v2dJJ+8EdqcvZRbAGOiLmkM/PV1QGta/uwBs8/nCl6sokDW6w=="
},
"chalk": {
"version": "2.4.2",
@@ -12786,22 +20440,19 @@
}
},
"electron-to-chromium": {
- "version": "1.3.113",
- "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.3.113.tgz",
- "integrity": "sha512-De+lPAxEcpxvqPTyZAXELNpRZXABRxf+uL/rSykstQhzj/B0l1150G/ExIIxKc16lI89Hgz81J0BHAcbTqK49g=="
+ "version": "1.3.474",
+ "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.3.474.tgz",
+ "integrity": "sha512-fPkSgT9IBKmVJz02XioNsIpg0WYmkPrvU1lUJblMMJALxyE7/32NGvbJQKKxpNokozPvqfqkuUqVClYsvetcLw=="
},
"node-releases": {
- "version": "1.1.8",
- "resolved": "https://registry.npmjs.org/node-releases/-/node-releases-1.1.8.tgz",
- "integrity": "sha512-gQm+K9mGCiT/NXHy+V/ZZS1N/LOaGGqRAAJJs3X9Ah1g+CIbRcBgNyoNYQ+SEtcyAtB9KqDruu+fF7nWjsqRaA==",
- "requires": {
- "semver": "^5.3.0"
- }
+ "version": "1.1.58",
+ "resolved": "https://registry.npmjs.org/node-releases/-/node-releases-1.1.58.tgz",
+ "integrity": "sha512-NxBudgVKiRh/2aPWMgPR7bPTX0VPmGx5QBwCtdHitnqFE5/O8DeBXuIMH1nwNnw/aMo6AjOrpsHzfY3UbUJ7yg=="
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -12809,11 +20460,11 @@
}
},
"postcss-selector-parser": {
- "version": "3.1.1",
- "resolved": "https://registry.npmjs.org/postcss-selector-parser/-/postcss-selector-parser-3.1.1.tgz",
- "integrity": "sha1-T4dfSvsMllc9XPTXQBGu4lCn6GU=",
+ "version": "3.1.2",
+ "resolved": "https://registry.npmjs.org/postcss-selector-parser/-/postcss-selector-parser-3.1.2.tgz",
+ "integrity": "sha512-h7fJ/5uWuRVyOtkO45pnt1Ih40CEleeyCHzipqAZO2e5H20g25Y48uYnFUiShvY4rZWNJ/Bib/KVPmanaCtOhA==",
"requires": {
- "dot-prop": "^4.1.1",
+ "dot-prop": "^5.2.0",
"indexes-of": "^1.0.1",
"uniq": "^1.0.1"
}
@@ -12863,9 +20514,9 @@
}
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -12919,9 +20570,9 @@
}
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -12957,19 +20608,20 @@
},
"dependencies": {
"browserslist": {
- "version": "4.4.2",
- "resolved": "https://registry.npmjs.org/browserslist/-/browserslist-4.4.2.tgz",
- "integrity": "sha512-ISS/AIAiHERJ3d45Fz0AVYKkgcy+F/eJHzKEvv1j0wwKGKD9T3BrwKr/5g45L+Y4XIK5PlTqefHciRFcfE1Jxg==",
+ "version": "4.12.0",
+ "resolved": "https://registry.npmjs.org/browserslist/-/browserslist-4.12.0.tgz",
+ "integrity": "sha512-UH2GkcEDSI0k/lRkuDSzFl9ZZ87skSy9w2XAn1MsZnL+4c4rqbBd3e82UWHbYDpztABrPBhZsTEeuxVfHppqDg==",
"requires": {
- "caniuse-lite": "^1.0.30000939",
- "electron-to-chromium": "^1.3.113",
- "node-releases": "^1.1.8"
+ "caniuse-lite": "^1.0.30001043",
+ "electron-to-chromium": "^1.3.413",
+ "node-releases": "^1.1.53",
+ "pkg-up": "^2.0.0"
}
},
"caniuse-lite": {
- "version": "1.0.30000939",
- "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30000939.tgz",
- "integrity": "sha512-oXB23ImDJOgQpGjRv1tCtzAvJr4/OvrHi5SO2vUgB0g0xpdZZoA/BxfImiWfdwoYdUTtQrPsXsvYU/dmCSM8gg=="
+ "version": "1.0.30001084",
+ "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001084.tgz",
+ "integrity": "sha512-ftdc5oGmhEbLUuMZ/Qp3mOpzfZLCxPYKcvGv6v2dJJ+8EdqcvZRbAGOiLmkM/PV1QGta/uwBs8/nCl6sokDW6w=="
},
"chalk": {
"version": "2.4.2",
@@ -12992,22 +20644,19 @@
}
},
"electron-to-chromium": {
- "version": "1.3.113",
- "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.3.113.tgz",
- "integrity": "sha512-De+lPAxEcpxvqPTyZAXELNpRZXABRxf+uL/rSykstQhzj/B0l1150G/ExIIxKc16lI89Hgz81J0BHAcbTqK49g=="
+ "version": "1.3.474",
+ "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.3.474.tgz",
+ "integrity": "sha512-fPkSgT9IBKmVJz02XioNsIpg0WYmkPrvU1lUJblMMJALxyE7/32NGvbJQKKxpNokozPvqfqkuUqVClYsvetcLw=="
},
"node-releases": {
- "version": "1.1.8",
- "resolved": "https://registry.npmjs.org/node-releases/-/node-releases-1.1.8.tgz",
- "integrity": "sha512-gQm+K9mGCiT/NXHy+V/ZZS1N/LOaGGqRAAJJs3X9Ah1g+CIbRcBgNyoNYQ+SEtcyAtB9KqDruu+fF7nWjsqRaA==",
- "requires": {
- "semver": "^5.3.0"
- }
+ "version": "1.1.58",
+ "resolved": "https://registry.npmjs.org/node-releases/-/node-releases-1.1.58.tgz",
+ "integrity": "sha512-NxBudgVKiRh/2aPWMgPR7bPTX0VPmGx5QBwCtdHitnqFE5/O8DeBXuIMH1nwNnw/aMo6AjOrpsHzfY3UbUJ7yg=="
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -13061,9 +20710,9 @@
}
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -13071,11 +20720,11 @@
}
},
"postcss-selector-parser": {
- "version": "3.1.1",
- "resolved": "https://registry.npmjs.org/postcss-selector-parser/-/postcss-selector-parser-3.1.1.tgz",
- "integrity": "sha1-T4dfSvsMllc9XPTXQBGu4lCn6GU=",
+ "version": "3.1.2",
+ "resolved": "https://registry.npmjs.org/postcss-selector-parser/-/postcss-selector-parser-3.1.2.tgz",
+ "integrity": "sha512-h7fJ/5uWuRVyOtkO45pnt1Ih40CEleeyCHzipqAZO2e5H20g25Y48uYnFUiShvY4rZWNJ/Bib/KVPmanaCtOhA==",
"requires": {
- "dot-prop": "^4.1.1",
+ "dot-prop": "^5.2.0",
"indexes-of": "^1.0.1",
"uniq": "^1.0.1"
}
@@ -13159,9 +20808,9 @@
}
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -13214,9 +20863,9 @@
}
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -13270,9 +20919,9 @@
}
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -13326,9 +20975,9 @@
}
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -13381,9 +21030,9 @@
}
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -13436,9 +21085,9 @@
}
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -13471,19 +21120,20 @@
},
"dependencies": {
"browserslist": {
- "version": "4.4.2",
- "resolved": "https://registry.npmjs.org/browserslist/-/browserslist-4.4.2.tgz",
- "integrity": "sha512-ISS/AIAiHERJ3d45Fz0AVYKkgcy+F/eJHzKEvv1j0wwKGKD9T3BrwKr/5g45L+Y4XIK5PlTqefHciRFcfE1Jxg==",
+ "version": "4.12.0",
+ "resolved": "https://registry.npmjs.org/browserslist/-/browserslist-4.12.0.tgz",
+ "integrity": "sha512-UH2GkcEDSI0k/lRkuDSzFl9ZZ87skSy9w2XAn1MsZnL+4c4rqbBd3e82UWHbYDpztABrPBhZsTEeuxVfHppqDg==",
"requires": {
- "caniuse-lite": "^1.0.30000939",
- "electron-to-chromium": "^1.3.113",
- "node-releases": "^1.1.8"
+ "caniuse-lite": "^1.0.30001043",
+ "electron-to-chromium": "^1.3.413",
+ "node-releases": "^1.1.53",
+ "pkg-up": "^2.0.0"
}
},
"caniuse-lite": {
- "version": "1.0.30000939",
- "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30000939.tgz",
- "integrity": "sha512-oXB23ImDJOgQpGjRv1tCtzAvJr4/OvrHi5SO2vUgB0g0xpdZZoA/BxfImiWfdwoYdUTtQrPsXsvYU/dmCSM8gg=="
+ "version": "1.0.30001084",
+ "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001084.tgz",
+ "integrity": "sha512-ftdc5oGmhEbLUuMZ/Qp3mOpzfZLCxPYKcvGv6v2dJJ+8EdqcvZRbAGOiLmkM/PV1QGta/uwBs8/nCl6sokDW6w=="
},
"chalk": {
"version": "2.4.2",
@@ -13506,22 +21156,19 @@
}
},
"electron-to-chromium": {
- "version": "1.3.113",
- "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.3.113.tgz",
- "integrity": "sha512-De+lPAxEcpxvqPTyZAXELNpRZXABRxf+uL/rSykstQhzj/B0l1150G/ExIIxKc16lI89Hgz81J0BHAcbTqK49g=="
+ "version": "1.3.474",
+ "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.3.474.tgz",
+ "integrity": "sha512-fPkSgT9IBKmVJz02XioNsIpg0WYmkPrvU1lUJblMMJALxyE7/32NGvbJQKKxpNokozPvqfqkuUqVClYsvetcLw=="
},
"node-releases": {
- "version": "1.1.8",
- "resolved": "https://registry.npmjs.org/node-releases/-/node-releases-1.1.8.tgz",
- "integrity": "sha512-gQm+K9mGCiT/NXHy+V/ZZS1N/LOaGGqRAAJJs3X9Ah1g+CIbRcBgNyoNYQ+SEtcyAtB9KqDruu+fF7nWjsqRaA==",
- "requires": {
- "semver": "^5.3.0"
- }
+ "version": "1.1.58",
+ "resolved": "https://registry.npmjs.org/node-releases/-/node-releases-1.1.58.tgz",
+ "integrity": "sha512-NxBudgVKiRh/2aPWMgPR7bPTX0VPmGx5QBwCtdHitnqFE5/O8DeBXuIMH1nwNnw/aMo6AjOrpsHzfY3UbUJ7yg=="
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -13575,9 +21222,9 @@
}
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -13629,9 +21276,9 @@
}
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -13684,9 +21331,9 @@
}
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -13720,19 +21367,20 @@
},
"dependencies": {
"browserslist": {
- "version": "4.4.2",
- "resolved": "https://registry.npmjs.org/browserslist/-/browserslist-4.4.2.tgz",
- "integrity": "sha512-ISS/AIAiHERJ3d45Fz0AVYKkgcy+F/eJHzKEvv1j0wwKGKD9T3BrwKr/5g45L+Y4XIK5PlTqefHciRFcfE1Jxg==",
+ "version": "4.12.0",
+ "resolved": "https://registry.npmjs.org/browserslist/-/browserslist-4.12.0.tgz",
+ "integrity": "sha512-UH2GkcEDSI0k/lRkuDSzFl9ZZ87skSy9w2XAn1MsZnL+4c4rqbBd3e82UWHbYDpztABrPBhZsTEeuxVfHppqDg==",
"requires": {
- "caniuse-lite": "^1.0.30000939",
- "electron-to-chromium": "^1.3.113",
- "node-releases": "^1.1.8"
+ "caniuse-lite": "^1.0.30001043",
+ "electron-to-chromium": "^1.3.413",
+ "node-releases": "^1.1.53",
+ "pkg-up": "^2.0.0"
}
},
"caniuse-lite": {
- "version": "1.0.30000939",
- "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30000939.tgz",
- "integrity": "sha512-oXB23ImDJOgQpGjRv1tCtzAvJr4/OvrHi5SO2vUgB0g0xpdZZoA/BxfImiWfdwoYdUTtQrPsXsvYU/dmCSM8gg=="
+ "version": "1.0.30001084",
+ "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001084.tgz",
+ "integrity": "sha512-ftdc5oGmhEbLUuMZ/Qp3mOpzfZLCxPYKcvGv6v2dJJ+8EdqcvZRbAGOiLmkM/PV1QGta/uwBs8/nCl6sokDW6w=="
},
"chalk": {
"version": "2.4.2",
@@ -13755,22 +21403,19 @@
}
},
"electron-to-chromium": {
- "version": "1.3.113",
- "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.3.113.tgz",
- "integrity": "sha512-De+lPAxEcpxvqPTyZAXELNpRZXABRxf+uL/rSykstQhzj/B0l1150G/ExIIxKc16lI89Hgz81J0BHAcbTqK49g=="
+ "version": "1.3.474",
+ "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.3.474.tgz",
+ "integrity": "sha512-fPkSgT9IBKmVJz02XioNsIpg0WYmkPrvU1lUJblMMJALxyE7/32NGvbJQKKxpNokozPvqfqkuUqVClYsvetcLw=="
},
"node-releases": {
- "version": "1.1.8",
- "resolved": "https://registry.npmjs.org/node-releases/-/node-releases-1.1.8.tgz",
- "integrity": "sha512-gQm+K9mGCiT/NXHy+V/ZZS1N/LOaGGqRAAJJs3X9Ah1g+CIbRcBgNyoNYQ+SEtcyAtB9KqDruu+fF7nWjsqRaA==",
- "requires": {
- "semver": "^5.3.0"
- }
+ "version": "1.1.58",
+ "resolved": "https://registry.npmjs.org/node-releases/-/node-releases-1.1.58.tgz",
+ "integrity": "sha512-NxBudgVKiRh/2aPWMgPR7bPTX0VPmGx5QBwCtdHitnqFE5/O8DeBXuIMH1nwNnw/aMo6AjOrpsHzfY3UbUJ7yg=="
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -13824,9 +21469,9 @@
}
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -13849,20 +21494,13 @@
}
},
"postcss-selector-parser": {
- "version": "5.0.0",
- "resolved": "https://registry.npmjs.org/postcss-selector-parser/-/postcss-selector-parser-5.0.0.tgz",
- "integrity": "sha512-w+zLE5Jhg6Liz8+rQOWEAwtwkyqpfnmsinXjXg6cY7YIONZZtgvE0v2O0uhQBs0peNomOJwWRKt6JBfTdTd3OQ==",
+ "version": "6.0.2",
+ "resolved": "https://registry.npmjs.org/postcss-selector-parser/-/postcss-selector-parser-6.0.2.tgz",
+ "integrity": "sha512-36P2QR59jDTOAiIkqEprfJDsoNrvwFei3eCqKd1Y0tUsBimsq39BLp7RD+JWny3WgB1zGhJX8XVePwm9k4wdBg==",
"requires": {
- "cssesc": "^2.0.0",
+ "cssesc": "^3.0.0",
"indexes-of": "^1.0.1",
"uniq": "^1.0.1"
- },
- "dependencies": {
- "cssesc": {
- "version": "2.0.0",
- "resolved": "https://registry.npmjs.org/cssesc/-/cssesc-2.0.0.tgz",
- "integrity": "sha512-MsCAG1z9lPdoO/IUMLSBWBSVxVtJ1395VGIQ+Fc2gNdkQ1hNDnQdw3YhA71WJCBW1vdwA0cAnk/DnW6bqoEUYg=="
- }
}
},
"postcss-svgo": {
@@ -13897,9 +21535,9 @@
}
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -13952,9 +21590,9 @@
}
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -14046,7 +21684,8 @@
"prettier": {
"version": "1.16.4",
"resolved": "https://registry.npmjs.org/prettier/-/prettier-1.16.4.tgz",
- "integrity": "sha512-ZzWuos7TI5CKUeQAtFd6Zhm2s6EpAD/ZLApIhsF9pRvRtM1RFo61dM/4MSRUA0SuLugA/zgrZD8m0BaY46Og7g=="
+ "integrity": "sha512-ZzWuos7TI5CKUeQAtFd6Zhm2s6EpAD/ZLApIhsF9pRvRtM1RFo61dM/4MSRUA0SuLugA/zgrZD8m0BaY46Og7g==",
+ "dev": true
},
"pretty-bytes": {
"version": "4.0.2",
@@ -14062,6 +21701,51 @@
"utila": "~0.4"
}
},
+ "pretty-format": {
+ "version": "25.5.0",
+ "resolved": "https://registry.npmjs.org/pretty-format/-/pretty-format-25.5.0.tgz",
+ "integrity": "sha512-kbo/kq2LQ/A/is0PQwsEHM7Ca6//bGPPvU6UnsdDRSKTWxT/ru/xb88v4BJf6a69H+uTytOEsTusT9ksd/1iWQ==",
+ "requires": {
+ "@jest/types": "^25.5.0",
+ "ansi-regex": "^5.0.0",
+ "ansi-styles": "^4.0.0",
+ "react-is": "^16.12.0"
+ },
+ "dependencies": {
+ "ansi-regex": {
+ "version": "5.0.0",
+ "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz",
+ "integrity": "sha512-bY6fj56OUQ0hU1KjFNDQuJFezqKdrAyFdIevADiqrWHwSlbmBNMHp5ak2f40Pm8JTFyM2mqxkG6ngkHO11f/lg=="
+ },
+ "ansi-styles": {
+ "version": "4.2.1",
+ "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.2.1.tgz",
+ "integrity": "sha512-9VGjrMsG1vePxcSweQsN20KY/c4zN0h9fLjqAbwbPfahM3t+NL+M9HC8xeXG2I8pX5NoamTGNuomEUFI7fcUjA==",
+ "requires": {
+ "@types/color-name": "^1.1.1",
+ "color-convert": "^2.0.1"
+ }
+ },
+ "color-convert": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz",
+ "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==",
+ "requires": {
+ "color-name": "~1.1.4"
+ }
+ },
+ "color-name": {
+ "version": "1.1.4",
+ "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz",
+ "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA=="
+ },
+ "react-is": {
+ "version": "16.13.1",
+ "resolved": "https://registry.npmjs.org/react-is/-/react-is-16.13.1.tgz",
+ "integrity": "sha512-24e6ynE2H+OKt4kqsOvNd8kBpV65zoxbA4BVsEOB3ARVWQki/DHzaUoC5KuON/BiccDaCCTZBuOcfZs70kR8bQ=="
+ }
+ }
+ },
"prismjs": {
"version": "1.15.0",
"resolved": "https://registry.npmjs.org/prismjs/-/prismjs-1.15.0.tgz",
@@ -14116,6 +21800,15 @@
"resolved": "https://registry.npmjs.org/promise-inflight/-/promise-inflight-1.0.1.tgz",
"integrity": "sha1-mEcocL8igTL8vdhoEputEsPAKeM="
},
+ "prompts": {
+ "version": "2.3.2",
+ "resolved": "https://registry.npmjs.org/prompts/-/prompts-2.3.2.tgz",
+ "integrity": "sha512-Q06uKs2CkNYVID0VqwfAl9mipo99zkBv/n2JtWY89Yxa3ZabWSrs0e2KTudKVa3peLUvYXMefDqIleLPVUBZMA==",
+ "requires": {
+ "kleur": "^3.0.3",
+ "sisteransi": "^1.0.4"
+ }
+ },
"prop-types": {
"version": "15.7.2",
"resolved": "https://registry.npmjs.org/prop-types/-/prop-types-15.7.2.tgz",
@@ -14126,6 +21819,16 @@
"react-is": "^16.8.1"
}
},
+ "proper-lockfile": {
+ "version": "4.1.1",
+ "resolved": "https://registry.npmjs.org/proper-lockfile/-/proper-lockfile-4.1.1.tgz",
+ "integrity": "sha512-1w6rxXodisVpn7QYvLk706mzprPTAPCYAqxMvctmPN3ekuRk/kuGkGc82pangZiAt4R3lwSuUzheTTn0/Yb7Zg==",
+ "requires": {
+ "graceful-fs": "^4.1.11",
+ "retry": "^0.12.0",
+ "signal-exit": "^3.0.2"
+ }
+ },
"property-information": {
"version": "5.0.1",
"resolved": "https://registry.npmjs.org/property-information/-/property-information-5.0.1.tgz",
@@ -14139,13 +21842,18 @@
"resolved": "https://registry.npmjs.org/proto-list/-/proto-list-1.2.4.tgz",
"integrity": "sha1-IS1b/hMYMGpCD2QCuOJv85ZHqEk="
},
+ "protocols": {
+ "version": "1.4.7",
+ "resolved": "https://registry.npmjs.org/protocols/-/protocols-1.4.7.tgz",
+ "integrity": "sha512-Fx65lf9/YDn3hUX08XUc0J8rSux36rEsyiv21ZGUC1mOyeM3lTRpZLcrm8aAolzS4itwVfm7TAPyxC2E5zd6xg=="
+ },
"proxy-addr": {
- "version": "2.0.4",
- "resolved": "https://registry.npmjs.org/proxy-addr/-/proxy-addr-2.0.4.tgz",
- "integrity": "sha512-5erio2h9jp5CHGwcybmxmVqHmnCBZeewlfJ0pex+UW7Qny7OOZXTtH56TGNyBizkgiOwhJtMKrVzDTeKcySZwA==",
+ "version": "2.0.6",
+ "resolved": "https://registry.npmjs.org/proxy-addr/-/proxy-addr-2.0.6.tgz",
+ "integrity": "sha512-dh/frvCBVmSsDYzw6n926jv974gddhkFPfiN8hPOi30Wax25QZyZEGveluCgliBnqmuM+UJmBErbAUFIoDbjOw==",
"requires": {
"forwarded": "~0.1.2",
- "ipaddr.js": "1.8.0"
+ "ipaddr.js": "1.9.1"
}
},
"prr": {
@@ -14174,6 +21882,13 @@
"parse-asn1": "^5.0.0",
"randombytes": "^2.0.1",
"safe-buffer": "^5.1.2"
+ },
+ "dependencies": {
+ "bn.js": {
+ "version": "4.11.9",
+ "resolved": "https://registry.npmjs.org/bn.js/-/bn.js-4.11.9.tgz",
+ "integrity": "sha512-E6QoYqCKZfgatHTdHzs1RRKP7ip4vvm+EyRUeE2RF0NblwVvb0p6jSVeNTOFxPn26QXN2o6SMfNxKp6kU8zQaw=="
+ }
}
},
"pump": {
@@ -14269,28 +21984,25 @@
}
},
"range-parser": {
- "version": "1.2.0",
- "resolved": "https://registry.npmjs.org/range-parser/-/range-parser-1.2.0.tgz",
- "integrity": "sha1-9JvmtIeJTdxA3MlKMi9hEJLgDV4="
+ "version": "1.2.1",
+ "resolved": "https://registry.npmjs.org/range-parser/-/range-parser-1.2.1.tgz",
+ "integrity": "sha512-Hrgsx+orqoygnmhFbKaHE6c296J+HTAQXoxEF6gNupROmmGJRoyzfG3ccAveqCBrwr/2yxQ5BVd/GTl5agOwSg=="
},
"raw-body": {
- "version": "2.3.3",
- "resolved": "https://registry.npmjs.org/raw-body/-/raw-body-2.3.3.tgz",
- "integrity": "sha512-9esiElv1BrZoI3rCDuOuKCBRbuApGGaDPQfjSflGxdy4oyzqghxu6klEkkVIvBje+FF0BX9coEv8KqW6X/7njw==",
+ "version": "2.4.0",
+ "resolved": "https://registry.npmjs.org/raw-body/-/raw-body-2.4.0.tgz",
+ "integrity": "sha512-4Oz8DUIwdvoa5qMJelxipzi/iJIi40O5cGV1wNYp5hvZP8ZN0T+jiNkL0QepXs+EsQ9XJ8ipEDoiH70ySUJP3Q==",
"requires": {
- "bytes": "3.0.0",
- "http-errors": "1.6.3",
- "iconv-lite": "0.4.23",
+ "bytes": "3.1.0",
+ "http-errors": "1.7.2",
+ "iconv-lite": "0.4.24",
"unpipe": "1.0.0"
},
"dependencies": {
- "iconv-lite": {
- "version": "0.4.23",
- "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.4.23.tgz",
- "integrity": "sha512-neyTUVFtahjf0mB3dZT77u+8O0QB89jFdnBkd5P1JgYPbPaia3gXXOVL2fq8VyU2gMMD7SaN7QukTB/pmXYvDA==",
- "requires": {
- "safer-buffer": ">= 2.1.2 < 3"
- }
+ "bytes": {
+ "version": "3.1.0",
+ "resolved": "https://registry.npmjs.org/bytes/-/bytes-3.1.0.tgz",
+ "integrity": "sha512-zauLjrfCG+xvoyaqLoV8bLVXXNGC4JqlxFCutSDWA6fJrTo2ZuvLYTqZ7aHBLZSMOopbzwv8f+wZcVzfVTI2Dg=="
}
}
},
@@ -14387,7 +22099,7 @@
},
"chalk": {
"version": "1.1.3",
- "resolved": "http://registry.npmjs.org/chalk/-/chalk-1.1.3.tgz",
+ "resolved": "https://registry.npmjs.org/chalk/-/chalk-1.1.3.tgz",
"integrity": "sha1-qBFcVeSnAv5NFQq9OHKCKn4J/Jg=",
"requires": {
"ansi-styles": "^2.2.1",
@@ -14501,7 +22213,7 @@
},
"supports-color": {
"version": "2.0.0",
- "resolved": "http://registry.npmjs.org/supports-color/-/supports-color-2.0.0.tgz",
+ "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-2.0.0.tgz",
"integrity": "sha1-U10EXOa2Nj+kARcIRimZXp3zJMc="
},
"tmp": {
@@ -14542,18 +22254,17 @@
}
},
"react-hot-loader": {
- "version": "4.7.1",
- "resolved": "https://registry.npmjs.org/react-hot-loader/-/react-hot-loader-4.7.1.tgz",
- "integrity": "sha512-OVq9tBndJ+KJWyWbj6kAkJbRVFx3Ykx+XOlndT3zyxAQMBFFygV+AS9RQi6Z2axkPIcEkuE5K6nkpcjlzR8I9w==",
+ "version": "4.12.21",
+ "resolved": "https://registry.npmjs.org/react-hot-loader/-/react-hot-loader-4.12.21.tgz",
+ "integrity": "sha512-Ynxa6ROfWUeKWsTHxsrL2KMzujxJVPjs385lmB2t5cHUxdoRPGind9F00tOkdc1l5WBleOF4XEAMILY1KPIIDA==",
"requires": {
"fast-levenshtein": "^2.0.6",
"global": "^4.3.0",
- "hoist-non-react-statics": "^2.5.0",
+ "hoist-non-react-statics": "^3.3.0",
"loader-utils": "^1.1.0",
- "lodash.merge": "^4.6.1",
"prop-types": "^15.6.1",
"react-lifecycles-compat": "^3.0.4",
- "shallowequal": "^1.0.2",
+ "shallowequal": "^1.1.0",
"source-map": "^0.7.3"
},
"dependencies": {
@@ -14583,6 +22294,28 @@
"resolved": "https://registry.npmjs.org/react-lifecycles-compat/-/react-lifecycles-compat-3.0.4.tgz",
"integrity": "sha512-fBASbA6LnOU9dOU2eW7aQ8xmYBSXUIWr+UmF9b1efZBazGNO+rcXT/icdKnYm2pTwcRylVUYwW7H1PHfLekVzA=="
},
+ "react-reconciler": {
+ "version": "0.25.1",
+ "resolved": "https://registry.npmjs.org/react-reconciler/-/react-reconciler-0.25.1.tgz",
+ "integrity": "sha512-R5UwsIvRcSs3w8n9k3tBoTtUHdVhu9u84EG7E5M0Jk9F5i6DA1pQzPfUZd6opYWGy56MJOtV3VADzy6DRwYDjw==",
+ "requires": {
+ "loose-envify": "^1.1.0",
+ "object-assign": "^4.1.1",
+ "prop-types": "^15.6.2",
+ "scheduler": "^0.19.1"
+ },
+ "dependencies": {
+ "scheduler": {
+ "version": "0.19.1",
+ "resolved": "https://registry.npmjs.org/scheduler/-/scheduler-0.19.1.tgz",
+ "integrity": "sha512-n/zwRWRYSUj0/3g/otKDRPMh6qv2SYMWNq85IEa8iZyAv8od9zDYpGSnpBEjNgcMNq6Scbu5KfIPxNF72R/2EA==",
+ "requires": {
+ "loose-envify": "^1.1.0",
+ "object-assign": "^4.1.1"
+ }
+ }
+ }
+ },
"react-side-effect": {
"version": "1.1.5",
"resolved": "https://registry.npmjs.org/react-side-effect/-/react-side-effect-1.1.5.tgz",
@@ -14676,14 +22409,19 @@
}
},
"redux": {
- "version": "4.0.1",
- "resolved": "https://registry.npmjs.org/redux/-/redux-4.0.1.tgz",
- "integrity": "sha512-R7bAtSkk7nY6O/OYMVR9RiBI+XghjF9rlbl5806HJbQph0LJVHZrU5oaO4q70eUKiqMRqm4y07KLTlMZ2BlVmg==",
+ "version": "4.0.5",
+ "resolved": "https://registry.npmjs.org/redux/-/redux-4.0.5.tgz",
+ "integrity": "sha512-VSz1uMAH24DM6MF72vcojpYPtrTUu3ByVWfPL1nPfVRb5mZVTve5GnNCUV53QM/BZ66xfWrm0CTWoM+Xlz8V1w==",
"requires": {
"loose-envify": "^1.4.0",
"symbol-observable": "^1.2.0"
}
},
+ "redux-thunk": {
+ "version": "2.3.0",
+ "resolved": "https://registry.npmjs.org/redux-thunk/-/redux-thunk-2.3.0.tgz",
+ "integrity": "sha512-km6dclyFnmcvxhAcrQV2AkZmPQjzPDjgVlQtR0EQjxZPyJ0BnMf3in1ryuR8A2qU0HldVRfxYXbFSKlI3N7Slw=="
+ },
"regenerate": {
"version": "1.4.0",
"resolved": "https://registry.npmjs.org/regenerate/-/regenerate-1.4.0.tgz",
@@ -14719,6 +22457,68 @@
"safe-regex": "^1.1.0"
}
},
+ "regexp.prototype.flags": {
+ "version": "1.3.0",
+ "resolved": "https://registry.npmjs.org/regexp.prototype.flags/-/regexp.prototype.flags-1.3.0.tgz",
+ "integrity": "sha512-2+Q0C5g951OlYlJz6yu5/M33IcsESLlLfsyIaLJaG4FA2r4yP8MvVMJUUP/fVBkSpbbbZlS5gynbEWLipiiXiQ==",
+ "requires": {
+ "define-properties": "^1.1.3",
+ "es-abstract": "^1.17.0-next.1"
+ },
+ "dependencies": {
+ "es-abstract": {
+ "version": "1.17.6",
+ "resolved": "https://registry.npmjs.org/es-abstract/-/es-abstract-1.17.6.tgz",
+ "integrity": "sha512-Fr89bON3WFyUi5EvAeI48QTWX0AyekGgLA8H+c+7fbfCkJwRWRMLd8CQedNEyJuoYYhmtEqY92pgte1FAhBlhw==",
+ "requires": {
+ "es-to-primitive": "^1.2.1",
+ "function-bind": "^1.1.1",
+ "has": "^1.0.3",
+ "has-symbols": "^1.0.1",
+ "is-callable": "^1.2.0",
+ "is-regex": "^1.1.0",
+ "object-inspect": "^1.7.0",
+ "object-keys": "^1.1.1",
+ "object.assign": "^4.1.0",
+ "string.prototype.trimend": "^1.0.1",
+ "string.prototype.trimstart": "^1.0.1"
+ }
+ },
+ "es-to-primitive": {
+ "version": "1.2.1",
+ "resolved": "https://registry.npmjs.org/es-to-primitive/-/es-to-primitive-1.2.1.tgz",
+ "integrity": "sha512-QCOllgZJtaUo9miYBcLChTUaHNjJF3PYs1VidD7AwiEj1kYxKeQTctLAezAOH5ZKRH0g2IgPn6KwB4IT8iRpvA==",
+ "requires": {
+ "is-callable": "^1.1.4",
+ "is-date-object": "^1.0.1",
+ "is-symbol": "^1.0.2"
+ }
+ },
+ "has-symbols": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.0.1.tgz",
+ "integrity": "sha512-PLcsoqu++dmEIZB+6totNFKq/7Do+Z0u4oT0zKOJNl3lYK6vGwwu2hjHs+68OEZbTjiUE9bgOABXbP/GvrS0Kg=="
+ },
+ "is-callable": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/is-callable/-/is-callable-1.2.0.tgz",
+ "integrity": "sha512-pyVD9AaGLxtg6srb2Ng6ynWJqkHU9bEM087AKck0w8QwDarTfNcpIYoU8x8Hv2Icm8u6kFJM18Dag8lyqGkviw=="
+ },
+ "is-regex": {
+ "version": "1.1.0",
+ "resolved": "https://registry.npmjs.org/is-regex/-/is-regex-1.1.0.tgz",
+ "integrity": "sha512-iI97M8KTWID2la5uYXlkbSDQIg4F6o1sYboZKKTDpnDQMLtUL86zxhgDet3Q2SriaYsyGqZ6Mn2SjbRKeLHdqw==",
+ "requires": {
+ "has-symbols": "^1.0.1"
+ }
+ },
+ "object-keys": {
+ "version": "1.1.1",
+ "resolved": "https://registry.npmjs.org/object-keys/-/object-keys-1.1.1.tgz",
+ "integrity": "sha512-NuAESUOUMrlIXOfHKzD6bpPu3tYt3xvjNdRIQ+FeT0lNb4K8WR70CaDxhuNguS2XG+GjkyMwOzsN5ZktImfhLA=="
+ }
+ }
+ },
"regexpp": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/regexpp/-/regexpp-2.0.1.tgz",
@@ -14738,20 +22538,19 @@
}
},
"registry-auth-token": {
- "version": "3.3.2",
- "resolved": "https://registry.npmjs.org/registry-auth-token/-/registry-auth-token-3.3.2.tgz",
- "integrity": "sha512-JL39c60XlzCVgNrO+qq68FoNb56w/m7JYvGR2jT5iR1xBrUA3Mfx5Twk5rqTThPmQKMWydGmq8oFtDlxfrmxnQ==",
+ "version": "4.1.1",
+ "resolved": "https://registry.npmjs.org/registry-auth-token/-/registry-auth-token-4.1.1.tgz",
+ "integrity": "sha512-9bKS7nTl9+/A1s7tnPeGrUpRcVY+LUh7bfFgzpndALdPfXQBfQV77rQVtqgUV3ti4vc/Ik81Ex8UJDWDQ12zQA==",
"requires": {
- "rc": "^1.1.6",
- "safe-buffer": "^5.0.1"
+ "rc": "^1.2.8"
}
},
"registry-url": {
- "version": "3.1.0",
- "resolved": "https://registry.npmjs.org/registry-url/-/registry-url-3.1.0.tgz",
- "integrity": "sha1-PU74cPc93h138M+aOBQyRE4XSUI=",
+ "version": "5.1.0",
+ "resolved": "https://registry.npmjs.org/registry-url/-/registry-url-5.1.0.tgz",
+ "integrity": "sha512-8acYXXTI0AkQv6RAOjE3vOaIXZkT9wo4LOFbBKYQEEnnMNBpKqdUrI6S4NT0KPIo/WVvJ5tE/X5LF/TQUf0ekw==",
"requires": {
- "rc": "^1.0.1"
+ "rc": "^1.2.8"
}
},
"regjsgen": {
@@ -14875,6 +22674,426 @@
}
}
},
+ "remark-footnotes": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/remark-footnotes/-/remark-footnotes-1.0.0.tgz",
+ "integrity": "sha512-X9Ncj4cj3/CIvLI2Z9IobHtVi8FVdUrdJkCNaL9kdX8ohfsi18DXHsCVd/A7ssARBdccdDb5ODnt62WuEWaM/g=="
+ },
+ "remark-mdx": {
+ "version": "1.6.5",
+ "resolved": "https://registry.npmjs.org/remark-mdx/-/remark-mdx-1.6.5.tgz",
+ "integrity": "sha512-zItwP3xcVQAEPJTHseFh+KZEyJ31+pbVJMOMzognqTuZ2zfzIR4Xrg0BAx6eo+paV4fHne/5vi2ugWtCeOaBRA==",
+ "requires": {
+ "@babel/core": "7.9.6",
+ "@babel/helper-plugin-utils": "7.8.3",
+ "@babel/plugin-proposal-object-rest-spread": "7.9.6",
+ "@babel/plugin-syntax-jsx": "7.8.3",
+ "@mdx-js/util": "^1.6.5",
+ "is-alphabetical": "1.0.4",
+ "remark-parse": "8.0.2",
+ "unified": "9.0.0"
+ },
+ "dependencies": {
+ "@babel/code-frame": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.10.1.tgz",
+ "integrity": "sha512-IGhtTmpjGbYzcEDOw7DcQtbQSXcG9ftmAXtWTu9V936vDye4xjjekktFAtgZsWpzTj/X01jocB46mTywm/4SZw==",
+ "requires": {
+ "@babel/highlight": "^7.10.1"
+ }
+ },
+ "@babel/core": {
+ "version": "7.9.6",
+ "resolved": "https://registry.npmjs.org/@babel/core/-/core-7.9.6.tgz",
+ "integrity": "sha512-nD3deLvbsApbHAHttzIssYqgb883yU/d9roe4RZymBCDaZryMJDbptVpEpeQuRh4BJ+SYI8le9YGxKvFEvl1Wg==",
+ "requires": {
+ "@babel/code-frame": "^7.8.3",
+ "@babel/generator": "^7.9.6",
+ "@babel/helper-module-transforms": "^7.9.0",
+ "@babel/helpers": "^7.9.6",
+ "@babel/parser": "^7.9.6",
+ "@babel/template": "^7.8.6",
+ "@babel/traverse": "^7.9.6",
+ "@babel/types": "^7.9.6",
+ "convert-source-map": "^1.7.0",
+ "debug": "^4.1.0",
+ "gensync": "^1.0.0-beta.1",
+ "json5": "^2.1.2",
+ "lodash": "^4.17.13",
+ "resolve": "^1.3.2",
+ "semver": "^5.4.1",
+ "source-map": "^0.5.0"
+ }
+ },
+ "@babel/generator": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.10.2.tgz",
+ "integrity": "sha512-AxfBNHNu99DTMvlUPlt1h2+Hn7knPpH5ayJ8OqDWSeLld+Fi2AYBTC/IejWDM9Edcii4UzZRCsbUt0WlSDsDsA==",
+ "requires": {
+ "@babel/types": "^7.10.2",
+ "jsesc": "^2.5.1",
+ "lodash": "^4.17.13",
+ "source-map": "^0.5.0"
+ }
+ },
+ "@babel/helper-function-name": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-function-name/-/helper-function-name-7.10.1.tgz",
+ "integrity": "sha512-fcpumwhs3YyZ/ttd5Rz0xn0TpIwVkN7X0V38B9TWNfVF42KEkhkAAuPCQ3oXmtTRtiPJrmZ0TrfS0GKF0eMaRQ==",
+ "requires": {
+ "@babel/helper-get-function-arity": "^7.10.1",
+ "@babel/template": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-get-function-arity": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-get-function-arity/-/helper-get-function-arity-7.10.1.tgz",
+ "integrity": "sha512-F5qdXkYGOQUb0hpRaPoetF9AnsXknKjWMZ+wmsIRsp5ge5sFh4c3h1eH2pRTTuy9KKAA2+TTYomGXAtEL2fQEw==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-member-expression-to-functions": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-member-expression-to-functions/-/helper-member-expression-to-functions-7.10.1.tgz",
+ "integrity": "sha512-u7XLXeM2n50gb6PWJ9hoO5oO7JFPaZtrh35t8RqKLT1jFKj9IWeD1zrcrYp1q1qiZTdEarfDWfTIP8nGsu0h5g==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-module-imports": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-module-imports/-/helper-module-imports-7.10.1.tgz",
+ "integrity": "sha512-SFxgwYmZ3HZPyZwJRiVNLRHWuW2OgE5k2nrVs6D9Iv4PPnXVffuEHy83Sfx/l4SqF+5kyJXjAyUmrG7tNm+qVg==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-module-transforms": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-module-transforms/-/helper-module-transforms-7.10.1.tgz",
+ "integrity": "sha512-RLHRCAzyJe7Q7sF4oy2cB+kRnU4wDZY/H2xJFGof+M+SJEGhZsb+GFj5j1AD8NiSaVBJ+Pf0/WObiXu/zxWpFg==",
+ "requires": {
+ "@babel/helper-module-imports": "^7.10.1",
+ "@babel/helper-replace-supers": "^7.10.1",
+ "@babel/helper-simple-access": "^7.10.1",
+ "@babel/helper-split-export-declaration": "^7.10.1",
+ "@babel/template": "^7.10.1",
+ "@babel/types": "^7.10.1",
+ "lodash": "^4.17.13"
+ }
+ },
+ "@babel/helper-optimise-call-expression": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-optimise-call-expression/-/helper-optimise-call-expression-7.10.1.tgz",
+ "integrity": "sha512-a0DjNS1prnBsoKx83dP2falChcs7p3i8VMzdrSbfLhuQra/2ENC4sbri34dz/rWmDADsmF1q5GbfaXydh0Jbjg==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-plugin-utils": {
+ "version": "7.8.3",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.8.3.tgz",
+ "integrity": "sha512-j+fq49Xds2smCUNYmEHF9kGNkhbet6yVIBp4e6oeQpH1RUs/Ir06xUKzDjDkGcaaokPiTNs2JBWHjaE4csUkZQ=="
+ },
+ "@babel/helper-replace-supers": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-replace-supers/-/helper-replace-supers-7.10.1.tgz",
+ "integrity": "sha512-SOwJzEfpuQwInzzQJGjGaiG578UYmyi2Xw668klPWV5n07B73S0a9btjLk/52Mlcxa+5AdIYqws1KyXRfMoB7A==",
+ "requires": {
+ "@babel/helper-member-expression-to-functions": "^7.10.1",
+ "@babel/helper-optimise-call-expression": "^7.10.1",
+ "@babel/traverse": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-simple-access": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-simple-access/-/helper-simple-access-7.10.1.tgz",
+ "integrity": "sha512-VSWpWzRzn9VtgMJBIWTZ+GP107kZdQ4YplJlCmIrjoLVSi/0upixezHCDG8kpPVTBJpKfxTH01wDhh+jS2zKbw==",
+ "requires": {
+ "@babel/template": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helper-split-export-declaration": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-split-export-declaration/-/helper-split-export-declaration-7.10.1.tgz",
+ "integrity": "sha512-UQ1LVBPrYdbchNhLwj6fetj46BcFwfS4NllJo/1aJsT+1dLTEnXJL0qHqtY7gPzF8S2fXBJamf1biAXV3X077g==",
+ "requires": {
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/helpers": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helpers/-/helpers-7.10.1.tgz",
+ "integrity": "sha512-muQNHF+IdU6wGgkaJyhhEmI54MOZBKsFfsXFhboz1ybwJ1Kl7IHlbm2a++4jwrmY5UYsgitt5lfqo1wMFcHmyw==",
+ "requires": {
+ "@babel/template": "^7.10.1",
+ "@babel/traverse": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/highlight": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/highlight/-/highlight-7.10.1.tgz",
+ "integrity": "sha512-8rMof+gVP8mxYZApLF/JgNDAkdKa+aJt3ZYxF8z6+j/hpeXL7iMsKCPHa2jNMHu/qqBwzQF4OHNoYi8dMA/rYg==",
+ "requires": {
+ "@babel/helper-validator-identifier": "^7.10.1",
+ "chalk": "^2.0.0",
+ "js-tokens": "^4.0.0"
+ }
+ },
+ "@babel/parser": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.10.2.tgz",
+ "integrity": "sha512-PApSXlNMJyB4JiGVhCOlzKIif+TKFTvu0aQAhnTvfP/z3vVSN6ZypH5bfUNwFXXjRQtUEBNFd2PtmCmG2Py3qQ=="
+ },
+ "@babel/plugin-proposal-object-rest-spread": {
+ "version": "7.9.6",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-proposal-object-rest-spread/-/plugin-proposal-object-rest-spread-7.9.6.tgz",
+ "integrity": "sha512-Ga6/fhGqA9Hj+y6whNpPv8psyaK5xzrQwSPsGPloVkvmH+PqW1ixdnfJ9uIO06OjQNYol3PMnfmJ8vfZtkzF+A==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.8.3",
+ "@babel/plugin-syntax-object-rest-spread": "^7.8.0",
+ "@babel/plugin-transform-parameters": "^7.9.5"
+ }
+ },
+ "@babel/plugin-syntax-jsx": {
+ "version": "7.8.3",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-jsx/-/plugin-syntax-jsx-7.8.3.tgz",
+ "integrity": "sha512-WxdW9xyLgBdefoo0Ynn3MRSkhe5tFVxxKNVdnZSh318WrG2e2jH+E9wd/++JsqcLJZPfz87njQJ8j2Upjm0M0A==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.8.3"
+ }
+ },
+ "@babel/plugin-syntax-object-rest-spread": {
+ "version": "7.8.3",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-object-rest-spread/-/plugin-syntax-object-rest-spread-7.8.3.tgz",
+ "integrity": "sha512-XoqMijGZb9y3y2XskN+P1wUGiVwWZ5JmoDRwx5+3GmEplNyVM2s2Dg8ILFQm8rWM48orGy5YpI5Bl8U1y7ydlA==",
+ "requires": {
+ "@babel/helper-plugin-utils": "^7.8.0"
+ }
+ },
+ "@babel/plugin-transform-parameters": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/plugin-transform-parameters/-/plugin-transform-parameters-7.10.1.tgz",
+ "integrity": "sha512-tJ1T0n6g4dXMsL45YsSzzSDZCxiHXAQp/qHrucOq5gEHncTA3xDxnd5+sZcoQp+N1ZbieAaB8r/VUCG0gqseOg==",
+ "requires": {
+ "@babel/helper-get-function-arity": "^7.10.1",
+ "@babel/helper-plugin-utils": "^7.10.1"
+ },
+ "dependencies": {
+ "@babel/helper-plugin-utils": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.1.tgz",
+ "integrity": "sha512-fvoGeXt0bJc7VMWZGCAEBEMo/HAjW2mP8apF5eXK0wSqwLAVHAISCWRoLMBMUs2kqeaG77jltVqu4Hn8Egl3nA=="
+ }
+ }
+ },
+ "@babel/template": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/template/-/template-7.10.1.tgz",
+ "integrity": "sha512-OQDg6SqvFSsc9A0ej6SKINWrpJiNonRIniYondK2ViKhB06i3c0s+76XUft71iqBEe9S1OKsHwPAjfHnuvnCig==",
+ "requires": {
+ "@babel/code-frame": "^7.10.1",
+ "@babel/parser": "^7.10.1",
+ "@babel/types": "^7.10.1"
+ }
+ },
+ "@babel/traverse": {
+ "version": "7.10.1",
+ "resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.10.1.tgz",
+ "integrity": "sha512-C/cTuXeKt85K+p08jN6vMDz8vSV0vZcI0wmQ36o6mjbuo++kPMdpOYw23W2XH04dbRt9/nMEfA4W3eR21CD+TQ==",
+ "requires": {
+ "@babel/code-frame": "^7.10.1",
+ "@babel/generator": "^7.10.1",
+ "@babel/helper-function-name": "^7.10.1",
+ "@babel/helper-split-export-declaration": "^7.10.1",
+ "@babel/parser": "^7.10.1",
+ "@babel/types": "^7.10.1",
+ "debug": "^4.1.0",
+ "globals": "^11.1.0",
+ "lodash": "^4.17.13"
+ }
+ },
+ "@babel/types": {
+ "version": "7.10.2",
+ "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.10.2.tgz",
+ "integrity": "sha512-AD3AwWBSz0AWF0AkCN9VPiWrvldXq+/e3cHa4J89vo4ymjz1XwrBFFVZmkJTsQIPNk+ZVomPSXUJqq8yyjZsng==",
+ "requires": {
+ "@babel/helper-validator-identifier": "^7.10.1",
+ "lodash": "^4.17.13",
+ "to-fast-properties": "^2.0.0"
+ }
+ },
+ "convert-source-map": {
+ "version": "1.7.0",
+ "resolved": "https://registry.npmjs.org/convert-source-map/-/convert-source-map-1.7.0.tgz",
+ "integrity": "sha512-4FJkXzKXEDB1snCFZlLP4gpC3JILicCpGbzG9f9G7tGqGCzETQ2hWPrcinA9oU4wtf2biUaEH5065UnMeR33oA==",
+ "requires": {
+ "safe-buffer": "~5.1.1"
+ }
+ },
+ "debug": {
+ "version": "4.1.1",
+ "resolved": "https://registry.npmjs.org/debug/-/debug-4.1.1.tgz",
+ "integrity": "sha512-pYAIzeRo8J6KPEaJ0VWOh5Pzkbw/RetuzehGM7QRRX5he4fPHx2rdKMB256ehJCkX+XRQm16eZLqLNS8RSZXZw==",
+ "requires": {
+ "ms": "^2.1.1"
+ }
+ },
+ "is-alphabetical": {
+ "version": "1.0.4",
+ "resolved": "https://registry.npmjs.org/is-alphabetical/-/is-alphabetical-1.0.4.tgz",
+ "integrity": "sha512-DwzsA04LQ10FHTZuL0/grVDk4rFoVH1pjAToYwBrHSxcrBIGQuXrQMtD5U1b0U2XVgKZCTLLP8u2Qxqhy3l2Vg=="
+ },
+ "is-buffer": {
+ "version": "2.0.4",
+ "resolved": "https://registry.npmjs.org/is-buffer/-/is-buffer-2.0.4.tgz",
+ "integrity": "sha512-Kq1rokWXOPXWuaMAqZiJW4XxsmD9zGx9q4aePabbn3qCRGedtH7Cm+zV8WETitMfu1wdh+Rvd6w5egwSngUX2A=="
+ },
+ "is-plain-obj": {
+ "version": "2.1.0",
+ "resolved": "https://registry.npmjs.org/is-plain-obj/-/is-plain-obj-2.1.0.tgz",
+ "integrity": "sha512-YWnfyRwxL/+SsrWYfOpUtz5b3YD+nyfkHvjbcanzk8zgyO4ASD67uVMRt8k5bM4lLMDnXfriRhOpemw+NfT1eA=="
+ },
+ "json5": {
+ "version": "2.1.3",
+ "resolved": "https://registry.npmjs.org/json5/-/json5-2.1.3.tgz",
+ "integrity": "sha512-KXPvOm8K9IJKFM0bmdn8QXh7udDh1g/giieX0NLCaMnb4hEiVFqnop2ImTXCc5e0/oHz3LTqmHGtExn5hfMkOA==",
+ "requires": {
+ "minimist": "^1.2.5"
+ }
+ },
+ "lodash": {
+ "version": "4.17.15",
+ "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz",
+ "integrity": "sha512-8xOcRHvCjnocdS5cpwXQXVzmmh5e5+saE2QGoeQmbKmRS6J3VQppPOIt0MnmE+4xlZoumy0GPG0D0MVIQbNA1A=="
+ },
+ "minimist": {
+ "version": "1.2.5",
+ "resolved": "https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz",
+ "integrity": "sha512-FM9nNUYrRBAELZQT3xeZQ7fmMOBg6nWNmJKTcgsJeaLstP/UODVpGsr5OhXhhXg6f+qtJ8uiZ+PUxkDWcgIXLw=="
+ },
+ "parse-entities": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/parse-entities/-/parse-entities-2.0.0.tgz",
+ "integrity": "sha512-kkywGpCcRYhqQIchaWqZ875wzpS/bMKhz5HnN3p7wveJTkTtyAB/AlnS0f8DFSqYW1T82t6yEAkEcB+A1I3MbQ==",
+ "requires": {
+ "character-entities": "^1.0.0",
+ "character-entities-legacy": "^1.0.0",
+ "character-reference-invalid": "^1.0.0",
+ "is-alphanumerical": "^1.0.0",
+ "is-decimal": "^1.0.0",
+ "is-hexadecimal": "^1.0.0"
+ }
+ },
+ "remark-parse": {
+ "version": "8.0.2",
+ "resolved": "https://registry.npmjs.org/remark-parse/-/remark-parse-8.0.2.tgz",
+ "integrity": "sha512-eMI6kMRjsAGpMXXBAywJwiwAse+KNpmt+BK55Oofy4KvBZEqUDj6mWbGLJZrujoPIPPxDXzn3T9baRlpsm2jnQ==",
+ "requires": {
+ "ccount": "^1.0.0",
+ "collapse-white-space": "^1.0.2",
+ "is-alphabetical": "^1.0.0",
+ "is-decimal": "^1.0.0",
+ "is-whitespace-character": "^1.0.0",
+ "is-word-character": "^1.0.0",
+ "markdown-escapes": "^1.0.0",
+ "parse-entities": "^2.0.0",
+ "repeat-string": "^1.5.4",
+ "state-toggle": "^1.0.0",
+ "trim": "0.0.1",
+ "trim-trailing-lines": "^1.0.0",
+ "unherit": "^1.0.4",
+ "unist-util-remove-position": "^2.0.0",
+ "vfile-location": "^3.0.0",
+ "xtend": "^4.0.1"
+ }
+ },
+ "unified": {
+ "version": "9.0.0",
+ "resolved": "https://registry.npmjs.org/unified/-/unified-9.0.0.tgz",
+ "integrity": "sha512-ssFo33gljU3PdlWLjNp15Inqb77d6JnJSfyplGJPT/a+fNRNyCBeveBAYJdO5khKdF6WVHa/yYCC7Xl6BDwZUQ==",
+ "requires": {
+ "bail": "^1.0.0",
+ "extend": "^3.0.0",
+ "is-buffer": "^2.0.0",
+ "is-plain-obj": "^2.0.0",
+ "trough": "^1.0.0",
+ "vfile": "^4.0.0"
+ }
+ },
+ "unist-util-is": {
+ "version": "4.0.2",
+ "resolved": "https://registry.npmjs.org/unist-util-is/-/unist-util-is-4.0.2.tgz",
+ "integrity": "sha512-Ofx8uf6haexJwI1gxWMGg6I/dLnF2yE+KibhD3/diOqY2TinLcqHXCV6OI5gFVn3xQqDH+u0M625pfKwIwgBKQ=="
+ },
+ "unist-util-remove-position": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/unist-util-remove-position/-/unist-util-remove-position-2.0.1.tgz",
+ "integrity": "sha512-fDZsLYIe2uT+oGFnuZmy73K6ZxOPG/Qcm+w7jbEjaFcJgbQ6cqjs/eSPzXhsmGpAsWPkqZM9pYjww5QTn3LHMA==",
+ "requires": {
+ "unist-util-visit": "^2.0.0"
+ }
+ },
+ "unist-util-stringify-position": {
+ "version": "2.0.3",
+ "resolved": "https://registry.npmjs.org/unist-util-stringify-position/-/unist-util-stringify-position-2.0.3.tgz",
+ "integrity": "sha512-3faScn5I+hy9VleOq/qNbAd6pAx7iH5jYBMS9I1HgQVijz/4mv5Bvw5iw1sC/90CODiKo81G/ps8AJrISn687g==",
+ "requires": {
+ "@types/unist": "^2.0.2"
+ }
+ },
+ "unist-util-visit": {
+ "version": "2.0.2",
+ "resolved": "https://registry.npmjs.org/unist-util-visit/-/unist-util-visit-2.0.2.tgz",
+ "integrity": "sha512-HoHNhGnKj6y+Sq+7ASo2zpVdfdRifhTgX2KTU3B/sO/TTlZchp7E3S4vjRzDJ7L60KmrCPsQkVK3lEF3cz36XQ==",
+ "requires": {
+ "@types/unist": "^2.0.0",
+ "unist-util-is": "^4.0.0",
+ "unist-util-visit-parents": "^3.0.0"
+ }
+ },
+ "unist-util-visit-parents": {
+ "version": "3.0.2",
+ "resolved": "https://registry.npmjs.org/unist-util-visit-parents/-/unist-util-visit-parents-3.0.2.tgz",
+ "integrity": "sha512-yJEfuZtzFpQmg1OSCyS9M5NJRrln/9FbYosH3iW0MG402QbdbaB8ZESwUv9RO6nRfLAKvWcMxCwdLWOov36x/g==",
+ "requires": {
+ "@types/unist": "^2.0.0",
+ "unist-util-is": "^4.0.0"
+ }
+ },
+ "vfile": {
+ "version": "4.1.1",
+ "resolved": "https://registry.npmjs.org/vfile/-/vfile-4.1.1.tgz",
+ "integrity": "sha512-lRjkpyDGjVlBA7cDQhQ+gNcvB1BGaTHYuSOcY3S7OhDmBtnzX95FhtZZDecSTDm6aajFymyve6S5DN4ZHGezdQ==",
+ "requires": {
+ "@types/unist": "^2.0.0",
+ "is-buffer": "^2.0.0",
+ "replace-ext": "1.0.0",
+ "unist-util-stringify-position": "^2.0.0",
+ "vfile-message": "^2.0.0"
+ }
+ },
+ "vfile-location": {
+ "version": "3.0.1",
+ "resolved": "https://registry.npmjs.org/vfile-location/-/vfile-location-3.0.1.tgz",
+ "integrity": "sha512-yYBO06eeN/Ki6Kh1QAkgzYpWT1d3Qln+ZCtSbJqFExPl1S3y2qqotJQXoh6qEvl/jDlgpUJolBn3PItVnnZRqQ=="
+ },
+ "vfile-message": {
+ "version": "2.0.4",
+ "resolved": "https://registry.npmjs.org/vfile-message/-/vfile-message-2.0.4.tgz",
+ "integrity": "sha512-DjssxRGkMvifUOJre00juHoP9DPWuzjxKuMDrhNbk2TdaYYBNMStsNhEOt3idrtI12VQYM/1+iM0KOzXi4pxwQ==",
+ "requires": {
+ "@types/unist": "^2.0.0",
+ "unist-util-stringify-position": "^2.0.0"
+ }
+ }
+ }
+ },
"remark-parse": {
"version": "6.0.3",
"resolved": "https://registry.npmjs.org/remark-parse/-/remark-parse-6.0.3.tgz",
@@ -15032,11 +23251,6 @@
"resolved": "https://registry.npmjs.org/require-directory/-/require-directory-2.1.1.tgz",
"integrity": "sha1-jGStX9MNqxyXbiNE/+f3kqam30I="
},
- "require-from-string": {
- "version": "2.0.2",
- "resolved": "https://registry.npmjs.org/require-from-string/-/require-from-string-2.0.2.tgz",
- "integrity": "sha512-Xf0nWe6RseziFMu+Ap9biiUbmplq6S9/p+7w7YXP/JBHhrUDDUhwa+vANyubuqfZWTveU//DYVGsDG7RKL/vEw=="
- },
"require-like": {
"version": "0.1.2",
"resolved": "https://registry.npmjs.org/require-like/-/require-like-0.1.2.tgz",
@@ -15061,11 +23275,18 @@
}
},
"resolve-cwd": {
- "version": "2.0.0",
- "resolved": "https://registry.npmjs.org/resolve-cwd/-/resolve-cwd-2.0.0.tgz",
- "integrity": "sha1-AKn3OHVW4nA46uIyyqNypqWbZlo=",
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/resolve-cwd/-/resolve-cwd-3.0.0.tgz",
+ "integrity": "sha512-OrZaX2Mb+rJCpH/6CpSqt9xFVpN++x01XnN2ie9g6P5/3xelLAkXWVADpdz1IHD/KFfEXyE6V0U01OQ3UO2rEg==",
"requires": {
- "resolve-from": "^3.0.0"
+ "resolve-from": "^5.0.0"
+ },
+ "dependencies": {
+ "resolve-from": {
+ "version": "5.0.0",
+ "resolved": "https://registry.npmjs.org/resolve-from/-/resolve-from-5.0.0.tgz",
+ "integrity": "sha512-qYg9KP24dD5qka9J47d0aVky0N+b4fTU89LN9iDnjB5waksiC49rvMB0PrUJQGoTmH50XPiqOvAjDfaijGxYZw=="
+ }
}
},
"resolve-dir": {
@@ -15180,6 +23401,16 @@
"nlcst-to-string": "^2.0.0"
}
},
+ "retry": {
+ "version": "0.12.0",
+ "resolved": "https://registry.npmjs.org/retry/-/retry-0.12.0.tgz",
+ "integrity": "sha1-G0KmJmoh8HQh0bC1S33BZ7AcATs="
+ },
+ "reusify": {
+ "version": "1.0.4",
+ "resolved": "https://registry.npmjs.org/reusify/-/reusify-1.0.4.tgz",
+ "integrity": "sha512-U9nH88a3fc/ekCF1l0/UP1IosiuIjyTh7hBvXVMHYgVcfGvt897Xguj2UOLDeI5BG2m7/uwyaLVT6fbtCwTyzw=="
+ },
"rgb-regex": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/rgb-regex/-/rgb-regex-1.0.1.tgz",
@@ -15187,7 +23418,7 @@
},
"rgba-regex": {
"version": "1.0.0",
- "resolved": "http://registry.npmjs.org/rgba-regex/-/rgba-regex-1.0.0.tgz",
+ "resolved": "https://registry.npmjs.org/rgba-regex/-/rgba-regex-1.0.0.tgz",
"integrity": "sha1-QzdOLiyglosO8VI0YLfXMP8i7rM="
},
"rimraf": {
@@ -15208,12 +23439,14 @@
}
},
"run-async": {
- "version": "2.3.0",
- "resolved": "https://registry.npmjs.org/run-async/-/run-async-2.3.0.tgz",
- "integrity": "sha1-A3GrSuC91yDUFm19/aZP96RFpsA=",
- "requires": {
- "is-promise": "^2.1.0"
- }
+ "version": "2.4.1",
+ "resolved": "https://registry.npmjs.org/run-async/-/run-async-2.4.1.tgz",
+ "integrity": "sha512-tvVnVv01b8c1RrA6Ep7JkStj85Guv/YrMcwqYQnwjsAS2cTmmPGBBjAjpCW7RrSodNSoE2/qg9O4bceNvUuDgQ=="
+ },
+ "run-parallel": {
+ "version": "1.1.9",
+ "resolved": "https://registry.npmjs.org/run-parallel/-/run-parallel-1.1.9.tgz",
+ "integrity": "sha512-DEqnSRTDw/Tc3FXf49zedI638Z9onwUotBMiUFKmrO2sdFKIbXamXGQ3Axd4qgphxKB4kw/qP1w5kTxnfU1B9Q=="
},
"run-queue": {
"version": "1.0.3",
@@ -15242,9 +23475,9 @@
}
},
"rxjs": {
- "version": "6.4.0",
- "resolved": "https://registry.npmjs.org/rxjs/-/rxjs-6.4.0.tgz",
- "integrity": "sha512-Z9Yfa11F6B9Sg/BK9MnqnQ+aQYicPLtilXBp2yUtDt2JRCE0h26d33EnfO3ZxoNxG0T92OUucP3Ct7cpfkdFfw==",
+ "version": "6.5.5",
+ "resolved": "https://registry.npmjs.org/rxjs/-/rxjs-6.5.5.tgz",
+ "integrity": "sha512-WfQI+1gohdf0Dai/Bbmk5L5ItH5tYqm3ki2c5GdWhKjalzjg93N3avFjVStyZZz+A2Em+ZxKH5bNghw9UeylGQ==",
"requires": {
"tslib": "^1.9.0"
}
@@ -15556,21 +23789,45 @@
}
},
"schema-utils": {
- "version": "0.4.7",
- "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-0.4.7.tgz",
- "integrity": "sha512-v/iwU6wvwGK8HbU9yi3/nhGzP0yGSuhQMzL6ySiec1FSrZZDkhm4noOSWzrNFo/jEc+SJY6jRTwuwbSXJPDUnQ==",
+ "version": "2.7.0",
+ "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-2.7.0.tgz",
+ "integrity": "sha512-0ilKFI6QQF5nxDZLFn2dMjvc4hjg/Wkg7rHd3jK6/A4a1Hl9VFdQWvgB1UMGoU94pad1P/8N7fMcEnLnSiju8A==",
"requires": {
- "ajv": "^6.1.0",
- "ajv-keywords": "^3.1.0"
+ "@types/json-schema": "^7.0.4",
+ "ajv": "^6.12.2",
+ "ajv-keywords": "^3.4.1"
+ },
+ "dependencies": {
+ "ajv": {
+ "version": "6.12.2",
+ "resolved": "https://registry.npmjs.org/ajv/-/ajv-6.12.2.tgz",
+ "integrity": "sha512-k+V+hzjm5q/Mr8ef/1Y9goCmlsK4I6Sm74teeyGvFk1XrOsbsKLjEdrvny42CZ+a8sXbk8KWpY/bDwS+FLL2UQ==",
+ "requires": {
+ "fast-deep-equal": "^3.1.1",
+ "fast-json-stable-stringify": "^2.0.0",
+ "json-schema-traverse": "^0.4.1",
+ "uri-js": "^4.2.2"
+ }
+ },
+ "ajv-keywords": {
+ "version": "3.4.1",
+ "resolved": "https://registry.npmjs.org/ajv-keywords/-/ajv-keywords-3.4.1.tgz",
+ "integrity": "sha512-RO1ibKvd27e6FEShVFfPALuHI3WjSVNeK5FIsmme/LYRNxjKuNj+Dt7bucLa6NdSv3JcVTyMlm9kGR84z1XpaQ=="
+ },
+ "fast-deep-equal": {
+ "version": "3.1.3",
+ "resolved": "https://registry.npmjs.org/fast-deep-equal/-/fast-deep-equal-3.1.3.tgz",
+ "integrity": "sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q=="
+ }
}
},
"scroll-behavior": {
- "version": "0.9.10",
- "resolved": "https://registry.npmjs.org/scroll-behavior/-/scroll-behavior-0.9.10.tgz",
- "integrity": "sha512-JVJQkBkqMLEM4ATtbHTKare97zhz/qlla9mNttFYY/bcpyOb4BuBGEQ/N9AQWXvshzf6zo9jP60TlphnJ4YPoQ==",
+ "version": "0.9.12",
+ "resolved": "https://registry.npmjs.org/scroll-behavior/-/scroll-behavior-0.9.12.tgz",
+ "integrity": "sha512-18sirtyq1P/VsBX6O/vgw20Np+ngduFXEMO4/NDFXabdOKBL2kjPVUpz1y0+jm99EWwFJafxf5/tCyMeXt9Xyg==",
"requires": {
- "dom-helpers": "^3.2.1",
- "invariant": "^2.2.2"
+ "dom-helpers": "^3.4.0",
+ "invariant": "^2.2.4"
}
},
"scss-tokenizer": {
@@ -15641,11 +23898,11 @@
"integrity": "sha1-Yl2GWPhlr0Psliv8N2o3NZpJlMo="
},
"selfsigned": {
- "version": "1.10.4",
- "resolved": "https://registry.npmjs.org/selfsigned/-/selfsigned-1.10.4.tgz",
- "integrity": "sha512-9AukTiDmHXGXWtWjembZ5NDmVvP2695EtpgbCsxCa68w3c88B+alqbmZ4O3hZ4VWGXeGWzEVdvqgAJD8DQPCDw==",
+ "version": "1.10.7",
+ "resolved": "https://registry.npmjs.org/selfsigned/-/selfsigned-1.10.7.tgz",
+ "integrity": "sha512-8M3wBCzeWIJnQfl43IKwOmC4H/RAp50S8DF60znzjW5GVqTcSe2vWclt7hmYVPkKPlHWOu5EaWOMZ2Y6W8ZXTA==",
"requires": {
- "node-forge": "0.7.5"
+ "node-forge": "0.9.0"
}
},
"semver": {
@@ -15675,9 +23932,9 @@
}
},
"send": {
- "version": "0.16.2",
- "resolved": "https://registry.npmjs.org/send/-/send-0.16.2.tgz",
- "integrity": "sha512-E64YFPUssFHEFBvpbbjr44NCLtI1AohxQ8ZSiJjQLskAdKuriYEP6VyGEsRDH8ScozGpkaX1BGvhanqCwkcEZw==",
+ "version": "0.17.1",
+ "resolved": "https://registry.npmjs.org/send/-/send-0.17.1.tgz",
+ "integrity": "sha512-BsVKsiGcQMFwT8UxypobUKyv7irCNRHk1T0G680vk88yf6LBByGcZJOTJCrTP2xVN6yI+XjPJcNuE3V4fT9sAg==",
"requires": {
"debug": "2.6.9",
"depd": "~1.1.2",
@@ -15686,12 +23943,12 @@
"escape-html": "~1.0.3",
"etag": "~1.8.1",
"fresh": "0.5.2",
- "http-errors": "~1.6.2",
- "mime": "1.4.1",
- "ms": "2.0.0",
+ "http-errors": "~1.7.2",
+ "mime": "1.6.0",
+ "ms": "2.1.1",
"on-finished": "~2.3.0",
- "range-parser": "~1.2.0",
- "statuses": "~1.4.0"
+ "range-parser": "~1.2.1",
+ "statuses": "~1.5.0"
},
"dependencies": {
"debug": {
@@ -15700,17 +23957,19 @@
"integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==",
"requires": {
"ms": "2.0.0"
+ },
+ "dependencies": {
+ "ms": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
+ "integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g="
+ }
}
},
"mime": {
- "version": "1.4.1",
- "resolved": "https://registry.npmjs.org/mime/-/mime-1.4.1.tgz",
- "integrity": "sha512-KI1+qOZu5DcW6wayYHSzR/tXKCDC5Om4s1z2QJjDULzLcmf3DvzS7oluY4HCTrc+9FiKmWUgeNLg7W3uIQvxtQ=="
- },
- "ms": {
- "version": "2.0.0",
- "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
- "integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g="
+ "version": "1.6.0",
+ "resolved": "https://registry.npmjs.org/mime/-/mime-1.6.0.tgz",
+ "integrity": "sha512-x0Vn8spI+wuJ1O6S7gnbaQg8Pxh4NNHb7KSINmEWKiPE4RKOplvijn+NkmYmmRgP68mc70j2EbeTFRsrswaQeg=="
}
}
},
@@ -15724,9 +23983,12 @@
}
},
"serialize-javascript": {
- "version": "1.6.1",
- "resolved": "https://registry.npmjs.org/serialize-javascript/-/serialize-javascript-1.6.1.tgz",
- "integrity": "sha512-A5MOagrPFga4YaKQSWHryl7AXvbQkEqpw4NNYMTNYUNV51bA8ABHgYFpqKx+YFFrw59xMV1qGH1R4AgoNIVgCw=="
+ "version": "3.1.0",
+ "resolved": "https://registry.npmjs.org/serialize-javascript/-/serialize-javascript-3.1.0.tgz",
+ "integrity": "sha512-JIJT1DGiWmIKhzRsG91aS6Ze4sFUrYbltlkg2onR5OrnNM02Kl/hnY/T4FN2omvyeBbQmMJv+K4cPOpGzOTFBg==",
+ "requires": {
+ "randombytes": "^2.1.0"
+ }
},
"serve-index": {
"version": "1.9.1",
@@ -15750,22 +24012,38 @@
"ms": "2.0.0"
}
},
+ "http-errors": {
+ "version": "1.6.3",
+ "resolved": "https://registry.npmjs.org/http-errors/-/http-errors-1.6.3.tgz",
+ "integrity": "sha1-i1VoC7S+KDoLW/TqLjhYC+HZMg0=",
+ "requires": {
+ "depd": "~1.1.2",
+ "inherits": "2.0.3",
+ "setprototypeof": "1.1.0",
+ "statuses": ">= 1.4.0 < 2"
+ }
+ },
"ms": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
"integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g="
+ },
+ "setprototypeof": {
+ "version": "1.1.0",
+ "resolved": "https://registry.npmjs.org/setprototypeof/-/setprototypeof-1.1.0.tgz",
+ "integrity": "sha512-BvE/TwpZX4FXExxOxZyRGQQv651MSwmWKZGqvmPcRIjDqWub67kTKuIMx43cZZrS/cBBzwBcNDWoFxt2XEFIpQ=="
}
}
},
"serve-static": {
- "version": "1.13.2",
- "resolved": "https://registry.npmjs.org/serve-static/-/serve-static-1.13.2.tgz",
- "integrity": "sha512-p/tdJrO4U387R9oMjb1oj7qSMaMfmOyd4j9hOFoxZe2baQszgHcSWjuya/CiT5kgZZKRudHNOA0pYXOl8rQ5nw==",
+ "version": "1.14.1",
+ "resolved": "https://registry.npmjs.org/serve-static/-/serve-static-1.14.1.tgz",
+ "integrity": "sha512-JMrvUwE54emCYWlTI+hGrGv5I8dEwmco/00EvkzIIsR7MqrHonbD9pO2MOfFnpFntl7ecpZs+3mW+XbQZu9QCg==",
"requires": {
"encodeurl": "~1.0.2",
"escape-html": "~1.0.3",
- "parseurl": "~1.3.2",
- "send": "0.16.2"
+ "parseurl": "~1.3.3",
+ "send": "0.17.1"
}
},
"set-blocking": {
@@ -15800,13 +24078,13 @@
"integrity": "sha1-KQy7Iy4waULX1+qbg3Mqt4VvgoU="
},
"setprototypeof": {
- "version": "1.1.0",
- "resolved": "https://registry.npmjs.org/setprototypeof/-/setprototypeof-1.1.0.tgz",
- "integrity": "sha512-BvE/TwpZX4FXExxOxZyRGQQv651MSwmWKZGqvmPcRIjDqWub67kTKuIMx43cZZrS/cBBzwBcNDWoFxt2XEFIpQ=="
+ "version": "1.1.1",
+ "resolved": "https://registry.npmjs.org/setprototypeof/-/setprototypeof-1.1.1.tgz",
+ "integrity": "sha512-JvdAWfbXeIGaZ9cILp38HntZSFSo3mWg6xGcJJsd+d4aRMOqauag1C63dJfDw7OaMYwEbHMOxEZ1lqVRYP2OAw=="
},
"sha.js": {
"version": "2.4.11",
- "resolved": "http://registry.npmjs.org/sha.js/-/sha.js-2.4.11.tgz",
+ "resolved": "https://registry.npmjs.org/sha.js/-/sha.js-2.4.11.tgz",
"integrity": "sha512-QMEp5B7cftE7APOjk5Y6xgrbWu+WkLVQwk8JNjZ8nKRciZaByEW6MubieAiToS7+dwvrjGhH8jRXz3MVd0AYqQ==",
"requires": {
"inherits": "^2.0.1",
@@ -15882,9 +24160,71 @@
"jsonify": "~0.0.0"
}
},
+ "side-channel": {
+ "version": "1.0.2",
+ "resolved": "https://registry.npmjs.org/side-channel/-/side-channel-1.0.2.tgz",
+ "integrity": "sha512-7rL9YlPHg7Ancea1S96Pa8/QWb4BtXL/TZvS6B8XFetGBeuhAsfmUspK6DokBeZ64+Kj9TCNRD/30pVz1BvQNA==",
+ "requires": {
+ "es-abstract": "^1.17.0-next.1",
+ "object-inspect": "^1.7.0"
+ },
+ "dependencies": {
+ "es-abstract": {
+ "version": "1.17.6",
+ "resolved": "https://registry.npmjs.org/es-abstract/-/es-abstract-1.17.6.tgz",
+ "integrity": "sha512-Fr89bON3WFyUi5EvAeI48QTWX0AyekGgLA8H+c+7fbfCkJwRWRMLd8CQedNEyJuoYYhmtEqY92pgte1FAhBlhw==",
+ "requires": {
+ "es-to-primitive": "^1.2.1",
+ "function-bind": "^1.1.1",
+ "has": "^1.0.3",
+ "has-symbols": "^1.0.1",
+ "is-callable": "^1.2.0",
+ "is-regex": "^1.1.0",
+ "object-inspect": "^1.7.0",
+ "object-keys": "^1.1.1",
+ "object.assign": "^4.1.0",
+ "string.prototype.trimend": "^1.0.1",
+ "string.prototype.trimstart": "^1.0.1"
+ }
+ },
+ "es-to-primitive": {
+ "version": "1.2.1",
+ "resolved": "https://registry.npmjs.org/es-to-primitive/-/es-to-primitive-1.2.1.tgz",
+ "integrity": "sha512-QCOllgZJtaUo9miYBcLChTUaHNjJF3PYs1VidD7AwiEj1kYxKeQTctLAezAOH5ZKRH0g2IgPn6KwB4IT8iRpvA==",
+ "requires": {
+ "is-callable": "^1.1.4",
+ "is-date-object": "^1.0.1",
+ "is-symbol": "^1.0.2"
+ }
+ },
+ "has-symbols": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.0.1.tgz",
+ "integrity": "sha512-PLcsoqu++dmEIZB+6totNFKq/7Do+Z0u4oT0zKOJNl3lYK6vGwwu2hjHs+68OEZbTjiUE9bgOABXbP/GvrS0Kg=="
+ },
+ "is-callable": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/is-callable/-/is-callable-1.2.0.tgz",
+ "integrity": "sha512-pyVD9AaGLxtg6srb2Ng6ynWJqkHU9bEM087AKck0w8QwDarTfNcpIYoU8x8Hv2Icm8u6kFJM18Dag8lyqGkviw=="
+ },
+ "is-regex": {
+ "version": "1.1.0",
+ "resolved": "https://registry.npmjs.org/is-regex/-/is-regex-1.1.0.tgz",
+ "integrity": "sha512-iI97M8KTWID2la5uYXlkbSDQIg4F6o1sYboZKKTDpnDQMLtUL86zxhgDet3Q2SriaYsyGqZ6Mn2SjbRKeLHdqw==",
+ "requires": {
+ "has-symbols": "^1.0.1"
+ }
+ },
+ "object-keys": {
+ "version": "1.1.1",
+ "resolved": "https://registry.npmjs.org/object-keys/-/object-keys-1.1.1.tgz",
+ "integrity": "sha512-NuAESUOUMrlIXOfHKzD6bpPu3tYt3xvjNdRIQ+FeT0lNb4K8WR70CaDxhuNguS2XG+GjkyMwOzsN5ZktImfhLA=="
+ }
+ }
+ },
"sift": {
"version": "5.1.0",
- "resolved": "http://registry.npmjs.org/sift/-/sift-5.1.0.tgz",
+ "resolved": "https://registry.npmjs.org/sift/-/sift-5.1.0.tgz",
"integrity": "sha1-G78t+w63HlbEzH+1Z/vRNRtlAV4="
},
"signal-exit": {
@@ -15912,6 +24252,11 @@
"simple-concat": "^1.0.0"
}
},
+ "simple-git": {
+ "version": "2.6.0",
+ "resolved": "https://registry.npmjs.org/simple-git/-/simple-git-2.6.0.tgz",
+ "integrity": "sha512-eplWRfu6RTfoAzGl7I0+g06MvYauXaNpjeuhFiOYZO9hevnH54RkkStOkEevWwqBWfdzWNO9ocffbdtxFzBqXQ=="
+ },
"simple-swizzle": {
"version": "0.2.2",
"resolved": "https://registry.npmjs.org/simple-swizzle/-/simple-swizzle-0.2.2.tgz",
@@ -15927,6 +24272,19 @@
}
}
},
+ "single-trailing-newline": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/single-trailing-newline/-/single-trailing-newline-1.0.0.tgz",
+ "integrity": "sha1-gfCtKtZFGBlFyAlSpcFBSZLulmQ=",
+ "requires": {
+ "detect-newline": "^1.0.3"
+ }
+ },
+ "sisteransi": {
+ "version": "1.0.5",
+ "resolved": "https://registry.npmjs.org/sisteransi/-/sisteransi-1.0.5.tgz",
+ "integrity": "sha512-bLGGlR1QxBcynn2d5YmDX4MGjlZvy2MRBDRNHLJ8VI6l6+9FUiyTFNJ0IveOSP0bcXgVDPRcfGqA0pjaqUpfVg=="
+ },
"sitemap": {
"version": "1.13.0",
"resolved": "https://registry.npmjs.org/sitemap/-/sitemap-1.13.0.tgz",
@@ -16070,16 +24428,16 @@
}
},
"socket.io": {
- "version": "2.2.0",
- "resolved": "https://registry.npmjs.org/socket.io/-/socket.io-2.2.0.tgz",
- "integrity": "sha512-wxXrIuZ8AILcn+f1B4ez4hJTPG24iNgxBBDaJfT6MsyOhVYiTXWexGoPkd87ktJG8kQEcL/NBvRi64+9k4Kc0w==",
+ "version": "2.3.0",
+ "resolved": "https://registry.npmjs.org/socket.io/-/socket.io-2.3.0.tgz",
+ "integrity": "sha512-2A892lrj0GcgR/9Qk81EaY2gYhCBxurV0PfmmESO6p27QPrUK1J3zdns+5QPqvUYK2q657nSj0guoIil9+7eFg==",
"requires": {
"debug": "~4.1.0",
- "engine.io": "~3.3.1",
+ "engine.io": "~3.4.0",
"has-binary2": "~1.0.2",
"socket.io-adapter": "~1.1.0",
- "socket.io-client": "2.2.0",
- "socket.io-parser": "~3.3.0"
+ "socket.io-client": "2.3.0",
+ "socket.io-parser": "~3.4.0"
},
"dependencies": {
"debug": {
@@ -16093,21 +24451,21 @@
}
},
"socket.io-adapter": {
- "version": "1.1.1",
- "resolved": "https://registry.npmjs.org/socket.io-adapter/-/socket.io-adapter-1.1.1.tgz",
- "integrity": "sha1-KoBeihTWNyEk3ZFZrUUC+MsH8Gs="
+ "version": "1.1.2",
+ "resolved": "https://registry.npmjs.org/socket.io-adapter/-/socket.io-adapter-1.1.2.tgz",
+ "integrity": "sha512-WzZRUj1kUjrTIrUKpZLEzFZ1OLj5FwLlAFQs9kuZJzJi5DKdU7FsWc36SNmA8iDOtwBQyT8FkrriRM8vXLYz8g=="
},
"socket.io-client": {
- "version": "2.2.0",
- "resolved": "https://registry.npmjs.org/socket.io-client/-/socket.io-client-2.2.0.tgz",
- "integrity": "sha512-56ZrkTDbdTLmBIyfFYesgOxsjcLnwAKoN4CiPyTVkMQj3zTUh0QAx3GbvIvLpFEOvQWu92yyWICxB0u7wkVbYA==",
+ "version": "2.3.0",
+ "resolved": "https://registry.npmjs.org/socket.io-client/-/socket.io-client-2.3.0.tgz",
+ "integrity": "sha512-cEQQf24gET3rfhxZ2jJ5xzAOo/xhZwK+mOqtGRg5IowZsMgwvHwnf/mCRapAAkadhM26y+iydgwsXGObBB5ZdA==",
"requires": {
"backo2": "1.0.2",
"base64-arraybuffer": "0.1.5",
"component-bind": "1.0.0",
"component-emitter": "1.2.1",
- "debug": "~3.1.0",
- "engine.io-client": "~3.3.1",
+ "debug": "~4.1.0",
+ "engine.io-client": "~3.4.0",
"has-binary2": "~1.0.2",
"has-cors": "1.1.0",
"indexof": "0.0.1",
@@ -16119,36 +24477,11 @@
},
"dependencies": {
"debug": {
- "version": "3.1.0",
- "resolved": "https://registry.npmjs.org/debug/-/debug-3.1.0.tgz",
- "integrity": "sha512-OX8XqP7/1a9cqkxYw2yXss15f26NKWBpDXQd0/uK/KPqdQhxbPa994hnzjcE2VqQpDslf55723cKPUOGSmMY3g==",
+ "version": "4.1.1",
+ "resolved": "https://registry.npmjs.org/debug/-/debug-4.1.1.tgz",
+ "integrity": "sha512-pYAIzeRo8J6KPEaJ0VWOh5Pzkbw/RetuzehGM7QRRX5he4fPHx2rdKMB256ehJCkX+XRQm16eZLqLNS8RSZXZw==",
"requires": {
- "ms": "2.0.0"
- }
- },
- "ms": {
- "version": "2.0.0",
- "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
- "integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g="
- }
- }
- },
- "socket.io-parser": {
- "version": "3.3.0",
- "resolved": "https://registry.npmjs.org/socket.io-parser/-/socket.io-parser-3.3.0.tgz",
- "integrity": "sha512-hczmV6bDgdaEbVqhAeVMM/jfUfzuEZHsQg6eOmLgJht6G3mPKMxYm75w2+qhAQZ+4X+1+ATZ+QFKeOZD5riHng==",
- "requires": {
- "component-emitter": "1.2.1",
- "debug": "~3.1.0",
- "isarray": "2.0.1"
- },
- "dependencies": {
- "debug": {
- "version": "3.1.0",
- "resolved": "https://registry.npmjs.org/debug/-/debug-3.1.0.tgz",
- "integrity": "sha512-OX8XqP7/1a9cqkxYw2yXss15f26NKWBpDXQd0/uK/KPqdQhxbPa994hnzjcE2VqQpDslf55723cKPUOGSmMY3g==",
- "requires": {
- "ms": "2.0.0"
+ "ms": "^2.1.1"
}
},
"isarray": {
@@ -16156,20 +24489,66 @@
"resolved": "https://registry.npmjs.org/isarray/-/isarray-2.0.1.tgz",
"integrity": "sha1-o32U7ZzaLVmGXJ92/llu4fM4dB4="
},
- "ms": {
- "version": "2.0.0",
- "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
- "integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g="
+ "socket.io-parser": {
+ "version": "3.3.0",
+ "resolved": "https://registry.npmjs.org/socket.io-parser/-/socket.io-parser-3.3.0.tgz",
+ "integrity": "sha512-hczmV6bDgdaEbVqhAeVMM/jfUfzuEZHsQg6eOmLgJht6G3mPKMxYm75w2+qhAQZ+4X+1+ATZ+QFKeOZD5riHng==",
+ "requires": {
+ "component-emitter": "1.2.1",
+ "debug": "~3.1.0",
+ "isarray": "2.0.1"
+ },
+ "dependencies": {
+ "debug": {
+ "version": "3.1.0",
+ "resolved": "https://registry.npmjs.org/debug/-/debug-3.1.0.tgz",
+ "integrity": "sha512-OX8XqP7/1a9cqkxYw2yXss15f26NKWBpDXQd0/uK/KPqdQhxbPa994hnzjcE2VqQpDslf55723cKPUOGSmMY3g==",
+ "requires": {
+ "ms": "2.0.0"
+ }
+ },
+ "ms": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
+ "integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g="
+ }
+ }
+ }
+ }
+ },
+ "socket.io-parser": {
+ "version": "3.4.1",
+ "resolved": "https://registry.npmjs.org/socket.io-parser/-/socket.io-parser-3.4.1.tgz",
+ "integrity": "sha512-11hMgzL+WCLWf1uFtHSNvliI++tcRUWdoeYuwIl+Axvwy9z2gQM+7nJyN3STj1tLj5JyIUH8/gpDGxzAlDdi0A==",
+ "requires": {
+ "component-emitter": "1.2.1",
+ "debug": "~4.1.0",
+ "isarray": "2.0.1"
+ },
+ "dependencies": {
+ "debug": {
+ "version": "4.1.1",
+ "resolved": "https://registry.npmjs.org/debug/-/debug-4.1.1.tgz",
+ "integrity": "sha512-pYAIzeRo8J6KPEaJ0VWOh5Pzkbw/RetuzehGM7QRRX5he4fPHx2rdKMB256ehJCkX+XRQm16eZLqLNS8RSZXZw==",
+ "requires": {
+ "ms": "^2.1.1"
+ }
+ },
+ "isarray": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/isarray/-/isarray-2.0.1.tgz",
+ "integrity": "sha1-o32U7ZzaLVmGXJ92/llu4fM4dB4="
}
}
},
"sockjs": {
- "version": "0.3.19",
- "resolved": "https://registry.npmjs.org/sockjs/-/sockjs-0.3.19.tgz",
- "integrity": "sha512-V48klKZl8T6MzatbLlzzRNhMepEys9Y4oGFpypBFFn1gLI/QQ9HtLLyWJNbPlwGLelOVOEijUbTTJeLLI59jLw==",
+ "version": "0.3.20",
+ "resolved": "https://registry.npmjs.org/sockjs/-/sockjs-0.3.20.tgz",
+ "integrity": "sha512-SpmVOVpdq0DJc0qArhF3E5xsxvaiqGNb73XfgBpK1y3UD5gs8DSo8aCTsuT5pX8rssdc2NDIzANwP9eCAiSdTA==",
"requires": {
"faye-websocket": "^0.10.0",
- "uuid": "^3.0.1"
+ "uuid": "^3.4.0",
+ "websocket-driver": "0.6.5"
},
"dependencies": {
"faye-websocket": {
@@ -16179,6 +24558,19 @@
"requires": {
"websocket-driver": ">=0.5.1"
}
+ },
+ "uuid": {
+ "version": "3.4.0",
+ "resolved": "https://registry.npmjs.org/uuid/-/uuid-3.4.0.tgz",
+ "integrity": "sha512-HjSDRw6gZE5JMggctHBcjVak08+KEVhSIiDzFnT9S9aegmp85S/bReBVTb4QTFaRNptJ9kuYaNhnbNEOkbKb/A=="
+ },
+ "websocket-driver": {
+ "version": "0.6.5",
+ "resolved": "https://registry.npmjs.org/websocket-driver/-/websocket-driver-0.6.5.tgz",
+ "integrity": "sha1-XLJVbOuF9Dc8bYI4qmkchFThOjY=",
+ "requires": {
+ "websocket-extensions": ">=0.1.1"
+ }
}
}
},
@@ -16249,9 +24641,9 @@
}
},
"source-map-support": {
- "version": "0.5.10",
- "resolved": "https://registry.npmjs.org/source-map-support/-/source-map-support-0.5.10.tgz",
- "integrity": "sha512-YfQ3tQFTK/yzlGJuX8pTwa4tifQj4QS2Mj7UegOu8jAz59MqIiMGPXxQhVQiIMNzayuUSF/jEuVnfFF5JqybmQ==",
+ "version": "0.5.19",
+ "resolved": "https://registry.npmjs.org/source-map-support/-/source-map-support-0.5.19.tgz",
+ "integrity": "sha512-Wonm7zOCIJzBGQdB+thsPar0kYuCIzYvxZwlBa87yi/Mdjv7Tip2cyVbLj5o0cFPN4EVkuTwb3GDDyUx2DGnGw==",
"requires": {
"buffer-from": "^1.0.0",
"source-map": "^0.6.0"
@@ -16269,6 +24661,11 @@
"resolved": "https://registry.npmjs.org/source-map-url/-/source-map-url-0.4.0.tgz",
"integrity": "sha1-PpNdfd1zYxuXZZlW1VEo6HtQhKM="
},
+ "sourcemap-codec": {
+ "version": "1.4.8",
+ "resolved": "https://registry.npmjs.org/sourcemap-codec/-/sourcemap-codec-1.4.8.tgz",
+ "integrity": "sha512-9NykojV5Uih4lgo5So5dtw+f0JgJX30KCNI8gwhz2J9A15wD0Ml6tjHKwf6fTSa6fAdVBdZeNOs9eJ71qCk8vA=="
+ },
"space-separated-tokens": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/space-separated-tokens/-/space-separated-tokens-1.1.2.tgz",
@@ -16306,9 +24703,9 @@
"integrity": "sha512-uBIcIl3Ih6Phe3XHK1NqboJLdGfwr1UN3k6wSD1dZpmPsIkb8AGNbZYJ1fOBk834+Gxy8rpfDxrS6XLEMZMY2g=="
},
"spdy": {
- "version": "4.0.0",
- "resolved": "https://registry.npmjs.org/spdy/-/spdy-4.0.0.tgz",
- "integrity": "sha512-ot0oEGT/PGUpzf/6uk4AWLqkq+irlqHXkrdbk51oWONh3bxQmBuljxPNl66zlRRcIJStWq0QkLUCPOPjgjvU0Q==",
+ "version": "4.0.2",
+ "resolved": "https://registry.npmjs.org/spdy/-/spdy-4.0.2.tgz",
+ "integrity": "sha512-r46gZQZQV+Kl9oItvl1JZZqJKGr+oEkB08A6BzkiR7593/7IbtuncXHd2YoYeTsG4157ZssMu9KYvUHLcjcDoA==",
"requires": {
"debug": "^4.1.0",
"handle-thing": "^2.0.0",
@@ -16349,9 +24746,9 @@
}
},
"readable-stream": {
- "version": "3.1.1",
- "resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-3.1.1.tgz",
- "integrity": "sha512-DkN66hPyqDhnIQ6Jcsvx9bFjhw214O4poMBcIMgPVpQvNy9a0e0Uhg5SqySyDKAmUlwt8LonTBz1ezOnM8pUdA==",
+ "version": "3.6.0",
+ "resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-3.6.0.tgz",
+ "integrity": "sha512-BViHy7LKeTz4oNnkcLJ+lVSL6vpiFeX6/d3oSH8zCW7UxP2onchk+vTGB143xuFjHS3deTgkKoXXymXqymiIdA==",
"requires": {
"inherits": "^2.0.3",
"string_decoder": "^1.1.1",
@@ -16450,10 +24847,15 @@
"resolved": "https://registry.npmjs.org/stack-trace/-/stack-trace-0.0.10.tgz",
"integrity": "sha1-VHxws0fo0ytOEI6hoqFZ5f3eGcA="
},
+ "stack-utils": {
+ "version": "1.0.2",
+ "resolved": "https://registry.npmjs.org/stack-utils/-/stack-utils-1.0.2.tgz",
+ "integrity": "sha512-MTX+MeG5U994cazkjd/9KNAapsHnibjMLnfXodlkXw76JEea0UiNzrqidzo1emMwk7w5Qhc9jd4Bn9TBb1MFwA=="
+ },
"stackframe": {
- "version": "1.0.4",
- "resolved": "https://registry.npmjs.org/stackframe/-/stackframe-1.0.4.tgz",
- "integrity": "sha512-to7oADIniaYwS3MhtCa/sQhrxidCCQiF/qp4/m5iN3ipf0Y7Xlri0f6eG29r08aL7JYl8n32AF3Q5GYBZ7K8vw=="
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/stackframe/-/stackframe-1.2.0.tgz",
+ "integrity": "sha512-GrdeshiRmS1YLMYgzF16olf2jJ/IzxXY9lhKOskuVziubpTYcYqyOwYeJKzQkwy7uN0fYSsbsC4RQaXf9LCrYA=="
},
"state-toggle": {
"version": "1.0.1",
@@ -16508,9 +24910,9 @@
}
},
"statuses": {
- "version": "1.4.0",
- "resolved": "https://registry.npmjs.org/statuses/-/statuses-1.4.0.tgz",
- "integrity": "sha512-zhSCtt8v2NDrRlPQpCNtw/heZLtfUDqxBM1udqikb/Hbk52LK4nQSwr10u77iopCW5LsyHpuXS0GnEc48mLeew=="
+ "version": "1.5.0",
+ "resolved": "https://registry.npmjs.org/statuses/-/statuses-1.5.0.tgz",
+ "integrity": "sha1-Fhx9rBd2Wf2YEfQ3cfqZOBR4Yow="
},
"stdout-stream": {
"version": "1.4.1",
@@ -16574,9 +24976,9 @@
}
},
"stream-shift": {
- "version": "1.0.0",
- "resolved": "https://registry.npmjs.org/stream-shift/-/stream-shift-1.0.0.tgz",
- "integrity": "sha1-1cdSgl5TZ+eG944Y5EXqIjoVWVI="
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/stream-shift/-/stream-shift-1.0.1.tgz",
+ "integrity": "sha512-AiisoFqQ0vbGcZgQPY1cdP2I76glaVA/RauYR4G4thNFgkTqr90yXTo4LYX60Jl+sIlPNHHdGSwo01AvbKUSVQ=="
},
"stream-to": {
"version": "0.2.2",
@@ -16596,6 +24998,30 @@
"resolved": "https://registry.npmjs.org/strict-uri-encode/-/strict-uri-encode-1.1.0.tgz",
"integrity": "sha1-J5siXfHVgrH1TmWt3UNS4Y+qBxM="
},
+ "string-length": {
+ "version": "3.1.0",
+ "resolved": "https://registry.npmjs.org/string-length/-/string-length-3.1.0.tgz",
+ "integrity": "sha512-Ttp5YvkGm5v9Ijagtaz1BnN+k9ObpvS0eIBblPMp2YWL8FBmi9qblQ9fexc2k/CXFgrTIteU3jAw3payCnwSTA==",
+ "requires": {
+ "astral-regex": "^1.0.0",
+ "strip-ansi": "^5.2.0"
+ },
+ "dependencies": {
+ "ansi-regex": {
+ "version": "4.1.0",
+ "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz",
+ "integrity": "sha512-1apePfXM1UOSqw0o9IiFAovVz9M5S1Dg+4TrDwfMewQ6p/rmMueb7tWZjQ1rx4Loy1ArBggoqGpfqqdI4rondg=="
+ },
+ "strip-ansi": {
+ "version": "5.2.0",
+ "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-5.2.0.tgz",
+ "integrity": "sha512-DuRs1gKbBqsMKIZlrffwlug8MHkcnpjs5VPmL1PAh+mA30U0DTotfDZ0d2UUsXpPmPmMMJ6W773MaA3J+lbiWA==",
+ "requires": {
+ "ansi-regex": "^4.1.0"
+ }
+ }
+ }
+ },
"string-similarity": {
"version": "1.2.2",
"resolved": "https://registry.npmjs.org/string-similarity/-/string-similarity-1.2.2.tgz",
@@ -16632,6 +25058,196 @@
}
}
},
+ "string.prototype.matchall": {
+ "version": "4.0.2",
+ "resolved": "https://registry.npmjs.org/string.prototype.matchall/-/string.prototype.matchall-4.0.2.tgz",
+ "integrity": "sha512-N/jp6O5fMf9os0JU3E72Qhf590RSRZU/ungsL/qJUYVTNv7hTG0P/dbPjxINVN9jpscu3nzYwKESU3P3RY5tOg==",
+ "requires": {
+ "define-properties": "^1.1.3",
+ "es-abstract": "^1.17.0",
+ "has-symbols": "^1.0.1",
+ "internal-slot": "^1.0.2",
+ "regexp.prototype.flags": "^1.3.0",
+ "side-channel": "^1.0.2"
+ },
+ "dependencies": {
+ "es-abstract": {
+ "version": "1.17.6",
+ "resolved": "https://registry.npmjs.org/es-abstract/-/es-abstract-1.17.6.tgz",
+ "integrity": "sha512-Fr89bON3WFyUi5EvAeI48QTWX0AyekGgLA8H+c+7fbfCkJwRWRMLd8CQedNEyJuoYYhmtEqY92pgte1FAhBlhw==",
+ "requires": {
+ "es-to-primitive": "^1.2.1",
+ "function-bind": "^1.1.1",
+ "has": "^1.0.3",
+ "has-symbols": "^1.0.1",
+ "is-callable": "^1.2.0",
+ "is-regex": "^1.1.0",
+ "object-inspect": "^1.7.0",
+ "object-keys": "^1.1.1",
+ "object.assign": "^4.1.0",
+ "string.prototype.trimend": "^1.0.1",
+ "string.prototype.trimstart": "^1.0.1"
+ }
+ },
+ "es-to-primitive": {
+ "version": "1.2.1",
+ "resolved": "https://registry.npmjs.org/es-to-primitive/-/es-to-primitive-1.2.1.tgz",
+ "integrity": "sha512-QCOllgZJtaUo9miYBcLChTUaHNjJF3PYs1VidD7AwiEj1kYxKeQTctLAezAOH5ZKRH0g2IgPn6KwB4IT8iRpvA==",
+ "requires": {
+ "is-callable": "^1.1.4",
+ "is-date-object": "^1.0.1",
+ "is-symbol": "^1.0.2"
+ }
+ },
+ "has-symbols": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.0.1.tgz",
+ "integrity": "sha512-PLcsoqu++dmEIZB+6totNFKq/7Do+Z0u4oT0zKOJNl3lYK6vGwwu2hjHs+68OEZbTjiUE9bgOABXbP/GvrS0Kg=="
+ },
+ "is-callable": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/is-callable/-/is-callable-1.2.0.tgz",
+ "integrity": "sha512-pyVD9AaGLxtg6srb2Ng6ynWJqkHU9bEM087AKck0w8QwDarTfNcpIYoU8x8Hv2Icm8u6kFJM18Dag8lyqGkviw=="
+ },
+ "is-regex": {
+ "version": "1.1.0",
+ "resolved": "https://registry.npmjs.org/is-regex/-/is-regex-1.1.0.tgz",
+ "integrity": "sha512-iI97M8KTWID2la5uYXlkbSDQIg4F6o1sYboZKKTDpnDQMLtUL86zxhgDet3Q2SriaYsyGqZ6Mn2SjbRKeLHdqw==",
+ "requires": {
+ "has-symbols": "^1.0.1"
+ }
+ },
+ "object-keys": {
+ "version": "1.1.1",
+ "resolved": "https://registry.npmjs.org/object-keys/-/object-keys-1.1.1.tgz",
+ "integrity": "sha512-NuAESUOUMrlIXOfHKzD6bpPu3tYt3xvjNdRIQ+FeT0lNb4K8WR70CaDxhuNguS2XG+GjkyMwOzsN5ZktImfhLA=="
+ }
+ }
+ },
+ "string.prototype.trimend": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/string.prototype.trimend/-/string.prototype.trimend-1.0.1.tgz",
+ "integrity": "sha512-LRPxFUaTtpqYsTeNKaFOw3R4bxIzWOnbQ837QfBylo8jIxtcbK/A/sMV7Q+OAV/vWo+7s25pOE10KYSjaSO06g==",
+ "requires": {
+ "define-properties": "^1.1.3",
+ "es-abstract": "^1.17.5"
+ },
+ "dependencies": {
+ "es-abstract": {
+ "version": "1.17.6",
+ "resolved": "https://registry.npmjs.org/es-abstract/-/es-abstract-1.17.6.tgz",
+ "integrity": "sha512-Fr89bON3WFyUi5EvAeI48QTWX0AyekGgLA8H+c+7fbfCkJwRWRMLd8CQedNEyJuoYYhmtEqY92pgte1FAhBlhw==",
+ "requires": {
+ "es-to-primitive": "^1.2.1",
+ "function-bind": "^1.1.1",
+ "has": "^1.0.3",
+ "has-symbols": "^1.0.1",
+ "is-callable": "^1.2.0",
+ "is-regex": "^1.1.0",
+ "object-inspect": "^1.7.0",
+ "object-keys": "^1.1.1",
+ "object.assign": "^4.1.0",
+ "string.prototype.trimend": "^1.0.1",
+ "string.prototype.trimstart": "^1.0.1"
+ }
+ },
+ "es-to-primitive": {
+ "version": "1.2.1",
+ "resolved": "https://registry.npmjs.org/es-to-primitive/-/es-to-primitive-1.2.1.tgz",
+ "integrity": "sha512-QCOllgZJtaUo9miYBcLChTUaHNjJF3PYs1VidD7AwiEj1kYxKeQTctLAezAOH5ZKRH0g2IgPn6KwB4IT8iRpvA==",
+ "requires": {
+ "is-callable": "^1.1.4",
+ "is-date-object": "^1.0.1",
+ "is-symbol": "^1.0.2"
+ }
+ },
+ "has-symbols": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.0.1.tgz",
+ "integrity": "sha512-PLcsoqu++dmEIZB+6totNFKq/7Do+Z0u4oT0zKOJNl3lYK6vGwwu2hjHs+68OEZbTjiUE9bgOABXbP/GvrS0Kg=="
+ },
+ "is-callable": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/is-callable/-/is-callable-1.2.0.tgz",
+ "integrity": "sha512-pyVD9AaGLxtg6srb2Ng6ynWJqkHU9bEM087AKck0w8QwDarTfNcpIYoU8x8Hv2Icm8u6kFJM18Dag8lyqGkviw=="
+ },
+ "is-regex": {
+ "version": "1.1.0",
+ "resolved": "https://registry.npmjs.org/is-regex/-/is-regex-1.1.0.tgz",
+ "integrity": "sha512-iI97M8KTWID2la5uYXlkbSDQIg4F6o1sYboZKKTDpnDQMLtUL86zxhgDet3Q2SriaYsyGqZ6Mn2SjbRKeLHdqw==",
+ "requires": {
+ "has-symbols": "^1.0.1"
+ }
+ },
+ "object-keys": {
+ "version": "1.1.1",
+ "resolved": "https://registry.npmjs.org/object-keys/-/object-keys-1.1.1.tgz",
+ "integrity": "sha512-NuAESUOUMrlIXOfHKzD6bpPu3tYt3xvjNdRIQ+FeT0lNb4K8WR70CaDxhuNguS2XG+GjkyMwOzsN5ZktImfhLA=="
+ }
+ }
+ },
+ "string.prototype.trimstart": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/string.prototype.trimstart/-/string.prototype.trimstart-1.0.1.tgz",
+ "integrity": "sha512-XxZn+QpvrBI1FOcg6dIpxUPgWCPuNXvMD72aaRaUQv1eD4e/Qy8i/hFTe0BUmD60p/QA6bh1avmuPTfNjqVWRw==",
+ "requires": {
+ "define-properties": "^1.1.3",
+ "es-abstract": "^1.17.5"
+ },
+ "dependencies": {
+ "es-abstract": {
+ "version": "1.17.6",
+ "resolved": "https://registry.npmjs.org/es-abstract/-/es-abstract-1.17.6.tgz",
+ "integrity": "sha512-Fr89bON3WFyUi5EvAeI48QTWX0AyekGgLA8H+c+7fbfCkJwRWRMLd8CQedNEyJuoYYhmtEqY92pgte1FAhBlhw==",
+ "requires": {
+ "es-to-primitive": "^1.2.1",
+ "function-bind": "^1.1.1",
+ "has": "^1.0.3",
+ "has-symbols": "^1.0.1",
+ "is-callable": "^1.2.0",
+ "is-regex": "^1.1.0",
+ "object-inspect": "^1.7.0",
+ "object-keys": "^1.1.1",
+ "object.assign": "^4.1.0",
+ "string.prototype.trimend": "^1.0.1",
+ "string.prototype.trimstart": "^1.0.1"
+ }
+ },
+ "es-to-primitive": {
+ "version": "1.2.1",
+ "resolved": "https://registry.npmjs.org/es-to-primitive/-/es-to-primitive-1.2.1.tgz",
+ "integrity": "sha512-QCOllgZJtaUo9miYBcLChTUaHNjJF3PYs1VidD7AwiEj1kYxKeQTctLAezAOH5ZKRH0g2IgPn6KwB4IT8iRpvA==",
+ "requires": {
+ "is-callable": "^1.1.4",
+ "is-date-object": "^1.0.1",
+ "is-symbol": "^1.0.2"
+ }
+ },
+ "has-symbols": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.0.1.tgz",
+ "integrity": "sha512-PLcsoqu++dmEIZB+6totNFKq/7Do+Z0u4oT0zKOJNl3lYK6vGwwu2hjHs+68OEZbTjiUE9bgOABXbP/GvrS0Kg=="
+ },
+ "is-callable": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/is-callable/-/is-callable-1.2.0.tgz",
+ "integrity": "sha512-pyVD9AaGLxtg6srb2Ng6ynWJqkHU9bEM087AKck0w8QwDarTfNcpIYoU8x8Hv2Icm8u6kFJM18Dag8lyqGkviw=="
+ },
+ "is-regex": {
+ "version": "1.1.0",
+ "resolved": "https://registry.npmjs.org/is-regex/-/is-regex-1.1.0.tgz",
+ "integrity": "sha512-iI97M8KTWID2la5uYXlkbSDQIg4F6o1sYboZKKTDpnDQMLtUL86zxhgDet3Q2SriaYsyGqZ6Mn2SjbRKeLHdqw==",
+ "requires": {
+ "has-symbols": "^1.0.1"
+ }
+ },
+ "object-keys": {
+ "version": "1.1.1",
+ "resolved": "https://registry.npmjs.org/object-keys/-/object-keys-1.1.1.tgz",
+ "integrity": "sha512-NuAESUOUMrlIXOfHKzD6bpPu3tYt3xvjNdRIQ+FeT0lNb4K8WR70CaDxhuNguS2XG+GjkyMwOzsN5ZktImfhLA=="
+ }
+ }
+ },
"string_decoder": {
"version": "1.1.1",
"resolved": "http://registry.npmjs.org/string_decoder/-/string_decoder-1.1.1.tgz",
@@ -16701,6 +25317,11 @@
"resolved": "http://registry.npmjs.org/strip-eof/-/strip-eof-1.0.0.tgz",
"integrity": "sha1-u0P/VZim6wXYm1n80SnJgzE2Br8="
},
+ "strip-final-newline": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/strip-final-newline/-/strip-final-newline-2.0.0.tgz",
+ "integrity": "sha512-BrpvfNAE3dcvq7ll3xVumzjKjZQ5tI1sEUIKr3Uoks0XUl45St3FlatVqef9prk4jRDzhW6WZg+3bk93y6pLjA=="
+ },
"strip-indent": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/strip-indent/-/strip-indent-1.0.1.tgz",
@@ -16734,6 +25355,17 @@
"requires": {
"loader-utils": "^1.1.0",
"schema-utils": "^0.4.5"
+ },
+ "dependencies": {
+ "schema-utils": {
+ "version": "0.4.7",
+ "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-0.4.7.tgz",
+ "integrity": "sha512-v/iwU6wvwGK8HbU9yi3/nhGzP0yGSuhQMzL6ySiec1FSrZZDkhm4noOSWzrNFo/jEc+SJY6jRTwuwbSXJPDUnQ==",
+ "requires": {
+ "ajv": "^6.1.0",
+ "ajv-keywords": "^3.1.0"
+ }
+ }
}
},
"style-to-object": {
@@ -16755,19 +25387,20 @@
},
"dependencies": {
"browserslist": {
- "version": "4.4.2",
- "resolved": "https://registry.npmjs.org/browserslist/-/browserslist-4.4.2.tgz",
- "integrity": "sha512-ISS/AIAiHERJ3d45Fz0AVYKkgcy+F/eJHzKEvv1j0wwKGKD9T3BrwKr/5g45L+Y4XIK5PlTqefHciRFcfE1Jxg==",
+ "version": "4.12.0",
+ "resolved": "https://registry.npmjs.org/browserslist/-/browserslist-4.12.0.tgz",
+ "integrity": "sha512-UH2GkcEDSI0k/lRkuDSzFl9ZZ87skSy9w2XAn1MsZnL+4c4rqbBd3e82UWHbYDpztABrPBhZsTEeuxVfHppqDg==",
"requires": {
- "caniuse-lite": "^1.0.30000939",
- "electron-to-chromium": "^1.3.113",
- "node-releases": "^1.1.8"
+ "caniuse-lite": "^1.0.30001043",
+ "electron-to-chromium": "^1.3.413",
+ "node-releases": "^1.1.53",
+ "pkg-up": "^2.0.0"
}
},
"caniuse-lite": {
- "version": "1.0.30000939",
- "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30000939.tgz",
- "integrity": "sha512-oXB23ImDJOgQpGjRv1tCtzAvJr4/OvrHi5SO2vUgB0g0xpdZZoA/BxfImiWfdwoYdUTtQrPsXsvYU/dmCSM8gg=="
+ "version": "1.0.30001084",
+ "resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001084.tgz",
+ "integrity": "sha512-ftdc5oGmhEbLUuMZ/Qp3mOpzfZLCxPYKcvGv6v2dJJ+8EdqcvZRbAGOiLmkM/PV1QGta/uwBs8/nCl6sokDW6w=="
},
"chalk": {
"version": "2.4.2",
@@ -16790,22 +25423,19 @@
}
},
"electron-to-chromium": {
- "version": "1.3.113",
- "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.3.113.tgz",
- "integrity": "sha512-De+lPAxEcpxvqPTyZAXELNpRZXABRxf+uL/rSykstQhzj/B0l1150G/ExIIxKc16lI89Hgz81J0BHAcbTqK49g=="
+ "version": "1.3.474",
+ "resolved": "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.3.474.tgz",
+ "integrity": "sha512-fPkSgT9IBKmVJz02XioNsIpg0WYmkPrvU1lUJblMMJALxyE7/32NGvbJQKKxpNokozPvqfqkuUqVClYsvetcLw=="
},
"node-releases": {
- "version": "1.1.8",
- "resolved": "https://registry.npmjs.org/node-releases/-/node-releases-1.1.8.tgz",
- "integrity": "sha512-gQm+K9mGCiT/NXHy+V/ZZS1N/LOaGGqRAAJJs3X9Ah1g+CIbRcBgNyoNYQ+SEtcyAtB9KqDruu+fF7nWjsqRaA==",
- "requires": {
- "semver": "^5.3.0"
- }
+ "version": "1.1.58",
+ "resolved": "https://registry.npmjs.org/node-releases/-/node-releases-1.1.58.tgz",
+ "integrity": "sha512-NxBudgVKiRh/2aPWMgPR7bPTX0VPmGx5QBwCtdHitnqFE5/O8DeBXuIMH1nwNnw/aMo6AjOrpsHzfY3UbUJ7yg=="
},
"postcss": {
- "version": "7.0.14",
- "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.14.tgz",
- "integrity": "sha512-NsbD6XUUMZvBxtQAJuWDJeeC4QFsmWsfozWxCJPWf3M55K9iu2iMDaKqyoOdTJ1R4usBXuxlVFAIo8rZPQD4Bg==",
+ "version": "7.0.32",
+ "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz",
+ "integrity": "sha512-03eXong5NLnNCD05xscnGKGDZ98CyzoqPSMjOe6SuoQY7Z2hIj0Ld1g/O/UQRuOle2aRtiIRDg9tDcTGAkLfKw==",
"requires": {
"chalk": "^2.4.2",
"source-map": "^0.6.1",
@@ -16813,11 +25443,11 @@
}
},
"postcss-selector-parser": {
- "version": "3.1.1",
- "resolved": "https://registry.npmjs.org/postcss-selector-parser/-/postcss-selector-parser-3.1.1.tgz",
- "integrity": "sha1-T4dfSvsMllc9XPTXQBGu4lCn6GU=",
+ "version": "3.1.2",
+ "resolved": "https://registry.npmjs.org/postcss-selector-parser/-/postcss-selector-parser-3.1.2.tgz",
+ "integrity": "sha512-h7fJ/5uWuRVyOtkO45pnt1Ih40CEleeyCHzipqAZO2e5H20g25Y48uYnFUiShvY4rZWNJ/Bib/KVPmanaCtOhA==",
"requires": {
- "dot-prop": "^4.1.1",
+ "dot-prop": "^5.2.0",
"indexes-of": "^1.0.1",
"uniq": "^1.0.1"
}
@@ -16837,6 +25467,28 @@
}
}
},
+ "subscriptions-transport-ws": {
+ "version": "0.9.16",
+ "resolved": "https://registry.npmjs.org/subscriptions-transport-ws/-/subscriptions-transport-ws-0.9.16.tgz",
+ "integrity": "sha512-pQdoU7nC+EpStXnCfh/+ho0zE0Z+ma+i7xvj7bkXKb1dvYHSZxgRPaU6spRP+Bjzow67c/rRDoix5RT0uU9omw==",
+ "requires": {
+ "backo2": "^1.0.2",
+ "eventemitter3": "^3.1.0",
+ "iterall": "^1.2.1",
+ "symbol-observable": "^1.0.4",
+ "ws": "^5.2.0"
+ },
+ "dependencies": {
+ "ws": {
+ "version": "5.2.2",
+ "resolved": "https://registry.npmjs.org/ws/-/ws-5.2.2.tgz",
+ "integrity": "sha512-jaHFD6PFv6UgoIVda6qZllptQsMlDEJkTQcybzzXDYM1XO9Y8em691FGMPmM46WGyLU4z9KMgQN+qrux/nhlHA==",
+ "requires": {
+ "async-limiter": "~1.0.0"
+ }
+ }
+ }
+ },
"supports-color": {
"version": "5.5.0",
"resolved": "https://registry.npmjs.org/supports-color/-/supports-color-5.5.0.tgz",
@@ -16845,6 +25497,30 @@
"has-flag": "^3.0.0"
}
},
+ "supports-hyperlinks": {
+ "version": "2.1.0",
+ "resolved": "https://registry.npmjs.org/supports-hyperlinks/-/supports-hyperlinks-2.1.0.tgz",
+ "integrity": "sha512-zoE5/e+dnEijk6ASB6/qrK+oYdm2do1hjoLWrqUC/8WEIW1gbxFcKuBof7sW8ArN6e+AYvsE8HBGiVRWL/F5CA==",
+ "requires": {
+ "has-flag": "^4.0.0",
+ "supports-color": "^7.0.0"
+ },
+ "dependencies": {
+ "has-flag": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz",
+ "integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ=="
+ },
+ "supports-color": {
+ "version": "7.1.0",
+ "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.1.0.tgz",
+ "integrity": "sha512-oRSIpR8pxT1Wr2FquTNnGet79b3BWljqOuoW/h4oBhxJ/HUbX5nX6JSruTkvXDCFMwDPvsaTTbvMLKZWSy0R5g==",
+ "requires": {
+ "has-flag": "^4.0.0"
+ }
+ }
+ }
+ },
"svg-react-loader": {
"version": "0.4.6",
"resolved": "https://registry.npmjs.org/svg-react-loader/-/svg-react-loader-0.4.6.tgz",
@@ -16880,6 +25556,11 @@
}
}
},
+ "svg-tag-names": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/svg-tag-names/-/svg-tag-names-2.0.1.tgz",
+ "integrity": "sha512-BEZ508oR+X/b5sh7bT0RqDJ7GhTpezjj3P1D4kugrOaPs6HijviWksoQ63PS81vZn0QCjZmVKjHDBniTo+Domg=="
+ },
"svgo": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/svgo/-/svgo-1.1.1.tgz",
@@ -16943,22 +25624,22 @@
"integrity": "sha512-e900nM8RRtGhlV36KGEU9k65K3mPb1WV70OdjfxlG2EAuM1noi/E/BaW/uMhL7bPEssK8QV57vN3esixjUvcXQ=="
},
"table": {
- "version": "5.2.3",
- "resolved": "https://registry.npmjs.org/table/-/table-5.2.3.tgz",
- "integrity": "sha512-N2RsDAMvDLvYwFcwbPyF3VmVSSkuF+G1e+8inhBLtHpvwXGw4QRPEZhihQNeEN0i1up6/f6ObCJXNdlRG3YVyQ==",
+ "version": "5.4.6",
+ "resolved": "https://registry.npmjs.org/table/-/table-5.4.6.tgz",
+ "integrity": "sha512-wmEc8m4fjnob4gt5riFRtTu/6+4rSe12TpAELNSqHMfF3IqnA+CH37USM6/YR3qRZv7e56kAEAtd6nKZaxe0Ug==",
"requires": {
- "ajv": "^6.9.1",
- "lodash": "^4.17.11",
+ "ajv": "^6.10.2",
+ "lodash": "^4.17.14",
"slice-ansi": "^2.1.0",
"string-width": "^3.0.0"
},
"dependencies": {
"ajv": {
- "version": "6.10.0",
- "resolved": "https://registry.npmjs.org/ajv/-/ajv-6.10.0.tgz",
- "integrity": "sha512-nffhOpkymDECQyR0mnsUtoCE8RlX38G0rYP+wgLWFyZuUyuuojSSvi/+euOiQBIn63whYwYVIIH1TvE3tu4OEg==",
+ "version": "6.12.2",
+ "resolved": "https://registry.npmjs.org/ajv/-/ajv-6.12.2.tgz",
+ "integrity": "sha512-k+V+hzjm5q/Mr8ef/1Y9goCmlsK4I6Sm74teeyGvFk1XrOsbsKLjEdrvny42CZ+a8sXbk8KWpY/bDwS+FLL2UQ==",
"requires": {
- "fast-deep-equal": "^2.0.1",
+ "fast-deep-equal": "^3.1.1",
"fast-json-stable-stringify": "^2.0.0",
"json-schema-traverse": "^0.4.1",
"uri-js": "^4.2.2"
@@ -16969,6 +25650,16 @@
"resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz",
"integrity": "sha512-1apePfXM1UOSqw0o9IiFAovVz9M5S1Dg+4TrDwfMewQ6p/rmMueb7tWZjQ1rx4Loy1ArBggoqGpfqqdI4rondg=="
},
+ "fast-deep-equal": {
+ "version": "3.1.3",
+ "resolved": "https://registry.npmjs.org/fast-deep-equal/-/fast-deep-equal-3.1.3.tgz",
+ "integrity": "sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q=="
+ },
+ "lodash": {
+ "version": "4.17.15",
+ "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz",
+ "integrity": "sha512-8xOcRHvCjnocdS5cpwXQXVzmmh5e5+saE2QGoeQmbKmRS6J3VQppPOIt0MnmE+4xlZoumy0GPG0D0MVIQbNA1A=="
+ },
"string-width": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/string-width/-/string-width-3.1.0.tgz",
@@ -16980,9 +25671,9 @@
}
},
"strip-ansi": {
- "version": "5.1.0",
- "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-5.1.0.tgz",
- "integrity": "sha512-TjxrkPONqO2Z8QDCpeE2j6n0M6EwxzyDgzEeGp+FbdvaJAt//ClYi6W5my+3ROlC/hZX2KACUwDfK49Ka5eDvg==",
+ "version": "5.2.0",
+ "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-5.2.0.tgz",
+ "integrity": "sha512-DuRs1gKbBqsMKIZlrffwlug8MHkcnpjs5VPmL1PAh+mA30U0DTotfDZ0d2UUsXpPmPmMMJ6W773MaA3J+lbiWA==",
"requires": {
"ansi-regex": "^4.1.0"
}
@@ -16990,9 +25681,9 @@
}
},
"tapable": {
- "version": "1.1.1",
- "resolved": "https://registry.npmjs.org/tapable/-/tapable-1.1.1.tgz",
- "integrity": "sha512-9I2ydhj8Z9veORCw5PRm4u9uebCn0mcCa6scWoNcbZ6dAtoo2618u9UUzxgmsCOreJpqDDuv61LvwofW7hLcBA=="
+ "version": "1.1.3",
+ "resolved": "https://registry.npmjs.org/tapable/-/tapable-1.1.3.tgz",
+ "integrity": "sha512-4WK/bYZmj8xLr+HUCODHGF1ZFzsYffasLUgEiMBY4fgtltdO6B4WJtlSbPaDTLpYTcGVwM2qLnFTICEcNxs3kA=="
},
"tar": {
"version": "4.4.8",
@@ -17066,28 +25757,44 @@
}
},
"term-size": {
- "version": "1.2.0",
- "resolved": "https://registry.npmjs.org/term-size/-/term-size-1.2.0.tgz",
- "integrity": "sha1-RYuDiH8oj8Vtb/+/rSYuJmOO+mk=",
+ "version": "2.2.0",
+ "resolved": "https://registry.npmjs.org/term-size/-/term-size-2.2.0.tgz",
+ "integrity": "sha512-a6sumDlzyHVJWb8+YofY4TW112G6p2FCPEAFk+59gIYHv3XHRhm9ltVQ9kli4hNWeQBwSpe8cRN25x0ROunMOw=="
+ },
+ "terminal-link": {
+ "version": "2.1.1",
+ "resolved": "https://registry.npmjs.org/terminal-link/-/terminal-link-2.1.1.tgz",
+ "integrity": "sha512-un0FmiRUQNr5PJqy9kP7c40F5BOfpGlYTrxonDChEZB7pzZxRNp/bt+ymiy9/npwXya9KH99nJ/GXFIiUkYGFQ==",
"requires": {
- "execa": "^0.7.0"
+ "ansi-escapes": "^4.2.1",
+ "supports-hyperlinks": "^2.0.0"
+ },
+ "dependencies": {
+ "ansi-escapes": {
+ "version": "4.3.1",
+ "resolved": "https://registry.npmjs.org/ansi-escapes/-/ansi-escapes-4.3.1.tgz",
+ "integrity": "sha512-JWF7ocqNrp8u9oqpgV+wH5ftbt+cfvv+PTjOvKLT3AdYly/LmORARfEVT1iyjwN+4MqE5UmVKoAdIBqeoCHgLA==",
+ "requires": {
+ "type-fest": "^0.11.0"
+ }
+ },
+ "type-fest": {
+ "version": "0.11.0",
+ "resolved": "https://registry.npmjs.org/type-fest/-/type-fest-0.11.0.tgz",
+ "integrity": "sha512-OdjXJxnCN1AvyLSzeKIgXTXxV+99ZuXl3Hpo9XpJAv9MBcHrrJOQ5kV7ypXOuQie+AmWG25hLbiKdwYTifzcfQ=="
+ }
}
},
"terser": {
- "version": "3.16.1",
- "resolved": "https://registry.npmjs.org/terser/-/terser-3.16.1.tgz",
- "integrity": "sha512-JDJjgleBROeek2iBcSNzOHLKsB/MdDf+E/BOAJ0Tk9r7p9/fVobfv7LMJ/g/k3v9SXdmjZnIlFd5nfn/Rt0Xow==",
+ "version": "4.8.0",
+ "resolved": "https://registry.npmjs.org/terser/-/terser-4.8.0.tgz",
+ "integrity": "sha512-EAPipTNeWsb/3wLPeup1tVPaXfIaU68xMnVdPafIL1TV05OhASArYyIfFvnvJCNrR2NIOvDVNNTFRa+Re2MWyw==",
"requires": {
- "commander": "~2.17.1",
+ "commander": "^2.20.0",
"source-map": "~0.6.1",
- "source-map-support": "~0.5.9"
+ "source-map-support": "~0.5.12"
},
"dependencies": {
- "commander": {
- "version": "2.17.1",
- "resolved": "https://registry.npmjs.org/commander/-/commander-2.17.1.tgz",
- "integrity": "sha512-wPMUt6FnH2yzG95SA6mzjQOEKUU3aLaDEmzs1ti+1E9h+CsrZghRlqEM/EJ4KscsQVG8uNN4uVreUeT8+drlgg=="
- },
"source-map": {
"version": "0.6.1",
"resolved": "https://registry.npmjs.org/source-map/-/source-map-0.6.1.tgz",
@@ -17096,18 +25803,19 @@
}
},
"terser-webpack-plugin": {
- "version": "1.2.3",
- "resolved": "https://registry.npmjs.org/terser-webpack-plugin/-/terser-webpack-plugin-1.2.3.tgz",
- "integrity": "sha512-GOK7q85oAb/5kE12fMuLdn2btOS9OBZn4VsecpHDywoUC/jLhSAKOiYo0ezx7ss2EXPMzyEWFoE0s1WLE+4+oA==",
+ "version": "1.4.4",
+ "resolved": "https://registry.npmjs.org/terser-webpack-plugin/-/terser-webpack-plugin-1.4.4.tgz",
+ "integrity": "sha512-U4mACBHIegmfoEe5fdongHESNJWqsGU+W0S/9+BmYGVQDw1+c2Ow05TpMhxjPK1sRb7cuYq1BPl1e5YHJMTCqA==",
"requires": {
- "cacache": "^11.0.2",
- "find-cache-dir": "^2.0.0",
+ "cacache": "^12.0.2",
+ "find-cache-dir": "^2.1.0",
+ "is-wsl": "^1.1.0",
"schema-utils": "^1.0.0",
- "serialize-javascript": "^1.4.0",
+ "serialize-javascript": "^3.1.0",
"source-map": "^0.6.1",
- "terser": "^3.16.1",
- "webpack-sources": "^1.1.0",
- "worker-farm": "^1.5.2"
+ "terser": "^4.1.2",
+ "webpack-sources": "^1.4.0",
+ "worker-farm": "^1.7.0"
},
"dependencies": {
"schema-utils": {
@@ -17147,9 +25855,9 @@
}
},
"thunky": {
- "version": "1.0.3",
- "resolved": "https://registry.npmjs.org/thunky/-/thunky-1.0.3.tgz",
- "integrity": "sha512-YwT8pjmNcAXBZqrubu22P4FYsh2D4dxRmnWBOL8Jk8bUcRUtc5326kx32tuTmFDAZtLOGEVNl8POAR8j896Iow=="
+ "version": "1.1.0",
+ "resolved": "https://registry.npmjs.org/thunky/-/thunky-1.1.0.tgz",
+ "integrity": "sha512-eHY7nBftgThBqOyHGVN+l8gF0BucP09fMo0oO/Lb0w1OF80dJv+lDVpXG60WMQvkcxAkNybKsrEIE3ZtKGmPrA=="
},
"timed-out": {
"version": "4.0.1",
@@ -17157,9 +25865,9 @@
"integrity": "sha1-8y6srFoXW+ol1/q1Zas+2HQe9W8="
},
"timers-browserify": {
- "version": "2.0.10",
- "resolved": "https://registry.npmjs.org/timers-browserify/-/timers-browserify-2.0.10.tgz",
- "integrity": "sha512-YvC1SV1XdOUaL6gx5CoGroT3Gu49pK9+TZ38ErPldOWW4j49GI1HKs9DV+KGq/w6y+LZ72W1c8cKz2vzY+qpzg==",
+ "version": "2.0.11",
+ "resolved": "https://registry.npmjs.org/timers-browserify/-/timers-browserify-2.0.11.tgz",
+ "integrity": "sha512-60aV6sgJ5YEbzUdn9c8kYGIqOubPoUdqQCul3SBAsRCZ40s6Y5cMcrW4dt3/k/EsbLVJNl9n6Vz3fTc+k2GeKQ==",
"requires": {
"setimmediate": "^1.0.4"
}
@@ -17235,6 +25943,11 @@
}
}
},
+ "to-readable-stream": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/to-readable-stream/-/to-readable-stream-1.0.0.tgz",
+ "integrity": "sha512-Iq25XBt6zD5npPhlLVXGFN3/gyR2/qODcKNNyTMd4vbm39HUaOiAM4PMq0eMVC/Tkxz+Zjdsc55g9yyz+Yq00Q=="
+ },
"to-regex": {
"version": "3.0.2",
"resolved": "https://registry.npmjs.org/to-regex/-/to-regex-3.0.2.tgz",
@@ -17260,6 +25973,11 @@
"resolved": "https://registry.npmjs.org/to-style/-/to-style-1.3.3.tgz",
"integrity": "sha1-Y6K3Cm9KfU/cLtV6C+TnI1y2aZw="
},
+ "toidentifier": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/toidentifier/-/toidentifier-1.0.0.tgz",
+ "integrity": "sha512-yaOH/Pk/VEhBWWTlhI+qXxDFXlejDGcQipMlyxda9nthulaxLZUNcUqFxokp0vcYnvteJln5FNQDRrxj3YcbVw=="
+ },
"topo": {
"version": "2.0.2",
"resolved": "http://registry.npmjs.org/topo/-/topo-2.0.2.tgz",
@@ -17335,14 +26053,48 @@
"glob": "^7.1.2"
}
},
+ "ts-invariant": {
+ "version": "0.4.4",
+ "resolved": "https://registry.npmjs.org/ts-invariant/-/ts-invariant-0.4.4.tgz",
+ "integrity": "sha512-uEtWkFM/sdZvRNNDL3Ehu4WVpwaulhwQszV8mrtcdeE8nN00BV9mAmQ88RkrBhFgl9gMgvjJLAQcZbnPXI9mlA==",
+ "requires": {
+ "tslib": "^1.9.3"
+ }
+ },
+ "ts-pnp": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/ts-pnp/-/ts-pnp-1.2.0.tgz",
+ "integrity": "sha512-csd+vJOb/gkzvcCHgTGSChYpy5f1/XKNsmvBGO4JXS+z1v2HobugDz4s1IeFXM3wZB44uczs+eazB5Q/ccdhQw=="
+ },
+ "tsconfig-paths": {
+ "version": "3.9.0",
+ "resolved": "https://registry.npmjs.org/tsconfig-paths/-/tsconfig-paths-3.9.0.tgz",
+ "integrity": "sha512-dRcuzokWhajtZWkQsDVKbWyY+jgcLC5sqJhg2PSgf4ZkH2aHPvaOY8YWGhmjb68b5qqTfasSsDO9k7RUiEmZAw==",
+ "requires": {
+ "@types/json5": "^0.0.29",
+ "json5": "^1.0.1",
+ "minimist": "^1.2.0",
+ "strip-bom": "^3.0.0"
+ },
+ "dependencies": {
+ "json5": {
+ "version": "1.0.1",
+ "resolved": "https://registry.npmjs.org/json5/-/json5-1.0.1.tgz",
+ "integrity": "sha512-aKS4WQjPenRxiQsC93MNfjx+nbF4PAdYzmd/1JIj8HYzqfbu86beTuNgXDzPknWk0n0uARlyewZo4s++ES36Ow==",
+ "requires": {
+ "minimist": "^1.2.0"
+ }
+ }
+ }
+ },
"tslib": {
- "version": "1.9.3",
- "resolved": "https://registry.npmjs.org/tslib/-/tslib-1.9.3.tgz",
- "integrity": "sha512-4krF8scpejhaOgqzBEcGM7yDIEfi0/8+8zDRZhNZZ2kjmHJ4hv3zCbQWxoJGz1iw5U0Jl0nma13xzHXcncMavQ=="
+ "version": "1.13.0",
+ "resolved": "https://registry.npmjs.org/tslib/-/tslib-1.13.0.tgz",
+ "integrity": "sha512-i/6DQjL8Xf3be4K/E6Wgpekn5Qasl1usyw++dAA35Ue5orEn65VIxOA+YvNNl9HV3qv70T7CNwjODHZrLwvd1Q=="
},
"tty-browserify": {
"version": "0.0.0",
- "resolved": "http://registry.npmjs.org/tty-browserify/-/tty-browserify-0.0.0.tgz",
+ "resolved": "https://registry.npmjs.org/tty-browserify/-/tty-browserify-0.0.0.tgz",
"integrity": "sha1-oVe6QC2iTpv5V/mqadUk7tQpAaY="
},
"tunnel-agent": {
@@ -17358,6 +26110,11 @@
"resolved": "https://registry.npmjs.org/tweetnacl/-/tweetnacl-0.14.5.tgz",
"integrity": "sha1-WuaBd/GS1EViadEIr6k/+HQ/T2Q="
},
+ "type": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/type/-/type-1.2.0.tgz",
+ "integrity": "sha512-+5nt5AAniqsCnu2cEQQdpzCAh33kVx8n0VoFidKpB1dVVLAN/F+bgVOqOJqOnEnrhp222clB5p3vUlD+1QAnfg=="
+ },
"type-check": {
"version": "0.3.2",
"resolved": "https://registry.npmjs.org/type-check/-/type-check-0.3.2.tgz",
@@ -17366,13 +26123,33 @@
"prelude-ls": "~1.1.2"
}
},
+ "type-fest": {
+ "version": "0.8.1",
+ "resolved": "https://registry.npmjs.org/type-fest/-/type-fest-0.8.1.tgz",
+ "integrity": "sha512-4dbzIzqvjtgiM5rw1k5rEHtBANKmdudhGyBEajN01fEyhaAIhsoKNy6y7+IN93IfpFtwY9iqi7kD+xwKhQsNJA=="
+ },
"type-is": {
- "version": "1.6.16",
- "resolved": "https://registry.npmjs.org/type-is/-/type-is-1.6.16.tgz",
- "integrity": "sha512-HRkVv/5qY2G6I8iab9cI7v1bOIdhm94dVjQCPFElW9W+3GeDOSHmy2EBYe4VTApuzolPcmgFTN3ftVJRKR2J9Q==",
+ "version": "1.6.18",
+ "resolved": "https://registry.npmjs.org/type-is/-/type-is-1.6.18.tgz",
+ "integrity": "sha512-TkRKr9sUTxEH8MdfuCSP7VizJyzRNMjj2J2do2Jr3Kym598JVdEksuzPQCnlFPW4ky9Q+iA+ma9BGm06XQBy8g==",
"requires": {
"media-typer": "0.3.0",
- "mime-types": "~2.1.18"
+ "mime-types": "~2.1.24"
+ },
+ "dependencies": {
+ "mime-db": {
+ "version": "1.44.0",
+ "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.44.0.tgz",
+ "integrity": "sha512-/NOTfLrsPBVeH7YtFPgsVWveuL+4SjjYxaQ1xtM1KMFj7HdxlBlxeyNLzhyJVx7r4rZGJAZ/6lkKCitSc/Nmpg=="
+ },
+ "mime-types": {
+ "version": "2.1.27",
+ "resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.27.tgz",
+ "integrity": "sha512-JIhqnCasI9yD+SsmkquHBxTSEuZdQX5BuQnS2Vc7puQQQ+8yiP5AY5uWhpdv4YL4VM5c6iliiYWPgJ/nJQLp7w==",
+ "requires": {
+ "mime-db": "1.44.0"
+ }
+ }
}
},
"type-of": {
@@ -17385,6 +26162,14 @@
"resolved": "https://registry.npmjs.org/typedarray/-/typedarray-0.0.6.tgz",
"integrity": "sha1-hnrHTjhkGHsdPUfZlqeOxciDB3c="
},
+ "typedarray-to-buffer": {
+ "version": "3.1.5",
+ "resolved": "https://registry.npmjs.org/typedarray-to-buffer/-/typedarray-to-buffer-3.1.5.tgz",
+ "integrity": "sha512-zdu8XMNEDepKKR+XYOXAVPtWui0ly0NtohUscw+UmaHiAWT8hrV1rr//H6V+0DvJ3OQ19S979M0laLfX8rm82Q==",
+ "requires": {
+ "is-typedarray": "^1.0.0"
+ }
+ },
"ua-parser-js": {
"version": "0.7.19",
"resolved": "https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.19.tgz",
@@ -17528,19 +26313,19 @@
}
},
"unique-slug": {
- "version": "2.0.1",
- "resolved": "https://registry.npmjs.org/unique-slug/-/unique-slug-2.0.1.tgz",
- "integrity": "sha512-n9cU6+gITaVu7VGj1Z8feKMmfAjEAQGhwD9fE3zvpRRa0wEIx8ODYkVGfSc94M2OX00tUFV8wH3zYbm1I8mxFg==",
+ "version": "2.0.2",
+ "resolved": "https://registry.npmjs.org/unique-slug/-/unique-slug-2.0.2.tgz",
+ "integrity": "sha512-zoWr9ObaxALD3DOPfjPSqxt4fnZiWblxHIgeWqW8x7UqDzEtHEQLzji2cuJYQFCU6KmoJikOYAZlrTHHebjx2w==",
"requires": {
"imurmurhash": "^0.1.4"
}
},
"unique-string": {
- "version": "1.0.0",
- "resolved": "https://registry.npmjs.org/unique-string/-/unique-string-1.0.0.tgz",
- "integrity": "sha1-nhBXzKhRq7kzmPizOuGHuZyuwRo=",
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/unique-string/-/unique-string-2.0.0.tgz",
+ "integrity": "sha512-uNaeirEPvpZWSgzwsPGtU2zVSTrn/8L5q/IexZmH0eH6SA73CmAA5U4GwORTxQAZs95TAXLNqeLoPPNO5gZfWg==",
"requires": {
- "crypto-random-string": "^1.0.0"
+ "crypto-random-string": "^2.0.0"
}
},
"unist-builder": {
@@ -17654,6 +26439,14 @@
"resolved": "https://registry.npmjs.org/universalify/-/universalify-0.1.2.tgz",
"integrity": "sha512-rBJeI5CXAlmy1pV+617WB9J63U6XcazHHF2f2dbJix4XzpUF0RS3Zbj0FGIOCAva5P/d/GBOYaACQ1w+0azUkg=="
},
+ "unixify": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/unixify/-/unixify-1.0.0.tgz",
+ "integrity": "sha1-OmQcjC/7zk2mg6XHDwOkYpQMIJA=",
+ "requires": {
+ "normalize-path": "^2.1.1"
+ }
+ },
"unpipe": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/unpipe/-/unpipe-1.0.0.tgz",
@@ -17700,31 +26493,179 @@
}
}
},
- "unzip-response": {
- "version": "2.0.1",
- "resolved": "https://registry.npmjs.org/unzip-response/-/unzip-response-2.0.1.tgz",
- "integrity": "sha1-0vD3N9FrBhXnKmk17QQhRXLVb5c="
- },
"upath": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/upath/-/upath-1.1.0.tgz",
"integrity": "sha512-bzpH/oBhoS/QI/YtbkqCg6VEiPYjSZtrHQM6/QnJS6OL9pKUFLqb3aFh4Scvwm45+7iAgiMkLhSbaZxUqmrprw=="
},
"update-notifier": {
- "version": "2.5.0",
- "resolved": "https://registry.npmjs.org/update-notifier/-/update-notifier-2.5.0.tgz",
- "integrity": "sha512-gwMdhgJHGuj/+wHJJs9e6PcCszpxR1b236igrOkUofGhqJuG+amlIKwApH1IW1WWl7ovZxsX49lMBWLxSdm5Dw==",
+ "version": "3.0.1",
+ "resolved": "https://registry.npmjs.org/update-notifier/-/update-notifier-3.0.1.tgz",
+ "integrity": "sha512-grrmrB6Zb8DUiyDIaeRTBCkgISYUgETNe7NglEbVsrLWXeESnlCSP50WfRSj/GmzMPl6Uchj24S/p80nP/ZQrQ==",
"requires": {
- "boxen": "^1.2.1",
+ "boxen": "^3.0.0",
"chalk": "^2.0.1",
- "configstore": "^3.0.0",
+ "configstore": "^4.0.0",
+ "has-yarn": "^2.1.0",
"import-lazy": "^2.1.0",
- "is-ci": "^1.0.10",
+ "is-ci": "^2.0.0",
"is-installed-globally": "^0.1.0",
- "is-npm": "^1.0.0",
- "latest-version": "^3.0.0",
+ "is-npm": "^3.0.0",
+ "is-yarn-global": "^0.3.0",
+ "latest-version": "^5.0.0",
"semver-diff": "^2.0.0",
"xdg-basedir": "^3.0.0"
+ },
+ "dependencies": {
+ "ansi-regex": {
+ "version": "4.1.0",
+ "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz",
+ "integrity": "sha512-1apePfXM1UOSqw0o9IiFAovVz9M5S1Dg+4TrDwfMewQ6p/rmMueb7tWZjQ1rx4Loy1ArBggoqGpfqqdI4rondg=="
+ },
+ "boxen": {
+ "version": "3.2.0",
+ "resolved": "https://registry.npmjs.org/boxen/-/boxen-3.2.0.tgz",
+ "integrity": "sha512-cU4J/+NodM3IHdSL2yN8bqYqnmlBTidDR4RC7nJs61ZmtGz8VZzM3HLQX0zY5mrSmPtR3xWwsq2jOUQqFZN8+A==",
+ "requires": {
+ "ansi-align": "^3.0.0",
+ "camelcase": "^5.3.1",
+ "chalk": "^2.4.2",
+ "cli-boxes": "^2.2.0",
+ "string-width": "^3.0.0",
+ "term-size": "^1.2.0",
+ "type-fest": "^0.3.0",
+ "widest-line": "^2.0.0"
+ },
+ "dependencies": {
+ "chalk": {
+ "version": "2.4.2",
+ "resolved": "https://registry.npmjs.org/chalk/-/chalk-2.4.2.tgz",
+ "integrity": "sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ==",
+ "requires": {
+ "ansi-styles": "^3.2.1",
+ "escape-string-regexp": "^1.0.5",
+ "supports-color": "^5.3.0"
+ }
+ }
+ }
+ },
+ "camelcase": {
+ "version": "5.3.1",
+ "resolved": "https://registry.npmjs.org/camelcase/-/camelcase-5.3.1.tgz",
+ "integrity": "sha512-L28STB170nwWS63UjtlEOE3dldQApaJXZkOI1uMFfzf3rRuPegHaHesyee+YxQ+W6SvRDQV6UrdOdRiR153wJg=="
+ },
+ "configstore": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/configstore/-/configstore-4.0.0.tgz",
+ "integrity": "sha512-CmquAXFBocrzaSM8mtGPMM/HiWmyIpr4CcJl/rgY2uCObZ/S7cKU0silxslqJejl+t/T9HS8E0PUNQD81JGUEQ==",
+ "requires": {
+ "dot-prop": "^4.1.0",
+ "graceful-fs": "^4.1.2",
+ "make-dir": "^1.0.0",
+ "unique-string": "^1.0.0",
+ "write-file-atomic": "^2.0.0",
+ "xdg-basedir": "^3.0.0"
+ }
+ },
+ "crypto-random-string": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/crypto-random-string/-/crypto-random-string-1.0.0.tgz",
+ "integrity": "sha1-ojD2T1aDEOFJgAmUB5DsmVRbyn4="
+ },
+ "dot-prop": {
+ "version": "4.2.0",
+ "resolved": "https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz",
+ "integrity": "sha512-tUMXrxlExSW6U2EXiiKGSBVdYgtV8qlHL+C10TsW4PURY/ic+eaysnSkwB4kA/mBlCyy/IKDJ+Lc3wbWeaXtuQ==",
+ "requires": {
+ "is-obj": "^1.0.0"
+ }
+ },
+ "string-width": {
+ "version": "3.1.0",
+ "resolved": "https://registry.npmjs.org/string-width/-/string-width-3.1.0.tgz",
+ "integrity": "sha512-vafcv6KjVZKSgz06oM/H6GDBrAtz8vdhQakGjFIvNrHA6y3HCF1CInLy+QLq8dTJPQ1b+KDUqDFctkdRW44e1w==",
+ "requires": {
+ "emoji-regex": "^7.0.1",
+ "is-fullwidth-code-point": "^2.0.0",
+ "strip-ansi": "^5.1.0"
+ }
+ },
+ "strip-ansi": {
+ "version": "5.2.0",
+ "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-5.2.0.tgz",
+ "integrity": "sha512-DuRs1gKbBqsMKIZlrffwlug8MHkcnpjs5VPmL1PAh+mA30U0DTotfDZ0d2UUsXpPmPmMMJ6W773MaA3J+lbiWA==",
+ "requires": {
+ "ansi-regex": "^4.1.0"
+ }
+ },
+ "term-size": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/term-size/-/term-size-1.2.0.tgz",
+ "integrity": "sha1-RYuDiH8oj8Vtb/+/rSYuJmOO+mk=",
+ "requires": {
+ "execa": "^0.7.0"
+ }
+ },
+ "type-fest": {
+ "version": "0.3.1",
+ "resolved": "https://registry.npmjs.org/type-fest/-/type-fest-0.3.1.tgz",
+ "integrity": "sha512-cUGJnCdr4STbePCgqNFbpVNCepa+kAVohJs1sLhxzdH+gnEoOd8VhbYa7pD3zZYGiURWM2xzEII3fQcRizDkYQ=="
+ },
+ "unique-string": {
+ "version": "1.0.0",
+ "resolved": "https://registry.npmjs.org/unique-string/-/unique-string-1.0.0.tgz",
+ "integrity": "sha1-nhBXzKhRq7kzmPizOuGHuZyuwRo=",
+ "requires": {
+ "crypto-random-string": "^1.0.0"
+ }
+ },
+ "widest-line": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/widest-line/-/widest-line-2.0.1.tgz",
+ "integrity": "sha512-Ba5m9/Fa4Xt9eb2ELXt77JxVDV8w7qQrH0zS/TWSJdLyAwQjWoOzpzj5lwVftDz6n/EOu3tNACS84v509qwnJA==",
+ "requires": {
+ "string-width": "^2.1.1"
+ },
+ "dependencies": {
+ "ansi-regex": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz",
+ "integrity": "sha1-7QMXwyIGT3lGbAKWa922Bas32Zg="
+ },
+ "string-width": {
+ "version": "2.1.1",
+ "resolved": "https://registry.npmjs.org/string-width/-/string-width-2.1.1.tgz",
+ "integrity": "sha512-nOqH59deCq9SRHlxq1Aw85Jnt4w6KvLKqWVik6oA9ZklXLNIOlqg4F2yrT1MVaTjAqvVwdfeZ7w7aCvJD7ugkw==",
+ "requires": {
+ "is-fullwidth-code-point": "^2.0.0",
+ "strip-ansi": "^4.0.0"
+ }
+ },
+ "strip-ansi": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-4.0.0.tgz",
+ "integrity": "sha1-qEeQIusaw2iocTibY1JixQXuNo8=",
+ "requires": {
+ "ansi-regex": "^3.0.0"
+ }
+ }
+ }
+ },
+ "write-file-atomic": {
+ "version": "2.4.3",
+ "resolved": "https://registry.npmjs.org/write-file-atomic/-/write-file-atomic-2.4.3.tgz",
+ "integrity": "sha512-GaETH5wwsX+GcnzhPgKcKjJ6M2Cq3/iZp1WyY/X1CSqrW+jVNM9Y7D8EC2sM4ZG/V8wZlSniJnCKWPmBYAucRQ==",
+ "requires": {
+ "graceful-fs": "^4.1.11",
+ "imurmurhash": "^0.1.4",
+ "signal-exit": "^3.0.2"
+ }
+ },
+ "xdg-basedir": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/xdg-basedir/-/xdg-basedir-3.0.0.tgz",
+ "integrity": "sha1-SWsswQnsqNus/i3HK2A8F8WHCtQ="
+ }
}
},
"upper-case": {
@@ -17833,6 +26774,15 @@
"resolved": "https://registry.npmjs.org/url-to-options/-/url-to-options-1.0.1.tgz",
"integrity": "sha1-FQWgOiiaSMvXpDTvuu7FBV9WM6k="
},
+ "urql": {
+ "version": "1.9.8",
+ "resolved": "https://registry.npmjs.org/urql/-/urql-1.9.8.tgz",
+ "integrity": "sha512-AMikyJ9ldVvFVRND7AjgHJ3dBZXH2ygTM9bj4BwQzE9gfJfWA1wK+dXffV1WTOdOoCRngIxGWgZIzSkoLGBpbw==",
+ "requires": {
+ "@urql/core": "^1.12.0",
+ "wonka": "^4.0.14"
+ }
+ },
"use": {
"version": "3.1.1",
"resolved": "https://registry.npmjs.org/use/-/use-3.1.1.tgz",
@@ -17900,9 +26850,9 @@
"integrity": "sha1-IpnwLG3tMNSllhsLn3RSShj2NPw="
},
"vendors": {
- "version": "1.0.2",
- "resolved": "https://registry.npmjs.org/vendors/-/vendors-1.0.2.tgz",
- "integrity": "sha512-w/hry/368nO21AN9QljsaIhb9ZiZtZARoVH5f3CsFbawdLdayCgKRPup7CggujvySMxx0I91NOyxdVENohprLQ=="
+ "version": "1.0.4",
+ "resolved": "https://registry.npmjs.org/vendors/-/vendors-1.0.4.tgz",
+ "integrity": "sha512-/juG65kTL4Cy2su4P8HjtkTxk6VmJDiOPBufWniqQ6wknac6jNiXS9vU+hO3wgusiyqWlzTbVHi0dyJqRONg3w=="
},
"verror": {
"version": "1.10.0",
@@ -17946,29 +26896,208 @@
}
},
"vm-browserify": {
- "version": "0.0.4",
- "resolved": "http://registry.npmjs.org/vm-browserify/-/vm-browserify-0.0.4.tgz",
- "integrity": "sha1-XX6kW7755Kb/ZflUOOCofDV9WnM=",
+ "version": "1.1.2",
+ "resolved": "https://registry.npmjs.org/vm-browserify/-/vm-browserify-1.1.2.tgz",
+ "integrity": "sha512-2ham8XPWTONajOR0ohOKOHXkm3+gaBmGut3SRuu75xLd/RRaY6vqgh8NBYYk7+RW3u5AtzPQZG8F10LHkl0lAQ=="
+ },
+ "vue-template-compiler": {
+ "version": "2.6.11",
+ "resolved": "https://registry.npmjs.org/vue-template-compiler/-/vue-template-compiler-2.6.11.tgz",
+ "integrity": "sha512-KIq15bvQDrcCjpGjrAhx4mUlyyHfdmTaoNfeoATHLAiWB+MU3cx4lOzMwrnUh9cCxy0Lt1T11hAFY6TQgroUAA==",
+ "optional": true,
"requires": {
- "indexof": "0.0.1"
+ "de-indent": "^1.0.2",
+ "he": "^1.1.0"
}
},
"warning": {
- "version": "3.0.0",
- "resolved": "https://registry.npmjs.org/warning/-/warning-3.0.0.tgz",
- "integrity": "sha1-MuU3fLVy3kqwR1O9+IIcAe1gW3w=",
+ "version": "4.0.3",
+ "resolved": "https://registry.npmjs.org/warning/-/warning-4.0.3.tgz",
+ "integrity": "sha512-rpJyN222KWIvHJ/F53XSZv0Zl/accqHR8et1kpaMTD/fLCRxtV8iX8czMzY7sVZupTI3zcUTg8eycS2kNF9l6w==",
"requires": {
"loose-envify": "^1.0.0"
}
},
"watchpack": {
- "version": "1.6.0",
- "resolved": "https://registry.npmjs.org/watchpack/-/watchpack-1.6.0.tgz",
- "integrity": "sha512-i6dHe3EyLjMmDlU1/bGQpEw25XSjkJULPuAVKCbNRefQVq48yXKUpwg538F7AZTf9kyr57zj++pQFltUa5H7yA==",
+ "version": "1.7.2",
+ "resolved": "https://registry.npmjs.org/watchpack/-/watchpack-1.7.2.tgz",
+ "integrity": "sha512-ymVbbQP40MFTp+cNMvpyBpBtygHnPzPkHqoIwRRj/0B8KhqQwV8LaKjtbaxF2lK4vl8zN9wCxS46IFCU5K4W0g==",
"requires": {
- "chokidar": "^2.0.2",
+ "chokidar": "^3.4.0",
"graceful-fs": "^4.1.2",
- "neo-async": "^2.5.0"
+ "neo-async": "^2.5.0",
+ "watchpack-chokidar2": "^2.0.0"
+ },
+ "dependencies": {
+ "anymatch": {
+ "version": "3.1.1",
+ "resolved": "https://registry.npmjs.org/anymatch/-/anymatch-3.1.1.tgz",
+ "integrity": "sha512-mM8522psRCqzV+6LhomX5wgp25YVibjh8Wj23I5RPkPppSVSjyKD2A2mBJmWGa+KN7f2D6LNh9jkBCeyLktzjg==",
+ "optional": true,
+ "requires": {
+ "normalize-path": "^3.0.0",
+ "picomatch": "^2.0.4"
+ }
+ },
+ "binary-extensions": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/binary-extensions/-/binary-extensions-2.0.0.tgz",
+ "integrity": "sha512-Phlt0plgpIIBOGTT/ehfFnbNlfsDEiqmzE2KRXoX1bLIlir4X/MR+zSyBEkL05ffWgnRSf/DXv+WrUAVr93/ow==",
+ "optional": true
+ },
+ "braces": {
+ "version": "3.0.2",
+ "resolved": "https://registry.npmjs.org/braces/-/braces-3.0.2.tgz",
+ "integrity": "sha512-b8um+L1RzM3WDSzvhm6gIz1yfTbBt6YTlcEKAvsmqCZZFw46z626lVj9j1yEPW33H5H+lBQpZMP1k8l+78Ha0A==",
+ "optional": true,
+ "requires": {
+ "fill-range": "^7.0.1"
+ }
+ },
+ "chokidar": {
+ "version": "3.4.0",
+ "resolved": "https://registry.npmjs.org/chokidar/-/chokidar-3.4.0.tgz",
+ "integrity": "sha512-aXAaho2VJtisB/1fg1+3nlLJqGOuewTzQpd/Tz0yTg2R0e4IGtshYvtjowyEumcBv2z+y4+kc75Mz7j5xJskcQ==",
+ "optional": true,
+ "requires": {
+ "anymatch": "~3.1.1",
+ "braces": "~3.0.2",
+ "fsevents": "~2.1.2",
+ "glob-parent": "~5.1.0",
+ "is-binary-path": "~2.1.0",
+ "is-glob": "~4.0.1",
+ "normalize-path": "~3.0.0",
+ "readdirp": "~3.4.0"
+ }
+ },
+ "fill-range": {
+ "version": "7.0.1",
+ "resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.0.1.tgz",
+ "integrity": "sha512-qOo9F+dMUmC2Lcb4BbVvnKJxTPjCm+RRpe4gDuGrzkL7mEVl/djYSu2OdQ2Pa302N4oqkSg9ir6jaLWJ2USVpQ==",
+ "optional": true,
+ "requires": {
+ "to-regex-range": "^5.0.1"
+ }
+ },
+ "glob-parent": {
+ "version": "5.1.1",
+ "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz",
+ "integrity": "sha512-FnI+VGOpnlGHWZxthPGR+QhR78fuiK0sNLkHQv+bL9fQi57lNNdquIbna/WrfROrolq8GK5Ek6BiMwqL/voRYQ==",
+ "optional": true,
+ "requires": {
+ "is-glob": "^4.0.1"
+ }
+ },
+ "is-binary-path": {
+ "version": "2.1.0",
+ "resolved": "https://registry.npmjs.org/is-binary-path/-/is-binary-path-2.1.0.tgz",
+ "integrity": "sha512-ZMERYes6pDydyuGidse7OsHxtbI7WVeUEozgR/g7rd0xUimYNlvZRE/K2MgZTjWy725IfelLeVcEM97mmtRGXw==",
+ "optional": true,
+ "requires": {
+ "binary-extensions": "^2.0.0"
+ }
+ },
+ "is-glob": {
+ "version": "4.0.1",
+ "resolved": "https://registry.npmjs.org/is-glob/-/is-glob-4.0.1.tgz",
+ "integrity": "sha512-5G0tKtBTFImOqDnLB2hG6Bp2qcKEFduo4tZu9MT/H6NQv/ghhy30o55ufafxJ/LdH79LLs2Kfrn85TLKyA7BUg==",
+ "optional": true,
+ "requires": {
+ "is-extglob": "^2.1.1"
+ }
+ },
+ "is-number": {
+ "version": "7.0.0",
+ "resolved": "https://registry.npmjs.org/is-number/-/is-number-7.0.0.tgz",
+ "integrity": "sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng==",
+ "optional": true
+ },
+ "normalize-path": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/normalize-path/-/normalize-path-3.0.0.tgz",
+ "integrity": "sha512-6eZs5Ls3WtCisHWp9S2GUy8dqkpGi4BVSz3GaqiE6ezub0512ESztXUwUB6C6IKbQkY2Pnb/mD4WYojCRwcwLA==",
+ "optional": true
+ },
+ "readdirp": {
+ "version": "3.4.0",
+ "resolved": "https://registry.npmjs.org/readdirp/-/readdirp-3.4.0.tgz",
+ "integrity": "sha512-0xe001vZBnJEK+uKcj8qOhyAKPzIT+gStxWr3LCB0DwcXR5NZJ3IaC+yGnHCYzB/S7ov3m3EEbZI2zeNvX+hGQ==",
+ "optional": true,
+ "requires": {
+ "picomatch": "^2.2.1"
+ }
+ },
+ "to-regex-range": {
+ "version": "5.0.1",
+ "resolved": "https://registry.npmjs.org/to-regex-range/-/to-regex-range-5.0.1.tgz",
+ "integrity": "sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ==",
+ "optional": true,
+ "requires": {
+ "is-number": "^7.0.0"
+ }
+ }
+ }
+ },
+ "watchpack-chokidar2": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/watchpack-chokidar2/-/watchpack-chokidar2-2.0.0.tgz",
+ "integrity": "sha512-9TyfOyN/zLUbA288wZ8IsMZ+6cbzvsNyEzSBp6e/zkifi6xxbl8SmQ/CxQq32k8NNqrdVEVUVSEf56L4rQ/ZxA==",
+ "optional": true,
+ "requires": {
+ "chokidar": "^2.1.8"
+ },
+ "dependencies": {
+ "bindings": {
+ "version": "1.5.0",
+ "resolved": "https://registry.npmjs.org/bindings/-/bindings-1.5.0.tgz",
+ "integrity": "sha512-p2q/t/mhvuOj/UeLlV6566GD/guowlr0hHxClI0W9m7MWYkL1F0hLo+0Aexs9HSPCtR1SXQ0TD3MMKrXZajbiQ==",
+ "optional": true,
+ "requires": {
+ "file-uri-to-path": "1.0.0"
+ }
+ },
+ "chokidar": {
+ "version": "2.1.8",
+ "resolved": "https://registry.npmjs.org/chokidar/-/chokidar-2.1.8.tgz",
+ "integrity": "sha512-ZmZUazfOzf0Nve7duiCKD23PFSCs4JPoYyccjUFF3aQkQadqBhfzhjkwBH2mNOG9cTBwhamM37EIsIkZw3nRgg==",
+ "optional": true,
+ "requires": {
+ "anymatch": "^2.0.0",
+ "async-each": "^1.0.1",
+ "braces": "^2.3.2",
+ "fsevents": "^1.2.7",
+ "glob-parent": "^3.1.0",
+ "inherits": "^2.0.3",
+ "is-binary-path": "^1.0.0",
+ "is-glob": "^4.0.0",
+ "normalize-path": "^3.0.0",
+ "path-is-absolute": "^1.0.0",
+ "readdirp": "^2.2.1",
+ "upath": "^1.1.1"
+ }
+ },
+ "fsevents": {
+ "version": "1.2.13",
+ "resolved": "https://registry.npmjs.org/fsevents/-/fsevents-1.2.13.tgz",
+ "integrity": "sha512-oWb1Z6mkHIskLzEJ/XWX0srkpkTQ7vaopMQkyaEIoq0fmtFVxOthb8cCxeT+p3ynTdkk/RZwbgG4brR5BeWECw==",
+ "optional": true,
+ "requires": {
+ "bindings": "^1.5.0",
+ "nan": "^2.12.1"
+ }
+ },
+ "normalize-path": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/normalize-path/-/normalize-path-3.0.0.tgz",
+ "integrity": "sha512-6eZs5Ls3WtCisHWp9S2GUy8dqkpGi4BVSz3GaqiE6ezub0512ESztXUwUB6C6IKbQkY2Pnb/mD4WYojCRwcwLA==",
+ "optional": true
+ },
+ "upath": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/upath/-/upath-1.2.0.tgz",
+ "integrity": "sha512-aZwGpamFO61g3OlfT7OQCHqhGnW43ieH9WZeP7QxN/G/jS4jfqUkZxoryvJgVPEcrl5NL/ggHsSmLMHuH64Lhg==",
+ "optional": true
+ }
}
},
"wbuf": {
@@ -18016,106 +27145,164 @@
},
"dependencies": {
"acorn": {
- "version": "5.7.3",
- "resolved": "https://registry.npmjs.org/acorn/-/acorn-5.7.3.tgz",
- "integrity": "sha512-T/zvzYRfbVojPWahDsE5evJdHb3oJoQfFbsrKM7w5Zcs++Tr257tia3BmMP8XYVjp1S9RZXQMh7gao96BlqZOw=="
+ "version": "5.7.4",
+ "resolved": "https://registry.npmjs.org/acorn/-/acorn-5.7.4.tgz",
+ "integrity": "sha512-1D++VG7BhrtvQpNbBzovKNc1FLGGEE/oGe7b9xJm/RFHMBeUaUGpluV9RLjZa47YFdPcDAenEYuq9pQPcMdLJg=="
+ },
+ "acorn-dynamic-import": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/acorn-dynamic-import/-/acorn-dynamic-import-3.0.0.tgz",
+ "integrity": "sha512-zVWV8Z8lislJoOKKqdNMOB+s6+XV5WERty8MnKBeFgwA+19XJjJHs2RP5dzM57FftIs+jQnRToLiWazKr6sSWg==",
+ "requires": {
+ "acorn": "^5.0.0"
+ }
},
"eslint-scope": {
- "version": "4.0.0",
- "resolved": "https://registry.npmjs.org/eslint-scope/-/eslint-scope-4.0.0.tgz",
- "integrity": "sha512-1G6UTDi7Jc1ELFwnR58HV4fK9OQK4S6N985f166xqXxpjU6plxFISJa2Ba9KCQuFa8RCnj/lSFJbHo7UFDBnUA==",
+ "version": "4.0.3",
+ "resolved": "https://registry.npmjs.org/eslint-scope/-/eslint-scope-4.0.3.tgz",
+ "integrity": "sha512-p7VutNr1O/QrxysMo3E45FjYDTeXBy0iTltPFNSqKAIfjDSXC+4dj+qfyuD8bfAXrW/y6lW3O76VaYNPKfpKrg==",
"requires": {
"esrecurse": "^4.1.0",
"estraverse": "^4.1.1"
}
+ },
+ "schema-utils": {
+ "version": "0.4.7",
+ "resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-0.4.7.tgz",
+ "integrity": "sha512-v/iwU6wvwGK8HbU9yi3/nhGzP0yGSuhQMzL6ySiec1FSrZZDkhm4noOSWzrNFo/jEc+SJY6jRTwuwbSXJPDUnQ==",
+ "requires": {
+ "ajv": "^6.1.0",
+ "ajv-keywords": "^3.1.0"
+ }
}
}
},
"webpack-dev-middleware": {
- "version": "3.6.0",
- "resolved": "https://registry.npmjs.org/webpack-dev-middleware/-/webpack-dev-middleware-3.6.0.tgz",
- "integrity": "sha512-oeXA3m+5gbYbDBGo4SvKpAHJJEGMoekUbHgo1RK7CP1sz7/WOSeu/dWJtSTk+rzDCLkPwQhGocgIq6lQqOyOwg==",
+ "version": "3.7.2",
+ "resolved": "https://registry.npmjs.org/webpack-dev-middleware/-/webpack-dev-middleware-3.7.2.tgz",
+ "integrity": "sha512-1xC42LxbYoqLNAhV6YzTYacicgMZQTqRd27Sim9wn5hJrX3I5nxYy1SxSd4+gjUFsz1dQFj+yEe6zEVmSkeJjw==",
"requires": {
"memory-fs": "^0.4.1",
- "mime": "^2.3.1",
- "range-parser": "^1.0.3",
+ "mime": "^2.4.4",
+ "mkdirp": "^0.5.1",
+ "range-parser": "^1.2.1",
"webpack-log": "^2.0.0"
+ },
+ "dependencies": {
+ "mime": {
+ "version": "2.4.6",
+ "resolved": "https://registry.npmjs.org/mime/-/mime-2.4.6.tgz",
+ "integrity": "sha512-RZKhC3EmpBchfTGBVb8fb+RL2cWyw/32lshnsETttkBAyAUXSGHxbEJWWRXc751DrIxG1q04b8QwMbAwkRPpUA=="
+ }
}
},
"webpack-dev-server": {
- "version": "3.2.1",
- "resolved": "https://registry.npmjs.org/webpack-dev-server/-/webpack-dev-server-3.2.1.tgz",
- "integrity": "sha512-sjuE4mnmx6JOh9kvSbPYw3u/6uxCLHNWfhWaIPwcXWsvWOPN+nc5baq4i9jui3oOBRXGonK9+OI0jVkaz6/rCw==",
+ "version": "3.11.0",
+ "resolved": "https://registry.npmjs.org/webpack-dev-server/-/webpack-dev-server-3.11.0.tgz",
+ "integrity": "sha512-PUxZ+oSTxogFQgkTtFndEtJIPNmml7ExwufBZ9L2/Xyyd5PnOL5UreWe5ZT7IU25DSdykL9p1MLQzmLh2ljSeg==",
"requires": {
"ansi-html": "0.0.7",
"bonjour": "^3.5.0",
- "chokidar": "^2.0.0",
- "compression": "^1.5.2",
- "connect-history-api-fallback": "^1.3.0",
+ "chokidar": "^2.1.8",
+ "compression": "^1.7.4",
+ "connect-history-api-fallback": "^1.6.0",
"debug": "^4.1.1",
- "del": "^3.0.0",
- "express": "^4.16.2",
- "html-entities": "^1.2.0",
- "http-proxy-middleware": "^0.19.1",
+ "del": "^4.1.1",
+ "express": "^4.17.1",
+ "html-entities": "^1.3.1",
+ "http-proxy-middleware": "0.19.1",
"import-local": "^2.0.0",
- "internal-ip": "^4.2.0",
+ "internal-ip": "^4.3.0",
"ip": "^1.1.5",
- "killable": "^1.0.0",
- "loglevel": "^1.4.1",
- "opn": "^5.1.0",
- "portfinder": "^1.0.9",
+ "is-absolute-url": "^3.0.3",
+ "killable": "^1.0.1",
+ "loglevel": "^1.6.8",
+ "opn": "^5.5.0",
+ "p-retry": "^3.0.1",
+ "portfinder": "^1.0.26",
"schema-utils": "^1.0.0",
- "selfsigned": "^1.9.1",
- "semver": "^5.6.0",
- "serve-index": "^1.7.2",
- "sockjs": "0.3.19",
- "sockjs-client": "1.3.0",
- "spdy": "^4.0.0",
- "strip-ansi": "^3.0.0",
+ "selfsigned": "^1.10.7",
+ "semver": "^6.3.0",
+ "serve-index": "^1.9.1",
+ "sockjs": "0.3.20",
+ "sockjs-client": "1.4.0",
+ "spdy": "^4.0.2",
+ "strip-ansi": "^3.0.1",
"supports-color": "^6.1.0",
"url": "^0.11.0",
- "webpack-dev-middleware": "^3.5.1",
+ "webpack-dev-middleware": "^3.7.2",
"webpack-log": "^2.0.0",
- "yargs": "12.0.2"
+ "ws": "^6.2.1",
+ "yargs": "^13.3.2"
},
"dependencies": {
+ "@types/glob": {
+ "version": "7.1.2",
+ "resolved": "https://registry.npmjs.org/@types/glob/-/glob-7.1.2.tgz",
+ "integrity": "sha512-VgNIkxK+j7Nz5P7jvUZlRvhuPSmsEfS03b0alKcq5V/STUKAa3Plemsn5mrQUO7am6OErJ4rhGEGJbACclrtRA==",
+ "requires": {
+ "@types/minimatch": "*",
+ "@types/node": "*"
+ }
+ },
"ansi-regex": {
- "version": "3.0.0",
- "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz",
- "integrity": "sha1-7QMXwyIGT3lGbAKWa922Bas32Zg="
+ "version": "4.1.0",
+ "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz",
+ "integrity": "sha512-1apePfXM1UOSqw0o9IiFAovVz9M5S1Dg+4TrDwfMewQ6p/rmMueb7tWZjQ1rx4Loy1ArBggoqGpfqqdI4rondg=="
+ },
+ "bindings": {
+ "version": "1.5.0",
+ "resolved": "https://registry.npmjs.org/bindings/-/bindings-1.5.0.tgz",
+ "integrity": "sha512-p2q/t/mhvuOj/UeLlV6566GD/guowlr0hHxClI0W9m7MWYkL1F0hLo+0Aexs9HSPCtR1SXQ0TD3MMKrXZajbiQ==",
+ "optional": true,
+ "requires": {
+ "file-uri-to-path": "1.0.0"
+ }
+ },
+ "camelcase": {
+ "version": "5.3.1",
+ "resolved": "https://registry.npmjs.org/camelcase/-/camelcase-5.3.1.tgz",
+ "integrity": "sha512-L28STB170nwWS63UjtlEOE3dldQApaJXZkOI1uMFfzf3rRuPegHaHesyee+YxQ+W6SvRDQV6UrdOdRiR153wJg=="
+ },
+ "chokidar": {
+ "version": "2.1.8",
+ "resolved": "https://registry.npmjs.org/chokidar/-/chokidar-2.1.8.tgz",
+ "integrity": "sha512-ZmZUazfOzf0Nve7duiCKD23PFSCs4JPoYyccjUFF3aQkQadqBhfzhjkwBH2mNOG9cTBwhamM37EIsIkZw3nRgg==",
+ "requires": {
+ "anymatch": "^2.0.0",
+ "async-each": "^1.0.1",
+ "braces": "^2.3.2",
+ "fsevents": "^1.2.7",
+ "glob-parent": "^3.1.0",
+ "inherits": "^2.0.3",
+ "is-binary-path": "^1.0.0",
+ "is-glob": "^4.0.0",
+ "normalize-path": "^3.0.0",
+ "path-is-absolute": "^1.0.0",
+ "readdirp": "^2.2.1",
+ "upath": "^1.1.1"
+ }
},
"cliui": {
- "version": "4.1.0",
- "resolved": "https://registry.npmjs.org/cliui/-/cliui-4.1.0.tgz",
- "integrity": "sha512-4FG+RSG9DL7uEwRUZXZn3SS34DiDPfzP0VOiEwtUWlE+AR2EIg+hSyvrIgUUfhdgR/UkAeW2QHgeP+hWrXs7jQ==",
+ "version": "5.0.0",
+ "resolved": "https://registry.npmjs.org/cliui/-/cliui-5.0.0.tgz",
+ "integrity": "sha512-PYeGSEmmHM6zvoef2w8TPzlrnNpXIjTipYK780YswmIP9vjxmd6Y2a3CB2Ks6/AU8NHjZugXvo8w3oWM2qnwXA==",
"requires": {
- "string-width": "^2.1.1",
- "strip-ansi": "^4.0.0",
- "wrap-ansi": "^2.0.0"
+ "string-width": "^3.1.0",
+ "strip-ansi": "^5.2.0",
+ "wrap-ansi": "^5.1.0"
},
"dependencies": {
"strip-ansi": {
- "version": "4.0.0",
- "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-4.0.0.tgz",
- "integrity": "sha1-qEeQIusaw2iocTibY1JixQXuNo8=",
+ "version": "5.2.0",
+ "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-5.2.0.tgz",
+ "integrity": "sha512-DuRs1gKbBqsMKIZlrffwlug8MHkcnpjs5VPmL1PAh+mA30U0DTotfDZ0d2UUsXpPmPmMMJ6W773MaA3J+lbiWA==",
"requires": {
- "ansi-regex": "^3.0.0"
+ "ansi-regex": "^4.1.0"
}
}
}
},
- "cross-spawn": {
- "version": "6.0.5",
- "resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-6.0.5.tgz",
- "integrity": "sha512-eTVLrBSt7fjbDygz805pMnstIs2VTBNkRm0qxZd+M7A5XDdxVRWO5MxGBXZhjY4cqLYLdtrGqRf8mBPmzwSpWQ==",
- "requires": {
- "nice-try": "^1.0.4",
- "path-key": "^2.0.1",
- "semver": "^5.5.0",
- "shebang-command": "^1.2.0",
- "which": "^1.2.9"
- }
- },
"debug": {
"version": "4.1.1",
"resolved": "https://registry.npmjs.org/debug/-/debug-4.1.1.tgz",
@@ -18124,12 +27311,18 @@
"ms": "^2.1.1"
}
},
- "decamelize": {
- "version": "2.0.0",
- "resolved": "https://registry.npmjs.org/decamelize/-/decamelize-2.0.0.tgz",
- "integrity": "sha512-Ikpp5scV3MSYxY39ymh45ZLEecsTdv/Xj2CaQfI8RLMuwi7XvjX9H/fhraiSuU+C5w5NTDu4ZU72xNiZnurBPg==",
+ "del": {
+ "version": "4.1.1",
+ "resolved": "https://registry.npmjs.org/del/-/del-4.1.1.tgz",
+ "integrity": "sha512-QwGuEUouP2kVwQenAsOof5Fv8K9t3D8Ca8NxcXKrIpEHjTXK5J2nXLdP+ALI1cgv8wj7KuwBhTwBkOZSJKM5XQ==",
"requires": {
- "xregexp": "4.0.0"
+ "@types/glob": "^7.1.1",
+ "globby": "^6.1.0",
+ "is-path-cwd": "^2.0.0",
+ "is-path-in-cwd": "^2.0.0",
+ "p-map": "^2.0.0",
+ "pify": "^4.0.1",
+ "rimraf": "^2.6.3"
}
},
"eventsource": {
@@ -18140,20 +27333,6 @@
"original": "^1.0.0"
}
},
- "execa": {
- "version": "1.0.0",
- "resolved": "https://registry.npmjs.org/execa/-/execa-1.0.0.tgz",
- "integrity": "sha512-adbxcyWV46qiHyvSp50TKt05tB4tK3HcmF7/nxfAdhnox83seTDbwnaqKO4sXRy7roHAIFqJP/Rw/AuEbX61LA==",
- "requires": {
- "cross-spawn": "^6.0.0",
- "get-stream": "^4.0.0",
- "is-stream": "^1.1.0",
- "npm-run-path": "^2.0.0",
- "p-finally": "^1.0.0",
- "signal-exit": "^3.0.0",
- "strip-eof": "^1.0.0"
- }
- },
"find-up": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/find-up/-/find-up-3.0.0.tgz",
@@ -18162,25 +27341,45 @@
"locate-path": "^3.0.0"
}
},
- "get-stream": {
- "version": "4.1.0",
- "resolved": "https://registry.npmjs.org/get-stream/-/get-stream-4.1.0.tgz",
- "integrity": "sha512-GMat4EJ5161kIy2HevLlr4luNjBgvmj413KaQA7jt4V8B4RDsfpHk7WQ9GVqfYyyx8OS/L66Kox+rJRNklLK7w==",
+ "fsevents": {
+ "version": "1.2.13",
+ "resolved": "https://registry.npmjs.org/fsevents/-/fsevents-1.2.13.tgz",
+ "integrity": "sha512-oWb1Z6mkHIskLzEJ/XWX0srkpkTQ7vaopMQkyaEIoq0fmtFVxOthb8cCxeT+p3ynTdkk/RZwbgG4brR5BeWECw==",
+ "optional": true,
"requires": {
- "pump": "^3.0.0"
+ "bindings": "^1.5.0",
+ "nan": "^2.12.1"
}
},
- "invert-kv": {
- "version": "2.0.0",
- "resolved": "https://registry.npmjs.org/invert-kv/-/invert-kv-2.0.0.tgz",
- "integrity": "sha512-wPVv/y/QQ/Uiirj/vh3oP+1Ww+AWehmi1g5fFWGPF6IpCBCDVrhgHRMvrLfdYcwDh3QJbGXDW4JAuzxElLSqKA=="
+ "get-caller-file": {
+ "version": "2.0.5",
+ "resolved": "https://registry.npmjs.org/get-caller-file/-/get-caller-file-2.0.5.tgz",
+ "integrity": "sha512-DyFP3BM/3YHTQOCUL/w0OZHR0lpKeGrxotcHWcqNEdnltqFwXVfhEBQ94eIo34AfQpo0rGki4cyIiftY06h2Fg=="
},
- "lcid": {
- "version": "2.0.0",
- "resolved": "https://registry.npmjs.org/lcid/-/lcid-2.0.0.tgz",
- "integrity": "sha512-avPEb8P8EGnwXKClwsNUgryVjllcRqtMYa49NTsbQagYuT1DcXnl1915oxWjoyGrXR6zH/Y0Zc96xWsPcoDKeA==",
+ "is-absolute-url": {
+ "version": "3.0.3",
+ "resolved": "https://registry.npmjs.org/is-absolute-url/-/is-absolute-url-3.0.3.tgz",
+ "integrity": "sha512-opmNIX7uFnS96NtPmhWQgQx6/NYFgsUXYMllcfzwWKUMwfo8kku1TvE6hkNcH+Q1ts5cMVrsY7j0bxXQDciu9Q=="
+ },
+ "is-path-cwd": {
+ "version": "2.2.0",
+ "resolved": "https://registry.npmjs.org/is-path-cwd/-/is-path-cwd-2.2.0.tgz",
+ "integrity": "sha512-w942bTcih8fdJPJmQHFzkS76NEP8Kzzvmw92cXsazb8intwLqPibPPdXf4ANdKV3rYMuuQYGIWtvz9JilB3NFQ=="
+ },
+ "is-path-in-cwd": {
+ "version": "2.1.0",
+ "resolved": "https://registry.npmjs.org/is-path-in-cwd/-/is-path-in-cwd-2.1.0.tgz",
+ "integrity": "sha512-rNocXHgipO+rvnP6dk3zI20RpOtrAM/kzbB258Uw5BWr3TpXi861yzjo16Dn4hUox07iw5AyeMLHWsujkjzvRQ==",
"requires": {
- "invert-kv": "^2.0.0"
+ "is-path-inside": "^2.1.0"
+ }
+ },
+ "is-path-inside": {
+ "version": "2.1.0",
+ "resolved": "https://registry.npmjs.org/is-path-inside/-/is-path-inside-2.1.0.tgz",
+ "integrity": "sha512-wiyhTzfDWsvwAW53OBWF5zuvaOGlZ6PwYxAbPVDhpm+gM09xKQGjBq/8uYN12aDvMxnAnq3dxTyoSoRNmg5YFg==",
+ "requires": {
+ "path-is-inside": "^1.0.2"
}
},
"locate-path": {
@@ -18192,35 +27391,15 @@
"path-exists": "^3.0.0"
}
},
- "mem": {
- "version": "4.1.0",
- "resolved": "https://registry.npmjs.org/mem/-/mem-4.1.0.tgz",
- "integrity": "sha512-I5u6Q1x7wxO0kdOpYBB28xueHADYps5uty/zg936CiG8NTe5sJL8EjrCuLneuDW3PlMdZBGDIn8BirEVdovZvg==",
- "requires": {
- "map-age-cleaner": "^0.1.1",
- "mimic-fn": "^1.0.0",
- "p-is-promise": "^2.0.0"
- }
- },
- "os-locale": {
- "version": "3.1.0",
- "resolved": "https://registry.npmjs.org/os-locale/-/os-locale-3.1.0.tgz",
- "integrity": "sha512-Z8l3R4wYWM40/52Z+S265okfFj8Kt2cC2MKY+xNi3kFs+XGI7WXu/I309QQQYbRW4ijiZ+yxs9pqEhJh0DqW3Q==",
- "requires": {
- "execa": "^1.0.0",
- "lcid": "^2.0.0",
- "mem": "^4.0.0"
- }
- },
- "p-is-promise": {
- "version": "2.0.0",
- "resolved": "https://registry.npmjs.org/p-is-promise/-/p-is-promise-2.0.0.tgz",
- "integrity": "sha512-pzQPhYMCAgLAKPWD2jC3Se9fEfrD9npNos0y150EeqZll7akhEgGhTW/slB6lHku8AvYGiJ+YJ5hfHKePPgFWg=="
+ "normalize-path": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/normalize-path/-/normalize-path-3.0.0.tgz",
+ "integrity": "sha512-6eZs5Ls3WtCisHWp9S2GUy8dqkpGi4BVSz3GaqiE6ezub0512ESztXUwUB6C6IKbQkY2Pnb/mD4WYojCRwcwLA=="
},
"p-limit": {
- "version": "2.1.0",
- "resolved": "https://registry.npmjs.org/p-limit/-/p-limit-2.1.0.tgz",
- "integrity": "sha512-NhURkNcrVB+8hNfLuysU8enY5xn2KXphsHBaC2YmRNTZRc7RWusw6apSpdEj3jo4CMb6W9nrF6tTnsJsJeyu6g==",
+ "version": "2.3.0",
+ "resolved": "https://registry.npmjs.org/p-limit/-/p-limit-2.3.0.tgz",
+ "integrity": "sha512-//88mFWSJx8lxCzwdAABTJL2MyWB12+eIY7MDL2SqLmAkeKU9qxRvWuSyTjm3FUmpBEMuFfckAIqEaVGUDxb6w==",
"requires": {
"p-try": "^2.0.0"
}
@@ -18233,6 +27412,29 @@
"p-limit": "^2.0.0"
}
},
+ "p-map": {
+ "version": "2.1.0",
+ "resolved": "https://registry.npmjs.org/p-map/-/p-map-2.1.0.tgz",
+ "integrity": "sha512-y3b8Kpd8OAN444hxfBbFfj1FY/RjtTd8tzYwhUqNYXx0fXx2iX4maP4Qr6qhIKbQXI02wTLAda4fYUbDagTUFw=="
+ },
+ "pify": {
+ "version": "4.0.1",
+ "resolved": "https://registry.npmjs.org/pify/-/pify-4.0.1.tgz",
+ "integrity": "sha512-uB80kBFb/tfd68bVleG9T5GGsGPjJrLAUpR5PZIrhBnIaRTQRjqdJSsIKkOP6OAIFbj7GOrcudc5pNjZ+geV2g=="
+ },
+ "require-main-filename": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/require-main-filename/-/require-main-filename-2.0.0.tgz",
+ "integrity": "sha512-NKN5kMDylKuldxYLSUfrbo5Tuzh4hd+2E8NPPX02mZtn1VuREQToYe/ZdlJy+J3uCpfaiGF05e7B8W0iXbQHmg=="
+ },
+ "rimraf": {
+ "version": "2.7.1",
+ "resolved": "https://registry.npmjs.org/rimraf/-/rimraf-2.7.1.tgz",
+ "integrity": "sha512-uWjbaKIK3T1OSVptzX7Nl6PvQ3qAGtKEtVRjRuazjfL3Bx5eI409VZSqgND+4UNnmzLVdPj9FqFJNPqBZFve4w==",
+ "requires": {
+ "glob": "^7.1.3"
+ }
+ },
"schema-utils": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/schema-utils/-/schema-utils-1.0.0.tgz",
@@ -18243,10 +27445,15 @@
"ajv-keywords": "^3.1.0"
}
},
+ "semver": {
+ "version": "6.3.0",
+ "resolved": "https://registry.npmjs.org/semver/-/semver-6.3.0.tgz",
+ "integrity": "sha512-b39TBaTSfV6yBrapU89p5fKekE2m/NwnDocOVruQFS1/veMgdzuPcnOM34M6CwxW8jH/lxEa5rBoDeUwu5HHTw=="
+ },
"sockjs-client": {
- "version": "1.3.0",
- "resolved": "https://registry.npmjs.org/sockjs-client/-/sockjs-client-1.3.0.tgz",
- "integrity": "sha512-R9jxEzhnnrdxLCNln0xg5uGHqMnkhPSTzUZH2eXcR03S/On9Yvoq2wyUZILRUhZCNVu2PmwWVoyuiPz8th8zbg==",
+ "version": "1.4.0",
+ "resolved": "https://registry.npmjs.org/sockjs-client/-/sockjs-client-1.4.0.tgz",
+ "integrity": "sha512-5zaLyO8/nri5cua0VtOrFXBPK1jbL4+1cebT/mmKA1E1ZXOvJrII75bPu0l0k843G/+iAbhEqzyKr0w/eCCj7g==",
"requires": {
"debug": "^3.2.5",
"eventsource": "^1.0.7",
@@ -18266,6 +27473,26 @@
}
}
},
+ "string-width": {
+ "version": "3.1.0",
+ "resolved": "https://registry.npmjs.org/string-width/-/string-width-3.1.0.tgz",
+ "integrity": "sha512-vafcv6KjVZKSgz06oM/H6GDBrAtz8vdhQakGjFIvNrHA6y3HCF1CInLy+QLq8dTJPQ1b+KDUqDFctkdRW44e1w==",
+ "requires": {
+ "emoji-regex": "^7.0.1",
+ "is-fullwidth-code-point": "^2.0.0",
+ "strip-ansi": "^5.1.0"
+ },
+ "dependencies": {
+ "strip-ansi": {
+ "version": "5.2.0",
+ "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-5.2.0.tgz",
+ "integrity": "sha512-DuRs1gKbBqsMKIZlrffwlug8MHkcnpjs5VPmL1PAh+mA30U0DTotfDZ0d2UUsXpPmPmMMJ6W773MaA3J+lbiWA==",
+ "requires": {
+ "ansi-regex": "^4.1.0"
+ }
+ }
+ }
+ },
"supports-color": {
"version": "6.1.0",
"resolved": "https://registry.npmjs.org/supports-color/-/supports-color-6.1.0.tgz",
@@ -18274,39 +27501,76 @@
"has-flag": "^3.0.0"
}
},
- "yargs": {
- "version": "12.0.2",
- "resolved": "https://registry.npmjs.org/yargs/-/yargs-12.0.2.tgz",
- "integrity": "sha512-e7SkEx6N6SIZ5c5H22RTZae61qtn3PYUE8JYbBFlK9sYmh3DMQ6E5ygtaG/2BW0JZi4WGgTR2IV5ChqlqrDGVQ==",
+ "upath": {
+ "version": "1.2.0",
+ "resolved": "https://registry.npmjs.org/upath/-/upath-1.2.0.tgz",
+ "integrity": "sha512-aZwGpamFO61g3OlfT7OQCHqhGnW43ieH9WZeP7QxN/G/jS4jfqUkZxoryvJgVPEcrl5NL/ggHsSmLMHuH64Lhg=="
+ },
+ "wrap-ansi": {
+ "version": "5.1.0",
+ "resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-5.1.0.tgz",
+ "integrity": "sha512-QC1/iN/2/RPVJ5jYK8BGttj5z83LmSKmvbvrXPNCLZSEb32KKVDJDl/MOt2N01qU2H/FkzEa9PKto1BqDjtd7Q==",
"requires": {
- "cliui": "^4.0.0",
- "decamelize": "^2.0.0",
+ "ansi-styles": "^3.2.0",
+ "string-width": "^3.0.0",
+ "strip-ansi": "^5.0.0"
+ },
+ "dependencies": {
+ "strip-ansi": {
+ "version": "5.2.0",
+ "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-5.2.0.tgz",
+ "integrity": "sha512-DuRs1gKbBqsMKIZlrffwlug8MHkcnpjs5VPmL1PAh+mA30U0DTotfDZ0d2UUsXpPmPmMMJ6W773MaA3J+lbiWA==",
+ "requires": {
+ "ansi-regex": "^4.1.0"
+ }
+ }
+ }
+ },
+ "ws": {
+ "version": "6.2.1",
+ "resolved": "https://registry.npmjs.org/ws/-/ws-6.2.1.tgz",
+ "integrity": "sha512-GIyAXC2cB7LjvpgMt9EKS2ldqr0MTrORaleiOno6TweZ6r3TKtoFQWay/2PceJ3RuBasOHzXNn5Lrw1X0bEjqA==",
+ "requires": {
+ "async-limiter": "~1.0.0"
+ }
+ },
+ "y18n": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/y18n/-/y18n-4.0.0.tgz",
+ "integrity": "sha512-r9S/ZyXu/Xu9q1tYlpsLIsa3EeLXXk0VwlxqTcFRfg9EhMW+17kbt9G0NrgCmhGb5vT2hyhJZLfDGx+7+5Uj/w=="
+ },
+ "yargs": {
+ "version": "13.3.2",
+ "resolved": "https://registry.npmjs.org/yargs/-/yargs-13.3.2.tgz",
+ "integrity": "sha512-AX3Zw5iPruN5ie6xGRIDgqkT+ZhnRlZMLMHAs8tg7nRruy2Nb+i5o9bwghAogtM08q1dpr2LVoS8KSTMYpWXUw==",
+ "requires": {
+ "cliui": "^5.0.0",
"find-up": "^3.0.0",
- "get-caller-file": "^1.0.1",
- "os-locale": "^3.0.0",
+ "get-caller-file": "^2.0.1",
"require-directory": "^2.1.1",
- "require-main-filename": "^1.0.1",
+ "require-main-filename": "^2.0.0",
"set-blocking": "^2.0.0",
- "string-width": "^2.0.0",
+ "string-width": "^3.0.0",
"which-module": "^2.0.0",
- "y18n": "^3.2.1 || ^4.0.0",
- "yargs-parser": "^10.1.0"
+ "y18n": "^4.0.0",
+ "yargs-parser": "^13.1.2"
}
},
"yargs-parser": {
- "version": "10.1.0",
- "resolved": "https://registry.npmjs.org/yargs-parser/-/yargs-parser-10.1.0.tgz",
- "integrity": "sha512-VCIyR1wJoEBZUqk5PA+oOBF6ypbwh5aNB3I50guxAL/quggdfs4TtNHQrSazFA3fYZ+tEqfs0zIGlv0c/rgjbQ==",
+ "version": "13.1.2",
+ "resolved": "https://registry.npmjs.org/yargs-parser/-/yargs-parser-13.1.2.tgz",
+ "integrity": "sha512-3lbsNRf/j+A4QuSZfDRA7HRSfWrzO0YjqTJd5kjAq37Zep1CEgaYmrH9Q3GwPiB9cHyd1Y1UwggGhJGoxipbzg==",
"requires": {
- "camelcase": "^4.1.0"
+ "camelcase": "^5.0.0",
+ "decamelize": "^1.2.0"
}
}
}
},
"webpack-hot-middleware": {
- "version": "2.24.3",
- "resolved": "https://registry.npmjs.org/webpack-hot-middleware/-/webpack-hot-middleware-2.24.3.tgz",
- "integrity": "sha512-pPlmcdoR2Fn6UhYjAhp1g/IJy1Yc9hD+T6O9mjRcWV2pFbBjIFoJXhP0CoD0xPOhWJuWXuZXGBga9ybbOdzXpg==",
+ "version": "2.25.0",
+ "resolved": "https://registry.npmjs.org/webpack-hot-middleware/-/webpack-hot-middleware-2.25.0.tgz",
+ "integrity": "sha512-xs5dPOrGPCzuRXNi8F6rwhawWvQQkeli5Ro48PRuQh8pYPCPmNnltP9itiUPT4xI8oW+y0m59lyyeQk54s5VgA==",
"requires": {
"ansi-html": "0.0.7",
"html-entities": "^1.2.0",
@@ -18324,17 +27588,24 @@
}
},
"webpack-merge": {
- "version": "4.2.1",
- "resolved": "https://registry.npmjs.org/webpack-merge/-/webpack-merge-4.2.1.tgz",
- "integrity": "sha512-4p8WQyS98bUJcCvFMbdGZyZmsKuWjWVnVHnAS3FFg0HDaRVrPbkivx2RYCre8UiemD67RsiFFLfn4JhLAin8Vw==",
+ "version": "4.2.2",
+ "resolved": "https://registry.npmjs.org/webpack-merge/-/webpack-merge-4.2.2.tgz",
+ "integrity": "sha512-TUE1UGoTX2Cd42j3krGYqObZbOD+xF7u28WB7tfUordytSjbWTIjK/8V0amkBfTYN4/pB/GIDlJZZ657BGG19g==",
"requires": {
- "lodash": "^4.17.5"
+ "lodash": "^4.17.15"
+ },
+ "dependencies": {
+ "lodash": {
+ "version": "4.17.15",
+ "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz",
+ "integrity": "sha512-8xOcRHvCjnocdS5cpwXQXVzmmh5e5+saE2QGoeQmbKmRS6J3VQppPOIt0MnmE+4xlZoumy0GPG0D0MVIQbNA1A=="
+ }
}
},
"webpack-sources": {
- "version": "1.3.0",
- "resolved": "https://registry.npmjs.org/webpack-sources/-/webpack-sources-1.3.0.tgz",
- "integrity": "sha512-OiVgSrbGu7NEnEvQJJgdSFPl2qWKkWq5lHMhgiToIiN9w34EBnjYzSYs+VbL5KoYiLNtFFa7BZIKxRED3I32pA==",
+ "version": "1.4.3",
+ "resolved": "https://registry.npmjs.org/webpack-sources/-/webpack-sources-1.4.3.tgz",
+ "integrity": "sha512-lgTS3Xhv1lCOKo7SA5TjKXMjpSM4sBjNV5+q2bqesbSPs5FjGmU6jjtBSkX9b4qW87vDIsCIlUPOEhbZrMdjeQ==",
"requires": {
"source-list-map": "^2.0.0",
"source-map": "~0.6.1"
@@ -18352,19 +27623,52 @@
"resolved": "https://registry.npmjs.org/webpack-stats-plugin/-/webpack-stats-plugin-0.1.5.tgz",
"integrity": "sha1-KeXxLr/VMVjTHWVqETrB97hhedk="
},
- "websocket-driver": {
- "version": "0.7.0",
- "resolved": "https://registry.npmjs.org/websocket-driver/-/websocket-driver-0.7.0.tgz",
- "integrity": "sha1-DK+dLXVdk67gSdS90NP+LMoqJOs=",
+ "websocket": {
+ "version": "1.0.31",
+ "resolved": "https://registry.npmjs.org/websocket/-/websocket-1.0.31.tgz",
+ "integrity": "sha512-VAouplvGKPiKFDTeCCO65vYHsyay8DqoBSlzIO3fayrfOgU94lQN5a1uWVnFrMLceTJw/+fQXR5PGbUVRaHshQ==",
"requires": {
- "http-parser-js": ">=0.4.0",
+ "debug": "^2.2.0",
+ "es5-ext": "^0.10.50",
+ "nan": "^2.14.0",
+ "typedarray-to-buffer": "^3.1.5",
+ "yaeti": "^0.0.6"
+ },
+ "dependencies": {
+ "debug": {
+ "version": "2.6.9",
+ "resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz",
+ "integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==",
+ "requires": {
+ "ms": "2.0.0"
+ }
+ },
+ "ms": {
+ "version": "2.0.0",
+ "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
+ "integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g="
+ },
+ "nan": {
+ "version": "2.14.1",
+ "resolved": "https://registry.npmjs.org/nan/-/nan-2.14.1.tgz",
+ "integrity": "sha512-isWHgVjnFjh2x2yuJ/tj3JbwoHu3UC2dX5G/88Cm24yB6YopVgxvBObDY7n5xW6ExmFhJpSEQqFPvq9zaXc8Jw=="
+ }
+ }
+ },
+ "websocket-driver": {
+ "version": "0.7.4",
+ "resolved": "https://registry.npmjs.org/websocket-driver/-/websocket-driver-0.7.4.tgz",
+ "integrity": "sha512-b17KeDIQVjvb0ssuSDF2cYXSg2iztliJ4B9WdsuB6J952qCPKmnVq4DyW5motImXHDC1cBT/1UezrJVsKw5zjg==",
+ "requires": {
+ "http-parser-js": ">=0.5.1",
+ "safe-buffer": ">=5.1.0",
"websocket-extensions": ">=0.1.1"
}
},
"websocket-extensions": {
- "version": "0.1.3",
- "resolved": "https://registry.npmjs.org/websocket-extensions/-/websocket-extensions-0.1.3.tgz",
- "integrity": "sha512-nqHUnMXmBzT0w570r2JpJxfiSD1IzoI+HGVdd3aZ0yNi3ngvQ4jv1dtHt5VGxfI2yj5yqImPhOK4vmIh2xMbGg=="
+ "version": "0.1.4",
+ "resolved": "https://registry.npmjs.org/websocket-extensions/-/websocket-extensions-0.1.4.tgz",
+ "integrity": "sha512-OqedPIGOfsDlo31UNwYbCFMSaO9m9G/0faIHj5/dZFDMFqPTcx6UwqyOy3COEaEOg/9VsGIpdqn62W5KhoKSpg=="
},
"whatwg-fetch": {
"version": "3.0.0",
@@ -18403,11 +27707,46 @@
}
},
"widest-line": {
- "version": "2.0.1",
- "resolved": "https://registry.npmjs.org/widest-line/-/widest-line-2.0.1.tgz",
- "integrity": "sha512-Ba5m9/Fa4Xt9eb2ELXt77JxVDV8w7qQrH0zS/TWSJdLyAwQjWoOzpzj5lwVftDz6n/EOu3tNACS84v509qwnJA==",
+ "version": "3.1.0",
+ "resolved": "https://registry.npmjs.org/widest-line/-/widest-line-3.1.0.tgz",
+ "integrity": "sha512-NsmoXalsWVDMGupxZ5R08ka9flZjjiLvHVAWYOKtiKM8ujtZWr9cRffak+uSE48+Ob8ObalXpwyeUiyDD6QFgg==",
"requires": {
- "string-width": "^2.1.1"
+ "string-width": "^4.0.0"
+ },
+ "dependencies": {
+ "ansi-regex": {
+ "version": "5.0.0",
+ "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz",
+ "integrity": "sha512-bY6fj56OUQ0hU1KjFNDQuJFezqKdrAyFdIevADiqrWHwSlbmBNMHp5ak2f40Pm8JTFyM2mqxkG6ngkHO11f/lg=="
+ },
+ "emoji-regex": {
+ "version": "8.0.0",
+ "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz",
+ "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A=="
+ },
+ "is-fullwidth-code-point": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-3.0.0.tgz",
+ "integrity": "sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg=="
+ },
+ "string-width": {
+ "version": "4.2.0",
+ "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.0.tgz",
+ "integrity": "sha512-zUz5JD+tgqtuDjMhwIg5uFVV3dtqZ9yQJlZVfq4I01/K5Paj5UHj7VyrQOJvzawSVlKpObApbfD0Ed6yJc+1eg==",
+ "requires": {
+ "emoji-regex": "^8.0.0",
+ "is-fullwidth-code-point": "^3.0.0",
+ "strip-ansi": "^6.0.0"
+ }
+ },
+ "strip-ansi": {
+ "version": "6.0.0",
+ "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.0.tgz",
+ "integrity": "sha512-AuvKTrTfQNYNIctbR1K/YGTR1756GycPsg7b9bdV9Duqur4gv6aKqHXah67Z8ImS7WEz5QVcOtlfW2rZEugt6w==",
+ "requires": {
+ "ansi-regex": "^5.0.0"
+ }
+ }
}
},
"with-open-file": {
@@ -18420,10 +27759,15 @@
"pify": "^3.0.0"
}
},
- "wordwrap": {
- "version": "1.0.0",
- "resolved": "https://registry.npmjs.org/wordwrap/-/wordwrap-1.0.0.tgz",
- "integrity": "sha1-J1hIEIkUVqQXHI0CJkQa3pDLyus="
+ "wonka": {
+ "version": "4.0.14",
+ "resolved": "https://registry.npmjs.org/wonka/-/wonka-4.0.14.tgz",
+ "integrity": "sha512-v9vmsTxpZjrA8CYfztbuoTQSHEsG3ZH+NCYfasHm0V3GqBupXrjuuz0RJyUaw2cRO7ouW2js0P6i853/qxlDcA=="
+ },
+ "word-wrap": {
+ "version": "1.2.3",
+ "resolved": "https://registry.npmjs.org/word-wrap/-/word-wrap-1.2.3.tgz",
+ "integrity": "sha512-Hz/mrNwitNRh/HUAtM/VT/5VH+ygD6DV7mYKZAtHOrbs8U7lvPS6xf7EJKMF0uW1KJCl0H701g3ZGus+muE5vQ=="
},
"workbox-background-sync": {
"version": "3.6.3",
@@ -18578,9 +27922,9 @@
"integrity": "sha512-IQOUi+RLhvYCiv80RP23KBW/NTtIvzvjex28B8NW1jOm+iV4VIu3VXKXTA6er5/wjjuhmtB28qEAUqADLAyOSg=="
},
"worker-farm": {
- "version": "1.6.0",
- "resolved": "https://registry.npmjs.org/worker-farm/-/worker-farm-1.6.0.tgz",
- "integrity": "sha512-6w+3tHbM87WnSWnENBUvA2pxJPLhQUg5LKwUQHq3r+XPhIM+Gh2R5ycbwPCyuGbNg+lPgdcnQUhuC02kJCvffQ==",
+ "version": "1.7.0",
+ "resolved": "https://registry.npmjs.org/worker-farm/-/worker-farm-1.7.0.tgz",
+ "integrity": "sha512-rvw3QTZc8lAxyVrqcSGVm5yP/IJ2UcB3U0graE3LCFoZ0Yn2x4EoVSqJKdB/T5M+FLcRPjz4TDacRf3OCfNUzw==",
"requires": {
"errno": "~0.1.7"
}
@@ -18628,22 +27972,20 @@
}
},
"write-file-atomic": {
- "version": "2.4.2",
- "resolved": "https://registry.npmjs.org/write-file-atomic/-/write-file-atomic-2.4.2.tgz",
- "integrity": "sha512-s0b6vB3xIVRLWywa6X9TOMA7k9zio0TMOsl9ZnDkliA/cfJlpHXAscj0gbHVJiTdIuAYpIyqS5GW91fqm6gG5g==",
+ "version": "3.0.3",
+ "resolved": "https://registry.npmjs.org/write-file-atomic/-/write-file-atomic-3.0.3.tgz",
+ "integrity": "sha512-AvHcyZ5JnSfq3ioSyjrBkH9yW4m7Ayk8/9My/DD9onKeu/94fwrMocemO2QAJFAlnnDN+ZDS+ZjAR5ua1/PV/Q==",
"requires": {
- "graceful-fs": "^4.1.11",
"imurmurhash": "^0.1.4",
- "signal-exit": "^3.0.2"
+ "is-typedarray": "^1.0.0",
+ "signal-exit": "^3.0.2",
+ "typedarray-to-buffer": "^3.1.5"
}
},
"ws": {
- "version": "6.1.4",
- "resolved": "https://registry.npmjs.org/ws/-/ws-6.1.4.tgz",
- "integrity": "sha512-eqZfL+NE/YQc1/ZynhojeV8q+H050oR8AZ2uIev7RU10svA9ZnJUddHcOUZTJLinZ9yEfdA2kSATS2qZK5fhJA==",
- "requires": {
- "async-limiter": "~1.0.0"
- }
+ "version": "7.3.0",
+ "resolved": "https://registry.npmjs.org/ws/-/ws-7.3.0.tgz",
+ "integrity": "sha512-iFtXzngZVXPGgpTlP1rBqsUK82p9tKqsWRPg5L56egiljujJT3vGAYnHANvFxBieXrTFavhzhxW52jnaWV+w2w=="
},
"x-is-string": {
"version": "0.1.0",
@@ -18651,9 +27993,9 @@
"integrity": "sha1-R0tQhlrzpJqcRlfwWs0UVFj3fYI="
},
"xdg-basedir": {
- "version": "3.0.0",
- "resolved": "https://registry.npmjs.org/xdg-basedir/-/xdg-basedir-3.0.0.tgz",
- "integrity": "sha1-SWsswQnsqNus/i3HK2A8F8WHCtQ="
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/xdg-basedir/-/xdg-basedir-4.0.0.tgz",
+ "integrity": "sha512-PSNhEJDejZYV7h50BohL09Er9VaIefr2LMAf3OEmpCkjOi34eYyQYAXUTjEQtZJTKcF0E2UKTh+osDLsgNim9Q=="
},
"xhr": {
"version": "2.5.0",
@@ -18694,9 +28036,21 @@
"integrity": "sha1-wodrBhaKrcQOV9l+gRkayPQ5iz4="
},
"xregexp": {
- "version": "4.0.0",
- "resolved": "https://registry.npmjs.org/xregexp/-/xregexp-4.0.0.tgz",
- "integrity": "sha512-PHyM+sQouu7xspQQwELlGwwd05mXUFqwFYfqPO0cC7x4fxyHnnuetmQr6CjJiafIDoH4MogHb9dOoJzR/Y4rFg=="
+ "version": "4.3.0",
+ "resolved": "https://registry.npmjs.org/xregexp/-/xregexp-4.3.0.tgz",
+ "integrity": "sha512-7jXDIFXh5yJ/orPn4SXjuVrWWoi4Cr8jfV1eHv9CixKSbU+jY4mxfrBwAuDvupPNKpMUY+FeIqsVw/JLT9+B8g==",
+ "requires": {
+ "@babel/runtime-corejs3": "^7.8.3"
+ }
+ },
+ "xss": {
+ "version": "1.0.7",
+ "resolved": "https://registry.npmjs.org/xss/-/xss-1.0.7.tgz",
+ "integrity": "sha512-A9v7tblGvxu8TWXQC9rlpW96a+LN1lyw6wyhpTmmGW+FwRMactchBR3ROKSi33UPCUcUHSu8s9YP6F+K3Mw//w==",
+ "requires": {
+ "commander": "^2.20.3",
+ "cssfilter": "0.0.10"
+ }
},
"xstate": {
"version": "3.3.3",
@@ -18713,11 +28067,21 @@
"resolved": "https://registry.npmjs.org/y18n/-/y18n-3.2.1.tgz",
"integrity": "sha1-bRX7qITAhnnA136I53WegR4H+kE="
},
+ "yaeti": {
+ "version": "0.0.6",
+ "resolved": "https://registry.npmjs.org/yaeti/-/yaeti-0.0.6.tgz",
+ "integrity": "sha1-8m9ITXJoTPQr7ft2lwqhYI+/lXc="
+ },
"yallist": {
"version": "2.1.2",
"resolved": "https://registry.npmjs.org/yallist/-/yallist-2.1.2.tgz",
"integrity": "sha1-HBH5IY8HYImkfdUS+TxmmaaoHVI="
},
+ "yaml": {
+ "version": "1.10.0",
+ "resolved": "https://registry.npmjs.org/yaml/-/yaml-1.10.0.tgz",
+ "integrity": "sha512-yr2icI4glYaNG+KWONODapy2/jDdMSDnrONSjblABjD9B4Z5LgiircSt8m8sRZFNi08kG9Sm0uSHtEmP3zaEGg=="
+ },
"yaml-loader": {
"version": "0.5.0",
"resolved": "https://registry.npmjs.org/yaml-loader/-/yaml-loader-0.5.0.tgz",
@@ -18768,46 +28132,88 @@
"resolved": "https://registry.npmjs.org/yeast/-/yeast-0.1.2.tgz",
"integrity": "sha1-AI4G2AlDIMNy28L47XagymyKxBk="
},
+ "yoga-layout-prebuilt": {
+ "version": "1.9.6",
+ "resolved": "https://registry.npmjs.org/yoga-layout-prebuilt/-/yoga-layout-prebuilt-1.9.6.tgz",
+ "integrity": "sha512-Wursw6uqLXLMjBAO4SEShuzj8+EJXhCF71/rJ7YndHTkRAYSU0GY3OghRqfAk9HPUAAFMuqp3U1Wl+01vmGRQQ==",
+ "requires": {
+ "@types/yoga-layout": "1.9.2"
+ }
+ },
"yurnalist": {
- "version": "1.0.5",
- "resolved": "https://registry.npmjs.org/yurnalist/-/yurnalist-1.0.5.tgz",
- "integrity": "sha512-EuLjqX3Q15iVM0UtZa5Ju536uRmklKd2kKhdE5D5fIh8RZmh+pJ8c6wj2oGo0TA+T/Ii2o79cIHCTMfciW8jlA==",
+ "version": "1.1.2",
+ "resolved": "https://registry.npmjs.org/yurnalist/-/yurnalist-1.1.2.tgz",
+ "integrity": "sha512-y7bsTXqL+YMJQ2De2CBtSftJNLQnB7gWIzzKm10GDyC8Fg4Dsmd2LG5YhT8pudvUiuotic80WVXt/g1femRVQg==",
"requires": {
"babel-runtime": "^6.26.0",
- "chalk": "^2.1.0",
+ "chalk": "^2.4.2",
"cli-table3": "^0.5.1",
- "debug": "^4.1.0",
- "deep-equal": "^1.0.1",
- "detect-indent": "^5.0.0",
- "inquirer": "^6.2.0",
+ "debug": "^4.1.1",
+ "deep-equal": "^1.1.0",
+ "detect-indent": "^6.0.0",
+ "inquirer": "^7.0.0",
"invariant": "^2.2.0",
"is-builtin-module": "^3.0.0",
"is-ci": "^2.0.0",
- "leven": "^2.0.0",
- "loud-rejection": "^1.2.0",
- "node-emoji": "^1.6.1",
+ "leven": "^3.1.0",
+ "loud-rejection": "^2.2.0",
+ "node-emoji": "^1.10.0",
"object-path": "^0.11.2",
"read": "^1.0.7",
- "rimraf": "^2.5.0",
- "semver": "^5.1.0",
- "strip-ansi": "^5.0.0",
- "strip-bom": "^3.0.0"
+ "rimraf": "^3.0.0",
+ "semver": "^6.3.0",
+ "strip-ansi": "^5.2.0",
+ "strip-bom": "^4.0.0"
},
"dependencies": {
+ "ansi-escapes": {
+ "version": "4.3.1",
+ "resolved": "https://registry.npmjs.org/ansi-escapes/-/ansi-escapes-4.3.1.tgz",
+ "integrity": "sha512-JWF7ocqNrp8u9oqpgV+wH5ftbt+cfvv+PTjOvKLT3AdYly/LmORARfEVT1iyjwN+4MqE5UmVKoAdIBqeoCHgLA==",
+ "requires": {
+ "type-fest": "^0.11.0"
+ }
+ },
"ansi-regex": {
- "version": "4.0.0",
- "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.0.0.tgz",
- "integrity": "sha512-iB5Dda8t/UqpPI/IjsejXu5jOGDrzn41wJyljwPH65VCIbk6+1BzFIMJGFwTNrYXT1CrD+B4l19U7awiQ8rk7w=="
+ "version": "5.0.0",
+ "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz",
+ "integrity": "sha512-bY6fj56OUQ0hU1KjFNDQuJFezqKdrAyFdIevADiqrWHwSlbmBNMHp5ak2f40Pm8JTFyM2mqxkG6ngkHO11f/lg=="
},
"builtin-modules": {
- "version": "3.0.0",
- "resolved": "https://registry.npmjs.org/builtin-modules/-/builtin-modules-3.0.0.tgz",
- "integrity": "sha512-hMIeU4K2ilbXV6Uv93ZZ0Avg/M91RaKXucQ+4me2Do1txxBDyDZWCBa5bJSLqoNTRpXTLwEzIk1KmloenDDjhg=="
+ "version": "3.1.0",
+ "resolved": "https://registry.npmjs.org/builtin-modules/-/builtin-modules-3.1.0.tgz",
+ "integrity": "sha512-k0KL0aWZuBt2lrxrcASWDfwOLMnodeQjodT/1SxEQAXsHANgo6ZC/VEaSEHCXt7aSTZ4/4H5LKa+tBXmW7Vtvw=="
},
- "ci-info": {
- "version": "2.0.0",
- "resolved": "https://registry.npmjs.org/ci-info/-/ci-info-2.0.0.tgz",
- "integrity": "sha512-5tK7EtrZ0N+OLFMthtqOj4fI2Jeb88C4CAZPu25LDVUgXJ0A3Js4PMGqrn0JU1W0Mh1/Z8wZzYPxqUrXeBboCQ=="
+ "chalk": {
+ "version": "2.4.2",
+ "resolved": "https://registry.npmjs.org/chalk/-/chalk-2.4.2.tgz",
+ "integrity": "sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ==",
+ "requires": {
+ "ansi-styles": "^3.2.1",
+ "escape-string-regexp": "^1.0.5",
+ "supports-color": "^5.3.0"
+ }
+ },
+ "cli-cursor": {
+ "version": "3.1.0",
+ "resolved": "https://registry.npmjs.org/cli-cursor/-/cli-cursor-3.1.0.tgz",
+ "integrity": "sha512-I/zHAwsKf9FqGoXM4WWRACob9+SNukZTd94DWF57E4toouRulbCxcUh6RKUEOQlYTHJnzkPMySvPNaaSLNfLZw==",
+ "requires": {
+ "restore-cursor": "^3.1.0"
+ }
+ },
+ "color-convert": {
+ "version": "2.0.1",
+ "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz",
+ "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==",
+ "requires": {
+ "color-name": "~1.1.4"
+ }
+ },
+ "color-name": {
+ "version": "1.1.4",
+ "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz",
+ "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA=="
},
"debug": {
"version": "4.1.1",
@@ -18817,6 +28223,93 @@
"ms": "^2.1.1"
}
},
+ "deep-equal": {
+ "version": "1.1.1",
+ "resolved": "https://registry.npmjs.org/deep-equal/-/deep-equal-1.1.1.tgz",
+ "integrity": "sha512-yd9c5AdiqVcR+JjcwUQb9DkhJc8ngNr0MahEBGvDiJw8puWab2yZlh+nkasOnZP+EGTAP6rRp2JzJhJZzvNF8g==",
+ "requires": {
+ "is-arguments": "^1.0.4",
+ "is-date-object": "^1.0.1",
+ "is-regex": "^1.0.4",
+ "object-is": "^1.0.1",
+ "object-keys": "^1.1.1",
+ "regexp.prototype.flags": "^1.2.0"
+ }
+ },
+ "emoji-regex": {
+ "version": "8.0.0",
+ "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz",
+ "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A=="
+ },
+ "figures": {
+ "version": "3.2.0",
+ "resolved": "https://registry.npmjs.org/figures/-/figures-3.2.0.tgz",
+ "integrity": "sha512-yaduQFRKLXYOGgEn6AZau90j3ggSOyiqXU0F9JZfeXYhNa+Jk4X+s45A2zg5jns87GAFa34BBm2kXw4XpNcbdg==",
+ "requires": {
+ "escape-string-regexp": "^1.0.5"
+ }
+ },
+ "has-flag": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz",
+ "integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ=="
+ },
+ "inquirer": {
+ "version": "7.2.0",
+ "resolved": "https://registry.npmjs.org/inquirer/-/inquirer-7.2.0.tgz",
+ "integrity": "sha512-E0c4rPwr9ByePfNlTIB8z51kK1s2n6jrHuJeEHENl/sbq2G/S1auvibgEwNR4uSyiU+PiYHqSwsgGiXjG8p5ZQ==",
+ "requires": {
+ "ansi-escapes": "^4.2.1",
+ "chalk": "^3.0.0",
+ "cli-cursor": "^3.1.0",
+ "cli-width": "^2.0.0",
+ "external-editor": "^3.0.3",
+ "figures": "^3.0.0",
+ "lodash": "^4.17.15",
+ "mute-stream": "0.0.8",
+ "run-async": "^2.4.0",
+ "rxjs": "^6.5.3",
+ "string-width": "^4.1.0",
+ "strip-ansi": "^6.0.0",
+ "through": "^2.3.6"
+ },
+ "dependencies": {
+ "ansi-styles": {
+ "version": "4.2.1",
+ "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.2.1.tgz",
+ "integrity": "sha512-9VGjrMsG1vePxcSweQsN20KY/c4zN0h9fLjqAbwbPfahM3t+NL+M9HC8xeXG2I8pX5NoamTGNuomEUFI7fcUjA==",
+ "requires": {
+ "@types/color-name": "^1.1.1",
+ "color-convert": "^2.0.1"
+ }
+ },
+ "chalk": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/chalk/-/chalk-3.0.0.tgz",
+ "integrity": "sha512-4D3B6Wf41KOYRFdszmDqMCGq5VV/uMAB273JILmO+3jAlh8X4qDtdtgCR3fxtbLEMzSx22QdhnDcJvu2u1fVwg==",
+ "requires": {
+ "ansi-styles": "^4.1.0",
+ "supports-color": "^7.1.0"
+ }
+ },
+ "strip-ansi": {
+ "version": "6.0.0",
+ "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.0.tgz",
+ "integrity": "sha512-AuvKTrTfQNYNIctbR1K/YGTR1756GycPsg7b9bdV9Duqur4gv6aKqHXah67Z8ImS7WEz5QVcOtlfW2rZEugt6w==",
+ "requires": {
+ "ansi-regex": "^5.0.0"
+ }
+ },
+ "supports-color": {
+ "version": "7.1.0",
+ "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.1.0.tgz",
+ "integrity": "sha512-oRSIpR8pxT1Wr2FquTNnGet79b3BWljqOuoW/h4oBhxJ/HUbX5nX6JSruTkvXDCFMwDPvsaTTbvMLKZWSy0R5g==",
+ "requires": {
+ "has-flag": "^4.0.0"
+ }
+ }
+ }
+ },
"is-builtin-module": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/is-builtin-module/-/is-builtin-module-3.0.0.tgz",
@@ -18825,34 +28318,128 @@
"builtin-modules": "^3.0.0"
}
},
- "is-ci": {
- "version": "2.0.0",
- "resolved": "https://registry.npmjs.org/is-ci/-/is-ci-2.0.0.tgz",
- "integrity": "sha512-YfJT7rkpQB0updsdHLGWrvhBJfcfzNNawYDNIyQXJz0IViGf75O8EBPKSdvw2rF+LGCsX4FZ8tcr3b19LcZq4w==",
+ "is-fullwidth-code-point": {
+ "version": "3.0.0",
+ "resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-3.0.0.tgz",
+ "integrity": "sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg=="
+ },
+ "lodash": {
+ "version": "4.17.15",
+ "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz",
+ "integrity": "sha512-8xOcRHvCjnocdS5cpwXQXVzmmh5e5+saE2QGoeQmbKmRS6J3VQppPOIt0MnmE+4xlZoumy0GPG0D0MVIQbNA1A=="
+ },
+ "loud-rejection": {
+ "version": "2.2.0",
+ "resolved": "https://registry.npmjs.org/loud-rejection/-/loud-rejection-2.2.0.tgz",
+ "integrity": "sha512-S0FayMXku80toa5sZ6Ro4C+s+EtFDCsyJNG/AzFMfX3AxD5Si4dZsgzm/kKnbOxHl5Cv8jBlno8+3XYIh2pNjQ==",
"requires": {
- "ci-info": "^2.0.0"
+ "currently-unhandled": "^0.4.1",
+ "signal-exit": "^3.0.2"
+ }
+ },
+ "mimic-fn": {
+ "version": "2.1.0",
+ "resolved": "https://registry.npmjs.org/mimic-fn/-/mimic-fn-2.1.0.tgz",
+ "integrity": "sha512-OqbOk5oEQeAZ8WXWydlu9HJjz9WVdEIvamMCcXmuqUYjTknH/sqsWvhQ3vgwKFRR1HpjvNBKQ37nbJgYzGqGcg=="
+ },
+ "mute-stream": {
+ "version": "0.0.8",
+ "resolved": "https://registry.npmjs.org/mute-stream/-/mute-stream-0.0.8.tgz",
+ "integrity": "sha512-nnbWWOkoWyUsTjKrhgD0dcz22mdkSnpYqbEjIm2nhwhuxlSkpywJmBo8h0ZqJdkp73mb90SssHkN4rsRaBAfAA=="
+ },
+ "object-keys": {
+ "version": "1.1.1",
+ "resolved": "https://registry.npmjs.org/object-keys/-/object-keys-1.1.1.tgz",
+ "integrity": "sha512-NuAESUOUMrlIXOfHKzD6bpPu3tYt3xvjNdRIQ+FeT0lNb4K8WR70CaDxhuNguS2XG+GjkyMwOzsN5ZktImfhLA=="
+ },
+ "onetime": {
+ "version": "5.1.0",
+ "resolved": "https://registry.npmjs.org/onetime/-/onetime-5.1.0.tgz",
+ "integrity": "sha512-5NcSkPHhwTVFIQN+TUqXoS5+dlElHXdpAWu9I0HP20YOtIi+aZ0Ct82jdlILDxjLEAWwvm+qj1m6aEtsDVmm6Q==",
+ "requires": {
+ "mimic-fn": "^2.1.0"
+ }
+ },
+ "restore-cursor": {
+ "version": "3.1.0",
+ "resolved": "https://registry.npmjs.org/restore-cursor/-/restore-cursor-3.1.0.tgz",
+ "integrity": "sha512-l+sSefzHpj5qimhFSE5a8nufZYAM3sBSVMAPtYkmC+4EH2anSGaEMXSD0izRQbu9nfyQ9y5JrVmp7E8oZrUjvA==",
+ "requires": {
+ "onetime": "^5.1.0",
+ "signal-exit": "^3.0.2"
+ }
+ },
+ "rimraf": {
+ "version": "3.0.2",
+ "resolved": "https://registry.npmjs.org/rimraf/-/rimraf-3.0.2.tgz",
+ "integrity": "sha512-JZkJMZkAGFFPP2YqXZXPbMlMBgsxzE8ILs4lMIX/2o0L9UBw9O/Y3o6wFw/i9YLapcUJWwqbi3kdxIPdC62TIA==",
+ "requires": {
+ "glob": "^7.1.3"
+ }
+ },
+ "semver": {
+ "version": "6.3.0",
+ "resolved": "https://registry.npmjs.org/semver/-/semver-6.3.0.tgz",
+ "integrity": "sha512-b39TBaTSfV6yBrapU89p5fKekE2m/NwnDocOVruQFS1/veMgdzuPcnOM34M6CwxW8jH/lxEa5rBoDeUwu5HHTw=="
+ },
+ "string-width": {
+ "version": "4.2.0",
+ "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.0.tgz",
+ "integrity": "sha512-zUz5JD+tgqtuDjMhwIg5uFVV3dtqZ9yQJlZVfq4I01/K5Paj5UHj7VyrQOJvzawSVlKpObApbfD0Ed6yJc+1eg==",
+ "requires": {
+ "emoji-regex": "^8.0.0",
+ "is-fullwidth-code-point": "^3.0.0",
+ "strip-ansi": "^6.0.0"
+ },
+ "dependencies": {
+ "strip-ansi": {
+ "version": "6.0.0",
+ "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.0.tgz",
+ "integrity": "sha512-AuvKTrTfQNYNIctbR1K/YGTR1756GycPsg7b9bdV9Duqur4gv6aKqHXah67Z8ImS7WEz5QVcOtlfW2rZEugt6w==",
+ "requires": {
+ "ansi-regex": "^5.0.0"
+ }
+ }
}
},
"strip-ansi": {
- "version": "5.0.0",
- "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-5.0.0.tgz",
- "integrity": "sha512-Uu7gQyZI7J7gn5qLn1Np3G9vcYGTVqB+lFTytnDJv83dd8T22aGH451P3jueT2/QemInJDfxHB5Tde5OzgG1Ow==",
+ "version": "5.2.0",
+ "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-5.2.0.tgz",
+ "integrity": "sha512-DuRs1gKbBqsMKIZlrffwlug8MHkcnpjs5VPmL1PAh+mA30U0DTotfDZ0d2UUsXpPmPmMMJ6W773MaA3J+lbiWA==",
"requires": {
- "ansi-regex": "^4.0.0"
+ "ansi-regex": "^4.1.0"
+ },
+ "dependencies": {
+ "ansi-regex": {
+ "version": "4.1.0",
+ "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz",
+ "integrity": "sha512-1apePfXM1UOSqw0o9IiFAovVz9M5S1Dg+4TrDwfMewQ6p/rmMueb7tWZjQ1rx4Loy1ArBggoqGpfqqdI4rondg=="
+ }
}
+ },
+ "strip-bom": {
+ "version": "4.0.0",
+ "resolved": "https://registry.npmjs.org/strip-bom/-/strip-bom-4.0.0.tgz",
+ "integrity": "sha512-3xurFv5tEgii33Zi8Jtp55wEIILR9eh34FAW00PZf+JnSsTmV/ioewSgQl97JHvgjoRGwPShsWm+IdrxB35d0w=="
+ },
+ "type-fest": {
+ "version": "0.11.0",
+ "resolved": "https://registry.npmjs.org/type-fest/-/type-fest-0.11.0.tgz",
+ "integrity": "sha512-OdjXJxnCN1AvyLSzeKIgXTXxV+99ZuXl3Hpo9XpJAv9MBcHrrJOQ5kV7ypXOuQie+AmWG25hLbiKdwYTifzcfQ=="
}
}
},
"zen-observable": {
- "version": "0.8.13",
- "resolved": "https://registry.npmjs.org/zen-observable/-/zen-observable-0.8.13.tgz",
- "integrity": "sha512-fa+6aDUVvavYsefZw0zaZ/v3ckEtMgCFi30sn91SEZea4y/6jQp05E3omjkX91zV6RVdn15fqnFZ6RKjRGbp2g=="
+ "version": "0.8.15",
+ "resolved": "https://registry.npmjs.org/zen-observable/-/zen-observable-0.8.15.tgz",
+ "integrity": "sha512-PQ2PC7R9rslx84ndNBZB/Dkv8V8fZEpk83RLgXtYd0fwUgEjseMn1Dgajh2x6S8QbZAFa9p2qVCEuYZNgve0dQ=="
},
"zen-observable-ts": {
- "version": "0.8.15",
- "resolved": "https://registry.npmjs.org/zen-observable-ts/-/zen-observable-ts-0.8.15.tgz",
- "integrity": "sha512-sXKPWiw6JszNEkRv5dQ+lQCttyjHM2Iks74QU5NP8mMPS/NrzTlHDr780gf/wOBqmHkPO6NCLMlsa+fAQ8VE8w==",
+ "version": "0.8.21",
+ "resolved": "https://registry.npmjs.org/zen-observable-ts/-/zen-observable-ts-0.8.21.tgz",
+ "integrity": "sha512-Yj3yXweRc8LdRMrCC8nIc4kkjWecPAUVh0TI0OUrWXx6aX790vLcDlWca6I4vsyCGH3LpWxq0dJRcMOFoVqmeg==",
"requires": {
+ "tslib": "^1.9.3",
"zen-observable": "^0.8.0"
}
},
diff --git a/website/package.json b/website/package.json
index f43b9a6a0..afa0ff239 100644
--- a/website/package.json
+++ b/website/package.json
@@ -16,7 +16,7 @@
"autoprefixer": "^9.4.7",
"classnames": "^2.2.6",
"codemirror": "^5.43.0",
- "gatsby": "^2.1.18",
+ "gatsby": "^2.11.1",
"gatsby-image": "^2.0.29",
"gatsby-mdx": "^0.3.6",
"gatsby-plugin-catch-links": "^2.0.11",
From 19b9ea0436286da301742d1541abe0dbf5142a12 Mon Sep 17 00:00:00 2001
From: Ines Montani
Date: Tue, 16 Jun 2020 18:34:11 +0200
Subject: [PATCH 34/41] Fix languages.json
---
website/meta/languages.json | 14 +++-----------
1 file changed, 3 insertions(+), 11 deletions(-)
diff --git a/website/meta/languages.json b/website/meta/languages.json
index facfc3541..493f96c49 100644
--- a/website/meta/languages.json
+++ b/website/meta/languages.json
@@ -78,11 +78,14 @@
"name": "Japanese",
"models": ["ja_core_news_sm", "ja_core_news_md", "ja_core_news_lg"],
"dependencies": [
+ { "name": "Unidic", "url": "http://unidic.ninjal.ac.jp/back_number#unidic_cwj" },
+ { "name": "Mecab", "url": "https://github.com/taku910/mecab" },
{
"name": "SudachiPy",
"url": "https://github.com/WorksApplications/SudachiPy"
}
],
+ "example": "これは文章です。",
"has_examples": true
},
{
@@ -191,17 +194,6 @@
"example": "นี่คือประโยค",
"has_examples": true
},
- {
- "code": "ja",
- "name": "Japanese",
- "dependencies": [
- { "name": "Unidic", "url": "http://unidic.ninjal.ac.jp/back_number#unidic_cwj" },
- { "name": "Mecab", "url": "https://github.com/taku910/mecab" },
- { "name": "fugashi", "url": "https://github.com/polm/fugashi" }
- ],
- "example": "これは文章です。",
- "has_examples": true
- },
{
"code": "ko",
"name": "Korean",
From 66889de1664e0cac92cbc650b96812d1e7f7596a Mon Sep 17 00:00:00 2001
From: Adriane Boyd
Date: Fri, 19 Jun 2020 12:43:41 +0200
Subject: [PATCH 35/41] Warning for sudachipy 0.4.5 (#5611)
---
website/docs/usage/models.md | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/website/docs/usage/models.md b/website/docs/usage/models.md
index 4549e8433..b11e6347a 100644
--- a/website/docs/usage/models.md
+++ b/website/docs/usage/models.md
@@ -214,6 +214,14 @@ the provided Japanese models use SudachiPy split mode `A`.
The `meta` argument of the `Japanese` language class can be used to configure
the split mode to `A`, `B` or `C`.
+
+
+If you run into errors related to `sudachipy`, which is currently under active
+development, we suggest downgrading to `sudachipy==0.4.5`, which is the version
+used for training the current [Japanese models](/models/ja).
+
+
+
## Installing and using models {#download}
> #### Downloading models in spaCy < v1.7
From fcdecefacf912fcd675a130fce6041ad0cfbc570 Mon Sep 17 00:00:00 2001
From: Adriane Boyd
Date: Mon, 22 Jun 2020 14:37:24 +0200
Subject: [PATCH 36/41] Add warnings example in v2.3 migration guide (#5627)
---
website/docs/usage/v2-3.md | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/website/docs/usage/v2-3.md b/website/docs/usage/v2-3.md
index d59b50a6e..378b1ec34 100644
--- a/website/docs/usage/v2-3.md
+++ b/website/docs/usage/v2-3.md
@@ -161,10 +161,18 @@ debugging your tokenizer configuration.
spaCy's custom warnings have been replaced with native Python
[`warnings`](https://docs.python.org/3/library/warnings.html). Instead of
-setting `SPACY_WARNING_IGNORE`, use the
-[`warnings` filters](https://docs.python.org/3/library/warnings.html#the-warnings-filter)
+setting `SPACY_WARNING_IGNORE`, use the [`warnings`
+filters](https://docs.python.org/3/library/warnings.html#the-warnings-filter)
to manage warnings.
+```diff
+import spacy
++ import warnings
+
+- spacy.errors.SPACY_WARNING_IGNORE.append('W007')
++ warnings.filterwarnings("ignore", message=r"\[W007\]", category=UserWarning)
+```
+
#### Normalization tables
The normalization tables have moved from the language data in
From 4f73ced9140e9bd4d1628e123d7c087b16a88b3c Mon Sep 17 00:00:00 2001
From: Adriane Boyd
Date: Tue, 23 Jun 2020 16:48:59 +0200
Subject: [PATCH 37/41] Extend what's new in v2.3 with vocab / is_oov (#5635)
---
website/docs/usage/v2-3.md | 45 ++++++++++++++++++++++++++++++++++++++
1 file changed, 45 insertions(+)
diff --git a/website/docs/usage/v2-3.md b/website/docs/usage/v2-3.md
index 378b1ec34..c56b44267 100644
--- a/website/docs/usage/v2-3.md
+++ b/website/docs/usage/v2-3.md
@@ -182,6 +182,51 @@ If you're adding data for a new language, the normalization table should be
added to `spacy-lookups-data`. See
[adding norm exceptions](/usage/adding-languages#norm-exceptions).
+#### No preloaded lexemes/vocab for models with vectors
+
+To reduce the initial loading time, the lexemes in `nlp.vocab` are no longer
+loaded on initialization for models with vectors. As you process texts, the
+lexemes will be added to the vocab automatically, just as in models without
+vectors.
+
+To see the number of unique vectors and number of words with vectors, see
+`nlp.meta['vectors']`; for example, for `en_core_web_md` there are `20000`
+unique vectors and `684830` words with vectors:
+
+```python
+{
+ 'width': 300,
+ 'vectors': 20000,
+ 'keys': 684830,
+ 'name': 'en_core_web_md.vectors'
+}
+```
+
+If required, for instance if you are working directly with word vectors rather
+than processing texts, you can load all lexemes for words with vectors at once:
+
+```python
+for orth in nlp.vocab.vectors:
+ _ = nlp.vocab[orth]
+```
+
+#### Lexeme.is_oov and Token.is_oov
+
+
+
+Due to a bug, the values for `is_oov` are reversed in v2.3.0, but this will be
+fixed in the next patch release v2.3.1.
+
+
+
+In v2.3, `Lexeme.is_oov` and `Token.is_oov` are `True` if the lexeme does not
+have a word vector. This is equivalent to `token.orth not in
+nlp.vocab.vectors`.
+
+Previously in v2.2, `is_oov` corresponded to whether a lexeme had stored
+probability and cluster features. The probability and cluster features are no
+longer included in the provided medium and large models (see the next section).
+
#### Probability and cluster features
> #### Load and save extra prob lookups table
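The equivalence stated in the patch above (`is_oov` is `True` exactly when `token.orth not in nlp.vocab.vectors`) reduces to a key-membership check on the vectors table. A toy stdlib-only sketch, where the key set is a made-up stand-in for `nlp.vocab.vectors`:

```python
# Toy illustration of the v2.3 is_oov semantics: a word is OOV exactly when
# its key is missing from the vectors table. `vector_keys` is a made-up
# stand-in for the keys of nlp.vocab.vectors.
vector_keys = {"hello", "world", "sentence"}

def is_oov(word):
    return word not in vector_keys

print([w for w in ("hello", "florb") if is_oov(w)])  # ['florb']
```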
From a2660bd9c63f2ce893ef44a35d3d23ce6fdf00a0 Mon Sep 17 00:00:00 2001
From: Adriane Boyd
Date: Wed, 24 Jun 2020 10:26:12 +0200
Subject: [PATCH 38/41] Fix backslashes in warnings config diff (#5640)
Fix backslashes in warnings config diff in v2.3 migration section.
---
website/docs/usage/v2-3.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/usage/v2-3.md b/website/docs/usage/v2-3.md
index c56b44267..e6b88c779 100644
--- a/website/docs/usage/v2-3.md
+++ b/website/docs/usage/v2-3.md
@@ -170,7 +170,7 @@ import spacy
+ import warnings
- spacy.errors.SPACY_WARNING_IGNORE.append('W007')
-+ warnings.filterwarnings("ignore", message=r"\[W007\]", category=UserWarning)
++ warnings.filterwarnings("ignore", message=r"\\[W007\\]", category=UserWarning)
```
#### Normalization tables
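A note on the migration snippet in the patch above: `warnings.filterwarnings` interprets `message` as a regular expression matched against the start of the warning text, which is why the literal brackets in `[W007]` need escaping. A minimal stdlib-only sketch (the warning messages here are made up for illustration):

```python
import warnings

def count_recorded_warnings():
    """Emit two fake spaCy-style warnings; only the unfiltered one is recorded."""
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        # `message` is a regex matched against the start of the warning text,
        # so the literal brackets in "[W007]" must be escaped.
        warnings.filterwarnings("ignore", message=r"\[W007\]", category=UserWarning)
        warnings.warn("[W007] Model has no vectors", UserWarning)  # ignored
        warnings.warn("[W008] Some other warning", UserWarning)    # recorded
        return len(caught)

print(count_recorded_warnings())  # 1
```

`filterwarnings` inserts the new filter at the front of the filter list, so it takes precedence over the `"always"` default set just before it.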
From d777d9cc382a23a598b2a11a3a3902251a894b4a Mon Sep 17 00:00:00 2001
From: Adriane Boyd
Date: Fri, 26 Jun 2020 14:12:29 +0200
Subject: [PATCH 39/41] Extend v2.3 migration guide (#5653)
* Extend preloaded vocab section
* Add section on tag maps
---
website/docs/usage/v2-3.md | 78 ++++++++++++++++++++++++++++++++++++--
1 file changed, 75 insertions(+), 3 deletions(-)
diff --git a/website/docs/usage/v2-3.md b/website/docs/usage/v2-3.md
index e6b88c779..b6c4d7dfb 100644
--- a/website/docs/usage/v2-3.md
+++ b/website/docs/usage/v2-3.md
@@ -182,12 +182,12 @@ If you're adding data for a new language, the normalization table should be
added to `spacy-lookups-data`. See
[adding norm exceptions](/usage/adding-languages#norm-exceptions).
-#### No preloaded lexemes/vocab for models with vectors
+#### No preloaded vocab for models with vectors
To reduce the initial loading time, the lexemes in `nlp.vocab` are no longer
loaded on initialization for models with vectors. As you process texts, the
-lexemes will be added to the vocab automatically, just as in models without
-vectors.
+lexemes will be added to the vocab automatically, just as in small models
+without vectors.
To see the number of unique vectors and number of words with vectors, see
`nlp.meta['vectors']`; for example, for `en_core_web_md` there are `20000`
@@ -210,6 +210,20 @@ for orth in nlp.vocab.vectors:
_ = nlp.vocab[orth]
```
+If your workflow previously iterated over `nlp.vocab`, a similar alternative
+is to iterate over words with vectors instead:
+
+```diff
+- lexemes = [w for w in nlp.vocab]
++ lexemes = [nlp.vocab[orth] for orth in nlp.vocab.vectors]
+```
+
+Be aware that the set of preloaded lexemes in a v2.2 model is not equivalent to
+the set of words with vectors. For English, v2.2 `md/lg` models have 1.3M
+provided lexemes but only 685K words with vectors. The vectors have been
+updated for most languages in v2.2, but the English models contain the same
+vectors for both v2.2 and v2.3.
+
#### Lexeme.is_oov and Token.is_oov
@@ -254,6 +268,28 @@ model vocab, which will take a few seconds on initial loading. When you save
this model after loading the `prob` table, the full `prob` table will be saved
as part of the model vocab.
+To load the probability table into a provided model, first make sure you have
+`spacy-lookups-data` installed. Then remove the empty placeholder
+`lexeme_prob` table and access `Lexeme.prob` for any word, which loads the
+full table from `spacy-lookups-data`:
+
+```diff
++ # prerequisite: pip install spacy-lookups-data
+import spacy
+
+nlp = spacy.load("en_core_web_md")
+
+# remove the empty placeholder prob table
++ if nlp.vocab.lookups_extra.has_table("lexeme_prob"):
++ nlp.vocab.lookups_extra.remove_table("lexeme_prob")
+
+# access any `.prob` to load the full table into the model
+assert nlp.vocab["a"].prob == -3.9297883511
+
+# if desired, save this model with the probability table included
+nlp.to_disk("/path/to/model")
+```
+
If you'd like to include custom `cluster`, `prob`, or `sentiment` tables as part
of a new model, add the data to
[`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data) under
@@ -271,3 +307,39 @@ When you initialize a new model with [`spacy init-model`](/api/cli#init-model),
the `prob` table from `spacy-lookups-data` may be loaded as part of the
initialization. If you'd like to omit this extra data as in spaCy's provided
v2.3 models, use the new flag `--omit-extra-lookups`.
+
+#### Tag maps in provided models vs. blank models
+
+The tag maps in the provided models may differ from the tag maps in the spaCy
+library. You can access the tag map in a loaded model under
+`nlp.vocab.morphology.tag_map`.
+
+The tag map from `spacy.lang.lg.tag_map` is still used when a blank model is
+initialized. If you want to provide an alternate tag map, update
+`nlp.vocab.morphology.tag_map` after initializing the model or if you're using
+the [train CLI](/api/cli#train), you can use the new `--tag-map-path` option to
+provide the tag map as a JSON dict.
+
+If you want to export a tag map from a provided model for use with the train
+CLI, you can save it as a JSON dict. To only use string keys as required by
+JSON and to make it easier to read and edit, any internal integer IDs need to
+be converted back to strings:
+
+```python
+import spacy
+import srsly
+
+nlp = spacy.load("en_core_web_sm")
+tag_map = {}
+
+# convert any integer IDs to strings for JSON
+for tag, morph in nlp.vocab.morphology.tag_map.items():
+ tag_map[tag] = {}
+ for feat, val in morph.items():
+ feat = nlp.vocab.strings.as_string(feat)
+ if not isinstance(val, bool):
+ val = nlp.vocab.strings.as_string(val)
+ tag_map[tag][feat] = val
+
+srsly.write_json("tag_map.json", tag_map)
+```
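The conversion loop in the patch above amounts to "stringify interned IDs, but keep booleans". A stdlib-only sketch of the same logic, where `STRINGS` and `as_string` are hypothetical stand-ins for spaCy's string store (`nlp.vocab.strings.as_string`) and the IDs and features are made up:

```python
import json

# Hypothetical stand-in for spaCy's string store: maps interned integer IDs
# back to their string form. The IDs and features below are invented.
STRINGS = {74: "POS", 92: "NOUN"}

def as_string(key):
    return STRINGS[key] if isinstance(key, int) else key

raw_tag_map = {"NN": {74: 92, "is_base": True}}  # mixed int IDs and values

tag_map = {}
for tag, morph in raw_tag_map.items():
    tag_map[tag] = {}
    for feat, val in morph.items():
        feat = as_string(feat)
        if not isinstance(val, bool):  # keep booleans as-is
            val = as_string(val)
        tag_map[tag][feat] = val

print(json.dumps(tag_map, sort_keys=True))
```

The `bool` guard matters because `isinstance(True, int)` is true in Python, so boolean feature values would otherwise be fed into the integer-ID lookup.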
From 305221f3e5cd33b374fcf1c331e11a7fd1cdbe9c Mon Sep 17 00:00:00 2001
From: Matthias Hertel
Date: Tue, 30 Jun 2020 19:58:23 +0200
Subject: [PATCH 40/41] Website: fixed the token span in the text about the
rule-based matching example (#5669)
* fixed token span in pattern matcher example
* contributor agreement
---
.github/contributors/hertelm.md | 106 ++++++++++++++++++++++
website/docs/usage/rule-based-matching.md | 2 +-
2 files changed, 107 insertions(+), 1 deletion(-)
create mode 100644 .github/contributors/hertelm.md
diff --git a/.github/contributors/hertelm.md b/.github/contributors/hertelm.md
new file mode 100644
index 000000000..ba4250bfc
--- /dev/null
+++ b/.github/contributors/hertelm.md
@@ -0,0 +1,106 @@
+# spaCy contributor agreement
+
+This spaCy Contributor Agreement (**"SCA"**) is based on the
+[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
+The SCA applies to any contribution that you make to any product or project
+managed by us (the **"project"**), and sets out the intellectual property rights
+you grant to us in the contributed materials. The term **"us"** shall mean
+[ExplosionAI GmbH](https://explosion.ai/legal). The term
+**"you"** shall mean the person or entity identified below.
+
+If you agree to be bound by these terms, fill in the information requested
+below and include the filled-in version with your first pull request, under the
+folder [`.github/contributors/`](/.github/contributors/). The name of the file
+should be your GitHub username, with the extension `.md`. For example, the user
+example_user would create the file `.github/contributors/example_user.md`.
+
+Read this agreement carefully before signing. These terms and conditions
+constitute a binding legal agreement.
+
+## Contributor Agreement
+
+1. The term "contribution" or "contributed materials" means any source code,
+object code, patch, tool, sample, graphic, specification, manual,
+documentation, or any other material posted or submitted by you to the project.
+
+2. With respect to any worldwide copyrights, or copyright applications and
+registrations, in your contribution:
+
+ * you hereby assign to us joint ownership, and to the extent that such
+ assignment is or becomes invalid, ineffective or unenforceable, you hereby
+ grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
+ royalty-free, unrestricted license to exercise all rights under those
+ copyrights. This includes, at our option, the right to sublicense these same
+ rights to third parties through multiple levels of sublicensees or other
+ licensing arrangements;
+
+ * you agree that each of us can do all things in relation to your
+ contribution as if each of us were the sole owners, and if one of us makes
+ a derivative work of your contribution, the one who makes the derivative
+ work (or has it made) will be the sole owner of that derivative work;
+
+ * you agree that you will not assert any moral rights in your contribution
+ against us, our licensees or transferees;
+
+ * you agree that we may register a copyright in your contribution and
+ exercise all ownership rights associated with it; and
+
+ * you agree that neither of us has any duty to consult with, obtain the
+ consent of, pay or render an accounting to the other for any use or
+ distribution of your contribution.
+
+3. With respect to any patents you own, or that you can license without payment
+to any third party, you hereby grant to us a perpetual, irrevocable,
+non-exclusive, worldwide, no-charge, royalty-free license to:
+
+ * make, have made, use, sell, offer to sell, import, and otherwise transfer
+ your contribution in whole or in part, alone or in combination with or
+ included in any product, work or materials arising out of the project to
+ which your contribution was submitted, and
+
+ * at our option, to sublicense these same rights to third parties through
+ multiple levels of sublicensees or other licensing arrangements.
+
+4. Except as set out above, you keep all right, title, and interest in your
+contribution. The rights that you grant to us under these terms are effective
+on the date you first submitted a contribution to us, even if your submission
+took place before the date you sign these terms.
+
+5. You covenant, represent, warrant and agree that:
+
+ * Each contribution that you submit is and shall be an original work of
+ authorship and you can legally grant the rights set out in this SCA;
+
+ * to the best of your knowledge, each contribution will not violate any
+ third party's copyrights, trademarks, patents, or other intellectual
+ property rights; and
+
+ * each contribution shall be in compliance with U.S. export control laws and
+ other applicable export and import laws. You agree to notify us if you
+ become aware of any circumstance which would make any of the foregoing
+ representations inaccurate in any respect. We may publicly disclose your
+ participation in the project, including the fact that you have signed the SCA.
+
+6. This SCA is governed by the laws of the State of California and applicable
+U.S. Federal law. Any choice of law rules will not apply.
+
+7. Please place an “x” on one of the applicable statements below. Please do NOT
+mark both statements:
+
+ * [x] I am signing on behalf of myself as an individual and no other person
+ or entity, including my employer, has or will have rights with respect to my
+ contributions.
+
+ * [ ] I am signing on behalf of my employer or a legal entity and I have the
+ actual authority to contractually bind that entity.
+
+## Contributor Details
+
+| Field | Entry |
+|------------------------------- | -------------------- |
+| Name | Matthias Hertel |
+| Company name (if applicable) | |
+| Title or role (if applicable) | |
+| Date | June 29, 2020 |
+| GitHub username | hertelm |
+| Website (optional) | |
diff --git a/website/docs/usage/rule-based-matching.md b/website/docs/usage/rule-based-matching.md
index f7866fe31..252aa8c77 100644
--- a/website/docs/usage/rule-based-matching.md
+++ b/website/docs/usage/rule-based-matching.md
@@ -122,7 +122,7 @@ for match_id, start, end in matches:
```
The matcher returns a list of `(match_id, start, end)` tuples – in this case,
-`[('15578876784678163569', 0, 2)]`, which maps to the span `doc[0:2]` of our
+`[('15578876784678163569', 0, 3)]`, which maps to the span `doc[0:3]` of our
original document. The `match_id` is the [hash value](/usage/spacy-101#vocab) of
the string ID "HelloWorld". To get the string value, you can look up the ID in
the [`StringStore`](/api/stringstore).
From 7111b9de2e39a03f55502183d0536e446e49dbd0 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=C3=81lvaro=20Abella=20Bascar=C3=A1n?=
Date: Tue, 30 Jun 2020 20:00:50 +0200
Subject: [PATCH 41/41] Fix in docs: pipe(docs) instead of pipe(texts) (#5680)
Very minor fix in docs, specifically in this part:
```
matcher = PhraseMatcher(nlp.vocab)
> for doc in matcher.pipe(texts, batch_size=50):
> pass
```
`texts` suggests the input is an iterable of strings. I replaced it with `docs`.
---
website/docs/api/phrasematcher.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/website/docs/api/phrasematcher.md b/website/docs/api/phrasematcher.md
index a72277420..49211174c 100644
--- a/website/docs/api/phrasematcher.md
+++ b/website/docs/api/phrasematcher.md
@@ -91,7 +91,7 @@ Match a stream of documents, yielding them in turn.
> ```python
> from spacy.matcher import PhraseMatcher
> matcher = PhraseMatcher(nlp.vocab)
-> for doc in matcher.pipe(texts, batch_size=50):
+> for doc in matcher.pipe(docs, batch_size=50):
> pass
> ```