def add_codes(err_cls):
    """Add error codes to string messages via class attribute names."""

    class ErrorsWithCodes(err_cls):
        def __getattribute__(self, code):
            msg = super(ErrorsWithCodes, self).__getattribute__(code)
            if code.startswith("__"):  # python system attributes like __class__
                return msg
            else:
                return "[{code}] {msg}".format(code=code, msg=msg)

    return ErrorsWithCodes()
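The decorator above replaces the class with a wrapper instance whose `__getattribute__` prefixes every message with its attribute name, so `Errors.E001` comes back as `"[E001] …"`. A minimal self-contained sketch of the same pattern (the `Demo` class and its sample message are hypothetical, for illustration only):

```python
def add_codes(err_cls):
    """Add error codes to string messages via class attribute names."""

    class ErrorsWithCodes(err_cls):
        def __getattribute__(self, code):
            msg = super(ErrorsWithCodes, self).__getattribute__(code)
            if code.startswith("__"):  # skip python system attributes
                return msg
            return "[{code}] {msg}".format(code=code, msg=msg)

    # Return an *instance*, so attribute access goes through __getattribute__
    return ErrorsWithCodes()


@add_codes
class Demo:
    E001 = "No component '{name}' found in pipeline."


# Attribute access returns the code-prefixed template, ready for .format():
print(Demo.E001)                    # [E001] No component '{name}' found in pipeline.
print(Demo.E001.format(name="ner"))  # [E001] No component 'ner' found in pipeline.
```

Returning an instance rather than the class is what makes `__getattribute__` fire on plain attribute access like `Demo.E001`.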


# fmt: off

@add_codes
class Warnings:
    W004 = ("No text fixing enabled. Run `pip install ftfy` to enable fixing "
            "using ftfy.fix_text if necessary.")
    W005 = ("Doc object not parsed. This means displaCy won't be able to "
            "generate a dependency visualization for it. Make sure the Doc "
            "was processed with a model that supports dependency parsing, and "
            "not just a language class like `English()`. For more info, see "
            "the docs:\nhttps://nightly.spacy.io/usage/models")
    W006 = ("No entities to visualize found in Doc object. If this is "
            "surprising to you, make sure the Doc was processed using a model "
            "that supports named entity recognition, and check the `doc.ents` "
            "property manually if necessary.")
    W007 = ("The model you're using has no word vectors loaded, so the result "
            "of the {obj}.similarity method will be based on the tagger, "
            "parser and NER, which may not give useful similarity judgements. "
            "This may happen if you're using one of the small models, e.g. "
            "`en_core_web_sm`, which don't ship with word vectors and only "
            "use context-sensitive tensors. You can always add your own word "
            "vectors, or use one of the larger models instead if available.")
    W008 = ("Evaluating {obj}.similarity based on empty vectors.")
    W011 = ("It looks like you're calling displacy.serve from within a "
            "Jupyter notebook or a similar environment. This likely means "
            "you're already running a local web server, so there's no need to "
            "make displaCy start another one. Instead, you should be able to "
            "replace displacy.serve with displacy.render to show the "
            "visualization.")
    W012 = ("A Doc object you're adding to the PhraseMatcher for pattern "
            "'{key}' is parsed and/or tagged, but to match on '{attr}', you "
            "don't actually need this information. This means that creating "
            "the patterns is potentially much slower, because all pipeline "
            "components are applied. To only create tokenized Doc objects, "
            "try using `nlp.make_doc(text)` or process all texts as a stream "
            "using `list(nlp.tokenizer.pipe(all_texts))`.")
    W017 = ("Alias '{alias}' already exists in the Knowledge Base.")
    W018 = ("Entity '{entity}' already exists in the Knowledge Base - "
            "ignoring the duplicate entry.")
    W020 = ("Unnamed vectors. This won't allow multiple vectors models to be "
            "loaded. (Shape: {shape})")
    W021 = ("Unexpected hash collision in PhraseMatcher. Matches may be "
            "incorrect. Modify PhraseMatcher._terminal_hash to fix.")
    W024 = ("Entity '{entity}' - Alias '{alias}' combination already exists in "
            "the Knowledge Base.")
    W026 = ("Unable to set all sentence boundaries from dependency parses. If "
            "you are constructing a parse tree incrementally by setting "
            "token.head values, you can probably ignore this warning. Consider "
            "using Doc(words, ..., heads=heads, deps=deps) instead.")
    W027 = ("Found a large training file of {size} bytes. Note that it may "
            "be more efficient to split your training data into multiple "
            "smaller JSON files instead.")
    W028 = ("Doc.from_array was called with a vector of type '{type}', "
            "but is expecting one of type 'uint64' instead. This may result "
            "in problems with the vocab further on in the pipeline.")
    W030 = ("Some entities could not be aligned in the text \"{text}\" with "
            "entities \"{entities}\". Use "
            "`spacy.training.offsets_to_biluo_tags(nlp.make_doc(text), entities)`"
            " to check the alignment. Misaligned entities ('-') will be "
            "ignored during training.")
    W033 = ("Training a new {model} using a model with no lexeme normalization "
            "table. This may degrade the performance of the model to some "
            "degree. If this is intentional or the language you're using "
            "doesn't have a normalization table, please ignore this warning. "
            "If this is surprising, make sure you have the spacy-lookups-data "
            "package installed. The languages with lexeme normalization tables "
            "are currently: {langs}")
    W034 = ("Please install the package spacy-lookups-data in order to include "
            "the default lexeme normalization table for the language '{lang}'.")
    W035 = ('Discarding subpattern "{pattern}" due to an unrecognized '
            "attribute or operator.")

    # TODO: fix numbering after merging develop into master
    W090 = ("Could not locate any binary .spacy files in path '{path}'.")
    W091 = ("Could not clean/remove the temp directory at {dir}: {msg}.")
    W092 = ("Ignoring annotations for sentence starts, as dependency heads are set.")
    W093 = ("Could not find any data to train the {name} on. Is your "
            "input data correctly formatted?")
    W094 = ("Model '{model}' ({model_version}) specifies an under-constrained "
            "spaCy version requirement: {version}. This can lead to compatibility "
            "problems with older versions, or as new spaCy versions are "
            "released, because the model may say it's compatible when it's "
            'not. Consider changing the "spacy_version" in your meta.json to a '
            "version range, with a lower and upper pin. For example: {example}")
    W095 = ("Model '{model}' ({model_version}) requires spaCy {version} and is "
            "incompatible with the current version ({current}). This may lead "
            "to unexpected results or runtime errors. To resolve this, "
            "download a newer compatible model or retrain your custom model "
            "with the current spaCy version. For more details and available "
            "updates, run: python -m spacy validate")
    W096 = ("The method 'disable_pipes' is deprecated - use 'select_pipes' "
            "instead.")
    W097 = ("No Model config was provided to create the '{name}' component, "
            "and no default configuration could be found either.")
    W098 = ("No Model config was provided to create the '{name}' component, "
            "so a default configuration was used.")
    W099 = ("Expected 'dict' type for the 'model' argument of pipe '{pipe}', "
            "but got '{type}' instead, so ignoring it.")
    W100 = ("Skipping unsupported morphological feature(s): '{feature}'. "
            "Provide features as a dict {{\"Field1\": \"Value1,Value2\"}} or "
            "string \"Field1=Value1,Value2|Field2=Value3\".")
    W101 = ("Skipping `Doc` custom extension '{name}' while merging docs.")
    W102 = ("Skipping unsupported user data '{key}: {value}' while merging docs.")
    W103 = ("Unknown {lang} word segmenter '{segmenter}'. Supported "
            "word segmenters: {supported}. Defaulting to {default}.")
    W104 = ("Skipping modifications for '{target}' segmenter. The current "
            "segmenter is '{current}'.")
    W105 = ("As of spaCy v3.0, the {matcher}.pipe method is deprecated. If you "
            "need to match on a stream of documents, you can use nlp.pipe and "
            "call the {matcher} on each Doc object.")
    W107 = ("The property Doc.{prop} is deprecated. Use "
            "Doc.has_annotation(\"{attr}\") instead.")


@add_codes
class Errors:
    E001 = ("No component '{name}' found in pipeline. Available names: {opts}")
    E002 = ("Can't find factory for '{name}' for language {lang} ({lang_code}). "
            "This usually happens when spaCy calls nlp.{method} with a custom "
            "component name that's not registered on the current language class. "
            "If you're using a custom component, make sure you've added the "
            "decorator @Language.component (for function components) or "
            "@Language.factory (for class components).\n\nAvailable "
            "factories: {opts}")
    E003 = ("Not a valid pipeline component. Expected callable, but "
            "got {component} (name: '{name}'). If you're using a custom "
            "component factory, double-check that it correctly returns your "
            "initialized component.")
    E004 = ("Can't set up pipeline component: a factory for '{name}' already "
            "exists. Existing factory: {func}. New factory: {new_func}")
    E005 = ("Pipeline component '{name}' returned None. If you're using a "
            "custom component, maybe you forgot to return the processed Doc?")
    E006 = ("Invalid constraints for adding pipeline component. You can only "
            "set one of the following: before (component name or index), "
            "after (component name or index), first (True) or last (True). "
            "Invalid configuration: {args}. Existing components: {opts}")
    E007 = ("'{name}' already exists in pipeline. Existing names: {opts}")
    E008 = ("Can't restore disabled pipeline component '{name}' because it "
            "doesn't exist in the pipeline anymore. If you want to remove "
            "components from the pipeline, you should do it before calling "
            "`nlp.select_pipes()` or after restoring the disabled components.")
    E010 = ("Word vectors set to length 0. This may be because you don't have "
            "a model installed or loaded, or because your model doesn't "
            "include word vectors. For more info, see the docs:\n"
            "https://nightly.spacy.io/usage/models")
    E011 = ("Unknown operator: '{op}'. Options: {opts}")
    E012 = ("Cannot add pattern for zero tokens to matcher.\nKey: {key}")
    E014 = ("Unknown tag ID: {tag}")
    E016 = ("MultitaskObjective target should be function or one of: dep, "
            "tag, ent, dep_tag_offset, ent_tag.")
    E017 = ("Can only add unicode or bytes. Got type: {value_type}")
    E018 = ("Can't retrieve string for hash '{hash_value}'. This usually "
            "refers to an issue with the `Vocab` or `StringStore`.")
    E019 = ("Can't create transition with unknown action ID: {action}. Action "
            "IDs are enumerated in spacy/syntax/{src}.pyx.")
    E022 = ("Could not find a transition with the name '{name}' in the NER "
            "model.")
    E024 = ("Could not find an optimal move to supervise the parser. Usually, "
            "this means that the model can't be updated in a way that's valid "
            "and satisfies the correct annotations specified in the GoldParse. "
            "For example, are all labels added to the model? If you're "
            "training a named entity recognizer, also make sure that none of "
            "your annotated entity spans have leading or trailing whitespace "
            "or punctuation. "
            "You can also use the experimental `debug data` command to "
            "validate your JSON-formatted training data. For details, run:\n"
            "python -m spacy debug data --help")
    E025 = ("String is too long: {length} characters. Max is 2**30.")
    E026 = ("Error accessing token at position {i}: out of bounds in Doc of "
            "length {length}.")
    E027 = ("Arguments 'words' and 'spaces' should be sequences of the same "
            "length, or 'spaces' should be left default at None. spaces "
            "should be a sequence of booleans, with True meaning that the "
            "word owns a ' ' character following it.")
    E028 = ("orths_and_spaces expects either a list of unicode strings or a "
            "list of (unicode, bool) tuples. Got bytes instance: {value}")
    E029 = ("noun_chunks requires the dependency parse, which requires a "
            "statistical model to be installed and loaded. For more info, see "
            "the documentation:\nhttps://nightly.spacy.io/usage/models")
    E030 = ("Sentence boundaries unset. You can add the 'sentencizer' "
            "component to the pipeline with: "
            "nlp.add_pipe('sentencizer'). "
            "Alternatively, add the dependency parser, or set sentence "
            "boundaries by setting doc[i].is_sent_start.")
    E031 = ("Invalid token: empty string ('') at position {i}.")
    E033 = ("Cannot load into non-empty Doc of length {length}.")
    E035 = ("Error creating span with start {start} and end {end} for Doc of "
            "length {length}.")
    E036 = ("Error calculating span: Can't find a token starting at character "
            "offset {start}.")
    E037 = ("Error calculating span: Can't find a token ending at character "
            "offset {end}.")
    E039 = ("Array bounds exceeded while searching for root word. This likely "
            "means the parse tree is in an invalid state. Please report this "
            "issue here: http://github.com/explosion/spaCy/issues")
    E040 = ("Attempt to access token at {i}, max length {max_length}.")
    E041 = ("Invalid comparison operator: {op}. Likely a Cython bug?")
    E042 = ("Error accessing doc[{i}].nbor({j}), for doc of length {length}.")
    E043 = ("Refusing to write to token.sent_start if its document is parsed, "
            "because this may cause inconsistent state.")
    E044 = ("Invalid value for token.sent_start: {value}. Must be one of: "
            "None, True, False")
    E045 = ("Possibly infinite loop encountered while looking for {attr}.")
    E046 = ("Can't retrieve unregistered extension attribute '{name}'. Did "
            "you forget to call the `set_extension` method?")
    E047 = ("Can't assign a value to unregistered extension attribute "
            "'{name}'. Did you forget to call the `set_extension` method?")
    E048 = ("Can't import language {lang} from spacy.lang: {err}")
    E050 = ("Can't find model '{name}'. It doesn't seem to be a Python "
            "package or a valid path to a data directory.")
    E052 = ("Can't find model directory: {path}")
    E053 = ("Could not read {name} from {path}")
    E054 = ("No valid '{setting}' setting found in model meta.json.")
    E055 = ("Invalid ORTH value in exception:\nKey: {key}\nOrths: {orths}")
    E056 = ("Invalid tokenizer exception: ORTH values combined don't match "
            "original string.\nKey: {key}\nOrths: {orths}")
    E057 = ("Stepped slices not supported in Span objects. Try: "
            "list(tokens)[start:stop:step] instead.")
    E058 = ("Could not retrieve vector for key {key}.")
    E059 = ("One (and only one) keyword arg must be set. Got: {kwargs}")
    E060 = ("Cannot add new key to vectors: the table is full. Current shape: "
            "({rows}, {cols}).")
    E062 = ("Cannot find empty bit for new lexical flag. All bits between 0 "
            "and 63 are occupied. You can replace one by specifying the "
            "`flag_id` explicitly, e.g. "
            "`nlp.vocab.add_flag(your_func, flag_id=IS_ALPHA)`.")
    E063 = ("Invalid value for flag_id: {value}. Flag IDs must be between 1 "
            "and 63 (inclusive).")
    E064 = ("Error fetching a Lexeme from the Vocab. When looking up a "
            "string, the lexeme returned had an orth ID that did not match "
            "the query string. This means that the cached lexeme structs are "
            "mismatched to the string encoding table. The mismatch:\n"
            "Query string: {string}\nOrth cached: {orth}\nOrth ID: {orth_id}")
    E065 = ("Only one of the vector table's width and shape can be specified. "
            "Got width {width} and shape {shape}.")
    E067 = ("Invalid BILUO tag sequence: Got a tag starting with {start} "
            "without a preceding 'B' (beginning of an entity). "
            "Tag sequence:\n{tags}")
    E068 = ("Invalid BILUO tag: '{tag}'.")
    E071 = ("Error creating lexeme: specified orth ID ({orth}) does not "
            "match the one in the vocab ({vocab_orth}).")
    E073 = ("Cannot assign vector of length {new_length}. Existing vectors "
            "are of length {length}. You can use `vocab.reset_vectors` to "
            "clear the existing vectors and resize the table.")
    E074 = ("Error interpreting compiled match pattern: patterns are expected "
            "to end with the attribute {attr}. Got: {bad_attr}.")
    E082 = ("Error deprojectivizing parse: number of heads ({n_heads}), "
            "projective heads ({n_proj_heads}) and labels ({n_labels}) do not "
            "match.")
    E083 = ("Error setting extension: only one of `default`, `method`, or "
            "`getter` (plus optional `setter`) is allowed. Got: {nr_defined}")
    E084 = ("Error assigning label ID {label} to span: not in StringStore.")
    E085 = ("Can't create lexeme for string '{string}'.")
    E087 = ("Unknown displaCy style: {style}.")
    E088 = ("Text of length {length} exceeds maximum of {max_length}. The "
            "v2.x parser and NER models require roughly 1GB of temporary "
            "memory per 100,000 characters in the input. This means long "
            "texts may cause memory allocation errors. If you're not using "
            "the parser or NER, it's probably safe to increase the "
            "`nlp.max_length` limit. The limit is in number of characters, so "
            "you can check whether your inputs are too long by checking "
            "`len(text)`.")
    E089 = ("Extensions can't have a setter argument without a getter "
            "argument. Check the keyword arguments on `set_extension`.")
    E090 = ("Extension '{name}' already exists on {obj}. To overwrite the "
            "existing extension, set `force=True` on `{obj}.set_extension`.")
    E091 = ("Invalid extension attribute {name}: expected callable or None, "
            "but got: {value}")
    E093 = ("token.ent_iob values make invalid sequence: I without B\n{seq}")
    E094 = ("Error reading line {line_num} in vectors file {loc}.")
    E095 = ("Can't write to frozen dictionary. This is likely an internal "
            "error. Are you writing to a default function argument?")
    E096 = ("Invalid object passed to displaCy: Can only visualize Doc or "
            "Span objects, or dicts if set to manual=True.")
|
2018-07-18 20:43:16 +03:00
|
|
|
E097 = ("Invalid pattern: expected token pattern (list of dicts) or "
|
|
|
|
"phrase pattern (string) but got:\n{pattern}")
|
2020-08-31 21:04:26 +03:00
|
|
|
E098 = ("Invalid pattern: expected both RIGHT_ID and RIGHT_ATTRS.")
|
|
|
|
E099 = ("Invalid pattern: the first node of pattern should be an anchor "
|
|
|
|
"node. The node should only contain RIGHT_ID and RIGHT_ATTRS.")
|
|
|
|
E100 = ("Nodes other than the anchor node should all contain LEFT_ID, "
|
|
|
|
"REL_OP and RIGHT_ID.")
|
|
|
|
E101 = ("RIGHT_ID should be a new node and LEFT_ID should already have "
|
2018-10-30 01:21:39 +03:00
|
|
|
"have been declared in previous edges.")
|
2019-02-24 17:11:28 +03:00
|
|
|
E102 = ("Can't merge non-disjoint spans. '{token}' is already part of "
|
2019-10-01 22:59:50 +03:00
|
|
|
"tokens to merge. If you want to find the longest non-overlapping "
|
|
|
|
"spans, you can use the util.filter_spans helper:\n"
|
2020-09-04 13:58:50 +03:00
|
|
|
"https://nightly.spacy.io/api/top-level#util.filter_spans")
|
2019-08-20 17:03:45 +03:00
|
|
|
E103 = ("Trying to set conflicting doc.ents: '{span1}' and '{span2}'. A "
|
|
|
|
"token can only be part of one entity, so make sure the entities "
|
|
|
|
"you're setting don't overlap.")
|
2018-11-30 22:16:14 +03:00
|
|
|
E106 = ("Can't find doc._.{attr} attribute specified in the underscore "
|
|
|
|
"settings: {opts}")
|
|
|
|
E107 = ("Value of doc._.{attr} is not JSON-serializable: {value}")
|
2020-02-27 20:42:27 +03:00
|
|
|
E109 = ("Component '{name}' could not be run. Did you forget to "
|
|
|
|
"call begin_training()?")
|
2018-12-20 19:32:04 +03:00
|
|
|
E110 = ("Invalid displaCy render wrapper. Expected callable, got: {obj}")
|
2019-02-13 13:27:04 +03:00
|
|
|
E111 = ("Pickling a token is not supported, because tokens are only views "
|
|
|
|
"of the parent Doc and can't exist on their own. A pickled token "
|
|
|
|
"would always have to include its Doc and Vocab, which has "
|
|
|
|
"practically no advantage over pickling the parent Doc directly. "
|
|
|
|
"So instead of pickling the token, pickle the Doc it belongs to.")
|
2019-02-13 15:22:05 +03:00
|
|
|
E112 = ("Pickling a span is not supported, because spans are only views "
|
|
|
|
"of the parent Doc and can't exist on their own. A pickled span "
|
|
|
|
"would always have to include its Doc and Vocab, which has "
|
|
|
|
"practically no advantage over pickling the parent Doc directly. "
|
|
|
|
"So instead of pickling the span, pickle the Doc it belongs to or "
|
|
|
|
"use Span.as_doc to convert the span to a standalone Doc object.")
|
2019-02-24 17:11:28 +03:00
|
|
|
E115 = ("All subtokens must have associated heads.")
|
2019-02-15 19:32:31 +03:00
|
|
|
E117 = ("The newly split tokens must match the text of the original token. "
|
2019-02-17 14:22:07 +03:00
|
|
|
"New orths: {new}. Old text: {old}.")
|
2019-02-24 20:38:47 +03:00
|
|
|
E118 = ("The custom extension attribute '{attr}' is not registered on the "
|
|
|
|
"Token object so it can't be set during retokenization. To "
|
|
|
|
"register an attribute, use the Token.set_extension classmethod.")
|
2019-08-20 17:03:45 +03:00
|
|
|
E119 = ("Can't set custom extension attribute '{attr}' during "
|
|
|
|
"retokenization because it's not writable. This usually means it "
|
|
|
|
            "was registered with a getter function (and no setter) or as a "
            "method extension, so the value is computed dynamically. To "
            "overwrite a custom attribute manually, it should be registered "
            "with a default value or with a getter AND setter.")
    E120 = ("Can't set custom extension attributes during retokenization. "
            "Expected dict mapping attribute names to values, but got: {value}")
    E121 = ("Can't bulk merge spans. Attribute length {attr_len} should be "
            "equal to span length ({span_len}).")
    E122 = ("Cannot find token to be split. Did it get merged?")
    E123 = ("Cannot find head of token to be split. Did it get merged?")
    E125 = ("Unexpected value: {value}")
    E126 = ("Unexpected matcher predicate: '{bad}'. Expected one of: {good}. "
            "This is likely a bug in spaCy, so feel free to open an issue.")
    E129 = ("Cannot write the label of an existing Span object because a Span "
            "is a read-only view of the underlying Token objects stored in the "
            "Doc. Instead, create a new Span object and specify the `label` "
            "keyword argument, for example:\nfrom spacy.tokens import Span\n"
            "span = Span(doc, start={start}, end={end}, label='{label}')")
    E130 = ("You are running a narrow unicode build, which is incompatible "
            "with spacy >= 2.1.0. To fix this, reinstall Python and use a wide "
            "unicode build instead. You can also rebuild Python and set the "
            "--enable-unicode=ucs4 flag.")
    E131 = ("Cannot write the kb_id of an existing Span object because a Span "
            "is a read-only view of the underlying Token objects stored in "
            "the Doc. Instead, create a new Span object and specify the "
            "`kb_id` keyword argument, for example:\nfrom spacy.tokens "
            "import Span\nspan = Span(doc, start={start}, end={end}, "
            "label='{label}', kb_id='{kb_id}')")
    E132 = ("The vectors for entities and probabilities for alias '{alias}' "
            "should have equal length, but found {entities_length} and "
            "{probabilities_length} respectively.")
    E133 = ("The sum of prior probabilities for alias '{alias}' should not "
            "exceed 1, but found {sum}.")
    E134 = ("Entity '{entity}' is not defined in the Knowledge Base.")
    E137 = ("Expected 'dict' type, but got '{type}' from '{line}'. Make sure "
            "to provide a valid JSON object as input with either the `text` "
            "or `tokens` key. For more info, see the docs:\n"
            "https://nightly.spacy.io/api/cli#pretrain-jsonl")
    E138 = ("Invalid JSONL format for raw text '{text}'. Make sure the input "
            "includes either the `text` or `tokens` key. For more info, see "
            "the docs:\nhttps://nightly.spacy.io/api/cli#pretrain-jsonl")
    E139 = ("Knowledge Base for component '{name}' is empty. Use the methods "
            "kb.add_entity and kb.add_alias to add entries.")
    E140 = ("The list of entities, prior probabilities and entity vectors "
            "should be of equal length.")
    E141 = ("Entity vectors should be of length {required} instead of the "
            "provided {found}.")
    E143 = ("Labels for component '{name}' not initialized. This can be fixed "
            "by calling add_label, or by providing a representative batch of "
            "examples to the component's begin_training method.")
    E145 = ("Error reading `{param}` from input file.")
    E146 = ("Could not access `{path}`.")
    E147 = ("Unexpected error in the {method} functionality of the "
            "EntityLinker: {msg}. This is likely a bug in spaCy, so feel free "
            "to open an issue.")
    E148 = ("Expected {ents} KB identifiers but got {ids}. Make sure that "
            "each entity in `doc.ents` is assigned to a KB identifier.")
    E149 = ("Error deserializing model. Check that the config used to create "
            "the component matches the model being loaded.")
    E150 = ("The language of the `nlp` object and the `vocab` should be the "
            "same, but found '{nlp}' and '{vocab}' respectively.")
    E152 = ("The attribute {attr} is not supported for token patterns. "
            "Please use the option validate=True with Matcher, PhraseMatcher, "
            "or EntityRuler for more details.")
    E153 = ("The value type {vtype} is not supported for token patterns. "
            "Please use the option validate=True with Matcher, PhraseMatcher, "
            "or EntityRuler for more details.")
    E154 = ("One of the attributes or values is not supported for token "
            "patterns. Please use the option validate=True with Matcher, "
            "PhraseMatcher, or EntityRuler for more details.")
    E155 = ("The pipeline needs to include a {pipe} in order to use "
            "Matcher or PhraseMatcher with the attribute {attr}. "
            "Try using nlp() instead of nlp.make_doc() or list(nlp.pipe()) "
            "instead of list(nlp.tokenizer.pipe()).")
    E156 = ("The pipeline needs to include a parser in order to use "
            "Matcher or PhraseMatcher with the attribute DEP. Try using "
            "nlp() instead of nlp.make_doc() or list(nlp.pipe()) instead of "
            "list(nlp.tokenizer.pipe()).")
    E157 = ("Can't render negative values for dependency arc start or end. "
            "Make sure that you're passing in absolute token indices, not "
            "relative token offsets.\nstart: {start}, end: {end}, label: "
            "{label}, direction: {dir}")
    E158 = ("Can't add table '{name}' to lookups because it already exists.")
    E159 = ("Can't find table '{name}' in lookups. Available tables: {tables}")
    E160 = ("Can't find language data file: {path}")
    E161 = ("Found an internal inconsistency when predicting entity links. "
            "This is likely a bug in spaCy, so feel free to open an issue.")
    E162 = ("Cannot evaluate textcat model on data with different labels.\n"
            "Labels in model: {model_labels}\nLabels in evaluation "
            "data: {eval_labels}")
    E163 = ("cumsum was found to be unstable: its last element does not "
            "correspond to sum")
    E164 = ("x is neither increasing nor decreasing: {}.")
    E165 = ("Only one class present in y_true. ROC AUC score is not defined in "
            "that case.")
    E166 = ("Can only merge DocBins with the same pre-defined attributes.\n"
            "Current DocBin: {current}\nOther DocBin: {other}")
    E169 = ("Can't find module: {module}")
    E170 = ("Cannot apply transition {name}: invalid for the current state.")
    E171 = ("Matcher.add received invalid 'on_match' callback argument: expected "
            "callable or None, but got: {arg_type}")
    E175 = ("Can't remove rule for unknown match pattern ID: {key}")
    E176 = ("Alias '{alias}' is not defined in the Knowledge Base.")
    E177 = ("Ill-formed IOB input detected: {tag}")
    E178 = ("Each pattern should be a list of dicts, but got: {pat}. Maybe you "
            "accidentally passed a single pattern to Matcher.add instead of a "
            "list of patterns? If you only want to add one pattern, make sure "
            "to wrap it in a list. For example: matcher.add('{key}', [pattern])")
    E179 = ("Invalid pattern. Expected a list of Doc objects but got a single "
            "Doc. If you only want to add one pattern, make sure to wrap it "
            "in a list. For example: matcher.add('{key}', [doc])")
    E180 = ("Span attributes can't be declared as required or assigned by "
            "components, since spans are only views of the Doc. Use Doc and "
            "Token attributes (or custom extension attributes) only and remove "
            "the following: {attrs}")
    E181 = ("Received invalid attributes for unknown object {obj}: {attrs}. "
            "Only Doc and Token attributes are supported.")
    E182 = ("Received invalid attribute declaration: {attr}\nDid you forget "
            "to define the attribute? For example: {attr}.???")
    E183 = ("Received invalid attribute declaration: {attr}\nOnly top-level "
            "attributes are supported, for example: {solution}")
    E184 = ("Only attributes without underscores are supported in component "
            "attribute declarations (because underscore and non-underscore "
            "attributes are connected anyway): {attr} -> {solution}")
    E185 = ("Received invalid attribute in component attribute declaration: "
            "{obj}.{attr}\nAttribute '{attr}' does not exist on {obj}.")
    E186 = ("'{tok_a}' and '{tok_b}' are different texts.")
    E187 = ("Only unicode strings are supported as labels.")
    E189 = ("Each argument to Doc.__init__ should be of equal length.")
    E190 = ("Token head out of range in `Doc.from_array()` for token index "
            "'{index}' with value '{value}' (equivalent to relative head "
            "index: '{rel_head_index}'). The head indices should be relative "
            "to the current token index rather than absolute indices in the "
            "array.")
    E191 = ("Invalid head: the head token must be from the same doc as the "
            "token itself.")
    E192 = ("Unable to resize vectors in place with cupy.")
    E193 = ("Unable to resize vectors in place if the resized vector dimension "
            "({new_dim}) is not the same as the current vector dimension "
            "({curr_dim}).")
    E194 = ("Unable to align mismatched text '{text}' and words '{words}'.")
    E195 = ("Matcher can be called on {good} only, got {got}.")
    E196 = ("Refusing to write to token.is_sent_end. Sentence boundaries can "
            "only be fixed with token.is_sent_start.")
    E197 = ("Row out of bounds, unable to add row {row} for key {key}.")
    E198 = ("Unable to return {n} most similar vectors for the current vectors "
            "table, which contains {n_rows} vectors.")
    E199 = ("Unable to merge 0-length span at doc[{start}:{end}].")
    E200 = ("Specifying a base model with a pretrained component '{component}' "
            "cannot be combined with adding a pretrained Tok2Vec layer.")
    E201 = ("Span index out of range.")

    # TODO: fix numbering after merging develop into master
    E917 = ("Received invalid value {value} for 'state_type' in "
            "TransitionBasedParser: only 'parser' or 'ner' are valid options.")
    E918 = ("Received invalid value for vocab: {vocab} ({vocab_type}). Valid "
            "values are an instance of spacy.vocab.Vocab or True to create one"
            " (default).")
    E919 = ("A textcat 'positive_label' '{pos_label}' was provided for training "
            "data that does not appear to be a binary classification problem "
            "with two labels. Labels found: {labels}")
    E920 = ("The textcat's 'positive_label' config setting '{pos_label}' "
            "does not match any label in the training data. Labels found: {labels}")
    E921 = ("The method 'set_output' can only be called on components that have "
            "a Model with a 'resize_output' attribute. Otherwise, the output "
            "layer cannot be dynamically changed.")
    E922 = ("Component '{name}' has been initialized with an output dimension of "
            "{nO} - cannot add any more labels.")
    E923 = ("It looks like there is no proper sample data to initialize the "
            "Model of component '{name}'. "
            "This is likely a bug in spaCy, so feel free to open an issue.")
    E924 = ("The '{name}' component does not seem to be initialized properly. "
            "This is likely a bug in spaCy, so feel free to open an issue.")
    E925 = ("Invalid color values for displaCy visualizer: expected dictionary "
            "mapping label names to colors but got: {obj}")
    E926 = ("It looks like you're trying to modify nlp.{attr} directly. This "
            "doesn't work because it's an immutable computed property. If you "
            "need to modify the pipeline, use the built-in methods like "
            "nlp.add_pipe, nlp.remove_pipe, nlp.disable_pipe or nlp.enable_pipe "
            "instead.")
    E927 = ("Can't write to frozen list. Maybe you're trying to modify a "
            "computed property or default function argument?")
    E928 = ("A 'KnowledgeBase' should be written to / read from a file, but the "
            "provided argument {loc} is an existing directory.")
    E929 = ("A 'KnowledgeBase' could not be read from {loc} - the path does "
            "not seem to exist.")
    E930 = ("Received invalid get_examples callback in {name}.begin_training. "
            "Expected function that returns an iterable of Example objects but "
            "got: {obj}")
    E931 = ("Encountered Pipe subclass without Pipe.{method} method in component "
            "'{name}'. If the component is trainable and you want to use this "
            "method, make sure it's overwritten on the subclass. If your "
            "component isn't trainable, add a method that does nothing or "
            "don't use the Pipe base class.")
    E940 = ("Found NaN values in scores.")
    E941 = ("Can't find model '{name}'. It looks like you're trying to load a "
            "model from a shortcut, which is deprecated as of spaCy v3.0. To "
            "load the model, use its full name instead:\n\n"
            "nlp = spacy.load(\"{full}\")\n\nFor more details on the available "
            "models, see the models directory: https://spacy.io/models. If you "
            "want to create a blank model, use spacy.blank: "
            "nlp = spacy.blank(\"{name}\")")
    E942 = ("Executing after_{name} callback failed. Expected the function to "
            "return an initialized nlp object but got: {value}. Maybe "
            "you forgot to return the modified object in your function?")
    E943 = ("Executing before_creation callback failed. Expected the function to "
            "return an uninitialized Language subclass but got: {value}. Maybe "
            "you forgot to return the modified object in your function or "
            "returned the initialized nlp object instead?")
    E944 = ("Can't copy pipeline component '{name}' from source model '{model}': "
            "not found in pipeline. Available components: {opts}")
    E945 = ("Can't copy pipeline component '{name}' from source. Expected loaded "
            "nlp object, but got: {source}")
    E947 = ("Matcher.add received invalid 'greedy' argument: expected "
            "a string value from {expected} but got: '{arg}'")
    E948 = ("Matcher.add received invalid 'patterns' argument: expected "
            "a List, but got: {arg_type}")
    E949 = ("Can only create an alignment when the texts are the same.")
    E952 = ("The section '{name}' is not a valid section in the provided config.")
    E953 = ("Mismatched IDs received by the Tok2Vec listener: {id1} vs. {id2}")
    E954 = ("The Tok2Vec listener did not receive any valid input from an upstream "
            "component.")
    E955 = ("Can't find table(s) '{table}' for language '{lang}' in spacy-lookups-data.")
    E956 = ("Can't find component '{name}' in [components] block in the config. "
            "Available components: {opts}")
    E957 = ("Writing directly to Language.factories isn't needed anymore in "
            "spaCy v3. Instead, you can use the @Language.factory decorator "
            "to register your custom component factory or @Language.component "
            "to register a simple stateless function component that just takes "
            "a Doc and returns it.")
    E958 = ("Language code defined in config ({bad_lang_code}) does not match "
            "language code of current Language subclass {lang} ({lang_code}). "
            "If you want to create an nlp object from a config, make sure to "
            "use the matching subclass with the language-specific settings and "
            "data.")
    E959 = ("Can't insert component {dir} index {idx}. Existing components: {opts}")
    E960 = ("No config data found for component '{name}'. This is likely a bug "
            "in spaCy.")
    E961 = ("Found non-serializable Python object in config. Configs should "
            "only include values that can be serialized to JSON. If you need "
            "to pass models or other objects to your component, use a reference "
            "to a registered function or initialize the object in your "
            "component.\n\n{config}")
    E962 = ("Received incorrect {style} for pipe '{name}'. Expected dict, "
            "got: {cfg_type}.")
    E963 = ("Can't read component info from @Language.{decorator} decorator. "
            "Maybe you forgot to call it? Make sure you're using "
            "@Language.{decorator}() instead of @Language.{decorator}.")
    E964 = ("The pipeline component factory for '{name}' needs to have the "
            "following named arguments, which are passed in by spaCy:\n- nlp: "
            "receives the current nlp object and lets you access the vocab\n- "
            "name: the name of the component instance, can be used to identify "
            "the component, output losses etc.")
    E965 = ("It looks like you're using the @Language.component decorator to "
            "register '{name}' on a class instead of a function component. If "
            "you need to register a class or function that *returns* a component "
            "function, use the @Language.factory decorator instead.")
    E966 = ("nlp.add_pipe now takes the string name of the registered component "
            "factory, not a callable component. Expected string, but got "
            "{component} (name: '{name}').\n\n- If you created your component "
            "with nlp.create_pipe('name'): remove nlp.create_pipe and call "
            "nlp.add_pipe('name') instead.\n\n- If you passed in a component "
            "like TextCategorizer(): call nlp.add_pipe with the string name "
            "instead, e.g. nlp.add_pipe('textcat').\n\n- If you're using a custom "
            "component: Add the decorator @Language.component (for function "
            "components) or @Language.factory (for class components / factories) "
            "to your custom component and assign it a name, e.g. "
            "@Language.component('your_name'). You can then run "
            "nlp.add_pipe('your_name') to add it to the pipeline.")
    E967 = ("No {meta} meta information found for '{name}'. This is likely a bug in spaCy.")
    E968 = ("nlp.replace_pipe now takes the string name of the registered component "
            "factory, not a callable component. Expected string, but got "
            "{component}.\n\n- If you created your component "
            "with nlp.create_pipe('name'): remove nlp.create_pipe and call "
            "nlp.replace_pipe('{name}', 'name') instead.\n\n- If you passed in a "
            "component like TextCategorizer(): call nlp.replace_pipe with the "
            "string name instead, e.g. nlp.replace_pipe('{name}', 'textcat').\n\n"
            "- If you're using a custom component: Add the decorator "
            "@Language.component (for function components) or @Language.factory "
            "(for class components / factories) to your custom component and "
            "assign it a name, e.g. @Language.component('your_name'). You can "
            "then run nlp.replace_pipe('{name}', 'your_name').")
    E969 = ("Expected string values for field '{field}', but received {types} instead.")
    E970 = ("Cannot execute command '{str_command}'. Do you have '{tool}' installed?")
    E971 = ("Found incompatible lengths in Doc.from_array: {array_length} for the "
            "array and {doc_length} for the Doc itself.")
    E972 = ("Example.__init__ got None for '{arg}'. Requires Doc.")
    E973 = ("Unexpected type for NER data.")
    E974 = ("Unknown {obj} attribute: {key}")
    E976 = ("The method 'Example.from_dict' expects a {type} as {n} argument, "
            "but received None.")
    E977 = ("Cannot compare a MorphAnalysis with a string object. "
            "This is likely a bug in spaCy, so feel free to open an issue.")
    E978 = ("The {name} method takes a list of Example objects, but got: {types}")
    E979 = ("Cannot convert {type} to an Example object.")
    E980 = ("Each link annotation should refer to a dictionary with at most one "
            "identifier mapping to 1.0, and all others to 0.0.")
    E981 = ("The offsets of the annotations for 'links' could not be aligned "
            "to token boundaries.")
    E982 = ("The 'ent_iob' attribute of a Token should be an integer indexing "
            "into {values}, but found {value}.")
    E983 = ("Invalid key for '{dict}': {key}. Available keys: "
            "{keys}")
    E984 = ("Invalid component config for '{name}': component block needs either "
            "a key 'factory' specifying the registered function used to "
            "initialize the component, or a key 'source' specifying a "
            "spaCy model to copy the component from. For example, factory = "
            "\"ner\" will use the 'ner' factory and all other settings in the "
            "block will be passed to it as arguments. Alternatively, source = "
            "\"en_core_web_sm\" will copy the component from that model.\n\n{config}")
    E985 = ("Can't load model from config file: no 'nlp' section found.\n\n{config}")
    E986 = ("Could not create any training batches: check your input. "
            "Are the train and dev paths defined? "
            "Is 'discard_oversize' set appropriately?")
    E987 = ("The text of an example training instance should be either a Doc "
            "or a string, but found {type} instead.")
    E988 = ("Could not parse any training examples. Ensure the data is "
            "formatted correctly.")
    E989 = ("'nlp.update()' was called with two positional arguments. This "
            "may be due to a backwards-incompatible change to the format "
            "of the training data in spaCy 3.0 onwards. The 'update' "
            "function should now be called with a batch of 'Example' "
            "objects, instead of (text, annotation) tuples.")
    E991 = ("The function 'select_pipes' should be called with either a "
            "'disable' argument to list the names of the pipe components "
            "that should be disabled, or with an 'enable' argument that "
            "specifies which pipes should not be disabled.")
    E992 = ("The function `select_pipes` was called with `enable`={enable} "
            "and `disable`={disable} but that information is conflicting "
            "for the `nlp` pipeline with components {names}.")
    E993 = ("The config for 'nlp' needs to include a key 'lang' specifying "
            "the code of the language to initialize it with (for example "
            "'en' for English) - this can't be 'None'.\n\n{config}")
    E996 = ("Could not parse {file}: {msg}")
    E997 = ("Tokenizer special cases are not allowed to modify the text. "
            "This would map '{chunk}' to '{orth}' given token attributes "
            "'{token_attrs}'.")
    E999 = ("Unable to merge the `Doc` objects because they do not all share "
            "the same `Vocab`.")
    E1000 = ("No pkuseg model available. Provide a pkuseg model when "
             "initializing the pipeline:\n"
             'cfg = {"tokenizer": {"segmenter": "pkuseg", "pkuseg_model": name_or_path}}\n'
             'nlp = Chinese(config=cfg)')
    E1001 = ("Target token outside of matched span for match with tokens "
             "'{span}' and offset '{index}' matched by patterns '{patterns}'.")
    E1002 = ("Span index out of range.")
    E1003 = ("Unsupported lemmatizer mode '{mode}'.")
    E1004 = ("Missing lemmatizer table(s) for lemmatizer mode '{mode}'. "
             "Required tables '{tables}', found '{found}'. If you are not "
             "providing custom lookups, make sure you have the package "
             "spacy-lookups-data installed.")
E1005 = ("Unable to set attribute '{attr}' in tokenizer exception for "
|
|
|
|
"'{chunk}'. Tokenizer exceptions are only allowed to specify "
|
|
|
|
"`ORTH` and `NORM`.")
|
2020-08-31 22:24:33 +03:00
|
|
|
E1006 = ("Unable to initialize {name} model with 0 labels.")
|
2020-08-31 21:04:26 +03:00
|
|
|
E1007 = ("Unsupported DependencyMatcher operator '{op}'.")
|
|
|
|
E1008 = ("Invalid pattern: each pattern should be a list of dicts. Check "
|
|
|
|
"that you are providing a list of patterns as `List[List[dict]]`.")
|
2020-09-13 15:06:07 +03:00
|
|
|
E1009 = ("String for hash '{val}' not found in StringStore. Set the value "
|
|
|
|
"through token.morph_ instead or add the string to the "
|
|
|
|
"StringStore with `nlp.vocab.strings.add(string)`.")
|
2020-07-03 12:32:42 +03:00
|
|
|
|
2019-08-18 16:09:16 +03:00
|
|
|
|
2018-04-03 16:50:31 +03:00
|
|
|
@add_codes
class TempErrors:
    T003 = ("Resizing pretrained Tagger models is not currently supported.")
    T007 = ("Can't yet set {attr} from Span. Vote for this feature on the "
            "issue tracker: http://github.com/explosion/spaCy/issues")
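A minimal, self-contained sketch of how the `@add_codes` decorator turns attribute names into error-code prefixes, mirroring the decorator defined at the top of this file. It assumes the decorator returns an `ErrorsWithCodes()` instance (the `return` is outside the excerpt shown above); `DemoErrors` is an illustrative class, not part of the library:

```python
def add_codes(err_cls):
    """Add error codes to string messages via class attribute names."""

    class ErrorsWithCodes(err_cls):
        def __getattribute__(self, code):
            msg = super(ErrorsWithCodes, self).__getattribute__(code)
            if code.startswith("__"):  # python system attributes like __class__
                return msg
            else:
                return "[{code}] {msg}".format(code=code, msg=msg)

    # Assumption: the decorator replaces the class with a wrapped instance,
    # so plain attribute access goes through __getattribute__ above.
    return ErrorsWithCodes()


@add_codes
class DemoErrors:  # hypothetical demo class
    T003 = "Resizing pretrained Tagger models is not currently supported."


print(DemoErrors.T003)
# [T003] Resizing pretrained Tagger models is not currently supported.
```

Because attribute access is intercepted on the instance, every message is prefixed with its own attribute name, while dunder attributes like `__class__` pass through untouched.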


# Deprecated model shortcuts, only used in errors and warnings
OLD_MODEL_SHORTCUTS = {
    "en": "en_core_web_sm", "de": "de_core_news_sm", "es": "es_core_news_sm",
    "pt": "pt_core_news_sm", "fr": "fr_core_news_sm", "it": "it_core_news_sm",
    "nl": "nl_core_news_sm", "el": "el_core_news_sm", "nb": "nb_core_news_sm",
    "lt": "lt_core_news_sm", "xx": "xx_ent_wiki_sm"
}
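The shortcut table above maps deprecated two-letter model names to full package names. A sketch of how such a lookup could be used when composing an error or warning message; `suggest_full_name` is a hypothetical helper for illustration, not a spaCy API:

```python
OLD_MODEL_SHORTCUTS = {
    "en": "en_core_web_sm", "de": "de_core_news_sm", "es": "es_core_news_sm",
    "pt": "pt_core_news_sm", "fr": "fr_core_news_sm", "it": "it_core_news_sm",
    "nl": "nl_core_news_sm", "el": "el_core_news_sm", "nb": "nb_core_news_sm",
    "lt": "lt_core_news_sm", "xx": "xx_ent_wiki_sm"
}


def suggest_full_name(name: str) -> str:
    # Map a deprecated shortcut like "en" to its full package name;
    # pass through anything that is not a known shortcut.
    return OLD_MODEL_SHORTCUTS.get(name, name)


print(suggest_full_name("en"))              # en_core_web_sm
print(suggest_full_name("en_core_web_sm"))  # en_core_web_sm
```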
# fmt: on


class MatchPatternError(ValueError):
    def __init__(self, key, errors):
        """Custom error for validating match patterns.

        key (str): The name of the matcher rule.
        errors (dict): Validation errors (sequence of strings) mapped to pattern
            ID, i.e. the index of the added pattern.
        """
        msg = f"Invalid token patterns for matcher rule '{key}'\n"
        for pattern_idx, error_msgs in errors.items():
            pattern_errors = "\n".join([f"- {e}" for e in error_msgs])
            msg += f"\nPattern {pattern_idx}:\n{pattern_errors}\n"
        ValueError.__init__(self, msg)
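A usage sketch showing the message `MatchPatternError` builds from per-pattern validation output. The rule name and error strings below are made up for illustration; only the class itself comes from this file:

```python
class MatchPatternError(ValueError):
    def __init__(self, key, errors):
        """Custom error for validating match patterns.

        key (str): The name of the matcher rule.
        errors (dict): Validation errors (sequence of strings) mapped to pattern
            ID, i.e. the index of the added pattern.
        """
        msg = f"Invalid token patterns for matcher rule '{key}'\n"
        for pattern_idx, error_msgs in errors.items():
            pattern_errors = "\n".join([f"- {e}" for e in error_msgs])
            msg += f"\nPattern {pattern_idx}:\n{pattern_errors}\n"
        ValueError.__init__(self, msg)


# Hypothetical validation output for two patterns under a rule "QUOTED":
validation_errors = {
    0: ["'OP' is not one of '!', '?', '+', '*'"],
    1: ["additional properties are not allowed"],
}
try:
    raise MatchPatternError("QUOTED", validation_errors)
except MatchPatternError as e:
    print(e)  # one header line, then a "- ..." bullet list per pattern index
```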