Remove inline notes on v2 changes [ci skip]

Ines Montani 2020-07-01 22:29:22 +02:00
parent 79540e1eea
commit a4cfe9fc33
5 changed files with 0 additions and 70 deletions

@@ -309,15 +309,6 @@ out to the directory. Accuracy scores and model details will be added to a
[`meta.json`](/usage/training#models-generating) to allow packaging the model
using the [`package`](/api/cli#package) command.
<Infobox title="Changed in v2.1" variant="warning">
As of spaCy 2.1, the `--no-tagger`, `--no-parser` and `--no-entities` flags have
been replaced by a `--pipeline` option, which lets you define comma-separated
names of pipeline components to train. For example, `--pipeline tagger,parser`
will only train the tagger and parser.
</Infobox>
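The `--pipeline` option described above takes a comma-separated list of component names. A minimal sketch of how such an option can be parsed (the argparse setup here is an illustrative assumption, not the actual spaCy CLI code):

```python
import argparse

def parse_pipeline(argv):
    # Mimic a --pipeline option: comma-separated component names.
    parser = argparse.ArgumentParser()
    parser.add_argument("--pipeline", default="tagger,parser,ner",
                        help="Comma-separated pipeline components to train")
    args = parser.parse_args(argv)
    # Split on commas and drop any surrounding whitespace.
    return [name.strip() for name in args.pipeline.split(",") if name.strip()]

print(parse_pipeline(["--pipeline", "tagger,parser"]))
```

With `--pipeline tagger,parser`, only those two component names survive the split, matching the behavior the infobox describes.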
```bash
$ python -m spacy train [lang] [output_path] [train_path] [dev_path]
[--base-model] [--pipeline] [--vectors] [--n-iter] [--n-early-stopping]

@@ -55,18 +55,6 @@ contain arbitrary whitespace. Alignment into the original string is preserved.
| `disable` | list | Names of pipeline components to [disable](/usage/processing-pipelines#disabling). |
| **RETURNS** | `Doc` | A container for accessing the annotations. |
<Infobox title="Changed in v2.0" variant="warning">
Pipeline components to prevent from being loaded can now be added as a list to
`disable`, instead of specifying one keyword argument per component.
```diff
- doc = nlp("I don't want parsed", parse=False)
+ doc = nlp("I don't want parsed", disable=["parser"])
```
</Infobox>
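The `disable` list described above simply skips the named components when the pipeline runs. A pure-Python sketch of that pattern (the `Pipeline` class and component functions are illustrative assumptions, not spaCy internals):

```python
# Minimal sketch of the disable-list pattern: components whose names
# appear in `disable` are skipped when the pipeline is applied.
class Pipeline:
    def __init__(self, components):
        # components: list of (name, function) pairs applied in order
        self.components = components

    def __call__(self, doc, disable=None):
        disable = set(disable or [])
        for name, func in self.components:
            if name in disable:
                continue  # skip disabled component
            doc = func(doc)
        return doc

nlp = Pipeline([
    ("tagger", lambda d: d + " +tags"),
    ("parser", lambda d: d + " +parse"),
])
print(nlp("I don't want parsed", disable=["parser"]))
```

Passing one list covers any number of components, which is why it replaced the per-component keyword arguments shown in the diff above.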
## Language.pipe {#pipe tag="method"}
Process texts as a stream, and yield `Doc` objects in order. This is usually
@@ -426,19 +414,6 @@ available to the loaded object.
| `exclude` | list | Names of pipeline components or [serialization fields](#serialization-fields) to exclude. |
| **RETURNS** | `Language` | The `Language` object. |
<Infobox title="Changed in v2.0" variant="warning">
Pipeline components to prevent from being loaded can now be added as a list to
`disable` (v2.0) or `exclude` (v2.1), instead of specifying one keyword argument
per component.
```diff
- nlp = English().from_bytes(bytes, tagger=False, entity=False)
+ nlp = English().from_bytes(bytes, exclude=["tagger", "ner"])
```
</Infobox>
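During deserialization, the `exclude` list works the same way: fields named in it are simply not restored. An illustrative sketch of that semantics (the dict-based "bytes" format and function are assumptions for the example, not spaCy's actual serialization code):

```python
import json

def from_bytes(data, exclude=()):
    # Restore only the fields that are not listed in `exclude`.
    state = json.loads(data)
    return {key: value for key, value in state.items()
            if key not in set(exclude)}

data = json.dumps({"tagger": "...", "parser": "...", "ner": "..."}).encode()
print(from_bytes(data, exclude=["tagger", "ner"]))
```

One `exclude` list scales to any set of fields, unlike the deprecated one-keyword-per-component style shown in the diff.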
## Attributes {#attributes}
| Name | Type | Description |

@@ -5,19 +5,6 @@ tag: class
source: spacy/matcher/matcher.pyx
---
<Infobox title="Changed in v2.0" variant="warning">
As of spaCy 2.0, `Matcher.add_pattern` and `Matcher.add_entity` are deprecated
and have been replaced with a simpler [`Matcher.add`](/api/matcher#add) that
lets you add a list of patterns and a callback for a given match ID.
`Matcher.get_entity` is now called [`matcher.get`](/api/matcher#get).
`Matcher.load` (not useful, as it didn't allow specifying callbacks), and
`Matcher.has_entity` (now redundant) have been removed. The concept of "acceptor
functions" has also been retired; this logic can now be handled in the callback
functions.
</Infobox>
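The `Matcher.add` signature described above registers patterns plus an optional on-match callback under a match ID. A toy pure-Python illustration of that API shape (substring matching stands in for spaCy's token-pattern engine; all names here are assumptions for the sketch):

```python
# Toy sketch of the add(key, on_match, *patterns) idea: patterns and an
# optional callback are stored under a match ID, and the callback fires
# for each match found.
class MiniMatcher:
    def __init__(self):
        self.patterns = {}  # match ID -> (callback, list of phrases)

    def add(self, key, on_match, *phrases):
        self.patterns[key] = (on_match, list(phrases))

    def get(self, key):
        return self.patterns.get(key)

    def __call__(self, text):
        matches = []
        for key, (on_match, phrases) in self.patterns.items():
            for phrase in phrases:
                start = text.find(phrase)
                if start != -1:
                    matches.append((key, start, start + len(phrase)))
                    if on_match is not None:
                        on_match(self, key, matches)
        return matches

hits = []
matcher = MiniMatcher()
matcher.add("GREETING", lambda m, key, matches: hits.append(key), "hello")
matcher("hello world")
print(hits)
```

Because the callback travels with the patterns, no separate "acceptor function" mechanism is needed, which mirrors the rationale in the note above.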
## Matcher.\_\_init\_\_ {#init tag="method"}
Create the rule-based `Matcher`. If `validate=True` is set, all patterns added

@@ -38,18 +38,10 @@ be shown.
| Name | Type | Description |
| --------------------------------------- | --------------- | ------------------------------------------------------------------------------------------- |
| `vocab` | `Vocab` | The vocabulary object, which must be shared with the documents the matcher will operate on. |
| `max_length` | int | Deprecated argument - the `PhraseMatcher` does not have a phrase length limit anymore. |
| `attr` <Tag variant="new">2.1</Tag> | int / str | The token attribute to match on. Defaults to `ORTH`, i.e. the verbatim token text. |
| `validate` <Tag variant="new">2.1</Tag> | bool | Validate patterns added to the matcher. |
| **RETURNS** | `PhraseMatcher` | The newly constructed object. |
<Infobox title="Changed in v2.1" variant="warning">
As of v2.1, the `PhraseMatcher` doesn't have a phrase length limit anymore, so
the `max_length` argument is now deprecated.
</Infobox>
## PhraseMatcher.\_\_call\_\_ {#call tag="method"}
Find all token sequences matching the supplied patterns on the `Doc`.

@@ -48,21 +48,6 @@ for name in pipeline: component = nlp.create_pipe(name) # create each pipelin
nlp.from_disk(model_data_path) # load in model data
```
<Infobox title="Changed in v2.0" variant="warning">
As of spaCy 2.0, the `path` keyword argument is deprecated. spaCy will also
raise an error if no model could be loaded and never just return an empty
`Language` object. If you need a blank language, you can use the new function
[`spacy.blank()`](/api/top-level#spacy.blank) or import the class explicitly,
e.g. `from spacy.lang.en import English`.
```diff
- nlp = spacy.load("en", path="/model")
+ nlp = spacy.load("/model")
```
</Infobox>
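The contract described above, where loading fails loudly and a blank pipeline must be requested explicitly, can be sketched in a few lines (the `Language` class and registry dict are assumptions for illustration, not spaCy internals):

```python
# Sketch of the documented contract: load() raises if nothing can be
# loaded, and blank() is the explicit way to get an empty pipeline.
REGISTRY = {"en_core_web_sm": {"lang": "en", "components": ["tagger", "parser"]}}

class Language:
    def __init__(self, lang, components=None):
        self.lang = lang
        self.components = components or []

def load(name):
    if name not in REGISTRY:
        # Never silently return an empty Language object.
        raise OSError(f"Can't find model '{name}'")
    meta = REGISTRY[name]
    return Language(meta["lang"], meta["components"])

def blank(lang):
    # Empty pipeline, created deliberately rather than by accident.
    return Language(lang)

print(load("en_core_web_sm").components)
print(blank("en").components)
```

Keeping the two paths separate means a typo in a model name surfaces as an error instead of an empty pipeline that silently produces no annotations.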
### spacy.blank {#spacy.blank tag="function" new="2"}
Create a blank model of a given language class. This function is the twin of