diff --git a/website/docs/api/cli.md b/website/docs/api/cli.md
index 672142f31..b26e034ce 100644
--- a/website/docs/api/cli.md
+++ b/website/docs/api/cli.md
@@ -309,15 +309,6 @@ out to the directory. Accuracy scores and model details will be added to a
[`meta.json`](/usage/training#models-generating) to allow packaging the model
using the [`package`](/api/cli#package) command.
-
-
-As of spaCy 2.1, the `--no-tagger`, `--no-parser` and `--no-entities` flags have
-been replaced by a `--pipeline` option, which lets you define comma-separated
-names of pipeline components to train. For example, `--pipeline tagger,parser`
-will only train the tagger and parser.
-
-
-
```bash
$ python -m spacy train [lang] [output_path] [train_path] [dev_path]
[--base-model] [--pipeline] [--vectors] [--n-iter] [--n-early-stopping]
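
Context for the note removed above: `--pipeline` takes a comma-separated list of component names (e.g. `--pipeline tagger,parser` trains only those two). A minimal Python sketch of how such a value maps to a component list — the helper name and default set are hypothetical, not spaCy's CLI code:

```python
def parse_pipeline(value, default=("tagger", "parser", "ner")):
    """Split a comma-separated --pipeline value into component names.

    An empty or missing value falls back to the default components,
    mirroring how the flag restricts which components get trained.
    """
    if not value:
        return list(default)
    return [name.strip() for name in value.split(",") if name.strip()]
```

For example, `parse_pipeline("tagger,parser")` yields `["tagger", "parser"]`.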
diff --git a/website/docs/api/language.md b/website/docs/api/language.md
index 792f2217d..9413ef486 100644
--- a/website/docs/api/language.md
+++ b/website/docs/api/language.md
@@ -55,18 +55,6 @@ contain arbitrary whitespace. Alignment into the original string is preserved.
| `disable` | list | Names of pipeline components to [disable](/usage/processing-pipelines#disabling). |
| **RETURNS** | `Doc` | A container for accessing the annotations. |
-
-
-Pipeline components to prevent from being loaded can now be added as a list to
-`disable`, instead of specifying one keyword argument per component.
-
-```diff
-- doc = nlp("I don't want parsed", parse=False)
-+ doc = nlp("I don't want parsed", disable=["parser"])
-```
-
-
-
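
The removed note above describes passing `disable` as a list of component names. A toy stand-in (not spaCy's implementation — components here are plain functions annotating a dict) showing how such a list skips components during a call:

```python
class MiniPipeline:
    """Toy illustration of nlp(text, disable=[...]); not spaCy's code."""

    def __init__(self):
        # Ordered (name, component) pairs; each annotates the "doc" dict.
        self.components = [
            ("tagger", lambda doc: doc.setdefault("tags", True)),
            ("parser", lambda doc: doc.setdefault("parse", True)),
        ]

    def __call__(self, text, disable=()):
        doc = {"text": text}
        for name, component in self.components:
            if name not in disable:
                component(doc)
        return doc
```

Calling `MiniPipeline()("I don't want parsed", disable=["parser"])` produces a doc with tagger output but no parse, matching the intent of the `disable` keyword.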
## Language.pipe {#pipe tag="method"}
Process texts as a stream, and yield `Doc` objects in order. This is usually
@@ -426,19 +414,6 @@ available to the loaded object.
| `exclude` | list | Names of pipeline components or [serialization fields](#serialization-fields) to exclude. |
| **RETURNS** | `Language` | The `Language` object. |
-
-
-Pipeline components to prevent from being loaded can now be added as a list to
-`disable` (v2.0) or `exclude` (v2.1), instead of specifying one keyword argument
-per component.
-
-```diff
-- nlp = English().from_bytes(bytes, tagger=False, entity=False)
-+ nlp = English().from_bytes(bytes, exclude=["tagger", "ner"])
-```
-
-
-
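
Same idea for deserialization: the removed note shows `exclude` replacing one-keyword-per-component arguments. A toy sketch (not spaCy's deserializer — `serialized` is just a dict of component name to payload) of skipping excluded names on restore:

```python
def from_bytes_sketch(serialized, exclude=()):
    """Toy illustration of the exclude list in Language.from_bytes.

    Components whose names appear in `exclude` are simply not restored.
    """
    restored = {}
    for name, payload in serialized.items():
        if name in exclude:
            continue
        restored[name] = payload
    return restored
```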
## Attributes {#attributes}
| Name | Type | Description |
diff --git a/website/docs/api/matcher.md b/website/docs/api/matcher.md
index 5244244b1..8210f7094 100644
--- a/website/docs/api/matcher.md
+++ b/website/docs/api/matcher.md
@@ -5,19 +5,6 @@ tag: class
source: spacy/matcher/matcher.pyx
---
-
-
-As of spaCy 2.0, `Matcher.add_pattern` and `Matcher.add_entity` are deprecated
-and have been replaced with a simpler [`Matcher.add`](/api/matcher#add) that
-lets you add a list of patterns and a callback for a given match ID.
-`Matcher.get_entity` is now called [`matcher.get`](/api/matcher#get).
-`Matcher.load` (not useful, as it didn't allow specifying callbacks), and
-`Matcher.has_entity` (now redundant) have been removed. The concept of "acceptor
-functions" has also been retired – this logic can now be handled in the callback
-functions.
-
-
-
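
The removed note above describes the consolidated `Matcher.add` (patterns plus a callback per match ID) and `matcher.get`. A toy illustration of that shape — not spaCy's implementation: patterns here are plain lowercase token tuples and "matching" is a naive scan over a token list:

```python
class MiniMatcher:
    """Toy sketch of the add-patterns-plus-callback design."""

    def __init__(self):
        self._patterns = {}  # match ID -> (callback, list of patterns)

    def add(self, key, patterns, on_match=None):
        # One call registers all patterns and the callback for this ID,
        # replacing separate add_pattern/add_entity-style steps.
        self._patterns[key] = (on_match, patterns)

    def get(self, key):
        return self._patterns[key]

    def __call__(self, tokens):
        matches = []
        for key, (on_match, patterns) in self._patterns.items():
            for pattern in patterns:
                n = len(pattern)
                for i in range(len(tokens) - n + 1):
                    if tuple(t.lower() for t in tokens[i:i + n]) == tuple(pattern):
                        matches.append((key, i, i + n))
                        # Acceptor-style logic now lives in the callback.
                        if on_match is not None:
                            on_match(self, tokens, key, i, i + n)
        return matches
```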
## Matcher.\_\_init\_\_ {#init tag="method"}
Create the rule-based `Matcher`. If `validate=True` is set, all patterns added
diff --git a/website/docs/api/phrasematcher.md b/website/docs/api/phrasematcher.md
index 6e793a7b9..f02d81de9 100644
--- a/website/docs/api/phrasematcher.md
+++ b/website/docs/api/phrasematcher.md
@@ -38,18 +38,10 @@ be shown.
| Name | Type | Description |
| --------------------------------------- | --------------- | ------------------------------------------------------------------------------------------- |
| `vocab` | `Vocab` | The vocabulary object, which must be shared with the documents the matcher will operate on. |
-| `max_length` | int | Deprecated argument - the `PhraseMatcher` does not have a phrase length limit anymore. |
| `attr` 2.1 | int / str | The token attribute to match on. Defaults to `ORTH`, i.e. the verbatim token text. |
| `validate` 2.1 | bool | Validate patterns added to the matcher. |
| **RETURNS** | `PhraseMatcher` | The newly constructed object. |
-
-
-As of v2.1, the `PhraseMatcher` doesn't have a phrase length limit anymore, so
-the `max_length` argument is now deprecated.
-
-
-
## PhraseMatcher.\_\_call\_\_ {#call tag="method"}
Find all token sequences matching the supplied patterns on the `Doc`.
diff --git a/website/docs/api/top-level.md b/website/docs/api/top-level.md
index bd6c30d0f..6ee324af9 100644
--- a/website/docs/api/top-level.md
+++ b/website/docs/api/top-level.md
@@ -48,21 +48,6 @@ for name in pipeline: component = nlp.create_pipe(name) # create each pipelin
nlp.from_disk(model_data_path) # load in model data
```
-
-
-As of spaCy 2.0, the `path` keyword argument is deprecated. spaCy will also
-raise an error if no model could be loaded and never just return an empty
-`Language` object. If you need a blank language, you can use the new function
-[`spacy.blank()`](/api/top-level#spacy.blank) or import the class explicitly,
-e.g. `from spacy.lang.en import English`.
-
-```diff
-- nlp = spacy.load("en", path="/model")
-+ nlp = spacy.load("/model")
-```
-
-
-
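
The removed note above states the loading policy: raise if no model could be loaded, never silently return an empty `Language` object (a blank pipeline must be requested explicitly, e.g. via `spacy.blank()`). A sketch of that policy with a hypothetical helper — not spaCy's loader, and the returned dict merely stands in for the loaded object:

```python
from pathlib import Path

def load_model(model_path):
    """Raise if model data is missing instead of returning a blank object."""
    path = Path(model_path)
    if not (path / "meta.json").exists():
        raise IOError(f"Can't find model data in {str(model_path)!r}")
    return {"path": str(path)}  # stand-in for the loaded Language object
```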
### spacy.blank {#spacy.blank tag="function" new="2"}
Create a blank model of a given language class. This function is the twin of