From d7469283c5bcce87c7cdc088a161e909d5e87f19 Mon Sep 17 00:00:00 2001 From: Ines Montani Date: Tue, 29 Sep 2020 16:59:21 +0200 Subject: [PATCH] Update docs [ci skip] --- website/docs/api/data-formats.md | 35 +++++++++++++++++++++++--- website/docs/api/dependencyparser.md | 20 +++++++-------- website/docs/api/entitylinker.md | 22 ++++++++-------- website/docs/api/entityrecognizer.md | 20 +++++++-------- website/docs/api/language.md | 19 +++++++++----- website/docs/api/morphologizer.md | 21 +++++++--------- website/docs/api/pipe.md | 20 +++++++-------- website/docs/api/sentencerecognizer.md | 20 +++++++-------- website/docs/api/tagger.md | 20 +++++++-------- website/docs/api/textcategorizer.md | 20 +++++++-------- website/docs/api/tok2vec.md | 9 +++---- website/docs/api/transformer.md | 9 +++---- 12 files changed, 126 insertions(+), 109 deletions(-) diff --git a/website/docs/api/data-formats.md b/website/docs/api/data-formats.md index 6ff3bfd0d..0d2c78598 100644 --- a/website/docs/api/data-formats.md +++ b/website/docs/api/data-formats.md @@ -190,8 +190,6 @@ process that are used when you run [`spacy train`](/api/cli#train). | `eval_frequency` | How often to evaluate during training (steps). Defaults to `200`. ~~int~~ | | `frozen_components` | Pipeline component names that are "frozen" and shouldn't be updated during training. See [here](/usage/training#config-components) for details. Defaults to `[]`. ~~List[str]~~ | | `gpu_allocator` | Library for cupy to route GPU memory allocation to. Can be `"pytorch"` or `"tensorflow"`. Defaults to variable `${system.gpu_allocator}`. ~~str~~ | -| `init_tok2vec` | Optional path to pretrained tok2vec weights created with [`spacy pretrain`](/api/cli#pretrain). Defaults to variable `${paths.init_tok2vec}`. ~~Optional[str]~~ | -| `lookups` | Additional lexeme and vocab data from [`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data). Defaults to `null`. ~~Optional[Lookups]~~ | | `max_epochs` | Maximum number of epochs to train for. Defaults to `0`. ~~int~~ | | `max_steps` | Maximum number of update steps to train for. Defaults to `20000`. ~~int~~ | | `optimizer` | The optimizer. The learning rate schedule and other settings can be configured as part of the optimizer. Defaults to [`Adam`](https://thinc.ai/docs/api-optimizers#adam). ~~Optimizer~~ | @@ -200,7 +198,6 @@ process that are used when you run [`spacy train`](/api/cli#train). | `score_weights` | Score names shown in metrics mapped to their weight towards the final weighted score. See [here](/usage/training#metrics) for details. Defaults to `{}`. ~~Dict[str, float]~~ | | `seed` | The random seed. Defaults to variable `${system.seed}`. ~~int~~ | | `train_corpus` | Dot notation of the config location defining the train corpus. Defaults to `corpora.train`. ~~str~~ | -| `vectors` | Name or path of pipeline containing pretrained word vectors to use, e.g. created with [`init vocab`](/api/cli#init-vocab). Defaults to `null`. ~~Optional[str]~~ | ### pretraining {#config-pretraining tag="section,optional"} @@ -220,6 +217,38 @@ used when you run [`spacy pretrain`](/api/cli#pretrain). | `component` | Component to find the layer to pretrain. Defaults to `"tok2vec"`. ~~str~~ | | `layer` | The layer to pretrain. If empty, the whole component model will be used. ~~str~~ | +### initialize {#config-initialize tag="section"} + +This config block lets you define resources for **initializing the pipeline**. 
+It's used by [`Language.initialize`](/api/language#initialize) and typically
+called right before training (but not at runtime). The section allows you to
+specify local file paths or custom functions to load data resources from,
+without requiring them at runtime when you load the trained pipeline back in.
+
+> #### Example
+>
+> ```ini
+> [initialize]
+> vectors = "/path/to/vectors_nlp"
+> init_tok2vec = "/path/to/pretrain.bin"
+>
+> [initialize.components]
+>
+> [initialize.components.my_component]
+> data_path = "/path/to/component_data"
+> ```
+
+
+
+| Name | Description |
+| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| `components` | Additional arguments passed to the `initialize` method of a pipeline component, keyed by component name. If type annotations are available on the method, the config will be validated against them. The `initialize` methods will always receive the `get_examples` callback and the current `nlp` object. ~~Dict[str, Dict[str, Any]]~~ |
+| `init_tok2vec` | Optional path to pretrained tok2vec weights created with [`spacy pretrain`](/api/cli#pretrain). Defaults to variable `${paths.init_tok2vec}`. ~~Optional[str]~~ |
+| `lookups` | Additional lexeme and vocab data from [`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data). Defaults to `null`. ~~Optional[Lookups]~~ |
+| `tokenizer` | Additional arguments passed to the `initialize` method of the specified tokenizer. Can be used for languages like Chinese that depend on dictionaries or trained models for tokenization. If type annotations are available on the method, the config will be validated against them. The `initialize` method will always receive the `get_examples` callback and the current `nlp` object. ~~Dict[str, Any]~~ |
+| `vectors` | Name or path of pipeline containing pretrained word vectors to use, e.g. created with [`init vocab`](/api/cli#init-vocab). Defaults to `null`. ~~Optional[str]~~ |
+| `vocab_data` | Path to JSONL-formatted [vocabulary file](/api/data-formats#vocab-jsonl) to initialize vocabulary. ~~Optional[str]~~ |
+
 ## Training data {#training}
 
 ### Binary training format {#binary-training new="3"}
diff --git a/website/docs/api/dependencyparser.md b/website/docs/api/dependencyparser.md
index c7c41f2a1..7c56ce84e 100644
--- a/website/docs/api/dependencyparser.md
+++ b/website/docs/api/dependencyparser.md
@@ -142,14 +142,14 @@ applied to the `Doc` in order. Both [`__call__`](/api/dependencyparser#call) and
 
 ## DependencyParser.initialize {#initialize tag="method"}
 
-Initialize the component for training and return an
-[`Optimizer`](https://thinc.ai/docs/api-optimizers). `get_examples` should be a
-function that returns an iterable of [`Example`](/api/example) objects. The data
-examples are used to **initialize the model** of the component and can either be
-the full training data or a representative sample. Initialization includes
-validating the network,
+Initialize the component for training. `get_examples` should be a function that
+returns an iterable of [`Example`](/api/example) objects. 
The data examples are +used to **initialize the model** of the component and can either be the full +training data or a representative sample. Initialization includes validating the +network, [inferring missing shapes](https://thinc.ai/docs/usage-models#validation) and -setting up the label scheme based on the data. +setting up the label scheme based on the data. This method is typically called +by [`Language.initialize`](/api/language#initialize). @@ -161,16 +161,14 @@ This method was previously called `begin_training`. > > ```python > parser = nlp.add_pipe("parser") -> optimizer = parser.initialize(lambda: [], pipeline=nlp.pipeline) +> parser.initialize(lambda: [], nlp=nlp) > ``` | Name | Description | | -------------- | ------------------------------------------------------------------------------------------------------------------------------------- | | `get_examples` | Function that returns gold-standard annotations in the form of [`Example`](/api/example) objects. ~~Callable[[], Iterable[Example]]~~ | | _keyword-only_ | | -| `pipeline` | Optional list of pipeline components that this component is part of. ~~Optional[List[Tuple[str, Callable[[Doc], Doc]]]]~~ | -| `sgd` | An optimizer. Will be created via [`create_optimizer`](#create_optimizer) if not set. ~~Optional[Optimizer]~~ | -| **RETURNS** | The optimizer. ~~Optimizer~~ | +| `nlp` | The current `nlp` object. Defaults to `None`. ~~Optional[Language]~~ | ## DependencyParser.predict {#predict tag="method"} diff --git a/website/docs/api/entitylinker.md b/website/docs/api/entitylinker.md index 1dbe78703..b104fb69a 100644 --- a/website/docs/api/entitylinker.md +++ b/website/docs/api/entitylinker.md @@ -141,14 +141,14 @@ applied to the `Doc` in order. Both [`__call__`](/api/entitylinker#call) and ## EntityLinker.initialize {#initialize tag="method"} -Initialize the component for training and return an -[`Optimizer`](https://thinc.ai/docs/api-optimizers). `get_examples` should be a -function that returns an iterable of [`Example`](/api/example) objects. The data -examples are used to **initialize the model** of the component and can either be -the full training data or a representative sample. Initialization includes -validating the network, +Initialize the component for training. `get_examples` should be a function that +returns an iterable of [`Example`](/api/example) objects. The data examples are +used to **initialize the model** of the component and can either be the full +training data or a representative sample. Initialization includes validating the +network, [inferring missing shapes](https://thinc.ai/docs/usage-models#validation) and -setting up the label scheme based on the data. +setting up the label scheme based on the data. This method is typically called +by [`Language.initialize`](/api/language#initialize). @@ -159,17 +159,15 @@ This method was previously called `begin_training`. > #### Example > > ```python -> entity_linker = nlp.add_pipe("entity_linker", last=True) -> optimizer = entity_linker.initialize(lambda: [], pipeline=nlp.pipeline) +> entity_linker = nlp.add_pipe("entity_linker") +> entity_linker.initialize(lambda: [], nlp=nlp) > ``` | Name | Description | | -------------- | ------------------------------------------------------------------------------------------------------------------------------------- | | `get_examples` | Function that returns gold-standard annotations in the form of [`Example`](/api/example) objects. 
~~Callable[[], Iterable[Example]]~~ | | _keyword-only_ | | -| `pipeline` | Optional list of pipeline components that this component is part of. ~~Optional[List[Tuple[str, Callable[[Doc], Doc]]]]~~ | -| `sgd` | An optimizer. Will be created via [`create_optimizer`](#create_optimizer) if not set. ~~Optional[Optimizer]~~ | -| **RETURNS** | The optimizer. ~~Optimizer~~ | +| `nlp` | The current `nlp` object. Defaults to `None`. ~~Optional[Language]~~ | ## EntityLinker.predict {#predict tag="method"} diff --git a/website/docs/api/entityrecognizer.md b/website/docs/api/entityrecognizer.md index 2c32ff753..b930660d9 100644 --- a/website/docs/api/entityrecognizer.md +++ b/website/docs/api/entityrecognizer.md @@ -131,14 +131,14 @@ applied to the `Doc` in order. Both [`__call__`](/api/entityrecognizer#call) and ## EntityRecognizer.initialize {#initialize tag="method"} -Initialize the component for training and return an -[`Optimizer`](https://thinc.ai/docs/api-optimizers). `get_examples` should be a -function that returns an iterable of [`Example`](/api/example) objects. The data -examples are used to **initialize the model** of the component and can either be -the full training data or a representative sample. Initialization includes -validating the network, +Initialize the component for training. `get_examples` should be a function that +returns an iterable of [`Example`](/api/example) objects. The data examples are +used to **initialize the model** of the component and can either be the full +training data or a representative sample. Initialization includes validating the +network, [inferring missing shapes](https://thinc.ai/docs/usage-models#validation) and -setting up the label scheme based on the data. +setting up the label scheme based on the data. This method is typically called +by [`Language.initialize`](/api/language#initialize). @@ -150,16 +150,14 @@ This method was previously called `begin_training`. > > ```python > ner = nlp.add_pipe("ner") -> optimizer = ner.initialize(lambda: [], pipeline=nlp.pipeline) +> ner.initialize(lambda: [], nlp=nlp) > ``` | Name | Description | | -------------- | ------------------------------------------------------------------------------------------------------------------------------------- | | `get_examples` | Function that returns gold-standard annotations in the form of [`Example`](/api/example) objects. ~~Callable[[], Iterable[Example]]~~ | | _keyword-only_ | | -| `pipeline` | Optional list of pipeline components that this component is part of. ~~Optional[List[Tuple[str, Callable[[Doc], Doc]]]]~~ | -| `sgd` | An optimizer. Will be created via [`create_optimizer`](#create_optimizer) if not set. ~~Optional[Optimizer]~~ | -| **RETURNS** | The optimizer. ~~Optimizer~~ | +| `nlp` | The current `nlp` object. Defaults to `None`. ~~Optional[Language]~~ | ## EntityRecognizer.predict {#predict tag="method"} diff --git a/website/docs/api/language.md b/website/docs/api/language.md index 11631502c..8dbb0d821 100644 --- a/website/docs/api/language.md +++ b/website/docs/api/language.md @@ -204,12 +204,19 @@ more efficient than processing texts one-by-one. ## Language.initialize {#initialize tag="method"} Initialize the pipeline for training and return an -[`Optimizer`](https://thinc.ai/docs/api-optimizers). `get_examples` should be a -function that returns an iterable of [`Example`](/api/example) objects. The data -examples can either be the full training data or a representative sample. 
They
-are used to **initialize the models** of trainable pipeline components and are
-passed each component's [`initialize`](/api/pipe#initialize) method, if
-available. Initialization includes validating the network,
+[`Optimizer`](https://thinc.ai/docs/api-optimizers). Under the hood, it uses the
+settings defined in the [`[initialize]`](/api/data-formats#config-initialize)
+config block to set up the vocabulary, load in vectors and tok2vec weights, and
+pass optional arguments to the `initialize` methods implemented by pipeline
+components or the tokenizer. This method is typically called automatically when
+you run [`spacy train`](/api/cli#train).
+
+`get_examples` should be a function that returns an iterable of
+[`Example`](/api/example) objects. The data examples can either be the full
+training data or a representative sample. They are used to **initialize the
+models** of trainable pipeline components and are passed to each component's
+[`initialize`](/api/pipe#initialize) method, if available. Initialization
+includes validating the network,
 [inferring missing shapes](/usage/layers-architectures#thinc-shape-inference)
 and setting up the label scheme based on the data.
 
diff --git a/website/docs/api/morphologizer.md b/website/docs/api/morphologizer.md
index 4f00a09ef..68e096ab7 100644
--- a/website/docs/api/morphologizer.md
+++ b/website/docs/api/morphologizer.md
@@ -119,30 +119,27 @@ applied to the `Doc` in order. Both [`__call__`](/api/morphologizer#call) and
 
 ## Morphologizer.initialize {#initialize tag="method"}
 
-Initialize the component for training and return an
-[`Optimizer`](https://thinc.ai/docs/api-optimizers). `get_examples` should be a
-function that returns an iterable of [`Example`](/api/example) objects. The data
-examples are used to **initialize the model** of the component and can either be
-the full training data or a representative sample. Initialization includes
-validating the network,
+Initialize the component for training. `get_examples` should be a function that
+returns an iterable of [`Example`](/api/example) objects. The data examples are
+used to **initialize the model** of the component and can either be the full
+training data or a representative sample. Initialization includes validating the
+network,
 [inferring missing shapes](https://thinc.ai/docs/usage-models#validation) and
-setting up the label scheme based on the data.
+setting up the label scheme based on the data. This method is typically called
+by [`Language.initialize`](/api/language#initialize).
 
 > #### Example
 >
 > ```python
 > morphologizer = nlp.add_pipe("morphologizer")
-> nlp.pipeline.append(morphologizer)
-> optimizer = morphologizer.initialize(lambda: [], pipeline=nlp.pipeline)
+> morphologizer.initialize(lambda: [], nlp=nlp)
 > ```
 
 | Name | Description |
 | -------------- | --------------------------------------------------------------------------------------------------------------------------------------- |
 | `get_examples` | Function that returns gold-standard annotations in the form of [`Example`](/api/example) objects. ~~Callable[[], Iterable[Example]]~~ |
 | _keyword-only_ | |
-| `pipeline` | Optional list of pipeline components that this component is part of. ~~Optional[List[Tuple[str, Callable[[Doc], Doc]]]]~~ |
-| `sgd` | An optimizer. Will be created via [`create_optimizer`](#create_optimizer) if not set. ~~Optional[Optimizer]~~ |
-| **RETURNS** | The optimizer. ~~Optimizer~~ |
+| `nlp` | The current `nlp` object. Defaults to `None`. 
~~Optional[Language]~~ |
 
 ## Morphologizer.predict {#predict tag="method"}
 
diff --git a/website/docs/api/pipe.md b/website/docs/api/pipe.md
index 17752ed5e..385ad7ec9 100644
--- a/website/docs/api/pipe.md
+++ b/website/docs/api/pipe.md
@@ -100,14 +100,14 @@ applied to the `Doc` in order. Both [`__call__`](/api/pipe#call) and
 
 ## Pipe.initialize {#initialize tag="method"}
 
-Initialize the component for training and return an
-[`Optimizer`](https://thinc.ai/docs/api-optimizers). `get_examples` should be a
-function that returns an iterable of [`Example`](/api/example) objects. The data
-examples are used to **initialize the model** of the component and can either be
-the full training data or a representative sample. Initialization includes
-validating the network,
+Initialize the component for training. `get_examples` should be a function that
+returns an iterable of [`Example`](/api/example) objects. The data examples are
+used to **initialize the model** of the component and can either be the full
+training data or a representative sample. Initialization includes validating the
+network,
 [inferring missing shapes](https://thinc.ai/docs/usage-models#validation) and
-setting up the label scheme based on the data.
+setting up the label scheme based on the data. This method is typically called
+by [`Language.initialize`](/api/language#initialize).
 
 
 
@@ -119,16 +119,14 @@ This method was previously called `begin_training`.
 
 > #### Example
 >
 > ```python
 > pipe = nlp.add_pipe("your_custom_pipe")
-> optimizer = pipe.initialize(lambda: [], pipeline=nlp.pipeline)
+> pipe.initialize(lambda: [], nlp=nlp)
 > ```
 
 | Name | Description |
 | -------------- | --------------------------------------------------------------------------------------------------------------------------------------- |
 | `get_examples` | Function that returns gold-standard annotations in the form of [`Example`](/api/example) objects. ~~Callable[[], Iterable[Example]]~~ |
 | _keyword-only_ | |
-| `pipeline` | Optional list of pipeline components that this component is part of. ~~Optional[List[Tuple[str, Callable[[Doc], Doc]]]]~~ |
-| `sgd` | An optimizer. Will be created via [`create_optimizer`](#create_optimizer) if not set. ~~Optional[Optimizer]~~ |
-| **RETURNS** | The optimizer. ~~Optimizer~~ |
+| `nlp` | The current `nlp` object. Defaults to `None`. ~~Optional[Language]~~ |
 
 ## Pipe.predict {#predict tag="method"}
 
diff --git a/website/docs/api/sentencerecognizer.md b/website/docs/api/sentencerecognizer.md
index d81725343..ac7008465 100644
--- a/website/docs/api/sentencerecognizer.md
+++ b/website/docs/api/sentencerecognizer.md
@@ -116,29 +116,27 @@ and [`pipe`](/api/sentencerecognizer#pipe) delegate to the
 
 ## SentenceRecognizer.initialize {#initialize tag="method"}
 
-Initialize the component for training and return an
-[`Optimizer`](https://thinc.ai/docs/api-optimizers). `get_examples` should be a
-function that returns an iterable of [`Example`](/api/example) objects. The data
-examples are used to **initialize the model** of the component and can either be
-the full training data or a representative sample. Initialization includes
-validating the network,
+Initialize the component for training. `get_examples` should be a function that
+returns an iterable of [`Example`](/api/example) objects. The data examples are
+used to **initialize the model** of the component and can either be the full
+training data or a representative sample. 
Initialization includes validating the +network, [inferring missing shapes](https://thinc.ai/docs/usage-models#validation) and -setting up the label scheme based on the data. +setting up the label scheme based on the data. This method is typically called +by [`Language.initialize`](/api/language#initialize). > #### Example > > ```python > senter = nlp.add_pipe("senter") -> optimizer = senter.initialize(lambda: [], pipeline=nlp.pipeline) +> senter.initialize(lambda: [], nlp=nlp) > ``` | Name | Description | | -------------- | ------------------------------------------------------------------------------------------------------------------------------------- | | `get_examples` | Function that returns gold-standard annotations in the form of [`Example`](/api/example) objects. ~~Callable[[], Iterable[Example]]~~ | | _keyword-only_ | | -| `pipeline` | Optional list of pipeline components that this component is part of. ~~Optional[List[Tuple[str, Callable[[Doc], Doc]]]]~~ | -| `sgd` | An optimizer. Will be created via [`create_optimizer`](#create_optimizer) if not set. ~~Optional[Optimizer]~~ | -| **RETURNS** | The optimizer. ~~Optimizer~~ | +| `nlp` | The current `nlp` object. Defaults to `None`. ~~Optional[Language]~~ | ## SentenceRecognizer.predict {#predict tag="method"} diff --git a/website/docs/api/tagger.md b/website/docs/api/tagger.md index 6ca554f49..ff9763e61 100644 --- a/website/docs/api/tagger.md +++ b/website/docs/api/tagger.md @@ -114,14 +114,14 @@ applied to the `Doc` in order. Both [`__call__`](/api/tagger#call) and ## Tagger.initialize {#initialize tag="method"} -Initialize the component for training and return an -[`Optimizer`](https://thinc.ai/docs/api-optimizers). `get_examples` should be a -function that returns an iterable of [`Example`](/api/example) objects. The data -examples are used to **initialize the model** of the component and can either be -the full training data or a representative sample. Initialization includes -validating the network, +Initialize the component for training. `get_examples` should be a function that +returns an iterable of [`Example`](/api/example) objects. The data examples are +used to **initialize the model** of the component and can either be the full +training data or a representative sample. Initialization includes validating the +network, [inferring missing shapes](https://thinc.ai/docs/usage-models#validation) and -setting up the label scheme based on the data. +setting up the label scheme based on the data. This method is typically called +by [`Language.initialize`](/api/language#initialize). @@ -133,16 +133,14 @@ This method was previously called `begin_training`. > > ```python > tagger = nlp.add_pipe("tagger") -> optimizer = tagger.initialize(lambda: [], pipeline=nlp.pipeline) +> tagger.initialize(lambda: [], nlp=nlp) > ``` | Name | Description | | -------------- | ------------------------------------------------------------------------------------------------------------------------------------- | | `get_examples` | Function that returns gold-standard annotations in the form of [`Example`](/api/example) objects. ~~Callable[[], Iterable[Example]]~~ | | _keyword-only_ | | -| `pipeline` | Optional list of pipeline components that this component is part of. ~~Optional[List[Tuple[str, Callable[[Doc], Doc]]]]~~ | -| `sgd` | An optimizer. Will be created via [`create_optimizer`](#create_optimizer) if not set. ~~Optional[Optimizer]~~ | -| **RETURNS** | The optimizer. ~~Optimizer~~ | +| `nlp` | The current `nlp` object. Defaults to `None`. 
~~Optional[Language]~~ | ## Tagger.predict {#predict tag="method"} diff --git a/website/docs/api/textcategorizer.md b/website/docs/api/textcategorizer.md index 4c99d6984..6db960ea0 100644 --- a/website/docs/api/textcategorizer.md +++ b/website/docs/api/textcategorizer.md @@ -127,14 +127,14 @@ applied to the `Doc` in order. Both [`__call__`](/api/textcategorizer#call) and ## TextCategorizer.initialize {#initialize tag="method"} -Initialize the component for training and return an -[`Optimizer`](https://thinc.ai/docs/api-optimizers). `get_examples` should be a -function that returns an iterable of [`Example`](/api/example) objects. The data -examples are used to **initialize the model** of the component and can either be -the full training data or a representative sample. Initialization includes -validating the network, +Initialize the component for training. `get_examples` should be a function that +returns an iterable of [`Example`](/api/example) objects. The data examples are +used to **initialize the model** of the component and can either be the full +training data or a representative sample. Initialization includes validating the +network, [inferring missing shapes](https://thinc.ai/docs/usage-models#validation) and -setting up the label scheme based on the data. +setting up the label scheme based on the data. This method is typically called +by [`Language.initialize`](/api/language#initialize). @@ -146,16 +146,14 @@ This method was previously called `begin_training`. > > ```python > textcat = nlp.add_pipe("textcat") -> optimizer = textcat.initialize(lambda: [], pipeline=nlp.pipeline) +> textcat.initialize(lambda: [], nlp=nlp) > ``` | Name | Description | | -------------- | ------------------------------------------------------------------------------------------------------------------------------------- | | `get_examples` | Function that returns gold-standard annotations in the form of [`Example`](/api/example) objects. ~~Callable[[], Iterable[Example]]~~ | | _keyword-only_ | | -| `pipeline` | Optional list of pipeline components that this component is part of. ~~Optional[List[Tuple[str, Callable[[Doc], Doc]]]]~~ | -| `sgd` | An optimizer. Will be created via [`create_optimizer`](#create_optimizer) if not set. ~~Optional[Optimizer]~~ | -| **RETURNS** | The optimizer. ~~Optimizer~~ | +| `nlp` | The current `nlp` object. Defaults to `None`. ~~Optional[Language]~~ | ## TextCategorizer.predict {#predict tag="method"} diff --git a/website/docs/api/tok2vec.md b/website/docs/api/tok2vec.md index 8269ad7cf..fa6e6c689 100644 --- a/website/docs/api/tok2vec.md +++ b/website/docs/api/tok2vec.md @@ -132,22 +132,21 @@ examples are used to **initialize the model** of the component and can either be the full training data or a representative sample. Initialization includes validating the network, [inferring missing shapes](https://thinc.ai/docs/usage-models#validation) and -setting up the label scheme based on the data. +setting up the label scheme based on the data. This method is typically called +by [`Language.initialize`](/api/language#initialize). 
> #### Example > > ```python > tok2vec = nlp.add_pipe("tok2vec") -> optimizer = tok2vec.initialize(lambda: [], pipeline=nlp.pipeline) +> tok2vec.initialize(lambda: [], nlp=nlp) > ``` | Name | Description | | -------------- | ------------------------------------------------------------------------------------------------------------------------------------- | | `get_examples` | Function that returns gold-standard annotations in the form of [`Example`](/api/example) objects. ~~Callable[[], Iterable[Example]]~~ | | _keyword-only_ | | -| `pipeline` | Optional list of pipeline components that this component is part of. ~~Optional[List[Tuple[str, Callable[[Doc], Doc]]]]~~ | -| `sgd` | An optimizer. Will be created via [`create_optimizer`](#create_optimizer) if not set. ~~Optional[Optimizer]~~ | -| **RETURNS** | The optimizer. ~~Optimizer~~ | +| `nlp` | The current `nlp` object. Defaults to `None`. ~~Optional[Language]~~ | ## Tok2Vec.predict {#predict tag="method"} diff --git a/website/docs/api/transformer.md b/website/docs/api/transformer.md index 712214fec..938574f2e 100644 --- a/website/docs/api/transformer.md +++ b/website/docs/api/transformer.md @@ -167,22 +167,21 @@ examples are used to **initialize the model** of the component and can either be the full training data or a representative sample. Initialization includes validating the network, [inferring missing shapes](https://thinc.ai/docs/usage-models#validation) and -setting up the label scheme based on the data. +setting up the label scheme based on the data. This method is typically called +by [`Language.initialize`](/api/language#initialize). > #### Example > > ```python > trf = nlp.add_pipe("transformer") -> optimizer = trf.initialize(lambda: [], pipeline=nlp.pipeline) +> trf.initialize(lambda: [], nlp=nlp) > ``` | Name | Description | | -------------- | ------------------------------------------------------------------------------------------------------------------------------------- | | `get_examples` | Function that returns gold-standard annotations in the form of [`Example`](/api/example) objects. ~~Callable[[], Iterable[Example]]~~ | | _keyword-only_ | | -| `pipeline` | Optional list of pipeline components that this component is part of. ~~Optional[List[Tuple[str, Callable[[Doc], Doc]]]]~~ | -| `sgd` | An optimizer. Will be created via [`create_optimizer`](#create_optimizer) if not set. ~~Optional[Optimizer]~~ | -| **RETURNS** | The optimizer. ~~Optimizer~~ | +| `nlp` | The current `nlp` object. Defaults to `None`. ~~Optional[Language]~~ | ## Transformer.predict {#predict tag="method"}
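
Taken together, the changes documented above replace the old `begin_training`-style calls (keyword-only `pipeline`/`sgd` arguments, `Optimizer` return value) with a component-level `initialize(get_examples, *, nlp)` method that is normally driven by `Language.initialize` and the `[initialize]` config block. The following is a rough illustrative sketch of that usage, not part of the patch itself: the blank English pipeline, the toy `Example` data, and the tag strings are placeholder assumptions for demonstration only.

```python
# Illustrative sketch of the initialize() API described in the docs above.
# The blank "en" pipeline and the single toy example are placeholders.
import spacy
from spacy.training import Example

nlp = spacy.blank("en")
tagger = nlp.add_pipe("tagger")

def get_examples():
    # In real training this yields gold-standard Example objects; a single
    # toy example stands in here so the label scheme can be inferred.
    doc = nlp.make_doc("I like trees")
    return [Example.from_dict(doc, {"tags": ["PRON", "VERB", "NOUN"]})]

# Component-level initialization: keyword-only nlp argument, no Optimizer
# return value, as documented for Tagger.initialize and the other components.
tagger.initialize(get_examples, nlp=nlp)

# Alternatively, initialize the whole pipeline at once: Language.initialize
# reads the [initialize] config block and returns an Optimizer for training.
# optimizer = nlp.initialize(get_examples)
```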