From 985c1495dd3f6d7b097b98d515bae6072ccb8f56 Mon Sep 17 00:00:00 2001
From: shadeMe
Date: Mon, 7 Aug 2023 16:58:50 +0200
Subject: [PATCH] Remove type aliases

---
 website/docs/api/architectures.mdx | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/website/docs/api/architectures.mdx b/website/docs/api/architectures.mdx
index e0f256332..d275ec82d 100644
--- a/website/docs/api/architectures.mdx
+++ b/website/docs/api/architectures.mdx
@@ -513,7 +513,7 @@ Construct an ALBERT transformer model.
 | `type_vocab_size` | Type vocabulary size. ~~int~~ |
 | `mixed_precision` | Use mixed-precision training. ~~bool~~ |
 | `grad_scaler_config` | Configuration passed to the PyTorch gradient scaler. ~~dict~~ |
-| **CREATES** | The model using the architecture ~~Model[TransformerInT, TransformerOutT]~~ |
+| **CREATES** | The model using the architecture ~~Model~~ |
 
 ### spacy-curated-transformers.BertTransformer.v1
 
@@ -538,7 +538,7 @@ Construct a BERT transformer model.
 | `type_vocab_size` | Type vocabulary size. ~~int~~ |
 | `mixed_precision` | Use mixed-precision training. ~~bool~~ |
 | `grad_scaler_config` | Configuration passed to the PyTorch gradient scaler. ~~dict~~ |
-| **CREATES** | The model using the architecture ~~Model[TransformerInT, TransformerOutT]~~ |
+| **CREATES** | The model using the architecture ~~Model~~ |
 
 ### spacy-curated-transformers.CamembertTransformer.v1
 
@@ -563,7 +563,7 @@ Construct a CamemBERT transformer model.
 | `type_vocab_size` | Type vocabulary size. ~~int~~ |
 | `mixed_precision` | Use mixed-precision training. ~~bool~~ |
 | `grad_scaler_config` | Configuration passed to the PyTorch gradient scaler. ~~dict~~ |
-| **CREATES** | The model using the architecture ~~Model[TransformerInT, TransformerOutT]~~ |
+| **CREATES** | The model using the architecture ~~Model~~ |
 
 ### spacy-curated-transformers.RobertaTransformer.v1
 
@@ -588,7 +588,7 @@ Construct a RoBERTa transformer model.
 | `type_vocab_size` | Type vocabulary size. ~~int~~ |
 | `mixed_precision` | Use mixed-precision training. ~~bool~~ |
 | `grad_scaler_config` | Configuration passed to the PyTorch gradient scaler. ~~dict~~ |
-| **CREATES** | The model using the architecture ~~Model[TransformerInT, TransformerOutT]~~ |
+| **CREATES** | The model using the architecture ~~Model~~ |
 
 ### spacy-curated-transformers.XlmrTransformer.v1
 
@@ -613,7 +613,7 @@ Construct a XLM-RoBERTa transformer model.
 | `type_vocab_size` | Type vocabulary size. ~~int~~ |
 | `mixed_precision` | Use mixed-precision training. ~~bool~~ |
 | `grad_scaler_config` | Configuration passed to the PyTorch gradient scaler. ~~dict~~ |
-| **CREATES** | The model using the architecture ~~Model[TransformerInT, TransformerOutT]~~ |
+| **CREATES** | The model using the architecture ~~Model~~ |
 
 ### spacy-curated-transformers.ScalarWeight.v1