mirror of https://github.com/explosion/spaCy.git
synced 2025-04-21 01:21:58 +03:00

Remove type aliases

This commit is contained in:
parent cca478152e
commit 985c1495dd
@@ -513,7 +513,7 @@ Construct an ALBERT transformer model.
 | `type_vocab_size` | Type vocabulary size. ~~int~~ |
 | `mixed_precision` | Use mixed-precision training. ~~bool~~ |
 | `grad_scaler_config` | Configuration passed to the PyTorch gradient scaler. ~~dict~~ |
-| **CREATES** | The model using the architecture ~~Model[TransformerInT, TransformerOutT]~~ |
+| **CREATES** | The model using the architecture ~~Model~~ |

 ### spacy-curated-transformers.BertTransformer.v1

@@ -538,7 +538,7 @@ Construct a BERT transformer model.
 | `type_vocab_size` | Type vocabulary size. ~~int~~ |
 | `mixed_precision` | Use mixed-precision training. ~~bool~~ |
 | `grad_scaler_config` | Configuration passed to the PyTorch gradient scaler. ~~dict~~ |
-| **CREATES** | The model using the architecture ~~Model[TransformerInT, TransformerOutT]~~ |
+| **CREATES** | The model using the architecture ~~Model~~ |

 ### spacy-curated-transformers.CamembertTransformer.v1

@@ -563,7 +563,7 @@ Construct a CamemBERT transformer model.
 | `type_vocab_size` | Type vocabulary size. ~~int~~ |
 | `mixed_precision` | Use mixed-precision training. ~~bool~~ |
 | `grad_scaler_config` | Configuration passed to the PyTorch gradient scaler. ~~dict~~ |
-| **CREATES** | The model using the architecture ~~Model[TransformerInT, TransformerOutT]~~ |
+| **CREATES** | The model using the architecture ~~Model~~ |

 ### spacy-curated-transformers.RobertaTransformer.v1

@@ -588,7 +588,7 @@ Construct a RoBERTa transformer model.
 | `type_vocab_size` | Type vocabulary size. ~~int~~ |
 | `mixed_precision` | Use mixed-precision training. ~~bool~~ |
 | `grad_scaler_config` | Configuration passed to the PyTorch gradient scaler. ~~dict~~ |
-| **CREATES** | The model using the architecture ~~Model[TransformerInT, TransformerOutT]~~ |
+| **CREATES** | The model using the architecture ~~Model~~ |

 ### spacy-curated-transformers.XlmrTransformer.v1

@@ -613,7 +613,7 @@ Construct a XLM-RoBERTa transformer model.
 | `type_vocab_size` | Type vocabulary size. ~~int~~ |
 | `mixed_precision` | Use mixed-precision training. ~~bool~~ |
 | `grad_scaler_config` | Configuration passed to the PyTorch gradient scaler. ~~dict~~ |
-| **CREATES** | The model using the architecture ~~Model[TransformerInT, TransformerOutT]~~ |
+| **CREATES** | The model using the architecture ~~Model~~ |

 ### spacy-curated-transformers.ScalarWeight.v1

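For context, the parameters documented in the tables touched by this diff (`type_vocab_size`, `mixed_precision`, `grad_scaler_config`) are set in the model block of a spaCy pipeline config. A minimal sketch, assuming a `curated_transformer` component factory from spacy-curated-transformers; the architecture string comes from the docs above, while all values and the gradient-scaler key shown are illustrative placeholders, not verified defaults:

```
# Hypothetical config fragment; values are illustrative only.
[components.transformer]
factory = "curated_transformer"

[components.transformer.model]
@architectures = "spacy-curated-transformers.BertTransformer.v1"
# Parameters from the documentation tables in this diff:
type_vocab_size = 2
mixed_precision = true

[components.transformer.model.grad_scaler_config]
# Dict passed through to the PyTorch gradient scaler.
init_scale = 32768
```

With the change in this commit, the `**CREATES**` rows describe the return value simply as `~~Model~~` rather than the parameterized alias `~~Model[TransformerInT, TransformerOutT]~~`; the constructed object is the same either way.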