Remove mentions of Torchscript and quantization
Both are disabled in the initial release of `spacy-curated-transformers`.
This commit is contained in:
parent b48ab353a1
commit a282aec814

@@ -524,7 +524,6 @@ Construct an ALBERT transformer model.
 | `num_hidden_layers` | Number of hidden layers. ~~int~~ |
 | `padding_idx` | Index of the padding meta-token. ~~int~~ |
 | `type_vocab_size` | Type vocabulary size. ~~int~~ |
-| `torchscript` | Set to `True` when loading TorchScript models, `False` otherwise. ~~bool~~ |
 | `mixed_precision` | Use mixed-precision training. ~~bool~~ |
 | `grad_scaler_config` | Configuration passed to the PyTorch gradient scaler. ~~dict~~ |
 | **CREATES** | The model using the architecture ~~Model[TransformerInT, TransformerOutT]~~ |
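
For reference, the `torchscript` row removed here (and the matching rows removed from the BERT, CamemBERT, RoBERTa, and XLM-RoBERTa tables in the hunks below) documents a keyword argument of the transformer architecture's config block. A minimal sketch of how such a block might have looked before this change follows; the registered architecture name and the concrete values are assumptions for illustration, not taken from this diff:

```ini
# Hypothetical sketch only: the architecture name and the values below are
# assumptions for illustration, not taken from this diff.
[components.transformer.model]
@architectures = "spacy-curated-transformers.AlbertTransformer.v1"
num_hidden_layers = 12
padding_idx = 0
type_vocab_size = 2
# `torchscript` is the setting whose documentation this commit removes; it is
# disabled in the initial release of spacy-curated-transformers.
torchscript = false
mixed_precision = false
```

The same sketch applies to the other architectures changed below, with only the architecture entry differing.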

@@ -553,7 +552,6 @@ Construct a BERT transformer model.
 | `num_hidden_layers` | Number of hidden layers. ~~int~~ |
 | `padding_idx` | Index of the padding meta-token. ~~int~~ |
 | `type_vocab_size` | Type vocabulary size. ~~int~~ |
-| `torchscript` | Set to `True` when loading TorchScript models, `False` otherwise. ~~bool~~ |
 | `mixed_precision` | Use mixed-precision training. ~~bool~~ |
 | `grad_scaler_config` | Configuration passed to the PyTorch gradient scaler. ~~dict~~ |
 | **CREATES** | The model using the architecture ~~Model[TransformerInT, TransformerOutT]~~ |

@@ -582,7 +580,6 @@ Construct a CamemBERT transformer model.
 | `num_hidden_layers` | Number of hidden layers. ~~int~~ |
 | `padding_idx` | Index of the padding meta-token. ~~int~~ |
 | `type_vocab_size` | Type vocabulary size. ~~int~~ |
-| `torchscript` | Set to `True` when loading TorchScript models, `False` otherwise. ~~bool~~ |
 | `mixed_precision` | Use mixed-precision training. ~~bool~~ |
 | `grad_scaler_config` | Configuration passed to the PyTorch gradient scaler. ~~dict~~ |
 | **CREATES** | The model using the architecture ~~Model[TransformerInT, TransformerOutT]~~ |

@@ -611,7 +608,6 @@ Construct a RoBERTa transformer model.
 | `num_hidden_layers` | Number of hidden layers. ~~int~~ |
 | `padding_idx` | Index of the padding meta-token. ~~int~~ |
 | `type_vocab_size` | Type vocabulary size. ~~int~~ |
-| `torchscript` | Set to `True` when loading TorchScript models, `False` otherwise. ~~bool~~ |
 | `mixed_precision` | Use mixed-precision training. ~~bool~~ |
 | `grad_scaler_config` | Configuration passed to the PyTorch gradient scaler. ~~dict~~ |
 | **CREATES** | The model using the architecture ~~Model[TransformerInT, TransformerOutT]~~ |

@@ -641,7 +637,6 @@ Construct a XLM-RoBERTa transformer model.
 | `num_hidden_layers` | Number of hidden layers. ~~int~~ |
 | `padding_idx` | Index of the padding meta-token. ~~int~~ |
 | `type_vocab_size` | Type vocabulary size. ~~int~~ |
-| `torchscript` | Set to `True` when loading TorchScript models, `False` otherwise. ~~bool~~ |
 | `mixed_precision` | Use mixed-precision training. ~~bool~~ |
 | `grad_scaler_config` | Configuration passed to the PyTorch gradient scaler. ~~dict~~ |
 | **CREATES** | The model using the architecture ~~Model[TransformerInT, TransformerOutT]~~ |

@@ -1054,18 +1054,6 @@ Token length range: [1, 8]
 ```
 </Accordion>
 
-## quantize {id="quantize",tag="command",version="3.6"}
-
-Quantize a curated transformers model to reduce its size.
-
-| Name | Description |
-|----------------|---------------------------------------------------------------------|
-| `model_path` | Model to quantize. ~~Path (positional)~~ |
-| `output_path` | Output directory to store quantized model in. ~~Path (positional)~~ |
-| `max_mse_loss` | Maximum MSE loss of quantized parameters. ~~float (option)~~ |
-| `skip_embeds` | Do not quantize embeddings. ~~bool (option)~~ |
-| `skip_linear` | Do not quantize linear layers. ~~bool (option)~~ |
-
 ## train {id="train",tag="command"}
 
 Train a pipeline. Expects data in spaCy's
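
The hunk above removes the documentation for the `quantize` command, which is likewise disabled in the initial release. Purely as an illustration of the parameters listed in the removed table, an invocation might have looked like the sketch below; the entry point and option spellings are assumptions, not taken from this diff:

```bash
# Hypothetical sketch only: the entry point and option spellings are assumptions
# derived from the removed parameter table; the command is disabled in the
# initial release of spacy-curated-transformers.
python -m spacy quantize ./my-trf-model ./my-trf-model-quantized \
    --max-mse-loss 0.0001 --skip-embeds --skip-linear
```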

@@ -25,7 +25,7 @@ work out-of-the-box.
 
 </Infobox>
 
-This Python package provides a curated set of transformer models for spaCy. It is focused on deep integration into spaCy and will support deployment-focused features such as distillation and quantization. Curated transformers currently supports the following model types:
+This Python package provides a curated set of transformer models for spaCy. It is focused on deep integration into spaCy and will support deployment-focused features such as distillation and quantization in the future. spaCy curated transformers currently supports the following model types:
 
 * ALBERT
 * BERT