Currently, these models are provided as part of the core library:

| Model                         | Provider  | Supported names                                                                                              | Default name           | Default config      |
| ----------------------------- | --------- | ------------------------------------------------------------------------------------------------------------ | ---------------------- | ------------------- |
| `spacy.GPT-4.v1`              | OpenAI    | `["gpt-4", "gpt-4-0314", "gpt-4-32k", "gpt-4-32k-0314"]`                                                      | `"gpt-4"`              | `{}`                |
| `spacy.GPT-4.v2`              | OpenAI    | `["gpt-4", "gpt-4-0314", "gpt-4-32k", "gpt-4-32k-0314"]`                                                      | `"gpt-4"`              | `{temperature=0.0}` |
| `spacy.GPT-4.v3`              | OpenAI    | All names of [GPT-4 models](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) offered by OpenAI | `"gpt-4"`              | `{temperature=0.0}` |
| `spacy.Claude-instant-1.v1`   | Anthropic | `["claude-instant-1", "claude-instant-1-100k"]`                                                               | `"claude-instant-1"`   | `{}`                |
| `spacy.Claude-instant-1-1.v1` | Anthropic | `["claude-instant-1.1", "claude-instant-1.1-100k"]`                                                           | `"claude-instant-1.1"` | `{}`                |
| `spacy.PaLM.v1`               | Google    | `["chat-bison-001", "text-bison-001"]`                                                                        | `"text-bison-001"`     | `{temperature=0.0}` |
| `spacy.Ollama.v1`             | Ollama    | `["llama3", "phi3", "wizardlm2", "mistral", "gemma", "mixtral", "llama2", "codegemma", "command-r", "command-r-plus", "llava", "dbrx", "codellama", "qwen", "dolphin-mixtral", "llama2-uncensored", "mistral-openorca", "deepseek-coder", "phi", "dolphin-mistral", "nomic-embed-text", "nous-hermes2", "orca-mini", "llama2-chinese", "zephyr", "wizard-vicuna-uncensored", "openhermes", "vicuna", "tinyllama", "tinydolphin", "openchat", "starcoder2", "wizardcoder", "stable-code", "starcoder", "neural-chat", "yi", "phind-codellama", "starling-lm", "wizard-math", "falcon", "dolphin-phi", "orca2", "dolphincoder", "mxbai-embed-large", "nous-hermes", "solar", "bakllava", "sqlcoder", "medllama2", "nous-hermes2-mixtral", "wizardlm-uncensored", "dolphin-llama3", "codeup", "stablelm2", "everythinglm", "all-minilm", "samantha-mistral", "yarn-mistral", "stable-beluga", "meditron", "yarn-llama2", "deepseek-llm", "llama-pro", "magicoder", "stablelm-zephyr", "codebooga", "codeqwen", "mistrallite", "wizard-vicuna", "nexusraven", "xwinlm", "goliath", "open-orca-platypus2", "wizardlm", "notux", "megadolphin", "duckdb-nsql", "alfred", "notus", "snowflake-arctic-embed"]` |                        | `{}`                |

To use these models, make sure that you've [set the relevant API](#api-keys)
keys as environment variables.
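
For example, a model from the table above can be selected via its registered
name when adding the `llm` component to a pipeline. A minimal sketch, assuming
`spacy-llm` is installed and the API key is set as described; the
`spacy.NER.v2` task and its labels are illustrative choices, not tied to the
model entry:

```python
import spacy

nlp = spacy.blank("en")
# Pick a model from the table via its registered name; "config" is forwarded
# to the provider's API (here: a deterministic temperature of 0.0).
nlp.add_pipe(
    "llm",
    config={
        "task": {"@llm_tasks": "spacy.NER.v2", "labels": "PERSON,ORG,LOCATION"},
        "model": {
            "@llm_models": "spacy.GPT-4.v2",
            "name": "gpt-4",
            "config": {"temperature": 0.0},
        },
    },
)
doc = nlp("Matthew Honnibal founded Explosion in Berlin.")
print([(ent.text, ent.label_) for ent in doc.ents])
```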

**⚠️ A note on `spacy.Azure.v1`.** Working with Azure OpenAI models is slightly
different than working with models from other providers: among other settings,
the model type has to be specified as `"completions"` or `"chat"`, depending on
whether the deployed model is a completion or chat model.
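
As a sketch, the model section of such a config might look as follows. The
deployment name and resource URL are hypothetical placeholders, and the exact
parameter names of `spacy.Azure.v1` should be checked against its entry in the
API documentation:

```python
# Hypothetical values: "name" refers to the Azure deployment name rather than
# the model name, and "base_url" points at your own Azure OpenAI resource.
azure_model_config = {
    "@llm_models": "spacy.Azure.v1",
    "name": "my-gpt4-deployment",
    "base_url": "https://<resource>.openai.azure.com/",
    "model_type": "chat",  # or "completions", matching the deployed model
}
```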

**⚠️ A note on `spacy.Ollama.v1`.** The Ollama models are all local models that
run on your GPU-backed machine. Please refer to the
[Ollama docs](https://ollama.com/) for installation details; the basic flow
will see you running `ollama serve` to start the local server that routes
incoming requests from `spacy-llm` to the model. Depending on which model you
want, you'll then need to run `ollama pull <MODEL_NAME>`, which downloads the
quantised model files to your local machine.
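
Putting this together, here is a minimal sketch of plugging an Ollama model
into a pipeline. It assumes that `spacy.Ollama.v1` accepts a `name` argument
like the other REST models above, that `ollama serve` is running, and that the
model has been pulled; the task and labels are illustrative only:

```python
import spacy

nlp = spacy.blank("en")
# Assumes `ollama serve` is running locally and `ollama pull llama3` has
# finished, so the "llama3" weights are available to the server.
nlp.add_pipe(
    "llm",
    config={
        "task": {"@llm_tasks": "spacy.TextCat.v3", "labels": "COMPLIMENT,INSULT"},
        "model": {"@llm_models": "spacy.Ollama.v1", "name": "llama3"},
    },
)
print(nlp("You look great today!").cats)
```

Since the model runs locally, no API key is needed for this entry.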

#### API Keys {id="api-keys"}

Note that when using hosted services, you have to ensure that the proper API
keys are set as environment variables as described by the corresponding
provider's documentation.
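
For example, models served through OpenAI read the key from the
`OPENAI_API_KEY` environment variable; the other providers each document their
own variable names. A sketch of setting it from Python rather than the shell:

```python
import os

# OPENAI_API_KEY is OpenAI's standard variable; Anthropic and Google models
# expect their own keys, as described in each provider's documentation.
os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder, not a real key
```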

| [`spacy.StableLM.v1`](/api/large-language-models#models-hf) | StableLM models through HuggingFace. |
| [`spacy.OpenLLaMA.v1`](/api/large-language-models#models-hf) | OpenLLaMA models through HuggingFace. |
| [LangChain models](/api/large-language-models#langchain-models) | LangChain models for API retrieval. |
| [`spacy.Ollama.v1`](/api/large-language-models#models-rest) | Ollama's locally-running models. |

Note that the chat model variants of Llama 2 are currently not supported. This
is because they need a particular prompting setup and don't add any discernible
benefit in the use cases supported by `spacy-llm` compared to the completion
model variants.