Remove reference to a default model

Alex Strick van Linschoten 2024-04-28 11:45:33 +02:00 committed by GitHub
parent 34615aa7e3
commit 3fc17cf862

@@ -1438,7 +1438,7 @@ Currently, these models are provided as part of the core library:
 | `spacy.Claude-instant-1.v1` | Anthropic | `["claude-instant-1", "claude-instant-1-100k"]` | `"claude-instant-1"` | `{}` |
 | `spacy.Claude-instant-1-1.v1` | Anthropic | `["claude-instant-1.1", "claude-instant-1.1-100k"]` | `"claude-instant-1.1"` | `{}` |
 | `spacy.PaLM.v1` | Google | `["chat-bison-001", "text-bison-001"]` | `"text-bison-001"` | `{temperature=0.0}` |
-| `spacy.Ollama.v1` | Ollama | `["llama3", "phi3", "wizardlm2", "mistral", "gemma", "mixtral", "llama2", "codegemma", "command-r", "command-r-plus", "llava", "dbrx", "codellama", "qwen", "dolphin-mixtral", "llama2-uncensored", "mistral-openorca", "deepseek-coder", "phi", "dolphin-mistral", "nomic-embed-text", "nous-hermes2", "orca-mini", "llama2-chinese", "zephyr", "wizard-vicuna-uncensored", "openhermes", "vicuna", "tinyllama", "tinydolphin", "openchat", "starcoder2", "wizardcoder", "stable-code", "starcoder", "neural-chat", "yi", "phind-codellama", "starling-lm", "wizard-math", "falcon", "dolphin-phi", "orca2", "dolphincoder", "mxbai-embed-large", "nous-hermes", "solar", "bakllava", "sqlcoder", "medllama2", "nous-hermes2-mixtral", "wizardlm-uncensored", "dolphin-llama3", "codeup", "stablelm2", "everythinglm", "all-minilm", "samantha-mistral", "yarn-mistral", "stable-beluga", "meditron", "yarn-llama2", "deepseek-llm", "llama-pro", "magicoder", "stablelm-zephyr", "codebooga", "codeqwen", "mistrallite", "wizard-vicuna", "nexusraven", "xwinlm", "goliath", "open-orca-platypus2", "wizardlm", "notux", "megadolphin", "duckdb-nsql", "alfred", "notus", "snowflake-arctic-embed"]` | `"mistral"` | `{}` |
+| `spacy.Ollama.v1` | Ollama | `["llama3", "phi3", "wizardlm2", "mistral", "gemma", "mixtral", "llama2", "codegemma", "command-r", "command-r-plus", "llava", "dbrx", "codellama", "qwen", "dolphin-mixtral", "llama2-uncensored", "mistral-openorca", "deepseek-coder", "phi", "dolphin-mistral", "nomic-embed-text", "nous-hermes2", "orca-mini", "llama2-chinese", "zephyr", "wizard-vicuna-uncensored", "openhermes", "vicuna", "tinyllama", "tinydolphin", "openchat", "starcoder2", "wizardcoder", "stable-code", "starcoder", "neural-chat", "yi", "phind-codellama", "starling-lm", "wizard-math", "falcon", "dolphin-phi", "orca2", "dolphincoder", "mxbai-embed-large", "nous-hermes", "solar", "bakllava", "sqlcoder", "medllama2", "nous-hermes2-mixtral", "wizardlm-uncensored", "dolphin-llama3", "codeup", "stablelm2", "everythinglm", "all-minilm", "samantha-mistral", "yarn-mistral", "stable-beluga", "meditron", "yarn-llama2", "deepseek-llm", "llama-pro", "magicoder", "stablelm-zephyr", "codebooga", "codeqwen", "mistrallite", "wizard-vicuna", "nexusraven", "xwinlm", "goliath", "open-orca-platypus2", "wizardlm", "notux", "megadolphin", "duckdb-nsql", "alfred", "notus", "snowflake-arctic-embed"]` | | `{}` |
 To use these models, make sure that you've [set the relevant API](#api-keys)
 keys as environment variables.
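
With the default name removed from the Ollama row, a model has to be named explicitly in the pipeline config. Below is a minimal sketch of how that might look using spacy-llm's `add_pipe` config conventions; the `spacy.NER.v3` task, the labels, and the `llama3` name are illustrative assumptions, not part of this commit, and the example assumes a build that registers `spacy.Ollama.v1`.

```python
import spacy

# Minimal sketch: a blank pipeline with an llm component backed by Ollama.
nlp = spacy.blank("en")
nlp.add_pipe(
    "llm",
    config={
        "task": {"@llm_tasks": "spacy.NER.v3", "labels": ["PERSON", "ORG"]},
        # No default model name is assumed any more, so pass one explicitly.
        # A local Ollama server needs no API key; hosted providers read theirs
        # from the environment variables described above.
        "model": {"@llm_models": "spacy.Ollama.v1", "name": "llama3"},
    },
)

doc = nlp("Ines Montani co-founded Explosion in Berlin.")
print([(ent.text, ent.label_) for ent in doc.ents])
```

Running this locally would also assume an Ollama server with the chosen model already pulled (e.g. `ollama pull llama3`).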