diff --git a/website/docs/api/large-language-models.mdx b/website/docs/api/large-language-models.mdx
index 7c4b345f5..74eed5d3c 100644
--- a/website/docs/api/large-language-models.mdx
+++ b/website/docs/api/large-language-models.mdx
@@ -1267,14 +1267,16 @@ These models all take the same parameters:
 
 Currently, these models are provided as part of the core library:
 
-| Model                | Provider        | Supported names                                                                                              | HF directory                           |
-| -------------------- | --------------- | ------------------------------------------------------------------------------------------------------------ | -------------------------------------- |
-| `spacy.Dolly.v1`     | Databricks      | `["dolly-v2-3b", "dolly-v2-7b", "dolly-v2-12b"]`                                                             | https://huggingface.co/databricks      |
-| `spacy.Falcon.v1`    | TII             | `["falcon-rw-1b", "falcon-7b", "falcon-7b-instruct", "falcon-40b-instruct"]`                                 | https://huggingface.co/tiiuae          |
-| `spacy.Llama2.v1`    | Meta AI         | `["Llama-2-7b-hf", "Llama-2-13b-hf", "Llama-2-70b-hf"]`                                                      | https://huggingface.co/meta-llama      |
-| `spacy.Mistral.v1`   | Mistral AI      | `["Mistral-7B-v0.1", "Mistral-7B-Instruct-v0.1"]`                                                            | https://huggingface.co/mistralai       |
-| `spacy.StableLM.v1`  | Stability AI    | `["stablelm-base-alpha-3b", "stablelm-base-alpha-7b", "stablelm-tuned-alpha-3b", "stablelm-tuned-alpha-7b"]` | https://huggingface.co/stabilityai     |
-| `spacy.OpenLLaMA.v1` | OpenLM Research | `["open_llama_3b", "open_llama_7b", "open_llama_7b_v2", "open_llama_13b"]`                                   | https://huggingface.co/openlm-research |
+| Model                | Provider        | Supported names                                                                                                           | HF directory                           |
+| -------------------- | --------------- | ------------------------------------------------------------------------------------------------------------------------- | -------------------------------------- |
+| `spacy.Dolly.v1`     | Databricks      | `["dolly-v2-3b", "dolly-v2-7b", "dolly-v2-12b"]`                                                                          | https://huggingface.co/databricks      |
+| `spacy.Falcon.v1`    | TII             | `["falcon-rw-1b", "falcon-7b", "falcon-7b-instruct", "falcon-40b-instruct"]`                                              | https://huggingface.co/tiiuae          |
+| `spacy.Llama2.v1`    | Meta AI         | `["Llama-2-7b-hf", "Llama-2-13b-hf", "Llama-2-70b-hf"]`                                                                   | https://huggingface.co/meta-llama      |
+| `spacy.Mistral.v1`   | Mistral AI      | `["Mistral-7B-v0.1", "Mistral-7B-Instruct-v0.1"]`                                                                         | https://huggingface.co/mistralai       |
+| `spacy.StableLM.v1`  | Stability AI    | `["stablelm-base-alpha-3b", "stablelm-base-alpha-7b", "stablelm-tuned-alpha-3b", "stablelm-tuned-alpha-7b"]`              | https://huggingface.co/stabilityai     |
+| `spacy.OpenLLaMA.v1` | OpenLM Research | `["open_llama_3b", "open_llama_7b", "open_llama_7b_v2", "open_llama_13b"]`                                                | https://huggingface.co/openlm-research |
+| `spacy.Yi.v1`        | 01 AI           | `["Yi-34B", "Yi-34B-chat-8bits", "Yi-6B-chat", "Yi-6B", "Yi-6B-200K", "Yi-34B-chat", "Yi-34B-chat-4bits", "Yi-34B-200K"]` | https://huggingface.co/01-ai           |
+| `spacy.Zephyr.v1`    | Hugging Face    | `["zephyr-7b-beta"]`                                                                                                      | https://huggingface.co/HuggingFaceH4   |
diff --git a/website/docs/usage/large-language-models.mdx b/website/docs/usage/large-language-models.mdx
index 43b22ce07..572bc5221 100644
--- a/website/docs/usage/large-language-models.mdx
+++ b/website/docs/usage/large-language-models.mdx
@@ -493,6 +493,8 @@ provider's documentation.
 | [`spacy.Llama2.v1`](/api/large-language-models#models-hf) | Llama2 models through HuggingFace. |
 | [`spacy.StableLM.v1`](/api/large-language-models#models-hf) | StableLM models through HuggingFace. |
 | [`spacy.OpenLLaMA.v1`](/api/large-language-models#models-hf) | OpenLLaMA models through HuggingFace. |
+| [`spacy.Yi.v1`](/api/large-language-models#models-hf) | Yi models through HuggingFace. |
+| [`spacy.Zephyr.v1`](/api/large-language-models#models-hf) | Zephyr model through HuggingFace. |
 | [LangChain models](/api/large-language-models#langchain-models) | LangChain models for API retrieval. |
 
 Note that the chat models variants of Llama 2 are currently not supported. This
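For context, the two new registry entries are used the same way as the existing Hugging Face models documented above: the registered name goes into `@llm_models`, and one of the supported names from the table goes into `name`. Below is a minimal sketch of a `config.cfg` selecting the newly added `spacy.Zephyr.v1` entry; the NER task and its labels are illustrative only, and any other spacy-llm task could be substituted.

```ini
[nlp]
lang = "en"
pipeline = ["llm"]

[components]

[components.llm]
factory = "llm"

# Illustrative task; any spacy-llm task can be combined with the new model entries.
[components.llm.task]
@llm_tasks = "spacy.NER.v3"
labels = ["PERSON", "ORGANISATION", "LOCATION"]

# Newly added Hugging Face model registry; "zephyr-7b-beta" is the only
# supported name for spacy.Zephyr.v1 (see the table above).
[components.llm.model]
@llm_models = "spacy.Zephyr.v1"
name = "zephyr-7b-beta"
```

Such a config can be loaded with `spacy_llm.util.assemble("config.cfg")`, as with the other Hugging Face models in these docs. The same pattern applies to `spacy.Yi.v1`, with `name` set to one of the listed Yi variants, e.g. `"Yi-6B"`.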