From 1162fcf0994dd7f83a744eddd0bdd31bccd5ca29 Mon Sep 17 00:00:00 2001
From: Raphael Mitsch
Date: Thu, 5 Oct 2023 14:44:38 +0200
Subject: [PATCH] Add Mistral mentions. (#13037)

---
 website/docs/api/large-language-models.mdx   | 3 ++-
 website/docs/usage/large-language-models.mdx | 3 ++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/website/docs/api/large-language-models.mdx b/website/docs/api/large-language-models.mdx
index c5d106e29..f8404cb2e 100644
--- a/website/docs/api/large-language-models.mdx
+++ b/website/docs/api/large-language-models.mdx
@@ -1101,8 +1101,9 @@ Currently, these models are provided as part of the core library:
 | Model | Provider | Supported names | HF directory |
 | -------------------- | --------------- | ------------------------------------------------------------------------------------------------------------ | -------------------------------------- |
 | `spacy.Dolly.v1` | Databricks | `["dolly-v2-3b", "dolly-v2-7b", "dolly-v2-12b"]` | https://huggingface.co/databricks |
-| `spacy.Llama2.v1` | Meta AI | `["Llama-2-7b-hf", "Llama-2-13b-hf", "Llama-2-70b-hf"]` | https://huggingface.co/meta-llama |
 | `spacy.Falcon.v1` | TII | `["falcon-rw-1b", "falcon-7b", "falcon-7b-instruct", "falcon-40b-instruct"]` | https://huggingface.co/tiiuae |
+| `spacy.Llama2.v1` | Meta AI | `["Llama-2-7b-hf", "Llama-2-13b-hf", "Llama-2-70b-hf"]` | https://huggingface.co/meta-llama |
+| `spacy.Mistral.v1` | Mistral AI | `["Mistral-7B-v0.1", "Mistral-7B-Instruct-v0.1"]` | https://huggingface.co/mistralai |
 | `spacy.StableLM.v1` | Stability AI | `["stablelm-base-alpha-3b", "stablelm-base-alpha-7b", "stablelm-tuned-alpha-3b", "stablelm-tuned-alpha-7b"]` | https://huggingface.co/stabilityai |
 | `spacy.OpenLLaMA.v1` | OpenLM Research | `["open_llama_3b", "open_llama_7b", "open_llama_7b_v2", "open_llama_13b"]` | https://huggingface.co/openlm-research |
 
diff --git a/website/docs/usage/large-language-models.mdx b/website/docs/usage/large-language-models.mdx
index 875ff33d4..94494b4e1 100644
--- a/website/docs/usage/large-language-models.mdx
+++ b/website/docs/usage/large-language-models.mdx
@@ -436,7 +436,7 @@ respectively.
 Alternatively you can use LangChain to access hosted or local models by
 specifying one of the models registered with the `langchain.` prefix.
 
-_Why LangChain if there are also are a native REST and a HuggingFace interface? When should I use what?_
+_Why LangChain if there are also native REST and HuggingFace interfaces? When should I use what?_
 
 Third-party libraries like `langchain` focus on prompt management, integration
 of many different LLM APIs, and other related features such as conversational
@@ -488,6 +488,7 @@ provider's documentation.
 | [`spacy.PaLM.v1`](/api/large-language-models#models-rest) | Google’s `PaLM` model family. |
 | [`spacy.Dolly.v1`](/api/large-language-models#models-hf) | Dolly models through HuggingFace. |
 | [`spacy.Falcon.v1`](/api/large-language-models#models-hf) | Falcon models through HuggingFace. |
+| [`spacy.Mistral.v1`](/api/large-language-models#models-hf) | Mistral models through HuggingFace. |
 | [`spacy.Llama2.v1`](/api/large-language-models#models-hf) | Llama2 models through HuggingFace. |
 | [`spacy.StableLM.v1`](/api/large-language-models#models-hf) | StableLM models through HuggingFace. |
 | [`spacy.OpenLLaMA.v1`](/api/large-language-models#models-hf) | OpenLLaMA models through HuggingFace. |
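
For illustration, a minimal sketch of how the newly listed `spacy.Mistral.v1` registry entry can be selected as the model of a `spacy-llm` pipeline. This assumes `spacy-llm` is installed with its transformers extras and that the `Mistral-7B-v0.1` weights can be fetched from https://huggingface.co/mistralai; the NER task and the labels are arbitrary examples, not part of the patch.

```python
import spacy

# Sketch only: the "llm" factory is registered by spacy-llm via entry points,
# so installing spacy-llm (with transformers support) is assumed.
nlp = spacy.blank("en")
nlp.add_pipe(
    "llm",
    config={
        # Illustrative task and labels; any spacy-llm task would work here.
        "task": {"@llm_tasks": "spacy.NER.v3", "labels": ["PERSON", "ORG"]},
        # The entry added by this patch: Mistral weights pulled from the HF Hub.
        "model": {"@llm_models": "spacy.Mistral.v1", "name": "Mistral-7B-v0.1"},
    },
)

doc = nlp("Mistral AI released Mistral-7B in September 2023.")
print([(ent.text, ent.label_) for ent in doc.ents])
```

The same model choice can equally be expressed in a training config by setting `@llm_models = "spacy.Mistral.v1"` and `name = "Mistral-7B-v0.1"` under the pipeline's model block.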