Mirror of https://github.com/explosion/spaCy.git, synced 2025-10-31 16:07:41 +03:00
Add Mistral mentions. (#13037)

commit 1162fcf099
parent 862f8254e8
@@ -1101,8 +1101,9 @@ Currently, these models are provided as part of the core library:

 | Model                | Provider        | Supported names                                                                                              | HF directory                           |
 | -------------------- | --------------- | ------------------------------------------------------------------------------------------------------------ | -------------------------------------- |
 | `spacy.Dolly.v1`     | Databricks      | `["dolly-v2-3b", "dolly-v2-7b", "dolly-v2-12b"]`                                                             | https://huggingface.co/databricks      |
-| `spacy.Llama2.v1`    | Meta AI         | `["Llama-2-7b-hf", "Llama-2-13b-hf", "Llama-2-70b-hf"]`                                                      | https://huggingface.co/meta-llama      |
 | `spacy.Falcon.v1`    | TII             | `["falcon-rw-1b", "falcon-7b", "falcon-7b-instruct", "falcon-40b-instruct"]`                                 | https://huggingface.co/tiiuae          |
+| `spacy.Llama2.v1`    | Meta AI         | `["Llama-2-7b-hf", "Llama-2-13b-hf", "Llama-2-70b-hf"]`                                                      | https://huggingface.co/meta-llama      |
+| `spacy.Mistral.v1`   | Mistral AI      | `["Mistral-7B-v0.1", "Mistral-7B-Instruct-v0.1"]`                                                            | https://huggingface.co/mistralai       |
 | `spacy.StableLM.v1`  | Stability AI    | `["stablelm-base-alpha-3b", "stablelm-base-alpha-7b", "stablelm-tuned-alpha-3b", "stablelm-tuned-alpha-7b"]` | https://huggingface.co/stabilityai     |
 | `spacy.OpenLLaMA.v1` | OpenLM Research | `["open_llama_3b", "open_llama_7b", "open_llama_7b_v2", "open_llama_13b"]`                                   | https://huggingface.co/openlm-research |
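Each row in the table above maps a registered model name to the Hugging Face checkpoints it can load; in a `spacy-llm` pipeline the model is selected by that registry name in the component config. A minimal sketch of an excerpt using the newly added `spacy.Mistral.v1` entry (the NER task and labels are illustrative assumptions, not part of this diff):

```ini
; Excerpt of a spacy-llm pipeline config; task and labels are illustrative.
[components.llm]
factory = "llm"

[components.llm.task]
@llm_tasks = "spacy.NER.v3"
labels = ["PERSON", "ORG", "LOC"]

[components.llm.model]
; Registry name from the table above; `name` must be one of the supported names.
@llm_models = "spacy.Mistral.v1"
name = "Mistral-7B-v0.1"
```

Loading the pipeline from such a config downloads the corresponding checkpoint from the model's HF directory on first use.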

@@ -436,7 +436,7 @@ respectively. Alternatively you can use LangChain to access hosted or local

 models by specifying one of the models registered with the `langchain.` prefix.

 <Infobox>
-_Why LangChain if there are also are a native REST and a HuggingFace interface? When should I use what?_
+_Why LangChain if there are also native REST and HuggingFace interfaces? When should I use what?_

 Third-party libraries like `langchain` focus on prompt management, integration
 of many different LLM APIs, and other related features such as conversational
@@ -488,6 +488,7 @@ provider's documentation.

 | [`spacy.PaLM.v1`](/api/large-language-models#models-rest)               | Google’s `PaLM` model family.                  |
 | [`spacy.Dolly.v1`](/api/large-language-models#models-hf)                | Dolly models through HuggingFace.              |
 | [`spacy.Falcon.v1`](/api/large-language-models#models-hf)               | Falcon models through HuggingFace.             |
+| [`spacy.Mistral.v1`](/api/large-language-models#models-hf)              | Mistral models through HuggingFace.            |
 | [`spacy.Llama2.v1`](/api/large-language-models#models-hf)               | Llama2 models through HuggingFace.             |
 | [`spacy.StableLM.v1`](/api/large-language-models#models-hf)             | StableLM models through HuggingFace.           |
 | [`spacy.OpenLLaMA.v1`](/api/large-language-models#models-hf)            | OpenLLaMA models through HuggingFace.          |