From 30d7c917be989174c52a43c57c3820947f6b3464 Mon Sep 17 00:00:00 2001
From: Sofie Van Landeghem
Date: Fri, 29 Dec 2023 21:28:03 +0100
Subject: [PATCH] fix typo's

---
 website/docs/api/large-language-models.mdx | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/website/docs/api/large-language-models.mdx b/website/docs/api/large-language-models.mdx
index 1fa6a27cb..9e6616cea 100644
--- a/website/docs/api/large-language-models.mdx
+++ b/website/docs/api/large-language-models.mdx
@@ -225,7 +225,7 @@ All tasks are registered in the `llm_tasks` registry.
 dataset across multiple storage units for easier processing and lookups. In
 `spacy-llm` we use this term (synonymously: "mapping") to describe the splitting
 up of prompts if they are too long for a model to handle, and "fusing"
-(synonymously: "reducing") to describe how the model responses for several shars
+(synonymously: "reducing") to describe how the model responses for several shards
 are merged back together into a single document.
 
 Prompts are broken up in a manner that _always_ keeps the prompt in the template
@@ -239,10 +239,10 @@ prompt template for our fictional, sharding-supporting task looks like this:
 
 ```
 Estimate the sentiment of this text: "{text}"
-Estimated entiment:
+Estimated sentiment:
 ```
 
-Depening on how tokens are counted exactly (this is a config setting), we might
+Depending on how tokens are counted exactly (this is a config setting), we might
 come up with `n = 12` tokens for the number of tokens in the prompt
 instructions. Furthermore let's assume that our `text` is "This has been
 amazing - I can't remember the last time I left the cinema so impressed." -
@@ -259,7 +259,7 @@ _(Prompt 1/2)_
 
 ```
 Estimate the sentiment of this text: "This has been amazing - I can't remember "
-Estimated entiment:
+Estimated sentiment:
 ```
 
 _(Prompt 2/2)_
@@ -267,7 +267,7 @@ _(Prompt 2/2)_
 
 ```
 Estimate the sentiment of this text: "the last time I left the cinema so impressed."
-Estimated entiment:
+Estimated sentiment:
 ```
 
 The reduction step is task-specific - a sentiment estimation task might e. g. do
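
The map/reduce behaviour the patched docs describe can be sketched generically. This is an illustrative helper only, not the actual `spacy-llm` implementation: token counting here is naive whitespace splitting (a stand-in for the configurable counting the docs mention), and the length-weighted fusion is just one plausible task-specific reduction.

```python
def shard_prompts(template: str, text: str, context_length: int) -> list[str]:
    """Split `text` into shards so each rendered prompt fits the context window.

    Tokens are counted by naive whitespace splitting -- an illustrative
    stand-in for the configurable token-counting setting in the docs.
    The prompt instructions from the template always stay intact; only
    the text payload is sharded.
    """
    # Tokens consumed by the template itself (without the text payload).
    n_template = len(template.replace("{text}", "").split())
    budget = context_length - n_template  # tokens left for the text per shard
    words = text.split()
    shards = [words[i : i + budget] for i in range(0, len(words), budget)]
    return [template.format(text=" ".join(shard)) for shard in shards]


def fuse_sentiments(scores: list[float], shard_lengths: list[int]) -> float:
    """Reduce per-shard sentiment scores into one document-level score,
    weighted by shard length (one plausible task-specific reduction)."""
    total = sum(shard_lengths)
    return sum(s * n for s, n in zip(scores, shard_lengths)) / total


template = 'Estimate the sentiment of this text: "{text}"\nEstimated sentiment:'
text = (
    "This has been amazing - I can't remember the last time I left the "
    "cinema so impressed."
)
prompts = shard_prompts(template, text, context_length=18)
# With this budget the example text splits into two prompts, mirroring the
# Prompt 1/2 and Prompt 2/2 walkthrough in the patched docs.
```

The exact shard boundaries differ from the docs' example because the token-counting scheme differs; the point is the mechanism, not the specific split.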