From a83c0add2ef3c69476b4a1087536345f18dc86a3 Mon Sep 17 00:00:00 2001
From: Björn Böing
Date: Thu, 1 Aug 2019 14:28:38 +0200
Subject: [PATCH] Add links to tokenizer API docs to refer relevant information. (#4064)

* Add links to tokenizer API docs to refer relevant information.

* Add suggested changes

Co-Authored-By: Ines Montani
---
 website/docs/api/tokenizer.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/website/docs/api/tokenizer.md b/website/docs/api/tokenizer.md
index 67e67f5c9..ce1ba9a21 100644
--- a/website/docs/api/tokenizer.md
+++ b/website/docs/api/tokenizer.md
@@ -5,7 +5,7 @@ tag: class
 source: spacy/tokenizer.pyx
 ---
 
-Segment text, and create `Doc` objects with the discovered segment boundaries.
+Segment text, and create `Doc` objects with the discovered segment boundaries. For a deeper understanding, see the docs on [how spaCy's tokenizer works](/usage/linguistic-features#how-tokenizer-works).
 
 ## Tokenizer.\_\_init\_\_ {#init tag="method"}
@@ -109,7 +109,7 @@ if no suffix rules match.
 
 Add a special-case tokenization rule. This mechanism is also used to add custom
 tokenizer exceptions to the language data. See the usage guide on
-[adding languages](/usage/adding-languages#tokenizer-exceptions) for more
+[adding languages](/usage/adding-languages#tokenizer-exceptions) and [linguistic features](/usage/linguistic-features#special-cases) for more
 details and examples.
 
 > #### Example
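For context, the special-case mechanism the second hunk documents can be sketched roughly as follows. This is a minimal illustration based on spaCy's public `Tokenizer.add_special_case` API, not part of the patch itself; the example string `"gimme"` is chosen arbitrarily.

```python
import spacy
from spacy.symbols import ORTH

# A blank English pipeline: just the tokenizer, no trained components.
nlp = spacy.blank("en")

# By default, "gimme" is a single token.
print([t.text for t in nlp("gimme that")])  # ['gimme', 'that']

# Register a special case: the exact string "gimme" should be split
# into two tokens, "gim" and "me".
nlp.tokenizer.add_special_case("gimme", [{ORTH: "gim"}, {ORTH: "me"}])

print([t.text for t in nlp("gimme that")])  # ['gim', 'me', 'that']
```

Note that special cases match the exact string only; the subtoken texts given via `ORTH` must concatenate back to the original string.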