diff --git a/website/docs/api/doc.jade b/website/docs/api/doc.jade
index 86c4dd65a..fab1dd86b 100644
--- a/website/docs/api/doc.jade
+++ b/website/docs/api/doc.jade
@@ -164,8 +164,8 @@ p
         +cell #[code other]
         +cell -
         +cell
-            |  The object to compare with. By default, accepts #[code Doc], 
-            |  #[code Span], #[code Token] and #[code Lexeme] objects. 
+            |  The object to compare with. By default, accepts #[code Doc],
+            |  #[code Span], #[code Token] and #[code Lexeme] objects.
 
     +footrow
         +cell return
diff --git a/website/docs/api/span.jade b/website/docs/api/span.jade
index f071f5abc..a07ee25d9 100644
--- a/website/docs/api/span.jade
+++ b/website/docs/api/span.jade
@@ -156,8 -156,8 @@ p
         +cell #[code other]
         +cell -
         +cell
-            |  The object to compare with. By default, accepts #[code Doc], 
-            |  #[code Span], #[code Token] and #[code Lexeme] objects. 
+            |  The object to compare with. By default, accepts #[code Doc],
+            |  #[code Span], #[code Token] and #[code Lexeme] objects.
 
     +footrow
         +cell return
diff --git a/website/docs/api/vocab.jade b/website/docs/api/vocab.jade
index 96356cb41..7490bccf4 100644
--- a/website/docs/api/vocab.jade
+++ b/website/docs/api/vocab.jade
@@ -70,8 +70,8 @@ p Create the vocabulary.
         +cell #[code lex_attr_getters]
         +cell dict
         +cell
-            |  A dictionary mapping attribute IDs to functions to compute them. 
-            |  Defaults to #[code None]. 
+            |  A dictionary mapping attribute IDs to functions to compute them.
+            |  Defaults to #[code None].
 
     +row
         +cell #[code lemmatizer]
diff --git a/website/docs/usage/processing-text.jade b/website/docs/usage/processing-text.jade
index b26538dcb..205986e8a 100644
--- a/website/docs/usage/processing-text.jade
+++ b/website/docs/usage/processing-text.jade
@@ -73,7 +73,7 @@ p
     |  one-by-one. After a long and bitter struggle, the global interpreter
     |  lock was freed around spaCy's main parsing loop in v0.100.3. This means
     |  that the #[code .pipe()] method will be significantly faster in most
-    |  practical situations, because it allows shared memory parallelism. 
+    |  practical situations, because it allows shared memory parallelism.
 
 +code.
     for doc in nlp.pipe(texts, batch_size=10000, n_threads=3):