Fix a bunch of missing spaces on the website

Mark Amery 2016-11-20 17:02:45 +00:00
parent bcc76e42de
commit b4e1dc0e3f
4 changed files with 7 additions and 7 deletions


@@ -164,8 +164,8 @@ p
    +cell #[code other]
    +cell -
    +cell
        | The object to compare with. By default, accepts #[code Doc],
        | #[code Span], #[code Token] and #[code Lexeme] objects.
    +footrow
    +cell return
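The rows above document spaCy's `similarity()` method, which scores two objects by the cosine of their (averaged) word vectors. As a minimal pure-Python sketch of that computation, not spaCy's implementation:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors -- the kind of score
    similarity(other) returns for objects carrying word vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

cosine_similarity([1.0, 2.0], [2.0, 4.0])  # parallel vectors score 1.0
```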


@@ -156,8 +156,8 @@ p
    +cell #[code other]
    +cell -
    +cell
        | The object to compare with. By default, accepts #[code Doc],
        | #[code Span], #[code Token] and #[code Lexeme] objects.
    +footrow
    +cell return


@@ -70,8 +70,8 @@ p Create the vocabulary.
    +cell #[code lex_attr_getters]
    +cell dict
    +cell
        | A dictionary mapping attribute IDs to functions to compute them.
        | Defaults to #[code None].
    +row
    +cell #[code lemmatizer]
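The `lex_attr_getters` entry documented above can be illustrated with a small sketch. The integer attribute IDs here are made up for illustration; spaCy defines the real ones in `spacy.attrs`:

```python
# Hypothetical attribute IDs standing in for spaCy's real ones in spacy.attrs.
LOWER, IS_DIGIT = 1, 2

# A dictionary mapping attribute IDs to functions that compute them from
# the lexeme's string -- the shape lex_attr_getters expects.
lex_attr_getters = {
    LOWER: lambda string: string.lower(),
    IS_DIGIT: lambda string: string.isdigit(),
}

# Applying every getter to one string yields that lexeme's attribute values.
attrs = {attr_id: getter("Hello") for attr_id, getter in lex_attr_getters.items()}
```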


@@ -73,7 +73,7 @@ p
    | one-by-one. After a long and bitter struggle, the global interpreter
    | lock was freed around spaCy's main parsing loop in v0.100.3. This means
    | that the #[code .pipe()] method will be significantly faster in most
    | practical situations, because it allows shared memory parallelism.

+code.
    for doc in nlp.pipe(texts, batch_size=10000, n_threads=3):
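The streaming behaviour described above can be sketched in pure Python: buffer a stream of texts into batches and yield processed docs in order. This is only an illustration of the batching idea, not spaCy's multi-threaded implementation:

```python
def pipe(texts, process, batch_size=2):
    """Yield process(text) for each text, batch by batch, the way
    nlp.pipe() buffers its input stream before processing."""
    batch = []
    for text in texts:
        batch.append(text)
        if len(batch) == batch_size:
            yield from map(process, batch)
            batch = []
    yield from map(process, batch)  # flush the final partial batch

docs = list(pipe(["a b", "c", "d e f"], str.split))  # [['a', 'b'], ['c'], ['d', 'e', 'f']]
```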