Remove n_threads

This commit is contained in:
Ines Montani 2019-02-17 22:25:42 +01:00
parent 4c7ab7620a
commit 04b4df0ec9
5 changed files with 18 additions and 23 deletions


@@ -95,7 +95,6 @@ multiprocessing.
| ------------ | ----- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `texts` | - | A sequence of unicode objects. |
| `as_tuples` | bool | If set to `True`, inputs should be a sequence of `(text, context)` tuples. Output will then be a sequence of `(doc, context)` tuples. Defaults to `False`. |
| `n_threads` | int | The number of worker threads to use. If `-1`, OpenMP will decide how many to use at run time. Default is `2`. |
| `batch_size` | int | The number of texts to buffer. |
| `disable` | list | Names of pipeline components to [disable](/usage/processing-pipelines#disabling). |
| **YIELDS** | `Doc` | Documents in the order of the original text. |
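The updated `Language.pipe` signature above can be sketched as follows. This is a minimal illustration using a blank pipeline; the texts and context dicts are hypothetical sample data, not from the docs.

```python
import spacy

# A blank English pipeline is enough to demonstrate streaming.
nlp = spacy.blank("en")

# With as_tuples=True, the input is (text, context) pairs and the output
# is (doc, context) pairs, yielded in the original order.
data = [("hello world", {"id": 1}), ("another text", {"id": 2})]
results = [(doc.text, ctx["id"])
           for doc, ctx in nlp.pipe(data, as_tuples=True, batch_size=50)]
```

Note that `batch_size` only controls how many texts are buffered internally; it does not change the order or content of the output.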


@@ -68,7 +68,7 @@ matched phrases with entity types. Instead, actions need to be specified when
custom actions per pattern within the same matcher. For example, you might only
want to merge some entity types, and set custom flags for other matched
patterns. For more details and examples, see the usage guide on
[rule-based matching](/usage/linguistic-features#rule-based-matching).
[rule-based matching](/usage/rule-based-matching).
</Infobox>
@@ -81,7 +81,7 @@ Match a stream of documents, yielding them in turn.
> ```python
> from spacy.matcher import Matcher
> matcher = Matcher(nlp.vocab)
> for doc in matcher.pipe(docs, batch_size=50, n_threads=4):
> for doc in matcher.pipe(docs, batch_size=50):
> pass
> ```
@@ -89,7 +89,6 @@ Match a stream of documents, yielding them in turn.
| --------------------------------------------- | -------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `docs` | iterable | A stream of documents. |
| `batch_size` | int | The number of documents to accumulate into a working set. |
| `n_threads` | int | The number of threads with which to work on the buffer in parallel, if the `Matcher` implementation supports multi-threading. |
| `return_matches` <Tag variant="new">2.1</Tag> | bool | Yield the match lists along with the docs, making results `(doc, matches)` tuples. |
| `as_tuples` | bool | Interpret the input stream as `(doc, context)` tuples, and yield `(result, context)` tuples out. If both `return_matches` and `as_tuples` are `True`, the output will be a sequence of `((doc, matches), context)` tuples. |
| **YIELDS** | `Doc` | Documents, in order. |
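The `(doc, matches)` output described by `return_matches` can be sketched by calling the matcher per document, which is what `pipe` does for each buffered batch. The pattern and texts below are illustrative, and the sketch uses the current list-of-patterns form of `Matcher.add`.

```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab)
# Hypothetical token pattern: the two tokens "hello" and "world".
matcher.add("HELLO_WORLD", [[{"LOWER": "hello"}, {"LOWER": "world"}]])

docs = nlp.pipe(["Hello world!", "nothing to see"], batch_size=50)
# Equivalent of pipe(..., return_matches=True): (doc, matches) pairs,
# where each match is a (match_id, start, end) tuple.
results = [(doc, matcher(doc)) for doc in docs]
```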


@@ -78,16 +78,15 @@ Match a stream of documents, yielding them in turn.
> ```python
> from spacy.matcher import PhraseMatcher
> matcher = PhraseMatcher(nlp.vocab)
> for doc in matcher.pipe(texts, batch_size=50, n_threads=4):
> for doc in matcher.pipe(texts, batch_size=50):
> pass
> ```
| Name | Type | Description |
| ------------ | -------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| `docs` | iterable | A stream of documents. |
| `batch_size` | int | The number of documents to accumulate into a working set. |
| `n_threads` | int | The number of threads with which to work on the buffer in parallel, if the `PhraseMatcher` implementation supports multi-threading. |
| **YIELDS** | `Doc` | Documents, in order. |
| Name | Type | Description |
| ------------ | -------- | --------------------------------------------------------- |
| `docs` | iterable | A stream of documents. |
| `batch_size` | int | The number of documents to accumulate into a working set. |
| **YIELDS** | `Doc` | Documents, in order. |
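The same streaming pattern applies to the `PhraseMatcher`, which matches exact token sequences rather than token attributes. A minimal sketch, with an illustrative terminology entry; as above, the matcher is called per document, mirroring what `pipe` does per batch.

```python
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.blank("en")
matcher = PhraseMatcher(nlp.vocab)
# Hypothetical phrase pattern; make_doc tokenizes without running the pipeline.
matcher.add("OBAMA", [nlp.make_doc("Barack Obama")])

docs = nlp.pipe(["Barack Obama was the 44th president.", "No match here."],
                batch_size=50)
results = [(doc, matcher(doc)) for doc in docs]
```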
## PhraseMatcher.\_\_len\_\_ {#len tag="method"}


@@ -82,12 +82,11 @@ delegate to the [`predict`](/api/textcategorizer#predict) and
> pass
> ```
| Name | Type | Description |
| ------------ | -------- | -------------------------------------------------------------------------------------------------------------- |
| `stream` | iterable | A stream of documents. |
| `batch_size` | int | The number of texts to buffer. Defaults to `128`. |
| `n_threads` | int | The number of worker threads to use. If `-1`, OpenMP will decide how many to use at run time. Default is `-1`. |
| **YIELDS** | `Doc` | Processed documents in the order of the original text. |
| Name | Type | Description |
| ------------ | -------- | ------------------------------------------------------ |
| `stream` | iterable | A stream of documents. |
| `batch_size` | int | The number of texts to buffer. Defaults to `128`. |
| **YIELDS** | `Doc` | Processed documents in the order of the original text. |
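The reduced `TextCategorizer.pipe` signature (just `stream` and `batch_size`) can be sketched as below. This uses the current string-based `add_pipe` API rather than the one from this commit's era, and the pipeline is freshly initialized with random weights, so the scores are meaningless; the labels are illustrative.

```python
import spacy

nlp = spacy.blank("en")
textcat = nlp.add_pipe("textcat")
# Hypothetical labels; they must be added before initialization.
textcat.add_label("POSITIVE")
textcat.add_label("NEGATIVE")
nlp.initialize()  # random weights, API demonstration only

# Feed pre-tokenized Docs and read the category scores off doc.cats.
stream = nlp.tokenizer.pipe(["good stuff", "bad stuff"])
docs = list(textcat.pipe(stream, batch_size=128))
```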
## TextCategorizer.predict {#predict tag="method"}


@@ -61,12 +61,11 @@ Tokenize a stream of texts.
> pass
> ```
| Name | Type | Description |
| ------------ | ----- | ----------------------------------------------------------------------------------------------------------------------- |
| `texts` | - | A sequence of unicode texts. |
| `batch_size` | int | The number of texts to accumulate in an internal buffer. |
| `n_threads` | int | The number of threads to use, if the implementation supports multi-threading. The default tokenizer is single-threaded. |
| **YIELDS** | `Doc` | A sequence of Doc objects, in order. |
| Name | Type | Description |
| ------------ | ----- | -------------------------------------------------------- |
| `texts` | - | A sequence of unicode texts. |
| `batch_size` | int | The number of texts to accumulate in an internal buffer. |
| **YIELDS** | `Doc` | A sequence of Doc objects, in order. |
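The updated `Tokenizer.pipe` signature (just `texts` and `batch_size`) can be sketched directly on a blank pipeline's tokenizer; the sample texts are illustrative.

```python
import spacy

nlp = spacy.blank("en")
texts = ["First text.", "Second one."]

# Tokenize a stream of texts; Docs are yielded in input order.
docs = list(nlp.tokenizer.pipe(texts, batch_size=50))
tokens = [[t.text for t in doc] for doc in docs]
```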
## Tokenizer.find_infix {#find_infix tag="method"}