mirror of https://github.com/explosion/spaCy.git (synced 2025-07-10 16:22:29 +03:00)
Improve built-in component API docs
This commit is contained in:
parent 235a0e948e
commit c03cb1cc63
@@ -38,16 +38,18 @@ shortcut for this and instantiate the component using its string name and
 > ```
 
 | Name | Type | Description |
-| ----------- | ------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------- |
+| ----------- | ----------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------- |
 | `vocab` | `Vocab` | The shared vocabulary. |
-| `model` | `thinc.neural.Model` or `True` | The model powering the pipeline component. If no model is supplied, the model is created when you call `begin_training`, `from_disk` or `from_bytes`. |
+| `model` | `thinc.neural.Model` / `True` | The model powering the pipeline component. If no model is supplied, the model is created when you call `begin_training`, `from_disk` or `from_bytes`. |
 | `**cfg` | - | Configuration parameters. |
 | **RETURNS** | `DependencyParser` | The newly constructed object. |
 
 ## DependencyParser.\_\_call\_\_ {#call tag="method"}
 
 Apply the pipe to one document. The document is modified in place, and returned.
-Both [`__call__`](/api/dependencyparser#call) and
+This usually happens under the hood when you call the `nlp` object on a text and
+all pipeline components are applied to the `Doc` in order. Both
+[`__call__`](/api/dependencyparser#call) and
 [`pipe`](/api/dependencyparser#pipe) delegate to the
 [`predict`](/api/dependencyparser#predict) and
 [`set_annotations`](/api/dependencyparser#set_annotations) methods.
@@ -57,6 +59,7 @@ Both [`__call__`](/api/dependencyparser#call) and
 > ```python
 > parser = DependencyParser(nlp.vocab)
 > doc = nlp(u"This is a sentence.")
+> # This usually happens under the hood
 > processed = parser(doc)
 > ```
 
@@ -83,7 +86,7 @@ Apply the pipe to a stream of documents. Both
 > ```
 
 | Name | Type | Description |
-| ------------ | -------- | -------------------------------------------------------------------------------------------------------------- |
+| ------------ | -------- | ------------------------------------------------------ |
 | `stream` | iterable | A stream of documents. |
 | `batch_size` | int | The number of texts to buffer. Defaults to `128`. |
 | **YIELDS** | `Doc` | Processed documents in the order of the original text. |
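The prose changed in this commit describes the same contract for every built-in component: `__call__` and `pipe` are thin wrappers that delegate to `predict` and `set_annotations`, and the doc is modified in place and returned. A minimal stdlib-only sketch of that delegation pattern, not spaCy's actual implementation (the `Doc` class and the length-based "prediction" here are hypothetical stand-ins):

```python
# Sketch of the __call__ -> predict / set_annotations contract described
# above. Doc and the scoring logic are hypothetical stand-ins, not spaCy's
# real classes.

class Doc:
    def __init__(self, text):
        self.text = text
        self.annotations = None


class Pipe:
    def predict(self, docs):
        # Compute predictions without modifying the docs.
        return [len(doc.text) for doc in docs]

    def set_annotations(self, docs, scores):
        # Write the predictions back onto the docs, in place.
        for doc, score in zip(docs, scores):
            doc.annotations = score

    def __call__(self, doc):
        # Apply the pipe to one document: modified in place, and returned.
        self.set_annotations([doc], self.predict([doc]))
        return doc


processed = Pipe()(Doc("This is a sentence."))
print(processed.annotations)  # prints 19, the length of the text
```

Keeping `predict` side-effect-free and isolating all mutation in `set_annotations` is what lets `pipe` batch predictions while `__call__` stays a one-document special case.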
@@ -38,16 +38,18 @@ shortcut for this and instantiate the component using its string name and
 > ```
 
 | Name | Type | Description |
-| ----------- | ------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------- |
+| ----------- | ----------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------- |
 | `vocab` | `Vocab` | The shared vocabulary. |
-| `model` | `thinc.neural.Model` or `True` | The model powering the pipeline component. If no model is supplied, the model is created when you call `begin_training`, `from_disk` or `from_bytes`. |
+| `model` | `thinc.neural.Model` / `True` | The model powering the pipeline component. If no model is supplied, the model is created when you call `begin_training`, `from_disk` or `from_bytes`. |
 | `**cfg` | - | Configuration parameters. |
 | **RETURNS** | `EntityRecognizer` | The newly constructed object. |
 
 ## EntityRecognizer.\_\_call\_\_ {#call tag="method"}
 
 Apply the pipe to one document. The document is modified in place, and returned.
-Both [`__call__`](/api/entityrecognizer#call) and
+This usually happens under the hood when you call the `nlp` object on a text and
+all pipeline components are applied to the `Doc` in order. Both
+[`__call__`](/api/entityrecognizer#call) and
 [`pipe`](/api/entityrecognizer#pipe) delegate to the
 [`predict`](/api/entityrecognizer#predict) and
 [`set_annotations`](/api/entityrecognizer#set_annotations) methods.
@@ -57,6 +59,7 @@ Both [`__call__`](/api/entityrecognizer#call) and
 > ```python
 > ner = EntityRecognizer(nlp.vocab)
 > doc = nlp(u"This is a sentence.")
+> # This usually happens under the hood
 > processed = ner(doc)
 > ```
 
@@ -83,7 +86,7 @@ Apply the pipe to a stream of documents. Both
 > ```
 
 | Name | Type | Description |
-| ------------ | -------- | -------------------------------------------------------------------------------------------------------------- |
+| ------------ | -------- | ------------------------------------------------------ |
 | `stream` | iterable | A stream of documents. |
 | `batch_size` | int | The number of texts to buffer. Defaults to `128`. |
 | **YIELDS** | `Doc` | Processed documents in the order of the original text. |
@@ -38,17 +38,19 @@ shortcut for this and instantiate the component using its string name and
 > ```
 
 | Name | Type | Description |
-| ----------- | ------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------- |
+| ----------- | ----------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------- |
 | `vocab` | `Vocab` | The shared vocabulary. |
-| `model` | `thinc.neural.Model` or `True` | The model powering the pipeline component. If no model is supplied, the model is created when you call `begin_training`, `from_disk` or `from_bytes`. |
+| `model` | `thinc.neural.Model` / `True` | The model powering the pipeline component. If no model is supplied, the model is created when you call `begin_training`, `from_disk` or `from_bytes`. |
 | `**cfg` | - | Configuration parameters. |
 | **RETURNS** | `Tagger` | The newly constructed object. |
 
 ## Tagger.\_\_call\_\_ {#call tag="method"}
 
 Apply the pipe to one document. The document is modified in place, and returned.
-Both [`__call__`](/api/tagger#call) and [`pipe`](/api/tagger#pipe) delegate to
-the [`predict`](/api/tagger#predict) and
+This usually happens under the hood when you call the `nlp` object on a text and
+all pipeline components are applied to the `Doc` in order. Both
+[`__call__`](/api/tagger#call) and [`pipe`](/api/tagger#pipe) delegate to the
+[`predict`](/api/tagger#predict) and
 [`set_annotations`](/api/tagger#set_annotations) methods.
 
 > #### Example
@@ -56,6 +58,7 @@ the [`predict`](/api/tagger#predict) and
 > ```python
 > tagger = Tagger(nlp.vocab)
 > doc = nlp(u"This is a sentence.")
+> # This usually happens under the hood
 > processed = tagger(doc)
 > ```
 
@@ -80,7 +83,7 @@ Apply the pipe to a stream of documents. Both [`__call__`](/api/tagger#call) and
 > ```
 
 | Name | Type | Description |
-| ------------ | -------- | -------------------------------------------------------------------------------------------------------------- |
+| ------------ | -------- | ------------------------------------------------------ |
 | `stream` | iterable | A stream of documents. |
 | `batch_size` | int | The number of texts to buffer. Defaults to `128`. |
 | **YIELDS** | `Doc` | Processed documents in the order of the original text. |
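The `pipe` tables touched in this commit (`stream`, `batch_size`, **YIELDS**) describe a buffering generator: it consumes up to `batch_size` documents at a time, processes each buffer, and yields the processed documents in their original order. A stdlib-only sketch of that buffering behavior, with a hypothetical `annotate` function standing in for the component's real per-batch `predict`/`set_annotations` work:

```python
# Sketch of the pipe(stream, batch_size) behavior documented above.
# annotate() is a hypothetical stand-in for a component's real per-batch
# processing; the real method yields Doc objects, not strings.
from itertools import islice


def annotate(batch):
    # Stand-in "processing" for one buffered batch.
    return [text.upper() for text in batch]


def pipe(stream, batch_size=128):
    stream = iter(stream)
    while True:
        # Buffer up to batch_size items from the stream.
        batch = list(islice(stream, batch_size))
        if not batch:
            return
        # Process the whole buffer, then yield in original order.
        yield from annotate(batch)


docs = list(pipe(["one", "two", "three"], batch_size=2))
print(docs)  # processed items, original order preserved
```

Because the generator only ever holds one buffer of `batch_size` items, a stream of any length can be processed in constant memory while still giving the model reasonably large batches.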
@@ -48,9 +48,10 @@ shortcut for this and instantiate the component using its string name and
 ## TextCategorizer.\_\_call\_\_ {#call tag="method"}
 
 Apply the pipe to one document. The document is modified in place, and returned.
-Both [`__call__`](/api/textcategorizer#call) and
-[`pipe`](/api/textcategorizer#pipe) delegate to the
-[`predict`](/api/textcategorizer#predict) and
+This usually happens under the hood when you call the `nlp` object on a text and
+all pipeline components are applied to the `Doc` in order. Both
+[`__call__`](/api/textcategorizer#call) and [`pipe`](/api/textcategorizer#pipe)
+delegate to the [`predict`](/api/textcategorizer#predict) and
 [`set_annotations`](/api/textcategorizer#set_annotations) methods.
 
 > #### Example
@@ -58,6 +59,7 @@ Both [`__call__`](/api/textcategorizer#call) and
 > ```python
 > textcat = TextCategorizer(nlp.vocab)
 > doc = nlp(u"This is a sentence.")
+> # This usually happens under the hood
 > processed = textcat(doc)
 > ```
 