title | tag | source | teaser | api_base_class | api_string_name | api_trainable |
---|---|---|---|---|---|---|
Sentencizer | class | spacy/pipeline/sentencizer.pyx | Pipeline component for rule-based sentence boundary detection | /api/pipe | sentencizer | false |
A simple pipeline component to allow custom sentence boundary detection logic
that doesn't require the dependency parse. By default, sentence segmentation is
performed by the `DependencyParser`, so the `Sentencizer` lets you implement a
simpler, rule-based strategy that doesn't require a statistical model to be
loaded.
## Config and implementation

The default config is defined by the pipeline component factory and describes
how the component should be configured. You can override its settings via the
`config` argument on `nlp.add_pipe` or in your `config.cfg` for training.
Example
```python
config = {"punct_chars": None}
nlp.add_pipe("sentencizer", config=config)
```
Setting | Type | Description | Default
---|---|---|---
`punct_chars` | `List[str]` | Optional custom list of punctuation characters that mark sentence ends. See below for defaults if not set. | `None`
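For training, the same setting lives under the component's block in your config file. A minimal sketch of the corresponding `config.cfg` fragment (the section name assumes the component is added under its default name, `sentencizer`):

```ini
[components.sentencizer]
factory = "sentencizer"
punct_chars = null
```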
https://github.com/explosion/spaCy/blob/develop/spacy/pipeline/sentencizer.pyx
## Sentencizer.__init__

Initialize the sentencizer.
Example
```python
# Construction via add_pipe
sentencizer = nlp.add_pipe("sentencizer")

# Construction from class
from spacy.pipeline import Sentencizer
sentencizer = Sentencizer()
```
Name | Type | Description
---|---|---
_keyword-only_ | |
`punct_chars` | `List[str]` | Optional custom list of punctuation characters that mark sentence ends. See below for defaults.
### punct_chars defaults
['!', '.', '?', '։', '؟', '۔', '܀', '܁', '܂', '߹', '।', '॥', '၊', '။', '።',
'፧', '፨', '᙮', '᜵', '᜶', '᠃', '᠉', '᥄', '᥅', '᪨', '᪩', '᪪', '᪫',
'᭚', '᭛', '᭞', '᭟', '᰻', '᰼', '᱾', '᱿', '‼', '‽', '⁇', '⁈', '⁉',
'⸮', '⸼', '꓿', '꘎', '꘏', '꛳', '꛷', '꡶', '꡷', '꣎', '꣏', '꤯', '꧈',
'꧉', '꩝', '꩞', '꩟', '꫰', '꫱', '꯫', '﹒', '﹖', '﹗', '!', '.', '?',
'𐩖', '𐩗', '𑁇', '𑁈', '𑂾', '𑂿', '𑃀', '𑃁', '𑅁', '𑅂', '𑅃', '𑇅',
'𑇆', '𑇍', '𑇞', '𑇟', '𑈸', '𑈹', '𑈻', '𑈼', '𑊩', '𑑋', '𑑌', '𑗂',
'𑗃', '𑗉', '𑗊', '𑗋', '𑗌', '𑗍', '𑗎', '𑗏', '𑗐', '𑗑', '𑗒', '𑗓',
'𑗔', '𑗕', '𑗖', '𑗗', '𑙁', '𑙂', '𑜼', '𑜽', '𑜾', '𑩂', '𑩃', '𑪛',
'𑪜', '𑱁', '𑱂', '𖩮', '𖩯', '𖫵', '𖬷', '𖬸', '𖭄', '𛲟', '𝪈', '。', '。']
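To illustrate the rule-based strategy in plain Python, here is a minimal character-level sketch. This is not spaCy's actual implementation (the real `Sentencizer` marks boundaries on tokens, not raw characters), but it shows the core idea: a sentence ends wherever one of the punctuation characters appears.

```python
def split_sentences(text, punct_chars=None):
    """Character-level sketch of rule-based sentence splitting.

    Not spaCy's implementation: the real Sentencizer operates on
    token boundaries, not raw characters.
    """
    if punct_chars is None:
        punct_chars = {"!", ".", "?"}
    sents, start = [], 0
    for i, ch in enumerate(text):
        if ch in punct_chars:
            # Close the current sentence at the punctuation mark.
            sents.append(text[start : i + 1].strip())
            start = i + 1
    tail = text[start:].strip()
    if tail:
        # Keep any trailing text without a closing punctuation mark.
        sents.append(tail)
    return sents

print(split_sentences("This is a sentence. This is another sentence."))
# ['This is a sentence.', 'This is another sentence.']
```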
## Sentencizer.__call__

Apply the sentencizer on a `Doc`. Typically, this happens automatically after
the component has been added to the pipeline using `nlp.add_pipe`.
Example
```python
from spacy.lang.en import English

nlp = English()
nlp.add_pipe("sentencizer")
doc = nlp("This is a sentence. This is another sentence.")
assert len(list(doc.sents)) == 2
```
Name | Type | Description
---|---|---
`doc` | `Doc` | The `Doc` object to process, e.g. the `Doc` in the pipeline.
**RETURNS** | `Doc` | The modified `Doc` with added sentence boundaries.
## Sentencizer.pipe

Apply the pipe to a stream of documents. This usually happens under the hood
when the `nlp` object is called on a text and all pipeline components are
applied to the `Doc` in order.
Example
```python
sentencizer = nlp.add_pipe("sentencizer")
for doc in sentencizer.pipe(docs, batch_size=50):
    pass
```
Name | Type | Description
---|---|---
`stream` | `Iterable[Doc]` | A stream of documents.
_keyword-only_ | |
`batch_size` | `int` | The number of documents to buffer. Defaults to `128`.
**YIELDS** | `Doc` | The processed documents in order.
## Sentencizer.score

Score a batch of examples.
Example
scores = sentencizer.score(examples)
Name | Type | Description
---|---|---
`examples` | `Iterable[Example]` | The examples to score.
**RETURNS** | `Dict[str, Any]` | The scores, produced by `Scorer.score_spans`.
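To make the span-based scoring concrete, here is a small sketch of precision/recall/F-score over `(start, end)` sentence spans, in the spirit of `Scorer.score_spans`. This is not spaCy's code, and the `sents_p`/`sents_r`/`sents_f` key names are illustrative assumptions:

```python
def sent_span_prf(gold_spans, pred_spans):
    # Exact-match comparison of (start, end) sentence spans.
    gold, pred = set(gold_spans), set(pred_spans)
    tp = len(gold & pred)  # spans present in both gold and prediction
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return {"sents_p": p, "sents_r": r, "sents_f": f}

# One of two predicted spans matches the gold spans exactly.
print(sent_span_prf([(0, 5), (5, 10)], [(0, 5), (5, 9)]))
# {'sents_p': 0.5, 'sents_r': 0.5, 'sents_f': 0.5}
```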
## Sentencizer.to_disk

Save the sentencizer settings (punctuation characters) to a directory. This
will create a file `sentencizer.json`. This also happens automatically when you
save an `nlp` object with a sentencizer added to its pipeline.
Example
```python
config = {"punct_chars": [".", "?", "!", "。"]}
sentencizer = nlp.add_pipe("sentencizer", config=config)
sentencizer.to_disk("/path/to/sentencizer.json")
```
Name | Type | Description
---|---|---
`path` | `str` / `Path` | A path to a JSON file, which will be created if it doesn't exist. Paths may be either strings or `Path`-like objects.
## Sentencizer.from_disk

Load the sentencizer settings from a file. Expects a JSON file. This also
happens automatically when you load an `nlp` object or model with a
sentencizer added to its pipeline.
Example
```python
sentencizer = nlp.add_pipe("sentencizer")
sentencizer.from_disk("/path/to/sentencizer.json")
```
Name | Type | Description
---|---|---
`path` | `str` / `Path` | A path to a JSON file. Paths may be either strings or `Path`-like objects.
**RETURNS** | `Sentencizer` | The modified `Sentencizer` object.
## Sentencizer.to_bytes

Serialize the sentencizer settings to a bytestring.
Example
```python
config = {"punct_chars": [".", "?", "!", "。"]}
sentencizer = nlp.add_pipe("sentencizer", config=config)
sentencizer_bytes = sentencizer.to_bytes()
```
Name | Type | Description
---|---|---
**RETURNS** | `bytes` | The serialized data.
## Sentencizer.from_bytes

Load the pipe from a bytestring. Modifies the object in place and returns it.
Example
```python
sentencizer_bytes = sentencizer.to_bytes()
sentencizer = nlp.add_pipe("sentencizer")
sentencizer.from_bytes(sentencizer_bytes)
```
Name | Type | Description
---|---|---
`bytes_data` | `bytes` | The bytestring to load.
**RETURNS** | `Sentencizer` | The modified `Sentencizer` object.
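The round trip above can be sketched with a hypothetical stand-in class. It assumes the settings serialize as a small JSON payload (mirroring the `sentencizer.json` file that `to_disk` writes); this is not spaCy's actual implementation, and `MiniSentencizer` is an invented name for illustration only:

```python
import json

class MiniSentencizer:
    """Hypothetical stand-in illustrating the to_bytes/from_bytes
    round trip; not spaCy's actual implementation."""

    def __init__(self, punct_chars=None):
        self.punct_chars = list(punct_chars or [".", "!", "?"])

    def to_bytes(self):
        # Serialize the only setting as a JSON payload.
        return json.dumps({"punct_chars": self.punct_chars}).encode("utf8")

    def from_bytes(self, bytes_data):
        self.punct_chars = json.loads(bytes_data)["punct_chars"]
        return self  # modified in place and returned, like the real pipe

src = MiniSentencizer(["。", "."])
dst = MiniSentencizer().from_bytes(src.to_bytes())
print(dst.punct_chars)  # ['。', '.']
```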