The central data structures in spaCy are the `Language` class, the `Vocab` and the `Doc` object. The `Language` class is used to process a text and turn it into a `Doc` object. It's typically stored as a variable called `nlp`. The `Doc` object owns the sequence of tokens and all their annotations. By centralizing strings, word vectors and lexical attributes in the `Vocab`, we avoid storing multiple copies of this data. This saves memory and ensures there's a single source of truth.
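For example, a minimal sketch of this round trip, assuming the small English pipeline `en_core_web_sm` is installed:

```python
import spacy

# Load a trained pipeline: this returns a Language object,
# conventionally stored in a variable called nlp.
nlp = spacy.load("en_core_web_sm")

# Calling the Language object on a text runs the pipeline
# and returns a Doc.
doc = nlp("Apple is looking at buying a U.K. startup.")

print([token.text for token in doc])
print(doc.vocab is nlp.vocab)  # True: strings and lexical data are shared
```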
Text annotations are also designed to allow a single source of truth: the `Doc` object owns the data, and `Span` and `Token` are views that point into it. The `Doc` object is constructed by the `Tokenizer`, and then modified in place by the components of the pipeline. The `Language` object coordinates these components. It takes raw text and sends it through the pipeline, returning an annotated document. It also orchestrates training and serialization.
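A short sketch of the view relationship, again assuming `en_core_web_sm`:

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("San Francisco considers banning sidewalk delivery robots.")

# Span and Token are views into the Doc, not copies of its data.
span = doc[0:2]   # Span over the first two tokens
token = doc[0]    # Token view of the first token

print(span.text)        # "San Francisco"
print(token.text)       # "San"
print(span.doc is doc)  # True: the view points back into the Doc
```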
Container objects
| Name | Description |
| --- | --- |
| `Doc` | A container for accessing linguistic annotations. |
| `DocBin` | A collection of `Doc` objects for efficient binary serialization. Also used for training data. |
| `Example` | A collection of training annotations, containing two `Doc` objects: the reference data and the predictions. |
| `Language` | Processing class that turns text into `Doc` objects. Different languages implement their own subclasses of it. The variable is typically called `nlp`. |
| `Lexeme` | An entry in the vocabulary. It's a word type with no context, as opposed to a word token. It therefore has no part-of-speech tag, dependency parse etc. |
| `Span` | A slice from a `Doc` object. |
| `SpanGroup` | A named collection of spans belonging to a `Doc`. |
| `Token` | An individual token, i.e. a word, punctuation symbol, whitespace, etc. |
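To illustrate a few of these containers, here's a sketch that stores a `SpanGroup` under a hypothetical `"locations"` key on `doc.spans` and round-trips a `Doc` through a `DocBin`:

```python
import spacy
from spacy.tokens import DocBin

nlp = spacy.blank("en")
doc = nlp("Welcome to the bank of the river Thames.")

# SpanGroup: a named collection of spans, stored on doc.spans.
doc.spans["locations"] = [doc[3:4], doc[6:8]]

# DocBin packs Docs into an efficient binary format.
doc_bin = DocBin(docs=[doc])
data = doc_bin.to_bytes()

# Deserializing needs a shared Vocab to restore the strings.
docs = list(DocBin().from_bytes(data).get_docs(nlp.vocab))
```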
Processing pipeline
The processing pipeline consists of one or more pipeline components that are called on the `Doc` in order. The tokenizer runs before the components. Pipeline components can be added using `Language.add_pipe`. They can contain a statistical model and trained weights, or only make rule-based modifications to the `Doc`. spaCy provides a range of built-in components for different language processing tasks and also allows adding custom components.
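For example, a minimal sketch that combines a built-in component with a hypothetical custom component called `debug_tokens`:

```python
import spacy
from spacy.language import Language

@Language.component("debug_tokens")
def debug_tokens(doc):
    # A custom component just receives the Doc and must return it.
    print(f"Doc has {len(doc)} tokens")
    return doc

nlp = spacy.blank("en")
nlp.add_pipe("sentencizer")              # built-in rule-based component
nlp.add_pipe("debug_tokens", last=True)  # custom component, appended last

doc = nlp("This is a sentence. This is another one.")
print([sent.text for sent in doc.sents])
```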
| Name | Description |
| --- | --- |
| `AttributeRuler` | Set token attributes using matcher rules. |
| `DependencyParser` | Predict syntactic dependencies. |
| `EntityLinker` | Disambiguate named entities to nodes in a knowledge base. |
| `EntityRecognizer` | Predict named entities, e.g. persons or products. |
| `EntityRuler` | Add entity spans to the `Doc` using token-based rules or exact phrase matches. |
| `Lemmatizer` | Determine the base forms of words. |
| `Morphologizer` | Predict morphological features and coarse-grained part-of-speech tags. |
| `SentenceRecognizer` | Predict sentence boundaries. |
| `Sentencizer` | Implement rule-based sentence boundary detection that doesn't require the dependency parse. |
| `Tagger` | Predict part-of-speech tags. |
| `TextCategorizer` | Predict categories or labels over the whole document. |
| `Tok2Vec` | Apply a "token-to-vector" model and set its outputs. |
| `Tokenizer` | Segment raw text and create `Doc` objects from the words. |
| `TrainablePipe` | Class that all trainable pipeline components inherit from. |
| `Transformer` | Use a transformer model and set its outputs. |
| Other functions | Automatically apply something to the `Doc`, e.g. to merge spans of tokens. |
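Once components are added, `Language.pipe_names` shows the order in which they run. The exact list depends on the pipeline; a sketch assuming `en_core_web_sm`:

```python
import spacy

nlp = spacy.load("en_core_web_sm")

# The tokenizer is not listed here: it runs before the components.
print(nlp.pipe_names)
# e.g. ['tok2vec', 'tagger', 'parser', 'attribute_ruler', 'lemmatizer', 'ner']
```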
Matchers
Matchers help you find and extract information from `Doc` objects based on match patterns describing the sequences you're looking for. A matcher operates on a `Doc` and gives you access to the matched tokens in context.
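A minimal sketch using the token-based `Matcher`, with a hypothetical pattern name `"HelloWorld"`:

```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab)

# One dict per token: "hello", optional punctuation, then "world".
pattern = [{"LOWER": "hello"}, {"IS_PUNCT": True, "OP": "?"}, {"LOWER": "world"}]
matcher.add("HelloWorld", [pattern])

doc = nlp("Hello, world! Hello world!")
for match_id, start, end in matcher(doc):
    # Matches are token indices into the Doc, so you keep full context.
    print(doc.vocab.strings[match_id], doc[start:end].text)
```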
| Name | Description |
| --- | --- |
| `DependencyMatcher` | Match sequences of tokens based on dependency trees using Semgrex operators. |
| `Matcher` | Match sequences of tokens based on pattern rules, similar to regular expressions. |
| `PhraseMatcher` | Match sequences of tokens based on phrases. |
Other classes
| Name | Description |
| --- | --- |
| `Corpus` | Class for managing annotated corpora for training and evaluation data. |
| `KnowledgeBase` | Storage for entities and aliases of a knowledge base for entity linking. |
| `Lookups` | Container for convenient access to large lookup tables and dictionaries. |
| `MorphAnalysis` | A morphological analysis. |
| `Morphology` | Store morphological analyses and map them to and from hash values. |
| `Scorer` | Compute evaluation scores. |
| `StringStore` | Map strings to and from hash values. |
| `Vectors` | Container class for vector data keyed by string. |
| `Vocab` | The shared vocabulary that stores strings and gives you access to `Lexeme` objects. |
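To illustrate the hash mapping, a short sketch using the `StringStore` and `Lexeme` objects via the shared `Vocab`:

```python
import spacy

nlp = spacy.blank("en")
doc = nlp("I love coffee")

# The StringStore maps strings to 64-bit hashes and back, so
# annotations can be stored as integers internally.
coffee_hash = doc.vocab.strings["coffee"]
print(coffee_hash, doc.vocab.strings[coffee_hash])

# A Lexeme is the context-independent vocabulary entry for a string.
lexeme = doc.vocab["coffee"]
print(lexeme.text, lexeme.is_alpha, lexeme.is_stop)
```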