An individual token — i.e. a word, punctuation symbol, whitespace, etc.

`class` defined in `spacy/tokens/token.pyx`

## `Token.__init__`

Construct a `Token` object.

Example:

```python
doc = nlp("Give it back! He pleaded.")
token = doc[0]
assert token.text == "Give"
```

| Name | Type | Description |
| --- | --- | --- |
| `vocab` | `Vocab` | A storage container for lexical types. |
| `doc` | `Doc` | The parent document. |
| `offset` | int | The index of the token within the document. |
| **RETURNS** | `Token` | The newly constructed object. |
## `Token.__len__`

The number of Unicode characters in the token, i.e. the length of `token.text`.

Example:

```python
doc = nlp("Give it back! He pleaded.")
token = doc[0]
assert len(token) == 4
```

| Name | Type | Description |
| --- | --- | --- |
| **RETURNS** | int | The number of Unicode characters in the token. |
## `Token.set_extension`

Define a custom attribute on the `Token` which becomes available via `Token._`. For details, see the documentation on custom attributes.

Example:

```python
from spacy.tokens import Token

fruit_getter = lambda token: token.text in ("apple", "pear", "banana")
Token.set_extension("is_fruit", getter=fruit_getter)
doc = nlp("I have an apple")
assert doc[3]._.is_fruit
```

| Name | Type | Description |
| --- | --- | --- |
| `name` | unicode | Name of the attribute to set by the extension. For example, `'my_attr'` will be available as `token._.my_attr`. |
| `default` | - | Optional default value of the attribute if no getter or method is defined. |
| `method` | callable | Set a custom method on the object, for example `token._.compare(other_token)`. |
| `getter` | callable | Getter function that takes the object and returns an attribute value. Is called when the user accesses the `._` attribute. |
| `setter` | callable | Setter function that takes the `Token` and a value, and modifies the object. Is called when the user writes to the `Token._` attribute. |
| `force` | bool | Force overwriting of an existing attribute. |
## `Token.get_extension`

Look up a previously registered extension by name. Returns a 4-tuple `(default, method, getter, setter)` if the extension is registered. Raises a `KeyError` otherwise.

| Name | Type | Description |
| --- | --- | --- |
| `name` | unicode | Name of the extension. |
| **RETURNS** | tuple | A `(default, method, getter, setter)` tuple of the extension. |
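A minimal sketch of the 4-tuple described above, using an illustrative extension name (`force=True` is only needed if the name was registered before in the same session):

```python
from spacy.tokens import Token

# Register an extension with only a default value: the method,
# getter, and setter slots of the 4-tuple stay None.
Token.set_extension("is_fruit", default=False, force=True)
extension = Token.get_extension("is_fruit")
assert extension == (False, None, None, None)
```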
## `Token.check_flag`

Check the value of a boolean flag.

Example:

```python
from spacy.attrs import IS_TITLE

doc = nlp("Give it back! He pleaded.")
token = doc[0]
assert token.check_flag(IS_TITLE) == True
```

| Name | Type | Description |
| --- | --- | --- |
| `flag_id` | int | The attribute ID of the flag to check. |
| **RETURNS** | bool | Whether the flag is set. |
## `Token.similarity`

Compute a semantic similarity estimate. Defaults to cosine over vectors.

Example:

```python
apples, _, oranges = nlp("apples and oranges")
apples_oranges = apples.similarity(oranges)
oranges_apples = oranges.similarity(apples)
assert apples_oranges == oranges_apples
```

| Name | Type | Description |
| --- | --- | --- |
| `other` | - | The object to compare with. By default, accepts `Doc`, `Span`, `Token` and `Lexeme` objects. |
| **RETURNS** | float | A scalar similarity score. Higher is more similar. |
## `Token.nbor`

Get a neighboring token.

Example:

```python
doc = nlp("Give it back! He pleaded.")
give_nbor = doc[0].nbor()
assert give_nbor.text == "it"
```

| Name | Type | Description |
| --- | --- | --- |
| `i` | int | The relative position of the token to get. Defaults to `1`. |
| **RETURNS** | `Token` | The token at position `self.doc[self.i + i]`. |
## `Token.is_ancestor`

Check whether this token is a parent, grandparent, etc. of another in the dependency tree.

Example:

```python
doc = nlp("Give it back! He pleaded.")
give = doc[0]
it = doc[1]
assert give.is_ancestor(it)
```

| Name | Type | Description |
| --- | --- | --- |
| `descendant` | `Token` | Another token. |
| **RETURNS** | bool | Whether this token is the ancestor of the descendant. |
## `Token.ancestors`

A sequence of the token's syntactic ancestors (parents, grandparents, etc.) in the dependency tree.

Example:

```python
doc = nlp("Give it back! He pleaded.")
it_ancestors = doc[1].ancestors
assert [t.text for t in it_ancestors] == ["Give"]
he_ancestors = doc[4].ancestors
assert [t.text for t in he_ancestors] == ["pleaded"]
```

| Name | Type | Description |
| --- | --- | --- |
| **YIELDS** | `Token` | A sequence of ancestor tokens such that `ancestor.is_ancestor(self)`. |
## `Token.conjuncts`

A tuple of coordinated tokens, not including the token itself.

Example:

```python
doc = nlp("I like apples and oranges")
apples_conjuncts = doc[2].conjuncts
assert [t.text for t in apples_conjuncts] == ["oranges"]
```

| Name | Type | Description |
| --- | --- | --- |
| **RETURNS** | tuple | The coordinated tokens. |
## `Token.children`

A sequence of the token's immediate syntactic children.

Example:

```python
doc = nlp("Give it back! He pleaded.")
give_children = doc[0].children
assert [t.text for t in give_children] == ["it", "back", "!"]
```

| Name | Type | Description |
| --- | --- | --- |
| **YIELDS** | `Token` | A child token such that `child.head == self`. |
## `Token.lefts`

The leftward immediate children of the word, in the syntactic dependency parse.

Example:

```python
doc = nlp("I like New York in Autumn.")
lefts = [t.text for t in doc[3].lefts]
assert lefts == ["New"]
```

| Name | Type | Description |
| --- | --- | --- |
| **YIELDS** | `Token` | A left-child of the token. |
## `Token.rights`

The rightward immediate children of the word, in the syntactic dependency parse.

Example:

```python
doc = nlp("I like New York in Autumn.")
rights = [t.text for t in doc[3].rights]
assert rights == ["in"]
```

| Name | Type | Description |
| --- | --- | --- |
| **YIELDS** | `Token` | A right-child of the token. |
## `Token.n_lefts`

The number of leftward immediate children of the word, in the syntactic dependency parse.

Example:

```python
doc = nlp("I like New York in Autumn.")
assert doc[3].n_lefts == 1
```

| Name | Type | Description |
| --- | --- | --- |
| **RETURNS** | int | The number of left-child tokens. |
## `Token.n_rights`

The number of rightward immediate children of the word, in the syntactic dependency parse.

Example:

```python
doc = nlp("I like New York in Autumn.")
assert doc[3].n_rights == 1
```

| Name | Type | Description |
| --- | --- | --- |
| **RETURNS** | int | The number of right-child tokens. |
## `Token.subtree`

A sequence containing the token and all the token's syntactic descendants.

Example:

```python
doc = nlp("Give it back! He pleaded.")
give_subtree = doc[0].subtree
assert [t.text for t in give_subtree] == ["Give", "it", "back", "!"]
```

| Name | Type | Description |
| --- | --- | --- |
| **YIELDS** | `Token` | A descendant token such that `self.is_ancestor(token)` or `token == self`. |
## `Token.is_sent_start`

A boolean value indicating whether the token starts a sentence. `None` if unknown. Defaults to `True` for the first token in the `Doc`.

Example:

```python
doc = nlp("Give it back! He pleaded.")
assert doc[4].is_sent_start
assert not doc[5].is_sent_start
```

| Name | Type | Description |
| --- | --- | --- |
| **RETURNS** | bool | Whether the token starts a sentence. |
As of spaCy v2.0, the `Token.sent_start` property is deprecated and has been replaced with `Token.is_sent_start`, which returns a boolean value instead of a misleading `0` for `False` and `1` for `True`. It also now returns `None` if the answer is unknown, and fixes a quirk in the old logic that would always set the property to `0` for the first word of the document.
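The defaults described above can be checked without a statistical model. A sketch, assuming only that a blank (tokenizer-only) English pipeline is available via `spacy.blank`:

```python
import spacy

nlp = spacy.blank("en")  # tokenizer only, no parser or sentencizer
doc = nlp("Hello world")

# The first token of a Doc defaults to True; later tokens stay None
# (unknown) until a parser or sentencizer has set sentence boundaries.
assert doc[0].is_sent_start is True
assert doc[1].is_sent_start is None
```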
## `Token.has_vector`

A boolean value indicating whether a word vector is associated with the token.

Example:

```python
doc = nlp("I like apples")
apples = doc[2]
assert apples.has_vector
```

| Name | Type | Description |
| --- | --- | --- |
| **RETURNS** | bool | Whether the token has vector data attached. |
## `Token.vector`

A real-valued meaning representation.

Example:

```python
doc = nlp("I like apples")
apples = doc[2]
assert apples.vector.dtype == "float32"
assert apples.vector.shape == (300,)
```

| Name | Type | Description |
| --- | --- | --- |
| **RETURNS** | `numpy.ndarray[ndim=1, dtype='float32']` | A 1D numpy array representing the token's semantics. |
## `Token.vector_norm`

The L2 norm of the token's vector representation.

Example:

```python
doc = nlp("I like apples and pasta")
apples = doc[2]
pasta = doc[4]
apples.vector_norm  # 6.89589786529541
pasta.vector_norm   # 7.759851932525635
assert apples.vector_norm != pasta.vector_norm
```

| Name | Type | Description |
| --- | --- | --- |
| **RETURNS** | float | The L2 norm of the vector representation. |
## Attributes

| Name | Type | Description |
| --- | --- | --- |
| `doc` | `Doc` | The parent document. |
| `sent` (v2.0.12) | `Span` | The sentence span that this token is a part of. |
| `text` | unicode | Verbatim text content. |
| `text_with_ws` | unicode | Text content, with trailing space character if present. |
| `whitespace_` | unicode | Trailing space character if present. |
| `orth` | int | ID of the verbatim text content. |
| `orth_` | unicode | Verbatim text content (identical to `Token.text`). Exists mostly for consistency with the other attributes. |
| `vocab` | `Vocab` | The vocab object of the parent `Doc`. |
| `tensor` (v2.1.7) | `ndarray` | The token's slice of the parent `Doc`'s tensor. |
| `head` | `Token` | The syntactic parent, or "governor", of this token. |
| `left_edge` | `Token` | The leftmost token of this token's syntactic descendants. |
| `right_edge` | `Token` | The rightmost token of this token's syntactic descendants. |
| `i` | int | The index of the token within the parent document. |
| `ent_type` | int | Named entity type. |
| `ent_type_` | unicode | Named entity type. |
| `ent_iob` | int | IOB code of named entity tag. `3` means the token begins an entity, `2` means it is outside an entity, `1` means it is inside an entity, and `0` means no entity tag is set. |
| `ent_iob_` | unicode | IOB code of named entity tag. `"B"` means the token begins an entity, `"I"` means it is inside an entity, `"O"` means it is outside an entity, and `""` means no entity tag is set. |
| `ent_kb_id` (v2.2) | int | Knowledge base ID that refers to the named entity this token is a part of, if any. |
| `ent_kb_id_` (v2.2) | unicode | Knowledge base ID that refers to the named entity this token is a part of, if any. |
| `ent_id` | int | ID of the entity the token is an instance of, if any. Currently not used, but potentially for coreference resolution. |
| `ent_id_` | unicode | ID of the entity the token is an instance of, if any. Currently not used, but potentially for coreference resolution. |
| `lemma` | int | Base form of the token, with no inflectional suffixes. |
| `lemma_` | unicode | Base form of the token, with no inflectional suffixes. |
| `lower` | int | Lowercase form of the token. |
| `lower_` | unicode | Lowercase form of the token text. Equivalent to `Token.text.lower()`. |
| `shape` | int | Transform of the token's string, to show orthographic features. Alphabetic characters are replaced by `x` or `X`, numeric characters are replaced by `d`, and sequences of the same character are truncated after length 4. For example, `"Xxxx"` or `"dd"`. |
| `shape_` | unicode | Transform of the token's string, to show orthographic features. Alphabetic characters are replaced by `x` or `X`, numeric characters are replaced by `d`, and sequences of the same character are truncated after length 4. For example, `"Xxxx"` or `"dd"`. |
| `prefix` | int | Hash value of a length-N substring from the start of the token. Defaults to `N=1`. |
| `prefix_` | unicode | A length-N substring from the start of the token. Defaults to `N=1`. |
| `suffix` | int | Hash value of a length-N substring from the end of the token. Defaults to `N=3`. |
| `suffix_` | unicode | Length-N substring from the end of the token. Defaults to `N=3`. |
| `is_alpha` | bool | Does the token consist of alphabetic characters? Equivalent to `token.text.isalpha()`. |
| `is_ascii` | bool | Does the token consist of ASCII characters? Equivalent to `all(ord(c) < 128 for c in token.text)`. |
| `is_digit` | bool | Does the token consist of digits? Equivalent to `token.text.isdigit()`. |
| `is_lower` | bool | Is the token in lowercase? Equivalent to `token.text.islower()`. |
| `is_upper` | bool | Is the token in uppercase? Equivalent to `token.text.isupper()`. |
| `is_title` | bool | Is the token in titlecase? Equivalent to `token.text.istitle()`. |
| `is_punct` | bool | Is the token punctuation? |
| `is_left_punct` | bool | Is the token a left punctuation mark, e.g. `"("`? |
| `is_right_punct` | bool | Is the token a right punctuation mark, e.g. `")"`? |
| `is_space` | bool | Does the token consist of whitespace characters? Equivalent to `token.text.isspace()`. |
| `is_bracket` | bool | Is the token a bracket? |
| `is_quote` | bool | Is the token a quotation mark? |
| `is_currency` (v2.0.8) | bool | Is the token a currency symbol? |
| `like_url` | bool | Does the token resemble a URL? |
| `like_num` | bool | Does the token represent a number? e.g. `"10.9"`, `"10"`, `"ten"`, etc. |
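Many of the flags above are computed from the token text alone, so they work without a statistical model. A quick sketch, assuming a blank (tokenizer-only) English pipeline:

```python
import spacy

nlp = spacy.blank("en")  # no model needed for lexical attributes
doc = nlp("Apple released 10 gadgets")

token = doc[0]
assert token.is_alpha           # "Apple" is alphabetic
assert token.is_title           # ...and titlecased
assert token.shape_ == "Xxxxx"  # X/x for letters, same-char runs truncated after 4
assert doc[2].like_num          # "10" looks like a number
assert doc[2].is_digit
```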