//- ----------------------------------
//- 💫 DOCS > API > TOKEN
//- ----------------------------------

+section("token")
    +h(2, "token", "https://github.com/" + SOCIAL.github + "/spaCy/blob/master/spacy/tokens/token.pyx")
        | #[+tag class] Token

    p.
        A Token represents a single word, punctuation mark, or significant
        whitespace symbol. Integer IDs are provided for all string features.
        The unicode string is provided by an attribute of the same name followed
        by an underscore, e.g. #[code token.orth] is an integer ID and
        #[code token.orth_] is the unicode value. The only exception is the
        #[code token.text] attribute, which is always a unicode string.
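
    p.
        The example below is a minimal sketch of this convention, assuming an
        English model is installed and loaded via #[code spacy.load('en')].

    +code("python", "Example").
        import spacy

        nlp = spacy.load('en')
        doc = nlp(u'Give it back')
        token = doc[0]
        assert isinstance(token.orth, int)               # integer ID
        assert token.orth_ == u'Give'                    # unicode string value
        assert nlp.vocab.strings[token.orth] == u'Give'  # IDs map back to strings
        assert token.text == u'Give'                     # .text is string-typed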
+section("token-init")
|
|
|
|
|
+h(3, "token-init")
|
2016-03-31 17:24:48 +03:00
|
|
|
|
| Token.__init__
|
|
|
|
|
|
2016-10-03 21:19:13 +03:00
|
|
|
|
+code("python", "Definition").
|
2016-03-31 17:24:48 +03:00
|
|
|
|
def __init__(vocab, doc, offset):
|
|
|
|
|
return Token()
|
|
|
|
|
|
2016-10-03 21:19:13 +03:00
|
|
|
|
+table(["Name", "Type", "Description"])
|
2016-03-31 17:24:48 +03:00
|
|
|
|
+row
|
|
|
|
|
+cell vocab
|
2016-10-03 21:19:13 +03:00
|
|
|
|
+cell Vocab
|
2016-03-31 17:24:48 +03:00
|
|
|
|
+cell A Vocab object
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell doc
|
2016-10-03 21:19:13 +03:00
|
|
|
|
+cell Doc
|
2016-03-31 17:24:48 +03:00
|
|
|
|
+cell The parent sequence
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell offset
|
2016-10-03 21:19:13 +03:00
|
|
|
|
+cell #[+a(link_int) int]
|
2016-03-31 17:24:48 +03:00
|
|
|
|
+cell The index of the token within the document
|
|
|
|
|
|
2016-10-03 21:19:13 +03:00
|
|
|
|
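
        p.
            You normally don't construct #[code Token] objects directly: they
            are created by the #[code Doc] during tokenization and retrieved
            by indexing. A minimal sketch, assuming an English model is
            installed:

        +code("python", "Example").
            import spacy

            nlp = spacy.load('en')
            doc = nlp(u'Hello, world!')
            token = doc[1]              # index into the Doc to get a Token
            assert token.orth_ == u','
            assert len(doc) == 4        # u'Hello', u',', u'world', u'!'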
+section("token-stringfeatures")
|
|
|
|
|
+h(3, "token-stringfeatures")
|
2016-03-31 17:24:48 +03:00
|
|
|
|
| String Features
|
|
|
|
|
|
2016-10-03 21:19:13 +03:00
|
|
|
|
+table(["Name", "Description"])
|
2016-03-31 17:24:48 +03:00
|
|
|
|
+row
|
|
|
|
|
+cell lemma / lemma_
|
|
|
|
|
+cell.
|
2016-10-03 21:19:13 +03:00
|
|
|
|
The "base" of the word, with no inflectional suffixes, e.g.
|
|
|
|
|
the lemma of "developing" is "develop", the lemma of "geese"
|
|
|
|
|
is "goose", etc. Note that #[em derivational] suffixes are
|
|
|
|
|
not stripped, e.g. the lemma of "instutitions" is "institution",
|
|
|
|
|
not "institute". Lemmatization is performed using the WordNet
|
|
|
|
|
data, but extended to also cover closed-class words such as
|
|
|
|
|
pronouns. By default, the WN lemmatizer returns "hi" as the
|
2016-03-31 17:24:48 +03:00
|
|
|
|
lemma of "his". We assign pronouns the lemma #[code -PRON-].
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell orth / orth_
|
|
|
|
|
+cell.
|
2016-10-03 21:19:13 +03:00
|
|
|
|
The form of the word with no string normalization or processing,
|
2016-03-31 17:24:48 +03:00
|
|
|
|
as it appears in the string, without trailing whitespace.
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell lower / lower_
|
|
|
|
|
+cell.
|
2016-10-03 21:19:13 +03:00
|
|
|
|
The form of the word, but forced to lower-case, i.e.
|
2016-03-31 17:24:48 +03:00
|
|
|
|
#[code lower = word.orth_.lower()]
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell shape / shape_
|
|
|
|
|
+cell.
|
|
|
|
|
A transform of the word's string, to show orthographic features.
|
2016-10-03 21:19:13 +03:00
|
|
|
|
The characters a-z are mapped to x, A-Z is mapped to X, 0-9
|
|
|
|
|
is mapped to d. After these mappings, sequences of 4 or more
|
|
|
|
|
of the same character are truncated to length 4. Examples:
|
2016-03-31 17:24:48 +03:00
|
|
|
|
C3Po --> XdXx, favorite --> xxxx, :) --> :)
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell prefix / prefix_
|
|
|
|
|
+cell.
|
2016-10-03 21:19:13 +03:00
|
|
|
|
A length-N substring from the start of the word. Length may
|
|
|
|
|
vary by language; currently for English n=1, i.e.
|
2016-03-31 17:24:48 +03:00
|
|
|
|
#[code prefix = word.orth_[:1]]
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell suffix / suffix_
|
|
|
|
|
+cell.
|
2016-10-03 21:19:13 +03:00
|
|
|
|
A length-N substring from the end of the word. Length may
|
|
|
|
|
vary by language; currently for English n=3, i.e.
|
2016-03-31 17:24:48 +03:00
|
|
|
|
#[code suffix = word.orth_[-3:]]
|
|
|
|
|
|
2016-10-03 21:19:13 +03:00
|
|
|
|
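
        p.
            A short sketch of these attributes on a single token (assuming an
            English model is installed). All of them except #[code lemma_] are
            purely lexical and don't depend on context:

        +code("python", "Example").
            import spacy

            nlp = spacy.load('en')
            token = nlp(u'Apples')[0]
            assert token.orth_ == u'Apples'
            assert token.lower_ == u'apples'
            assert token.shape_ == u'Xxxxx'   # run of 5 "x" truncated to 4
            assert token.prefix_ == u'A'      # first character (n=1 for English)
            assert token.suffix_ == u'les'    # last 3 characters
            print(token.lemma_)               # the base form, e.g. u'apple'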
+section("token-booleanflags")
|
|
|
|
|
+h(3, "token-booleanflags")
|
2016-03-31 17:24:48 +03:00
|
|
|
|
| Boolean Flags
|
|
|
|
|
|
2016-10-03 21:19:13 +03:00
|
|
|
|
+table(["Name", "Description"])
|
2016-03-31 17:24:48 +03:00
|
|
|
|
+row
|
|
|
|
|
+cell is_alpha
|
|
|
|
|
+cell.
|
|
|
|
|
Equivalent to #[code word.orth_.isalpha()]
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell is_ascii
|
|
|
|
|
+cell.
|
|
|
|
|
Equivalent to any(ord(c) >= 128 for c in word.orth_)]
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell is_digit
|
|
|
|
|
+cell.
|
|
|
|
|
Equivalent to #[code word.orth_.isdigit()]
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell is_lower
|
|
|
|
|
+cell.
|
|
|
|
|
Equivalent to #[code word.orth_.islower()]
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell is_title
|
|
|
|
|
+cell.
|
|
|
|
|
Equivalent to #[code word.orth_.istitle()]
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell is_punct
|
|
|
|
|
+cell.
|
|
|
|
|
Equivalent to #[code word.orth_.ispunct()]
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell is_space
|
|
|
|
|
+cell.
|
|
|
|
|
Equivalent to #[code word.orth_.isspace()]
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell like_url
|
|
|
|
|
+cell.
|
|
|
|
|
Does the word resemble a URL?
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell like_num
|
|
|
|
|
+cell.
|
|
|
|
|
Does the word represent a number? e.g. “10.9”, “10”, “ten”, etc.
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell like_email
|
|
|
|
|
+cell.
|
|
|
|
|
Does the word resemble an email?
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell is_oov
|
|
|
|
|
+cell.
|
|
|
|
|
Is the word out-of-vocabulary?
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell is_stop
|
|
|
|
|
+cell.
|
2016-10-03 21:19:13 +03:00
|
|
|
|
Is the word part of a "stop list"? Stop lists are used to
|
|
|
|
|
improve the quality of topic models, by filtering out common,
|
2016-03-31 17:24:48 +03:00
|
|
|
|
domain-general words.
|
|
|
|
|
|
2016-10-03 21:19:13 +03:00
|
|
|
|
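
        p.
            These flags are convenient for quick filtering. A minimal sketch,
            assuming an English model is installed:

        +code("python", "Example").
            import spacy

            nlp = spacy.load('en')
            doc = nlp(u'Visit https://spacy.io or email hi@example.com in 2016!')
            # Keep only alphabetic, non-stop-word tokens
            words = [t for t in doc if t.is_alpha and not t.is_stop]
            # Collect tokens that look like URLs or numbers
            urls = [t for t in doc if t.like_url]
            nums = [t for t in doc if t.like_num]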
+section("token-distributional")
|
|
|
|
|
+h(3, "token-distributional")
|
2016-03-31 17:24:48 +03:00
|
|
|
|
| Distributional Features
|
|
|
|
|
|
2016-10-03 21:19:13 +03:00
|
|
|
|
+table(["Name", "Description"])
|
2016-03-31 17:24:48 +03:00
|
|
|
|
+row
|
|
|
|
|
+cell prob
|
|
|
|
|
+cell.
|
2016-10-03 21:19:13 +03:00
|
|
|
|
The unigram log-probability of the word, estimated from
|
|
|
|
|
counts from a large corpus, smoothed using Simple Good Turing
|
2016-03-31 17:24:48 +03:00
|
|
|
|
estimation.
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell cluster
|
|
|
|
|
+cell.
|
2016-10-03 21:19:13 +03:00
|
|
|
|
The Brown cluster ID of the word. These are often useful features
|
|
|
|
|
for linear models. If you’re using a non-linear model, particularly
|
|
|
|
|
a neural net or random forest, consider using the real-valued
|
2016-03-31 17:24:48 +03:00
|
|
|
|
word representation vector, in #[code Token.repvec], instead.
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell vector
|
|
|
|
|
+cell.
|
2016-10-03 21:19:13 +03:00
|
|
|
|
A "word embedding" representation: a dense real-valued vector
|
|
|
|
|
that supports similarity queries between words. By default,
|
|
|
|
|
spaCy currently loads vectors produced by the Levy and
|
2016-03-31 17:24:48 +03:00
|
|
|
|
Goldberg (2014) dependency-based word2vec model.
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell has_vector
|
|
|
|
|
+cell.
|
|
|
|
|
A boolean value indicating whether a vector.
|
|
|
|
|
|
2016-10-03 21:19:13 +03:00
|
|
|
|
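
        p.
            A sketch of comparing two words by the cosine similarity of their
            vectors, using numpy. It assumes an English model with word vectors
            is installed:

        +code("python", "Example").
            import spacy
            import numpy

            nlp = spacy.load('en')
            apple, orange = nlp(u'apple orange')
            if apple.has_vector and orange.has_vector:
                cosine = numpy.dot(apple.vector, orange.vector) / (
                    numpy.linalg.norm(apple.vector) * numpy.linalg.norm(orange.vector))
                print(cosine)        # higher values mean more similar contexts
            print(apple.prob)        # unigram log-probability
            print(apple.cluster)     # Brown cluster ID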
+section("token-alignment")
|
|
|
|
|
+h(3, "token-alignment")
|
2016-03-31 17:24:48 +03:00
|
|
|
|
| Alignment and Output
|
|
|
|
|
|
2016-10-03 21:19:13 +03:00
|
|
|
|
+table(["Name", "Description"])
|
2016-03-31 17:24:48 +03:00
|
|
|
|
+row
|
|
|
|
|
+cell idx
|
|
|
|
|
+cell.
|
|
|
|
|
Start index of the token in the string
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell len(token)
|
|
|
|
|
+cell.
|
|
|
|
|
Length of the token's orth string, in unicode code-points.
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell unicode(token)
|
|
|
|
|
+cell.
|
|
|
|
|
Same as #[code token.orth_].
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell str(token)
|
|
|
|
|
+cell.
|
2016-10-03 21:19:13 +03:00
|
|
|
|
In Python 3, returns #[code token.orth_]. In Python 2, returns
|
2016-03-31 17:24:48 +03:00
|
|
|
|
#[code token.orth_.encode('utf8')].
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell text
|
|
|
|
|
+cell.
|
|
|
|
|
An alias for #[code token.orth_].
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell text_with_ws
|
|
|
|
|
+cell.
|
2016-10-03 21:19:13 +03:00
|
|
|
|
#[code token.orth_ + token.whitespace_], i.e. the form of the
|
|
|
|
|
word as it appears in the string, trailing whitespace. This is
|
|
|
|
|
useful when you need to use linguistic features to add inline
|
2016-03-31 17:24:48 +03:00
|
|
|
|
mark-up to the string.
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell whitespace_
|
|
|
|
|
+cell.
|
|
|
|
|
The number of immediate syntactic children following the word
|
|
|
|
|
in the string.
|
|
|
|
|
|
2016-10-03 21:19:13 +03:00
|
|
|
|
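
        p.
            A sketch of using #[code text_with_ws] to add inline mark-up while
            preserving the original spacing (assuming an English model is
            installed):

        +code("python", "Example").
            import spacy

            nlp = spacy.load('en')
            doc = nlp(u'Give it back! He pleaded.')
            # Wrap every verb in asterisks, keeping the original whitespace
            marked_up = u''
            for token in doc:
                if token.pos_ == u'VERB':
                    marked_up += u'*' + token.orth_ + u'*' + token.whitespace_
                else:
                    marked_up += token.text_with_ws
            # Without mark-up, joining text_with_ws reproduces the input exactly
            assert u''.join(t.text_with_ws for t in doc) == doc.text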
+section("token-postags")
|
|
|
|
|
+h(3, "token-postags")
|
2016-03-31 17:24:48 +03:00
|
|
|
|
| Part-of-Speech Tags
|
|
|
|
|
|
2016-10-03 21:19:13 +03:00
|
|
|
|
+table(["Name", "Description"])
|
2016-03-31 17:24:48 +03:00
|
|
|
|
+row
|
|
|
|
|
+cell pos / pos_
|
|
|
|
|
+cell.
|
2016-10-03 21:19:13 +03:00
|
|
|
|
A coarse-grained, less detailed tag that represents the
|
|
|
|
|
word-class of the token. The set of #[code .pos] tags are
|
|
|
|
|
consistent across languages. The available tags are #[code ADJ],
|
|
|
|
|
#[code ADP], #[code ADV], #[code AUX], #[code CONJ], #[code DET],
|
|
|
|
|
#[code INTJ], #[code NOUN], #[code NUM], #[code PART],
|
|
|
|
|
#[code PRON], #[code PROPN], #[code PUNCT], #[code SCONJ],
|
2016-03-31 17:24:48 +03:00
|
|
|
|
#[code SYM], #[code VERB], #[code X], #[code EOL], #[code SPACE].
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell tag / tag_
|
|
|
|
|
+cell.
|
2016-10-03 21:19:13 +03:00
|
|
|
|
A fine-grained, more detailed tag that represents the
|
|
|
|
|
word-class and some basic morphological information for the
|
|
|
|
|
token. These tags are primarily designed to be good features
|
|
|
|
|
for subsequent models, particularly the syntactic parser.
|
|
|
|
|
They are language and treebank dependent. The tagger is
|
|
|
|
|
trained to predict these fine-grained tags, and then a
|
|
|
|
|
mapping table is used to reduce them to the coarse-grained
|
2016-03-31 17:24:48 +03:00
|
|
|
|
#[code .pos] tags.
|
|
|
|
|
|
2016-10-03 21:19:13 +03:00
|
|
|
|
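
        p.
            A minimal sketch comparing the two tag sets (assuming an English
            model is installed):

        +code("python", "Example").
            import spacy

            nlp = spacy.load('en')
            doc = nlp(u'They gave geese to the institutions.')
            for token in doc:
                # pos_ is the coarse-grained tag, tag_ the fine-grained one,
                # e.g. "gave" -> VERB, VBD
                print(token.orth_, token.pos_, token.tag_)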
+section("token-navigating")
|
|
|
|
|
+h(3, "token-navigating") Navigating the Parse Tree
|
2016-03-31 17:24:48 +03:00
|
|
|
|
|
2016-10-03 21:19:13 +03:00
|
|
|
|
+table(["Name", "Description"])
|
|
|
|
|
+row
|
|
|
|
|
+cell dep / dep_
|
|
|
|
|
+cell.
|
|
|
|
|
The syntactic relation type, aka the dependency label, connecting the word to its head.
|
2016-03-31 17:24:48 +03:00
|
|
|
|
+row
|
|
|
|
|
+cell head
|
|
|
|
|
+cell.
|
2016-10-03 21:19:13 +03:00
|
|
|
|
The immediate syntactic head of the token. If the token is the
|
|
|
|
|
root of its sentence, it is the token itself, i.e.
|
2016-03-31 17:24:48 +03:00
|
|
|
|
#[code root_token.head is root_token].
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell children
|
|
|
|
|
+cell.
|
|
|
|
|
An iterator that yields from lefts, and then yields from rights.
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell subtree
|
|
|
|
|
+cell.
|
|
|
|
|
An iterator for the part of the sentence syntactically governed
|
|
|
|
|
by the word, including the word itself.
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell left_edge
|
|
|
|
|
+cell.
|
|
|
|
|
The leftmost edge of the token's subtree.
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell right_edge
|
|
|
|
|
+cell.
|
|
|
|
|
The rightmost edge of the token's subtree.
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell nbor(i=1)
|
|
|
|
|
+cell.
|
|
|
|
|
Get the #[code i]#[sup th] next / previous neighboring token.
|
|
|
|
|
|
2016-10-03 21:19:13 +03:00
|
|
|
|
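
        p.
            A sketch of walking the parse tree, assuming an English model with
            the parser enabled:

        +code("python", "Example").
            import spacy

            nlp = spacy.load('en')
            doc = nlp(u'The quick brown fox jumped over the lazy dog.')
            token = doc[3]                             # u'fox'
            print(token.dep_, token.head.orth_)        # dependency label and head word
            print([t.orth_ for t in token.children])   # immediate children
            print([t.orth_ for t in token.subtree])    # the whole subtree
            # Climb from any token to the root of its sentence
            node = token
            while node.head is not node:
                node = node.head
            print(node.orth_)                          # the sentence root, e.g. u'jumped'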
+section("token-namedentities")
|
|
|
|
|
+h(3, "token-namedentities")
|
2016-03-31 17:24:48 +03:00
|
|
|
|
| Named Entity Recognition
|
|
|
|
|
|
2016-10-03 21:19:13 +03:00
|
|
|
|
+table(["Name", "Description"])
|
2016-03-31 17:24:48 +03:00
|
|
|
|
+row
|
|
|
|
|
+cell ent_type
|
|
|
|
|
+cell.
|
|
|
|
|
If the token is part of an entity, its entity type.
|
|
|
|
|
|
|
|
|
|
+row
|
|
|
|
|
+cell ent_iob
|
|
|
|
|
+cell.
|
2016-10-03 21:19:13 +03:00
|
|
|
|
The IOB (inside, outside, begin) entity recognition tag for
|
2016-03-31 17:24:48 +03:00
|
|
|
|
the token.
|
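
        p.
            A sketch of reading the per-token entity annotations, assuming an
            English model with the entity recognizer is installed. Following the
            usual integer-ID / unicode-string convention, #[code ent_type_] gives
            the entity label as a string:

        +code("python", "Example").
            import spacy

            nlp = spacy.load('en')
            doc = nlp(u'Google was founded in California.')
            for token in doc:
                # ent_iob is the integer IOB code; ent_type_ is the unicode entity
                # label, or an empty string if the token is not part of an entity
                # (e.g. u'Google' is typically labelled u'ORG')
                print(token.orth_, token.ent_iob, token.ent_type_)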