//- 💫 DOCS > USAGE > SPACY 101 > TOKENIZATION
p
    |  During processing, spaCy first #[strong tokenizes] the text, i.e.
    |  segments it into words, punctuation and so on. This is done by applying
    |  rules specific to each language. For example, punctuation at the end of
    |  a sentence should be split off, whereas "U.K." should remain one token.
    |  Each #[code Doc] consists of individual tokens, and we can simply
    |  iterate over them:
+code.
    import spacy
    nlp = spacy.load('en')
    doc = nlp(u"Apple is looking at buying U.K. startup for $1 billion")
    for token in doc:
        print(token.text)
+table([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]).u-text-center
    +row
        for cell in ["Apple", "is", "looking", "at", "buying", "U.K.", "startup", "for", "$", "1", "billion"]
            +cell=cell
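p
    |  To see these rules in action, you can tokenize a sentence that ends in
    |  a full stop and also contains an abbreviation. The snippet below is a
    |  minimal sketch: the example sentence is made up, and it assumes the
    |  #[code nlp] object loaded above.

+code.
    doc = nlp(u"The U.K. is big.")
    # "U.K." is kept as one token by the language-specific rules, while
    # the sentence-final "." is split off into a token of its own.
    print([token.text for token in doc])
    # ['The', 'U.K.', 'is', 'big', '.']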