//- 💫 DOCS > USAGE > SPACY 101 > TOKENIZATION
p
    | During processing, spaCy first #[strong tokenizes] the text, i.e.
    | segments it into words, punctuation and so on. For example, punctuation
    | at the end of a sentence should be split off – whereas "U.K." should
    | remain one token. This is done by applying rules specific to each
    | language. Each #[code Doc] consists of individual tokens, and we can
    | simply iterate over them:

+code.
    for token in doc:
        print(token.text)

+table([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]).u-text-center
    +row
        for cell in ["Apple", "is", "looking", "at", "buying", "U.K.", "startup", "for", "$", "1", "billion"]
            +cell=cell

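p
    | Note that #[code doc] in the snippet above comes from processing a text
    | with the #[code nlp] object. As a minimal, self-contained sketch
    | (assuming an English model is installed and loadable as #[code 'en']),
    | the tokens shown in the table can be produced like this:

+code.
    import spacy

    # load the English pipeline and process the example sentence
    nlp = spacy.load('en')
    doc = nlp(u'Apple is looking at buying U.K. startup for $1 billion')

    # print each token's text, one per line
    for token in doc:
        print(token.text)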