spaCy/website/api/_annotation/_text-processing.jade


//- 💫 DOCS > API > ANNOTATION > TEXT PROCESSING
+aside-code("Example").
    from spacy.lang.en import English
    nlp = English()
    tokens = nlp('Some\nspaces  and\ttab characters')
    tokens_text = [t.text for t in tokens]
    assert tokens_text == ['Some', '\n', 'spaces', ' ', 'and',
        '\t', 'tab', 'characters']
p
    | Tokenization standards are based on the
    | #[+a("https://catalog.ldc.upenn.edu/LDC2013T19") OntoNotes 5] corpus.
    | The tokenizer differs from most by including
    | #[strong tokens for significant whitespace]. Any sequence of
    | whitespace characters beyond a single space (#[code ' ']) is included
    | as a token. The whitespace tokens are useful for much the same reason
    | punctuation is: it's often an important delimiter in the text. By
    | preserving it in the token output, we are able to maintain a simple
    | alignment between the tokens and the original string, and we ensure
    | that #[strong no information is lost] during processing.
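
p
    | As a quick check that no information is lost, joining each token's
    | #[code text_with_ws] attribute reconstructs the input string exactly,
    | whitespace and all:

+aside-code("Example").
    doc = nlp('Some\nspaces  and\ttab characters')
    # Each token's text plus its trailing whitespace round-trips
    # the exact original input.
    assert ''.join(t.text_with_ws for t in doc) == doc.text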
+h(3, "lemmatization") Lemmatization
+aside("Examples")
    | In English, this means:#[br]
    | #[strong Adjectives]: happier, happiest → happy#[br]
    | #[strong Adverbs]: worse, worst → badly#[br]
    | #[strong Nouns]: dogs, children → dog, child#[br]
    | #[strong Verbs]: writes, writing, wrote, written → write
p
    | A lemma is the uninflected form of a word. The English lemmatization
    | data is taken from #[+a("https://wordnet.princeton.edu") WordNet].
    | Lookup tables are taken from
    | #[+a("http://www.lexiconista.com/datasets/lemmatization/") Lexiconista].
    | spaCy also adds a #[strong special case for pronouns]: all pronouns
    | are lemmatized to the special token #[code -PRON-].
+infobox("About spaCy's custom pronoun lemma", "⚠️")
    | Unlike verbs and common nouns, there's no clear base form of a personal
    | pronoun. Should the lemma of "me" be "I", or should we normalize person
    | as well, giving "it" — or maybe "he"? spaCy's solution is to introduce a
    | novel symbol, #[code -PRON-], which is used as the lemma for
    | all personal pronouns.
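
p
    | A minimal sketch of the lemmas in practice, assuming the
    | #[code en_core_web_sm] model is installed (the rule-based lemmatizer
    | and the pronoun special case rely on its part-of-speech tags):

+aside-code("Example").
    import spacy

    nlp = spacy.load('en_core_web_sm')  # assumes this model is installed
    doc = nlp(u'I was writing poems')
    print([(t.text, t.lemma_) for t in doc])
    # e.g. [('I', '-PRON-'), ('was', 'be'), ('writing', 'write'),
    #       ('poems', 'poem')]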
+h(3, "sentence-boundary") Sentence boundary detection
p
    | Sentence boundaries are calculated from the syntactic parse tree, so
    | features such as punctuation and capitalisation play an important but
    | non-decisive role in determining the sentence boundaries. Usually this
    | means that the sentence boundaries will at least coincide with clause
    | boundaries, even given poorly punctuated text.
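
p
    | A minimal sketch, again assuming the #[code en_core_web_sm] model is
    | installed (its dependency parser supplies the sentence boundaries
    | exposed via #[code Doc.sents]):

+aside-code("Example").
    import spacy

    nlp = spacy.load('en_core_web_sm')  # assumes this model is installed
    doc = nlp(u'This is a sentence. This is another one.')
    # Sentence spans are derived from the parse, not just punctuation.
    assert [s.text for s in doc.sents] == ['This is a sentence.',
        'This is another one.']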