//- 💫 DOCS > USAGE > SPACY 101 > VOCAB & STRINGSTORE
p
    |  Whenever possible, spaCy tries to store data in a vocabulary, the
    |  #[+api("vocab") #[code Vocab]], that will be
    |  #[strong shared by multiple documents]. To save memory, spaCy also
    |  encodes all strings to #[strong hash values] – in this case for example,
    |  "coffee" has the hash #[code 3197928453018144401]. Entity labels like
    |  "ORG" and part-of-speech tags like "VERB" are also encoded. Internally,
    |  spaCy only "speaks" in hash values.
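p
    |  You can see this encoding at work on any token – a minimal sketch,
    |  assuming a model is loaded as #[code nlp]: attributes like
    |  #[code token.pos] return the encoded value, while the underscore
    |  variants like #[code token.pos_] resolve it back to the string.

+code.
    doc = nlp(u'I like coffee')
    for token in doc:
        # token.pos is the encoded part-of-speech tag, token.pos_ the string
        print(token.text, token.pos, token.pos_)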
+aside
    |  #[strong Token]: A word, punctuation mark etc. #[em in context], including
    |  its attributes, tags and dependencies.#[br]
    |  #[strong Lexeme]: A "word type" with no context. Includes the word shape
    |  and flags, e.g. if it's lowercase, a digit or punctuation.#[br]
    |  #[strong Doc]: A processed container of tokens in context.#[br]
    |  #[strong Vocab]: The collection of lexemes.#[br]
    |  #[strong StringStore]: The dictionary mapping hash values to strings, for
    |  example #[code 3197928453018144401] → "coffee".
+graphic("/assets/img/vocab_stringstore.svg")
    include ../../assets/img/vocab_stringstore.svg
p
    |  If you process lots of documents containing the word "coffee" in all
    |  kinds of different contexts, storing the exact string "coffee" every time
    |  would take up way too much space. So instead, spaCy hashes the string
    |  and stores it in the #[+api("stringstore") #[code StringStore]]. You can
    |  think of the #[code StringStore] as a
    |  #[strong lookup table that works in both directions] – you can look up a
    |  string to get its hash, or a hash to get its string:
+code.
    doc = nlp(u'I like coffee')
    assert doc.vocab.strings[u'coffee'] == 3197928453018144401
    assert doc.vocab.strings[3197928453018144401] == u'coffee'
+aside("What does 'L' at the end of a hash mean?")
    |  If you return a hash value in the #[strong Python 2 interpreter], it'll
    |  show up as #[code 3197928453018144401L]. The #[code L] just means "long
    |  integer" – it's #[strong not] actually a part of the hash value.
p
    |  Now that all strings are encoded, the entries in the vocabulary
    |  #[strong don't need to include the word text] themselves. Instead,
    |  they can look it up in the #[code StringStore] via its hash value. Each
    |  entry in the vocabulary, also called #[+api("lexeme") #[code Lexeme]],
    |  contains the #[strong context-independent] information about a word.
    |  For example, no matter if "love" is used as a verb or a noun in some
    |  context, its spelling and whether it consists of alphabetic characters
    |  won't ever change. Its hash value will also always be the same.
+code.
    for word in doc:
        lexeme = doc.vocab[word.text]
        print(lexeme.text, lexeme.orth, lexeme.shape_, lexeme.prefix_, lexeme.suffix_,
              lexeme.is_alpha, lexeme.is_digit, lexeme.is_title, lexeme.lang_)
+aside
    |  #[strong Text]: The original text of the lexeme.#[br]
    |  #[strong Orth]: The hash value of the lexeme.#[br]
    |  #[strong Shape]: The abstract word shape of the lexeme.#[br]
    |  #[strong Prefix]: By default, the first letter of the word string.#[br]
    |  #[strong Suffix]: By default, the last three letters of the word string.#[br]
    |  #[strong is alpha]: Does the lexeme consist of alphabetic characters?#[br]
    |  #[strong is digit]: Does the lexeme consist of digits?#[br]
+table(["text", "orth", "shape", "prefix", "suffix", "is_alpha", "is_digit"])
    - var style = [0, 1, 1, 0, 0, 1, 1]
    +annotation-row(["I", "4690420944186131903", "X", "I", "I", true, false], style)
    +annotation-row(["love", "3702023516439754181", "xxxx", "l", "ove", true, false], style)
    +annotation-row(["coffee", "3197928453018144401", "xxxx", "c", "ffe", true, false], style)
p
    |  The mapping of words to hashes doesn't depend on any state. To make sure
    |  each value is unique, spaCy uses a
    |  #[+a("https://en.wikipedia.org/wiki/Hash_function") hash function] to
    |  calculate the hash #[strong based on the word string]. This also means
    |  that the hash for "coffee" will always be the same, no matter which model
    |  you're using or how you've configured spaCy.
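p
    |  A minimal sketch of this statelessness, assuming spaCy v2's
    |  #[code StringStore.add], which returns the hash it computes: two
    |  completely separate vocabularies produce the same hash for the same
    |  string, because the hash is derived from the string alone.

+code.
    from spacy.vocab import Vocab

    # the hash is computed from the string itself, not from any stored state
    assert Vocab().strings.add(u'coffee') == Vocab().strings.add(u'coffee')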
p
    |  However, hashes #[strong cannot be reversed] and there's no way to
    |  resolve #[code 3197928453018144401] back to "coffee". All spaCy can do
    |  is look it up in the vocabulary. That's why you always need to make
    |  sure all objects you create have access to the same vocabulary. If they
    |  don't, spaCy might not be able to find the strings it needs.
+code.
    from spacy.tokens import Doc
    from spacy.vocab import Vocab

    doc = nlp(u'I like coffee')  # original Doc
    assert doc.vocab.strings[u'coffee'] == 3197928453018144401  # get hash
    assert doc.vocab.strings[3197928453018144401] == u'coffee'  # 👍

    empty_doc = Doc(Vocab())  # new Doc with empty Vocab
    # empty_doc.vocab.strings[3197928453018144401] will raise an error :(

    empty_doc.vocab.strings.add(u'coffee')  # add "coffee" and generate hash
    assert empty_doc.vocab.strings[3197928453018144401] == u'coffee'  # 👍

    new_doc = Doc(doc.vocab)  # create new doc with first doc's vocab
    assert new_doc.vocab.strings[3197928453018144401] == u'coffee'  # 👍
p
    |  If the vocabulary doesn't contain a string for #[code 3197928453018144401],
    |  spaCy will raise an error. You can re-add "coffee" manually, but this
    |  only works if you actually #[em know] that the document contains that
    |  word. To prevent this problem, spaCy will also export the #[code Vocab]
    |  when you save a #[code Doc] or #[code nlp] object. This will give you
    |  the object and its encoded annotations, plus the "key" to decode it.
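p
    |  For example – a minimal sketch, assuming spaCy v2's serialization API
    |  and a placeholder path of your choice:

+code.
    import spacy

    doc = nlp(u'I like coffee')       # "coffee" is now in nlp.vocab.strings
    nlp.to_disk('/path/to/pipeline')  # saving the nlp object exports its Vocab
    nlp_restored = spacy.load('/path/to/pipeline')

    # the restored StringStore still holds the "key" to decode the hash
    assert nlp_restored.vocab.strings[3197928453018144401] == u'coffee'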