Use returns/yields instead of return/yield

ines 2017-05-19 00:02:34 +02:00
parent 0fc05e54e4
commit 5b68579eb8
15 changed files with 127 additions and 127 deletions
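The relabelling makes the table headers agree with Python's own semantics: a method documented as "returns" hands back a single value, while one documented as "yields" is a generator that produces a stream of values. A minimal sketch of that distinction (illustrative only, not code from this commit):

```python
def token_count(tokens):
    # A plain function: its doc table row would read "returns | int".
    return len(tokens)

def iter_tokens(tokens):
    # A generator: its doc table row would read "yields | unicode",
    # producing one item per iteration instead of a single value.
    for token in tokens:
        yield token

print(token_count(["This", "is", "a", "test"]))  # 4
print(list(iter_tokens(["This", "is"])))         # ['This', 'is']
```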

@@ -26,7 +26,7 @@ p Load the statistical model from the supplied path.
 +cell Whether to raise an error if the files are not found.
 +footrow
-+cell return
++cell returns
 +cell #[code DependencyParser]
 +cell The newly constructed object.
@@ -47,7 +47,7 @@ p Create a #[code DependencyParser].
 +cell The statistical model.
 +footrow
-+cell return
++cell returns
 +cell #[code DependencyParser]
 +cell The newly constructed object.
@@ -65,7 +65,7 @@ p
 +cell The document to be processed.
 +footrow
-+cell return
++cell returns
 +cell #[code None]
 +cell -
@@ -93,7 +93,7 @@ p Process a stream of documents.
 | parallel.
 +footrow
-+cell yield
++cell yields
 +cell #[code Doc]
 +cell Documents, in order.
@@ -114,7 +114,7 @@ p Update the statistical model.
 +cell The gold-standard annotations, to calculate the loss.
 +footrow
-+cell return
++cell returns
 +cell int
 +cell The loss on this example.
@@ -130,6 +130,6 @@ p Set up a stepwise state, to introspect and control the transition sequence.
 +cell The document to step through.
 +footrow
-+cell return
++cell returns
 +cell #[code StepwiseState]
 +cell A state object, to step through the annotation process.

@@ -112,7 +112,7 @@ p Render a dependency parse tree or named entity visualization.
 +cell #[code {}]
 +footrow
-+cell return
++cell returns
 +cell unicode
 +cell Rendered HTML markup.
 +cell

@@ -48,7 +48,7 @@ p
 | specified. Defaults to a sequence of #[code True].
 +footrow
-+cell return
++cell returns
 +cell #[code Doc]
 +cell The newly constructed object.
@@ -74,7 +74,7 @@ p
 +cell The index of the token.
 +footrow
-+cell return
++cell returns
 +cell #[code Token]
 +cell The token at #[code doc[i]].
@@ -97,7 +97,7 @@ p
 +cell The slice of the document to get.
 +footrow
-+cell return
++cell returns
 +cell #[code Span]
 +cell The span at #[code doc[start : end]].
@@ -122,7 +122,7 @@ p
 +table(["Name", "Type", "Description"])
 +footrow
-+cell yield
++cell yields
 +cell #[code Token]
 +cell A #[code Token] object.
@@ -137,7 +137,7 @@ p Get the number of tokens in the document.
 +table(["Name", "Type", "Description"])
 +footrow
-+cell return
++cell returns
 +cell int
 +cell The number of tokens in the document.
@@ -164,7 +164,7 @@ p
 | #[code Span], #[code Token] and #[code Lexeme] objects.
 +footrow
-+cell return
++cell returns
 +cell float
 +cell A scalar similarity score. Higher is more similar.
@@ -191,7 +191,7 @@ p
 +cell The attribute ID
 +footrow
-+cell return
++cell returns
 +cell dict
 +cell A dictionary mapping attributes to integer counts.
@@ -216,7 +216,7 @@ p
 +cell A list of attribute ID ints.
 +footrow
-+cell return
++cell returns
 +cell #[code numpy.ndarray[ndim=2, dtype='int32']]
 +cell
 | The exported attributes as a 2D numpy array, with one row per
@@ -249,7 +249,7 @@ p
 +cell The attribute values to load.
 +footrow
-+cell return
++cell returns
 +cell #[code Doc]
 +cell Itself.
@@ -264,7 +264,7 @@ p Serialize, i.e. export the document contents to a binary string.
 +table(["Name", "Type", "Description"])
 +footrow
-+cell return
++cell returns
 +cell bytes
 +cell
 | A losslessly serialized copy of the #[code Doc], including all
@@ -290,7 +290,7 @@ p Deserialize, i.e. import the document contents from a binary string.
 +cell The string to load from.
 +footrow
-+cell return
++cell returns
 +cell #[code Doc]
 +cell Itself.
@@ -329,7 +329,7 @@ p
 | the span.
 +footrow
-+cell return
++cell returns
 +cell #[code Token]
 +cell
 | The newly merged token, or #[code None] if the start and end
@@ -364,7 +364,7 @@ p
 +cell Don't include arcs or modifiers.
 +footrow
-+cell return
++cell returns
 +cell dict
 +cell Parse tree as dict.
@@ -380,7 +380,7 @@ p A unicode representation of the document text.
 +table(["Name", "Type", "Description"])
 +footrow
-+cell return
++cell returns
 +cell unicode
 +cell The original verbatim text of the document.
@@ -393,7 +393,7 @@ p
 +table(["Name", "Type", "Description"])
 +footrow
-+cell return
++cell returns
 +cell unicode
 +cell The original verbatim text of the document.
@@ -415,7 +415,7 @@ p
 +table(["Name", "Type", "Description"])
 +footrow
-+cell yield
++cell yields
 +cell #[code Span]
 +cell Entities in the document.
@@ -438,7 +438,7 @@ p
 +table(["Name", "Type", "Description"])
 +footrow
-+cell yield
++cell yields
 +cell #[code Span]
 +cell Noun chunks in the document.
@@ -460,7 +460,7 @@ p
 +table(["Name", "Type", "Description"])
 +footrow
-+cell yield
++cell yields
 +cell #[code Span]
 +cell Sentences in the document.
@@ -478,7 +478,7 @@ p
 +table(["Name", "Type", "Description"])
 +footrow
-+cell return
++cell returns
 +cell bool
 +cell Whether the document has a vector data attached.
@@ -497,7 +497,7 @@ p
 +table(["Name", "Type", "Description"])
 +footrow
-+cell return
++cell returns
 +cell #[code numpy.ndarray[ndim=1, dtype='float32']]
 +cell A 1D numpy array representing the document's semantics.
@@ -510,7 +510,7 @@ p
 +table(["Name", "Type", "Description"])
 +footrow
-+cell return
++cell returns
 +cell float
 +cell The L2 norm of the vector representation.

@@ -26,7 +26,7 @@ p Load the statistical model from the supplied path.
 +cell Whether to raise an error if the files are not found.
 +footrow
-+cell return
++cell returns
 +cell #[code EntityRecognizer]
 +cell The newly constructed object.
@@ -47,7 +47,7 @@ p Create an #[code EntityRecognizer].
 +cell The statistical model.
 +footrow
-+cell return
++cell returns
 +cell #[code EntityRecognizer]
 +cell The newly constructed object.
@@ -63,7 +63,7 @@ p Apply the entity recognizer, setting the NER tags onto the #[code Doc] object.
 +cell The document to be processed.
 +footrow
-+cell return
++cell returns
 +cell #[code None]
 +cell -
@@ -91,7 +91,7 @@ p Process a stream of documents.
 | parallel.
 +footrow
-+cell yield
++cell yields
 +cell #[code Doc]
 +cell Documents, in order.
@@ -112,7 +112,7 @@ p Update the statistical model.
 +cell The gold-standard annotations, to calculate the loss.
 +footrow
-+cell return
++cell returns
 +cell int
 +cell The loss on this example.
@@ -128,6 +128,6 @@ p Set up a stepwise state, to introspect and control the transition sequence.
 +cell The document to step through.
 +footrow
-+cell return
++cell returns
 +cell #[code StepwiseState]
 +cell A state object, to step through the annotation process.

@@ -74,7 +74,7 @@ p Create a GoldParse.
 +cell A sequence of named entity annotations, either as BILUO tag strings, or as #[code (start_char, end_char, label)] tuples, representing the entity positions.
 +footrow
-+cell return
++cell returns
 +cell #[code GoldParse]
 +cell The newly constructed object.
@@ -85,7 +85,7 @@ p Get the number of gold-standard tokens.
 +table(["Name", "Type", "Description"])
 +footrow
-+cell return
++cell returns
 +cell int
 +cell The number of gold-standard tokens.
@@ -98,6 +98,6 @@ p
 +table(["Name", "Type", "Description"])
 +footrow
-+cell return
++cell returns
 +cell bool
 +cell Whether annotations form projective tree.

@@ -50,7 +50,7 @@ p Initialise a #[code Language] object.
 | models to add model meta data.
 +footrow
-+cell return
++cell returns
 +cell #[code Language]
 +cell The newly constructed object.
@@ -79,7 +79,7 @@ p
 +cell Elements of the pipeline that should not be run.
 +footrow
-+cell return
++cell returns
 +cell #[code Doc]
 +cell A container for accessing the annotations.
@@ -116,7 +116,7 @@ p Update the models in the pipeline.
 +cell An optimizer.
 +footrow
-+cell return
++cell returns
 +cell dict
 +cell Results from the update.
@@ -145,7 +145,7 @@ p
 +cell Config parameters.
 +footrow
-+cell yield
++cell yields
 +cell tuple
 +cell A trainer and an optimizer.
@@ -204,7 +204,7 @@ p
 +cell The number of texts to buffer.
 +footrow
-+cell yield
++cell yields
 +cell #[code Doc]
 +cell Documents in the order of the original text.
@@ -252,7 +252,7 @@ p Loads state from a directory. Modifies the object in place and returns it.
 +cell Named attributes to prevent from being loaded.
 +footrow
-+cell return
++cell returns
 +cell #[code Language]
 +cell The modified #[code Language] object.
@@ -271,7 +271,7 @@ p Serialize the current state to a binary string.
 +cell Named attributes to prevent from being serialized.
 +footrow
-+cell return
++cell returns
 +cell bytes
 +cell The serialized form of the #[code Language] object.
@@ -298,7 +298,7 @@ p Load state from a binary string.
 +cell Named attributes to prevent from being loaded.
 +footrow
-+cell return
++cell returns
 +cell bytes
 +cell The serialized form of the #[code Language] object.

@@ -157,7 +157,7 @@ p Create a #[code Lexeme] object.
 +cell The orth id of the lexeme.
 +footrow
-+cell return
++cell returns
 +cell #[code Lexeme]
 +cell The newly constructed object.
@@ -178,7 +178,7 @@ p Change the value of a boolean flag.
 +cell The new value of the flag.
 +footrow
-+cell return
++cell returns
 +cell #[code None]
 +cell -
@@ -194,7 +194,7 @@ p Check the value of a boolean flag.
 +cell The attribute ID of the flag to query.
 +footrow
-+cell return
++cell returns
 +cell bool
 +cell The value of the flag.
@@ -212,7 +212,7 @@ p Compute a semantic similarity estimate. Defaults to cosine over vectors.
 | #[code Span], #[code Token] and #[code Lexeme] objects.
 +footrow
-+cell return
++cell returns
 +cell float
 +cell A scalar similarity score. Higher is more similar.
@@ -223,7 +223,7 @@ p A real-valued meaning representation.
 +table(["Name", "Type", "Description"])
 +footrow
-+cell return
++cell returns
 +cell #[code numpy.ndarray[ndim=1, dtype='float32']]
 +cell A real-valued meaning representation.
@@ -234,6 +234,6 @@ p A boolean value indicating whether a word vector is associated with the object
 +table(["Name", "Type", "Description"])
 +footrow
-+cell return
++cell returns
 +cell bool
 +cell Whether a word vector is associated with the object.

@@ -21,7 +21,7 @@ p Load the matcher and patterns from a file path.
 +cell The vocabulary that the documents to match over will refer to.
 +footrow
-+cell return
++cell returns
 +cell #[code Matcher]
 +cell The newly constructed object.
@@ -44,7 +44,7 @@ p Create the Matcher.
 +cell Patterns to add to the matcher.
 +footrow
-+cell return
++cell returns
 +cell #[code Matcher]
 +cell The newly constructed object.
@@ -60,7 +60,7 @@ p Find all token sequences matching the supplied patterns on the Doc.
 +cell The document to match over.
 +footrow
-+cell return
++cell returns
 +cell list
 +cell
 | A list of#[code (entity_key, label_id, start, end)] tuples,
@@ -93,7 +93,7 @@ p Match a stream of documents, yielding them in turn.
 | multi-threading.
 +footrow
-+cell yield
++cell yields
 +cell #[code Doc]
 +cell Documents, in order.
@@ -132,7 +132,7 @@ p Add an entity to the matcher.
 +cell Callback function to act on matches of the entity.
 +footrow
-+cell return
++cell returns
 +cell #[code None]
 +cell -
@@ -158,7 +158,7 @@ p Add a pattern to the matcher.
 +cell Label to assign to the matched pattern. Defaults to #[code ""].
 +footrow
-+cell return
++cell returns
 +cell #[code None]
 +cell -
@@ -174,6 +174,6 @@ p Check whether the matcher has an entity.
 +cell The entity key to check.
 +footrow
-+cell return
++cell returns
 +cell bool
 +cell Whether the matcher has the entity.

@@ -89,7 +89,7 @@ p Create a Span object from the #[code slice doc[start : end]].
 +cell A meaning representation of the span.
 +footrow
-+cell return
++cell returns
 +cell #[code Span]
 +cell The newly constructed object.
@@ -105,7 +105,7 @@ p Get a #[code Token] object.
 +cell The index of the token within the span.
 +footrow
-+cell return
++cell returns
 +cell #[code Token]
 +cell The token at #[code span[i]].
@@ -118,7 +118,7 @@ p Get a #[code Span] object.
 +cell The slice of the span to get.
 +footrow
-+cell return
++cell returns
 +cell #[code Span]
 +cell The span at #[code span[start : end]].
@@ -129,7 +129,7 @@ p Iterate over #[code Token] objects.
 +table(["Name", "Type", "Description"])
 +footrow
-+cell yield
++cell yields
 +cell #[code Token]
 +cell A #[code Token] object.
@@ -140,7 +140,7 @@ p Get the number of tokens in the span.
 +table(["Name", "Type", "Description"])
 +footrow
-+cell return
++cell returns
 +cell int
 +cell The number of tokens in the span.
@@ -160,7 +160,7 @@ p
 | #[code Span], #[code Token] and #[code Lexeme] objects.
 +footrow
-+cell return
++cell returns
 +cell float
 +cell A scalar similarity score. Higher is more similar.
@@ -178,7 +178,7 @@ p Retokenize the document, such that the span is merged into a single token.
 | are inherited from the syntactic root token of the span.
 +footrow
-+cell return
++cell returns
 +cell #[code Token]
 +cell The newly merged token.
@@ -189,7 +189,7 @@ p A unicode representation of the span text.
 +table(["Name", "Type", "Description"])
 +footrow
-+cell return
++cell returns
 +cell unicode
 +cell The original verbatim text of the span.
@@ -202,7 +202,7 @@ p
 +table(["Name", "Type", "Description"])
 +footrow
-+cell return
++cell returns
 +cell unicode
 +cell The text content of the span (with trailing whitespace).
@@ -213,7 +213,7 @@ p The sentence span that this span is a part of.
 +table(["Name", "Type", "Description"])
 +footrow
-+cell return
++cell returns
 +cell #[code Span]
 +cell The sentence this is part of.
@@ -226,7 +226,7 @@ p
 +table(["Name", "Type", "Description"])
 +footrow
-+cell return
++cell returns
 +cell #[code Token]
 +cell The root token.
@@ -237,7 +237,7 @@ p Tokens that are to the left of the span, whose head is within the span.
 +table(["Name", "Type", "Description"])
 +footrow
-+cell yield
++cell yields
 +cell #[code Token]
 +cell A left-child of a token of the span.
@@ -248,7 +248,7 @@ p Tokens that are to the right of the span, whose head is within the span.
 +table(["Name", "Type", "Description"])
 +footrow
-+cell yield
++cell yields
 +cell #[code Token]
 +cell A right-child of a token of the span.
@@ -259,6 +259,6 @@ p Tokens that descend from tokens in the span, but fall outside it.
 +table(["Name", "Type", "Description"])
 +footrow
-+cell yield
++cell yields
 +cell #[code Token]
 +cell A descendant of a token within the span.

@@ -16,7 +16,7 @@ p Create the #[code StringStore].
 +cell A sequence of unicode strings to add to the store.
 +footrow
-+cell return
++cell returns
 +cell #[code StringStore]
 +cell The newly constructed object.
@@ -27,7 +27,7 @@ p Get the number of strings in the store.
 +table(["Name", "Type", "Description"])
 +footrow
-+cell return
++cell returns
 +cell int
 +cell The number of strings in the store.
@@ -43,7 +43,7 @@ p Retrieve a string from a given integer ID, or vice versa.
 +cell The value to encode.
 +footrow
-+cell return
++cell returns
 +cell unicode / int
 +cell The value to retrieved.
@@ -59,7 +59,7 @@ p Check whether a string is in the store.
 +cell The string to check.
 +footrow
-+cell return
++cell returns
 +cell bool
 +cell Whether the store contains the string.
@@ -70,7 +70,7 @@ p Iterate over the strings in the store, in order.
 +table(["Name", "Type", "Description"])
 +footrow
-+cell yield
++cell yields
 +cell unicode
 +cell A string in the store.
@@ -86,7 +86,7 @@ p Save the strings to a JSON file.
 +cell The file to save the strings.
 +footrow
-+cell return
++cell returns
 +cell #[code None]
 +cell -
@@ -102,6 +102,6 @@ p Load the strings from a JSON file.
 +cell The file from which to load the strings.
 +footrow
-+cell return
++cell returns
 +cell #[code None]
 +cell -

@@ -26,7 +26,7 @@ p Load the statistical model from the supplied path.
 +cell Whether to raise an error if the files are not found.
 +footrow
-+cell return
++cell returns
 +cell #[code Tagger]
 +cell The newly constructed object.
@@ -47,7 +47,7 @@ p Create a #[code Tagger].
 +cell The statistical model.
 +footrow
-+cell return
++cell returns
 +cell #[code Tagger]
 +cell The newly constructed object.
@@ -63,7 +63,7 @@ p Apply the tagger, setting the POS tags onto the #[code Doc] object.
 +cell The tokens to be tagged.
 +footrow
-+cell return
++cell returns
 +cell #[code None]
 +cell -
@@ -91,7 +91,7 @@ p Tag a stream of documents.
 | parallel.
 +footrow
-+cell yield
++cell yields
 +cell #[code Doc]
 +cell Documents, in order.
@@ -112,6 +112,6 @@ p Update the statistical model, with tags supplied for the given document.
 +cell Manager for the gold-standard tags.
 +footrow
-+cell return
++cell returns
 +cell int
 +cell Number of tags predicted correctly.

@@ -271,7 +271,7 @@ p Construct a #[code Token] object.
 +cell The index of the token within the document.
 +footrow
-+cell return
++cell returns
 +cell #[code Token]
 +cell The newly constructed object.
@@ -282,7 +282,7 @@ p Get the number of unicode characters in the token.
 +table(["Name", "Type", "Description"])
 +footrow
-+cell return
++cell returns
 +cell int
 +cell The number of unicode characters in the token.
@@ -299,7 +299,7 @@ p Check the value of a boolean flag.
 +cell The attribute ID of the flag to check.
 +footrow
-+cell return
++cell returns
 +cell bool
 +cell Whether the flag is set.
@@ -315,7 +315,7 @@ p Get a neighboring token.
 +cell The relative position of the token to get. Defaults to #[code 1].
 +footrow
-+cell return
++cell returns
 +cell #[code Token]
 +cell The token at position #[code self.doc[self.i+i]]
@@ -333,7 +333,7 @@ p Compute a semantic similarity estimate. Defaults to cosine over vectors.
 | #[code Span], #[code Token] and #[code Lexeme] objects.
 +footrow
-+cell return
++cell returns
 +cell float
 +cell A scalar similarity score. Higher is more similar.
@@ -351,7 +351,7 @@ p
 +cell Another token.
 +footrow
-+cell return
++cell returns
 +cell bool
 +cell Whether this token is the ancestor of the descendant.
@@ -363,7 +363,7 @@ p A real-valued meaning representation.
 +table(["Name", "Type", "Description"])
 +footrow
-+cell return
++cell returns
 +cell #[code numpy.ndarray[ndim=1, dtype='float32']]
 +cell A 1D numpy array representing the token's semantics.
@@ -376,7 +376,7 @@ p
 +table(["Name", "Type", "Description"])
 +footrow
-+cell return
++cell returns
 +cell bool
 +cell Whether the token has a vector data attached.
@@ -387,7 +387,7 @@ p The syntactic parent, or "governor", of this token.
 +table(["Name", "Type", "Description"])
 +footrow
-+cell return
++cell returns
 +cell #[code Token]
 +cell The head.
@@ -398,7 +398,7 @@ p A sequence of coordinated tokens, including the token itself.
 +table(["Name", "Type", "Description"])
 +footrow
-+cell yield
++cell yields
 +cell #[code Token]
 +cell A coordinated token.
@@ -409,7 +409,7 @@ p A sequence of the token's immediate syntactic children.
 +table(["Name", "Type", "Description"])
 +footrow
-+cell yield
++cell yields
 +cell #[code Token]
 +cell A child token such that #[code child.head==self].
@@ -420,7 +420,7 @@ p A sequence of all the token's syntactic descendents.
 +table(["Name", "Type", "Description"])
 +footrow
-+cell yield
++cell yields
 +cell #[code Token]
 +cell A descendant token such that #[code self.is_ancestor(descendant)].
@@ -431,7 +431,7 @@ p The leftmost token of this token's syntactic descendants.
 +table(["Name", "Type", "Description"])
 +footrow
-+cell return
++cell returns
 +cell #[code Token]
 +cell The first token such that #[code self.is_ancestor(token)].
@@ -442,7 +442,7 @@ p The rightmost token of this token's syntactic descendents.
 +table(["Name", "Type", "Description"])
 +footrow
-+cell return
++cell returns
 +cell #[code Token]
 +cell The last token such that #[code self.is_ancestor(token)].
@@ -453,7 +453,7 @@ p The rightmost token of this token's syntactic descendants.
 +table(["Name", "Type", "Description"])
 +footrow
-+cell yield
++cell yields
 +cell #[code Token]
 +cell
 | A sequence of ancestor tokens such that

@@ -79,7 +79,7 @@ p Load a #[code Tokenizer], reading unsupplied components from the path.
 | #[code re.compile(string).finditer] to find infixes.
 +footrow
-+cell return
++cell returns
 +cell #[code Tokenizer]
 +cell The newly constructed object.
@@ -121,7 +121,7 @@ p Create a #[code Tokenizer], to create #[code Doc] objects given unicode text.
 | #[code re.compile(string).finditer] to find infixes.
 +footrow
-+cell return
++cell returns
 +cell #[code Tokenizer]
 +cell The newly constructed object.
@@ -137,7 +137,7 @@ p Tokenize a string.
 +cell The string to tokenize.
 +footrow
-+cell return
++cell returns
 +cell #[code Doc]
 +cell A container for linguistic annotations.
@@ -165,7 +165,7 @@ p Tokenize a stream of texts.
 | multi-threading. The default tokenizer is single-threaded.
 +footrow
-+cell yield
++cell yields
 +cell #[code Doc]
 +cell A sequence of Doc objects, in order.
@@ -181,7 +181,7 @@ p Find internal split points of the string.
 +cell The string to split.
 +footrow
-+cell return
++cell returns
 +cell #[code List[re.MatchObject]]
 +cell
 | A list of objects that have #[code .start()] and #[code .end()]
@@ -202,7 +202,7 @@ p
 +cell The string to segment.
 +footrow
-+cell return
++cell returns
 +cell int / #[code None]
 +cell The length of the prefix if present, otherwise #[code None].
@@ -220,7 +220,7 @@ p
 +cell The string to segment.
 +footrow
-+cell return
++cell returns
 +cell int / #[code None]
 +cell The length of the suffix if present, otherwise #[code None].
@@ -244,6 +244,6 @@ p Add a special-case tokenization rule.
 | exactly match the string when they are concatenated.
 +footrow
-+cell return
++cell returns
 +cell #[code None]
 +cell -

@@ -28,7 +28,7 @@ p
 +cell Only return path if it exists, otherwise return #[code None].
 +footrow
-+cell return
++cell returns
 +cell #[code Path] / #[code None]
 +cell Data path or #[code None].
@@ -70,7 +70,7 @@ p
 +cell Two-letter language code, e.g. #[code 'en'].
 +footrow
-+cell return
++cell returns
 +cell #[code Language]
 +cell Language class.
@@ -90,7 +90,7 @@ p Resolve a model name or string to a model path.
 +cell Package name, shortcut link or model path.
 +footrow
-+cell return
++cell returns
 +cell #[code Path]
 +cell Path to model data directory.
@@ -112,7 +112,7 @@ p
 +cell Name of package.
 +footrow
-+cell return
++cell returns
 +cell #[code bool]
 +cell #[code True] if installed package, #[code False] if not.
@@ -134,7 +134,7 @@ p
 +cell Name of installed package.
 +footrow
-+cell return
++cell returns
 +cell #[code Path]
 +cell Path to model data directory.
@@ -163,7 +163,7 @@ p
 +cell If #[code True], raise error if no #[code meta.json] is found.
 +footrow
-+cell return
++cell returns
 +cell dict / #[code None]
 +cell Model meta data or #[code None].
@@ -194,7 +194,7 @@ p
 +cell Exception dictionaries to add to the base exceptions, in order.
 +footrow
-+cell return
++cell returns
 +cell dict
 +cell Combined tokenizer exceptions.

@@ -56,7 +56,7 @@ p Load the vocabulary from a path.
 +cell The default probability for out-of-vocabulary words.
 +footrow
-+cell return
++cell returns
 +cell #[code Vocab]
 +cell The newly constructed object.
@@ -91,7 +91,7 @@ p Create the vocabulary.
 +cell The default probability for out-of-vocabulary words.
 +footrow
-+cell return
++cell returns
 +cell #[code Vocab]
 +cell The newly constructed object.
@@ -102,7 +102,7 @@ p Get the number of lexemes in the vocabulary.
 +table(["Name", "Type", "Description"])
 +footrow
-+cell return
++cell returns
 +cell int
 +cell The number of lexems in the vocabulary.
@@ -120,7 +120,7 @@ p
 +cell The integer ID of a word, or its unicode string.
 +footrow
-+cell return
++cell returns
 +cell #[code Lexeme]
 +cell The lexeme indicated by the given ID.
@@ -131,7 +131,7 @@ p Iterate over the lexemes in the vocabulary.
 +table(["Name", "Type", "Description"])
 +footrow
-+cell yield
++cell yields
 +cell #[code Lexeme]
 +cell An entry in the vocabulary.
@@ -147,7 +147,7 @@ p Check whether the string has an entry in the vocabulary.
 +cell The ID string.
 +footrow
-+cell return
++cell returns
 +cell bool
 +cell Whether the string has an entry in the vocabulary.
@@ -165,7 +165,7 @@ p
 +cell The new size of the vectors.
 +footrow
-+cell return
++cell returns
 +cell #[code None]
 +cell -
@@ -189,7 +189,7 @@ p Set a new boolean flag to words in the vocabulary.
 | available bit will be chosen.
 +footrow
-+cell return
++cell returns
 +cell int
 +cell The integer ID by which the flag value can be checked.
@@ -205,7 +205,7 @@ p Save the lexemes binary data to the given location.
 +cell The path to load from.
 +footrow
-+cell return
++cell returns
 +cell #[code None]
 +cell -
@@ -221,7 +221,7 @@ p
 +cell Path to load the lexemes.bin file from.
 +footrow
-+cell return
++cell returns
 +cell #[code None]
 +cell -
@@ -237,7 +237,7 @@ p Save the word vectors to a binary file.
 +cell The path to save to.
 +footrow
-+cell return
++cell returns
 +cell #[code None]
 +cell -
@@ -257,7 +257,7 @@ p Load vectors from a text-based file.
 | should be the values of the vector.
 +footrow
-+cell return
++cell returns
 +cell int
 +cell The length of the vectors loaded.
@@ -273,6 +273,6 @@ p Load vectors from the location of a binary file.
 +cell The path of the binary file to load from.
 +footrow
-+cell return
++cell returns
 +cell int
 +cell The length of the vectors loaded.
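A mechanical rename like this one is easy to script. A minimal sketch, assuming the label always sits alone on a `+cell` line in the `.jade` sources (the command actually used, if any, is not recorded in the commit):

```python
import re

def relabel(line):
    # Rewrite footer-row labels only: "+cell return" -> "+cell returns",
    # "+cell yield" -> "+cell yields". The pattern is anchored to the whole
    # line, so description cells that merely contain the word "return"
    # (e.g. "+cell Only return path if it exists...") are left untouched.
    return re.sub(r"^(\s*\+cell (?:return|yield))$", r"\1s", line)

print(relabel("        +cell return"))  # "        +cell returns"
print(relabel("        +cell yield"))   # "        +cell yields"
print(relabel("+cell The loss on this example."))  # unchanged
```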