Update pipeline component usage docs

ines 2017-10-10 04:24:39 +02:00
parent 3d5154811a
commit 7a592d01dc
5 changed files with 347 additions and 146 deletions


@@ -105,9 +105,9 @@
"menu": {
"How Pipelines Work": "pipelines",
"Custom Components": "custom-components",
"Developing Extensions": "extensions",
"Multi-threading": "multithreading",
"Serialization": "serialization",
"Developing Extensions": "extensions"
"Serialization": "serialization"
}
},


@@ -1,12 +1,11 @@
//- 💫 DOCS > USAGE > PROCESSING PIPELINES > CUSTOM COMPONENTS
p
| A component receives a #[code Doc] object and
| #[strong performs the actual processing], for example using the current
| weights to make a prediction and set some annotation on the document. By
| adding a component to the pipeline, you'll get access to the #[code Doc]
| at any point #[strong during] processing instead of only being able to
| modify it afterwards.
| A component receives a #[code Doc] object and can modify it, for example
| by using the current weights to make a prediction and set some annotation
| on the document. By adding a component to the pipeline, you'll get access
| to the #[code Doc] at any point #[strong during processing] instead of
| only being able to modify it afterwards.
+aside-code("Example").
def my_component(doc):
@@ -27,10 +26,10 @@ p
p
| Custom components can be added to the pipeline using the
| #[+api("language#add_pipe") #[code add_pipe]] method. Optionally, you
| can either specify a component to add it before or after, tell spaCy
| to add it first or last in the pipeline, or define a custom name.
| If no name is set and no #[code name] attribute is present on your
| component, the function name, e.g. #[code component.__name__] is used.
| can either specify a component to add it #[strong before or after], tell
| spaCy to add it #[strong first or last] in the pipeline, or define a
| #[strong custom name]. If no name is set and no #[code name] attribute
| is present on your component, the function name is used.
+code("Adding pipeline components").
def my_component(doc):
@@ -67,7 +66,19 @@ p
nlp.add_pipe(my_component, first=True)
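p
| For reference, here's a quick sketch of the placement options described
| above, reusing #[code my_component] from the snippet before. Each call
| shows one alternative; you'd normally pick a single one per component.
| The #[code 'tagger'] and #[code 'ner'] names assume those built-in
| components are present in the pipeline.
+code.
    nlp.add_pipe(my_component)                   # add last (default)
    nlp.add_pipe(my_component, first=True)       # add first
    nlp.add_pipe(my_component, before='ner')     # add before the entity recognizer
    nlp.add_pipe(my_component, after='tagger')   # add after the tagger
    nlp.add_pipe(my_component, last=True, name='my_custom_name')  # add last, with a custom name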
+h(3, "custom-components-attributes")
| Setting attributes on the #[code Doc], #[code Span] and #[code Token]
| Extension attributes on #[code Doc], #[code Span] and #[code Token]
+tag-new(2)
p
| As of v2.0, spaCy allows you to set any custom attributes and methods
| on the #[code Doc], #[code Span] and #[code Token], which become
| available as #[code Doc._], #[code Span._] and #[code Token._], for
| example #[code Token._.my_attr]. This lets you store additional
| information relevant to your application, add new features and
| functionality to spaCy, and implement your own models trained with other
| machine learning libraries. It also lets you take advantage of spaCy's
| data structures and the #[code Doc] object as the "single source of
| truth".
+aside("Why ._?")
| Writing to a #[code ._] attribute instead of to the #[code Doc] directly
@@ -78,9 +89,218 @@
| what's custom: for example, #[code doc.sentiment] is spaCy, while
| #[code doc._.sent_score] isn't.
+under-construction
p
| There are three main types of extensions, which can be defined using the
| #[+api("doc#set_extension") #[code Doc.set_extension]],
| #[+api("span#set_extension") #[code Span.set_extension]] and
| #[+api("token#set_extension") #[code Token.set_extension]] methods.
+h(3, "custom-components-user-hooks") Other user hooks
+list("numbers")
+item #[strong Attribute extensions].
| Set a default value for an attribute, which can be overwritten
| manually at any time. Attribute extensions work like "normal"
| variables and are the quickest way to store arbitrary information
| on a #[code Doc], #[code Span] or #[code Token].
+code-wrapper
+code.
Doc.set_extension('hello', default=True)
assert doc._.hello
doc._.hello = False
+item #[strong Property extensions].
| Define a getter and an optional setter function. If no setter is
| provided, the extension is immutable. Since the getter and setter
| functions are only called when you #[em retrieve] the attribute,
| you can also access values of previously added attribute extensions.
| For example, a #[code Doc] getter can average over #[code Token]
| attributes. For #[code Span] extensions, you'll almost always want
| to use a property; otherwise, you'd have to write to
| #[em every possible] #[code Span] in the #[code Doc] to set up the
| values correctly.
+code-wrapper
+code.
Doc.set_extension('hello', getter=get_hello_value, setter=set_hello_value)
assert doc._.hello
doc._.hello = 'Hi!'
+item #[strong Method extensions].
| Assign a function that becomes available as an object method. Method
| extensions are always immutable. For more details and implementation
| ideas, see
| #[+a("/usage/examples#custom-components-attr-methods") these examples].
+code-wrapper
+code.o-no-block.
Doc.set_extension('hello', method=lambda doc, name: 'Hi {}!'.format(name))
assert doc._.hello('Bob') == 'Hi Bob!'
p
| Before you can access a custom extension, you need to register it using
| the #[code set_extension] method on the object you want
| to add it to, e.g. the #[code Doc]. Keep in mind that extensions are
| always #[strong added globally] and not just on a particular instance.
| If an attribute of the same name
| already exists, or if you're trying to access an attribute that hasn't
| been registered, spaCy will raise an #[code AttributeError].
+code("Example").
from spacy.tokens.token import Token
from spacy.tokens.doc import Doc
from spacy.tokens.span import Span
fruits = ['apple', 'pear', 'banana', 'orange', 'strawberry']
is_fruit_getter = lambda token: token.text in fruits
has_fruit_getter = lambda obj: any([t.text in fruits for t in obj])
Token.set_extension('is_fruit', getter=is_fruit_getter)
Doc.set_extension('has_fruit', getter=has_fruit_getter)
Span.set_extension('has_fruit', getter=has_fruit_getter)
+aside-code("Usage example").
doc = nlp(u"I have an apple and a melon")
assert doc[3]._.is_fruit # get Token attributes
assert not doc[0]._.is_fruit
assert doc._.has_fruit # get Doc attributes
assert doc[1:4]._.has_fruit # get Span attributes
p
| Once you've registered your custom attribute, you can also use the
| built-in #[code set], #[code get] and #[code has] methods to modify and
| retrieve the attributes. This is especially useful if you want to pass in
| a string instead of calling #[code doc._.my_attr].
+table(["Method", "Description", "Valid for", "Example"])
+row
+cell #[code ._.set()]
+cell Set a value for an attribute.
+cell Attributes, mutable properties.
+cell #[code.u-break token._.set('my_attr', True)]
+row
+cell #[code ._.get()]
+cell Get the value of an attribute.
+cell Attributes, mutable properties, immutable properties, methods.
+cell #[code.u-break my_attr = span._.get('my_attr')]
+row
+cell #[code ._.has()]
+cell Check if an attribute exists.
+cell Attributes, mutable properties, immutable properties, methods.
+cell #[code.u-break doc._.has('my_attr')]
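p
| For instance, reusing the classes imported in the example above and
| assuming #[code doc], #[code span] and #[code token] come from a
| processed text, the three methods could be used like this:
+code.
    Doc.set_extension('my_attr', default=False)
    Span.set_extension('my_attr', default=False)
    Token.set_extension('my_attr', default=False)

    attr_name = 'my_attr'          # e.g. passed in as a string at runtime
    token._.set(attr_name, True)   # same as token._.my_attr = True
    value = span._.get(attr_name)  # same as value = span._.my_attr
    assert doc._.has(attr_name)    # the extension is registered on the Doc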
+infobox("How the ._ is implemented")
| Extension definitions (the defaults, methods, getters and setters you
| pass in to #[code set_extension]) are stored in class attributes on the
| #[code Underscore] class. If you write to an extension attribute, e.g.
| #[code doc._.hello = True], the data is stored within the
| #[+api("doc#attributes") #[code Doc.user_data]] dictionary. To keep the
| underscore data separate from your other dictionary entries, the string
| #[code "._."] is placed before the name, in a tuple.
+h(4, "component-example1") Example: Custom sentence segmentation logic
p
| Let's say you want to implement custom logic to improve spaCy's sentence
| boundary detection. Currently, sentence segmentation is based on the
| dependency parse, which doesn't always produce ideal results. The custom
| logic should therefore be applied #[strong after] tokenization, but
| #[strong before] the dependency parsing; this way, the parser can also
| take advantage of the sentence boundaries.
+code.
def sbd_component(doc):
    for i, token in enumerate(doc[:-2]):
        # define sentence start if period + titlecase token
        if token.text == '.' and doc[i+1].is_title:
            doc[i+1].sent_start = True
    return doc

nlp = spacy.load('en')
nlp.add_pipe(sbd_component, before='parser')  # insert before the parser
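p
| To check that the component takes effect, you can inspect the resulting
| sentence boundaries (the example text is arbitrary):
+code.
    doc = nlp(u'The weather is nice today. Berlin is sunny.')
    # the custom rule sets a sentence start on titlecased tokens
    # that follow a period, before the parser runs
    print([sent.text for sent in doc.sents])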
+h(4, "component-example2")
| Example: Pipeline component for entity matching and tagging with
| custom attributes
p
| This example shows how to create a spaCy extension that takes a
| terminology list (in this case, single- and multi-word company names),
| matches the occurrences in a document, labels them as #[code ORG] entities,
| merges the tokens and sets custom #[code is_tech_org] and
| #[code has_tech_org] attributes. For efficient matching, the example uses
| the #[+api("phrasematcher") #[code PhraseMatcher]] which accepts
| #[code Doc] objects as match patterns and works well for large
| terminology lists. It also ensures your patterns will always match, even
| when you customise spaCy's tokenization rules. When you call #[code nlp]
| on a text, the custom pipeline component is applied to the #[code Doc].
+github("spacy", "examples/pipeline/custom_component_entities.py", false, 500)
p
| Wrapping this functionality in a
| pipeline component allows you to reuse the module with different
| settings, and have all pre-processing taken care of when you call
| #[code nlp] on your text and receive a #[code Doc] object.
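p
| As a rough sketch of the core matching logic (the complete, runnable
| version is in the example embedded above), such a component could look
| something like this. The class name, company names and attribute name
| are placeholders, and the token merging and #[code Doc]-level getters
| are left out for brevity:
+code.
    from spacy.matcher import PhraseMatcher
    from spacy.tokens.span import Span
    from spacy.tokens.token import Token

    class TechCompanyRecognizer(object):
        name = 'tech_companies'  # component name shown in the pipeline

        def __init__(self, nlp, companies=('Alphabet Inc.', 'Google', 'Netflix')):
            self.label = nlp.vocab.strings['ORG']  # get label ID for 'ORG'
            # the PhraseMatcher takes Doc objects as match patterns
            patterns = [nlp(org) for org in companies]
            self.matcher = PhraseMatcher(nlp.vocab)
            self.matcher.add('TECH_ORGS', None, *patterns)
            Token.set_extension('is_tech_org', default=False)

        def __call__(self, doc):
            matches = self.matcher(doc)
            for match_id, start, end in matches:
                entity = Span(doc, start, end, label=self.label)
                doc.ents = list(doc.ents) + [entity]  # label the match as an ORG entity
                for token in entity:
                    token._.set('is_tech_org', True)
            return doc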
+h(4, "component-example3")
| Example: Pipeline component for GPE entities and country metadata via a
| REST API
p
| This example shows the implementation of a pipeline component
| that fetches country metadata via the
| #[+a("https://restcountries.eu") REST Countries API], sets entity
| annotations for countries, merges entities into one token and
| sets custom attributes on the #[code Doc], #[code Span] and
| #[code Token], for example the capital, latitude/longitude coordinates
| and even the country flag.
+github("spacy", "examples/pipeline/custom_component_countries_api.py", false, 500)
p
| In this case, all data can be fetched on initialisation in one request.
| However, if you're working with text that contains incomplete country
| names, spelling mistakes or foreign-language versions, you could also
| implement a #[code like_country]-style getter function that makes a
| request to the search API endpoint and returns the best-matching
| result.
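p
| A minimal sketch of what such a getter could look like. The
| #[code fetch_best_match] helper is hypothetical and stands in for
| whatever request logic and search endpoint you end up using:
+code.
    from spacy.tokens.span import Span

    def like_country(span):
        # fetch_best_match() is a hypothetical helper that queries the
        # API's search endpoint and returns the best-matching country
        # record, or None if nothing matches
        return fetch_best_match(span.text) is not None

    Span.set_extension('like_country', getter=like_country)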
+h(4, "custom-components-usage-ideas") Other usage ideas
+list
+item
| #[strong Adding new features and hooking in models]. For example,
| a sentiment analysis model, or your preferred lemmatization
| solution. spaCy's built-in tagger,
| parser and entity recognizer respect annotations that were already
| set on the #[code Doc] in a previous step of the pipeline.
+item
| #[strong Integrating other libraries and APIs]. For example, your
| pipeline component can write additional information and data
| directly to the #[code Doc] or #[code Token] as custom attributes,
| while making sure no information is lost in the process. This can
| be output generated by other libraries and models, or an external
| service with a REST API.
+item
| #[strong Debugging and logging]. For example, a component which
| stores and/or exports relevant information about the current state
| of the processed document, and which you can insert at any point
| of your pipeline.
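p
| As a sketch of the last idea, a very simple debugging component could
| look like this. The name and the printed fields are only examples, and
| you could just as well add it #[code before] or #[code after] a specific
| component:
+code.
    def debug_component(doc):
        # log some basic information about the current state of the Doc
        print('Tokens: {}, entities: {}'.format(len(doc), len(doc.ents)))
        return doc

    nlp.add_pipe(debug_component, name='debugger', last=True)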
+infobox("Developing third-party extensions")
| The new pipeline management and custom attributes finally make it easy
| to develop your own spaCy extensions and plugins and share them with
| others. Extensions can claim their own #[code ._] namespace and exist as
| standalone packages. If you're developing a tool or library and want to
| make it easy for others to use it with spaCy and add it to their
| pipeline, all you have to do is expose a function that takes a
| #[code Doc], modifies it and returns it. For more details and
| #[strong best practices], see the section on
| #[+a("#extensions") developing spaCy extensions].
+h(3, "custom-components-user-hooks") User hooks
p
| While it's generally recommended to use the #[code Doc._], #[code Span._]


@@ -1,126 +0,0 @@
//- 💫 DOCS > USAGE > PROCESSING PIPELINES > EXAMPLES
p
| To see real-world examples of pipeline factories and components in action,
| you can have a look at the source of spaCy's built-in components, e.g.
| the #[+api("tagger") #[code Tagger]], #[+api("parser") #[code Parser]] or
| #[+api("entityrecognizer") #[code EntityRecongnizer]].
+h(3, "example1") Example: Custom sentence segmentation logic
p
| Let's say you want to implement custom logic to improve spaCy's sentence
| boundary detection. Currently, sentence segmentation is based on the
| dependency parse, which doesn't always produce ideal results. The custom
| logic should therefore be applied #[strong after] tokenization, but
| #[strong before] the dependency parsing; this way, the parser can also
| take advantage of the sentence boundaries.
+code.
def sbd_component(doc):
    for i, token in enumerate(doc[:-2]):
        # define sentence start if period + titlecase token
        if token.text == '.' and doc[i+1].is_title:
            doc[i+1].sent_start = True
    return doc
p
| In this case, we simply want to add the component to the existing
| pipeline of the English model. We can do this by inserting it at index 0
| of #[code nlp.pipeline]:
+code.
nlp = spacy.load('en')
nlp.pipeline.insert(0, sbd_component)
p
| When you call #[code nlp] on some text, spaCy will tokenize it to create
| a #[code Doc] object, and first call #[code sbd_component] on it, followed
| by the model's default pipeline.
+h(3, "example2") Example: Sentiment model
p
| Let's say you have trained your own document sentiment model on English
| text. After tokenization, you want spaCy to first execute the
| #[strong default tensorizer], followed by a custom
| #[strong sentiment component] that adds a #[code .sentiment]
| property to the #[code Doc], containing your model's sentiment prediction.
p
| Your component class will have a #[code from_disk()] method that spaCy
| calls to load the model data. When called, the component will compute
| the sentiment score, add it to the #[code Doc] and return the modified
| document. Optionally, the component can include an #[code update()] method
| to allow training the model.
+code.
import pickle
from pathlib import Path

class SentimentComponent(object):
    def __init__(self, vocab):
        self.weights = None

    def __call__(self, doc):
        doc.sentiment = sum(self.weights*doc.vector)  # set sentiment property
        return doc

    def from_disk(self, path):  # path = model path + factory ID ('sentiment')
        with (Path(path) / 'weights.bin').open('rb') as file_:
            self.weights = pickle.load(file_)  # load weights from the binary file
        return self

    def update(self, doc, gold):  # update weights, allows training
        prediction = sum(self.weights*doc.vector)
        self.weights -= 0.001*doc.vector*(prediction-gold.sentiment)
p
| The factory will initialise the component with the #[code Vocab] object.
| To be able to add it to your model's pipeline as #[code 'sentiment'],
| it also needs to be registered via
| #[+api("spacy#set_factory") #[code set_factory()]].
+code.
def sentiment_factory(vocab):
    component = SentimentComponent(vocab)  # initialise component
    return component

spacy.set_factory('sentiment', sentiment_factory)
p
| The above code should be #[strong shipped with your model]. You can use
| the #[+api("cli#package") #[code package]] command to create all required
| files and directories. The model package will include an
| #[+src(gh("spacy-dev-resources", "templates/model/en_model_name/__init__.py")) #[code __init__.py]]
| with a #[code load()] method, that will initialise the language class with
| the model's pipeline and call the #[code from_disk()] method to load
| the model data.
p
| In the model package's meta.json, specify the language class and pipeline
| IDs:
+code("meta.json (excerpt)", "json").
{
    "name": "sentiment_model",
    "lang": "en",
    "version": "1.0.0",
    "spacy_version": ">=2.0.0,<3.0.0",
    "pipeline": ["tensorizer", "sentiment"]
}
p
| When you load your new model, spaCy will call the model's #[code load()]
| method. This will return a #[code Language] object with a pipeline
| containing the default tensorizer, and the sentiment component returned
| by your custom #[code "sentiment"] factory.
+code.
nlp = spacy.load('en_sentiment_model')
doc = nlp(u'I love pizza')
assert doc.sentiment
+infobox("Saving and loading models")
| For more information and a detailed guide on how to package your model,
| see the documentation on
| #[+a("/usage/training#saving-loading") saving and loading models].


@@ -1,3 +1,110 @@
//- 💫 DOCS > USAGE > PROCESSING PIPELINES > DEVELOPING EXTENSIONS
+under-construction
p
| We're very excited about all the new possibilities for community
| extensions and plugins in spaCy v2.0, and we can't wait to see what
| you build with it! To get you started, here are a few tips, tricks and
| best practices:
+list
+item
| Make sure to choose a #[strong descriptive and specific name] for
| your pipeline component class, and set it as its #[code name]
| attribute. Avoid names that are too common or likely to clash with
| built-in or a user's other custom components. While it's fine to call
| your package "spacy_my_extension", avoid component names including
| "spacy", since this can easily lead to confusion.
+code-wrapper
+code-new name = 'myapp_lemmatizer'
+code-old name = 'lemmatizer'
+item
| When writing to #[code Doc], #[code Token] or #[code Span] objects,
| #[strong use getter functions] wherever possible, and avoid setting
| values explicitly. Tokens and spans don't own any data themselves,
| so you should provide a function that allows them to compute the
| values instead of writing static properties to individual objects.
+code-wrapper
+code-new.
is_fruit = lambda token: token.text in ('apple', 'orange')
Token.set_extension('is_fruit', getter=is_fruit)
+code-old.
token._.set_extension('is_fruit', default=False)
if token.text in ('apple', 'orange'):
    token._.set('is_fruit', True)
+item
| Always add your custom attributes to the #[strong global] #[code Doc],
| #[code Token] or #[code Span] objects, not to a particular instance of
| them. Add the attributes #[strong as early as possible], e.g. in
| your extension's #[code __init__] method or in the global scope of
| your module. This means that in the case of namespace collisions,
| the user will see an error immediately, not just when they run their
| pipeline.
+code-wrapper
+code-new.
from spacy.tokens.doc import Doc

def __init__(self, attr='my_attr'):
    Doc.set_extension(attr, getter=self.get_doc_attr)
+code-old.
def __call__(self, doc):
    doc.set_extension('my_attr', getter=self.get_doc_attr)
+item
| If your extension is setting properties on the #[code Doc],
| #[code Token] or #[code Span], include an option to
| #[strong let the user change those attribute names]. This makes
| it easier to avoid namespace collisions and accommodate users with
| different naming preferences. We recommend adding an #[code attrs]
| argument to the #[code __init__] method of your class so you can
| write the names to class attributes and reuse them across your
| component.
+code-wrapper
+code-new Doc.set_extension(self.doc_attr, default='some value')
+code-old Doc.set_extension('my_doc_attr', default='some value')
+item
| Ideally, extensions should be #[strong standalone packages] with
| spaCy (and, optionally, other packages) specified as a dependency. They
| can freely assign to their own #[code ._] namespace, but should stick
| to that. If your extension's only job is to provide a better
| #[code .similarity] implementation, and your docs state this
| explicitly, there's no problem with writing to the
| #[+a("#custom-components-user-hooks") #[code user_hooks]], and
| overwriting spaCy's built-in method. However, a third-party
| extension should #[strong never silently overwrite built-ins], or
| attributes set by other extensions.
+item
| If you're looking to publish a model that depends on a custom
| pipeline component, you can either #[strong require it] in the model
| package's dependencies, or, if the component is specific and
| lightweight, choose to #[strong ship it with your model package]
| and add it to the #[code Language] instance returned by the
| model's #[code load()] method. For examples of this, check out the
| implementations of spaCy's
| #[+api("util#load_model_from_init_py") #[code load_model_from_init_py()]]
| and #[+api("util#load_model_from_path") #[code load_model_from_path()]]
| utility functions.
+code-wrapper
+code-new.
nlp.add_pipe(my_custom_component)
return nlp.from_disk(model_path)
+item
| Once you're ready to share your extension with others, make sure to
| #[strong add docs and installation instructions] (you can
| always link to this page for more info). Make it easy for others to
| install and use your extension, for example by uploading it to
| #[+a("https://pypi.python.org") PyPi]. If you're sharing your code on
| GitHub, don't forget to tag it
| with #[+a("https://github.com/search?q=topic%3Aspacy") #[code spacy]]
| and #[+a("https://github.com/search?q=topic%3Aspacy-pipeline") #[code spacy-pipeline]]
| to help people find it. If you post it on Twitter, feel free to tag
| #[+a("https://twitter.com/" + SOCIAL.twitter) @#{SOCIAL.twitter}]
| so we can check it out.
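p
| Putting several of the tips above together (a descriptive #[code name],
| a configurable #[code attrs] argument and early, global registration
| with getters), a well-behaved extension could look roughly like this.
| The class, component and attribute names are only placeholders:
+code.
    from spacy.tokens.doc import Doc
    from spacy.tokens.token import Token

    class FruitDetector(object):
        name = 'myapp_fruit_detector'  # descriptive, prefixed component name

        def __init__(self, attrs=('is_fruit', 'has_fruit'), fruits=('apple', 'orange')):
            # let users change the attribute names to avoid collisions
            self.token_attr, self.doc_attr = attrs
            self.fruits = list(fruits)
            # register the extensions globally, as early as possible, and
            # use getters instead of writing values to individual tokens
            Token.set_extension(self.token_attr, getter=lambda token: token.text in self.fruits)
            Doc.set_extension(self.doc_attr, getter=lambda doc: any(t._.get(self.token_attr) for t in doc))

        def __call__(self, doc):
            # nothing to compute here: the getters do the work lazily
            return doc
p
| The component can then be added to a user's pipeline with
| #[code nlp.add_pipe(FruitDetector())] and shipped as its own package, as
| described above.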


@@ -12,6 +12,10 @@ include _spacy-101/_pipelines
+h(2, "custom-components") Creating custom pipeline components
include _processing-pipelines/_custom-components
+section("extensions")
+h(2, "extensions") Developing spaCy extensions
include _processing-pipelines/_extensions
+section("multithreading")
+h(2, "multithreading") Multi-threading
include _processing-pipelines/_multithreading
@@ -19,7 +23,3 @@ include _spacy-101/_pipelines
+section("serialization")
+h(2, "serialization") Serialization
include _processing-pipelines/_serialization
+section("extensions")
+h(2, "extensions") Developing spaCy extensions
include _processing-pipelines/_extensions