//- 💫 DOCS > USAGE > PROCESSING PIPELINES > PIPELINES

p
    |  spaCy makes it very easy to create your own pipelines consisting of
    |  reusable components – this includes spaCy's default tagger,
    |  parser and entity recognizer, but also your own custom processing
    |  functions. A pipeline component can be added to an already existing
    |  #[code nlp] object, specified when initialising a #[code Language]
    |  class, or defined within a
    |  #[+a("/usage/training#saving-loading") model package].
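
p
    |  For example, the following sketch adds a simple custom function to
    |  an existing #[code nlp] object. The #[code print_length] function is
    |  illustrative only – any callable that takes a #[code Doc] and
    |  returns it can serve as a component:

+code("Adding a custom component (sketch)").
    import spacy

    def print_length(doc):
        print('Doc length:', len(doc))  # components can read the Doc...
        return doc                      # ...but must return it

    nlp = spacy.load('en')
    nlp.add_pipe(print_length, last=True)  # append to the existing pipeline
    doc = nlp(u'This is a sentence')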

p
    |  When you load a model, spaCy first consults the model's
    |  #[+a("/usage/training#saving-loading") #[code meta.json]]. The
    |  meta typically includes the model details, the ID of a language class,
    |  and an optional list of pipeline components. spaCy then does the
    |  following:
+aside-code("meta.json (excerpt)", "json").
|
||
{
|
||
"name": "example_model",
|
||
"lang": "en"
|
||
"description": "Example model for spaCy",
|
||
"pipeline": ["tagger", "parser"]
|
||
}
|
||
|
||
+list("numbers")
|
||
+item
|
||
| Load the #[strong language class and data] for the given ID via
|
||
| #[+api("top-level#util.get_lang_class") #[code get_lang_class]] and initialise
|
||
| it. The #[code Language] class contains the shared vocabulary,
|
||
| tokenization rules and the language-specific annotation scheme.
|
||
+item
|
||
| Iterate over the #[strong pipeline names] and create each component
|
||
| using #[+api("language#create_pipe") #[code create_pipe]], which
|
||
| looks them up in #[code Language.factories].
|
||
+item
|
||
| Add each pipeline component to the pipeline in order, using
|
||
| #[+api("language#add_pipe") #[code add_pipe]].
|
||
+item
|
||
| Make the #[strong model data] available to the #[code Language] class
|
||
| by calling #[+api("language#from_disk") #[code from_disk]] with the
|
||
| path to the model data directory.
|
||
|
||

p
    |  So when you call this...

+code.
    nlp = spacy.load('en')

p
    |  ... the model tells spaCy to use the language #[code "en"] and the
    |  pipeline #[code.u-break ["tagger", "parser", "ner"]]. spaCy will then
    |  initialise #[code spacy.lang.en.English], and create each pipeline
    |  component and add it to the processing pipeline. It'll then load in
    |  the model's data from its data directory and return the modified
    |  #[code Language] instance for you to use as the #[code nlp] object.

p
    |  Fundamentally, a #[+a("/models") spaCy model] consists of three
    |  components: #[strong the weights], i.e. binary data loaded in from a
    |  directory, a #[strong pipeline] of functions called in order, and
    |  #[strong language data] like the tokenization rules and annotation
    |  scheme. All of this is specific to each model, and defined in the
    |  model's #[code meta.json] – for example, a Spanish NER model requires
    |  different weights, language data and pipeline components than an
    |  English parsing and tagging model. This is also why the pipeline
    |  state is always held by the #[code Language] class.
    |  #[+api("spacy#load") #[code spacy.load]] puts this all together and
    |  returns an instance of #[code Language] with a pipeline set and
    |  access to the binary data:
+code("spacy.load under the hood").
|
||
lang = 'en'
|
||
pipeline = ['tagger', 'parser', 'ner']
|
||
data_path = 'path/to/en_core_web_sm/en_core_web_sm-2.0.0'
|
||
|
||
cls = spacy.util.get_lang_class(lang) # 1. get Language instance, e.g. English()
|
||
nlp = cls() # 2. initialise it
|
||
for name in pipeline:
|
||
component = nlp.create_pipe(name) # 3. create the pipeline components
|
||
nlp.add_pipe(component) # 4. add the component to the pipeline
|
||
nlp.from_disk(model_data_path) # 5. load in the binary data
|
||
|
||

p
    |  When you call #[code nlp] on a text, spaCy will #[strong tokenize] it
    |  and then #[strong call each component] on the #[code Doc], in order.
    |  Since the model data is loaded, the components can access it to
    |  assign annotations to the #[code Doc] object, and subsequently to the
    |  #[code Token] and #[code Span] objects, which are only views of the
    |  #[code Doc] and don't own any data themselves. All components return
    |  the modified document, which is then processed by the next component
    |  in the pipeline.
+code("The pipeline under the hood").
|
||
doc = nlp.make_doc(u'This is a sentence') # create a Doc from raw text
|
||
for name, proc in nlp.pipeline: # iterate over components in order
|
||
doc = proc(doc) # apply each component
|
||
|
||

p
    |  The current processing pipeline is available as #[code nlp.pipeline],
    |  which returns a list of #[code (name, component)] tuples, or
    |  #[code nlp.pipe_names], which returns just the human-readable
    |  component names.

+code.
    nlp.pipeline
    # [('tagger', <spacy.pipeline.Tagger>), ('parser', <spacy.pipeline.DependencyParser>), ('ner', <spacy.pipeline.EntityRecognizer>)]
    nlp.pipe_names
    # ['tagger', 'parser', 'ner']
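
p
    |  To work with a single component directly, you can look it up by
    |  name. A minimal sketch, assuming the loaded pipeline includes an
    |  #[code 'ner'] component:

+code.
    ner = nlp.get_pipe('ner')  # get the entity recognizer component by name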

+h(3, "disabling") Disabling and modifying pipeline components

p
    |  If you don't need a particular component of the pipeline – for
    |  example, the tagger or the parser – you can disable loading it. This
    |  can sometimes make a big difference and improve loading speed.
    |  Disabled component names can be provided to
    |  #[+api("spacy#load") #[code spacy.load()]],
    |  #[+api("language#from_disk") #[code Language.from_disk()]] or the
    |  #[code nlp] object itself as a list:

+code.
    nlp = spacy.load('en', disable=['parser', 'tagger'])
    nlp = English().from_disk('/model', disable=['ner'])
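
p
    |  If you only need a component disabled temporarily, a sketch using
    |  the #[+api("language#disable_pipes") #[code disable_pipes]] context
    |  manager restores the components when the block exits:

+code.
    with nlp.disable_pipes('tagger', 'parser'):
        doc = nlp(u"I won't be tagged and parsed")
    doc = nlp(u"I will be tagged and parsed")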

p
    |  You can also use the #[+api("language#remove_pipe") #[code remove_pipe]]
    |  method to remove pipeline components from an existing pipeline, the
    |  #[+api("language#rename_pipe") #[code rename_pipe]] method to rename
    |  them, or the #[+api("language#replace_pipe") #[code replace_pipe]]
    |  method to replace them with a custom component entirely (more details
    |  on this in the section on
    |  #[+a("#custom-components") custom components]).

+code.
    nlp.remove_pipe('parser')
    nlp.rename_pipe('ner', 'entityrecognizer')
    nlp.replace_pipe('tagger', my_custom_tagger)
+infobox("Important note: disabling pipeline components")
|
||
.o-block
|
||
| Since spaCy v2.0 comes with better support for customising the
|
||
| processing pipeline components, the #[code parser], #[code tagger]
|
||
| and #[code entity] keyword arguments have been replaced with
|
||
| #[code disable], which takes a list of pipeline component names.
|
||
| This lets you disable pre-defined components when loading
|
||
| a model, or initialising a Language class via
|
||
| #[+api("language#from_disk") #[code from_disk]].
|
||
|
||
+code-new.
|
||
nlp = spacy.load('en', disable=['ner'])
|
||
nlp.remove_pipe('parser')
|
||
doc = nlp(u"I don't want parsed")
|
||
+code-old.
|
||
nlp = spacy.load('en', tagger=False, entity=False)
|
||
doc = nlp(u"I don't want parsed", parse=False)
|