//- 💫 DOCS > API > LANGUAGE

include ../_includes/_mixins

p
    |  Usually you'll load this once per process as #[code nlp] and pass the
    |  instance around your application. The #[code Language] class is created
    |  when you call #[+api("spacy#load") #[code spacy.load()]] and contains
    |  the shared vocabulary and #[+a("/usage/adding-languages") language data],
    |  optional model data loaded from a #[+a("/models") model package] or
    |  a path, and a #[+a("/usage/processing-pipelines") processing pipeline]
    |  containing components like the tagger or parser that are called on a
    |  document in order. You can also add your own processing pipeline
    |  components that take a #[code Doc] object, modify it and return it.
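
p
    |  A minimal sketch of this setup, assuming the #[code en_core_web_sm]
    |  model package is installed:

+aside-code("Example").
    import spacy
    nlp = spacy.load('en_core_web_sm')  # load once per process
    doc = nlp(u'This is a sentence.')   # apply the pipeline to text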

+h(2, "init") Language.__init__
    +tag method

p Initialise a #[code Language] object.

+aside-code("Example").
    from spacy.vocab import Vocab
    from spacy.language import Language
    nlp = Language(Vocab())

    from spacy.lang.en import English
    nlp = English()

+table(["Name", "Type", "Description"])
    +row
        +cell #[code vocab]
        +cell #[code Vocab]
        +cell
            |  A #[code Vocab] object. If #[code True], a vocab is created via
            |  #[code Language.Defaults.create_vocab].

    +row
        +cell #[code make_doc]
        +cell callable
        +cell
            |  A function that takes text and returns a #[code Doc] object.
            |  Usually a #[code Tokenizer].

    +row
        +cell #[code meta]
        +cell dict
        +cell
            |  Custom meta data for the #[code Language] class. Is written to by
            |  models to add model meta data.

    +row("foot")
        +cell returns
        +cell #[code Language]
        +cell The newly constructed object.

+h(2, "call") Language.__call__
    +tag method

p
    |  Apply the pipeline to some text. The text can span multiple sentences,
    |  and can contain arbitrary whitespace. Alignment into the original string
    |  is preserved.

+aside-code("Example").
    doc = nlp(u'An example sentence. Another sentence.')
    assert (doc[0].text, doc[0].head.tag_) == ('An', 'NN')

+table(["Name", "Type", "Description"])
    +row
        +cell #[code text]
        +cell unicode
        +cell The text to be processed.

    +row
        +cell #[code disable]
        +cell list
        +cell
            |  Names of pipeline components to
            |  #[+a("/usage/processing-pipelines#disabling") disable].

    +row("foot")
        +cell returns
        +cell #[code Doc]
        +cell A container for accessing the annotations.

+infobox("Changed in v2.0", "⚠️")
    |  Pipeline components to prevent from being loaded can now be added as
    |  a list to #[code disable], instead of specifying one keyword argument
    |  per component.

    +code-wrapper
        +code-new doc = nlp(u"I don't want parsed", disable=['parser'])
        +code-old doc = nlp(u"I don't want parsed", parse=False)

+h(2, "pipe") Language.pipe
    +tag method

p
    |  Process texts as a stream, and yield #[code Doc] objects in order.
    |  Supports GIL-free multi-threading.

+infobox("Important note for spaCy v2.0.x", "⚠️")
    |  By default, multiple threads will be launched for matrix multiplication,
    |  which may be inefficient on multi-core machines. Setting
    |  #[code OPENBLAS_NUM_THREADS=1] should fix this problem. spaCy v2.1.x
    |  will be switching to single-thread by default.

+aside-code("Example").
    texts = [u'One document.', u'...', u'Lots of documents']
    for doc in nlp.pipe(texts, batch_size=50, n_threads=4):
        assert doc.is_parsed

+table(["Name", "Type", "Description"])
    +row
        +cell #[code texts]
        +cell -
        +cell A sequence of unicode objects.

    +row
        +cell #[code as_tuples]
        +cell bool
        +cell
            |  If set to #[code True], inputs should be a sequence of
            |  #[code (text, context)] tuples. Output will then be a sequence of
            |  #[code (doc, context)] tuples. Defaults to #[code False]. See
            |  the sketch after this table.

    +row
        +cell #[code n_threads]
        +cell int
        +cell
            |  The number of worker threads to use. If #[code -1], OpenMP will
            |  decide how many to use at run time. Default is #[code 2].

    +row
        +cell #[code batch_size]
        +cell int
        +cell The number of texts to buffer.

    +row
        +cell #[code disable]
        +cell list
        +cell
            |  Names of pipeline components to
            |  #[+a("/usage/processing-pipelines#disabling") disable].

    +row("foot")
        +cell yields
        +cell #[code Doc]
        +cell Documents in the order of the original text.
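
p
    |  A minimal sketch of #[code as_tuples], assuming each text is paired
    |  with a context dict (the #[code id] key is hypothetical):

+aside-code("Example").
    data = [(u'First doc.', {'id': 1}), (u'Second doc.', {'id': 2})]
    for doc, context in nlp.pipe(data, as_tuples=True):
        # the context object is passed through unchanged
        print(doc.text, context['id'])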

+h(2, "update") Language.update
    +tag method

p Update the models in the pipeline.

+aside-code("Example").
    for raw_text, entity_offsets in train_data:
        doc = nlp.make_doc(raw_text)
        gold = GoldParse(doc, entities=entity_offsets)
        nlp.update([doc], [gold], drop=0.5, sgd=optimizer)

+table(["Name", "Type", "Description"])
    +row
        +cell #[code docs]
        +cell iterable
        +cell
            |  A batch of #[code Doc] objects or unicode. If unicode, a
            |  #[code Doc] object will be created from the text.

    +row
        +cell #[code golds]
        +cell iterable
        +cell
            |  A batch of #[code GoldParse] objects or dictionaries.
            |  Dictionaries will be used to create
            |  #[+api("goldparse") #[code GoldParse]] objects. For the available
            |  keys and their usage, see
            |  #[+api("goldparse#init") #[code GoldParse.__init__]].

    +row
        +cell #[code drop]
        +cell float
        +cell The dropout rate.

    +row
        +cell #[code sgd]
        +cell callable
        +cell An optimizer.

    +row("foot")
        +cell returns
        +cell dict
        +cell Results from the update.

+h(2, "begin_training") Language.begin_training
    +tag method

p
    |  Allocate models, pre-process training data and acquire an optimizer.

+aside-code("Example").
    optimizer = nlp.begin_training(gold_tuples)

+table(["Name", "Type", "Description"])
    +row
        +cell #[code gold_tuples]
        +cell iterable
        +cell Gold-standard training data.

    +row
        +cell #[code **cfg]
        +cell -
        +cell Config parameters.

    +row("foot")
        +cell returns
        +cell callable
        +cell An optimizer.

+h(2, "use_params") Language.use_params
    +tag contextmanager
    +tag method

p
    |  Replace weights of models in the pipeline with those provided in the
    |  params dictionary. Can be used as a context manager, in which case,
    |  models go back to their original weights after the block.

+aside-code("Example").
    with nlp.use_params(optimizer.averages):
        nlp.to_disk('/tmp/checkpoint')

+table(["Name", "Type", "Description"])
    +row
        +cell #[code params]
        +cell dict
        +cell A dictionary of parameters keyed by model ID.

    +row
        +cell #[code **cfg]
        +cell -
        +cell Config parameters.

+h(2, "preprocess_gold") Language.preprocess_gold
    +tag method

p
    |  Can be called before training to pre-process gold data. By default, it
    |  handles nonprojectivity and adds missing tags to the tag map.
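
p
    |  A minimal sketch, assuming #[code docs_golds] is an iterable of
    |  #[code (Doc, GoldParse)] tuples:

+aside-code("Example").
    # yields pre-processed (Doc, GoldParse) tuples,
    # e.g. with non-projective arcs handled
    preprocessed = list(nlp.preprocess_gold(docs_golds))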

+table(["Name", "Type", "Description"])
    +row
        +cell #[code docs_golds]
        +cell iterable
        +cell Tuples of #[code Doc] and #[code GoldParse] objects.

    +row("foot")
        +cell yields
        +cell tuple
        +cell Tuples of #[code Doc] and #[code GoldParse] objects.

+h(2, "create_pipe") Language.create_pipe
    +tag method
    +tag-new(2)

p Create a pipeline component from a factory.

+aside-code("Example").
    parser = nlp.create_pipe('parser')
    nlp.add_pipe(parser)

+table(["Name", "Type", "Description"])
    +row
        +cell #[code name]
        +cell unicode
        +cell
            |  Factory name to look up in
            |  #[+api("language#class-attributes") #[code Language.factories]].

    +row
        +cell #[code config]
        +cell dict
        +cell Configuration parameters to initialise component.

    +row("foot")
        +cell returns
        +cell callable
        +cell The pipeline component.

+h(2, "add_pipe") Language.add_pipe
    +tag method
    +tag-new(2)

p
    |  Add a component to the processing pipeline. Valid components are
    |  callables that take a #[code Doc] object, modify it and return it. Only
    |  one of #[code before], #[code after], #[code first] or #[code last] can
    |  be set. Default behaviour is #[code last=True].

+aside-code("Example").
    def component(doc):
        # modify Doc and return it
        return doc

    nlp.add_pipe(component, before='ner')
    nlp.add_pipe(component, name='custom_name', last=True)

+table(["Name", "Type", "Description"])
    +row
        +cell #[code component]
        +cell callable
        +cell The pipeline component.

    +row
        +cell #[code name]
        +cell unicode
        +cell
            |  Name of pipeline component. Overwrites existing
            |  #[code component.name] attribute if available. If no #[code name]
            |  is set and the component exposes no name attribute,
            |  #[code component.__name__] is used. An error is raised if the
            |  name already exists in the pipeline.

    +row
        +cell #[code before]
        +cell unicode
        +cell Component name to insert component directly before.

    +row
        +cell #[code after]
        +cell unicode
        +cell Component name to insert component directly after.

    +row
        +cell #[code first]
        +cell bool
        +cell Insert component first / not first in the pipeline.

    +row
        +cell #[code last]
        +cell bool
        +cell Insert component last / not last in the pipeline.

+h(2, "has_pipe") Language.has_pipe
    +tag method
    +tag-new(2)

p
    |  Check whether a component is present in the pipeline. Equivalent to
    |  #[code name in nlp.pipe_names].

+aside-code("Example").
    nlp.add_pipe(lambda doc: doc, name='component')
    assert 'component' in nlp.pipe_names
    assert nlp.has_pipe('component')

+table(["Name", "Type", "Description"])
    +row
        +cell #[code name]
        +cell unicode
        +cell Name of the pipeline component to check.

    +row("foot")
        +cell returns
        +cell bool
        +cell Whether a component of that name exists in the pipeline.

+h(2, "get_pipe") Language.get_pipe
    +tag method
    +tag-new(2)

p Get a pipeline component for a given component name.

+aside-code("Example").
    parser = nlp.get_pipe('parser')
    custom_component = nlp.get_pipe('custom_component')

+table(["Name", "Type", "Description"])
    +row
        +cell #[code name]
        +cell unicode
        +cell Name of the pipeline component to get.

    +row("foot")
        +cell returns
        +cell callable
        +cell The pipeline component.

+h(2, "replace_pipe") Language.replace_pipe
    +tag method
    +tag-new(2)

p Replace a component in the pipeline.

+aside-code("Example").
    nlp.replace_pipe('parser', my_custom_parser)

+table(["Name", "Type", "Description"])
    +row
        +cell #[code name]
        +cell unicode
        +cell Name of the component to replace.

    +row
        +cell #[code component]
        +cell callable
        +cell The pipeline component to insert.

+h(2, "rename_pipe") Language.rename_pipe
    +tag method
    +tag-new(2)

p
    |  Rename a component in the pipeline. Useful to create custom names for
    |  pre-defined and pre-loaded components. To change the default name of
    |  a component added to the pipeline, you can also use the #[code name]
    |  argument on #[+api("language#add_pipe") #[code add_pipe]].

+aside-code("Example").
    nlp.rename_pipe('parser', 'spacy_parser')

+table(["Name", "Type", "Description"])
    +row
        +cell #[code old_name]
        +cell unicode
        +cell Name of the component to rename.

    +row
        +cell #[code new_name]
        +cell unicode
        +cell New name of the component.

+h(2, "remove_pipe") Language.remove_pipe
    +tag method
    +tag-new(2)

p
    |  Remove a component from the pipeline. Returns the removed component name
    |  and component function.

+aside-code("Example").
    name, component = nlp.remove_pipe('parser')
    assert name == 'parser'

+table(["Name", "Type", "Description"])
    +row
        +cell #[code name]
        +cell unicode
        +cell Name of the component to remove.

    +row("foot")
        +cell returns
        +cell tuple
        +cell A #[code (name, component)] tuple of the removed component.

+h(2, "disable_pipes") Language.disable_pipes
    +tag contextmanager
    +tag-new(2)

p
    |  Disable one or more pipeline components. If used as a context manager,
    |  the pipeline will be restored to the initial state at the end of the
    |  block. Otherwise, a #[code DisabledPipes] object is returned, that has a
    |  #[code .restore()] method you can use to undo your changes.

+aside-code("Example").
    with nlp.disable_pipes('tagger', 'parser'):
        optimizer = nlp.begin_training(gold_tuples)

    disabled = nlp.disable_pipes('tagger', 'parser')
    optimizer = nlp.begin_training(gold_tuples)
    disabled.restore()

+table(["Name", "Type", "Description"])
    +row
        +cell #[code *disabled]
        +cell unicode
        +cell Names of pipeline components to disable.

    +row("foot")
        +cell returns
        +cell #[code DisabledPipes]
        +cell
            |  The disabled pipes that can be restored by calling the object's
            |  #[code .restore()] method.

+h(2, "to_disk") Language.to_disk
    +tag method
    +tag-new(2)

p
    |  Save the current state to a directory. If a model is loaded, this will
    |  #[strong include the model].

+aside-code("Example").
    nlp.to_disk('/path/to/models')

+table(["Name", "Type", "Description"])
    +row
        +cell #[code path]
        +cell unicode or #[code Path]
        +cell
            |  A path to a directory, which will be created if it doesn't exist.
            |  Paths may be either strings or #[code Path]-like objects.

    +row
        +cell #[code disable]
        +cell list
        +cell
            |  Names of pipeline components to
            |  #[+a("/usage/processing-pipelines#disabling") disable]
            |  and prevent from being saved.

+h(2, "from_disk") Language.from_disk
    +tag method
    +tag-new(2)

p
    |  Loads state from a directory. Modifies the object in place and returns
    |  it. If the saved #[code Language] object contains a model, the
    |  model will be loaded. Note that this method is commonly used via the
    |  subclasses like #[code English] or #[code German] to make
    |  language-specific functionality like the
    |  #[+a("/usage/adding-languages#lex-attrs") lexical attribute getters]
    |  available to the loaded object.

+aside-code("Example").
    from spacy.language import Language
    nlp = Language().from_disk('/path/to/model')

    # using language-specific subclass
    from spacy.lang.en import English
    nlp = English().from_disk('/path/to/en_model')

+table(["Name", "Type", "Description"])
    +row
        +cell #[code path]
        +cell unicode or #[code Path]
        +cell
            |  A path to a directory. Paths may be either strings or
            |  #[code Path]-like objects.

    +row
        +cell #[code disable]
        +cell list
        +cell
            |  Names of pipeline components to
            |  #[+a("/usage/processing-pipelines#disabling") disable].

    +row("foot")
        +cell returns
        +cell #[code Language]
        +cell The modified #[code Language] object.

+infobox("Changed in v2.0", "⚠️")
    |  As of spaCy v2.0, the #[code save_to_directory] method has been
    |  renamed to #[code to_disk], to improve consistency across classes.
    |  Pipeline components to prevent from being loaded can now be added as
    |  a list to #[code disable], instead of specifying one keyword argument
    |  per component.

    +code-wrapper
        +code-new nlp = English().from_disk(disable=['tagger', 'ner'])
        +code-old nlp = spacy.load('en', tagger=False, entity=False)

+h(2, "to_bytes") Language.to_bytes
    +tag method

p Serialize the current state to a binary string.

+aside-code("Example").
    nlp_bytes = nlp.to_bytes()

+table(["Name", "Type", "Description"])
    +row
        +cell #[code disable]
        +cell list
        +cell
            |  Names of pipeline components to
            |  #[+a("/usage/processing-pipelines#disabling") disable]
            |  and prevent from being serialized.

    +row("foot")
        +cell returns
        +cell bytes
        +cell The serialized form of the #[code Language] object.

+h(2, "from_bytes") Language.from_bytes
    +tag method

p
    |  Load state from a binary string. Note that this method is commonly used
    |  via the subclasses like #[code English] or #[code German] to make
    |  language-specific functionality like the
    |  #[+a("/usage/adding-languages#lex-attrs") lexical attribute getters]
    |  available to the loaded object.

+aside-code("Example").
    from spacy.lang.en import English
    nlp_bytes = nlp.to_bytes()
    nlp2 = English()
    nlp2.from_bytes(nlp_bytes)

+table(["Name", "Type", "Description"])
    +row
        +cell #[code bytes_data]
        +cell bytes
        +cell The data to load from.

    +row
        +cell #[code disable]
        +cell list
        +cell
            |  Names of pipeline components to
            |  #[+a("/usage/processing-pipelines#disabling") disable].

    +row("foot")
        +cell returns
        +cell #[code Language]
        +cell The #[code Language] object.

+infobox("Changed in v2.0", "⚠️")
    |  Pipeline components to prevent from being loaded can now be added as
    |  a list to #[code disable], instead of specifying one keyword argument
    |  per component.

    +code-wrapper
        +code-new nlp = English().from_bytes(bytes, disable=['tagger', 'ner'])
        +code-old nlp = English().from_bytes('en', tagger=False, entity=False)

+h(2, "attributes") Attributes
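
p
    |  A minimal sketch relating two of these attributes:

+aside-code("Example").
    # pipe_names mirrors the (name, component) tuples in pipeline
    assert nlp.pipe_names == [name for name, _ in nlp.pipeline]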

+table(["Name", "Type", "Description"])
    +row
        +cell #[code vocab]
        +cell #[code Vocab]
        +cell A container for the lexical types.

    +row
        +cell #[code tokenizer]
        +cell #[code Tokenizer]
        +cell The tokenizer.

    +row
        +cell #[code make_doc]
        +cell #[code lambda text: Doc]
        +cell Create a #[code Doc] object from unicode text.

    +row
        +cell #[code pipeline]
        +cell list
        +cell
            |  List of #[code (name, component)] tuples describing the current
            |  processing pipeline, in order.

    +row
        +cell #[code pipe_names]
            +tag-new(2)
        +cell list
        +cell List of pipeline component names, in order.

    +row
        +cell #[code meta]
        +cell dict
        +cell
            |  Custom meta data for the #[code Language] class. If a model is
            |  loaded, contains meta data of the model.

    +row
        +cell #[code path]
            +tag-new(2)
        +cell #[code Path]
        +cell
            |  Path to the model data directory, if a model is loaded. Otherwise
            |  #[code None].

+h(2, "class-attributes") Class attributes
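
p
    |  For example (a minimal sketch using the #[code English] subclass):

+aside-code("Example").
    from spacy.lang.en import English
    assert English.lang == 'en'           # two-letter ISO code
    assert 'tagger' in English.factories  # factory used by create_pipe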

+table(["Name", "Type", "Description"])
    +row
        +cell #[code Defaults]
        +cell class
        +cell
            |  Settings, data and factory methods for creating the
            |  #[code nlp] object and processing pipeline.

    +row
        +cell #[code lang]
        +cell unicode
        +cell
            |  Two-letter language ID, i.e.
            |  #[+a("https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes") ISO code].

    +row
        +cell #[code factories]
            +tag-new(2)
        +cell dict
        +cell
            |  Factories that create pre-defined pipeline components, e.g. the
            |  tagger, parser or entity recognizer, keyed by their component
            |  name.