Commit Graph

14792 Commits

Author SHA1 Message Date
meghanabhange
49ff1126bf
Project Idea: denomme | Multilingual Name Detection (#7845)
* Add denomme

* spaCy contributor agreement

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
2021-04-22 08:48:17 +02:00
Sam Edwardes
05c609cdeb Added a logo to spaCyTextBlob (#7818)
* Added a logo to spaCyTextBlob

* Updated to better thumb
2021-04-22 08:42:14 +02:00
Sam Edwardes
b8c6c10c6f
Added a logo to spaCyTextBlob (#7818)
* Added a logo to spaCyTextBlob

* Updated to better thumb
2021-04-22 08:41:55 +02:00
Diego Palma
ac101cba00 Add TRUNAJOD to spaCy universe. (#7754)
* Add TRUNAJOD to spaCy universe.

* Add trunajod logo and thumb.

Co-authored-by: Diego <dpalma@evernote.com>
2021-04-22 08:41:03 +02:00
Diego Palma
bbade153ed
Add TRUNAJOD to spaCy universe. (#7754)
* Add TRUNAJOD to spaCy universe.

* Add trunajod logo and thumb.

Co-authored-by: Diego <dpalma@evernote.com>
2021-04-22 08:40:28 +02:00
Ines Montani
ee68dc260f Auto-format [ci skip] 2021-04-22 10:58:18 +10:00
Ines Montani
a9e5ae9b5c Auto-format [ci skip] 2021-04-22 10:58:05 +10:00
Ines Montani
3931fa146b Merge branch 'spacy.io' of https://github.com/explosion/spaCy into spacy.io 2021-04-22 10:57:25 +10:00
Ines Montani
c3f7d33f8e Merge pull request #7851 from plison/master [ci skip] 2021-04-22 10:57:08 +10:00
Pierre Lison
663a160867 adding skweak to the spaCy universe 2021-04-22 10:57:08 +10:00
Pierre Lison
bb961a2c11 adding skweak to the spaCy universe 2021-04-22 10:57:08 +10:00
Ines Montani
5cbe414ce6
Merge pull request #7851 from plison/master [ci skip] 2021-04-22 10:56:35 +10:00
Pierre Lison
2f0ef2c9cc adding skweak to the spaCy universe 2021-04-22 01:16:34 +02:00
Pierre Lison
debfb46088 adding skweak to the spaCy universe 2021-04-22 00:58:09 +02:00
Shantam Raj
5aac993604 Default code for Setting Entity annotations on the website errors (#7738)
* the default example for "Setting entity annotations" errors on Binder

* updating contributor info

* using a new variable to store original entities
2021-04-21 09:18:22 +02:00
Shantam Raj
6017fcf693
Default code for Setting Entity annotations on the website errors (#7738)
* the default example for "Setting entity annotations" errors on Binder

* updating contributor info

* using a new variable to store original entities
2021-04-21 09:16:32 +02:00
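
The pattern the two commits above introduce into the docs example can be sketched as follows; the text, span indices and label are illustrative, but `Span` and the `doc.ents` setter are the documented spaCy API. The key point is copying the existing entities into a new variable before reassigning `doc.ents`:

```python
import spacy
from spacy.tokens import Span

nlp = spacy.blank("en")
doc = nlp("fb is hiring a new vice president of global policy")

# Create a new entity span over the first token
fb_ent = Span(doc, 0, 1, label="ORG")

# Store the original entities in a separate variable, then reassign doc.ents
# with the combined list instead of mutating it in place
orig_ents = list(doc.ents)
doc.ents = orig_ents + [fb_ent]

print([(ent.text, ent.label_) for ent in doc.ents])
```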
Ines Montani
1c1087e4ff Merge pull request #7826 from richardpaulhudson/master
Add entry for Coreferee project to universe.json
2021-04-21 16:23:09 +10:00
hudsonr
1eaf6e5ccb Added universe entry for Coreferee 2021-04-21 16:23:09 +10:00
Ines Montani
aad5ba13af
Merge pull request #7826 from richardpaulhudson/master
Add entry for Coreferee project to universe.json
2021-04-21 16:22:43 +10:00
hudsonr
2722424ec5 Added universe entry for Coreferee 2021-04-19 14:28:06 +02:00
langdonholmes
cef9f25ec0 Update processing-pipelines.md to mention method for doc metadata (#7480)
* Update processing-pipelines.md

Under "things to try," inform users they can save metadata when using nlp.pipe(foobar, as_tuples=True)

Link to a new example on the attributes page detailing the following:

> ```
> data = [
>   ("Some text to process", {"meta": "foo"}),
>   ("And more text...", {"meta": "bar"})
> ]
> 
> for doc, context in nlp.pipe(data, as_tuples=True):
>     # Let's assume you have a "meta" extension registered on the Doc
>     doc._.meta = context["meta"]
> ```

from https://stackoverflow.com/questions/57058798/make-spacy-nlp-pipe-process-tuples-of-text-and-additional-information-to-add-as

* Updating the attributes section

Update the attributes section with example of how extensions can be used to store metadata.

* Update processing-pipelines.md

* Update processing-pipelines.md

Made as_tuples example executable and relocated to the end of the "Processing Text" section.

* Update processing-pipelines.md

* Update processing-pipelines.md

Removed extra line

* Reformat and rephrase

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
2021-04-19 12:00:45 +02:00
langdonholmes
df541c6b5e
Update processing-pipelines.md to mention method for doc metadata (#7480)
* Update processing-pipelines.md

Under "things to try," inform users they can save metadata when using nlp.pipe(foobar, as_tuples=True)

Link to a new example on the attributes page detailing the following:

> ```
> data = [
>   ("Some text to process", {"meta": "foo"}),
>   ("And more text...", {"meta": "bar"})
> ]
> 
> for doc, context in nlp.pipe(data, as_tuples=True):
>     # Let's assume you have a "meta" extension registered on the Doc
>     doc._.meta = context["meta"]
> ```

from https://stackoverflow.com/questions/57058798/make-spacy-nlp-pipe-process-tuples-of-text-and-additional-information-to-add-as

* Updating the attributes section

Update the attributes section with example of how extensions can be used to store metadata.

* Update processing-pipelines.md

* Update processing-pipelines.md

Made as_tuples example executable and relocated to the end of the "Processing Text" section.

* Update processing-pipelines.md

* Update processing-pipelines.md

Removed extra line

* Reformat and rephrase

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
2021-04-19 11:58:12 +02:00
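
For reference, a self-contained version of the `as_tuples` example these two commits add to the docs might look like the sketch below; the `meta` extension name is taken from the quoted snippet and is otherwise arbitrary:

```python
import spacy
from spacy.tokens import Doc

# Register a custom "meta" extension on the Doc (name follows the example above)
Doc.set_extension("meta", default=None)

nlp = spacy.blank("en")

data = [
    ("Some text to process", {"meta": "foo"}),
    ("And more text...", {"meta": "bar"}),
]

# as_tuples=True yields (doc, context) pairs, so the per-text context
# can be stored on each Doc via the custom extension
for doc, context in nlp.pipe(data, as_tuples=True):
    doc._.meta = context["meta"]
    print(doc.text, doc._.meta)
```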
Adriane Boyd
0e7f94b247
Update Tokenizer.explain with special matches (#7749)
* Update Tokenizer.explain with special matches

Update `Tokenizer.explain` and the pseudo-code in the docs to include
the processing of special cases that contain affixes or whitespace.

* Handle optional settings in explain

* Add test for special matches in explain

Add test for `Tokenizer.explain` for special cases containing affixes.
2021-04-19 19:08:20 +10:00
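
The behaviour this commit brings in line with the tokenizer can be inspected directly with `Tokenizer.explain`, which returns `(rule, substring)` debugging tuples; a minimal sketch with a blank English pipeline:

```python
import spacy

nlp = spacy.blank("en")

# explain() reports which tokenizer rule produced each substring, including
# special-case rules (e.g. the "Don't" exception) that may contain affixes
for rule, substring in nlp.tokenizer.explain("(Don't run!)"):
    print(rule, repr(substring))
```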
Adriane Boyd
07b41c38ae
Register CharEmbed layer (#7805) 2021-04-19 18:39:34 +10:00
Sofie Van Landeghem
c786e98e56
assemble CLI command (#7783)
* assemble CLI command

* ensure assemble runs even without training section

* cleanup
2021-04-19 18:39:11 +10:00
Adriane Boyd
15bd230413
Set catalogue lower pin to v2.0.3 (#7762)
* Set catalogue lower pin to v2.0.2

* Update importlib-metadata pins to match

* Require catalogue v2.0.3

Switch to vendored `importlib-metadata` v3.2.0 provided by `catalogue`.
2021-04-19 18:37:17 +10:00
Adriane Boyd
1ad646cbcf
Improve checks for sourced components (#7490)
* Improve checks for sourced components

* Remove language class checks

* Convert python warning to logger warning

* Remove unused warning

* Fix formatting
2021-04-19 18:36:32 +10:00
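
For context, sourcing a component is done with `nlp.add_pipe(..., source=...)`; a minimal sketch, assuming the `en_core_web_sm` package is installed:

```python
import spacy

# Load the pipeline to source from (assumes en_core_web_sm is installed)
source_nlp = spacy.load("en_core_web_sm")

# Copy the trained NER component into a fresh pipeline; the checks improved
# in this PR run when a component is sourced from another pipeline like this
nlp = spacy.blank("en")
nlp.add_pipe("ner", source=source_nlp)
print(nlp.pipe_names)
```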
Sofie Van Landeghem
05bdbe28bb
Fix vectors data on GPU (#7626)
* ensure vectors data is stored on right device

* ensure the added vector is on the right device

* move vector to numpy before iterating

* move best_rows to numpy before iterating
2021-04-19 18:30:03 +10:00
Bram Vanroy
ed561cf428
Terminology: deprecated vs obsolete (#7621)
* Terminology: deprecated vs obsolete

Typically, deprecated is used for functionality that is bound to become unavailable but that can still be used. Obsolete is used for features that have been removed. In E941, I think what is meant is "obsolete" since loading a model by a shortcut simply does not work anymore (and throws an error). This is different from downloading a model with a shortcut, which is deprecated but still works.

In light of this, perhaps all other error codes should be checked as well.

* clarify that the link command is removed and not just deprecated

Co-authored-by: svlandeg <sofie.vanlandeghem@gmail.com>
2021-04-12 14:37:00 +02:00
Sofie Van Landeghem
8d7af5b2b1
Ensure hyphen in config file works as string value (#7642)
* add test for serializing '-' in a config file

* bump srsly to 2.4.1
2021-04-12 14:35:57 +02:00
Sofie Van Landeghem
27dbbb9903
Bugfix/nel crossing sentence (#7630)
* ensure each entity gets a KB ID, even when it's not within a sentence

* cleanup
2021-04-12 18:08:01 +10:00
Sofie Van Landeghem
fd6eebbfdc expand quickstart widget with cuda 11.1 and 11.2 (#7615) 2021-04-09 20:36:55 +02:00
Adriane Boyd
673e2bc4c0
Add usage docs for streamed train corpora (#7693) 2021-04-09 16:15:38 +02:00
Adriane Boyd
73a8c0f992
Update debug data further for v3 (#7602)
* Update debug data further for v3

* Remove new/existing label distinction (new labels are not immediately
distinguishable because the pipeline is already initialized)
* Warn on missing labels in training data for all components except parser
* Separate textcat and textcat_multilabel sections
* Add section for morphologizer

* Reword missing label warnings
2021-04-09 11:53:42 +02:00
Stanislav Schmidt
2516896849
Make vocab update in get_docs deterministic (#7603)
* Make vocab update in get_docs deterministic

The attribute `DocBin.strings` is a set. In `DocBin.get_docs`
a given vocab is updated by iterating over this set.
Iteration over a python set produces an arbitrary ordering,
therefore vocab is updated non-deterministically.

When training (fine-tuning) a spaCy model, the base model's
vocabulary will be updated with the new vocabulary in the
training data in exactly the way described above. After
serialization, the file `model/vocab/strings.json` will
be sorted in an arbitrary way. This prevents reproducible
model training.

* Revert "Make vocab update in get_docs deterministic"

This reverts commit d6b87a2f55.

* Sort strings in StringStore serialization

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
2021-04-09 11:53:13 +02:00
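
For context, the `DocBin` round trip the commit message describes looks roughly like this sketch; restoring into a fresh `Vocab` via `get_docs` is where the string store update happens:

```python
import spacy
from spacy.tokens import DocBin
from spacy.vocab import Vocab

nlp = spacy.blank("en")
docs = [nlp(text) for text in ["First document.", "Second document."]]

# Serialize the docs; the strings they use are stored alongside the token data
data = DocBin(docs=docs).to_bytes()

# get_docs() adds those strings back into the target vocab's StringStore,
# which is the update this commit makes deterministic
restored = list(DocBin().from_bytes(data).get_docs(Vocab()))
print([doc.text for doc in restored])
```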
Adriane Boyd
8008e2f75b
Use morph hash in lemmatizer cache key (#7690)
Use the morph hash rather than the `MorphAnalysis` object in the cache
key so that the `Lemmatizer` can be pickled.
2021-04-08 13:22:38 +02:00
Sofie Van Landeghem
3e5bd5055e
expand quickstart widget with cuda 11.1 and 11.2 (#7615) 2021-04-08 12:25:42 +02:00
Adriane Boyd
e6b7600adf
Fix parser sourcing in NER converter (#7631) 2021-04-08 12:25:03 +02:00
Sofie Van Landeghem
204c2f116b
Extend score_spans for overlapping & non-labeled spans (#7209)
* extend span scorer with consider_label and allow_overlap

* unit test for spans y2x overlap

* add score_spans unit test

* docs for new fields in scorer.score_spans

* rename to include_label

* spell out if-else for clarity

* rename to 'labeled'

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
2021-04-08 12:19:17 +02:00
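
A minimal sketch of the extended scorer, assuming the `labeled` keyword this PR adds (with `labeled=False`, span boundaries count as correct even if the labels differ):

```python
import spacy
from spacy.scorer import Scorer
from spacy.tokens import Span
from spacy.training import Example

nlp = spacy.blank("en")

# One example whose predicted and reference spans agree on boundaries
# but not on the label
pred = nlp("Berlin is a city")
pred.ents = [Span(pred, 0, 1, label="LOC")]
ref = nlp("Berlin is a city")
ref.ents = [Span(ref, 0, 1, label="GPE")]
example = Example(pred, ref)

# With the default labeled=True this would score 0.0; ignoring labels it scores 1.0
scores = Scorer.score_spans([example], "ents", labeled=False)
print(scores["ents_f"])
```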
Paul O'Leary McCann
c362006cb9
Fix is_sent_start when converting from JSON (fix #7635) (#7655)
Data in the JSON format is split into sentences, and each sentence is
saved with is_sent_start flags. Currently the flags are 1 for the first
token and 0 for the others. When deserialized this results in a pattern
of True, None, None, None... which makes single-sentence documents look
as though they haven't had sentence boundaries set.

Since items saved in JSON format have been split into sentences already,
the is_sent_start values should all be True or False.
2021-04-08 18:24:52 +10:00
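
A minimal sketch of the flag pattern described above: constructing a `Doc` with explicit `sent_starts` values gives every token True or False, which is what marks the document as fully sentence-segmented:

```python
import spacy
from spacy.tokens import Doc

nlp = spacy.blank("en")
words = ["This", "is", "one", "sentence", "."]

# True for the first token and False (not None) for the rest means the
# sentence boundaries are treated as set for the whole document
doc = Doc(nlp.vocab, words=words, sent_starts=[True, False, False, False, False])
print([t.is_sent_start for t in doc], doc.has_annotation("SENT_START"))
```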
Adriane Boyd
82d3caf861
Implement replace_listeners for source in config (#7620)
Implement replace_listeners for sourced components loaded from a config.
2021-04-08 18:21:22 +10:00
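
The programmatic equivalent of the config-level `replace_listeners` setting; a sketch assuming `en_core_web_sm` is installed and uses a shared tok2vec with listeners:

```python
import spacy

source_nlp = spacy.load("en_core_web_sm")

# Give the sourced NER component its own copy of the shared tok2vec layer,
# so it no longer depends on a tok2vec component in the new pipeline
source_nlp.replace_listeners("tok2vec", "ner", ["model.tok2vec"])

nlp = spacy.blank("en")
nlp.add_pipe("ner", source=source_nlp)
doc = nlp("Apple is looking at buying U.K. startup for $1 billion")
print([(ent.text, ent.label_) for ent in doc.ents])
```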
broaddeep
ee159b8543
Support match alignments (#7321)
* Support match alignments

* change naming from match_alignments to with_alignments, add conditional flow if with_alignments is given, validate with_alignments, add related test case

* remove added errors, utilize bint type, cleanup whitespace

* fix missing newline at end of file

* Minor formatting

* Skip alignments processing if as_spans is set

* Add with_alignments to Matcher API docs

* Update website/docs/api/matcher.md

Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
2021-04-08 18:10:14 +10:00
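
A minimal sketch of the new option: with `with_alignments=True` the matcher also returns, for each matched token, the index of the pattern token that matched it:

```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab)

# The OP "*" token can consume several doc tokens, which is where the
# alignment information becomes useful
pattern = [{"LOWER": "hello"}, {"IS_PUNCT": True, "OP": "*"}, {"LOWER": "world"}]
matcher.add("HELLO_WORLD", [pattern])

doc = nlp("hello , , world")
for match_id, start, end, alignments in matcher(doc, with_alignments=True):
    print(doc[start:end].text, alignments)
```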
Adriane Boyd
ff84075839
Support large/infinite training corpora (#7208)
* Support infinite generators for training corpora

Support a training corpus with an infinite generator in the `spacy
train` training loop:

* Revert `create_train_batches` to the state where an infinite generator
can be used in the first epoch of exactly one epoch without
resulting in a memory leak (`max_epochs != 1` will still result in a
memory leak)
* Move the shuffling for the first epoch into the corpus reader,
renaming it to `spacy.Corpus.v2`.

* Switch to training option for shuffling in memory

Training loop:

* Add option `training.shuffle_train_corpus_in_memory` that controls
whether the corpus is loaded in memory once and shuffled in the training
loop
  * Revert changes to `create_train_batches` and rename to
`create_train_batches_with_shuffling` for use with `spacy.Corpus.v1` and
a corpus that should be loaded in memory
  * Add `create_train_batches_without_shuffling` for a corpus that
should not be shuffled in the training loop: the corpus is merely
batched during training

Corpus readers:

* Restore `spacy.Corpus.v1`
* Add `spacy.ShuffledCorpus.v1` for a corpus shuffled in memory in the
reader instead of the training loop
  * In combination with `shuffle_train_corpus_in_memory = False`, each
epoch could result in a different augmentation

* Refactor create_train_batches, validation

* Rename config setting to `training.shuffle_train_corpus`
* Refactor to use a single `create_train_batches` method with a
`shuffle` option
* Only validate `get_examples` in initialize step if:
  * labels are required
  * labels are not provided

* Switch back to max_epochs=-1 for streaming train corpus

* Use first 100 examples for stream train corpus init

* Always check validate_get_examples in initialize
2021-04-08 18:08:04 +10:00
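
A hedged sketch of a streamed corpus reader of the kind this training-loop support targets; the registry name, arguments and empty annotations are illustrative. In the config such a reader would be referenced via `@readers` under `[corpora.train]`, with `max_epochs = -1` under `[training]` so the stream is consumed directly and the first examples pulled off it are used for initialization:

```python
from typing import Callable, Iterable, Iterator
import itertools

import spacy
from spacy.language import Language
from spacy.training import Example


@spacy.registry.readers("infinite_corpus.v1")  # name is illustrative
def create_infinite_reader(texts: list) -> Callable[[Language], Iterable[Example]]:
    def read(nlp: Language) -> Iterator[Example]:
        # Cycle over the texts forever: an infinite generator that is never
        # loaded into memory as a whole and never exhausted
        for text in itertools.cycle(texts):
            doc = nlp.make_doc(text)
            yield Example.from_dict(doc, {})

    return read
```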
graue70
81fd595223
Fix __add__ method of PRFScore (#7557)
* Add failing test for PRFScore

* Fix erroneous implementation of __add__

* Simplify constructor

Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>

Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
2021-04-08 17:34:14 +10:00
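
As an illustration of the fix (a standalone sketch, not spaCy's exact `PRFScore` class): the `__add__` of a precision/recall/F accumulator has to combine the raw counts, not the derived ratios:

```python
from dataclasses import dataclass


@dataclass
class PRF:
    tp: int = 0
    fp: int = 0
    fn: int = 0

    def __add__(self, other: "PRF") -> "PRF":
        # Sum the counts; precision/recall/F of the combined score then
        # follow from the totals
        return PRF(self.tp + other.tp, self.fp + other.fp, self.fn + other.fn)

    @property
    def precision(self) -> float:
        return self.tp / (self.tp + self.fp + 1e-100)

    @property
    def recall(self) -> float:
        return self.tp / (self.tp + self.fn + 1e-100)

    @property
    def fscore(self) -> float:
        p, r = self.precision, self.recall
        return 2 * p * r / (p + r + 1e-100)


print((PRF(tp=3, fp=1, fn=2) + PRF(tp=1, fp=0, fn=1)).fscore)
```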
Ines Montani
13949e9f9b Add more link anchors [ci skip] 2021-04-06 14:15:29 +10:00
Ines Montani
de4f4c9b8a Add more link anchors [ci skip] 2021-04-06 14:15:21 +10:00
Ines Montani
1e0a478805 Update pipeline design docs [ci skip] 2021-04-06 14:13:43 +10:00
Ines Montani
e9496feca6 Fix formatting [ci skip] 2021-04-06 14:13:36 +10:00
Ines Montani
5bbdd7dc4c Update pipeline design docs [ci skip] 2021-04-06 14:13:22 +10:00
Ines Montani
1d1cfadbca Fix formatting [ci skip] 2021-04-06 14:13:13 +10:00