Commit Graph

16051 Commits

Author SHA1 Message Date
Raphael Mitsch
be29216fe2
Merge pull request #13044 from explosion/docs/llm_main
Sync `master` with `docs/llm_main`
2023-10-05 16:10:19 +02:00
Raphael Mitsch
1162fcf099
Add Mistral mentions. (#13037) 2023-10-05 14:44:38 +02:00
Raphael Mitsch
862f8254e8
Add docs on Azure OpenAI support in spacy-llm (#13043)
* Add gpt-3.5-turbo-instruct to list of supported OpenAI models.

* Update `spacy-llm` task argument docs w.r.t. task refactoring (#12995)

* Update task arguments w.r.t. task refactoring in 0.5.0.

* Add disclaimer w.r.t. gated models/Llama 2.

* Update website/docs/api/large-language-models.mdx

* Update website/docs/api/large-language-models.mdx

* Update docs w.r.t. PaLM support. (#13018)

* Add info on spacy.Azure.v1.

* Attempt to fix netlify check fails.

* Attempt to fix netlify check fails.

* Attempt to fix netlify check fails.

* Attempt to fix netlify check fails.

* Attempt to fix netlify check fails.

* Attempt to fix netlify check fails.

* Attempt to fix netlify check fails.

* Attempt to fix netlify check fails.

* Attempt to fix netlify check fails.

* Format.
2023-10-05 13:18:27 +02:00
Raphael Mitsch
1dec138e61
Update docs w.r.t. PaLM support. (#13018) 2023-10-05 08:50:41 +02:00
Adriane Boyd
6e54360a3d
Remove pathy dependency, update docs for cloudpathlib in Weasel (#13035) 2023-10-05 08:50:22 +02:00
Raphael Mitsch
734826db79
Update spacy-llm task argument docs w.r.t. task refactoring (#12995)
* Update task arguments w.r.t. task refactoring in 0.5.0.

* Add disclaimer w.r.t. gated models/Llama 2.

* Update website/docs/api/large-language-models.mdx

* Update website/docs/api/large-language-models.mdx
2023-10-05 08:45:25 +02:00
Raphael Mitsch
829613b959
Merge pull request #12999 from rmitsch/docs/gpt-3.5-turbo-instruct
Add `gpt-3.5-turbo-instruct` to list of supported OpenAI models
2023-10-05 08:41:07 +02:00
Adriane Boyd
9d036607f1
Set version to v3.7.1 (#13042) 2023-10-04 18:13:12 +02:00
Adriane Boyd
aec59c0088
Merge pull request #13040 from adrianeboyd/revert/12962-spacy-info
Revert "Load the cli module lazily for spacy.info (#12962)"
2023-10-04 17:20:32 +02:00
Adriane Boyd
6d0185f7fb Revert "Load the cli module lazily for spacy.info (#12962)"
This reverts commit beda27a91e.
2023-10-04 12:33:33 +02:00
Adriane Boyd
92ce32aa3f
Update binder version to v3.7 (#13034) 2023-10-02 12:53:46 +02:00
Adriane Boyd
160e61772e
Docs for v3.7.0 (#13029)
* Docs for v3.7.0

* Minor fixes

* Extend Weasel notes

* Minor edits

* Update version in README
2023-10-01 21:40:07 +02:00
Adriane Boyd
0fed2d9e28
Merge pull request #12899 from adrianeboyd/chore/v3.7.0
Reenable model tests for v3.7.0
2023-10-01 21:38:17 +02:00
Adriane Boyd
1b043dde3f Revert "disable tests until 3.7 models are available"
This reverts commit 991bcc111e.
2023-10-01 18:48:31 +02:00
Adriane Boyd
4ec41e98f6
Merge pull request #12979 from adrianeboyd/feature/cython-profile-312
Redesigned cython profiling and other minor updates for python 3.12
2023-09-29 08:23:38 +02:00
Matthew Hoffman
483d4a5bc0
Allow spacy-transformers v1.3.x in transformers extra (#13025)
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
2023-09-29 08:22:56 +02:00
Adriane Boyd
6b4f774418
Set version to v3.7.0 (#13028) 2023-09-28 21:27:42 +02:00
Adriane Boyd
78504c25a5 CI: Add python 3.12.0rc2 2023-09-28 17:12:42 +02:00
Adriane Boyd
467c82439e Always use tqdm with disable=None
`tqdm` can cause deadlocks in the test suite if enabled.
2023-09-28 17:12:42 +02:00
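
A minimal sketch of the pattern named in this commit (the helper is illustrative, not spaCy's exact code): passing `disable=None` makes `tqdm` switch itself off when output is not a TTY, so progress bars stay silent in non-interactive test runs.
```python
from tqdm import tqdm

def iter_with_progress(items):
    # disable=None: tqdm disables itself automatically on non-TTY output,
    # which keeps the bar (and its monitor thread) out of the test suite.
    for item in tqdm(items, disable=None):
        yield item

# Usage: list(iter_with_progress(range(10)))
```
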
Adriane Boyd
b4990395f9 Update mypy requirements 2023-09-28 17:12:42 +02:00
Adriane Boyd
76d94b31f2 Branch on python 3.12+ shutil.rmtree in make_tempdir 2023-09-28 17:09:41 +02:00
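
A hedged sketch of what such version branching could look like (helper names and the read-only error handler are assumptions, not spaCy's exact code); Python 3.12 deprecates `shutil.rmtree`'s `onerror=` argument in favour of `onexc=`:
```python
import os
import shutil
import stat
import sys
import tempfile
from contextlib import contextmanager
from pathlib import Path

def _force_remove(rm_func, path, _exc):
    # Clear the read-only flag (mainly relevant on Windows) and retry.
    os.chmod(path, stat.S_IWRITE)
    rm_func(path)

@contextmanager
def make_tempdir():
    d = Path(tempfile.mkdtemp())
    try:
        yield d
    finally:
        if sys.version_info >= (3, 12):
            shutil.rmtree(str(d), onexc=_force_remove)   # onerror= is deprecated in 3.12
        else:
            shutil.rmtree(str(d), onerror=_force_remove)
```
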
Adriane Boyd
1adf79414e Set cython profiling default to True for <3.12, False for >=3.12 2023-09-28 17:09:41 +02:00
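
Conceptually, that default can be expressed as a global Cython compiler directive chosen at build time, along these lines (a sketch with assumed names, not the actual setup script):
```python
import sys
from Cython.Build import cythonize

# Per the commit above, profiling defaults on for Python < 3.12 and off for
# 3.12+, where Cython profiling support needed to be redesigned.
COMPILER_DIRECTIVES = {
    "language_level": "3",
    "profile": sys.version_info < (3, 12),
}

ext_modules = cythonize("**/*.pyx", compiler_directives=COMPILER_DIRECTIVES)
```
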
Adriane Boyd
538304948e Remove profile=True from currently profiled cython 2023-09-28 17:09:41 +02:00
Adriane Boyd
55614d6799 Add profile=False to currently unprofiled cython 2023-09-28 17:09:41 +02:00
Adriane Boyd
36201cb6a1
Merge pull request #13027 from adrianeboyd/chore/update-develop-from-master-v3.7-1
Update develop from master for v3.7
2023-09-28 16:01:40 +02:00
Adriane Boyd
406794a081 Merge remote-tracking branch 'upstream/master' into chore/update-develop-from-master-v3.7-1 2023-09-28 15:09:06 +02:00
Daniël de Kok
beda27a91e
Load the cli module lazily for spacy.info (#12962)
* Load the cli module lazily for spacy.info

This avoids the `spacy` module failing to import when the
user chooses not to install `typer`/`requests`.

* Add test

---------

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
2023-09-28 11:36:44 +02:00
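
A minimal sketch of the lazy-import pattern this commit describes (the function body is illustrative, not spaCy's exact code): the CLI machinery is only imported when `info()` is actually called, so a plain `import spacy` no longer needs the optional CLI dependencies.
```python
def info(*args, **kwargs):
    # Deferred import: only fails if info() is called without the optional
    # CLI dependencies (typer/requests) installed.
    from spacy.cli.info import info as cli_info
    return cli_info(*args, **kwargs)
```
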
Sergiu Nisioi
6255e38695
Adding rolegal model to the spaCy universe (#13017)
* adding rolegal model to the spaCy universe

* Fix formatting

* Use raw URL

* update image url and example

* fix pip and update url to raw

* okay, let's add thumb instead of image 🐙

* Update website/meta/universe.json

---------

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
2023-09-28 11:06:50 +02:00
Madeesh Kannan
b4501db6f8
Update emoji library in rule-based matcher example (#13014) 2023-09-25 18:20:30 +02:00
Adriane Boyd
ff4215f1c7
Drop support for python 3.6 (#13009)
* Drop support for python 3.6

* Update docs
2023-09-25 14:48:38 +02:00
Adriane Boyd
935a5455b6
Docs: add new tag for evaluate CLI --spans-keys (#13013) 2023-09-25 11:49:28 +02:00
Ikko Eltociear Ashimine
ed8c11e2aa
Fix typo in lemmatizer.py (#13003)
specfic -> specific
2023-09-25 11:44:35 +02:00
Eliana Vornov
4e3360ad12
add --spans-key option for CLI spancat evaluation (#12981)
* add span key option for CLI evaluation

* Rephrase CLI help to refer to Doc.spans instead of spancat

* Rephrase docs to refer to Doc.spans instead of spancat

---------

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
2023-09-25 11:25:41 +02:00
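
For illustration, the new option might be used along these lines (the keyword name is assumed to mirror the `--spans-key` CLI flag, the paths are placeholders, and `"sc"` is spancat's default `Doc.spans` key):
```python
from spacy.cli.evaluate import evaluate

scores = evaluate(
    "training/model-best",   # pipeline with a spancat component (placeholder path)
    "corpus/dev.spacy",      # evaluation data in .spacy format (placeholder path)
    spans_key="sc",
)
print(scores.get("spans_sc_f"))  # span F-score reported under the chosen key
```
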
Raphael Mitsch
bef9f63e13 Add gpt-3.5-turbo-instruct to list of supported OpenAI models. 2023-09-21 11:28:58 +02:00
Raphael Mitsch
830eba5426
Merge pull request #12994 from explosion/docs/llm_main
Synch `llm_develop` with `llm_main`
2023-09-20 10:05:40 +02:00
Raphael Mitsch
163ec6fba8
Merge pull request #12993 from explosion/master
Synch `llm_main` with `master`
2023-09-20 10:04:35 +02:00
Sofie Van Landeghem
8f0d6b0a8c
Fix in BertTokenizer docs (#12955)
* fix BertWordPieceTokenizer constructor call

* fix

* Update website/docs/usage/linguistic-features.mdx

---------

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
2023-09-13 13:21:58 +02:00
Adriane Boyd
36d4767aca
Skip project remotes test for python 3.12 (#12980)
`weasel` (using `cloudpathlib`) does not currently support remote paths
for python 3.12.
2023-09-13 13:16:05 +02:00
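
A hedged sketch of what such a skip could look like (test name and reason wording are assumptions):
```python
import sys
import pytest

@pytest.mark.skipif(
    sys.version_info >= (3, 12),
    reason="weasel/cloudpathlib do not yet support remote paths on Python 3.12",
)
def test_project_remotes(tmp_path):
    ...  # push/pull against a local "remote" directory
```
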
Sofie Van Landeghem
013762be41
Few spacy-llm doc fixes (#12969)
* fix construction example

* shorten task-specific factory list

* small edits to HF models

* small edit to API models

* typo

* fix space

Co-authored-by: Raphael Mitsch <r.mitsch@outlook.com>

---------

Co-authored-by: Raphael Mitsch <r.mitsch@outlook.com>
2023-09-08 11:35:38 +02:00
Sofie Van Landeghem
def7013eec
Docs for spacy-llm 0.5.0 (#12968)
* Update incorrect example config. (#12893)

* spacy-llm docs cleanup (#12945)

* Shorten NER section

* fix template references

* simplify sections

* set temperature to 0.0 in examples

* condense model information

* fix parameters for REST models

* set temperature to 0.0

* spelling fix

* trigger preview

* fix quotes

* add small note on noop.v1

* move up example noop config

* set appropriate model example configs

* explain config

* fix

Co-authored-by: Raphael Mitsch <r.mitsch@outlook.com>

---------

Co-authored-by: Raphael Mitsch <r.mitsch@outlook.com>

* Docs for ner.v3 and spancat.v3 spacy-llm tasks (#12949)

* formatting

* update usage table with NER.v3

* fix typo in links

* v3 overview of parameters

* add spancat.v3

* add further v3 explanations

* remove TODO comment

* few more small fixes

* Add doc section on LLM + task factories (#12905)

* Add section on LLM + task factories.

* Apply suggestions from code review

---------

Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>

* add default config to openai models (#12961)

* Docs for spacy-llm 0.5.0 (#12967)

* simplify Python example

* simplify Python example

* Refer only to latest OpenAI model versions from usage doc

* Typo fix

Co-authored-by: Raphael Mitsch <r.mitsch@outlook.com>

* clarify accuracy claim

---------

Co-authored-by: Raphael Mitsch <r.mitsch@outlook.com>

---------

Co-authored-by: Raphael Mitsch <r.mitsch@outlook.com>
2023-09-08 10:25:14 +02:00
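
To give a feel for the kind of example these docs describe, here is a hedged configuration sketch (registry and model names follow the spacy-llm 0.5.x docs, but treat them as assumptions rather than the exact snippet from the PR; it requires `spacy-llm` installed and an OpenAI API key):
```python
import spacy

nlp = spacy.blank("en")
# Assumed registry names: the "spacy.NER.v3" task and "spacy.GPT-3-5.v2" model.
nlp.add_pipe(
    "llm",
    config={
        "task": {"@llm_tasks": "spacy.NER.v3", "labels": ["PERSON", "ORG", "GPE"]},
        "model": {
            "@llm_models": "spacy.GPT-3-5.v2",
            "config": {"temperature": 0.0},  # deterministic output, as in the docs
        },
    },
)
doc = nlp("Jack Dorsey founded Twitter in San Francisco.")
print([(ent.text, ent.label_) for ent in doc.ents])
```
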
Magdalena Aniol
cc78847688
fix training.batch_size example (#12963) 2023-09-06 16:38:13 +02:00
Sofie Van Landeghem
6d1f6d9a23
Fix LLM usage example (#12950)
* fix usage example

* revert back to v2 to allow hot fix on main
2023-09-04 09:05:50 +02:00
Sofie Van Landeghem
5c1f9264c2
fix typo in link (#12948)
* fix typo in link

* fix REL.v1 parameter
2023-09-01 13:47:20 +02:00
David Berenstein
065ead4eed
updated add_pipe docs (#12947) 2023-09-01 11:05:36 +02:00
vincent d warmerdam
3e4264899c
Update large-language-models.mdx (#12944) 2023-08-30 11:58:14 +02:00
Ines Montani
52758e1afa Add headers to netlify.toml [ci skip] 2023-08-30 11:55:23 +02:00
Vinit Ravishankar
c2303858e6
Documentation for spacy-curated-transformers (#12677)
* initial

* initial documentation run

* fix typo

* Remove mentions of Torchscript and quantization

Both are disabled in the initial release of `spacy-curated-transformers`.

* Fix `piece_encoder` entries

* Remove `spacy-transformers`-specific warning

* Fix duplicate entries in tables

* Doc fixes

Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>

* Remove type aliases

* Fix copy-paste typo

* Change `debug pieces` version tag to `3.7`

* Set curated transformers API version to `3.7`

* Fix transformer listener naming

* Add docs for `init fill-config-transformer`

* Update CLI command invocation syntax

* Update intro section of the pipeline component docs

* Fix source URL

* Add a note to the architectures section about the `init fill-config-transformer` CLI command

* Apply suggestions from code review

Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>

* Update CLI command name, args

* Remove hyphen from the `curated-transformers.mdx` filename

* Fix links

* Remove placeholder text

* Add text to the model/tokenizer loader sections

* Fill in the `DocTransformerOutput` section

* Formatting fixes

* Add curated transformer page to API docs sidebar

* More formatting fixes

* Remove TODO comment

* Remove outdated info about default config

* Apply suggestions from code review

Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>

* Add link to HF model hub

* `prettier`

---------

Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
2023-08-29 17:52:16 +02:00
PD Hall
d8a32c1050
docs: fix ngram_range_suggester max_size description (#12939) 2023-08-29 11:10:58 +02:00
Sofie Van Landeghem
869cc4ab0b
warn when an unsupported/unknown key is given to the dependency matcher (#12928) 2023-08-22 09:03:35 +02:00
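
For context, a small `DependencyMatcher` example (the loaded pipeline is an assumption; the pattern keys are the documented ones). With this change, a misspelled key such as `"REL_OPS"` triggers a warning instead of being silently ignored:
```python
import spacy
from spacy.matcher import DependencyMatcher

nlp = spacy.load("en_core_web_sm")  # assumes the small English pipeline is installed
matcher = DependencyMatcher(nlp.vocab)

pattern = [
    {"RIGHT_ID": "verb", "RIGHT_ATTRS": {"POS": "VERB"}},
    {
        "LEFT_ID": "verb",
        "REL_OP": ">",
        "RIGHT_ID": "subject",
        "RIGHT_ATTRS": {"DEP": "nsubj"},
        # An unknown key here (e.g. "REL_OPS") now produces a warning.
    },
]
matcher.add("VERB_SUBJECT", [pattern])
doc = nlp("The cat chased the mouse.")
print(matcher(doc))
```
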
Connor Brinton
6dd56868de
📝 Fix formula for receptive field in docs (#12918)
spaCy's `HashEmbedCNN` layer performs convolutions over tokens to produce
contextualized embeddings using a `MaxoutWindowEncoder` layer. These
convolutions are implemented using Thinc's `expand_window` layer, which
concatenates `window_size` neighboring sequence items on either side of
the sequence item being processed. This is repeated across `depth`
convolutional layers.

For example, consider the sequence "ABCDE" and a `MaxoutWindowEncoder`
layer with a context window of 1 and a depth of 2. We'll focus on the
token "C". We can visually represent the contextual embedding produced
for "C" as:
```mermaid
flowchart LR
A0(A<sub>0</sub>)
B0(B<sub>0</sub>)
C0(C<sub>0</sub>)
D0(D<sub>0</sub>)
E0(E<sub>0</sub>)
B1(B<sub>1</sub>)
C1(C<sub>1</sub>)
D1(D<sub>1</sub>)
C2(C<sub>2</sub>)
A0 --> B1
B0 --> B1
C0 --> B1
B0 --> C1
C0 --> C1
D0 --> C1
C0 --> D1
D0 --> D1
E0 --> D1
B1 --> C2
C1 --> C2
D1 --> C2
```

Described in words, this graph shows that before the first layer of the
convolution, the "receptive field" centered at each token consists only
of that same token. That is to say, we have a receptive field of 1.
The first layer of the convolution adds one neighboring token on either
side to the receptive field. Since this is done on both sides, the
receptive field increases by 2, giving the first layer a receptive field
of 3. The second layer of the convolutions adds an _additional_
neighboring token on either side to the receptive field, giving a final
receptive field of 5.

However, this doesn't match the formula currently given in the docs,
which read:
> The receptive field of the CNN will be
> `depth * (window_size * 2 + 1)`, so a 4-layer network with a window
> size of `2` will be sensitive to 20 words at a time.

Substituting in our depth of 2 and window size of 1, this formula gives
us a receptive field of:
```
depth * (window_size * 2 + 1)
= 2 * (1 * 2 + 1)
= 2 * (2 + 1)
= 2 * 3
= 6
```

This not only doesn't match our computations from above, it's also an
even number! This is suspicious, since the receptive field is supposed
to be centered on a token, and not between tokens. Generally, this
formula results in an even number for any even value of `depth`.

The error in this formula is that the adjustment for the center token
is multiplied by the depth, when it should occur only once. The
corrected formula, `depth * window_size * 2 + 1`, gives the correct
value for our small example from above:
```
depth * window_size * 2 + 1
= 2 * 1 * 2 + 1
= 4 + 1
= 5
```
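
As a quick cross-check, a few lines of Python that simulate the layer-by-layer growth and compare it against the corrected formula:
```python
def receptive_field(depth: int, window_size: int) -> int:
    field = 1  # before any convolution, a token only sees itself
    for _ in range(depth):
        field += 2 * window_size  # each layer adds window_size tokens per side
    return field

assert receptive_field(2, 1) == 2 * 1 * 2 + 1 == 5    # the "ABCDE" example
assert receptive_field(4, 2) == 4 * 2 * 2 + 1 == 17   # the docs example (not 20)
```
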

These changes update the docs to correct the receptive field formula and
the example receptive field size.
2023-08-21 10:52:32 +02:00