diff --git a/website/docs/api/cli.mdx b/website/docs/api/cli.mdx
index 92a123241..4ce98b02f 100644
--- a/website/docs/api/cli.mdx
+++ b/website/docs/api/cli.mdx
@@ -167,7 +167,7 @@ validation error with more details.
>
> #### Example diff
>
-> ![Screenshot of visual diff in terminal](../images/cli_init_fill-config_diff.jpg)
+> ![Screenshot of visual diff in terminal](/images/cli_init_fill-config_diff.jpg)

```cli
$ python -m spacy init fill-config [base_path] [output_file] [--diff]
@@ -1490,7 +1490,7 @@ $ python -m spacy project document [project_dir] [--output] [--no-emoji]
For more examples, see the templates in our
[`projects`](https://github.com/explosion/projects) repo.

-![Screenshot of auto-generated Markdown Readme](../images/project_document.jpg)
+![Screenshot of auto-generated Markdown Readme](/images/project_document.jpg)
diff --git a/website/docs/models/index.mdx b/website/docs/models/index.mdx
index 560c04675..ca077fe91 100644
--- a/website/docs/models/index.mdx
+++ b/website/docs/models/index.mdx
@@ -91,7 +91,7 @@ Main changes from spaCy v2 models:

### CNN/CPU pipeline design {#design-cnn}

-![Components and their dependencies in the CNN pipelines](../images/pipeline-design.svg)
+![Components and their dependencies in the CNN pipelines](/images/pipeline-design.svg)

In the `sm`/`md`/`lg` models:
diff --git a/website/docs/usage/101/_architecture.mdx b/website/docs/usage/101/_architecture.mdx
index 4ebca2756..4089e5160 100644
--- a/website/docs/usage/101/_architecture.mdx
+++ b/website/docs/usage/101/_architecture.mdx
@@ -14,7 +14,7 @@ of the pipeline. The `Language` object coordinates these components. It takes
raw text and sends it through the pipeline, returning an **annotated document**.
It also orchestrates training and serialization.

-![Library architecture](../../images/architecture.svg)
+![Library architecture](/images/architecture.svg)

### Container objects {#architecture-containers}
@@ -39,7 +39,7 @@ rule-based modifications to the `Doc`. spaCy provides a range of built-in
components for different language processing tasks and also allows adding
[custom components](/usage/processing-pipelines#custom-components).

-![The processing pipeline](../../images/pipeline.svg)
+![The processing pipeline](/images/pipeline.svg)

| Name | Description |
| ----------------------------------------------- | ------------------------------------------------------------------------------------------- |
diff --git a/website/docs/usage/101/_pipelines.mdx b/website/docs/usage/101/_pipelines.mdx
index f43219f41..92c4cbf02 100644
--- a/website/docs/usage/101/_pipelines.mdx
+++ b/website/docs/usage/101/_pipelines.mdx
@@ -5,7 +5,7 @@ referred to as the **processing pipeline**. The pipeline used by the
and an entity recognizer. Each pipeline component returns the processed `Doc`,
which is then passed on to the next component.

-![The processing pipeline](../../images/pipeline.svg)
+![The processing pipeline](/images/pipeline.svg)

> - **Name**: ID of the pipeline component.
> - **Component:** spaCy's implementation of the component.
diff --git a/website/docs/usage/101/_tokenization.mdx b/website/docs/usage/101/_tokenization.mdx
index b82150f1a..2835f0fb9 100644
--- a/website/docs/usage/101/_tokenization.mdx
+++ b/website/docs/usage/101/_tokenization.mdx
@@ -41,7 +41,7 @@ marks.
> - **Suffix:** Character(s) at the end, e.g. `km`, `)`, `”`, `!`.
> - **Infix:** Character(s) in between, e.g. `-`, `--`, `/`, `…`.
-![Example of the tokenization process](../../images/tokenization.svg)
+![Example of the tokenization process](/images/tokenization.svg)

While punctuation rules are usually pretty general, tokenizer exceptions
strongly depend on the specifics of the individual language. This is why each
diff --git a/website/docs/usage/101/_training.mdx b/website/docs/usage/101/_training.mdx
index d904c3631..6587ea339 100644
--- a/website/docs/usage/101/_training.mdx
+++ b/website/docs/usage/101/_training.mdx
@@ -21,7 +21,7 @@ predictions become more similar to the reference labels over time.
> Minimising the gradient of the weights should result in predictions that are
> closer to the reference labels on the training data.

-![The training process](../../images/training.svg)
+![The training process](/images/training.svg)

When training a model, we don't just want it to memorize our examples – we want
it to come up with a theory that can be **generalized across unseen data**.
diff --git a/website/docs/usage/101/_vectors-similarity.mdx b/website/docs/usage/101/_vectors-similarity.mdx
index f05fedd7d..e64bcf0a6 100644
--- a/website/docs/usage/101/_vectors-similarity.mdx
+++ b/website/docs/usage/101/_vectors-similarity.mdx
@@ -136,7 +136,7 @@ useful for your purpose. Here are some important considerations to keep in
mind:

-[![](../../images/sense2vec.jpg)](https://github.com/explosion/sense2vec)
+[![](/images/sense2vec.jpg)](https://github.com/explosion/sense2vec)

[`sense2vec`](https://github.com/explosion/sense2vec) is a library developed by
us that builds on top of spaCy and lets you train and query more interesting and
diff --git a/website/docs/usage/embeddings-transformers.mdx b/website/docs/usage/embeddings-transformers.mdx
index 4173a6c37..828c2f2a3 100644
--- a/website/docs/usage/embeddings-transformers.mdx
+++ b/website/docs/usage/embeddings-transformers.mdx
@@ -85,7 +85,7 @@ difficult to swap components or retrain parts of the pipeline. Multi-task
learning can affect your accuracy (either positively or negatively), and may
require some retuning of your hyper-parameters.

-![Pipeline components using a shared embedding component vs. independent embedding layers](../images/tok2vec.svg)
+![Pipeline components using a shared embedding component vs. independent embedding layers](/images/tok2vec.svg)

| Shared | Independent |
| ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------- |
@@ -99,7 +99,7 @@ components by adding a [`Transformer`](/api/transformer) or
later in the pipeline can "connect" to it by including a **listener layer** like
[Tok2VecListener](/api/architectures#Tok2VecListener) within their model.

-![Pipeline components listening to shared embedding component](../images/tok2vec-listener.svg)
+![Pipeline components listening to shared embedding component](/images/tok2vec-listener.svg)

At the beginning of training, the [`Tok2Vec`](/api/tok2vec) component will grab
a reference to the relevant listener layers in the rest of your pipeline. When
@@ -249,7 +249,7 @@ the standard way, like any other spaCy pipeline. Instead of using the
transformers as subnetworks directly, you can also use them via the
[`Transformer`](/api/transformer) pipeline component.
-![The processing pipeline with the transformer component](../images/pipeline_transformer.svg)
+![The processing pipeline with the transformer component](/images/pipeline_transformer.svg)

The `Transformer` component sets the
[`Doc._.trf_data`](/api/transformer#custom_attributes) extension attribute,
diff --git a/website/docs/usage/layers-architectures.mdx b/website/docs/usage/layers-architectures.mdx
index 9f8e4ed08..4d435dbc7 100644
--- a/website/docs/usage/layers-architectures.mdx
+++ b/website/docs/usage/layers-architectures.mdx
@@ -111,7 +111,7 @@ If you're using a modern editor like Visual Studio Code, you can
custom Thinc plugin and get live feedback about mismatched types as you write
code.

-[![](../images/thinc_mypy.jpg)](https://thinc.ai/docs/usage-type-checking#linting)
+[![](/images/thinc_mypy.jpg)](https://thinc.ai/docs/usage-type-checking#linting)

@@ -785,7 +785,7 @@ To use our new relation extraction model as part of a custom
[trainable component](/usage/processing-pipelines#trainable-components), we
create a subclass of [`TrainablePipe`](/api/pipe) that holds the model.

-![Illustration of Pipe methods](../images/trainable_component.svg)
+![Illustration of Pipe methods](/images/trainable_component.svg)

```python
### Pipeline component skeleton
diff --git a/website/docs/usage/linguistic-features.mdx b/website/docs/usage/linguistic-features.mdx
index d662ad447..1e17ae633 100644
--- a/website/docs/usage/linguistic-features.mdx
+++ b/website/docs/usage/linguistic-features.mdx
@@ -1154,7 +1154,7 @@ different signature from all the other components: it takes a text and returns
a [`Doc`](/api/doc), whereas all other components expect to already receive a
tokenized `Doc`.

-![The processing pipeline](../images/pipeline.svg)
+![The processing pipeline](/images/pipeline.svg)

To overwrite the existing tokenizer, you need to replace `nlp.tokenizer` with a
custom function that takes a text and returns a [`Doc`](/api/doc).
diff --git a/website/docs/usage/processing-pipelines.mdx b/website/docs/usage/processing-pipelines.mdx
index 6638676c1..31a74bc19 100644
--- a/website/docs/usage/processing-pipelines.mdx
+++ b/website/docs/usage/processing-pipelines.mdx
@@ -1158,7 +1158,7 @@ pipeline is loaded. For more background on this, see the usage guides on the
[config lifecycle](/usage/training#config-lifecycle) and
[custom initialization](/usage/training#initialization).

-![Illustration of pipeline lifecycle](../images/lifecycle.svg)
+![Illustration of pipeline lifecycle](/images/lifecycle.svg)

A component's `initialize` method needs to take at least **two named
arguments**: a `get_examples` callback that gives it access to the training
@@ -1274,7 +1274,7 @@ trainable components that have their own model instance, make predictions over
`Doc` objects and can be updated using [`spacy train`](/api/cli#train). This
lets you plug fully custom machine learning components into your pipeline.

-![Illustration of Pipe methods](../images/trainable_component.svg)
+![Illustration of Pipe methods](/images/trainable_component.svg)

You'll need the following:
diff --git a/website/docs/usage/projects.mdx b/website/docs/usage/projects.mdx
index c90a50924..72c6fba59 100644
--- a/website/docs/usage/projects.mdx
+++ b/website/docs/usage/projects.mdx
@@ -27,7 +27,7 @@ and share your results with your team.
spaCy projects can be used via the new
[`spacy project`](/api/cli#project) command and we provide templates in our
[`projects`](https://github.com/explosion/projects) repo.
-![Illustration of project workflow and commands](../images/projects.svg)
+![Illustration of project workflow and commands](/images/projects.svg)

@@ -594,7 +594,7 @@ commands:
> For more examples, see the [`projects`](https://github.com/explosion/projects)
> repo.
>
-> ![Screenshot of auto-generated Markdown Readme](../images/project_document.jpg)
+> ![Screenshot of auto-generated Markdown Readme](/images/project_document.jpg)

When your custom project is ready and you want to share it with others, you can
use the [`spacy project document`](/api/cli#project-document) command to
@@ -887,7 +887,7 @@ commands:
> #### Example train curve output
>
-> [![Screenshot of train curve terminal output](../images/prodigy_train_curve.jpg)](https://prodi.gy/docs/recipes#train-curve)
+> [![Screenshot of train curve terminal output](/images/prodigy_train_curve.jpg)](https://prodi.gy/docs/recipes#train-curve)

The [`train-curve`](https://prodi.gy/docs/recipes#train-curve) recipe is another
cool workflow you can include in your project. It will run the training with
@@ -942,7 +942,7 @@ full embedded visualizer, as well as individual components.
> $ pip install spacy-streamlit --pre
> ```

-![](../images/spacy-streamlit.png)
+![](/images/spacy-streamlit.png)

Using [`spacy-streamlit`](https://github.com/explosion/spacy-streamlit), your
projects can easily define their own scripts that spin up an interactive
@@ -1054,9 +1054,9 @@ and you'll be able to see the impact it has on your results.
> model_log_interval = 1000
> ```

-![Screenshot: Visualized training results](../images/wandb1.jpg)
+![Screenshot: Visualized training results](/images/wandb1.jpg)

-![Screenshot: Parameter importance using config values](../images/wandb2.jpg 'Parameter importance using config values')
+![Screenshot: Parameter importance using config values](/images/wandb2.jpg 'Parameter importance using config values')

@@ -1107,7 +1107,7 @@ After uploading, you will see the live URL of your pipeline packages, as well
as the direct URL to the model wheel you can install via `pip install`. You'll
also be able to test your pipeline interactively from your browser:

-![Screenshot: interactive NER visualizer](../images/huggingface_hub.jpg)
+![Screenshot: interactive NER visualizer](/images/huggingface_hub.jpg)

In your `project.yml`, you can add a command that uploads your trained and
packaged pipeline to the hub. You can either run this as a manual step, or
diff --git a/website/docs/usage/rule-based-matching.mdx b/website/docs/usage/rule-based-matching.mdx
index 55ba058dd..4ee23e985 100644
--- a/website/docs/usage/rule-based-matching.mdx
+++ b/website/docs/usage/rule-based-matching.mdx
@@ -208,7 +208,7 @@ you need to describe fields like this.

-[![Matcher demo](../images/matcher-demo.jpg)](https://explosion.ai/demos/matcher)
+[![Matcher demo](/images/matcher-demo.jpg)](https://explosion.ai/demos/matcher)

The [Matcher Explorer](https://explosion.ai/demos/matcher) lets you test the
rule-based `Matcher` by creating token patterns interactively and running them
@@ -1211,7 +1211,7 @@ each new token needs to be linked to an existing token on its left. As for
`founded` in this example, a token may be linked to more than one token on its
right:

-![Dependency matcher pattern](../images/dep-match-diagram.svg)
+![Dependency matcher pattern](/images/dep-match-diagram.svg)

The full pattern comes together as shown in the example below:
@@ -1752,7 +1752,7 @@ print([(ent.text, ent.label_) for ent in doc.ents])
> - `VBD`: Verb, past tense.
> - `IN`: Conjunction, subordinating or preposition.

-![Visualization of dependency parse](../images/displacy-model-rules.svg "[`spacy.displacy`](/api/top-level#displacy) visualization with `options={'fine_grained': True}` to output the fine-grained part-of-speech tags, i.e. `Token.tag_`")
+![Visualization of dependency parse](/images/displacy-model-rules.svg "[`spacy.displacy`](/api/top-level#displacy) visualization with `options={'fine_grained': True}` to output the fine-grained part-of-speech tags, i.e. `Token.tag_`")

In this example, "worked" is the root of the sentence and is a past tense verb.
Its subject is "Alex Smith", the person who worked. "at Acme Corp Inc." is a
@@ -1835,7 +1835,7 @@ notice that our current logic fails and doesn't correctly detect the company as
a past organization. That's because the root is a participle and the tense
information is in the attached auxiliary "was":

-![Visualization of dependency parse](../images/displacy-model-rules2.svg)
+![Visualization of dependency parse](/images/displacy-model-rules2.svg)

To solve this, we can adjust the rules to also check for the above construction:
diff --git a/website/docs/usage/spacy-101.mdx b/website/docs/usage/spacy-101.mdx
index eaff7bb3d..ce9decbd9 100644
--- a/website/docs/usage/spacy-101.mdx
+++ b/website/docs/usage/spacy-101.mdx
@@ -30,7 +30,7 @@ quick introduction.

-[![Advanced NLP with spaCy](../images/course.jpg)](https://course.spacy.io)
+[![Advanced NLP with spaCy](/images/course.jpg)](https://course.spacy.io)

In this course you'll learn how to use spaCy to build advanced natural language
understanding systems, using both rule-based and machine learning approaches. It
@@ -292,7 +292,7 @@ and part-of-speech tags like "VERB" are also encoded. Internally, spaCy only
> - **StringStore**: The dictionary mapping hash values to strings, for example
> `3197928453018144401` → "coffee".

-![Doc, Vocab, Lexeme and StringStore](../images/vocab_stringstore.svg)
+![Doc, Vocab, Lexeme and StringStore](/images/vocab_stringstore.svg)

If you process lots of documents containing the word "coffee" in all kinds of
different contexts, storing the exact string "coffee" every time would take up
@@ -437,7 +437,7 @@ source of truth", both at **training** and **runtime**.
> initial_rate = 0.01
> ```

-![Illustration of pipeline lifecycle](../images/lifecycle.svg)
+![Illustration of pipeline lifecycle](/images/lifecycle.svg)

@@ -466,7 +466,7 @@ configured via a single training config.
> width = 128
> ```

-![Illustration of Pipe methods](../images/trainable_component.svg)
+![Illustration of Pipe methods](/images/trainable_component.svg)

diff --git a/website/docs/usage/training.mdx b/website/docs/usage/training.mdx
index 3366600bb..41dfe4a40 100644
--- a/website/docs/usage/training.mdx
+++ b/website/docs/usage/training.mdx
@@ -23,7 +23,7 @@ import Training101 from 'usage/101/_training.mdx'

-[![Prodigy: Radically efficient machine teaching](../images/prodigy.jpg)](https://prodi.gy)
+[![Prodigy: Radically efficient machine teaching](/images/prodigy.jpg)](https://prodi.gy)

If you need to label a lot of data, check out [Prodigy](https://prodi.gy), a
new, active learning-powered annotation tool we've developed. Prodigy is fast
@@ -222,7 +222,7 @@ config is available as [`nlp.config`](/api/language#config) and it includes all
information about the pipeline, as well as the settings used to train and
initialize it.
-![Illustration of pipeline lifecycle](../images/lifecycle.svg)
+![Illustration of pipeline lifecycle](/images/lifecycle.svg)

At runtime spaCy will only use the `[nlp]` and `[components]` blocks of the
config and load all data, including tokenization rules, model weights and other
@@ -1120,7 +1120,7 @@ because the component settings required for training (load data from an
external file) wouldn't match the component settings required at runtime (load
what's included with the saved `nlp` object and don't depend on external file).

-![Illustration of pipeline lifecycle](../images/lifecycle.svg)
+![Illustration of pipeline lifecycle](/images/lifecycle.svg)

@@ -1572,6 +1572,77 @@ token-based annotations like the dependency parse or entity labels, you'll need
to take care to adjust the `Example` object so its annotations match and remain
valid.

+## Parallel & distributed training with Ray {#parallel-training}
+
+> #### Installation
+>
+> ```cli
+> $ pip install -U %%SPACY_PKG_NAME[ray]%%SPACY_PKG_FLAGS
+> # Check that the CLI is registered
+> $ python -m spacy ray --help
+> ```
+
+[Ray](https://ray.io/) is a fast and simple framework for building and running
+**distributed applications**. You can use Ray to train spaCy on one or more
+remote machines, potentially speeding up your training process. Parallel
+training won't always be faster though – it depends on your batch size, models,
+and hardware.
+
+
+
+To use Ray with spaCy, you need the
+[`spacy-ray`](https://github.com/explosion/spacy-ray) package installed.
+Installing the package will automatically add the `ray` command to the spaCy
+CLI.
+
+
+
+The [`spacy ray train`](/api/cli#ray-train) command follows the same API as
+[`spacy train`](/api/cli#train), with a few extra options to configure the Ray
+setup. You can optionally set the `--address` option to point to your Ray
+cluster. If it's not set, Ray will run locally.
+
+```cli
+python -m spacy ray train config.cfg --n-workers 2
+```
+
+
+
+Get started with parallel training using our project template. It trains a
+simple model on a Universal Dependencies Treebank and lets you parallelize the
+training with Ray.
+
+
+
+### How parallel training works {#parallel-training-details}
+
+Each worker receives a shard of the **data** and builds a copy of the **model
+and optimizer** from the [`config.cfg`](#config). It also has a communication
+channel to **pass gradients and parameters** to the other workers. Additionally,
+each worker is given ownership of a subset of the parameter arrays. Every
+parameter array is owned by exactly one worker, and the workers are given a
+mapping so they know which worker owns which parameter.
+
+![Illustration of setup](/images/spacy-ray.svg)
+
+As training proceeds, every worker will be computing gradients for **all** of
+the model parameters. When they compute gradients for parameters they don't own,
+they'll **send them to the worker** that does own that parameter, along with a
+version identifier so that the owner can decide whether to discard the gradient.
+Workers use the gradients they receive and the ones they compute locally to
+update the parameters they own, and then broadcast the updated array and a new
+version ID to the other workers.
+
+This training procedure is **asynchronous** and **non-blocking**. Workers always
+push their gradient increments and parameter updates; they do not have to pull
+them and block on the result, so the transfers can happen in the background,
+overlapped with the actual training work. The workers also do not have to stop
+and wait for each other ("synchronize") at the start of each batch. This is very
+useful for spaCy, because spaCy is often trained on long documents, which means
+**batches can vary in size** significantly. Uneven workloads make synchronous
+gradient descent inefficient, because if one batch is slow, all of the other
+workers are stuck waiting for it to complete before they can continue.
+
## Internal training API {#api}
diff --git a/website/docs/usage/v2.mdx b/website/docs/usage/v2.mdx
index 210565c11..fc1d302ca 100644
--- a/website/docs/usage/v2.mdx
+++ b/website/docs/usage/v2.mdx
@@ -130,7 +130,7 @@ write any **attributes, properties and methods** to the `Doc`, `Token` and
`Span`. You can add data, implement new features, integrate other libraries with
spaCy or plug in your own machine learning models.

-![The processing pipeline](../images/pipeline.svg)
+![The processing pipeline](/images/pipeline.svg)

diff --git a/website/docs/usage/v3-1.mdx b/website/docs/usage/v3-1.mdx
index 2725cacb9..955044f68 100644
--- a/website/docs/usage/v3-1.mdx
+++ b/website/docs/usage/v3-1.mdx
@@ -76,7 +76,7 @@ This project trains a span categorizer for Indonesian NER.

-[![Prodigy: example of the new manual spans UI](../images/prodigy_spans-manual.jpg)](https://support.prodi.gy/t/3861)
+[![Prodigy: example of the new manual spans UI](/images/prodigy_spans-manual.jpg)](https://support.prodi.gy/t/3861)

The upcoming version of our annotation tool [Prodigy](https://prodi.gy)
(currently available as a [pre-release](https://support.prodi.gy/t/3861) for all
diff --git a/website/docs/usage/v3.mdx b/website/docs/usage/v3.mdx
index b4053a9de..b4f8295a5 100644
--- a/website/docs/usage/v3.mdx
+++ b/website/docs/usage/v3.mdx
@@ -86,7 +86,7 @@ transformer support interoperates with [PyTorch](https://pytorch.org) and the
[HuggingFace `transformers`](https://huggingface.co/transformers/) library,
giving you access to thousands of pretrained models for your pipelines.

-![Pipeline components listening to shared embedding component](../images/tok2vec-listener.svg)
+![Pipeline components listening to shared embedding component](/images/tok2vec-listener.svg)

import Benchmarks from 'usage/_benchmarks-models.mdx'

@@ -158,7 +158,7 @@ your pipeline. Some settings can also be registered **functions** that you can
swap out and customize, making it easy to implement your own custom models and
architectures.

-![Illustration of pipeline lifecycle](../images/lifecycle.svg)
+![Illustration of pipeline lifecycle](/images/lifecycle.svg)

@@ -198,7 +198,7 @@ follow the same unified [`Model`](https://thinc.ai/docs/api-model) API and each
`Model` can also be used as a sublayer of a larger network, allowing you to
freely combine implementations from different frameworks into a single model.

-![Illustration of Pipe methods](../images/trainable_component.svg)
+![Illustration of Pipe methods](/images/trainable_component.svg)

@@ -234,7 +234,7 @@ project template, adjust it to fit your needs, load in your data, train a
pipeline, export it as a Python package, upload your outputs to a remote storage
and share your results with your team.
-![Illustration of project workflow and commands](../images/projects.svg)
+![Illustration of project workflow and commands](/images/projects.svg)

spaCy projects also make it easy to **integrate with other tools** in the data
science and machine learning ecosystem, including [DVC](/usage/projects#dvc) for
@@ -283,7 +283,7 @@ the [`ray`](/api/cli#ray) command to your spaCy CLI if it's installed in the
same environment. You can then run [`spacy ray train`](/api/cli#ray-train) for
parallel training.

-![Illustration of setup](../images/spacy-ray.svg)
+![Illustration of setup](/images/spacy-ray.svg)

@@ -386,7 +386,7 @@ A pattern added to the dependency matcher consists of a **list of
dictionaries**, with each dictionary describing a **token to match** and its
**relation to an existing token** in the pattern.

-![Dependency matcher pattern](../images/dep-match-diagram.svg)
+![Dependency matcher pattern](/images/dep-match-diagram.svg)

@@ -494,7 +494,7 @@ format for documenting argument and return types.

-[![Library architecture](../images/architecture.svg)](/api)
+[![Library architecture](/images/architecture.svg)](/api)

diff --git a/website/docs/usage/visualizers.mdx b/website/docs/usage/visualizers.mdx
index b0c02db60..f8a9a893b 100644
--- a/website/docs/usage/visualizers.mdx
+++ b/website/docs/usage/visualizers.mdx
@@ -44,7 +44,7 @@ doc = nlp("This is a sentence.")
displacy.serve(doc, style="dep")
```

-![displaCy visualizer](../images/displacy.svg)
+![displaCy visualizer](/images/displacy.svg)

The argument `options` lets you specify a dictionary of settings to customize
the layout, for example:
@@ -77,7 +77,7 @@ For a list of all available options, see the
> displacy.serve(doc, style="dep", options=options)
> ```

-![displaCy visualizer (compact mode)](../images/displacy-compact.svg)
+![displaCy visualizer (compact mode)](/images/displacy-compact.svg)

### Visualizing long texts {#dep-long-text new="2.0.12"}

@@ -267,7 +267,7 @@ rendering if auto-detection fails.

-![displaCy visualizer in a Jupyter notebook](../images/displacy_jupyter.jpg)
+![displaCy visualizer in a Jupyter notebook](/images/displacy_jupyter.jpg)

Internally, displaCy imports `display` and `HTML` from `IPython.core.display`
and returns a Jupyter HTML object. If you were doing it manually, it'd look like
@@ -455,6 +455,6 @@ Alternatively, if you're using [Streamlit](https://streamlit.io), check out the
helps you integrate spaCy visualizations into your apps. It includes a full
embedded visualizer, as well as individual components.
-![](../images/spacy-streamlit.png)
+![](/images/spacy-streamlit.png)

diff --git a/website/docs/images/architecture.svg b/website/public/images/architecture.svg
similarity index 100%
rename from website/docs/images/architecture.svg
rename to website/public/images/architecture.svg
diff --git a/website/docs/images/cli_init_fill-config_diff.jpg b/website/public/images/cli_init_fill-config_diff.jpg
similarity index 100%
rename from website/docs/images/cli_init_fill-config_diff.jpg
rename to website/public/images/cli_init_fill-config_diff.jpg
diff --git a/website/docs/images/course.jpg b/website/public/images/course.jpg
similarity index 100%
rename from website/docs/images/course.jpg
rename to website/public/images/course.jpg
diff --git a/website/docs/images/dep-match-diagram.svg b/website/public/images/dep-match-diagram.svg
similarity index 100%
rename from website/docs/images/dep-match-diagram.svg
rename to website/public/images/dep-match-diagram.svg
diff --git a/website/docs/images/displacy-compact.svg b/website/public/images/displacy-compact.svg
similarity index 100%
rename from website/docs/images/displacy-compact.svg
rename to website/public/images/displacy-compact.svg
diff --git a/website/docs/images/displacy-custom-parser.svg b/website/public/images/displacy-custom-parser.svg
similarity index 100%
rename from website/docs/images/displacy-custom-parser.svg
rename to website/public/images/displacy-custom-parser.svg
diff --git a/website/docs/images/displacy-dep-founded.html b/website/public/images/displacy-dep-founded.html
similarity index 100%
rename from website/docs/images/displacy-dep-founded.html
rename to website/public/images/displacy-dep-founded.html
diff --git a/website/docs/images/displacy-ent-custom.html b/website/public/images/displacy-ent-custom.html
similarity index 100%
rename from website/docs/images/displacy-ent-custom.html
rename to website/public/images/displacy-ent-custom.html
diff --git a/website/docs/images/displacy-ent-snek.html b/website/public/images/displacy-ent-snek.html
similarity index 100%
rename from website/docs/images/displacy-ent-snek.html
rename to website/public/images/displacy-ent-snek.html
diff --git a/website/docs/images/displacy-ent1.html b/website/public/images/displacy-ent1.html
similarity index 100%
rename from website/docs/images/displacy-ent1.html
rename to website/public/images/displacy-ent1.html
diff --git a/website/docs/images/displacy-ent2.html b/website/public/images/displacy-ent2.html
similarity index 100%
rename from website/docs/images/displacy-ent2.html
rename to website/public/images/displacy-ent2.html
diff --git a/website/docs/images/displacy-long.html b/website/public/images/displacy-long.html
similarity index 100%
rename from website/docs/images/displacy-long.html
rename to website/public/images/displacy-long.html
diff --git a/website/docs/images/displacy-long2.html b/website/public/images/displacy-long2.html
similarity index 100%
rename from website/docs/images/displacy-long2.html
rename to website/public/images/displacy-long2.html
diff --git a/website/docs/images/displacy-model-rules.svg b/website/public/images/displacy-model-rules.svg
similarity index 100%
rename from website/docs/images/displacy-model-rules.svg
rename to website/public/images/displacy-model-rules.svg
diff --git a/website/docs/images/displacy-model-rules2.svg b/website/public/images/displacy-model-rules2.svg
similarity index 100%
rename from website/docs/images/displacy-model-rules2.svg
rename to website/public/images/displacy-model-rules2.svg
diff --git a/website/docs/images/displacy-small.svg b/website/public/images/displacy-small.svg
similarity index 100%
rename from website/docs/images/displacy-small.svg
rename to website/public/images/displacy-small.svg
diff --git a/website/docs/images/displacy-span-custom.html b/website/public/images/displacy-span-custom.html
similarity index 100%
rename from website/docs/images/displacy-span-custom.html
rename to website/public/images/displacy-span-custom.html
diff --git a/website/docs/images/displacy-span.html b/website/public/images/displacy-span.html
similarity index 100%
rename from website/docs/images/displacy-span.html
rename to website/public/images/displacy-span.html
diff --git a/website/docs/images/displacy.svg b/website/public/images/displacy.svg
similarity index 100%
rename from website/docs/images/displacy.svg
rename to website/public/images/displacy.svg
diff --git a/website/docs/images/displacy_jupyter.jpg b/website/public/images/displacy_jupyter.jpg
similarity index 100%
rename from website/docs/images/displacy_jupyter.jpg
rename to website/public/images/displacy_jupyter.jpg
diff --git a/website/docs/images/huggingface_hub.jpg b/website/public/images/huggingface_hub.jpg
similarity index 100%
rename from website/docs/images/huggingface_hub.jpg
rename to website/public/images/huggingface_hub.jpg
diff --git a/website/docs/images/lifecycle.svg b/website/public/images/lifecycle.svg
similarity index 100%
rename from website/docs/images/lifecycle.svg
rename to website/public/images/lifecycle.svg
diff --git a/website/docs/images/matcher-demo.jpg b/website/public/images/matcher-demo.jpg
similarity index 100%
rename from website/docs/images/matcher-demo.jpg
rename to website/public/images/matcher-demo.jpg
diff --git a/website/docs/images/pipeline-design.svg b/website/public/images/pipeline-design.svg
similarity index 100%
rename from website/docs/images/pipeline-design.svg
rename to website/public/images/pipeline-design.svg
diff --git a/website/docs/images/pipeline.svg b/website/public/images/pipeline.svg
similarity index 100%
rename from website/docs/images/pipeline.svg
rename to website/public/images/pipeline.svg
diff --git a/website/docs/images/pipeline_transformer.svg b/website/public/images/pipeline_transformer.svg
similarity index 100%
rename from website/docs/images/pipeline_transformer.svg
rename to website/public/images/pipeline_transformer.svg
diff --git a/website/docs/images/prodigy.jpg b/website/public/images/prodigy.jpg
similarity index 100%
rename from website/docs/images/prodigy.jpg
rename to website/public/images/prodigy.jpg
diff --git a/website/docs/images/prodigy_overview.jpg b/website/public/images/prodigy_overview.jpg
similarity index 100%
rename from website/docs/images/prodigy_overview.jpg
rename to website/public/images/prodigy_overview.jpg
diff --git a/website/docs/images/prodigy_spans-manual.jpg b/website/public/images/prodigy_spans-manual.jpg
similarity index 100%
rename from website/docs/images/prodigy_spans-manual.jpg
rename to website/public/images/prodigy_spans-manual.jpg
diff --git a/website/docs/images/prodigy_train_curve.jpg b/website/public/images/prodigy_train_curve.jpg
similarity index 100%
rename from website/docs/images/prodigy_train_curve.jpg
rename to website/public/images/prodigy_train_curve.jpg
diff --git a/website/docs/images/project_document.jpg b/website/public/images/project_document.jpg
similarity index 100%
rename from website/docs/images/project_document.jpg
rename to website/public/images/project_document.jpg
diff --git a/website/docs/images/projects.png b/website/public/images/projects.png
similarity index 100%
rename from website/docs/images/projects.png
rename to website/public/images/projects.png
diff --git a/website/docs/images/projects.svg b/website/public/images/projects.svg
similarity index 100%
rename from website/docs/images/projects.svg
rename to website/public/images/projects.svg
diff --git a/website/docs/images/sense2vec.jpg b/website/public/images/sense2vec.jpg
similarity index 100%
rename from website/docs/images/sense2vec.jpg
rename to website/public/images/sense2vec.jpg
diff --git a/website/docs/images/spacy-ray.svg b/website/public/images/spacy-ray.svg
similarity index 100%
rename from website/docs/images/spacy-ray.svg
rename to website/public/images/spacy-ray.svg
diff --git a/website/docs/images/spacy-streamlit.png b/website/public/images/spacy-streamlit.png
similarity index 100%
rename from website/docs/images/spacy-streamlit.png
rename to website/public/images/spacy-streamlit.png
diff --git a/website/docs/images/spacy-tailored-pipelines_wide.png b/website/public/images/spacy-tailored-pipelines_wide.png
similarity index 100%
rename from website/docs/images/spacy-tailored-pipelines_wide.png
rename to website/public/images/spacy-tailored-pipelines_wide.png
diff --git a/website/docs/images/thinc_mypy.jpg b/website/public/images/thinc_mypy.jpg
similarity index 100%
rename from website/docs/images/thinc_mypy.jpg
rename to website/public/images/thinc_mypy.jpg
diff --git a/website/docs/images/tok2vec-listener.svg b/website/public/images/tok2vec-listener.svg
similarity index 100%
rename from website/docs/images/tok2vec-listener.svg
rename to website/public/images/tok2vec-listener.svg
diff --git a/website/docs/images/tok2vec.svg b/website/public/images/tok2vec.svg
similarity index 100%
rename from website/docs/images/tok2vec.svg
rename to website/public/images/tok2vec.svg
diff --git a/website/docs/images/tokenization.svg b/website/public/images/tokenization.svg
similarity index 100%
rename from website/docs/images/tokenization.svg
rename to website/public/images/tokenization.svg
diff --git a/website/docs/images/trainable_component.svg b/website/public/images/trainable_component.svg
similarity index 100%
rename from website/docs/images/trainable_component.svg
rename to website/public/images/trainable_component.svg
diff --git a/website/docs/images/training.svg b/website/public/images/training.svg
similarity index 100%
rename from website/docs/images/training.svg
rename to website/public/images/training.svg
diff --git a/website/docs/images/vocab_stringstore.svg b/website/public/images/vocab_stringstore.svg
similarity index 100%
rename from website/docs/images/vocab_stringstore.svg
rename to website/public/images/vocab_stringstore.svg
diff --git a/website/docs/images/wandb1.jpg b/website/public/images/wandb1.jpg
similarity index 100%
rename from website/docs/images/wandb1.jpg
rename to website/public/images/wandb1.jpg
diff --git a/website/docs/images/wandb2.jpg b/website/public/images/wandb2.jpg
similarity index 100%
rename from website/docs/images/wandb2.jpg
rename to website/public/images/wandb2.jpg
diff --git a/website/src/widgets/landing.js b/website/src/widgets/landing.js
index 3fb45414f..f3c3573bd 100644
--- a/website/src/widgets/landing.js
+++ b/website/src/widgets/landing.js
@@ -22,10 +22,10 @@ import QuickstartTraining from '../widgets/quickstart-training'
import Project from '../widgets/project'
import Features from '../widgets/features'
import Layout from '../components/layout'
-import courseImage from '../../docs/images/course.jpg'
-import prodigyImage from '../../docs/images/prodigy_overview.jpg'
-import projectsImage from '../../docs/images/projects.png'
-import tailoredPipelinesImage from '../../docs/images/spacy-tailored-pipelines_wide.png'
+import courseImage from '../../public/images/course.jpg'
+import prodigyImage from '../../public/images/prodigy_overview.jpg'
+import projectsImage from '../../public/images/projects.png'
+import tailoredPipelinesImage from '../../public/images/spacy-tailored-pipelines_wide.png'
import { nightly, legacy } from '../../meta/dynamicMeta'
import Benchmarks from '../../docs/usage/_benchmarks-models.mdx'