554df9ef20
* Rename all MDX files to `.mdx`
* Lock current node version (#11885)
* Apply Prettier (#11996)
* Minor website fixes (#11974) [ci skip]
* fix table
* Migrate to Next WEB-17 (#12005)
* Initial commit
* Run `npx create-next-app@13 next-blog`
* Install MDX packages
Following: 77b5f79a4d/packages/next-mdx/readme.md
* Add MDX to Next
* Allow Next to handle `.md` and `.mdx` files.
* Add VSCode extension recommendation
* Disabled TypeScript strict mode for now
* Add prettier
* Apply Prettier to all files
* Make sure to use correct Node version
* Add basic implementation for `MDXRemote`
* Add experimental Rust MDX parser
* Add `/public`
* Add SASS support
* Remove default pages and styling
* Convert to module
This allows using `import/export` syntax
* Add import for custom components
* Add ability to load plugins
* Extract function
This will make the next commit easier to read
* Allow to handle directories for page creation
* Refactoring
* Allow to parse subfolders for pages
* Extract logic
* Redirect `index.mdx` to parent directory
* Disabled ESLint during builds
* Disabled TypeScript during build
* Remove Gatsby from `README.md`
* Rephrase Docker part of `README.md`
* Update project structure in `README.md`
* Move and rename plugins
* Update plugin for wrapping sections
* Add dependencies for plugin
* Use plugin
* Rename wrapper type
* Simplify unnecessary adding of id to sections
The slugified section ids are useless, because they cannot be referenced anywhere anyway. The navigation only works if the section has the same id as the heading.
* Add plugin for custom attributes on Markdown elements
* Add plugin to re-add support for tables
* Add plugin to fix problem with wrapped images
For more details see this issue: https://github.com/mdx-js/mdx/issues/1798
* Add necessary meta data to pages
* Install necessary dependencies
* Remove outdated MDX handling
* Remove reliance on `InlineList`
* Use existing Remark components
* Remove unallowed heading
Before, `h1` components were not overwritten and would never have worked; they aren't used anywhere either.
* Add missing components to MDX
* Add correct styling
* Fix broken list
* Fix broken CSS classes
* Implement layout
* Fix links
* Fix broken images
* Fix pattern image
* Fix heading attributes
* Rename heading attribute
`new` was causing some weird issue, so it was renamed to `version`
* Update comment syntax in MDX
* Merge imports
* Fix markdown rendering inside components
* Add model pages
* Simplify anchors
* Fix default value for theme
* Add Universe index page
* Add Universe categories
* Add Universe projects
* Fix Next problem with copy
Next complains when the server renders something different than the client, therefore we move the differing logic to `useEffect`
* Fix improper component nesting
Next doesn't allow block elements inside a `<p>`
* Replace landing page MDX with page component
* Remove inlined iframe content
* Remove ability to inline HTML content in iFrames
* Remove MDX imports
* Fix problem with image inside link in MDX
* Escape character for MDX
* Fix unescaped characters in MDX
* Fix headings with logo
* Allow to export static HTML pages
* Add prebuild script
This command is automatically run by Next
* Replace `svg-loader` with `react-inlinesvg`
`svg-loader` is no longer maintained
* Fix ESLint `react-hooks/exhaustive-deps`
* Fix dropdowns
* Change code language from `cli` to `bash`
* Remove unnecessary language `none`
* Fix invalid code language
`markdown_` with an underscore was used to basically turn off syntax highlighting, but using unknown languages now throws an error.
* Enable code blocks plugin
* Re-add `InlineCode` component
MDX2 removed the `inlineCode` component
> The special component name `inlineCode` was removed, we recommend to use `pre` for the block version of code, and code for both the block and inline versions
Source: https://mdxjs.com/migrating/v2/#update-mdx-content
* Remove unused code
* Extract function to own file
* Fix code syntax highlighting
* Update syntax for code block meta data
* Remove unused prop
* Fix internal link recognition
There is a problem with regex between Node and browser, and since Next runs the component on both, this creates an error.
`Prop `rel` did not match. Server: "null" Client: "noopener nofollow noreferrer"`
This simplifies the implementation and fixes the above error.
* Replace `react-helmet` with `next/head`
* Fix `className` problem for JSX component
* Fix broken bold markdown
* Convert file to `.mjs` to be used by Node process
* Add plugin to replace strings
* Fix custom table row styling
* Fix problem with `span` inside inline `code`
React doesn't allow a `span` inside an inline `code` element and throws an error in dev mode.
* Add `_document` to be able to customize `<html>` and `<body>`
* Add `lang="en"`
* Store Netlify settings in file
This way we don't need to update via Netlify UI, which can be tricky if changing build settings.
* Add sitemap
* Add Smartypants
* Add PWA support
* Add `manifest.webmanifest`
* Fix bug with anchor links after reloading
There was no need for the previous implementation, since the browser handles this natively. Additionally, the manual scrolling into view was actually broken, because the heading would disappear behind the menu bar.
* Rename custom event
I was googling for ages to find out what kind of event `inview` is, only to figure out it was a custom event with a name that sounds pretty much like a native one. 🫠
* Fix missing comment syntax highlighting
* Refactor Quickstart component
The previous implementation was hiding the irrelevant lines via data-props and dynamically generated CSS. This created problems with Next and was also hard to follow. CSS was used to do what React is supposed to handle.
The new implementation simply filters the list of children (React elements) via their props.
* Fix syntax highlighting for Training Quickstart
* Unify code rendering
* Improve error logging in Juniper
* Fix Juniper component
* Automatically generate "Read Next" link
* Add Plausible
* Use recent DocSearch component and adjust styling
* Fix images
* Turn off image optimization
> Image Optimization using Next.js' default loader is not compatible with `next export`.
We currently deploy to Netlify via `next export`
* Don't build pages starting with `_`
* Remove unused files
* Add Next plugin to Netlify
* Fix button layout
MDX automatically adds `p` tags around text on a new line and Prettier wants to put the text on a new line. Worked around with a JSX string.
* Add 404 page
* Apply Prettier
* Update Prettier for `package.json`
Next sometimes wants to patch `package-lock.json`. The old Prettier setting indented with 4 spaces, but Next always indents with 2 spaces. Since `npm install` automatically uses the indentation from `package.json` for `package-lock.json`, and to avoid the format switching back and forth, both files are now set to 2 spaces.
* Apply Next patch to `package-lock.json`
When starting the dev server Next would warn `warn - Found lockfile missing swc dependencies, patching...` and update the `package-lock.json`. These are the patched changes.
* fix link
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
* small backslash fixes
* adjust to new style
Co-authored-by: Marcus Blättermann <marcus@essenmitsosse.de>
---
title: What's New in v3.1
teaser: New features and how to upgrade
menu:
  - ['New Features', 'features']
  - ['Upgrading Notes', 'upgrading']
---

## New Features {id="features",hidden="true"}

It's been great to see the adoption of the new spaCy v3, which introduced
[transformer-based](/usage/embeddings-transformers) pipelines, a new
[config and training system](/usage/training) for reproducible experiments,
[projects](/usage/projects) for end-to-end workflows, and many
[other features](/usage/v3). Version 3.1 adds more on top of it, including the
ability to use predicted annotations during training, a new `SpanCategorizer`
component for predicting arbitrary and potentially overlapping spans, support
for partial incorrect annotations in the entity recognizer, new trained
pipelines for Catalan and Danish, as well as many bug fixes and improvements.

### Using predicted annotations during training {id="predicted-annotations-training"}

By default, components are updated in isolation during training, which means
that they don't see the predictions of any earlier components in the pipeline.
The new
[`[training.annotating_components]`](/usage/training#annotating-components)
config setting lets you specify pipeline components that should set annotations
on the predicted docs during training. This makes it easy to use the predictions
of a previous component in the pipeline as features for a subsequent component,
e.g. the dependency labels in the tagger:

```ini {title="config.cfg (excerpt)",highlight="7,12"}
[nlp]
pipeline = ["parser", "tagger"]

[components.tagger.model.tok2vec.embed]
@architectures = "spacy.MultiHashEmbed.v1"
width = ${components.tagger.model.tok2vec.encode.width}
attrs = ["NORM","DEP"]
rows = [5000,2500]
include_static_vectors = false

[training]
annotating_components = ["parser"]
```

<Project id="pipelines/tagger_parser_predicted_annotations">

This project shows how to use the `token.dep` attribute predicted by the parser
as a feature for a subsequent tagger component in the pipeline.

</Project>

### SpanCategorizer for predicting arbitrary and overlapping spans {id="spancategorizer",tag="experimental"}

A common task in applied NLP is extracting spans of texts from documents,
including longer phrases or nested expressions. Named entity recognition isn't
the right tool for this problem, since an entity recognizer typically predicts
single token-based tags that are very sensitive to boundaries. This is effective
for proper nouns and self-contained expressions, but less useful for other types
of phrases or overlapping spans. The new
[`SpanCategorizer`](/api/spancategorizer) component and
[SpanCategorizer](/api/architectures#spancategorizer) architecture let you label
arbitrary and potentially overlapping spans of texts. A span categorizer
consists of two parts: a [suggester function](/api/spancategorizer#suggesters)
that proposes candidate spans, which may or may not overlap, and a labeler model
that predicts zero or more labels for each candidate. The predicted spans are
available via the [`Doc.spans`](/api/doc#spans) container.

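As a rough sketch of how the component fits together (the label names and the
`spans_key` value below are only for illustration), a span categorizer can be
added to a pipeline and its predictions read back from `Doc.spans`:

```python
import spacy

nlp = spacy.blank("en")
# "spans_key" controls which Doc.spans group the predictions are written to
spancat = nlp.add_pipe("spancat", config={"spans_key": "sc"})
spancat.add_label("TOPIC")
spancat.add_label("PHRASE")

# After training, the predicted spans can be read from the same group:
# doc = nlp("Some text to analyze")
# spans = doc.spans["sc"]
```
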
<Project id="experimental/ner_spancat">

This project trains a span categorizer for Indonesian NER.

</Project>

<Infobox title="Tip: Create data with Prodigy's new span annotation UI">

<Image
  src="/images/prodigy_spans-manual.jpg"
  href="https://support.prodi.gy/t/3861"
  alt="Prodigy: example of the new manual spans UI"
/>

The upcoming version of our annotation tool [Prodigy](https://prodi.gy)
(currently available as a [pre-release](https://support.prodi.gy/t/3861) for all
users) features a [new workflow and UI](https://support.prodi.gy/t/3861) for
annotating overlapping and nested spans. You can use it to create training data
for spaCy's `SpanCategorizer` component.

</Infobox>

### Update the entity recognizer with partial incorrect annotations {id="negative-samples"}

> #### config.cfg (excerpt)
>
> ```ini
> [components.ner]
> factory = "ner"
> incorrect_spans_key = "incorrect_spans"
> moves = null
> update_with_oracle_cut_size = 100
> ```

The [`EntityRecognizer`](/api/entityrecognizer) can now be updated with known
incorrect annotations, which lets you take advantage of partial and sparse data.
For example, you'll be able to use the information that certain spans of text
are definitely **not** `PERSON` entities, without having to provide the complete
gold-standard annotations for the given example. The incorrect span annotations
can be added via the [`Doc.spans`](/api/doc#spans) in the training data under
the key defined as [`incorrect_spans_key`](/api/entityrecognizer#init) in the
component config.

```python
import spacy
from spacy.tokens import Span

nlp = spacy.blank("en")  # or your training pipeline
train_doc = nlp.make_doc("Barack Obama was born in Hawaii.")
# The doc.spans key can be defined in the config
train_doc.spans["incorrect_spans"] = [
    Span(train_doc, 0, 2, label="ORG"),
    Span(train_doc, 5, 6, label="PRODUCT"),
]
```

{/* TODO: more details and/or example project? */}

### New pipeline packages for Catalan and Danish {id="pipeline-packages"}

spaCy v3.1 adds 5 new pipeline packages, including a new core family for Catalan
and a new transformer-based pipeline for Danish using the
[`danish-bert-botxo`](http://huggingface.co/Maltehb/danish-bert-botxo) weights.
See the [models directory](/models) for an overview of all available trained
pipelines and the [training guide](/usage/training) for details on how to train
your own.

> Thanks to Carlos Rodríguez Penagos and the
> [Barcelona Supercomputing Center](https://temu.bsc.es/) for their
> contributions for Catalan and to Kenneth Enevoldsen for Danish. For additional
> Danish pipelines, check out [DaCy](https://github.com/KennethEnevoldsen/DaCy).

| Package                                            | Language | UPOS | Parser LAS | NER F |
| -------------------------------------------------- | -------- | ---: | ---------: | ----: |
| [`ca_core_news_sm`](/models/ca#ca_core_news_sm)    | Catalan  | 98.2 |       87.4 |  79.8 |
| [`ca_core_news_md`](/models/ca#ca_core_news_md)    | Catalan  | 98.3 |       88.2 |  84.0 |
| [`ca_core_news_lg`](/models/ca#ca_core_news_lg)    | Catalan  | 98.5 |       88.4 |  84.2 |
| [`ca_core_news_trf`](/models/ca#ca_core_news_trf)  | Catalan  | 98.9 |       93.0 |  91.2 |
| [`da_core_news_trf`](/models/da#da_core_news_trf)  | Danish   | 98.0 |       85.0 |  82.9 |

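To try out one of the new packages, install it with
[`spacy download`](/api/cli#download) and load it like any other trained
pipeline. A small example, assuming `ca_core_news_sm` is installed:

```python
import spacy

# assumes: python -m spacy download ca_core_news_sm
nlp = spacy.load("ca_core_news_sm")
doc = nlp("El gat dorm al sofà.")
print([(token.text, token.pos_, token.dep_) for token in doc])
print([(ent.text, ent.label_) for ent in doc.ents])
```
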
### Resizable text classification architectures {id="resizable-textcat"}

Previously, the [`TextCategorizer`](/api/textcategorizer) architectures could
not be resized, meaning that you couldn't add new labels to an already trained
model. In spaCy v3.1, the [TextCatCNN](/api/architectures#TextCatCNN) and
[TextCatBOW](/api/architectures#TextCatBOW) architectures are now resizable,
while ensuring that the predictions for the old labels remain the same.

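As a rough sketch of what this makes possible (the pipeline path and label name
below are hypothetical), a new label can now be added to a trained text
classifier that uses one of these architectures:

```python
import spacy

# hypothetical path to a pipeline trained with TextCatCNN or TextCatBOW
nlp = spacy.load("./my_textcat_pipeline")
textcat = nlp.get_pipe("textcat")
# The architecture is resized in place; predictions for existing labels
# are unchanged, and the component can then be updated with new examples
textcat.add_label("NEW_LABEL")
```
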
### CLI command to assemble pipeline from config {id="assemble"}

The [`spacy assemble`](/api/cli#assemble) command lets you assemble a pipeline
from a config file without additional training. It can be especially useful for
creating a blank pipeline with a custom tokenizer, rule-based components or word
vectors.

```bash
$ python -m spacy assemble config.cfg ./output
```

### Pretty pipeline package READMEs {id="package-readme"}

The [`spacy package`](/api/cli#package) command now auto-generates a pretty
`README.md` based on the pipeline information defined in the `meta.json`. This
includes a table with a general overview, as well as the label scheme and
accuracy figures, if available. For an example, see the
[model releases](https://github.com/explosion/spacy-models/releases).

### Support for streaming large or infinite corpora {id="streaming-corpora"}

> #### config.cfg (excerpt)
>
> ```ini
> [training]
> max_epochs = -1
> ```

The training process now supports streaming large or infinite corpora
out-of-the-box, which can be controlled via the
[`[training.max_epochs]`](/api/data-formats#training) config setting. Setting it
to `-1` means that the train corpus should be streamed rather than loaded into
memory with no shuffling within the training loop. For details on how to
implement a custom corpus loader, e.g. to stream in data from a remote storage,
see the usage guide on
[custom data reading](/usage/training#custom-code-readers-batchers).

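A minimal sketch of such a loader is shown below; the registered name, the JSONL
format and the `cats` annotations are only an illustration:

```python
import spacy
import srsly
from spacy.training import Example

@spacy.registry.readers("stream_data.v1")
def stream_data(path: str):
    def generate_stream(nlp):
        # Yield one example at a time instead of loading the whole corpus
        for record in srsly.read_jsonl(path):
            doc = nlp.make_doc(record["text"])
            yield Example.from_dict(doc, {"cats": record["cats"]})

    return generate_stream
```

The reader can then be referenced from the config, e.g. in `[corpora.train]` via
`@readers = "stream_data.v1"`.
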
When streaming a corpus, only the first 100 examples will be used for
[initialization](/usage/training#config-lifecycle). This is no problem if you're
training a component like the text classifier with data that specifies all
available labels in every example. If necessary, you can use the
[`init labels`](/api/cli#init-labels) command to pre-generate the labels for
your components using a representative sample so the model can be initialized
correctly before training.

### New lemmatizers for Catalan and Italian {id="pos-lemmatizers"}

The trained pipelines for [Catalan](/models/ca) and [Italian](/models/it) now
include lemmatizers that use the predicted part-of-speech tags as part of the
lookup lemmatization for higher lemmatization accuracy. If you're training your
own pipelines for these languages and you want to include a lemmatizer, make
sure you have the
[`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data) package
installed, which provides the relevant tables.

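For example, assuming `it_core_news_sm` is installed, the lemmas produced by the
new lemmatizer can be inspected like this:

```python
import spacy

# assumes: python -m spacy download it_core_news_sm
nlp = spacy.load("it_core_news_sm")
doc = nlp("Le ragazze hanno mangiato le mele.")
# The lemmatizer uses the predicted part-of-speech tags for its lookups
print([(token.text, token.pos_, token.lemma_) for token in doc])
```
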
### Upload your pipelines to the Hugging Face Hub {id="huggingface-hub"}

The [Hugging Face Hub](https://huggingface.co/) lets you upload models and share
them with others, and it now supports spaCy pipelines out-of-the-box. The new
[`spacy-huggingface-hub`](https://github.com/explosion/spacy-huggingface-hub)
package automatically adds the `huggingface-hub` command to your `spacy` CLI. It
lets you upload any pipelines packaged with [`spacy package`](/api/cli#package)
and `--build wheel` and takes care of auto-generating all required meta
information.

After uploading, you'll get a live URL for your model page that includes all
details, files and interactive visualizers, as well as a direct URL to the wheel
file that you can install via `pip install`. For examples, check out the
[spaCy pipelines](https://huggingface.co/spacy) we've uploaded.

```bash
$ pip install spacy-huggingface-hub
$ huggingface-cli login
$ python -m spacy package ./en_ner_fashion ./output --build wheel
$ cd ./output/en_ner_fashion-0.0.0/dist
$ python -m spacy huggingface-hub push en_ner_fashion-0.0.0-py3-none-any.whl
```

You can also integrate the upload command into your
[project template](/usage/projects#huggingface_hub) to automatically upload your
packaged pipelines after training.

<Project id="integrations/huggingface_hub">

Get started with uploading your models to the Hugging Face hub using our project
template. It trains a simple pipeline, packages it and uploads it if the
packaged model has changed. This makes it easy to deploy your models end-to-end.

</Project>

## Notes about upgrading from v3.0 {id="upgrading"}

### Pipeline package version compatibility {id="version-compat"}

> #### Using legacy implementations
>
> In spaCy v3, you'll still be able to load and reference legacy implementations
> via [`spacy-legacy`](https://github.com/explosion/spacy-legacy), even if the
> components or architectures change and newer versions are available in the
> core library.

When you're loading a pipeline package trained with spaCy v3.0, you will see a
warning telling you that the pipeline may be incompatible. This doesn't
necessarily have to be true, but we recommend running your pipelines against
your test suite or evaluation data to make sure there are no unexpected results.
If you're using one of the [trained pipelines](/models) we provide, you should
run [`spacy download`](/api/cli#download) to update to the latest version. To
see an overview of all installed packages and their compatibility, you can run
[`spacy validate`](/api/cli#validate).

If you've trained your own custom pipeline and you've confirmed that it's still
working as expected, you can update the spaCy version requirements in the
[`meta.json`](/api/data-formats#meta):

```diff
- "spacy_version": ">=3.0.0,<3.1.0",
+ "spacy_version": ">=3.0.0,<3.2.0",
```

### Updating v3.0 configs

To update a config from spaCy v3.0 with the new v3.1 settings, run
[`init fill-config`](/api/cli#init-fill-config):

```bash
python -m spacy init fill-config config-v3.0.cfg config-v3.1.cfg
```

In many cases (`spacy train`, `spacy.load()`), the new defaults will be filled
in automatically, but you'll need to fill in the new settings to run
[`debug config`](/api/cli#debug) and [`debug data`](/api/cli#debug-data).

### Sourcing pipeline components with vectors {id="source-vectors"}

If you're sourcing a pipeline component that requires static vectors (for
example, a tagger or parser from an `md` or `lg` pretrained pipeline), be sure
to include the source model's vectors in the setting `[initialize.vectors]`. In
spaCy v3.0, a bug allowed vectors to be loaded implicitly through `source`; in
v3.1, however, this setting must be provided explicitly as
`[initialize.vectors]`:

```ini {title="config.cfg (excerpt)"}
[components.ner]
source = "en_core_web_md"

[initialize]
vectors = "en_core_web_md"
```

<Infobox title="Important note" variant="warning">

Each pipeline can only store one set of static vectors, so it's not possible to
assemble a pipeline with components that were trained on different static
vectors.

</Infobox>

[`spacy train`](/api/cli#train) and [`spacy assemble`](/api/cli#assemble) will
provide warnings if the source and target pipelines don't contain the same
vectors. If you are sourcing a rule-based component like an entity ruler or
lemmatizer that does not use the vectors as a model feature, then this warning
can be safely ignored.

### Warnings {id="warnings"}

Logger warnings have been converted to Python warnings. Use
[`warnings.filterwarnings`](https://docs.python.org/3/library/warnings.html#warnings.filterwarnings)
or the new helper method `spacy.errors.filter_warning(action, error_msg='')` to
manage warnings.
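
For example, to silence one specific warning (the warning code `W036` below is
only an illustration), either approach works:

```python
import warnings
import spacy

# Standard library: filter by the start of the warning message
warnings.filterwarnings("ignore", message=r"\[W036\]")

# spaCy helper: same effect, filtering on the message text
spacy.errors.filter_warning("ignore", error_msg="W036")
```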