---
title: What's New in v3.1
teaser: New features and how to upgrade
menu:
  - ['New Features', 'features']
  - ['Upgrading Notes', 'upgrading']
---

## New Features {#features hidden="true"}

### Using predicted annotations during training {#predicted-annotations-training}

An example project shows how to use the `token.dep` attribute predicted by the
parser as a feature for a subsequent tagger component in the pipeline.

### SpanCategorizer for predicting arbitrary and overlapping spans {#spancategorizer tag="experimental"}

A common task in applied NLP is extracting spans of text from documents,
including longer phrases or nested expressions. Named entity recognition isn't
the right tool for this problem, since an entity recognizer typically predicts
single token-based tags that are very sensitive to boundaries. This is
effective for proper nouns and self-contained expressions, but less useful for
other types of phrases or overlapping spans. The new
[`SpanCategorizer`](/api/spancategorizer) component and
[SpanCategorizer](/api/architectures#spancategorizer) architecture let you
label arbitrary and potentially overlapping spans of text. A span categorizer
consists of two parts: a [suggester function](/api/spancategorizer#suggesters)
that proposes candidate spans, which may or may not overlap, and a labeler
model that predicts zero or more labels for each candidate. The predicted
spans are available via the [`Doc.spans`](/api/doc#spans) container.

The upcoming version of our annotation tool [Prodigy](https://prodi.gy)
(currently available as a [pre-release](https://support.prodi.gy/t/3861) for
all users) features a [new workflow and UI](https://support.prodi.gy/t/3861)
for annotating overlapping and nested spans. You can use it to create training
data for spaCy's `SpanCategorizer` component.

### Update the entity recognizer with partial incorrect annotations {#negative-samples}

> #### config.cfg (excerpt)
>
> ```ini
> [components.ner]
> factory = "ner"
> incorrect_spans_key = "incorrect_spans"
> moves = null
> update_with_oracle_cut_size = 100
> ```

The [`EntityRecognizer`](/api/entityrecognizer) can now be updated with known
incorrect annotations, which lets you take advantage of partial and sparse
data. For example, you'll be able to use the information that certain spans of
text are definitely **not** `PERSON` entities, without having to provide the
complete gold-standard annotations for the given example. The incorrect span
annotations can be added via the [`Doc.spans`](/api/doc#spans) container in
the training data under the key defined as
[`incorrect_spans_key`](/api/entityrecognizer#init) in the component config.
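As a rough sketch of what such training data could look like in code: the
`"incorrect_spans"` key mirrors the config excerpt above, while the example
text, label and span offsets are purely illustrative.

```python
import spacy
from spacy.tokens import Span
from spacy.training import Example

nlp = spacy.blank("en")
# Mirror the config excerpt above: tell the ner component which spans key
# holds the known-incorrect annotations.
ner = nlp.add_pipe("ner", config={"incorrect_spans_key": "incorrect_spans"})

text = "I like London and Berlin."
predicted = nlp.make_doc(text)
reference = nlp.make_doc(text)
# We don't know the full gold annotations, but we do know that "London"
# (token index 2) is definitely not a PERSON entity.
reference.spans["incorrect_spans"] = [Span(reference, 2, 3, label="PERSON")]

# When the entity recognizer is updated with this example, the PERSON
# analysis of "London" is treated as a known-incorrect prediction.
example = Example(predicted, reference)
```

Examples like this let the component rule out specific analyses without
requiring complete annotations for the document.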
### New pipeline packages for Catalan and Danish {#pipeline-packages}

| Package                                            | Language | Tagger | Parser | NER |
| -------------------------------------------------- | -------- | -----: | -----: | --: |
| [`ca_core_news_sm`](/models/ca#ca_core_news_sm)    | Catalan  |        |        |     |
| [`ca_core_news_md`](/models/ca#ca_core_news_md)    | Catalan  |        |        |     |
| [`ca_core_news_lg`](/models/ca#ca_core_news_lg)    | Catalan  |        |        |     |
| [`ca_core_news_trf`](/models/ca#ca_core_news_trf)  | Catalan  |        |        |     |
| [`da_core_news_trf`](/models/da#da_core_news_trf)  | Danish   |        |        |     |

### Resizable text classification architectures {#resizable-textcat}

### CLI command to assemble pipeline from config {#assemble}

The [`spacy assemble`](/api/cli#assemble) command lets you assemble a pipeline
from a config file without additional training. It can be especially useful
for creating a blank pipeline with a custom tokenizer, rule-based components
or word vectors.

```cli
$ python -m spacy assemble config.cfg ./output
```

### Support for streaming large or infinite corpora {#streaming-corpora}

### New lemmatizers for Catalan and Italian {#pos-lemmatizers}

## Notes about upgrading from v3.0 {#upgrading}