---
title: Tokenizer
teaser: Segment text into words, punctuation marks, etc.
tag: class
source: spacy/tokenizer.pyx
---

> #### Default config
>
> ```ini
> [nlp.tokenizer]
> @tokenizers = "spacy.Tokenizer.v1"
> ```

Segment text, and create `Doc` objects with the discovered segment boundaries.
For a deeper understanding, see the docs on
[how spaCy's tokenizer works](/usage/linguistic-features#how-tokenizer-works).
The tokenizer is typically created automatically when a
[`Language`](/api/language) subclass is initialized, and it reads its settings,
like punctuation and special case rules, from the
[`Language.Defaults`](/api/language#defaults) provided by the language subclass.

## Tokenizer.\_\_init\_\_ {id="init",tag="method"}

Create a `Tokenizer` to create `Doc` objects given unicode text. For examples of
how to construct a custom tokenizer with different tokenization rules, see the
[usage documentation](https://spacy.io/usage/linguistic-features#native-tokenizers).

> #### Example
>
> ```python
> # Construction 1
> from spacy.tokenizer import Tokenizer
> from spacy.lang.en import English
> nlp = English()
> # Create a blank Tokenizer with just the English vocab
> tokenizer = Tokenizer(nlp.vocab)
>
> # Construction 2
> from spacy.lang.en import English
> nlp = English()
> # Create a Tokenizer with the default settings for English
> # including punctuation rules and exceptions
> tokenizer = nlp.tokenizer
> ```

| Name | Description |
| --- | --- |
| `vocab` | A storage container for lexical types. ~~Vocab~~ |
| `rules` | Exceptions and special-cases for the tokenizer. ~~Optional[Dict[str, List[Dict[int, str]]]]~~ |
| `prefix_search` | A function matching the signature of `re.compile(string).search` to match prefixes. ~~Optional[Callable[[str], Optional[Match]]]~~ |
| `suffix_search` | A function matching the signature of `re.compile(string).search` to match suffixes. ~~Optional[Callable[[str], Optional[Match]]]~~ |
| `infix_finditer` | A function matching the signature of `re.compile(string).finditer` to find infixes. ~~Optional[Callable[[str], Iterator[Match]]]~~ |
| `token_match` | A function matching the signature of `re.compile(string).match` to find token matches. ~~Optional[Callable[[str], Optional[Match]]]~~ |
| `url_match` | A function matching the signature of `re.compile(string).match` to find token matches after considering prefixes and suffixes. ~~Optional[Callable[[str], Optional[Match]]]~~ |
| `faster_heuristics` <Tag variant="new">3.3.0</Tag> | Whether to restrict the final `Matcher`-based pass for rules to those containing affixes or space. Defaults to `True`. ~~bool~~ |

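The affix arguments all expect the bound methods of compiled regular
expressions, so a fully custom tokenizer can be assembled from plain `re`
patterns. The sketch below is illustrative only: the patterns are made up and
far more limited than the language defaults spaCy ships with.

```python
import re

from spacy.lang.en import English
from spacy.tokenizer import Tokenizer

nlp = English()

# Deliberately minimal, made-up patterns – real language defaults cover far
# more punctuation, brackets, quotes, URLs etc.
prefix_re = re.compile(r"^[\[\(]")
suffix_re = re.compile(r"[\]\)]$")
infix_re = re.compile(r"[-~]")

custom_tokenizer = Tokenizer(
    nlp.vocab,
    prefix_search=prefix_re.search,
    suffix_search=suffix_re.search,
    infix_finditer=infix_re.finditer,
)
print([t.text for t in custom_tokenizer("(hello-world)")])
```
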
## Tokenizer.\_\_call\_\_ {id="call",tag="method"}

Tokenize a string.

> #### Example
>
> ```python
> tokens = tokenizer("This is a sentence")
> assert len(tokens) == 4
> ```

| Name | Description |
| --- | --- |
| `string` | The string to tokenize. ~~str~~ |
| **RETURNS** | A container for linguistic annotations. ~~Doc~~ |

## Tokenizer.pipe {id="pipe",tag="method"}

Tokenize a stream of texts.

> #### Example
>
> ```python
> texts = ["One document.", "...", "Lots of documents"]
> for doc in tokenizer.pipe(texts, batch_size=50):
>     pass
> ```

| Name | Description |
| --- | --- |
| `texts` | A sequence of unicode texts. ~~Iterable[str]~~ |
| `batch_size` | The number of texts to accumulate in an internal buffer. Defaults to `1000`. ~~int~~ |
| **YIELDS** | The tokenized `Doc` objects, in order. ~~Doc~~ |

## Tokenizer.find_infix {id="find_infix",tag="method"}

Find internal split points of the string.

| Name | Description |
| --- | --- |
| `string` | The string to split. ~~str~~ |
| **RETURNS** | A list of `re.MatchObject` objects that have `.start()` and `.end()` methods, denoting the placement of internal segment separators, e.g. hyphens. ~~List[Match]~~ |

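For example, with the default English tokenizer the hyphen in a compound such
as `hello-world` should be reported as an infix match. A minimal sketch,
assuming the standard English punctuation rules:

```python
from spacy.lang.en import English

nlp = English()

# The default English infix rules include hyphens between letters, so a match
# covering the "-" is expected here (exact matches depend on the language
# defaults of your spaCy version).
for match in nlp.tokenizer.find_infix("hello-world"):
    print(match.start(), match.end(), match.group())
```
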
## Tokenizer.find_prefix {id="find_prefix",tag="method"}

Find the length of a prefix that should be segmented from the string, or `None`
if no prefix rules match.

| Name | Description |
| --- | --- |
| `string` | The string to segment. ~~str~~ |
| **RETURNS** | The length of the prefix if present, otherwise `None`. ~~Optional[int]~~ |

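For example, an opening bracket at the start of a string is covered by the
default English prefix rules. A minimal sketch, assuming the standard English
punctuation rules:

```python
from spacy.lang.en import English

nlp = English()

# "(" is part of the default English prefix rules, so a prefix length of 1
# is expected here.
print(nlp.tokenizer.find_prefix("(hello"))
```
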
## Tokenizer.find_suffix {id="find_suffix",tag="method"}

Find the length of a suffix that should be segmented from the string, or `None`
if no suffix rules match.

| Name | Description |
| --- | --- |
| `string` | The string to segment. ~~str~~ |
| **RETURNS** | The length of the suffix if present, otherwise `None`. ~~Optional[int]~~ |

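Analogously to `find_prefix`, trailing punctuation is reported as a suffix
length. Again a minimal sketch assuming the default English rules:

```python
from spacy.lang.en import English

nlp = English()

# "!" is part of the default English suffix rules, so a suffix length of 1
# is expected here.
print(nlp.tokenizer.find_suffix("hello!"))
```
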
## Tokenizer.add_special_case {id="add_special_case",tag="method"}

Add a special-case tokenization rule. This mechanism is also used to add custom
tokenizer exceptions to the language data. See the usage guide on the
[language data](/usage/linguistic-features#language-data) and
[tokenizer special cases](/usage/linguistic-features#special-cases) for more
details and examples.

> #### Example
>
> ```python
> from spacy.attrs import ORTH, NORM
> case = [{ORTH: "do"}, {ORTH: "n't", NORM: "not"}]
> tokenizer.add_special_case("don't", case)
> ```

| Name | Description |
| --- | --- |
| `string` | The string to specially tokenize. ~~str~~ |
| `token_attrs` | A sequence of dicts, where each dict describes a token and its attributes. The `ORTH` fields of the attributes must exactly match the string when they are concatenated. ~~Iterable[Dict[int, str]]~~ |

## Tokenizer.explain {id="explain",tag="method"}

Tokenize a string with a slow debugging tokenizer that provides information
about which tokenizer rule or pattern was matched for each token. The tokens
produced are identical to `Tokenizer.__call__` except for whitespace tokens.

> #### Example
>
> ```python
> tok_exp = nlp.tokenizer.explain("(don't)")
> assert [t[0] for t in tok_exp] == ["PREFIX", "SPECIAL-1", "SPECIAL-2", "SUFFIX"]
> assert [t[1] for t in tok_exp] == ["(", "do", "n't", ")"]
> ```

| Name | Description |
| --- | --- |
| `string` | The string to tokenize with the debugging tokenizer. ~~str~~ |
| **RETURNS** | A list of `(pattern_string, token_string)` tuples. ~~List[Tuple[str, str]]~~ |

## Tokenizer.to_disk {id="to_disk",tag="method"}

Serialize the tokenizer to disk.

> #### Example
>
> ```python
> tokenizer = Tokenizer(nlp.vocab)
> tokenizer.to_disk("/path/to/tokenizer")
> ```

| Name | Description |
| --- | --- |
| `path` | A path to a directory, which will be created if it doesn't exist. Paths may be either strings or `Path`-like objects. ~~Union[str, Path]~~ |
| _keyword-only_ | |
| `exclude` | String names of [serialization fields](#serialization-fields) to exclude. ~~Iterable[str]~~ |

## Tokenizer.from_disk {id="from_disk",tag="method"}

Load the tokenizer from disk. Modifies the object in place and returns it.

> #### Example
>
> ```python
> tokenizer = Tokenizer(nlp.vocab)
> tokenizer.from_disk("/path/to/tokenizer")
> ```

| Name | Description |
| --- | --- |
| `path` | A path to a directory. Paths may be either strings or `Path`-like objects. ~~Union[str, Path]~~ |
| _keyword-only_ | |
| `exclude` | String names of [serialization fields](#serialization-fields) to exclude. ~~Iterable[str]~~ |
| **RETURNS** | The modified `Tokenizer` object. ~~Tokenizer~~ |

## Tokenizer.to_bytes {id="to_bytes",tag="method"}

> #### Example
>
> ```python
> tokenizer = Tokenizer(nlp.vocab)
> tokenizer_bytes = tokenizer.to_bytes()
> ```

Serialize the tokenizer to a bytestring.

| Name | Description |
| --- | --- |
| _keyword-only_ | |
| `exclude` | String names of [serialization fields](#serialization-fields) to exclude. ~~Iterable[str]~~ |
| **RETURNS** | The serialized form of the `Tokenizer` object. ~~bytes~~ |

## Tokenizer.from_bytes {id="from_bytes",tag="method"}

Load the tokenizer from a bytestring. Modifies the object in place and returns
it.

> #### Example
>
> ```python
> tokenizer_bytes = tokenizer.to_bytes()
> tokenizer = Tokenizer(nlp.vocab)
> tokenizer.from_bytes(tokenizer_bytes)
> ```

| Name | Description |
| --- | --- |
| `bytes_data` | The data to load from. ~~bytes~~ |
| _keyword-only_ | |
| `exclude` | String names of [serialization fields](#serialization-fields) to exclude. ~~Iterable[str]~~ |
| **RETURNS** | The `Tokenizer` object. ~~Tokenizer~~ |

## Attributes {id="attributes"}

| Name | Description |
| --- | --- |
| `vocab` | The vocab object of the parent `Doc`. ~~Vocab~~ |
| `prefix_search` | A function to find segment boundaries from the start of a string. Returns an `re.MatchObject` or `None`. ~~Optional[Callable[[str], Optional[Match]]]~~ |
| `suffix_search` | A function to find segment boundaries from the end of a string. Returns an `re.MatchObject` or `None`. ~~Optional[Callable[[str], Optional[Match]]]~~ |
| `infix_finditer` | A function to find internal segment separators, e.g. hyphens. Returns a (possibly empty) sequence of `re.MatchObject` objects. ~~Optional[Callable[[str], Iterator[Match]]]~~ |
| `token_match` | A function matching the signature of `re.compile(string).match` to find token matches. Returns an `re.MatchObject` or `None`. ~~Optional[Callable[[str], Optional[Match]]]~~ |
| `rules` | A dictionary of tokenizer exceptions and special cases. ~~Optional[Dict[str, List[Dict[int, str]]]]~~ |

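Because these attributes are writable, the rules of an existing tokenizer can
be extended at runtime by recompiling the affix patterns and re-assigning the
bound function. A minimal sketch, assuming `spacy.util.compile_infix_regex`
and the English language defaults; the exact tokenization output depends on
the defaults of your spaCy version:

```python
import spacy
from spacy.util import compile_infix_regex

nlp = spacy.blank("en")

# Extend the default infix patterns with "+" and re-assign the compiled
# finditer function to the tokenizer.
infixes = list(nlp.Defaults.infixes) + [r"\+"]
nlp.tokenizer.infix_finditer = compile_infix_regex(infixes).finditer

print([t.text for t in nlp("hello+world")])
```
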
## Serialization fields {id="serialization-fields"}

During serialization, spaCy will export several data fields used to restore
different aspects of the object. If needed, you can exclude them from
serialization by passing in the string names via the `exclude` argument.

> #### Example
>
> ```python
> data = tokenizer.to_bytes(exclude=["vocab", "exceptions"])
> tokenizer.from_disk("./data", exclude=["token_match"])
> ```

| Name | Description |
| --- | --- |
| `vocab` | The shared [`Vocab`](/api/vocab). |
| `prefix_search` | The prefix rules. |
| `suffix_search` | The suffix rules. |
| `infix_finditer` | The infix rules. |
| `token_match` | The token match expression. |
| `exceptions` | The tokenizer exception rules. |