Mirror of https://github.com/explosion/spaCy.git (synced 2025-04-27 20:33:42 +03:00)
Don't auto-slugify accordion links [ci skip]
This commit is contained in:
parent 8ac197d443
commit cecc31b765
@@ -78,7 +78,7 @@ assigned by spaCy's [models](/models). The individual mapping is specific to the
 training corpus and can be defined in the respective language data's
 [`tag_map.py`](/usage/adding-languages#tag-map).
 
-<Accordion title="Universal Part-of-speech Tags">
+<Accordion title="Universal Part-of-speech Tags" id="pos-universal">
 
 spaCy also maps all language-specific part-of-speech tags to a small, fixed set
 of word type tags following the
@@ -269,7 +269,7 @@ This section lists the syntactic dependency labels assigned by spaCy's
 [models](/models). The individual labels are language-specific and depend on the
 training corpus.
 
-<Accordion title="Universal Dependency Labels">
+<Accordion title="Universal Dependency Labels" id="dependency-parsing-universal">
 
 The [Universal Dependencies scheme](http://universaldependencies.org/u/dep/) is
 used in all languages trained on Universal Dependency Corpora.
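
Both hunks give the accordion a hand-picked `id` instead of relying on a slug derived from its title. A minimal sketch of the resulting docs markup (MDX; titles and ids taken from the hunks above, body text abbreviated), showing that the anchor now comes from the explicit `id`, so a link ending in `#pos-universal` keeps working even if the heading wording changes:

```jsx
<Accordion title="Universal Part-of-speech Tags" id="pos-universal">
    spaCy also maps all language-specific part-of-speech tags to a small, fixed set
    of word type tags ...
</Accordion>
```
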
@@ -621,7 +621,7 @@ For more details on the language-specific data, see the usage guide on
 
 </Infobox>
 
-<Accordion title="Should I change the language data or add custom tokenizer rules?">
+<Accordion title="Should I change the language data or add custom tokenizer rules?" id="lang-data-vs-tokenizer">
 
 Tokenization rules that are specific to one language, but can be **generalized
 across that language** should ideally live in the language data in
@@ -426,7 +426,7 @@ spaCy, and implement your own models trained with other machine learning
 libraries. It also lets you take advantage of spaCy's data structures and the
 `Doc` object as the "single source of truth".
 
-<Accordion title="Why ._ and not just a top-level attribute?">
+<Accordion title="Why ._ and not just a top-level attribute?" id="why-dot-underscore">
 
 Writing to a `._` attribute instead of to the `Doc` directly keeps a clearer
 separation and makes it easier to ensure backwards compatibility. For example,
@@ -437,7 +437,7 @@ immediately know what's built-in and what's custom – for example,
 
 </Accordion>
 
-<Accordion title="How is the ._ implemented?">
+<Accordion title="How is the ._ implemented?" id="dot-underscore-implementation">
 
 Extension definitions – the defaults, methods, getters and setters you pass in
 to `set_extension` – are stored in class attributes on the `Underscore` class.
@@ -15,7 +15,7 @@ their relationships. This means you can easily access and analyze the
 surrounding tokens, merge spans into single tokens or add entries to the named
 entities in `doc.ents`.
 
-<Accordion title="Should I use rules or train a model?">
+<Accordion title="Should I use rules or train a model?" id="rules-vs-model">
 
 For complex tasks, it's usually better to train a statistical entity recognition
 model. However, statistical models require training data, so for many
@@ -41,7 +41,7 @@ on [rule-based entity recognition](#entityruler).
 
 </Accordion>
 
-<Accordion title="When should I use the token matcher vs. the phrase matcher?">
+<Accordion title="When should I use the token matcher vs. the phrase matcher?" id="matcher-vs-phrase-matcher">
 
 The `PhraseMatcher` is useful if you already have a large terminology list or
 gazetteer consisting of single or multi-token phrases that you want to find
@@ -12,7 +12,6 @@
         "@mdx-js/tag": "^0.17.5",
         "@phosphor/widgets": "^1.6.0",
         "@rehooks/online-status": "^1.0.0",
-        "@sindresorhus/slugify": "^0.8.0",
         "@svgr/webpack": "^4.1.0",
         "autoprefixer": "^9.4.7",
         "classnames": "^2.2.6",
@@ -62,7 +61,8 @@
         "md-attr-parser": "^1.2.1",
         "prettier": "^1.16.4",
         "raw-loader": "^1.0.0",
-        "unist-util-visit": "^1.4.0"
+        "unist-util-visit": "^1.4.0",
+        "@sindresorhus/slugify": "^0.8.0"
     },
     "repository": {
         "type": "git",
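
The two `package.json` hunks drop `@sindresorhus/slugify` from the first dependency block and re-add it at the end of the second, so the accordion component itself no longer depends on it at runtime. For context, a rough sketch of what the removed auto-slugification produced (assuming the package's default export; the output shown is approximate):

```js
const slugify = require('@sindresorhus/slugify')

slugify('Universal Part-of-speech Tags')
// → 'universal-part-of-speech-tags' (the old, auto-generated anchor)
// vs. the explicit id="pos-universal" now set in the docs source

slugify('Why ._ and not just a top-level attribute?')
// → roughly 'why-and-not-just-a-top-level-attribute', which silently changes
// whenever the title wording changes; id="why-dot-underscore" does not
```
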
@@ -1,13 +1,11 @@
 import React, { useState } from 'react'
 import PropTypes from 'prop-types'
 import classNames from 'classnames'
-import slugify from '@sindresorhus/slugify'
 
 import Link from './link'
 import classes from '../styles/accordion.module.sass'
 
 const Accordion = ({ title, id, expanded, children }) => {
-    const anchorId = id || slugify(title)
     const [isExpanded, setIsExpanded] = useState(expanded)
     const contentClassNames = classNames(classes.content, {
         [classes.hidden]: !isExpanded,
@@ -16,7 +14,7 @@ const Accordion = ({ title, id, expanded, children }) => {
         [classes.hidden]: isExpanded,
     })
     return (
-        <section id={anchorId}>
+        <section id={id}>
             <div className={classes.root}>
                 <h3>
                     <button
@@ -26,8 +24,8 @@ const Accordion = ({ title, id, expanded, children }) => {
                     >
                         <span>
                             {title}
-                            {isExpanded && (
-                                <Link to={`#${anchorId}`} className={classes.anchor} hidden>
+                            {isExpanded && !!id && (
+                                <Link to={`#${id}`} className={classes.anchor} hidden>
                                     ¶
                                 </Link>
                             )}
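
Taken together, the component changes mean the section anchor and the ¶ link are derived only from an explicit `id` prop; without one, no anchor is rendered at all. A minimal usage sketch (hypothetical example file, not part of this commit; the title and id are taken from the docs hunks above):

```jsx
import React from 'react'
import Accordion from './accordion'

const Example = () => (
    <>
        {/* Explicit id: renders <section id="rules-vs-model"> and, when expanded,
            a ¶ link pointing to #rules-vs-model. */}
        <Accordion title="Should I use rules or train a model?" id="rules-vs-model">
            ...
        </Accordion>
        {/* No id: the section has no anchor and the ¶ link is skipped entirely,
            because the link is now guarded by `!!id` instead of a slugified title. */}
        <Accordion title="A heading that doesn't need a stable link">...</Accordion>
    </>
)

export default Example
```
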