+
+
-
+{' '}
-
+
+
+{' '}
+
+
+
 ## Components
 
-### Table {#table}
+### Table {id="table"}
 
 > #### Markdown
 >
-> ```markdown_
+> ```markdown
 > | Header 1 | Header 2 |
 > | -------- | -------- |
 > | Column 1 | Column 2 |
@@ -248,7 +243,7 @@ be italicized:
 
 > #### Markdown
 >
-> ```markdown_
+> ```markdown
 > | Header 1 | Header 2 | Header 3 |
 > | -------- | -------- | -------- |
 > | Column 1 | Column 2 | Column 3 |
@@ -262,11 +257,11 @@ be italicized:
 | _Hello_  |          |          |
 | Column 1 | Column 2 | Column 3 |
 
-### Type Annotations {#type-annotations}
+### Type Annotations {id="type-annotations"}
 
 > #### Markdown
 >
-> ```markdown_
+> ```markdown
 > ~~Model[List[Doc], Floats2d]~~
 > ```
 >
@@ -295,9 +290,9 @@ always be the **last element** in the row.
 
 > #### Markdown
 >
-> ```markdown_
-> | Header 1 | Header 2                |
-> | -------- | ----------------------- |
+> ```markdown
+> | Header 1 | Header 2               |
+> | -------- | ---------------------- |
 > | Column 1 | Column 2 ~~List[Doc]~~ |
 > ```
 
@@ -307,11 +302,11 @@ always be the **last element** in the row.
 | `model` | The Thinc [`Model`](https://thinc.ai/docs/api-model) wrapping the transformer. ~~Model[List[Doc], FullTransformerBatch]~~ |
 | `set_extra_annotations` | Function that takes a batch of `Doc` objects and transformer outputs and can set additional annotations on the `Doc`. ~~Callable[[List[Doc], FullTransformerBatch], None]~~ |
 
-### List {#list}
+### List {id="list"}
 
 > #### Markdown
 >
-> ```markdown_
+> ```markdown
 > 1. One
 > 2. Two
 > ```
@@ -338,12 +333,13 @@ automatically.
 3. Lorem ipsum dolor
 4. consectetur adipiscing elit
 
-### Aside {#aside}
+### Aside {id="aside"}
 
 > #### Markdown
 >
-> ```markdown_
+> ```markdown
 > > #### Aside title
+> >
 > > This is aside text.
 > ```
 >
@@ -363,11 +359,11 @@ To make them easier to use in Markdown, paragraphs formatted as blockquotes
 will turn into asides by default. Level 4 headlines (with a leading `####`)
 will become aside titles.
-### Code Block {#code-block}
+### Code Block {id="code-block"}
 
 > #### Markdown
 >
-> ````markdown_
+> ````markdown
 > ```python
 > ### This is a title
 > import spacy
 > ```
 > ````
 
@@ -388,8 +384,7 @@ to raw text with no highlighting. An optional label can be added as the first
 line with the prefix `####` (Python-like) and `///` (JavaScript-like). the
 indented block as plain text and preserve whitespace.
 
-```python
-### Using spaCy
+```python {title="Using spaCy"}
 import spacy
 nlp = spacy.load("en_core_web_sm")
 doc = nlp("This is a sentence.")
@@ -403,7 +398,7 @@ adding `{highlight="..."}` to the headline. Acceptable ranges are spans like
 
 > #### Markdown
 >
-> ````markdown_
+> ````markdown
 > ```python
 > ### This is a title {highlight="1-2"}
 > import spacy
 > ```
 > ````
 
-```python
-### Using the matcher {highlight="5-7"}
+```python {title="Using the matcher",highlight="5-7"}
 import spacy
 from spacy.matcher import Matcher
 
@@ -431,7 +425,7 @@ interactive widget defaults to a regular code block.
 
 > #### Markdown
 >
-> ````markdown_
+> ````markdown
 > ```python
 > ### {executable="true"}
 > import spacy
 > ```
 > ````
 
-```python
-### {executable="true"}
+```python {executable="true"}
 import spacy
 nlp = spacy.load("en_core_web_sm")
 doc = nlp("This is a sentence.")
@@ -454,7 +447,7 @@ original file is shown at the top of the widget.
 
 > #### Markdown
 >
-> ````markdown_
+> ````markdown
 > ```python
 > https://github.com/...
 > ```
 > ````
 
@@ -470,9 +463,7 @@ original file is shown at the top of the widget.
 https://github.com/explosion/spaCy/tree/master/spacy/language.py
 ```
 
-### Infobox {#infobox}
-
-import Infobox from 'components/infobox'
+### Infobox {id="infobox"}
 
 > #### JSX
 >
@@ -508,9 +499,7 @@ blocks.
-### Accordion {#accordion}
-
-import Accordion from 'components/accordion'
+### Accordion {id="accordion"}
 
 > #### JSX
 >
@@ -537,9 +526,9 @@ sit amet dignissim justo congue.
 
-## Markdown reference {#markdown}
+## Markdown reference {id="markdown"}
 
-All page content and page meta lives in the `.md` files in the `/docs`
+All page content and page meta lives in the `.mdx` files in the `/docs`
 directory. The frontmatter block at the top of each file defines the page title
 and other settings like the sidebar menu.
 
@@ -548,7 +537,7 @@ and other settings like the sidebar menu.
 title: Page title
 ---
 
-## Headline starting a section {#some_id}
+## Headline starting a section {id="some_id"}
 
 This is a regular paragraph with a [link](https://spacy.io) and **bold text**.
 
@@ -562,8 +551,7 @@ This is a regular paragraph with a [link](https://spacy.io) and **bold text**.
 | -------- | -------- |
 | Column 1 | Column 2 |
 
-```python
-### Code block title {highlight="2-3"}
+```python {title="Code block title",highlight="2-3"}
 import spacy
 nlp = spacy.load("en_core_web_sm")
 doc = nlp("Hello world")
@@ -585,7 +573,7 @@ In addition to the native markdown elements, you can use the components
 
 [abbr]: https://spacy.io/styleguide#abbr
 [tag]: https://spacy.io/styleguide#tag
 
-## Editorial {#editorial}
+## Editorial {id="editorial"}
 
 - "spaCy" should always be spelled with a lowercase "s" and a capital "C",
   unless it specifically refers to the Python package or Python import `spacy`
@@ -609,21 +597,16 @@ In addition to the native markdown elements, you can use the components
   - ❌ The [`Span`](/api/span) and [`Token`](/api/token) objects are views of a
     [`Doc`](/api/doc). [`Span.as_doc`](/api/span#as_doc) creates a
     [`Doc`](/api/doc) object from a [`Span`](/api/span).
-
-* Other things we format as code are: references to trained pipeline packages
+- Other things we format as code are: references to trained pipeline packages
   like `en_core_web_sm` or file names like `code.py` or `meta.json`.
   - ✅ After training, the `config.cfg` is saved to disk.
-
-* [Type annotations](#type-annotations) are a special type of code formatting,
+- [Type annotations](#type-annotations) are a special type of code formatting,
   expressed by wrapping the text in `~~` instead of backticks. The result looks
   like this: ~~List[Doc]~~. All references to known types will be linked
   automatically.
   - ✅ The model has the input type ~~List[Doc]~~ and it outputs a
     ~~List[Array2d]~~.
-
-* We try to keep links meaningful but short.
+- We try to keep links meaningful but short.
   - ✅ For details, see the usage guide on
     [training with custom code](/usage/training#custom-code).
   - ❌ For details, see
diff --git a/website/docs/usage/101/_architecture.md b/website/docs/usage/101/_architecture.mdx
similarity index 96%
rename from website/docs/usage/101/_architecture.md
rename to website/docs/usage/101/_architecture.mdx
index 4ebca2756..2a63a3741 100644
--- a/website/docs/usage/101/_architecture.md
+++ b/website/docs/usage/101/_architecture.mdx
@@ -14,9 +14,9 @@ of the pipeline. The `Language` object coordinates these components. It takes
 raw text and sends it through the pipeline, returning an **annotated document**.
 It also orchestrates training and serialization.
 
-
+
 
-### Container objects {#architecture-containers}
+### Container objects {id="architecture-containers"}
 
 | Name | Description |
 | ----------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- |
@@ -29,7 +29,7 @@ It also orchestrates training and serialization.
 | [`SpanGroup`](/api/spangroup) | A named collection of spans belonging to a `Doc`. |
 | [`Token`](/api/token) | An individual token — i.e. a word, punctuation symbol, whitespace, etc.
|
 
-### Processing pipeline {#architecture-pipeline}
+### Processing pipeline {id="architecture-pipeline"}
 
 The processing pipeline consists of one or more **pipeline components** that are
 called on the `Doc` in order. The tokenizer runs before the components. Pipeline
@@ -39,7 +39,7 @@ rule-based modifications to the `Doc`. spaCy provides a range of built-in
 components for different language processing tasks and also allows adding
 [custom components](/usage/processing-pipelines#custom-components).
 
-
+
 
 | Name | Description |
 | ----------------------------------------------- | ------------------------------------------------------------------------------------------- |
@@ -61,7 +61,7 @@ components for different language processing tasks and also allows adding
 | [`Transformer`](/api/transformer) | Use a transformer model and set its outputs. |
 | [Other functions](/api/pipeline-functions) | Automatically apply something to the `Doc`, e.g. to merge spans of tokens. |
 
-### Matchers {#architecture-matchers}
+### Matchers {id="architecture-matchers"}
 
 Matchers help you find and extract information from [`Doc`](/api/doc) objects
 based on match patterns describing the sequences you're looking for. A matcher
@@ -73,13 +73,13 @@ operates on a `Doc` and gives you access to the matched tokens **in context**.
 | [`Matcher`](/api/matcher) | Match sequences of tokens, based on pattern rules, similar to regular expressions. |
 | [`PhraseMatcher`](/api/phrasematcher) | Match sequences of tokens based on phrases. |
 
-### Other classes {#architecture-other}
+### Other classes {id="architecture-other"}
 
 | Name | Description |
 | ------------------------------------------------ | -------------------------------------------------------------------------------------------------- |
 | [`Corpus`](/api/corpus) | Class for managing annotated corpora for training and evaluation data. |
 | [`KnowledgeBase`](/api/kb) | Abstract base class for storage and retrieval of data for entity linking.
|
-| [`InMemoryLookupKB`](/api/kb_in_memory) | Implementation of `KnowledgeBase` storing all data in memory. |
+| [`InMemoryLookupKB`](/api/inmemorylookupkb) | Implementation of `KnowledgeBase` storing all data in memory. |
 | [`Candidate`](/api/kb#candidate) | Object associating a textual mention with a specific entity contained in a `KnowledgeBase`. |
 | [`Lookups`](/api/lookups) | Container for convenient access to large lookup tables and dictionaries. |
 | [`MorphAnalysis`](/api/morphology#morphanalysis) | A morphological analysis. |
diff --git a/website/docs/usage/101/_language-data.md b/website/docs/usage/101/_language-data.mdx
similarity index 100%
rename from website/docs/usage/101/_language-data.md
rename to website/docs/usage/101/_language-data.mdx
diff --git a/website/docs/usage/101/_named-entities.md b/website/docs/usage/101/_named-entities.mdx
similarity index 75%
rename from website/docs/usage/101/_named-entities.md
rename to website/docs/usage/101/_named-entities.mdx
index 2abc45cbd..9ae4134d8 100644
--- a/website/docs/usage/101/_named-entities.md
+++ b/website/docs/usage/101/_named-entities.mdx
@@ -1,14 +1,13 @@
 A named entity is a "real-world object" that's assigned a name – for example, a
 person, a country, a product or a book title. spaCy can **recognize various
-types of named entities in a document, by asking the model for a
-prediction**. Because models are statistical and strongly depend on the
-examples they were trained on, this doesn't always work _perfectly_ and might
-need some tuning later, depending on your use case.
+types of named entities in a document, by asking the model for a prediction**.
+Because models are statistical and strongly depend on the examples they were
+trained on, this doesn't always work _perfectly_ and might need some tuning
+later, depending on your use case.
 Named entities are available as the `ents` property of a `Doc`:
 
-```python
-### {executable="true"}
+```python {executable="true"}
 import spacy
 
 nlp = spacy.load("en_core_web_sm")
@@ -32,7 +31,8 @@ for ent in doc.ents:
 Using spaCy's built-in [displaCy visualizer](/usage/visualizers), here's what
 our example sentence and its named entities look like:
 
-import DisplaCyEntHtml from 'images/displacy-ent1.html'; import { Iframe } from
-'components/embed'
-
-
+
diff --git a/website/docs/usage/101/_pipelines.md b/website/docs/usage/101/_pipelines.mdx
similarity index 98%
rename from website/docs/usage/101/_pipelines.md
rename to website/docs/usage/101/_pipelines.mdx
index f43219f41..315291762 100644
--- a/website/docs/usage/101/_pipelines.md
+++ b/website/docs/usage/101/_pipelines.mdx
@@ -5,7 +5,7 @@ referred to as the **processing pipeline**. The pipeline used by the
 and an entity recognizer. Each pipeline component returns the processed `Doc`,
 which is then passed on to the next component.
 
-
+
 
 > - **Name**: ID of the pipeline component.
 > - **Component:** spaCy's implementation of the component.
@@ -35,8 +35,6 @@ the [config](/usage/training#config):
 pipeline = ["tok2vec", "tagger", "parser", "ner"]
 ```
 
-import Accordion from 'components/accordion.js'
-
+
+
+
+ Get a custom spaCy pipeline, tailor-made for your NLP problem by
+ spaCy's core developers.
+
+
+
+
+ Prodigy is an annotation tool so efficient that data
+ scientists can do the annotation themselves, enabling a new level of rapid
+ iteration. Whether you're working on entity recognition, intent
+ detection or image classification, Prodigy can help you{' '}
+ train and evaluate your models faster.
- spaCy's new project system gives you a smooth path from prototype to
+ spaCy's new project system gives you a smooth path from prototype to
 production. It lets you keep track of all those{' '}
 data transformation, preprocessing and{' '}
 training steps, so you can make sure your project is always
@@ -236,13 +243,15 @@ const Landing = ({ data }) => {
 button="See what's new"
 small
 >
- spaCy v3.0 features all new transformer-based pipelines that
- bring spaCy's accuracy right up to the current state-of-the-art
- . You can use any pretrained transformer to train your own pipelines, and even
- share one transformer between multiple components with{' '}
- multi-task learning. Training is now fully configurable and
- extensible, and you can define your own custom models using{' '}
- PyTorch, TensorFlow and other frameworks.
+ spaCy v3.0 features all new transformer-based pipelines{' '}
+ that bring spaCy's accuracy right up to the current{' '}
+ state-of-the-art. You can use any pretrained transformer to
+ train your own pipelines, and even share one transformer between multiple
+ components with multi-task learning. Training is now fully
+ configurable and extensible, and you can define your own custom models using{' '}
+ PyTorch, TensorFlow and other frameworks.
+
+
+ In this free and interactive online course you’ll learn how
+ to use spaCy to build advanced natural language understanding systems, using
+ both rule-based and machine learning approaches. It includes{' '}
+ 55 exercises featuring videos, slide decks, multiple-choice
+ questions and interactive coding practice in the browser.
- spaCy v3.0 introduces transformer-based pipelines that bring spaCy's
+ spaCy v3.0 introduces transformer-based pipelines that bring spaCy's
accuracy right up to the current state-of-the-art. You can
also use a CPU-optimized pipeline, which is less accurate but much cheaper
to run.
@@ -285,33 +296,8 @@ const Landing = ({ data }) => {
)
-export const Pre = props => {
-    return {props.children}
+export default CodeBlock
+
+export const Pre = (props) => {
+    return {props.children}
 }
 
 export const InlineCode = ({ wrap = false, className, children, ...props }) => {
-    const codeClassNames = classNames(classes.inlineCode, className, {
-        [classes.wrap]: wrap || (isString(children) && children.length >= WRAP_THRESHOLD),
+    const codeClassNames = classNames(classes['inline-code'], className, {
+        [classes['wrap']]: wrap || (isString(children) && children.length >= WRAP_THRESHOLD),
     })
     return (
@@ -68,39 +80,76 @@ export const TypeAnnotation = ({ lang = 'python', link = true, children }) => {
const code = Array.isArray(children) ? children.join('') : children || ''
const [rawText, meta] = code.split(/(?= \(.+\)$)/)
const rawStr = rawText.replace(/\./g, TMP_DOT)
- const rawHtml = lang === 'none' || !code ? code : highlightCode(lang, rawStr)
+ const rawHtml =
+ lang === 'none' || !code ? code : Prism.highlight(rawStr, Prism.languages[lang], lang)
const html = rawHtml.replace(new RegExp(TMP_DOT, 'g'), '.').replace(/\n/g, ' ')
const result = htmlToReact(html)
const elements = Array.isArray(result) ? result : [result]
const annotClassNames = classNames(
'type-annotation',
`language-${lang}`,
- classes.inlineCode,
- classes.typeAnnotation,
+ classes['inline-code'],
+ classes['type-annotation'],
{
- [classes.wrap]: code.length >= WRAP_THRESHOLD,
+ [classes['wrap']]: code.length >= WRAP_THRESHOLD,
}
)
return (
-
+
{elements.map((el, i) => (
{linkType(el, !!link)}
))}
- {meta && {meta}}
-
+ {meta && {meta}}
+
)
}
-function replacePrompt(line, prompt, isFirst = false) {
- let result = line
- const hasPrompt = result.startsWith(`${prompt} `)
- const showPrompt = hasPrompt || isFirst
- if (hasPrompt) result = result.slice(2)
- return result && showPrompt ? `${result}` : result
+const splitLines = (children) => {
+ const listChildrenPerLine = []
+
+ if (typeof children === 'string') {
+ listChildrenPerLine.push(...children.split('\n'))
+ } else {
+ listChildrenPerLine.push([])
+ let indexLine = 0
+ if (Array.isArray(children)) {
+ children.forEach((child) => {
+ if (typeof child === 'string' && child.includes('\n')) {
+ const listString = child.split('\n')
+ listString.forEach((string, index) => {
+ listChildrenPerLine[indexLine].push(string)
+
+ if (index !== listString.length - 1) {
+ indexLine += 1
+ listChildrenPerLine[indexLine] = []
+ }
+ })
+ } else {
+ listChildrenPerLine[indexLine].push(child)
+ }
+ })
+ } else {
+ listChildrenPerLine[indexLine].push(children)
+ indexLine += 1
+ listChildrenPerLine[indexLine] = []
+ }
+ }
+
+ const listLine = listChildrenPerLine[listChildrenPerLine.length - 1]
+ if (listLine === '' || (listLine.length === 1 && listLine[0] === '')) {
+ listChildrenPerLine.pop()
+ }
+
+ return listChildrenPerLine.map((childrenPerLine, index) => (
+ <>
+ {childrenPerLine}
+ {index !== listChildrenPerLine.length - 1 && '\n'}
+ >
+ ))
}
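[Editor's note: for plain-string children, the `splitLines` logic above reduces to cutting on newlines and dropping the single trailing empty entry left by a final newline. A minimal standalone sketch of just that string path, with a hypothetical helper name:]

```javascript
// Sketch of the plain-string path of splitLines: split on '\n' and
// drop one trailing empty line produced by a terminating newline.
// Hypothetical helper for illustration, not the component's exact code.
function splitPlainLines(text) {
    const lines = text.split('\n')
    if (lines.length && lines[lines.length - 1] === '') {
        lines.pop()
    }
    return lines
}
```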
function parseArgs(raw) {
- let args = raw.split(' ').filter(arg => arg)
+ let args = raw.split(' ').filter((arg) => arg)
const result = {}
while (args.length) {
let opt = args.shift()
@@ -120,208 +169,219 @@ function parseArgs(raw) {
return result
}
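[Editor's note: the hunk above elides most of `parseArgs`; conceptually it turns a CLI string into a map of positionals and `--option value` pairs. A simplified standalone sketch of that idea (assumed behavior, not the component's exact logic):]

```javascript
// Simplified sketch of grouping a CLI string into option/value pairs,
// in the spirit of parseArgs above. Assumed semantics for illustration:
// positionals map to null, flags without a value map to true.
function parseArgsSketch(raw) {
    const args = raw.split(' ').filter((arg) => arg)
    const result = {}
    while (args.length) {
        const opt = args.shift()
        if (opt.startsWith('--')) {
            // An option consumes the next token as its value, unless
            // that token is itself another option.
            if (args.length && !args[0].startsWith('--')) {
                result[opt] = args.shift()
            } else {
                result[opt] = true
            }
        } else {
            // Positional arguments (e.g. the subcommand) map to null.
            result[opt] = null
        }
    }
    return result
}
```

For example, `parseArgsSketch('train config.cfg --output ./out --verbose')` groups the subcommand, the config positional, a valued option and a bare flag.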
-function convertLine(line, i) {
- const cliRegex = /^(\$ )?python -m spacy/
- if (cliRegex.test(line)) {
- const text = line.replace(cliRegex, '')
- const args = parseArgs(text)
- const cmd = Object.keys(args).map((key, i) => {
- const value = args[key]
- return value === null || value === true || i === 0 ? key : `${key} ${value}`
- })
- return (
-
-
- python -m
- {' '}
- spacy{' '}
- {cmd.map((item, j) => {
- const isCmd = j === 0
- const url = isCmd ? `/api/cli#${item.replace(' ', '-')}` : null
- const isAbstract = isString(item) && /^\[(.+)\]$/.test(item)
- const itemClassNames = classNames(classes.cliArg, {
- [classes.cliArgHighlight]: isCmd,
- [classes.cliArgEmphasis]: isAbstract,
- })
- const text = isAbstract ? item.slice(1, -1) : item
- return (
-
- {j !== 0 && ' '}
-
-
- {text}
-
-
-
- )
- })}
-
- )
+const flattenReact = (children) => {
+ if (children === null || children === undefined || children === false) {
+ return []
}
- const htmlLine = replacePrompt(highlightCode('bash', line), '$')
- return htmlToReact(htmlLine)
+
+ if (typeof children === 'string') {
+ return [children]
+ }
+
+ if (children.props) {
+ return flattenReact(children.props.children)
+ }
+
+ return children.flatMap(flattenReact)
}
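[Editor's note: `flattenReact` above walks a React children tree down to its string leaves. The same traversal over plain values — strings as leaves, arrays recursing, `null`/`undefined`/`false` vanishing — can be sketched as:]

```javascript
// Sketch of the traversal flattenReact performs, over plain values only.
// (The real helper additionally descends into element.props.children.)
function flattenSketch(children) {
    if (children === null || children === undefined || children === false) {
        return [] // falsy React children render nothing
    }
    if (typeof children === 'string') {
        return [children] // strings are the leaves we collect
    }
    if (Array.isArray(children)) {
        return children.flatMap(flattenSketch)
    }
    // A React element would be handled via children.props here.
    return []
}
```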
-function formatCode(html, lang, prompt) {
- if (lang === 'cli') {
- const lines = html
- .trim()
- .split('\n')
- .map(line =>
- line
- .split(' | ')
- .map((l, i) => convertLine(l, i))
- .map((l, j) => (
-
- {j !== 0 && | }
- {l}
-
- ))
- )
- return lines.map((line, i) => (
-
- {i !== 0 &&
}
- {line}
-
- ))
+const checkoutForComment = (line) => {
+ const lineParts = line.split(' # ')
+
+ if (lineParts.length !== 2) {
+ return line
}
- const result = html
- .split('\n')
- .map((line, i) => {
- let newLine = prompt ? replacePrompt(line, prompt, i === 0) : line
- if (lang === 'diff' && !line.startsWith('<')) {
- newLine = highlightCode('python', line)
- }
- return newLine
- })
- .join('\n')
- return htmlToReact(result)
+
+ return (
+ <>
+ {lineParts[0]}
+ {` `}
+
+ {`# `}
+ {lineParts[1]}
+
+ >
+ )
+}
+
+const handlePromot = ({ lineFlat, prompt }) => {
+ const lineWithoutPrompt = lineFlat.slice(prompt.length + 1)
+
+ const cliRegex = /^python -m spacy/
+
+ if (!cliRegex.test(lineWithoutPrompt)) {
+ return {checkoutForComment(lineWithoutPrompt)}
+ }
+
+ const text = lineWithoutPrompt.replace(cliRegex, '')
+ const args = parseArgs(text)
+ const cmd = Object.keys(args).map((key, i) => {
+ const value = args[key]
+ return value === null || value === true || i === 0 ? key : `${key} ${value}`
+ })
+ return (
+
+ python -m spacy{' '}
+ {cmd.map((item, j) => {
+ const isCmd = j === 0
+ const url = isCmd ? `/api/cli#${item.replace(' ', '-')}` : null
+ const isAbstract = isString(item) && /^\[(.+)\]$/.test(item)
+ const itemClassNames = classNames(classes['cli-arg'], {
+ [classes['cli-arg-highlight']]: isCmd,
+ [classes['cli-arg-emphasis']]: isAbstract,
+ })
+ const text = isAbstract ? item.slice(1, -1) : item
+ return (
+
+ {j !== 0 && ' '}
+
+
+ {text}
+
+
+
+ )
+ })}
+
+ )
+}
+
+const convertLine = ({ line, prompt, lang }) => {
+ const lineFlat = flattenReact(line).join('')
+ if (lineFlat.startsWith(`${prompt} `)) {
+ return handlePromot({ lineFlat, prompt })
+ }
+
+ return lang === 'none' || !lineFlat ? (
+ lineFlat
+ ) : (
+
+ )
+}
+
+const addLineHighlight = (children, highlight) => {
+ if (!highlight) {
+ return children
+ }
+ const listHighlight = rangeParser(highlight)
+
+ if (listHighlight.length === 0) {
+ return children
+ }
+
+ return children.map((child, index) => {
+ const isHighlight = listHighlight.includes(index + 1)
+ return (
+
+ {child}
+
+ )
+ })
+}
+
+export const CodeHighlighted = ({ children, highlight, lang }) => {
+ const [html, setHtml] = useState()
+
+ useEffect(
+ () =>
+ setHtml(
+ addLineHighlight(
+ splitLines(children).map((line) => convertLine({ line, prompt: '$', lang })),
+ highlight
+ )
+ ),
+ [children, highlight, lang]
+ )
+
+ return <>{html}>
}
export class Code extends React.Component {
- state = { Juniper: null }
-
static defaultProps = {
lang: 'none',
executable: null,
- github: false,
}
static propTypes = {
lang: PropTypes.string,
title: PropTypes.string,
executable: PropTypes.oneOf(['true', 'false', true, false, null]),
- github: PropTypes.oneOf(['true', 'false', true, false, null]),
+ github: PropTypes.string,
prompt: PropTypes.string,
highlight: PropTypes.string,
className: PropTypes.string,
children: PropTypes.node,
}
- updateJuniper() {
- if (this.state.Juniper == null && window.Juniper !== null) {
- this.setState({ Juniper: window.Juniper })
- }
- }
-
- componentDidMount() {
- this.updateJuniper()
- }
-
- componentDidUpdate() {
- this.updateJuniper()
- }
-
render() {
- const {
- lang,
- title,
- executable,
- github,
- prompt,
- wrap,
- highlight,
- className,
- children,
- } = this.props
- const codeClassNames = classNames(classes.code, className, `language-${lang}`, {
- [classes.wrap]: !!highlight || !!wrap || lang === 'cli',
- [classes.cli]: lang === 'cli',
+ const { lang, title, executable, github, wrap, highlight, className, children } = this.props
+ const codeClassNames = classNames(classes['code'], className, `language-${lang}`, {
+ [classes['wrap']]: !!highlight || !!wrap || lang === 'cli',
+ [classes['cli']]: lang === 'cli',
})
- const ghClassNames = classNames(codeClassNames, classes.maxHeight)
- const { Juniper } = this.state
+ const ghClassNames = classNames(codeClassNames, classes['max-height'])
if (github) {
- return
+ return
}
- if (!!executable && Juniper) {
+ if (!!executable) {
return (
-
+
{children}
)
}
- const codeText = Array.isArray(children) ? children.join('') : children || ''
- const highlightRange = highlight ? rangeParser.parse(highlight).filter(n => n > 0) : []
- const rawHtml = ['none', 'cli'].includes(lang)
- ? codeText
- : highlightCode(lang, codeText, highlightRange)
- const html = formatCode(rawHtml, lang, prompt)
return (
<>
- {title && {title}
}
- {html}
+ {title && {title}
}
+
+
+ {children}
+
+
>
)
}
}
-const JuniperWrapper = ({ Juniper, title, lang, children }) => (
- {
- const { binderUrl, binderBranch, binderVersion } = data.site.siteMetadata
- const juniperTitle = title || 'Editable Code'
- return (
-
-
- {juniperTitle}
-
- spaCy v{binderVersion} · Python 3 · via{' '}
-
- Binder
-
-
-
+const JuniperWrapper = ({ title, lang, children }) => {
+ const { binderUrl, binderVersion } = siteMetadata
+ const juniperTitle = title || 'Editable Code'
+ return (
+
+
+ {juniperTitle}
+
+ spaCy v{binderVersion} · Python 3 · via{' '}
+
+ Binder
+
+
+
-
- {children}
-
-
- )
- }}
- />
-)
-
-const query = graphql`
- query JuniperQuery {
- site {
- siteMetadata {
- binderUrl
- binderBranch
- binderVersion
- }
- }
- }
-`
+
+ {children}
+
+
+ )
+}
diff --git a/website/src/components/copy.js b/website/src/components/copy.js
index e622c0f84..4caabac98 100644
--- a/website/src/components/copy.js
+++ b/website/src/components/copy.js
@@ -1,4 +1,4 @@
-import React, { useState, useRef } from 'react'
+import React, { useState, useRef, useEffect } from 'react'
import Icon from './icon'
import classes from '../styles/copy.module.sass'
@@ -16,7 +16,11 @@ export function copyToClipboard(ref, callback) {
export default function CopyInput({ text, prefix }) {
const isClient = typeof window !== 'undefined'
- const supportsCopy = isClient && document.queryCommandSupported('copy')
+ const [supportsCopy, setSupportsCopy] = useState(false)
+
+ useEffect(() => {
+ setSupportsCopy(isClient && document.queryCommandSupported('copy'))
+ }, [isClient])
const textareaRef = useRef()
const [copySuccess, setCopySuccess] = useState(false)
const onClick = () => copyToClipboard(textareaRef, setCopySuccess)
diff --git a/website/src/components/dropdown.js b/website/src/components/dropdown.js
index ae5c42415..bccf4ccd0 100644
--- a/website/src/components/dropdown.js
+++ b/website/src/components/dropdown.js
@@ -1,17 +1,18 @@
import React from 'react'
import PropTypes from 'prop-types'
import classNames from 'classnames'
-import { navigate } from 'gatsby'
+import { useRouter } from 'next/router'
import classes from '../styles/dropdown.module.sass'
export default function Dropdown({ defaultValue, className, onChange, children }) {
+ const router = useRouter()
const defaultOnChange = ({ target }) => {
const isExternal = /((http(s?)):\/\/|mailto:)/gi.test(target.value)
if (isExternal) {
window.location.href = target.value
} else {
- navigate(target.value)
+ router.push(target.value)
}
}
return (
diff --git a/website/src/components/embed.js b/website/src/components/embed.js
index 9f959bc99..53f4e9184 100644
--- a/website/src/components/embed.js
+++ b/website/src/components/embed.js
@@ -1,11 +1,12 @@
import React, { Fragment } from 'react'
import PropTypes from 'prop-types'
import classNames from 'classnames'
+import ImageNext from 'next/image'
import Link from './link'
import Button from './button'
import { InlineCode } from './code'
-import { markdownToReact } from './util'
+import { MarkdownToReact } from './util'
import classes from '../styles/embed.module.sass'
@@ -57,18 +58,12 @@ SoundCloud.propTypes = {
color: PropTypes.string,
}
-function formatHTML(html) {
- const encoded = encodeURIComponent(html)
- return `${encoded}`
-}
-
-const Iframe = ({ title, src, html, width = 800, height = 300 }) => {
- const source = html ? `data:text/html,${formatHTML(html)}` : src
+const Iframe = ({ title, src, width = 800, height = 300 }) => {
return (