Fix broken links and add check_links shortcut script

commit 2ba4e4fc88
parent f5855e539b

@@ -246,7 +246,7 @@ p
 p
 | Check if user is running spaCy from a #[+a("https://jupyter.org") Jupyter]
 | notebook by detecting the IPython kernel. Mainly used for the
-| #[+api("displacy") #[code displacy]] visualizer.
+| #[+api("top-level#displacy") #[code displacy]] visualizer.

 +aside-code("Example").
 html = '<h1>Hello world!</h1>'
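
For context, a minimal sketch of what this check enables, assuming a spaCy v2-era install with an 'en' shortcut link set up (model name and example text are placeholders):

    # Sketch: render a dependency visualization; displaCy picks the right output
    # for a Jupyter notebook by checking util.is_in_jupyter(), as documented above.
    import spacy
    from spacy import displacy

    nlp = spacy.load('en')                  # assumes the 'en' shortcut link exists
    doc = nlp('Hello world!')

    if spacy.util.is_in_jupyter():
        displacy.render(doc, style='dep')   # displays inline in the notebook
    else:
        html = displacy.render(doc, style='dep', page=True)  # raw markup as a string
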
@@ -13,6 +13,8 @@
 },
 "dependencies": {},
 "scripts": {
+"check_links": "blc https://alpha.spacy.io -ro",
 "compile": "NODE_ENV=deploy harp compile",
 "rollup_js": "rollup www/assets/js/rollup.js --output.format iife --output.file www/assets/js/rollup.js",
 "compile_rollup": "babel www/assets/js/rollup.js --out-file www/assets/js/rollup.js --presets=es2015",
@@ -33,7 +33,7 @@ p
 OSError: symbolic link privilege not held

 p
-| To create #[+a("/usage/models/#usage") shortcut links] that let you
+| To create #[+a("/usage/models#usage") shortcut links] that let you
 | load models by name, spaCy creates a symbolic link in the
 | #[code spacy/data] directory. This means your user needs permission to do
 | this. The above error mostly occurs when doing a system-wide installation,
@@ -76,7 +76,7 @@ p
 p
 | As of spaCy v1.7, all models can be installed as Python packages. This means
 | that they'll become importable modules of your application. When creating
-| #[+a("/usage/models/#usage") shortcut links], spaCy will also try
+| #[+a("/usage/models#usage") shortcut links], spaCy will also try
 | to import the model to load its meta data. If this fails, it's usually a
 | sign that the package is not installed in the current environment.
 | Run #[code pip list] or #[code pip freeze] to check which model packages
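
As a quick illustration of the "importable modules" point in this hunk, a sketch using the en_core_web_sm package (assuming it is installed in the current environment):

    # A model installed as a Python package can be imported and loaded directly,
    # without a shortcut link.
    import en_core_web_sm      # assumes the model package was pip-installed

    nlp = en_core_web_sm.load()
    doc = nlp('This is a sentence.')
    print(doc[0].text)
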
@@ -93,9 +93,8 @@ p
 p
 | This error may occur when using #[code spacy.load()] to load
 | a language model – either because you haven't set up a
-| #[+a("/usage/models/#usage") shortcut link] for it, or because it
-| doesn't actually exist. Set up a
-| #[+a("/usage/models/#usage") shortcut link] for the model
+| #[+a("/usage/models#usage") shortcut link] for it, or because it
+| doesn't actually exist. Set up a link for the model
 | you want to load. This can either be an installed model package, or a
 | local directory containing the model data. If you want to use one of the
 | #[+a("/usage/models#languages") alpha tokenizers] for
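
A small sketch of the loading options the paragraph above describes; the link name and path are placeholders:

    import spacy

    # Load via a shortcut link (must have been set up beforehand, e.g. by
    # linking an installed model package to the name 'en'):
    nlp = spacy.load('en')

    # Or load directly from a local directory containing the model data
    # (hypothetical path):
    nlp = spacy.load('/path/to/en_model')
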
@@ -187,8 +187,8 @@ p
 | The best way to understand spaCy's dependency parser is interactively.
 | To make this easier, spaCy v2.0+ comes with a visualization module. Simply
 | pass a #[code Doc] or a list of #[code Doc] objects to
-| displaCy and run #[+api("displacy#serve") #[code displacy.serve]] to
-| run the web server, or #[+api("displacy#render") #[code displacy.render]]
+| displaCy and run #[+api("top-level#displacy.serve") #[code displacy.serve]] to
+| run the web server, or #[+api("top-level#displacy.render") #[code displacy.render]]
 | to generate the raw markup. If you want to know how to write rules that
 | hook into some type of syntactic construction, just plug the sentence into
 | the visualizer and see how spaCy annotates it.
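
For reference, a minimal sketch of the two displaCy calls mentioned in this hunk (spaCy v2-era API; the example sentence is arbitrary):

    import spacy
    from spacy import displacy

    nlp = spacy.load('en')
    doc = nlp('Autonomous cars shift insurance liability toward manufacturers.')

    svg = displacy.render(doc, style='dep')   # raw markup as a string
    displacy.serve(doc, style='dep')          # or serve it on a local web server (blocks)
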
@@ -209,7 +209,7 @@ p
 p
 | In the #[+a("/models") default models], the parser is loaded and enabled
 | as part of the
-| #[+a("docs/usage/language-processing-pipelines") standard processing pipeline].
+| #[+a("/usage/processing-pipelines") standard processing pipeline].
 | If you don't need any of the syntactic information, you should disable
 | the parser. Disabling the parser will make spaCy load and run much faster.
 | If you want to load the parser, but need to disable it for specific
@@ -228,7 +228,7 @@ p
 | #[+a("/usage/processing-pipelines") pipeline component names].
 | This lets you disable both default and custom components when loading
 | a model, or initialising a Language class via
-| #[+api("language-from_disk") #[code from_disk]].
+| #[+api("language#from_disk") #[code from_disk]].
 +code-new.
 nlp = spacy.load('en', disable=['parser'])
 doc = nlp(u"I don't want parsed", disable=['parser'])
@@ -59,7 +59,7 @@ p
 +annotation-row(["delivery", 2, "O", '""', "outside an entity"], style)
 +annotation-row(["robots", 2, "O", '""', "outside an entity"], style)

-+h(3, "setting") Setting entity annotations
++h(3, "setting-entities") Setting entity annotations

 p
 | To ensure that the sequence of token annotations remains consistent, you
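
The "Setting entity annotations" section this anchor points to boils down to assigning spans to doc.ents; a minimal sketch (spaCy v2-era API, example text and label chosen arbitrarily):

    import spacy
    from spacy.tokens import Span

    nlp = spacy.load('en')
    doc = nlp('fb is hiring a new VP of global policy')

    # Create a Span over token 0 with the label ORG and overwrite doc.ents with it.
    org = doc.vocab.strings['ORG']            # hash value of the entity label
    doc.ents = [Span(doc, 0, 1, label=org)]
    print([(ent.text, ent.label_) for ent in doc.ents])
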
@@ -186,8 +186,8 @@ p
 | If you're training a model, it's very useful to run the visualization
 | yourself. To help you do that, spaCy v2.0+ comes with a visualization
 | module. Simply pass a #[code Doc] or a list of #[code Doc] objects to
-| displaCy and run #[+api("displacy#serve") #[code displacy.serve]] to
-| run the web server, or #[+api("displacy#render") #[code displacy.render]]
+| displaCy and run #[+api("top-level#displacy.serve") #[code displacy.serve]] to
+| run the web server, or #[+api("top-level#displacy.render") #[code displacy.render]]
 | to generate the raw markup.

 p
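
As a companion to the hunk above, a sketch of rendering several Doc objects at once and saving the markup (the texts and output file name are placeholders):

    import spacy
    from spacy import displacy

    nlp = spacy.load('en')
    docs = [nlp(text) for text in ['First example text.', 'Second example text.']]

    # page=True wraps the markup in a full HTML page, handy for saving to disk.
    html = displacy.render(docs, style='ent', page=True)
    with open('entities.html', 'w', encoding='utf8') as f:   # hypothetical output file
        f.write(html)
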
@@ -7,11 +7,11 @@ p
 | functions. A pipeline component can be added to an already existing
 | #[code nlp] object, specified when initialising a #[code Language] class,
 | or defined within a
-| #[+a("/usage/saving-loading#models-generating") model package].
+| #[+a("/usage/training#saving-loading") model package].

 p
 | When you load a model, spaCy first consults the model's
-| #[+a("/usage/saving-loading#models-generating") #[code meta.json]]. The
+| #[+a("/usage/training#saving-loading") #[code meta.json]]. The
 | meta typically includes the model details, the ID of a language class,
 | and an optional list of pipeline components. spaCy then does the
 | following:
@@ -27,7 +27,7 @@ p
 +list("numbers")
 +item
 | Load the #[strong language class and data] for the given ID via
-| #[+api("util.get_lang_class") #[code get_lang_class]] and initialise
+| #[+api("top-level#util.get_lang_class") #[code get_lang_class]] and initialise
 | it. The #[code Language] class contains the shared vocabulary,
 | tokenization rules and the language-specific annotation scheme.
 +item
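
A rough sketch of that first step, assuming the language ID 'en' (simplified version of what loading does internally):

    import spacy

    # Resolve the language class for the ID 'en' and initialise it: this gives a
    # blank pipeline with the shared vocab, tokenization rules and annotation scheme.
    lang_cls = spacy.util.get_lang_class('en')
    nlp = lang_cls()
    doc = nlp('This text is only tokenized, nothing more.')
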
@@ -12,9 +12,9 @@ include ../_spacy-101/_serialization

 p
 | For simplicity, let's assume you've
-| #[+a("/usage/entity-recognition#setting") added custom entities] to
+| #[+a("/usage/linguistic-features#setting-entities") added custom entities] to
 | a #[code Doc], either manually, or by using a
-| #[+a("/usage/rule-based-matching#on_match") match pattern]. You can
+| #[+a("/usage/linguistic-features#on_match") match pattern]. You can
 | save it locally by calling #[+api("doc#to_disk") #[code Doc.to_disk()]],
 | and load it again via #[+api("doc#from_disk") #[code Doc.from_disk()]].
 | This will overwrite the existing object and return it.
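
For orientation, a minimal round trip with the two methods named above (paths are placeholders; spaCy v2-era API):

    import spacy
    from spacy.tokens import Doc
    from spacy.vocab import Vocab

    nlp = spacy.load('en')
    doc = nlp('Some text with a custom entity.')
    doc.to_disk('/tmp/example_doc')            # hypothetical path

    # Loading overwrites the freshly created Doc and returns it.
    new_doc = Doc(Vocab()).from_disk('/tmp/example_doc')
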
@@ -153,7 +153,7 @@ p
 displacy.serve(doc_ent, style='ent')

 +infobox
-| #[+label-inline API:] #[+api("displacy") #[code displacy]]
+| #[+label-inline API:] #[+api("top-level#displacy") #[code displacy]]
 | #[+label-inline Usage:] #[+a("/usage/visualizers") Visualizers]

 +h(3, "lightning-tour-word-vectors") Get word vectors and similarity
@@ -164,14 +164,17 @@ p
 | The improved #[code spacy.load] makes loading models easier and more
 | transparent. You can load a model by supplying its
 | #[+a("/usage/models#usage") shortcut link], the name of an installed
-| #[+a("/usage/saving-loading#generating") model package] or a path.
-| The #[code Language] class to initialise will be determined based on the
-| model's settings. For a blank language, you can import the class directly,
-| e.g. #[code from spacy.lang.en import English].
+| #[+a("/models") model package] or a path. The #[code Language] class to
+| initialise will be determined based on the model's settings. For a blank l
+| anguage, you can import the class directly, e.g.
+| #[code.u-break from spacy.lang.en import English] or use
+| #[+api("spacy#blank") #[code spacy.blank()]].

 +infobox
-| #[+label-inline API:] #[+api("spacy#load") #[code spacy.load]]
-| #[+label-inline Usage:] #[+a("/usage/saving-loading") Saving and loading]
+| #[+label-inline API:] #[+api("spacy#load") #[code spacy.load]],
+| #[+api("language#to_disk") #[code Language.to_disk]]
+| #[+label-inline Usage:] #[+a("/usage/models#usage") Models],
+| #[+a("/usage/training#saving-loading") Saving and loading]

 +h(3, "features-displacy") displaCy visualizer with Jupyter support

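
The three loading styles the new text describes, as a short sketch (model and language names are examples):

    import spacy
    from spacy.lang.en import English

    nlp = spacy.load('en')          # shortcut link, package name or path
    nlp = spacy.blank('en')         # blank pipeline for a language ID
    nlp = English()                 # or import the Language subclass directly
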
@@ -190,7 +193,7 @@ p
 | visualizations in your notebook.

 +infobox
-| #[+label-inline API:] #[+api("displacy") #[code displacy]]
+| #[+label-inline API:] #[+api("top-level#displacy") #[code displacy]]
 | #[+label-inline Usage:] #[+a("/usage/visualizers") Visualizing spaCy]

 +h(3, "features-language") Improved language data and lazy loading
@@ -222,7 +225,7 @@ p

 p
 | Patterns can now be added to the matcher by calling
-| #[+api("matcher-add") #[code matcher.add()]] with a match ID, an optional
+| #[+api("matcher#add") #[code matcher.add()]] with a match ID, an optional
 | callback function to be invoked on each match, and one or more patterns.
 | This allows you to write powerful, pattern-specific logic using only one
 | matcher. For example, you might only want to merge some entity types,
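
A compact sketch of the matcher.add() call described above: a match ID, an optional on-match callback (None here), and one or more patterns (spaCy v2-era API):

    import spacy
    from spacy.matcher import Matcher

    nlp = spacy.load('en')
    matcher = Matcher(nlp.vocab)

    pattern = [{'LOWER': 'hello'}, {'IS_PUNCT': True}, {'LOWER': 'world'}]
    matcher.add('HelloWorld', None, pattern)   # ID, callback, *patterns

    doc = nlp('Hello, world! Hello world!')
    matches = matcher(doc)                     # list of (match_id, start, end) tuples
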
@@ -234,4 +237,5 @@ p
 +infobox
 | #[+label-inline API:] #[+api("matcher") #[code Matcher]],
 | #[+api("phrasematcher") #[code PhraseMatcher]]
-| #[+label-inline Usage:] #[+a("/usage/rule-based-matching") Rule-based matching]
+| #[+label-inline Usage:]
+| #[+a("/usage/linguistic-features#rule-based-matching") Rule-based matching]
@@ -64,7 +64,7 @@ p

 p
 | If you've been using custom pipeline components, check out the new
-| guide on #[+a("/usage/language-processing-pipelines") processing pipelines].
+| guide on #[+a("/usage/processing-pipelines") processing pipelines].
 | Pipeline components are now #[code (name, func)] tuples. Appending
 | them to the pipeline still works – but the
 | #[+api("language#add_pipe") #[code add_pipe]] method now makes this
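
A sketch of a custom component added via add_pipe, along the lines of the new wording (the component name and behaviour are made up):

    import spacy

    def print_length(doc):
        # A trivial custom component: it receives the Doc and must return it.
        print('Doc length:', len(doc))
        return doc

    nlp = spacy.load('en')
    nlp.add_pipe(print_length, name='print_length', last=True)
    doc = nlp('This pipeline now ends with a custom component.')
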
@@ -191,7 +191,7 @@ p
 | matcher now also supports string keys, which saves you an extra import.
 | If you've been using #[strong acceptor functions], you'll need to move
 | this logic into the
-| #[+a("/usage/rule-based-matching#on_match") #[code on_match] callbacks].
+| #[+a("/usage/linguistic-features#on_match") #[code on_match] callbacks].
 | The callback function is invoked on every match and will give you access to
 | the doc, the index of the current match and all total matches. This lets
 | you both accept or reject the match, and define the actions to be
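
A sketch of the on_match callback that acceptor logic moves into; the callback receives the matcher, the doc, the index of the current match and the full list of matches (pattern and behaviour are illustrative):

    import spacy
    from spacy.matcher import Matcher

    def on_match(matcher, doc, i, matches):
        # Inspect or act on the current match; the return value is ignored.
        match_id, start, end = matches[i]
        print('Matched:', doc[start:end].text)

    nlp = spacy.load('en')
    matcher = Matcher(nlp.vocab)
    matcher.add('GREETING', on_match, [{'LOWER': 'hello'}, {'LOWER': 'world'}])
    matcher(nlp('hello world'))
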