Merge github.com:explosion/spaCy

Roman Domrachev 2017-11-14 17:46:22 +03:00
commit 86ca434c93
24 changed files with 374 additions and 39 deletions

.github/contributors/DuyguA.md (new file, +106 lines)

@@ -0,0 +1,106 @@
# spaCy contributor agreement
This spaCy Contributor Agreement (**"SCA"**) is based on the
[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
The SCA applies to any contribution that you make to any product or project
managed by us (the **"project"**), and sets out the intellectual property rights
you grant to us in the contributed materials. The term **"us"** shall mean
[ExplosionAI UG (haftungsbeschränkt)](https://explosion.ai/legal). The term
**"you"** shall mean the person or entity identified below.
If you agree to be bound by these terms, fill in the information requested
below and include the filled-in version with your first pull request, under the
folder [`.github/contributors/`](/.github/contributors/). The name of the file
should be your GitHub username, with the extension `.md`. For example, the user
example_user would create the file `.github/contributors/example_user.md`.
Read this agreement carefully before signing. These terms and conditions
constitute a binding legal agreement.
## Contributor Agreement
1. The term "contribution" or "contributed materials" means any source code,
object code, patch, tool, sample, graphic, specification, manual,
documentation, or any other material posted or submitted by you to the project.
2. With respect to any worldwide copyrights, or copyright applications and
registrations, in your contribution:
* you hereby assign to us joint ownership, and to the extent that such
assignment is or becomes invalid, ineffective or unenforceable, you hereby
grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
royalty-free, unrestricted license to exercise all rights under those
copyrights. This includes, at our option, the right to sublicense these same
rights to third parties through multiple levels of sublicensees or other
licensing arrangements;
* you agree that each of us can do all things in relation to your
contribution as if each of us were the sole owners, and if one of us makes
a derivative work of your contribution, the one who makes the derivative
work (or has it made) will be the sole owner of that derivative work;
* you agree that you will not assert any moral rights in your contribution
against us, our licensees or transferees;
* you agree that we may register a copyright in your contribution and
exercise all ownership rights associated with it; and
* you agree that neither of us has any duty to consult with, obtain the
consent of, pay or render an accounting to the other for any use or
distribution of your contribution.
3. With respect to any patents you own, or that you can license without payment
to any third party, you hereby grant to us a perpetual, irrevocable,
non-exclusive, worldwide, no-charge, royalty-free license to:
* make, have made, use, sell, offer to sell, import, and otherwise transfer
your contribution in whole or in part, alone or in combination with or
included in any product, work or materials arising out of the project to
which your contribution was submitted, and
* at our option, to sublicense these same rights to third parties through
multiple levels of sublicensees or other licensing arrangements.
4. Except as set out above, you keep all right, title, and interest in your
contribution. The rights that you grant to us under these terms are effective
on the date you first submitted a contribution to us, even if your submission
took place before the date you sign these terms.
5. You covenant, represent, warrant and agree that:
* Each contribution that you submit is and shall be an original work of
authorship and you can legally grant the rights set out in this SCA;
* to the best of your knowledge, each contribution will not violate any
third party's copyrights, trademarks, patents, or other intellectual
property rights; and
* each contribution shall be in compliance with U.S. export control laws and
other applicable export and import laws. You agree to notify us if you
become aware of any circumstance which would make any of the foregoing
representations inaccurate in any respect. We may publicly disclose your
participation in the project, including the fact that you have signed the SCA.
6. This SCA is governed by the laws of the State of California and applicable
U.S. Federal law. Any choice of law rules will not apply.
7. Please place an “x” next to one of the applicable statements below. Please do NOT
mark both statements:
* [x] I am signing on behalf of myself as an individual and no other person
or entity, including my employer, has or will have rights with respect to my
contributions.
* [ ] I am signing on behalf of my employer or a legal entity and I have the
actual authority to contractually bind that entity.
## Contributor Details
| Field | Entry |
|------------------------------- | -------------------- |
| Name | Duygu Altinok |
| Company name (if applicable) | |
| Title or role (if applicable) | |
| Date | 13 November 2017 |
| GitHub username | DuyguA |
| Website (optional) | |

.github/contributors/abhi18av.md (new file, +106 lines)

@@ -0,0 +1,106 @@
# spaCy contributor agreement
*(Identical agreement text to `.github/contributors/DuyguA.md` above.)*
## Contributor Details
| Field | Entry |
|------------------------------- | -------------------- |
| Name | Abhinav Sharma |
| Company name (if applicable) | Fourtek I.T. Solutions Pvt. Ltd. |
| Title or role (if applicable) | Machine Learning Engineer |
| Date | 3 November 2017 |
| GitHub username | abhi18av |
| Website (optional) | https://abhi18av.github.io/ |

.gitignore (+3 lines)

@@ -97,3 +97,6 @@ Desktop.ini
# Other
*.tgz
# PyCharm project files
*.idea


@@ -88,8 +88,10 @@ requests:
| [`models`](https://github.com/explosion/spaCy/labels/models), `language / [name]` | Issues related to the specific [models](https://github.com/explosion/spacy-models), languages and data |
| [`linux`](https://github.com/explosion/spaCy/labels/linux), [`osx`](https://github.com/explosion/spaCy/labels/osx), [`windows`](https://github.com/explosion/spaCy/labels/windows) | Issues related to the specific operating systems |
| [`pip`](https://github.com/explosion/spaCy/labels/pip), [`conda`](https://github.com/explosion/spaCy/labels/conda) | Issues related to the specific package managers |
| [`wip`](https://github.com/explosion/spaCy/labels/wip) | Work in progress, mostly used for pull requests. |
| [`wip`](https://github.com/explosion/spaCy/labels/wip) | Work in progress, mostly used for pull requests |
| [`v1`](https://github.com/explosion/spaCy/labels/v1) | Reports related to spaCy v1.x |
| [`duplicate`](https://github.com/explosion/spaCy/labels/duplicate) | Duplicates, i.e. issues that have been reported before |
| [`third-party`](https://github.com/explosion/spaCy/labels/third-party) | Issues related to third-party packages and services |
| [`meta`](https://github.com/explosion/spaCy/labels/meta) | Meta topics, e.g. repo organisation and issue management |
| [`help wanted`](https://github.com/explosion/spaCy/labels/help%20wanted), [`help wanted (easy)`](https://github.com/explosion/spaCy/labels/help%20wanted%20%28easy%29) | Requests for contributions |


@@ -30,7 +30,7 @@ def main(vectors_loc, lang=None):
nlp.vocab.reset_vectors(width=int(nr_dim))
for line in file_:
line = line.decode('utf8')
pieces = line.split()
pieces = line.rsplit(' ', nr_dim)
word = pieces[0]
vector = numpy.asarray([float(v) for v in pieces[1:]], dtype='f')
nlp.vocab.set_vector(word, vector) # add the vectors to the vocab
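Splitting from the right matters here because `.vec` entries can have keys that themselves contain spaces: `rsplit(' ', nr_dim)` peels off exactly `nr_dim` vector components and leaves everything else as the key. A minimal sketch of the difference (the example line is made up):

```python
import numpy

# Hypothetical .vec entry whose key contains a space; nr_dim would
# normally come from the file header.
line = 'New York -0.1 0.2 0.3'
nr_dim = 3

pieces = line.rsplit(' ', nr_dim)   # split off exactly nr_dim fields from the right
word = pieces[0]                    # 'New York' -- the key survives intact
vector = numpy.asarray([float(v) for v in pieces[1:]], dtype='f')

# line.split() would instead yield ['New', 'York', '-0.1', ...],
# corrupting both the key and the vector.
assert word == 'New York' and vector.shape == (3,)
```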


@@ -92,14 +92,29 @@ def _zero_init(model):
@layerize
def _preprocess_doc(docs, drop=0.):
keys = [doc.to_array([LOWER]) for doc in docs]
keys = [doc.to_array(LOWER) for doc in docs]
ops = Model.ops
# The dtype here matches what thinc is expecting -- which differs per
# platform (by int definition). This should be fixed once the problem
# is fixed on Thinc's side.
lengths = ops.asarray([arr.shape[0] for arr in keys], dtype=numpy.int_)
keys = ops.xp.concatenate(keys)
vals = ops.allocate(keys.shape[0]) + 1
vals = ops.allocate(keys.shape) + 1.
return (keys, vals, lengths), None
@layerize
def _preprocess_doc_bigrams(docs, drop=0.):
unigrams = [doc.to_array(LOWER) for doc in docs]
ops = Model.ops
bigrams = [ops.ngrams(2, doc_unis) for doc_unis in unigrams]
keys = [ops.xp.concatenate(feats) for feats in zip(unigrams, bigrams)]
keys, vals = zip(*[ops.xp.unique(k, return_counts=True) for k in keys])
# The dtype here matches what thinc is expecting -- which differs per
# platform (by int definition). This should be fixed once the problem
# is fixed on Thinc's side.
lengths = ops.asarray([arr.shape[0] for arr in keys], dtype=numpy.int_)
keys = ops.xp.concatenate(keys)
vals = ops.asarray(ops.xp.concatenate(vals), dtype='f')
return (keys, vals, lengths), None
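The new `_preprocess_doc_bigrams` extends the linear model's features from unigrams to hashed bigrams, then collapses repeats with `unique(..., return_counts=True)` so each key carries its count as its value. A rough standalone sketch of that counting step in plain numpy (the ids and the hashing are stand-ins for thinc's `ops.ngrams`):

```python
import numpy

def ngrams(n, keys):
    # Stand-in for ops.ngrams: fold each window of n ids into a single id.
    return numpy.asarray([hash(tuple(keys[i:i + n]))
                          for i in range(len(keys) - n + 1)], dtype=numpy.int_)

unigrams = numpy.asarray([5, 7, 5, 9], dtype=numpy.int_)  # fake LOWER ids for one doc
feats = numpy.concatenate([unigrams, ngrams(2, unigrams)])

# Deduplicate and count: these become the (keys, vals) pairs fed to the
# linear model, with lengths marking each doc's slice.
keys, vals = numpy.unique(feats, return_counts=True)
```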
@@ -514,8 +529,9 @@ def build_text_classifier(nr_class, width=64, **cfg):
linear_model = (
_preprocess_doc
>> LinearModel(nr_class, drop_factor=0.)
>> LinearModel(nr_class)
)
#model = linear_model >> logistic
model = (
(linear_model | cnn_model)


@@ -459,6 +459,8 @@ _exc = {
"disorganised": "disorganized",
"distil": "distill",
"distils": "distills",
"doin": "doing",
"doin'": "doing",
"dramatisation": "dramatization",
"dramatisations": "dramatizations",
"dramatise": "dramatize",
@@ -687,6 +689,8 @@ _exc = {
"globalises": "globalizes",
"globalising": "globalizing",
"glueing ": "gluing ",
"goin": "going",
"goin'":"going",
"goitre": "goiter",
"goitres": "goiters",
"gonorrhoea": "gonorrhea",
@@ -733,6 +737,8 @@ _exc = {
"harmonised": "harmonized",
"harmonises": "harmonizes",
"harmonising": "harmonizing",
"havin": "having",
"havin'": "having",
"homoeopath": "homeopath",
"homoeopathic": "homeopathic",
"homoeopaths": "homeopaths",
@@ -924,6 +930,8 @@ _exc = {
"localised": "localized",
"localises": "localizes",
"localising": "localizing",
"lovin": "loving",
"lovin'": "loving",
"louvre": "louver",
"louvred": "louvered",
"louvres": "louvers ",


@@ -387,6 +387,21 @@ for exc_data in [
{ORTH: "O'clock", LEMMA: "o'clock", NORM: "o'clock"},
{ORTH: "lovin'", LEMMA: "love", NORM: "loving"},
{ORTH: "Lovin'", LEMMA: "love", NORM: "loving"},
{ORTH: "lovin", LEMMA: "love", NORM: "loving"},
{ORTH: "Lovin", LEMMA: "love", NORM: "loving"},
{ORTH: "havin'", LEMMA: "have", NORM: "having"},
{ORTH: "Havin'", LEMMA: "have", NORM: "having"},
{ORTH: "havin", LEMMA: "have", NORM: "having"},
{ORTH: "Havin", LEMMA: "have", NORM: "having"},
{ORTH: "doin'", LEMMA: "do", NORM: "doing"},
{ORTH: "Doin'", LEMMA: "do", NORM: "doing"},
{ORTH: "doin", LEMMA: "do", NORM: "doing"},
{ORTH: "Doin", LEMMA: "do", NORM: "doing"},
{ORTH: "goin'", LEMMA: "go", NORM: "going"},
{ORTH: "Goin'", LEMMA: "go", NORM: "going"},
{ORTH: "goin", LEMMA: "go", NORM: "going"},
{ORTH: "Goin", LEMMA: "go", NORM: "going"},
{ORTH: "Mt.", LEMMA: "Mount", NORM: "Mount"},
{ORTH: "Ak.", LEMMA: "Alaska", NORM: "Alaska"},


@@ -5,14 +5,23 @@ from __future__ import unicode_literals
# Source: https://github.com/taranjeet/hindi-tokenizer/blob/master/stopwords.txt
STOP_WORDS = set("""
[Devanagari stop word entries; not legible in this copy]
""".split())


@@ -561,9 +561,9 @@ class Language(object):
old_refs, recent_refs = recent_refs, old_refs
self.vocab.strings._cleanup_stale_strings()
nr_seen = 0
# Last batch can be not garbage collected and we cannot know it — last
# doc still here. Not erase that strings — just extend with original
# content
# We can't know which strings from the last batch have really expired.
# So we don't erase the strings — we just extend with the original
# content.
for string in original_strings_data:
self.vocab.strings.add(string)


@@ -251,7 +251,7 @@ cdef class StringStore:
def _cleanup_stale_strings(self):
if self.hits.size() == 0:
# If no any hits — just skip cleanup
# If we don't have any hits, just skip cleanup
return
cdef vector[hash_t] tmp


@@ -66,12 +66,6 @@ cdef class ParserBeam(object):
self.beams.append(beam)
self.dones = [False] * len(self.beams)
def __dealloc__(self):
if self.beams is not None:
for beam in self.beams:
if beam is not None:
_cleanup(beam)
@property
def is_done(self):
return all(b.is_done or self.dones[i]
@@ -222,7 +216,8 @@ def update_beam(TransitionSystem moves, int nr_feature, int max_steps,
histories.append([])
losses.append([])
states_d_scores = get_gradient(moves.n_moves, beam_maps, histories, losses)
return states_d_scores, backprops[:len(states_d_scores)]
beams = list(pbeam.beams) + list(gbeam.beams)
return states_d_scores, backprops[:len(states_d_scores)], beams
def get_states(pbeams, gbeams, beam_map, nr_update):


@@ -374,6 +374,8 @@ cdef class Parser:
parse_states.append(<StateClass>beam.at(0))
self.set_annotations(subbatch, parse_states, tensors=tokvecs)
yield from batch
for beam in beams:
_cleanup(beam)
def parse_batch(self, docs):
cdef:
@@ -609,7 +611,7 @@ cdef class Parser:
cuda_stream = util.get_cuda_stream()
(tokvecs, bp_tokvecs), state2vec, vec2scores = self.get_batch_model(
docs, cuda_stream, drop)
states_d_scores, backprops = _beam_utils.update_beam(
states_d_scores, backprops, beams = _beam_utils.update_beam(
self.moves, self.nr_feature, 500, states, golds, state2vec,
vec2scores, width, density, self.cfg.get('hist_size', 0),
drop=drop, losses=losses)
@@ -634,6 +636,10 @@ cdef class Parser:
d_tokvecs = state2vec.ops.allocate((tokvecs.shape[0]+1, tokvecs.shape[1]))
self._make_updates(d_tokvecs, bp_tokvecs, backprop_lower, sgd,
cuda_stream)
cdef Beam beam
for beam in beams:
_cleanup(beam)
def _init_gold_batch(self, whole_docs, whole_golds):
"""Make a square batch, of length equal to the shortest doc. A long


@@ -82,7 +82,7 @@
}
],
"V_CSS": "2.0.0",
"V_CSS": "2.0.1",
"V_JS": "2.0.1",
"DEFAULT_SYNTAX": "python",
"ANALYTICS": "UA-58931649-1",


@@ -312,6 +312,14 @@ mixin github(repo, file, height, alt_file, language)
+button(gh(repo, alt_file || file), false, "primary", "small") View on GitHub
//- Youtube video embed
id - [string] ID of YouTube video.
ratio - [string] Video ratio, "16x9" or "4x3".
mixin youtube(id, ratio)
figure.o-video.o-block(class="o-video--" + (ratio || "16x9"))
iframe.o-video__iframe(src="https://www.youtube.com/embed/#{id}" frameborder="0" height="500" allowfullscreen)
//- Images / figures
url - [string] url or path to image


@@ -562,7 +562,7 @@ p
+cell #[code orth_]
+cell unicode
+cell
| Verbatim text content (identical to #[code Span.text]). Existst
| Verbatim text content (identical to #[code Span.text]). Exists
| mostly for consistency with the other attributes.
+row


@@ -177,6 +177,22 @@
border-radius: $border-radius
//- Responsive Video embeds
.o-video
position: relative
height: 0
@each $ratio1, $ratio2 in (16, 9), (4, 3)
&.o-video--#{$ratio1}x#{$ratio2}
padding-bottom: (100% * $ratio2 / $ratio1)
.o-video__iframe
@include position(absolute, top, left, 0, 0)
@include size(100%)
border-radius: var(--border-radius)
//- Form fields
.o-field


@@ -376,7 +376,7 @@ p
p
| Here's an example from the English
| #[+src(gh("spaCy", "spacy/en/lang/lex_attrs.py")) #[code lex_attrs.py]]:
| #[+src(gh("spaCy", "spacy/lang/en/lex_attrs.py")) #[code lex_attrs.py]]:
+code("lex_attrs.py").
_num_words = ['zero', 'one', 'two', 'three', 'four', 'five', 'six', 'seven',


@@ -166,6 +166,7 @@
"Demos & Visualizations": "demos",
"Books & Courses": "books",
"Jupyter Notebooks": "notebooks",
"Videos": "videos",
"Research": "research"
}
},


@@ -30,19 +30,12 @@ p
+h(3, "conda") conda
+badge("https://anaconda.org/conda-forge/spacy/badges/version.svg", "https://anaconda.org/conda-forge/spacy")
+infobox("Important note", "⚠️")
| We're still waiting for spaCy v2.0 to go live on #[code conda-forge],
| as there's currently a backlog of OSX builds on Travis.
| In the meantime, you can already try out the new version using pip. The
| conda download will follow as soon as possible.
p
| Thanks to our great community, we've finally re-added conda support. You
| can now install spaCy via #[code conda-forge]:
+code(false, "bash").
conda config --add channels conda-forge
conda install spacy
conda install -c conda-forge spacy
p
| For the feedstock including the build recipe and configuration, check out
@@ -191,7 +184,8 @@ p
+h(4, "source-windows") Windows
p
| Install a version of
| Install a version of the
| #[+a("http://landinghub.visualstudio.com/visual-cpp-build-tools") Visual C++ Bulild Tools] or
| #[+a("https://www.visualstudio.com/vs/visual-studio-express/") Visual Studio Express]
| that matches the version that was used to compile your Python
| interpreter. For official distributions these are:


@@ -3,12 +3,6 @@
- QUICKSTART[QUICKSTART.length - 1].options = Object.keys(MODELS).map(m => ({ id: m, title: LANGUAGES[m] }))
+quickstart(QUICKSTART, "Quickstart")
+qs({package: 'conda'}) # Important note: We're still waiting for spaCy v2.0 to go
+qs({package: 'conda'}) # live on conda, due to a backlog of OSX builds on Travis.
+qs({package: 'conda'}) # In the meantime, you can download spaCy via pip.
+qs({package: 'conda'}, "divider")
+qs({package: 'conda'}) pip install -U spacy
+qs({config: 'venv', python: 2}) python -m pip install -U virtualenv
+qs({config: 'venv', python: 3}) python -m pip install -U venv
+qs({config: 'venv', python: 2}) virtualenv .env
@@ -18,7 +12,7 @@
+qs({config: 'venv', os: 'windows'}) .env\Scripts\activate
+qs({package: 'pip'}) pip install -U spacy
//-+qs({package: 'conda'}) conda install -c conda-forge spacy
+qs({package: 'conda'}) conda install -c conda-forge spacy
+qs({package: 'source'}) git clone https://github.com/explosion/spaCy
+qs({package: 'source'}) cd spaCy


@@ -39,7 +39,7 @@ p
return doc
nlp = spacy.load('en')
nlp.pipeline.add_pipe(my_component, name='print_info', first=True)
nlp.add_pipe(my_component, name='print_info', first=True)
print(nlp.pipe_names) # ['print_info', 'tagger', 'parser', 'ner']
doc = nlp(u"This is a sentence.")
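For reference, the corrected snippet in full, as a runnable sketch (the component body is illustrative, and the `en` model is assumed to be installed):

```python
import spacy

def my_component(doc):
    # Illustrative body -- the docs example just inspects the doc and
    # passes it along.
    print("Doc has %d tokens." % len(doc))
    return doc

nlp = spacy.load('en')
nlp.add_pipe(my_component, name='print_info', first=True)
print(nlp.pipe_names)  # ['print_info', 'tagger', 'parser', 'ner']
doc = nlp(u"This is a sentence.")
```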


@@ -55,6 +55,6 @@ p
p
| While punctuation rules are usually pretty general, tokenizer exceptions
| strongly depend on the specifics of the individual language. This is
| why each #[+a("/models/#languages") available language] has its
| why each #[+a("/usage/models#languages") available language] has its
| own subclass like #[code English] or #[code German], that loads in lists
| of hard-coded data and exception rules.


@@ -114,6 +114,11 @@ include ../_includes/_mixins
.u-text-right
+button(gh("spacy-notebooks"), false, "primary", "small") See more notebooks on GitHub
+section("videos")
+h(2, "videos") Videos
+youtube("sqDHBH9IjRU")
+section("research")
+h(2, "research") Research systems