mirror of https://github.com/explosion/spaCy.git
synced 2024-11-11 04:08:09 +03:00

commit f2c48ef504: Resolve stopwords conflict to merge Dutch

.github/contributors/RvanNieuwpoort.md (vendored executable file, +107)
@@ -0,0 +1,107 @@
# spaCy contributor agreement

This spaCy Contributor Agreement (**"SCA"**) is based on the
[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
The SCA applies to any contribution that you make to any product or project
managed by us (the **"project"**), and sets out the intellectual property rights
you grant to us in the contributed materials. The term **"us"** shall mean
[ExplosionAI UG (haftungsbeschränkt)](https://explosion.ai/legal). The term
**"you"** shall mean the person or entity identified below.

If you agree to be bound by these terms, fill in the information requested
below and include the filled-in version with your first pull request, under the
folder [`.github/contributors/`](/.github/contributors/). The name of the file
should be your GitHub username, with the extension `.md`. For example, the user
example_user would create the file `.github/contributors/example_user.md`.

Read this agreement carefully before signing. These terms and conditions
constitute a binding legal agreement.

## Contributor Agreement

1. The term "contribution" or "contributed materials" means any source code,
object code, patch, tool, sample, graphic, specification, manual,
documentation, or any other material posted or submitted by you to the project.

2. With respect to any worldwide copyrights, or copyright applications and
registrations, in your contribution:

    * you hereby assign to us joint ownership, and to the extent that such
    assignment is or becomes invalid, ineffective or unenforceable, you hereby
    grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
    royalty-free, unrestricted license to exercise all rights under those
    copyrights. This includes, at our option, the right to sublicense these same
    rights to third parties through multiple levels of sublicensees or other
    licensing arrangements;

    * you agree that each of us can do all things in relation to your
    contribution as if each of us were the sole owners, and if one of us makes
    a derivative work of your contribution, the one who makes the derivative
    work (or has it made) will be the sole owner of that derivative work;

    * you agree that you will not assert any moral rights in your contribution
    against us, our licensees or transferees;

    * you agree that we may register a copyright in your contribution and
    exercise all ownership rights associated with it; and

    * you agree that neither of us has any duty to consult with, obtain the
    consent of, pay or render an accounting to the other for any use or
    distribution of your contribution.

3. With respect to any patents you own, or that you can license without payment
to any third party, you hereby grant to us a perpetual, irrevocable,
non-exclusive, worldwide, no-charge, royalty-free license to:

    * make, have made, use, sell, offer to sell, import, and otherwise transfer
    your contribution in whole or in part, alone or in combination with or
    included in any product, work or materials arising out of the project to
    which your contribution was submitted, and

    * at our option, to sublicense these same rights to third parties through
    multiple levels of sublicensees or other licensing arrangements.

4. Except as set out above, you keep all right, title, and interest in your
contribution. The rights that you grant to us under these terms are effective
on the date you first submitted a contribution to us, even if your submission
took place before the date you sign these terms.

5. You covenant, represent, warrant and agree that:

    * each contribution that you submit is and shall be an original work of
    authorship and you can legally grant the rights set out in this SCA;

    * to the best of your knowledge, each contribution will not violate any
    third party's copyrights, trademarks, patents, or other intellectual
    property rights; and

    * each contribution shall be in compliance with U.S. export control laws and
    other applicable export and import laws. You agree to notify us if you
    become aware of any circumstance which would make any of the foregoing
    representations inaccurate in any respect. We may publicly disclose your
    participation in the project, including the fact that you have signed the SCA.

6. This SCA is governed by the laws of the State of California and applicable
U.S. Federal law. Any choice of law rules will not apply.

7. Please place an “x” on one of the applicable statements below. Please do NOT
mark both statements:

    * [ ] I am signing on behalf of myself as an individual and no other person
    or entity, including my employer, has or will have rights with respect to my
    contributions.

    * [x] I am signing on behalf of my employer or a legal entity and I have the
    actual authority to contractually bind that entity.

## Contributor Details

| Field                          | Entry                            |
|------------------------------- | -------------------------------- |
| Name                           | Rob van Nieuwpoort               |
| Signing on behalf of           | Dafne van Kuppevelt, Janneke van der Zwaan, Willem van Hage |
| Company name (if applicable)   | Netherlands eScience Center      |
| Title or role (if applicable)  | Director of technology           |
| Date                           | 14-12-2016                       |
| GitHub username                | RvanNieuwpoort                   |
| Website (optional)             | https://www.esciencecenter.nl/   |
@@ -6,10 +6,12 @@ This is a list of everyone who has made significant contributions to spaCy, in a
 * Andreas Grivas, [@andreasgrv](https://github.com/andreasgrv)
 * Chris DuBois, [@chrisdubois](https://github.com/chrisdubois)
 * Christoph Schwienheer, [@chssch](https://github.com/chssch)
+* Dafne van Kuppevelt, [@dafnevk](https://github.com/dafnevk)
 * Dmytro Sadovnychyi, [@sadovnychyi](https://github.com/sadovnychyi)
 * Henning Peters, [@henningpeters](https://github.com/henningpeters)
 * Ines Montani, [@ines](https://github.com/ines)
 * J Nicolas Schrading, [@NSchrading](https://github.com/NSchrading)
+* Janneke van der Zwaan, [@jvdzwaan](https://github.com/jvdzwaan)
 * Jordan Suchow, [@suchow](https://github.com/suchow)
 * Kendrick Tan, [@kendricktan](https://github.com/kendricktan)
 * Kyle P. Johnson, [@kylepjohnson](https://github.com/kylepjohnson)

@@ -19,11 +21,13 @@ This is a list of everyone who has made significant contributions to spaCy, in a
 * Maxim Samsonov, [@maxirmx](https://github.com/maxirmx)
 * Oleg Zd, [@olegzd](https://github.com/olegzd)
 * Pokey Rule, [@pokey](https://github.com/pokey)
+* Rob van Nieuwpoort, [@RvanNieuwpoort](https://github.com/RvanNieuwpoort)
 * Sam Bozek, [@sambozek](https://github.com/sambozek)
 * Sasho Savkov [@savkov](https://github.com/savkov)
 * Tiago Rodrigues, [@TiagoMRodrigues](https://github.com/TiagoMRodrigues)
 * Vsevolod Solovyov, [@vsolovyov](https://github.com/vsolovyov)
 * Wah Loon Keng, [@kengz](https://github.com/kengz)
+* Willem van Hage, [@wrvhage](https://github.com/wrvhage)
 * Wolfgang Seeker, [@wbwseeker](https://github.com/wbwseeker)
 * Yanhao Yang, [@YanhaoYang](https://github.com/YanhaoYang)
 * Yubing Dong, [@tomtung](https://github.com/tomtung)
@@ -151,10 +151,10 @@ def _read_senses(loc):
 def setup_vocab(lex_attr_getters, tag_map, src_dir, dst_dir):
     if not dst_dir.exists():
         dst_dir.mkdir()
     print('Reading vocab from ', src_dir)
     vectors_src = src_dir / 'vectors.bz2'
     if vectors_src.exists():
-        write_binary_vectors(vectors_src.as_posix, (dst_dir / 'vec.bin').as_posix())
+        write_binary_vectors(vectors_src.as_posix(), (dst_dir / 'vec.bin').as_posix())
     else:
         print("Warning: Word vectors file not found")
     vocab = Vocab(lex_attr_getters=lex_attr_getters, tag_map=tag_map)
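The one-line fix above swaps a bound-method reference for an actual call. A minimal sketch of the difference, using only the standard library:

import pathlib

src = pathlib.Path('vectors.bz2')

# Without parentheses, the bound method object itself is passed along, not a path string:
print(src.as_posix)    # <bound method PurePath.as_posix of PosixPath('vectors.bz2')>

# Calling it yields the POSIX-style string that write_binary_vectors expects:
print(src.as_posix())  # vectors.bz2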
examples/training/load_ner.py (new file, +22)

@@ -0,0 +1,22 @@
# Load NER
from __future__ import unicode_literals
import spacy
import pathlib
from spacy.pipeline import EntityRecognizer
from spacy.vocab import Vocab


def load_model(model_dir):
    model_dir = pathlib.Path(model_dir)
    nlp = spacy.load('en', parser=False, entity=False, add_vectors=False)
    with (model_dir / 'vocab' / 'strings.json').open('r', encoding='utf8') as file_:
        nlp.vocab.strings.load(file_)
    nlp.vocab.load_lexemes(model_dir / 'vocab' / 'lexemes.bin')
    ner = EntityRecognizer.load(model_dir, nlp.vocab, require=True)
    return (nlp, ner)


(nlp, ner) = load_model('ner')
doc = nlp.make_doc('Who is Shaka Khan?')
nlp.tagger(doc)
ner(doc)
for word in doc:
    print(word.text, word.orth, word.lower, word.tag_, word.ent_type_, word.ent_iob)
@@ -10,6 +10,13 @@ from spacy.tagger import Tagger


 def train_ner(nlp, train_data, entity_types):
+    # Add new words to vocab.
+    for raw_text, _ in train_data:
+        doc = nlp.make_doc(raw_text)
+        for word in doc:
+            _ = nlp.vocab[word.orth]
+
+    # Train NER.
     ner = EntityRecognizer(nlp.vocab, entity_types=entity_types)
     for itn in range(5):
         random.shuffle(train_data)

@@ -20,21 +27,30 @@ def train_ner(nlp, train_data, entity_types):
     ner.model.end_training()
     return ner


+def save_model(ner, model_dir):
+    model_dir = pathlib.Path(model_dir)
+    if not model_dir.exists():
+        model_dir.mkdir()
+    assert model_dir.is_dir()
+
+    with (model_dir / 'config.json').open('w') as file_:
+        json.dump(ner.cfg, file_)
+    ner.model.dump(str(model_dir / 'model'))
+    if not (model_dir / 'vocab').exists():
+        (model_dir / 'vocab').mkdir()
+    ner.vocab.dump(str(model_dir / 'vocab' / 'lexemes.bin'))
+    with (model_dir / 'vocab' / 'strings.json').open('w', encoding='utf8') as file_:
+        ner.vocab.strings.dump(file_)
+
+
 def main(model_dir=None):
-    if model_dir is not None:
-        model_dir = pathlib.Path(model_dir)
-        if not model_dir.exists():
-            model_dir.mkdir()
-        assert model_dir.is_dir()
-
     nlp = spacy.load('en', parser=False, entity=False, add_vectors=False)

     # v1.1.2 onwards
     if nlp.tagger is None:
         print('---- WARNING ----')
         print('Data directory not found')
-        print('please run: `python -m spacy.en.download –force all` for better performance')
+        print('please run: `python -m spacy.en.download --force all` for better performance')
         print('Using feature templates for tagging')
         print('-----------------')
         nlp.tagger = Tagger(nlp.vocab, features=Tagger.feature_templates)

@@ -56,16 +72,17 @@ def main(model_dir=None):
     nlp.tagger(doc)
     ner(doc)
     for word in doc:
-        print(word.text, word.tag_, word.ent_type_, word.ent_iob)
+        print(word.text, word.orth, word.lower, word.tag_, word.ent_type_, word.ent_iob)

     if model_dir is not None:
-        with (model_dir / 'config.json').open('w') as file_:
-            json.dump(ner.cfg, file_)
-        ner.model.dump(str(model_dir / 'model'))
+        save_model(ner, model_dir)


 if __name__ == '__main__':
-    main()
+    main('ner')
     # Who "" 2
     # is "" 2
     # Shaka "" PERSON 3
@@ -69,7 +69,7 @@ def main(output_dir=None):
         print(word.text, word.tag_, word.pos_)
     if output_dir is not None:
         tagger.model.dump(str(output_dir / 'pos' / 'model'))
-        with (output_dir / 'vocab' / 'strings.json').open('wb') as file_:
+        with (output_dir / 'vocab' / 'strings.json').open('w') as file_:
             tagger.vocab.strings.dump(file_)
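The 'wb'-to-'w' change matters because strings.json holds JSON text (load_ner.py above reads it back in text mode). A small illustration of the Python 3 behavior behind the fix; the file name and data here are just examples:

import json

data = {'strings': ['Who', 'is', 'Shaka', 'Khan', '?']}

# Text mode works: json serialization produces str, not bytes.
with open('strings.json', 'w') as file_:
    json.dump(data, file_)

# Opening the same file with 'wb' would make the dump fail on Python 3
# with "TypeError: a bytes-like object is required, not 'str'".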
setup.py (+1)

@@ -28,6 +28,7 @@ PACKAGES = [
     'spacy.fr',
     'spacy.it',
     'spacy.pt',
+    'spacy.nl',
     'spacy.serialize',
     'spacy.syntax',
     'spacy.munge',
@@ -10,6 +10,7 @@ from . import es
 from . import it
 from . import fr
 from . import pt
+from . import nl


 try:

@@ -25,6 +26,7 @@ set_lang_class(pt.Portuguese.lang, pt.Portuguese)
 set_lang_class(fr.French.lang, fr.French)
 set_lang_class(it.Italian.lang, it.Italian)
 set_lang_class(zh.Chinese.lang, zh.Chinese)
+set_lang_class(nl.Dutch.lang, nl.Dutch)


 def load(name, **overrides):
spacy/nl/__init__.py (new file, +26)

@@ -0,0 +1,26 @@
from __future__ import unicode_literals, print_function

from os import path

from ..language import Language
from ..attrs import LANG
from . import language_data


class Dutch(Language):
    lang = 'nl'

    class Defaults(Language.Defaults):
        tokenizer_exceptions = dict(language_data.TOKENIZER_EXCEPTIONS)
        lex_attr_getters = dict(Language.Defaults.lex_attr_getters)
        lex_attr_getters[LANG] = lambda text: 'nl'

        prefixes = tuple(language_data.TOKENIZER_PREFIXES)

        suffixes = tuple(language_data.TOKENIZER_SUFFIXES)

        infixes = tuple(language_data.TOKENIZER_INFIXES)

        tag_map = dict(language_data.TAG_MAP)

        stop_words = set(language_data.STOP_WORDS)
spacy/nl/language_data.py (new file, +285)

@@ -0,0 +1,285 @@
# encoding: utf8
from __future__ import unicode_literals
import re


# Stop words are retrieved from http://www.damienvanholten.com/downloads/dutch-stop-words.txt
STOP_WORDS = set("""
aan
af
al
alles
als
altijd
andere
ben
bij
daar
dan
dat
de
der
deze
die
dit
doch
doen
door
dus
een
eens
en
er
ge
geen
geweest
haar
had
heb
hebben
heeft
hem
het
hier
hij
hoe
hun
iemand
iets
ik
in
is
ja
je
kan
kon
kunnen
maar
me
meer
men
met
mij
mijn
moet
na
naar
niet
niets
nog
nu
of
om
omdat
ons
ook
op
over
reeds
te
tegen
toch
toen
tot
u
uit
uw
van
veel
voor
want
waren
was
wat
we
wel
werd
wezen
wie
wij
wil
worden
zal
ze
zei
zelf
zich
zij
zijn
zo
zonder
zou
""".split())


TOKENIZER_PREFIXES = map(re.escape, r'''
,
"
(
[
{
*
<
>
$
£
„
“
'
``
`
#
US$
C$
A$
a-
‘
....
...
‚
»
_
§
'''.strip().split('\n'))


TOKENIZER_SUFFIXES = r'''
,
\"
\)
\]
\}
\*
\!
\?
%
\$
>
:
;
'
”
“
«
_
''
's
'S
’s
’S
’
‘
°
€
\.\.
\.\.\.
\.\.\.\.
(?<=[a-zäöüßÖÄÜ)\]"'´«‘’%\)²“”])\.
\-\-
´
(?<=[0-9])km²
(?<=[0-9])m²
(?<=[0-9])cm²
(?<=[0-9])mm²
(?<=[0-9])km³
(?<=[0-9])m³
(?<=[0-9])cm³
(?<=[0-9])mm³
(?<=[0-9])ha
(?<=[0-9])km
(?<=[0-9])m
(?<=[0-9])cm
(?<=[0-9])mm
(?<=[0-9])µm
(?<=[0-9])nm
(?<=[0-9])yd
(?<=[0-9])in
(?<=[0-9])ft
(?<=[0-9])kg
(?<=[0-9])g
(?<=[0-9])mg
(?<=[0-9])µg
(?<=[0-9])t
(?<=[0-9])lb
(?<=[0-9])oz
(?<=[0-9])m/s
(?<=[0-9])km/h
(?<=[0-9])mph
(?<=[0-9])°C
(?<=[0-9])°K
(?<=[0-9])°F
(?<=[0-9])hPa
(?<=[0-9])Pa
(?<=[0-9])mbar
(?<=[0-9])mb
(?<=[0-9])T
(?<=[0-9])G
(?<=[0-9])M
(?<=[0-9])K
(?<=[0-9])kb
'''.strip().split('\n')


TOKENIZER_INFIXES = r'''
\.\.\.
(?<=[a-z])\.(?=[A-Z])
(?<=[a-zöäüßA-ZÖÄÜ"]):(?=[a-zöäüßA-ZÖÄÜ])
(?<=[a-zöäüßA-ZÖÄÜ"])>(?=[a-zöäüßA-ZÖÄÜ])
(?<=[a-zöäüßA-ZÖÄÜ"])<(?=[a-zöäüßA-ZÖÄÜ])
(?<=[a-zöäüßA-ZÖÄÜ"])=(?=[a-zöäüßA-ZÖÄÜ])
'''.strip().split('\n')


# TODO: make tokenizer exceptions for Dutch
TOKENIZER_EXCEPTIONS = {}

# TODO: insert TAG_MAP for Dutch
TAG_MAP = {
    "ADV": {
        "pos": "ADV"
    },
    "NOUN": {
        "pos": "NOUN"
    },
    "ADP": {
        "pos": "ADP"
    },
    "PRON": {
        "pos": "PRON"
    },
    "SCONJ": {
        "pos": "SCONJ"
    },
    "PROPN": {
        "pos": "PROPN"
    },
    "DET": {
        "pos": "DET"
    },
    "SYM": {
        "pos": "SYM"
    },
    "INTJ": {
        "pos": "INTJ"
    },
    "PUNCT": {
        "pos": "PUNCT"
    },
    "NUM": {
        "pos": "NUM"
    },
    "AUX": {
        "pos": "AUX"
    },
    "X": {
        "pos": "X"
    },
    "CONJ": {
        "pos": "CONJ"
    },
    "ADJ": {
        "pos": "ADJ"
    },
    "VERB": {
        "pos": "VERB"
    }
}
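One detail worth noting: TOKENIZER_PREFIXES above is built with map(), which returns a one-shot iterator on Python 3, so Dutch.Defaults materializes each sequence with tuple(...). A small sketch of the pitfall; the sample prefixes are illustrative:

import re

prefixes = map(re.escape, ['(', '[', 'US$'])
first_pass = list(prefixes)
second_pass = list(prefixes)  # the iterator is already exhausted
assert first_pass == ['\\(', '\\[', 'US\\$']
assert second_pass == []

# Materializing once, as Dutch.Defaults does with
# tuple(language_data.TOKENIZER_PREFIXES), sidesteps this.
prefixes = tuple(map(re.escape, ['(', '[', 'US$']))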
@@ -426,3 +426,9 @@ cpdef enum symbol_t:
     #IS_QUOTE
     #IS_LEFT_PUNCT
     #IS_RIGHT_PUNCT
+
+    # These symbols are currently missing. However, if we add them currently,
+    # we'll throw off the integer index and the model will have to be retrained.
+    # We therefore wait until the next data version to add them.
+    # acl
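The comment added above hinges on enum values being positional. A plain-Python stand-in (the cpdef enum itself is Cython) showing why inserting a symbol would invalidate models that stored the old integer IDs; the names and positions here are illustrative:

old = ['IS_QUOTE', 'IS_LEFT_PUNCT', 'IS_RIGHT_PUNCT']
new = ['IS_QUOTE', 'acl', 'IS_LEFT_PUNCT', 'IS_RIGHT_PUNCT']  # hypothetical insertion

# A model trained against the old layout stored 2 for IS_RIGHT_PUNCT;
# after the insertion, slot 2 would decode to IS_LEFT_PUNCT instead.
print(old.index('IS_RIGHT_PUNCT'), new.index('IS_RIGHT_PUNCT'))  # 2 3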
@@ -1,6 +1,7 @@
 # encoding: utf8
 from __future__ import unicode_literals
 from ...fr import French
+from ...nl import Dutch


 def test_load_french():
     nlp = French()

@@ -10,3 +11,11 @@ def test_load_french():
     assert doc[2].text == u'vous'
     assert doc[3].text == u'français'
     assert doc[4].text == u'?'
+
+
+def test_load_dutch():
+    nlp = Dutch()
+    doc = nlp(u'Is dit Nederlands?')
+    assert doc[0].text == u'Is'
+    assert doc[1].text == u'dit'
+    assert doc[2].text == u'Nederlands'
+    assert doc[3].text == u'?'
@@ -47,7 +47,7 @@ p
             +cell.u-text-center #[+procon(icon)]

     +row
-        +cell Entity Regonition
+        +cell Entity Recognition
         each icon in [ "pro", "con", "pro", "pro" ]
             +cell.u-text-center #[+procon(icon)]
@@ -217,7 +217,7 @@ p
     ('I like London and Berlin.', [(7, 13, 'LOC'), (18, 24, 'LOC')])
 ]

-nlp = spacy.load(entity=False, parser=False)
+nlp = spacy.load('en', entity=False, parser=False)
 ner = EntityRecognizer(nlp.vocab, entity_types=['PERSON', 'LOC'])

 for itn in range(5):
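The documentation snippet is cut off at the loop header. Based on the train_ner.py example earlier in this commit and the v1-era API, the loop body plausibly continues along these lines; GoldParse and ner.update are assumptions, not shown in this diff:

import random
from spacy.gold import GoldParse

for itn in range(5):
    random.shuffle(train_data)
    for raw_text, entity_offsets in train_data:
        doc = nlp.make_doc(raw_text)
        gold = GoldParse(doc, entities=entity_offsets)  # assumed v1 API
        ner.update(doc, gold)                           # assumed v1 API
ner.model.end_training()  # matches train_ner.py above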