mirror of https://github.com/explosion/spaCy.git

fix a bug causing mis-alignments (#5560)

parent 009119fa66
commit 456bf47f51
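
Two files change: the author's signed contributor agreement is added, and the Japanese tokenizer in `spacy/lang/ja/__init__.py` is reworked. The generic `get_words_and_spaces` helper is replaced by a dedicated `get_words_lemmas_tags_spaces` that walks the Sudachi tokens and the raw text together, so surface forms, lemmas and Unidic tags stay index-aligned even when the text contains whitespace, with unmatched gaps emitted as 空白 (whitespace) tokens. `resolve_pos` now takes the orth string and tag tuples instead of token objects, matching the new aligned lists.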
.github/contributors/hiroshi-matsuda-rit.md (new file, vendored, 106 lines)
@@ -0,0 +1,106 @@
# spaCy contributor agreement

This spaCy Contributor Agreement (**"SCA"**) is based on the
[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
The SCA applies to any contribution that you make to any product or project
managed by us (the **"project"**), and sets out the intellectual property rights
you grant to us in the contributed materials. The term **"us"** shall mean
[ExplosionAI GmbH](https://explosion.ai/legal). The term
**"you"** shall mean the person or entity identified below.

If you agree to be bound by these terms, fill in the information requested
below and include the filled-in version with your first pull request, under the
folder [`.github/contributors/`](/.github/contributors/). The name of the file
should be your GitHub username, with the extension `.md`. For example, the user
example_user would create the file `.github/contributors/example_user.md`.

Read this agreement carefully before signing. These terms and conditions
constitute a binding legal agreement.

## Contributor Agreement

1. The term "contribution" or "contributed materials" means any source code,
object code, patch, tool, sample, graphic, specification, manual,
documentation, or any other material posted or submitted by you to the project.

2. With respect to any worldwide copyrights, or copyright applications and
registrations, in your contribution:

  * you hereby assign to us joint ownership, and to the extent that such
    assignment is or becomes invalid, ineffective or unenforceable, you hereby
    grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
    royalty-free, unrestricted license to exercise all rights under those
    copyrights. This includes, at our option, the right to sublicense these same
    rights to third parties through multiple levels of sublicensees or other
    licensing arrangements;

  * you agree that each of us can do all things in relation to your
    contribution as if each of us were the sole owners, and if one of us makes
    a derivative work of your contribution, the one who makes the derivative
    work (or has it made) will be the sole owner of that derivative work;

  * you agree that you will not assert any moral rights in your contribution
    against us, our licensees or transferees;

  * you agree that we may register a copyright in your contribution and
    exercise all ownership rights associated with it; and

  * you agree that neither of us has any duty to consult with, obtain the
    consent of, pay or render an accounting to the other for any use or
    distribution of your contribution.

3. With respect to any patents you own, or that you can license without payment
to any third party, you hereby grant to us a perpetual, irrevocable,
non-exclusive, worldwide, no-charge, royalty-free license to:

  * make, have made, use, sell, offer to sell, import, and otherwise transfer
    your contribution in whole or in part, alone or in combination with or
    included in any product, work or materials arising out of the project to
    which your contribution was submitted, and

  * at our option, to sublicense these same rights to third parties through
    multiple levels of sublicensees or other licensing arrangements.

4. Except as set out above, you keep all right, title, and interest in your
contribution. The rights that you grant to us under these terms are effective
on the date you first submitted a contribution to us, even if your submission
took place before the date you sign these terms.

5. You covenant, represent, warrant and agree that:

  * Each contribution that you submit is and shall be an original work of
    authorship and you can legally grant the rights set out in this SCA;

  * to the best of your knowledge, each contribution will not violate any
    third party's copyrights, trademarks, patents, or other intellectual
    property rights; and

  * each contribution shall be in compliance with U.S. export control laws and
    other applicable export and import laws. You agree to notify us if you
    become aware of any circumstance which would make any of the foregoing
    representations inaccurate in any respect. We may publicly disclose your
    participation in the project, including the fact that you have signed the SCA.

6. This SCA is governed by the laws of the State of California and applicable
U.S. Federal law. Any choice of law rules will not apply.

7. Please place an “x” on one of the applicable statements below. Please do NOT
mark both statements:

  * [x] I am signing on behalf of myself as an individual and no other person
    or entity, including my employer, has or will have rights with respect to my
    contributions.

  * [ ] I am signing on behalf of my employer or a legal entity and I have the
    actual authority to contractually bind that entity.

## Contributor Details

| Field                          | Entry               |
| ------------------------------ | ------------------- |
| Name                           | Hiroshi Matsuda     |
| Company name (if applicable)   | Megagon Labs, Tokyo |
| Title or role (if applicable)  | Research Scientist  |
| Date                           | June 6, 2020        |
| GitHub username                | hiroshi-matsuda-rit |
| Website (optional)             |                     |

spacy/lang/ja/__init__.py

@@ -1,7 +1,6 @@
 # encoding: utf8
 from __future__ import unicode_literals, print_function
 
-import re
 from collections import namedtuple
 
 from .stop_words import STOP_WORDS
@@ -14,7 +13,9 @@ from ...compat import copy_reg
 from ...language import Language
 from ...symbols import POS
 from ...tokens import Doc
-from ...util import DummyTokenizer, get_words_and_spaces
+from ...util import DummyTokenizer
+
+from ...errors import Errors
 
 # Hold the attributes we need with convenient names
 DetailedToken = namedtuple("DetailedToken", ["surface", "pos", "lemma"])
@@ -41,7 +42,7 @@ def try_sudachi_import():
     )
 
 
-def resolve_pos(token, next_token):
+def resolve_pos(orth, pos, next_pos):
     """If necessary, add a field to the POS tag for UD mapping.
     Under Universal Dependencies, sometimes the same Unidic POS tag can
     be mapped differently depending on the literal token or its context
@@ -53,22 +54,22 @@ def resolve_pos(token, next_token):
     # token.
 
     # orth based rules
-    if token.pos in TAG_ORTH_MAP:
-        orth_map = TAG_ORTH_MAP[token.pos[0]]
-        if token.surface in orth_map:
-            return orth_map[token.surface], None
+    if pos[0] in TAG_ORTH_MAP:
+        orth_map = TAG_ORTH_MAP[pos[0]]
+        if orth in orth_map:
+            return orth_map[orth], None
 
     # tag bi-gram mapping
-    if next_token:
-        tag_bigram = token.pos[0], next_token.pos[0]
+    if next_pos:
+        tag_bigram = pos[0], next_pos[0]
         if tag_bigram in TAG_BIGRAM_MAP:
             bipos = TAG_BIGRAM_MAP[tag_bigram]
             if bipos[0] is None:
-                return TAG_MAP[token.pos[0]][POS], bipos[1]
+                return TAG_MAP[pos[0]][POS], bipos[1]
             else:
                 return bipos
 
-    return TAG_MAP[token.pos[0]][POS], None
+    return TAG_MAP[pos[0]][POS], None
 
 
 # Use a mapping of paired punctuation to avoid splitting quoted sentences.
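
The hunk above changes `resolve_pos` to work on the orth string and Unidic tag tuples instead of token objects, so it can be driven from the aligned tag list built further down. A rough, self-contained sketch of that contract follows; the maps here are tiny stand-ins, since the real `TAG_MAP`, `TAG_ORTH_MAP` and `TAG_BIGRAM_MAP` live in `spacy/lang/ja` and map to spaCy POS symbols rather than plain strings:

```python
# Illustrative stand-in maps; entries and tag strings are made up for the demo.
TAG_MAP = {"連体詞,*,*,*": {"pos": "DET"}, "名詞,普通名詞,サ変可能,*": {"pos": "NOUN"}}
TAG_ORTH_MAP = {"連体詞,*,*,*": {"その": "DET", "あの": "DET"}}
TAG_BIGRAM_MAP = {("名詞,普通名詞,サ変可能,*", "動詞,非自立可能,*,*"): (None, "VERB")}


def resolve_pos(orth, pos, next_pos):
    # orth-based rules: the literal token decides the mapping
    if pos[0] in TAG_ORTH_MAP:
        orth_map = TAG_ORTH_MAP[pos[0]]
        if orth in orth_map:
            return orth_map[orth], None
    # tag bi-gram rules: the next token's tag disambiguates this one
    if next_pos:
        tag_bigram = pos[0], next_pos[0]
        if tag_bigram in TAG_BIGRAM_MAP:
            bipos = TAG_BIGRAM_MAP[tag_bigram]
            if bipos[0] is None:
                # keep the unigram mapping for this token, pin the next one
                return TAG_MAP[pos[0]]["pos"], bipos[1]
            return bipos
    # fallback: plain unigram mapping
    return TAG_MAP[pos[0]]["pos"], None


print(resolve_pos("その", ("連体詞,*,*,*",), None))
# -> ('DET', None)
print(resolve_pos("勉強", ("名詞,普通名詞,サ変可能,*",), ("動詞,非自立可能,*,*",)))
# -> ('NOUN', 'VERB')  bigram rule: the next token will be forced to VERB
```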
@@ -120,6 +121,48 @@ def get_dtokens(tokenizer, text):
     words = [ww for ww in words if len(ww.surface) > 0]
     return words
 
+
+def get_words_lemmas_tags_spaces(dtokens, text, gap_tag=("空白", "")):
+    words = [x.surface for x in dtokens]
+    if "".join("".join(words).split()) != "".join(text.split()):
+        raise ValueError(Errors.E194.format(text=text, words=words))
+    text_words = []
+    text_lemmas = []
+    text_tags = []
+    text_spaces = []
+    text_pos = 0
+    # normalize words to remove all whitespace tokens
+    norm_words, norm_dtokens = zip(*[(word, dtokens) for word, dtokens in zip(words, dtokens) if not word.isspace()])
+    # align words with text
+    for word, dtoken in zip(norm_words, norm_dtokens):
+        try:
+            word_start = text[text_pos:].index(word)
+        except ValueError:
+            raise ValueError(Errors.E194.format(text=text, words=words))
+        if word_start > 0:
+            w = text[text_pos:text_pos + word_start]
+            text_words.append(w)
+            text_lemmas.append(w)
+            text_tags.append(gap_tag)
+            text_spaces.append(False)
+            text_pos += word_start
+        text_words.append(word)
+        text_lemmas.append(dtoken.lemma)
+        text_tags.append(dtoken.pos)
+        text_spaces.append(False)
+        text_pos += len(word)
+        if text_pos < len(text) and text[text_pos] == " ":
+            text_spaces[-1] = True
+            text_pos += 1
+    if text_pos < len(text):
+        w = text[text_pos:]
+        text_words.append(w)
+        text_lemmas.append(w)
+        text_tags.append(gap_tag)
+        text_spaces.append(False)
+    return text_words, text_lemmas, text_tags, text_spaces
+
+
 class JapaneseTokenizer(DummyTokenizer):
     def __init__(self, cls, nlp=None):
         self.vocab = nlp.vocab if nlp is not None else cls.create_vocab(nlp)
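
This new helper is the heart of the fix: instead of realigning only `words` and `spaces` while the lemmas and tags stayed keyed to the raw Sudachi output, it walks tokens and text together and keeps all four lists index-aligned, emitting a `gap_tag` token for any stretch of text the tokenizer skipped. A quick sketch of how it behaves on text containing ASCII spaces, assuming the function above is in scope; the tags and lemmas in the stand-in tokens are illustrative, not real Sudachi output:

```python
from collections import namedtuple

# same shape as the module's DetailedToken
DetailedToken = namedtuple("DetailedToken", ["surface", "pos", "lemma"])

dtokens = [
    DetailedToken("これ", ("代名詞", "*"), "此れ"),
    DetailedToken("は", ("助詞", "係助詞"), "は"),
    DetailedToken("テスト", ("名詞", "普通名詞"), "テスト"),
]

words, lemmas, tags, spaces = get_words_lemmas_tags_spaces(dtokens, "これ は テスト")
print(words)   # ['これ', 'は', 'テスト']
print(lemmas)  # ['此れ', 'は', 'テスト']  still index-aligned with words
print(spaces)  # [True, True, False]       ASCII spaces become trailing-space flags
```

A full-width space, which the `text[text_pos] == " "` branch does not consume, would instead surface as its own token tagged with `gap_tag` (空白).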
@@ -128,22 +171,23 @@ class JapaneseTokenizer(DummyTokenizer):
     def __call__(self, text):
         dtokens = get_dtokens(self.tokenizer, text)
 
-        words = [x.surface for x in dtokens]
-        words, spaces = get_words_and_spaces(words, text)
-        unidic_tags = [",".join(x.pos) for x in dtokens]
+        words, lemmas, unidic_tags, spaces = get_words_lemmas_tags_spaces(dtokens, text)
         doc = Doc(self.vocab, words=words, spaces=spaces)
         next_pos = None
-        for ii, (token, dtoken) in enumerate(zip(doc, dtokens)):
-            ntoken = dtokens[ii+1] if ii+1 < len(dtokens) else None
-            token.tag_ = dtoken.pos[0]
+        for idx, (token, lemma, unidic_tag) in enumerate(zip(doc, lemmas, unidic_tags)):
+            token.tag_ = unidic_tag[0]
             if next_pos:
                 token.pos = next_pos
                 next_pos = None
             else:
-                token.pos, next_pos = resolve_pos(dtoken, ntoken)
+                token.pos, next_pos = resolve_pos(
+                    token.orth_,
+                    unidic_tag,
+                    unidic_tags[idx + 1] if idx + 1 < len(unidic_tags) else None
+                )
 
             # if there's no lemma info (it's an unk) just use the surface
-            token.lemma_ = dtoken.lemma
+            token.lemma_ = lemma
         doc.user_data["unidic_tags"] = unidic_tags
 
         separate_sentences(doc)
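
End to end, the rewritten `__call__` keeps `token.lemma_` and `token.tag_` attached to the tokens they came from even when the input contains spaces. A quick way to exercise the path, assuming a spaCy checkout containing this commit plus the `sudachipy` and `sudachidict_core` packages (the exact lemmas and tags depend on the installed dictionary):

```python
import spacy

nlp = spacy.blank("ja")
doc = nlp("これ は テスト")
for token in doc:
    # lemma and tag now line up with the token they belong to
    print(token.text, token.lemma_, token.tag_, token.pos_)
```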