mirror of
https://github.com/explosion/spaCy.git
synced 2024-11-11 04:08:09 +03:00
410fb7ee43
* Add more rules to deal with Japanese UD mappings

  Japanese UD rules sometimes give different UD tags to tokens with the same underlying POS tag. The UD spec indicates these cases should be disambiguated using the output of a tool called "comainu", but rules are enough to get the right result. These rules are taken from Ginza at the time of writing; see #3756.

* Add new tags from GSD

  This is a few rare tags that aren't in Unidic but are in the GSD data.

* Add basic Japanese sentencization

  This code is taken from Ginza again.

* Add sentencizer quote handling

  Could probably add more paired characters, but this will do for now. Also includes some tests.

* Replace fugashi with SudachiPy

* Modify tag format to match GSD annotations

  Some of the tests still need to be updated, but I want to get this up for testing training.

* Deal with the case of closing punctuation without an opener

* Refactor resolve_pos()

* Change the tag field separator from "," to "-"

* Add TAG_ORTH_MAP

* Add TAG_BIGRAM_MAP

* Revise rules for 連体詞

* Revise rules for 連体詞

* Improve POS accuracy by about 2%

* Add syntax_iterators.py (not mature yet)

* Improve syntax_iterators.py

* Improve syntax_iterators.py

* Add phrases including nouns and drop NPs consisting of STOP_WORDS

* First take at noun chunks

  This works in many situations but still has issues in others. If the start of a subtree has no noun, then nested phrases can be generated.

  また行きたい、そんな気持ちにさせてくれるお店です。
  [そんな気持ち, また行きたい、そんな気持ちにさせてくれるお店]

  For some reason て gets included sometimes. Not sure why.

  ゲンに連れ添って円盤生物を調査するパートナーとなる。
  [て円盤生物, ...]

  Some phrases that look like they should be split are grouped together; not entirely sure that's wrong. This whole thing becomes one chunk:

  道の駅遠山郷北側からかぐら大橋南詰現道交点までの1.060kmのみ開通済み

* Use the new generic get_words_and_spaces

  The new get_words_and_spaces function is simpler than what was used in Japanese, so it's good to be able to switch to it. However, there was an issue: the new function works on text alone, so POS info could get out of sync. Fixing this required a small change to the way dtokens (tokens with POS and lemma info) were generated. Specifically, multiple extraneous spaces now become a single token, so when generating dtokens, multiple space tokens should not be created in a row.

* Fix noun_chunks, should be working now

* Fix some tests, add naughty-strings tests

  Some of the existing tests changed because the tokenization mode of Sudachi changed to the more fine-grained A mode. Sudachi also has issues with some strings, so this adds a test against the naughty strings.

* Remove empty Sudachi tokens

  Not doing this creates zero-length tokens and causes errors in spaCy's internal processing.

* Add yield_bunsetu back in as a separate piece of code

Co-authored-by: Hiroshi Matsuda <40782025+hiroshi-matsuda-rit@users.noreply.github.com>
Co-authored-by: hiroshi <hiroshi_matsuda@megagon.ai>
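The quote-handling and "closing punct without opening" points above can be sketched as follows. This is a hypothetical, self-contained illustration, not the Ginza/spaCy implementation: the `PAIRS` table, the `split_sentences` name, and the terminator set are assumptions for the sketch. A sentence terminator only ends a sentence when no paired quote/bracket is open, and an unmatched closer is tolerated rather than raising an error.

```python
# Hypothetical sketch: quote-aware Japanese sentence splitting.
PAIRS = {"「": "」", "『": "』", "（": "）"}  # assumed opener → closer table
CLOSERS = set(PAIRS.values())
TERMINATORS = {"。", "！", "？"}


def split_sentences(text):
    sentences = []
    start = 0
    stack = []  # expected closers for currently open pairs
    for i, ch in enumerate(text):
        if ch in PAIRS:
            stack.append(PAIRS[ch])
        elif ch in CLOSERS:
            # Tolerate a closing character with no matching opener.
            if stack and stack[-1] == ch:
                stack.pop()
        elif ch in TERMINATORS and not stack:
            # Only split when we are not inside an open pair.
            sentences.append(text[start : i + 1])
            start = i + 1
    if start < len(text):
        sentences.append(text[start:])
    return sentences
```

For example, `split_sentences("「まだ。」と言った。今日は晴れ。")` keeps the quoted 。 inside the first sentence and splits only at the two top-level terminators.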
56 lines
1.6 KiB
Python
# coding: utf8
from __future__ import unicode_literals

from ...symbols import NOUN, PROPN, PRON, VERB

# XXX this can probably be pruned a bit
labels = [
    "nsubj",
    "nmod",
    "dobj",
    "nsubjpass",
    "pcomp",
    "pobj",
    "obj",
    "obl",
    "dative",
    "appos",
    "attr",
    "ROOT",
]


def noun_chunks(obj):
    """
    Detect base noun phrases from a dependency parse. Works on both Doc and Span.
    """
    doc = obj.doc  # Ensure works on both Doc and Span.
    np_deps = [doc.vocab.strings.add(label) for label in labels]
    conj = doc.vocab.strings.add("conj")
    np_label = doc.vocab.strings.add("NP")
    seen = set()
    for i, word in enumerate(obj):
        if word.pos not in (NOUN, PROPN, PRON):
            continue
        # Prevent nested chunks from being produced
        if word.i in seen:
            continue
        if word.dep in np_deps:
            unseen = [w.i for w in word.subtree if w.i not in seen]
            if not unseen:
                continue

            # This takes care of particles etc.
            seen.update(j.i for j in word.subtree)
            # This avoids duplicating embedded clauses
            seen.update(range(word.i + 1))

            # If the head of this word is a verb, mark the head and its
            # right children as seen. Don't do the whole subtree, as that
            # can hide other phrases.
            if word.head.pos == VERB:
                seen.add(word.head.i)
                seen.update(w.i for w in word.head.rights)
            yield unseen[0], word.i + 1, np_label


SYNTAX_ITERATORS = {"noun_chunks": noun_chunks}
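The seen-set bookkeeping in noun_chunks above can be illustrated with toy tokens. This is a simplified sketch under assumptions: `Tok`, `subtree`, and the string POS/dep labels are hypothetical stand-ins for spaCy's Token API, and chunks are yielded as plain `(start, end)` index pairs rather than labeled spans.

```python
from dataclasses import dataclass


@dataclass
class Tok:
    i: int      # token index
    text: str
    pos: str    # coarse POS, e.g. "NOUN", "VERB"
    dep: str    # dependency label
    head: int   # index of the head token (self for ROOT)


def subtree(tokens, idx):
    # idx plus all transitive dependents, in document order.
    out = {idx}
    changed = True
    while changed:
        changed = False
        for t in tokens:
            if t.head in out and t.i not in out and t.i != t.head:
                out.add(t.i)
                changed = True
    return sorted(out)


NP_DEPS = {"nsubj", "nmod", "obj", "obl", "ROOT"}  # pruned for the sketch


def noun_chunks(tokens):
    seen = set()
    for t in tokens:
        if t.pos not in ("NOUN", "PROPN", "PRON"):
            continue
        if t.i in seen:  # prevent nested chunks
            continue
        if t.dep in NP_DEPS:
            unseen = [i for i in subtree(tokens, t.i) if i not in seen]
            if not unseen:
                continue
            seen.update(subtree(tokens, t.i))   # covers trailing particles
            seen.update(range(t.i + 1))         # avoid duplicating clauses
            head = tokens[t.head]
            if head.pos == "VERB":
                seen.add(head.i)
                # mark the head's right children, but not its whole subtree
                seen.update(x.i for x in tokens if x.head == head.i and x.i > head.i)
            yield (unseen[0], t.i + 1)
```

On a toy parse of 犬が骨を食べる ("the dog eats a bone"), with 犬 and 骨 attached to the verb as nsubj and obj, this yields one chunk per noun while the particles が/を are swallowed into `seen` via the subtree update.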