Mirror of https://github.com/explosion/spaCy.git
* Add convert CLI option to merge CoNLL-U subtokens

  Add a `-T` option to the convert CLI that merges CoNLL-U subtokens into one token in the converted data. Each CoNLL-U sentence is read into a `Doc` and the `Retokenizer` is used to merge subtokens with features as follows:

  * `orth` is the merged token orth (should correspond to the raw text and `# text`)
  * `tag` is all subtoken tags concatenated with `_`, e.g. `ADP_DET`
  * `pos` is the POS of the syntactic root of the span (as determined by the `Retokenizer`)
  * `morph` is all morphological features merged
  * `lemma` is all subtoken lemmas concatenated with ` `, e.g. `de o`
  * with `-m`, all morphological features are combined with the tag using the separator `__`, e.g. `ADP_DET__Definite=Def|Gender=Masc|Number=Sing|PronType=Art`
  * `dep` is the dependency relation of the syntactic root of the span (as determined by the `Retokenizer`)

  Concatenated tags are mapped to the UD POS of the syntactic root (e.g. `ADP`), and the morphological features are the combined features. In many cases the original UD subtokens can be reconstructed from the available features, either with a language-specific lookup table, e.g. Portuguese `do / ADP_DET / Definite=Def|Gender=Masc|Number=Sing|PronType=Art` is `de / ADP`, `o / DET / Definite=Def|Gender=Masc|Number=Sing|PronType=Art`, or with lookup rules for forms containing open-class words, like Spanish `hablarlo / VERB_PRON / Case=Acc|Gender=Masc|Number=Sing|Person=3|PrepCase=Npr|PronType=Prs|VerbForm=Inf`. A sketch of the merge scheme appears below.

* Clean up imports
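The snippet below is a minimal sketch of the merge scheme described in the first commit above, not the converter's actual code: it uses spaCy's public `Doc` and `Retokenizer` API, invents the subtoken annotations inline, and only sets `TAG` and `LEMMA` on the merged token.

```python
# Sketch: merging two CoNLL-U subtokens with spaCy's Retokenizer,
# following the feature-combination scheme described above.
import spacy
from spacy.tokens import Doc

nlp = spacy.blank("pt")

# Pretend these are the subtokens of the Portuguese contraction "do" (de + o)
# plus one ordinary token, as read from a CoNLL-U sentence.
words = ["de", "o", "livro"]
doc = Doc(nlp.vocab, words=words)

# Subtoken annotations as they would come from the CoNLL-U columns (assumed here).
sub_tags = ["ADP", "DET"]
sub_lemmas = ["de", "o"]
sub_morphs = ["", "Definite=Def|Gender=Masc|Number=Sing|PronType=Art"]

# Combined features following the commit description:
merged_tag = "_".join(sub_tags)       # "ADP_DET"
merged_lemma = " ".join(sub_lemmas)   # "de o"
merged_morph = "|".join(m for m in sub_morphs if m)

# With -m, the tag and the combined morphology are joined with "__":
tag_with_morph = merged_tag + "__" + merged_morph
print(tag_with_morph)
# ADP_DET__Definite=Def|Gender=Masc|Number=Sing|PronType=Art

with doc.retokenize() as retokenizer:
    # In the converter, POS and DEP of the merged token come from the syntactic
    # root of the span as determined by the Retokenizer; this sketch only sets
    # TAG and LEMMA explicitly.
    retokenizer.merge(doc[0:2], attrs={"TAG": merged_tag, "LEMMA": merged_lemma})

print([(t.text, t.tag_, t.lemma_) for t in doc])
# Note: the merged token text here is simply the span text ("de o"); the real
# converter aligns the merged orth with the raw text / `# text` line ("do").
```

With the converter itself, this behaviour would be triggered by something like `python -m spacy convert train.conllu ./output -T -m` (flag names taken from the description above; the exact invocation may differ).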
| Name |
|---|
| cli |
| data |
| displacy |
| lang |
| matcher |
| ml |
| pipeline |
| syntax |
| tests |
| tokens |
| __init__.pxd |
| __init__.py |
| __main__.py |
| _ml.py |
| about.py |
| analysis.py |
| attrs.pxd |
| attrs.pyx |
| compat.py |
| errors.py |
| glossary.py |
| gold.pxd |
| gold.pyx |
| kb.pxd |
| kb.pyx |
| language.py |
| lemmatizer.py |
| lexeme.pxd |
| lexeme.pyx |
| lookups.py |
| morphology.pxd |
| morphology.pyx |
| parts_of_speech.pxd |
| parts_of_speech.pyx |
| schemas.py |
| scorer.py |
| strings.pxd |
| strings.pyx |
| structs.pxd |
| symbols.pxd |
| symbols.pyx |
| tokenizer.pxd |
| tokenizer.pyx |
| typedefs.pxd |
| typedefs.pyx |
| util.py |
| vectors.pyx |
| vocab.pxd |
| vocab.pyx |