spaCy/spacy/tests/lang/de/test_noun_chunks.py
Vishnu Priya VR 9ce059dd06
Limiting noun_chunks for specific languages (#5396)
* Limiting noun_chunks for specific languages

Contributor Agreement

* Addressing review comments

* Removed unused fixtures and imports

* Add fa_tokenizer in test suite (see the fixture sketch below)

* Use fa_tokenizer in test

* Undo extraneous reformatting

Co-authored-by: adrianeboyd <adrianeboyd@gmail.com>
2020-05-14 12:58:06 +02:00
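
The commit notes above mention registering an fa_tokenizer fixture so the new Persian test can request a tokenizer the same way the test below requests de_tokenizer. The following is a minimal sketch of how such a per-language fixture is typically defined in the shared tests/conftest.py, assuming spaCy v2's get_lang_class(...).Defaults.create_tokenizer() API; the exact fixture in the repository may differ.

import pytest

from spacy.util import get_lang_class


@pytest.fixture(scope="session")
def fa_tokenizer():
    # Build a plain Persian tokenizer (no statistical model required),
    # mirroring the existing per-language tokenizer fixtures.
    return get_lang_class("fa").Defaults.create_tokenizer()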

17 lines
508 B
Python

# coding: utf-8
from __future__ import unicode_literals

import pytest


def test_noun_chunks_is_parsed_de(de_tokenizer):
    """Test that noun_chunks raises a ValueError for 'de' if the Doc is not parsed.
    The Doc is created with the de_tokenizer fixture and is_parsed is forced
    to False to make sure the noun chunks iterator does not run.
    """
    doc = de_tokenizer("Er lag auf seinem")
    doc.is_parsed = False
    with pytest.raises(ValueError):
        list(doc.noun_chunks)
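
For context, the ValueError checked by this test comes from the language-specific noun_chunks syntax iterator, which refuses to run on a Doc without a dependency parse. Below is an illustrative sketch of that guard, not the actual spaCy source: is_parsed, left_edge, right_edge, and pos_ are real spaCy v2 Doc/Token attributes, but the chunking loop and the error message are simplified stand-ins.

def noun_chunks(doclike):
    """Illustrative sketch of a language-specific noun_chunks iterator."""
    # Works for both Doc and Span inputs: .doc returns the underlying Doc.
    doc = doclike.doc
    if not doc.is_parsed:
        # This is the ValueError the test above expects for an unparsed Doc.
        raise ValueError(
            "noun_chunks requires the dependency parse; "
            "load a pipeline that sets dependency annotations."
        )
    # With a parse available, yield one span per nominal head (simplified).
    for word in doclike:
        if word.pos_ in ("NOUN", "PROPN", "PRON"):
            yield doc[word.left_edge.i : word.right_edge.i + 1]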