spaCy/spacy/tests/lang/ca/test_prefix_suffix_infix.py
Adriane Boyd fe5f5d6ac6
Update Catalan tokenizer (#9297)
* Update Makefile

For more recent python version

* updated for bsc changes

New tokenization changes

* Update test_text.py

* updating tests and requirements

* changed failing test in test/lang/ca

* Update .gitignore

deleted stashed changes line

* back to python 3.6 and remove transformer requirements

As per request

* Update test_exception.py

Change the test

* Update test_exception.py

Remove test print

* Update requirements.txt

Removed spacy-transformers from requirements

* Update test_exception.py

Added final punctuation to ensure consistency

* Update Makefile

Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>

* Format

* Update test to check all tokens

Co-authored-by: cayorodriguez <crodriguezp@gmail.com>
Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
2021-09-27 14:42:30 +02:00

import pytest


@pytest.mark.parametrize(
    "text,expected_tokens",
    [
        ("d'un", ["d'", "un"]),
        ("s'ha", ["s'", "ha"]),
        ("del", ["d", "el"]),
        ("cantar-te", ["cantar", "-te"]),
        ("-hola", ["-", "hola"]),
    ],
)
def test_contractions(ca_tokenizer, text, expected_tokens):
    """Test that contractions are split into two tokens."""
    tokens = ca_tokenizer(text)
    assert len(tokens) == 2
    assert [t.text for t in tokens] == expected_tokens
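For readers unfamiliar with how such splits are produced, here is a minimal, self-contained sketch of prefix/suffix splitting with regular expressions. The rule patterns and the `split_token` helper are hypothetical illustrations, not spaCy's actual Catalan rules; the `del` → `d` + `el` case would typically be covered by a tokenizer exception rather than an affix rule, so it is omitted here.

```python
import re

# Hypothetical affix rules: elided article/pronoun prefixes (d', s', l', ...)
# and hyphen handling for clitic suffixes (-te) and leading hyphens (-hola).
PREFIX_RE = re.compile(r"^(?:[dslmnt]'|-)", re.IGNORECASE)
SUFFIX_RE = re.compile(r"(-\w+)$")


def split_token(text):
    """Split one word into [prefix, rest] or [rest, suffix] if a rule matches."""
    m = PREFIX_RE.match(text)
    if m:
        return [m.group(0), text[m.end():]]
    m = SUFFIX_RE.search(text)
    if m and m.start() > 0:
        return [text[: m.start()], m.group(1)]
    return [text]
```

Run against the parametrized cases above, this sketch reproduces the affix-based splits (`split_token("d'un")` yields `["d'", "un"]`, `split_token("cantar-te")` yields `["cantar", "-te"]`), while a word with no matching rule passes through unchanged.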