spaCy/spacy/tests/lang/es/test_exception.py
Ines Montani 75f3234404
💫 Refactor test suite (#2568)
## Description

Related issues: #2379 (should be fixed by separating model tests)

* **total execution time down from > 300 seconds to under 60 seconds** 🎉
* removed all model-specific tests that could only really be run manually anyway – those will now live in a separate test suite in the [`spacy-models`](https://github.com/explosion/spacy-models) repository and are already integrated into our new model training infrastructure
* changed all relative imports to absolute imports to prepare for moving the test suite from `/spacy/tests` to `/tests` (it'll now always test against the installed version; see the sketch after this list)
* merged old regression tests into collections, e.g. `test_issue1001-1500.py` (about 90% of the regression tests are very short anyway)
* tidied up and rewrote existing tests wherever possible
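
For illustration, here is roughly what the import change looks like for a test module living in `spacy/tests/` (a sketch; the specific modules each test file imports vary, and the helper names here are only examples):

```python
# Before the refactor: relative imports, which only work when the tests are
# run from inside the source tree.
# from ..tokens import Doc
# from ..vocab import Vocab

# After the refactor: absolute imports, so the same tests run against the
# installed spacy package once the suite moves to /tests.
from spacy.tokens import Doc
from spacy.vocab import Vocab

doc = Doc(Vocab(), words=["Hola", "mundo"])
assert len(doc) == 2
```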

### Todo

- [ ] move tests to `/tests` and adjust CI commands accordingly
- [x] move model test suite from internal repo to `spacy-models`
- [x] ~~investigate why `pipeline/test_textcat.py` is flaky~~
- [x] review old regression tests (leftover files) and see if they can be merged, simplified or deleted
- [ ] update documentation on how to run tests


### Types of change
enhancement, tests

## Checklist
<!--- Before you submit the PR, go over this checklist and make sure you can
tick off all the boxes. [] -> [x] -->
- [x] I have submitted the spaCy Contributor Agreement.
- [x] I ran the tests, and all new and existing tests passed.
- [ ] My changes don't require a change to the documentation, or if they do, I've added all required information.
2018-07-24 23:38:44 +02:00


# coding: utf-8
from __future__ import unicode_literals

import pytest


@pytest.mark.parametrize('text,lemma', [
    ("aprox.", "aproximadamente"),
    ("esq.", "esquina"),
    ("pág.", "página"),
    ("p.ej.", "por ejemplo")])
def test_es_tokenizer_handles_abbr(es_tokenizer, text, lemma):
    tokens = es_tokenizer(text)
    assert len(tokens) == 1
    assert tokens[0].lemma_ == lemma


def test_es_tokenizer_handles_exc_in_text(es_tokenizer):
    text = "Mariano Rajoy ha corrido aprox. medio kilómetro"
    tokens = es_tokenizer(text)
    assert len(tokens) == 7
    assert tokens[4].text == "aprox."
    assert tokens[4].lemma_ == "aproximadamente"
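
The `es_tokenizer` fixture is not defined in this file; it is provided by the shared `conftest.py` of the test suite. Below is a minimal sketch of such a fixture, assuming the spaCy 2.x API; the actual conftest registers fixtures like this for many languages, and the fixture scope shown here is an assumption.

# conftest.py (sketch) -- provides the es_tokenizer fixture used above.
# The exact scope and layout are assumptions; only the spaCy calls shown
# (get_lang_class, Defaults.create_tokenizer) are real spaCy 2.x API.
import pytest

from spacy.util import get_lang_class


@pytest.fixture(scope="session")
def es_tokenizer():
    # Build a bare Spanish tokenizer (no statistical models needed), so the
    # tokenizer-exception tests stay fast and self-contained.
    return get_lang_class("es").Defaults.create_tokenizer()

With such a fixture in place, this module can be run on its own, e.g. `pytest spacy/tests/lang/es/test_exception.py`.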