spaCy/spacy/tests/doc/test_array.py
Ines Montani 75f3234404
💫 Refactor test suite (#2568)
## Description

Related issues: #2379 (should be fixed by separating model tests)

* **total execution time down from > 300 seconds to under 60 seconds** 🎉
* removed all model-specific tests that could only really be run manually anyway – those will now live in a separate test suite in the [`spacy-models`](https://github.com/explosion/spacy-models) repository and are already integrated into our new model training infrastructure
* changed all relative imports to absolute imports to prepare for moving the test suite from `/spacy/tests` to `/tests` – it'll now always test against the installed version (see the sketch after this list)
* merged old regression tests into collections, e.g. `test_issue1001-1500.py` (about 90% of the regression tests are very short anyways)
* tidied up and rewrote existing tests wherever possible
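
To illustrate the import change, a file like `spacy/tests/doc/test_array.py` now loads spaCy internals roughly like this (a sketch – the "before" line is indicative, the exact modules differ per file):

```python
# before: relative import, resolved inside the source tree
# from ...attrs import ORTH, SHAPE, POS, DEP

# after: absolute import, so the suite always runs against the installed spaCy
from spacy.attrs import ORTH, SHAPE, POS, DEP
```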

### Todo

- [ ] move tests to `/tests` and adjust CI commands accordingly
- [x] move model test suite from internal repo to `spacy-models`
- [x] ~~investigate why `pipeline/test_textcat.py` is flakey~~
- [x] review old regression tests (leftover files) and see if they can be merged, simplified or deleted
- [ ] update documentation on how to run tests


### Types of change
enhancement, tests

## Checklist
- [x] I have submitted the spaCy Contributor Agreement.
- [x] I ran the tests, and all new and existing tests passed.
- [ ] My changes don't require a change to the documentation, or if they do, I've added all required information.
2018-07-24 23:38:44 +02:00


# coding: utf-8
from __future__ import unicode_literals

from spacy.attrs import ORTH, SHAPE, POS, DEP

from ..util import get_doc
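
# The tests below exercise Doc.to_array: arrays built from integer attribute
# IDs, from string attribute names, from a single scalar attribute, and from
# POS tags / dependency labels set via the get_doc test helper.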


def test_doc_array_attr_of_token(en_tokenizer, en_vocab):
    text = "An example sentence"
    tokens = en_tokenizer(text)
    example = tokens.vocab["example"]
    assert example.orth != example.shape
    feats_array = tokens.to_array((ORTH, SHAPE))
    # the ORTH and SHAPE columns of the same token should hold different values
    assert feats_array[0][0] != feats_array[0][1]


def test_doc_stringy_array_attr_of_token(en_tokenizer, en_vocab):
    text = "An example sentence"
    tokens = en_tokenizer(text)
    example = tokens.vocab["example"]
    assert example.orth != example.shape
    feats_array = tokens.to_array((ORTH, SHAPE))
    # string attribute names should produce the same array as integer IDs
    feats_array_stringy = tokens.to_array(("ORTH", "SHAPE"))
    assert feats_array_stringy[0][0] == feats_array[0][0]
    assert feats_array_stringy[0][1] == feats_array[0][1]


def test_doc_scalar_attr_of_token(en_tokenizer, en_vocab):
    text = "An example sentence"
    tokens = en_tokenizer(text)
    example = tokens.vocab["example"]
    assert example.orth != example.shape
    # a single attribute yields a one-dimensional array with one value per token
    feats_array = tokens.to_array(ORTH)
    assert feats_array.shape == (3,)
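
# The remaining tests use the get_doc helper from the shared test utilities to
# construct a Doc with hand-written POS tags and dependency labels, so no
# statistical model is needed.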


def test_doc_array_tag(en_tokenizer):
    text = "A nice sentence."
    pos = ['DET', 'ADJ', 'NOUN', 'PUNCT']
    tokens = en_tokenizer(text)
    doc = get_doc(tokens.vocab, words=[t.text for t in tokens], pos=pos)
    assert doc[0].pos != doc[1].pos != doc[2].pos != doc[3].pos
    # the POS column should match the .pos attribute of each token
    feats_array = doc.to_array((ORTH, POS))
    assert feats_array[0][1] == doc[0].pos
    assert feats_array[1][1] == doc[1].pos
    assert feats_array[2][1] == doc[2].pos
    assert feats_array[3][1] == doc[3].pos


def test_doc_array_dep(en_tokenizer):
    text = "A nice sentence."
    deps = ['det', 'amod', 'ROOT', 'punct']
    tokens = en_tokenizer(text)
    doc = get_doc(tokens.vocab, words=[t.text for t in tokens], deps=deps)
    # the DEP column should match the .dep attribute of each token
    feats_array = doc.to_array((ORTH, DEP))
    assert feats_array[0][1] == doc[0].dep
    assert feats_array[1][1] == doc[1].dep
    assert feats_array[2][1] == doc[2].dep
    assert feats_array[3][1] == doc[3].dep