Mirror of https://github.com/explosion/spaCy.git (synced 2025-10-24 20:51:30 +03:00)
The parser training uses a trick for long documents: we use the oracle to cut the document into sections, so that batch items can start in the middle of a document. For instance, given one document of 600 words, we might make 6 states, starting at words 0, 100, 200, 300, 400 and 500.

The problem is that in v3 I screwed this up and didn't stop parsing. So instead of a batch of [100, 100, 100, 100, 100, 100], we'd have a batch of [600, 500, 400, 300, 200, 100]. Oops.

The implementation here could probably be improved; it's annoying to carry this extra variable in the state, but it'll do. This makes v3 parser training 5-10 times faster, depending on document lengths. This problem wasn't in v2.
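The arithmetic behind the fix can be sketched in plain Python. This is not the actual Cython implementation in `transition_system.pyx`; `make_cut_states` and `max_length` are hypothetical names used only to illustrate how the buggy and fixed batch lengths arise from the same cut points:

```python
def make_cut_states(doc_length, max_length=100):
    """Illustrative sketch of the oracle-cut trick (not spaCy's real code).

    Cut a document into several parser states, each starting at a
    different word offset, and compare the buggy v3 behaviour (each
    state parses to the end of the document) with the fixed behaviour
    (each state stops after max_length words).
    """
    # One state every max_length words: 0, 100, 200, ...
    starts = list(range(0, doc_length, max_length))
    # Buggy v3: parsing never stopped, so each state covered the rest
    # of the document, e.g. [600, 500, 400, 300, 200, 100].
    buggy_lengths = [doc_length - start for start in starts]
    # Fixed: each state stops after max_length words, so the batch is
    # e.g. [100, 100, 100, 100, 100, 100].
    fixed_lengths = [min(max_length, doc_length - start) for start in starts]
    return starts, buggy_lengths, fixed_lengths


starts, buggy, fixed = make_cut_states(600)
print(starts)  # [0, 100, 200, 300, 400, 500]
print(buggy)   # [600, 500, 400, 300, 200, 100]
print(fixed)   # [100, 100, 100, 100, 100, 100]
```

Note why the bug was so costly: the buggy batch processes 2100 words per document instead of 600, and transition-based parsing cost scales with the number of words parsed, which is consistent with the reported 5-10x slowdown.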
| File |
|---|
| __init__.py |
| _state.pxd |
| _state.pyx |
| arc_eager.pxd |
| arc_eager.pyx |
| ner.pxd |
| ner.pyx |
| nonproj.pxd |
| nonproj.pyx |
| stateclass.pxd |
| stateclass.pyx |
| transition_system.pxd |
| transition_system.pyx |