This is closer to the traditional evaluation method. The traditional method
uses an average of three scores; for now this just uses the B-cubed metric
(nothing special about B-cubed, it was just picked as one of the three).
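For reference, B-cubed averages, over mentions, the overlap between the gold cluster and the predicted cluster containing each mention. A minimal self-contained sketch (independent of the coval implementation, so no scipy; assumes clusters are disjoint sets of hashable mention IDs) might look like:

```python
def bcubed(gold_clusters, pred_clusters):
    """B-cubed precision, recall, and F1 for coreference clusters.

    Each argument is an iterable of sets of mention IDs. Assumes each side
    is a partition (no mention appears in two clusters on the same side).
    """
    gold_of = {m: frozenset(c) for c in gold_clusters for m in c}
    pred_of = {m: frozenset(c) for c in pred_clusters for m in c}

    p_sum = r_sum = 0.0
    for m in set(gold_of) | set(pred_of):
        gold = gold_of.get(m, frozenset())
        pred = pred_of.get(m, frozenset())
        overlap = len(gold & pred)
        if pred:
            p_sum += overlap / len(pred)  # per-mention precision
        if gold:
            r_sum += overlap / len(gold)  # per-mention recall

    n_pred = sum(len(c) for c in pred_clusters)
    n_gold = sum(len(c) for c in gold_clusters)
    precision = p_sum / n_pred if n_pred else 0.0
    recall = r_sum / n_gold if n_gold else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1
```

For example, merging two gold clusters `{1, 2}` and `{3}` into one predicted cluster `{1, 2, 3}` gives perfect recall but reduced precision.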
The scoring implementation comes from the coval project. It relies on
scipy, which is one issue, and is rather involved, which is another.
Besides being comparable with traditional evaluations, this scoring is
relatively fast.
This includes the coref code that was being tested separately, modified
to work in spaCy. It hasn't been tested yet and presumably still needs
fixes.
In particular, the evaluation code is currently omitted. It's unclear at
the moment whether we want to use a complex scorer similar to the
official one, or a simpler scorer using more modern evaluation methods.
* initial coref_er pipe
* matcher more flexible
* base coref component without actual model
* initial setup of coref_er.score
* rename to include_label
* preliminary score_clusters method
* apply scoring in coref component
* IO fix
* return None loss for now
* rename to CoreferenceResolver
* some preliminary unit tests
* use registry as callable
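On the "use registry as callable" item: spaCy's function registry (built on the catalogue package) lets you call a registry with a name to get a registering decorator. A minimal pure-Python sketch of the same pattern, with hypothetical names and no spaCy dependency, might look like:

```python
class Registry:
    """Toy function registry mimicking the callable-decorator pattern
    of spaCy's registry. Names here are illustrative only."""

    def __init__(self):
        self._funcs = {}

    def __call__(self, name):
        # Calling the registry with a name returns a decorator that
        # registers the decorated function under that name.
        def register(func):
            self._funcs[name] = func
            return func
        return register

    def get(self, name):
        return self._funcs[name]


scorers = Registry()

@scorers("bcubed.v1")
def make_bcubed_scorer():
    # Placeholder: a real implementation would return a scoring function.
    return "bcubed"
```

Functions registered this way can then be looked up by name at config-resolution time rather than imported directly.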