This follows the pattern used in the Biaffine Parser, which uses an init
function to get the size only after the tok2vec is available.
This works at first, but serialization fails with an error.
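The deferred-sizing pattern can be shown with a minimal pure-Python sketch (the class and method names here are hypothetical illustrations, not the actual Biaffine Parser or thinc code): the layer's input width is left unset at construction and only filled in by an init step once a sample of tok2vec output is available.

```python
import numpy as np

class LazyLinear:
    """Toy layer whose input size is inferred at init time, not construction.

    This only illustrates the deferred-sizing idea; the real pattern lives
    in the thinc/spaCy init functions.
    """

    def __init__(self, n_out):
        self.n_out = n_out
        self.n_in = None  # unknown until we see the tok2vec width
        self.W = None

    def initialize(self, sample_input):
        # Infer the input width from a sample batch produced by tok2vec.
        self.n_in = sample_input.shape[-1]
        self.W = np.zeros((self.n_in, self.n_out))

    def __call__(self, X):
        if self.W is None:
            raise ValueError("layer used before initialize() was called")
        return X @ self.W

tok2vec_output = np.ones((4, 96))  # pretend the tok2vec width is 96
layer = LazyLinear(n_out=64)
layer.initialize(tok2vec_output)
```

The serialization problem follows from the same structure: until `initialize` has run, the layer has no concrete weights to save or load.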
This changes the tok2vec size in coref to a hardcoded 64 to get tests to
run. This should be reverted and hopefully replaced with proper shape
inference.
* Move coref scoring code to scorer.py
Includes some renames to make names less generic.
* Refactor coval code to remove ternary expressions
* Black formatting
* Add header
* Make scorers into registered scorers
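The shape of the registered-scorer change can be sketched with a miniature hand-rolled registry (hypothetical names; spaCy's real registry is `spacy.registry.scorers`, backed by the catalogue package):

```python
# Hypothetical miniature of a scorer registry, for illustration only.
SCORERS = {}

def register_scorer(name):
    """Decorator that records a scorer factory under a string name."""
    def decorator(fn):
        SCORERS[name] = fn
        return fn
    return decorator

@register_scorer("coref_scorer.v1")
def score_coref(examples):
    # Placeholder scoring logic; a real scorer computes metrics like
    # the CoNLL coref F-score from the examples.
    return {"coref_f": 0.0}
```

Registering the scorer by name means a config file can reference it as a string instead of importing the function directly.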
* Small test fixes
* Skip coref tests when torch not present
Coref can't be loaded without Torch, so nothing works.
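A module-level pytest marker handles this kind of skip; a sketch along these lines (the exact helper spaCy uses to detect torch may differ):

```python
import importlib.util

import pytest

# Detect torch without importing it (the import alone can be expensive).
has_torch = importlib.util.find_spec("torch") is not None

# Applied at module level, this skips every test in the file when
# torch is unavailable, so the coref imports are never attempted.
pytestmark = pytest.mark.skipif(not has_torch, reason="torch not installed")

def test_coref_component():
    import torch  # safe: the whole module is skipped without torch
    assert torch is not None
```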
* Fix remaining type issues
Some of this just involves ignoring types in thorny areas. Two main
issues:
1. Some things have weird types due to indirection and *args/**kwargs
2. xp2torch return type seems to have changed at some point
* Update spacy/scorer.py
Co-authored-by: kadarakos <kadar.akos@gmail.com>
* Small changes from review
* Be specific about the ValueError
* Type fix
Co-authored-by: kadarakos <kadar.akos@gmail.com>
Torch is required for the coref/spanpred models but shouldn't be
required for spaCy in general.
The one tricky part of this is that one function in coref_util relied on
torch, but that file was imported in several places. Since the function
was only used in one place I moved it there.
The span predictor component is initialized but not used at all now.
Plan is to work on it after the word level clustering part is trainable
end-to-end.
This absolutely does not work. The first step here is moving over most of
the code into roughly the files we want it in. After the code has been
pulled over it can be restructured to match spaCy and cleaned up.
In the reference implementations, there's usually a function to build an
FFNN of arbitrary depth, consisting of a stack of Linear >> Relu >>
Dropout. In practice the depth is always 1 in coref-hoi, but in earlier
iterations of the model, which are more similar to our model here (since
we aren't using attention or even necessarily BERT), using a small depth
like 2 was common. This hard-codes a stack of 2.
In brief tests this allows similar performance to the unstacked version
with much smaller embedding sizes.
The depth of the stack could be made into a hyperparameter.
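As a rough NumPy sketch of the idea (not the thinc model code), the stacked Linear >> Relu >> Dropout structure, with depth shown as a parameter rather than hard-coded, looks like:

```python
import numpy as np

rng = np.random.default_rng(0)

def ffnn_block(n_in, n_out):
    """One Linear >> Relu >> Dropout unit; weights are random for illustration."""
    W = rng.normal(0.0, 0.1, (n_in, n_out))
    b = np.zeros(n_out)

    def forward(X, dropout=0.0, train=False):
        Y = np.maximum(X @ W + b, 0.0)  # Linear, then ReLU
        if train and dropout > 0.0:
            mask = rng.random(Y.shape) >= dropout
            Y = Y * mask / (1.0 - dropout)  # inverted dropout
        return Y

    return forward

def stacked_ffnn(n_in, n_hidden, depth=2):
    # The PR hard-codes depth=2; making it a parameter here shows how it
    # could become a hyperparameter.
    layers, size = [], n_in
    for _ in range(depth):
        layers.append(ffnn_block(size, n_hidden))
        size = n_hidden

    def forward(X, dropout=0.0, train=False):
        for layer in layers:
            X = layer(X, dropout=dropout, train=train)
        return X

    return forward

model = stacked_ffnn(n_in=64, n_hidden=128, depth=2)
out = model(np.ones((8, 64)))
```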
This generally means fewer spans are considered, which makes individual
steps in training faster but can make it take longer for training to find
the good spans.
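The trade-off comes from top-k span pruning: only the highest-scoring candidate spans are kept for the expensive pairwise step. A minimal NumPy sketch of the idea (hypothetical function, not the actual coref code):

```python
import numpy as np

def prune_spans(span_scores, limit):
    """Keep the indices of the `limit` highest-scoring candidate spans.

    Fewer kept spans means cheaper pairwise scoring per step, but the
    gold spans may take longer to surface during training.
    """
    limit = min(limit, len(span_scores))
    top = np.argpartition(span_scores, -limit)[-limit:]
    return np.sort(top)  # restore document order

scores = np.array([0.1, 0.9, 0.3, 0.8, 0.2])
kept = prune_spans(scores, limit=2)
```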