The autoblack job is an occasional cleanup job. If it runs on forks and
those PRs are accepted, the formatting commits it adds make the git
history messy, which doesn't help anyone.
Making the job not run on forks is a little non-obvious; the approach
here is based on this thread:
https://github.com/prisma/prisma/issues/3539
* Avoid implicit msg variable
* Rename local msg
* Add CI tests for debug data and train
* Adjust debug data CLI test
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Add the right return type for Language.pipe and an overload for the as_tuples version
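A minimal sketch of the overload pattern, with simplified signatures and
a stub `Doc` (the real `Language.pipe` takes more parameters):

```python
from typing import Any, Iterable, Iterator, Literal, Tuple, overload


class Doc:  # stand-in for spacy.tokens.Doc
    ...


class Language:
    @overload
    def pipe(
        self, texts: Iterable[Tuple[str, Any]], *, as_tuples: Literal[True]
    ) -> Iterator[Tuple[Doc, Any]]: ...

    @overload
    def pipe(
        self, texts: Iterable[str], *, as_tuples: Literal[False] = ...
    ) -> Iterator[Doc]: ...

    def pipe(self, texts, *, as_tuples=False):
        # With as_tuples=True each input item is (text, context), and the
        # context is passed through alongside the resulting Doc.
        if as_tuples:
            for text, context in texts:
                yield Doc(), context
        else:
            for text in texts:
                yield Doc()
```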
* Reformat, tidy up
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
* Fix vectors check for sourced components
Since vectors are not loaded when components are sourced, store a hash
of the vectors for each sourced component and compare it against the
vectors once they are loaded from the `[initialize]` block.
* Pop temporary info
* Remove stored hash in remove_pipe
* Add default for pop
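A minimal sketch of the idea, with illustrative names and helpers (not
spaCy's actual attributes):

```python
import hashlib
from typing import Dict


def vectors_hash(vectors_bytes: bytes) -> str:
    # Fingerprint a serialized vectors table.
    return hashlib.md5(vectors_bytes).hexdigest()


# Temporary store of hashes, keyed by component name.
sourced_vectors_hashes: Dict[str, str] = {}


def on_source(name: str, source_vectors_bytes: bytes) -> None:
    # The source pipeline's vectors aren't loaded into the new pipeline,
    # so record a hash to check against later.
    sourced_vectors_hashes[name] = vectors_hash(source_vectors_bytes)


def check_after_initialize(name: str, loaded_vectors_bytes: bytes) -> None:
    # Pop the temporary info, with a default so components that were
    # removed in the meantime (see remove_pipe) don't raise a KeyError.
    expected = sourced_vectors_hashes.pop(name, None)
    if expected is not None and expected != vectors_hash(loaded_vectors_bytes):
        raise ValueError(f"Vectors don't match for sourced component '{name}'")
```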
* Add additional convert/debug/assemble CLI tests
* Raise an error for textcat with <2 labels
Raise an error if initializing a `textcat` component without at least
two labels.
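A minimal sketch of the validation (illustrative function, not the
actual component code):

```python
from typing import Sequence


def validate_textcat_labels(labels: Sequence[str]) -> None:
    # textcat labels are mutually exclusive, so fewer than two labels
    # makes the classification meaningless.
    if len(labels) < 2:
        raise ValueError(
            f"textcat requires at least two labels, got: {list(labels)}"
        )
```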
* Add similar note to docs
* Update positive_label description in API docs
Not necessary for convergence, but in coref-hoi this seems to add a few
F1 points.
Note that there are two width-related features in coref-hoi. This one is
a "prior" that is added to the mention scores. The other width-related
feature is appended to the span embedding representation for other
layers to reference.
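A rough sketch of how the prior works, with made-up shapes and a random
table standing in for a learned width embedding:

```python
import numpy as np

rng = np.random.default_rng(0)
n_spans, max_width = 8, 30

# Stand-in for a learned per-width score (in the real model this is a
# small embedding projected down to a scalar).
width_prior = rng.normal(size=(max_width,))

span_widths = rng.integers(1, max_width, size=n_spans)
mention_scores = rng.normal(size=(n_spans,))

# The prior is added directly to each span's mention score by width.
mention_scores = mention_scores + width_prior[span_widths]
```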
This rewrites the loss to not use the Thinc crossentropy code at all.
The main difference here is that the non-gold predictions are masked out
of the target (i.e. marginalized over), but they still receive a
gradient.
I'm still not sure this is exactly right, but models seem to train
reliably now.
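As a sketch, the marginal log-likelihood loss used in coref-hoi-style
models can be written like this (illustrative NumPy, not the actual
Thinc implementation):

```python
import numpy as np


def antecedent_loss(scores: np.ndarray, gold_mask: np.ndarray):
    """scores: (n_mentions, n_candidates) raw antecedent scores.
    gold_mask: same shape, 1.0 where the candidate is a gold antecedent."""
    # Softmax over candidate antecedents for each mention.
    exp = np.exp(scores - scores.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)
    # Marginalize: sum the probability over all gold antecedents.
    marginal = (probs * gold_mask).sum(axis=1)
    loss = -np.log(marginal + 1e-8).sum()
    # Gradient w.r.t. the scores: the predicted distribution minus the
    # gold distribution renormalized over the gold antecedents. Non-gold
    # positions get probs - 0, so they still receive gradient even
    # though they were masked out of the target.
    gold_probs = (probs * gold_mask) / (marginal[:, None] + 1e-8)
    d_scores = probs - gold_probs
    return loss, d_scores
```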
The calculation of this in the coref-hoi code is hard to follow. Based
on comments and variable names it sounds like it's using the doc length,
but it might actually be the number of mentions. The number of mentions
should be much larger and seems more correct, but this might be worth
revisiting.
I think this was technically incorrect but harmless. The reason the code
here differs from the reference in coref-hoi is that the indices there
get +1 at the end of processing, while the code here handles the indices
directly.
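For illustration (made-up values), the two conventions pick out the same
span:

```python
tokens = ["The", "big", "dog", "barked"]

# coref-hoi style: inclusive end index, with the +1 applied at the end
# of processing.
start, end_inclusive = 0, 2
assert tokens[start : end_inclusive + 1] == ["The", "big", "dog"]

# The code here: exclusive end index, handled directly with no final +1.
start, end = 0, 3
assert tokens[start:end] == ["The", "big", "dog"]
```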