commit 83ac227bd3
The new spacy pretrain command implemented BERT/ULMFit/etc.-like transfer learning, using our Language Modelling with Approximate Outputs version of BERT's cloze task. Pretraining is convenient, but in some ways it's a bit of a strange solution: all we're doing is initialising the weights. At the same time, we're putting a lot of work into our optimisation so that it's less sensitive to initial conditions, and more likely to find good optima. I discuss this a bit in the pseudo-rehearsal blog post: https://explosion.ai/blog/pseudo-rehearsal-catastrophic-forgetting

Support semi-supervised learning in spacy train

One obvious way to improve these pretraining methods is to do multi-task learning, instead of just transfer learning. This has been shown to work very well: https://arxiv.org/pdf/1809.08370.pdf . This patch makes it easy to do this sort of thing.

* Add a new argument to spacy train, --raw-text. This takes a jsonl file with unlabelled data that can be used in arbitrary ways to do semi-supervised learning.
* Add a new method to the Language class and to pipeline components, .rehearse(). This is like .update(), but doesn't expect GoldParse objects. It takes a batch of Doc objects, and performs an update on some semi-supervised objective.
* Move the BERT-LMAO objective out from spacy/cli/pretrain.py into spacy/_ml.py, so we can create a new pipeline component, ClozeMultitask. This can be specified as a parser or NER multitask in the spacy train command. Example usage:

python -m spacy train en ./tmp ~/data/en-core-web/train/nw.json ~/data/en-core-web/dev/nw.json --pipeline parser --raw-text ~/data/unlabelled/reddit-100k.jsonl --vectors en_vectors_web_lg --parser-multitasks cloze

Implement rehearsal methods for pipeline components

The new --raw-text argument and nlp.rehearse() method also give us a good place to implement the idea from the pseudo-rehearsal blog post in the parser. This works as follows:

* Add a new nlp.resume_training() method. This allocates copies of pre-trained models in the pipeline, setting things up for the rehearsal updates. It also returns an optimizer object. This also greatly reduces confusion around the nlp.begin_training() method, which randomises the weights, making it not suitable for adding new labels or otherwise fine-tuning a pre-trained model.
* Implement rehearsal updates on the Parser class, making it available for the dependency parser and NER. During rehearsal, the initial model is used to supervise the model being trained. The current model is asked to match the predictions of the initial model on some data. This minimises catastrophic forgetting, by keeping the model's predictions close to the original. See the blog post for details.
* Implement rehearsal updates for the tagger
* Implement rehearsal updates for the text categorizer
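For concreteness, here is a minimal sketch of how the new API is intended to be used at the Python level, assuming the v2.x-era Language methods described above; the model name, the labelled example, and the raw texts are placeholders, not real data:

import random
import spacy

# Load a pre-trained pipeline to fine-tune. resume_training() copies the
# current weights (rather than randomising them, as begin_training() does)
# and returns an optimizer, setting things up for rehearsal updates.
nlp = spacy.load("en_core_web_sm")
optimizer = nlp.resume_training()

# Placeholder labelled data for the supervised objective.
TRAIN_DATA = [
    ("Apple is looking at buying a startup", {"entities": [(0, 5, "ORG")]}),
]
# Placeholder unlabelled data for the semi-supervised rehearsal objective.
RAW_TEXTS = ["This text has no annotations at all.", "Neither does this one."]

losses = {}
for epoch in range(10):
    random.shuffle(TRAIN_DATA)
    for text, annotations in TRAIN_DATA:
        # Ordinary supervised update on the new labels.
        nlp.update([text], [annotations], sgd=optimizer, losses=losses)
    # Rehearsal update: unlike update(), rehearse() takes plain Doc objects
    # with no GoldParse. The copied initial model supervises the current one,
    # keeping its predictions close to the original.
    raw_docs = [nlp.make_doc(text) for text in RAW_TEXTS]
    nlp.rehearse(raw_docs, sgd=optimizer, losses=losses)
print(losses)

Alternating supervised updates with rehearsal updates on unlabelled text is roughly what the --raw-text option automates inside spacy train.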
25 lines · 739 B · Cython
from thinc.typedefs cimport atom_t

from .stateclass cimport StateClass
from .arc_eager cimport TransitionSystem
from ..vocab cimport Vocab
from ..tokens.doc cimport Doc
from ..structs cimport TokenC
from ._state cimport StateC
from ._parser_model cimport WeightsC, ActivationsC, SizesC


cdef class Parser:
    cdef readonly Vocab vocab
    cdef public object model
    # Copy of the initial pre-trained model, allocated by resume_training()
    # and used to supervise the current model during rehearsal updates.
    cdef public object _rehearsal_model
    cdef readonly TransitionSystem moves
    cdef readonly object cfg
    # Extra multi-task components (e.g. ClozeMultitask) trained alongside
    # the parser.
    cdef public object _multitasks

    # Run the transition-based parsing loop over a batch of states,
    # without the GIL.
    cdef void _parseC(self, StateC** states,
            WeightsC weights, SizesC sizes) nogil

    # Apply the best-scoring valid transition to each state in the batch.
    cdef void c_transition_batch(self, StateC** states, const float* scores,
            int nr_class, int batch_size) nogil
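The _rehearsal_model slot above holds the copy of the initial model allocated by resume_training(). As a rough illustration of the objective (a hypothetical sketch, not the component's actual code), the rehearsal gradient can be as simple as the difference between the two models' scores, so that minimising the loss pulls the current model's predictions back towards the original ones:

import numpy

def rehearsal_gradient(curr_scores, init_scores, n_docs):
    """Sketch of a squared-error rehearsal objective: the frozen initial
    model acts as the teacher, and the gradient pushes the current model's
    scores towards the teacher's, mitigating catastrophic forgetting."""
    d_scores = (curr_scores - init_scores) / n_docs
    loss = (d_scores ** 2).sum()
    return loss, d_scores

# Example: identical predictions give zero loss and zero gradient.
curr = numpy.asarray([[0.1, 0.9], [0.8, 0.2]])
init = numpy.asarray([[0.1, 0.9], [0.8, 0.2]])
loss, grad = rehearsal_gradient(curr, init, n_docs=2)
assert loss == 0.0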