Add and document CLI options for the batch size, maximum document length and minimum document length for `spacy pretrain`.
Also improve the CLI output.
Closes #3216
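For illustration, a possible invocation using the new options. The flag names (`--batch-size`, `--max-length`, `--min-length`), the example values and the positional argument order (`texts_loc vectors_model output_dir`) are assumptions based on the description above:

```
python -m spacy pretrain ./raw_text.jsonl en_vectors_web_lg ./pretrain-output \
    --batch-size 3000 --max-length 500 --min-length 5
```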
## Checklist
<!--- Before you submit the PR, go over this checklist and make sure you can
tick off all the boxes. [] -> [x] -->
- [x] I have submitted the spaCy Contributor Agreement.
- [x] I ran the tests, and all new and existing tests passed.
- [x] My changes don't require a change to the documentation, or if they do, I've added all required information.
* merging conllu/conll and conllubio scripts
* tabs to spaces
* removing conllubio2json from converters/__init__.py
* Move not-really-CLI tests to misc
* Add converter test using no-ud data
* Fix test I broke
* removing include_biluo parameter
* fixing read_conllx
* remove include_biluo from convert.py
* Add custom MatchPatternError
* Improve validators and add validation option to Matcher
* Adjust formatting
* Never validate in Matcher within PhraseMatcher
If we do decide to make `validate` default to `True`, the PhraseMatcher's internal Matcher shouldn't ever validate. Here, we create the patterns automatically anyway (and it's currently unclear whether validation has a performance impact at very large scale). A minimal usage sketch follows after this list.
* running UD eval
* printing timing of tokenizer: tokens per second
* timing of default English model
* structured output and parameterization to compare different runs
* additional flag to allow evaluation without parsing info
* printing verbose log of errors for manual inspection
* printing over- and undersegmented cases (and combos)
* add under and oversegmented numbers to Score and structured output
* print high-freq over/under segmented words and word shapes
* printing examples as part of the structured output
* print the results to file
* batch run of different models and treebanks per language
* cleaning up code
* commandline script to process all languages in spaCy & UD
* heuristic to remove blinded corpora and option to run one single best per language
* pathlib instead of os for file paths
* Try to implement cosine loss
This one seems to be correct? Still unsure, but it performs okay. A generic sketch of the cosine loss follows after this list.
* Try to implement the von Mises-Fisher loss
This one's definitely not right yet.
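For the cosine loss bullet above, here's a generic NumPy sketch of the objective (loss and gradient of `1 - cosine(prediction, target)`, computed row-wise). This is just the standard math, not necessarily the exact implementation that landed in `spacy/_ml.py`:

```python
import numpy as np

def cosine_loss(prediction, target, eps=1e-8):
    """Return (loss, d_loss/d_prediction) for 1 - cosine similarity, row-wise."""
    pred_norm = np.linalg.norm(prediction, axis=1, keepdims=True) + eps
    targ_norm = np.linalg.norm(target, axis=1, keepdims=True) + eps
    cosine = (prediction * target).sum(axis=1, keepdims=True) / (pred_norm * targ_norm)
    loss = (1.0 - cosine).sum()
    # d(cosine)/d(prediction), negated because loss = 1 - cosine
    d_pred = -(target / (pred_norm * targ_norm) - cosine * prediction / pred_norm ** 2)
    return loss, d_pred
```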
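And for the Matcher validation bullets above, a minimal sketch of how the new `validate` option and `MatchPatternError` are expected to be used (the exact import location of the error class and the schema-validation dependency are assumptions):

```python
import spacy
from spacy.matcher import Matcher
from spacy.errors import MatchPatternError  # assumed import location

nlp = spacy.blank("en")
# validate=True checks each pattern against the token pattern schema
# (this may require the jsonschema package to be installed)
matcher = Matcher(nlp.vocab, validate=True)

try:
    # "CASEINSENSITIVE" is not a valid token attribute, so this should fail
    matcher.add("BAD_PATTERN", None, [{"CASEINSENSITIVE": "hello"}])
except MatchPatternError as err:
    print("Invalid pattern:", err)
```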
The new spacy pretrain command implements BERT/ULMFiT/etc.-like transfer learning, using our Language Modelling with Approximate Outputs version of BERT's cloze task. Pretraining is convenient, but in some ways it's a bit of a strange solution: all we're doing is initialising the weights. At the same time, we're putting a lot of work into our optimisation so that it's less sensitive to initial conditions and more likely to find good optima. I discuss this a bit in the pseudo-rehearsal blog post: https://explosion.ai/blog/pseudo-rehearsal-catastrophic-forgetting
Support semi-supervised learning in spacy train
One obvious way to improve these pretraining methods is to do multi-task learning, instead of just transfer learning. This has been shown to work very well: https://arxiv.org/pdf/1809.08370.pdf. This patch makes it easy to do this sort of thing.
Add a new argument to spacy train, --raw-text. This takes a jsonl file with unlabelled data that can be used in arbitrary ways to do semi-supervised learning.
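For reference, the file is assumed to be newline-delimited JSON with one object per line and a `"text"` field holding the raw text (hypothetical example):

```
{"text": "This is a raw, unlabelled paragraph from the target domain."}
{"text": "Each line is a separate JSON object."}
```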
Add a new method to the Language class and to pipeline components, .rehearse(). This is like .update(), but doesn't expect GoldParse objects. It takes a batch of Doc objects, and performs an update on some semi-supervised objective.
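A minimal sketch of what a rehearsal update looks like, assuming a pre-trained pipeline and the API described here (the texts are hypothetical placeholders):

```python
import spacy

nlp = spacy.load("en_core_web_sm")      # any pre-trained pipeline
optimizer = nlp.resume_training()       # see below: sets up rehearsal copies
docs = [nlp.make_doc(text) for text in
        ["Some unlabelled text.", "More raw text goes here."]]
losses = {}
nlp.rehearse(docs, sgd=optimizer, losses=losses)  # no GoldParse objects needed
print(losses)
```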
Move the BERT-LMAO objective out from spacy/cli/pretrain.py into spacy/_ml.py, so we can create a new pipeline component, ClozeMultitask. This can be specified as a parser or NER multitask in the spacy train command. Example usage:
python -m spacy train en ./tmp ~/data/en-core-web/train/nw.json ~/data/en-core-web/dev/nw.json --pipeline parser --raw-text ~/data/unlabelled/reddit-100k.jsonl --vectors en_vectors_web_lg --parser-multitasks cloze
Implement rehearsal methods for pipeline components
The new --raw-text argument and nlp.rehearse() method also give us a good place to implement the idea from the pseudo-rehearsal blog post in the parser. This works as follows:
Add a new nlp.resume_training() method. This allocates copies of pre-trained models in the pipeline, setting things up for the rehearsal updates. It also returns an optimizer object. This also greatly reduces confusion around the nlp.begin_training() method, which randomises the weights, making it unsuitable for adding new labels or otherwise fine-tuning a pre-trained model.
Implement rehearsal updates on the Parser class, making it available for the dependency parser and NER. During rehearsal, the initial model is used to supervise the model being trained. The current model is asked to match the predictions of the initial model on some data. This minimises catastrophic forgetting, by keeping the model's predictions close to the original. See the blog post for details.
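Putting the pieces together, a rough sketch of a fine-tuning loop that interleaves supervised updates with rehearsal updates. The labelled example and raw texts are hypothetical placeholders, and the exact keyword arguments are assumptions based on the description above:

```python
import random
import spacy

nlp = spacy.load("en_core_web_sm")       # pre-trained pipeline to fine-tune
optimizer = nlp.resume_training()        # keeps copies of the initial models

# Hypothetical placeholder data
train_data = [("Uber blew through $1 million", {"entities": [(0, 4, "ORG")]})]
raw_texts = ["Some unannotated text.", "More raw text from the target domain."]

for epoch in range(10):
    random.shuffle(train_data)
    losses = {}
    for text, annotations in train_data:
        # Supervised update on the labelled example
        nlp.update([nlp.make_doc(text)], [annotations], sgd=optimizer, losses=losses)
    # Rehearsal update on raw text: the current model is pushed towards the
    # initial model's predictions, which limits catastrophic forgetting
    raw_docs = [nlp.make_doc(text) for text in raw_texts]
    nlp.rehearse(raw_docs, sgd=optimizer, losses=losses)
    print(epoch, losses)
```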
Implement rehearsal updates for tagger
Implement rehearsal updates for text categorizer
* Add todo
* Auto-format
* Update wasabi pin
* Format training results with wasabi
* Remove loading animation from model saving
Currently behaves weirdly
* Inline messages
* Remove unnecessary path2str
Already taken care of by printer
* Inline messages in CLI
* Remove unused function
* Move loading indicator into loading function
* Check for invalid whitespace entities