* Add error to `get_vectors_loss` for unsupported loss functions in `pretrain`
* Add missing "--loss-func" argument to pretrain docs. Update pretrain plac annotations to match docs.
* Add missing quotation marks
* Replace "learning rate" with its parameter name.
I spent a while searching for the name of the learning rate parameter. With `beta1` and `beta2` it's easy because they are marked as code, but the learning rate wasn't. Writing the actual parameter name would be helpful.
* Signing SCA
* Update tokenizer.md for construction example
Self-contained example: the docs should say what `nlp` is so that the example works as-is.
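A self-contained version of the kind of construction example the item above asks for, assuming a blank English pipeline is enough to provide the vocab:

```python
from spacy.tokenizer import Tokenizer
from spacy.lang.en import English

nlp = English()
# Construct a blank Tokenizer with just the English vocab
tokenizer = Tokenizer(nlp.vocab)
doc = tokenizer("This is a sentence.")
print([token.text for token in doc])
```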
* Update CONTRIBUTOR_AGREEMENT.md
* Restore contributor agreement
* Adjust construction examples
* Add check for empty input file to CLI pretrain
* Raise an error if a JSONL entry is not a dict or contains neither a `tokens` nor a `text` key
* Skip empty values under valid pretrain keys and log the number of skipped entries as a warning
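A minimal sketch of the validation and skipping behaviour described in the items above (and the later "skip empty values" item). The helper name `make_docs_sketch` and its exact checks are illustrative, not the actual `spacy.cli.pretrain.make_docs`:

```python
from spacy.tokens import Doc

def make_docs_sketch(nlp, records):
    # Hypothetical validation logic: every JSONL record must be a dict
    # carrying a non-empty "tokens" or "text" value; empty values are
    # skipped and counted so the caller can log a warning.
    docs = []
    skipped = 0
    for record in records:
        if not isinstance(record, dict):
            raise TypeError("Expected each JSONL line to be a dict, got %r" % type(record))
        if "tokens" in record:
            if not record["tokens"]:
                skipped += 1
                continue
            docs.append(Doc(nlp.vocab, words=record["tokens"]))
        elif "text" in record:
            if not record["text"]:
                skipped += 1
                continue
            docs.append(nlp.make_doc(record["text"]))
        else:
            raise ValueError('Each JSONL line needs a "tokens" or "text" key')
    return docs, skipped
```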
* Add tests for the CLI pretrain core function `make_docs`.
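A hedged example of what such tests could look like, written against the `make_docs_sketch` helper above rather than the real `make_docs`:

```python
import pytest
from spacy.lang.en import English

def test_make_docs_skips_empty_values():
    nlp = English()
    docs, skipped = make_docs_sketch(nlp, [{"text": ""}, {"text": "Hello world"}])
    assert len(docs) == 1
    assert skipped == 1

def test_make_docs_rejects_unknown_keys():
    nlp = English()
    with pytest.raises(ValueError):
        make_docs_sketch(nlp, [{"label": "neither tokens nor text"}])
```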
* Add a short hint for the `tokens` key to the CLI pretrain docs
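For illustration, the pretrain input file can mix raw text and pre-tokenized records; the file path and example sentences below are made up:

```python
import srsly

records = [
    # Raw text: spaCy will tokenize it itself
    {"text": "Can I ask where you work now and what you do?"},
    # Pre-tokenized input via the "tokens" key, bypassing the tokenizer
    {"tokens": ["Can", "I", "ask", "where", "you", "work", "now", "?"]},
]
srsly.write_jsonl("/tmp/pretrain_texts.jsonl", records)
```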
* Add success message to CLI pretrain
* Update model loading to fix the tests
* Skip empty values and do not create docs from them
* Request to add Holmes to spaCy Universe
Dear spaCy team, I would be grateful if you would consider my Python library Holmes for inclusion in the spaCy Universe. Holmes transforms the syntactic structures delivered by spaCy into semantic structures that, together with various other techniques including ontological matching and word embeddings, serve as the basis for information extraction. Holmes supports several use cases including chatbot, structured search, topic matching and supervised document classification. I had the basic idea for Holmes around 15 years ago and now spaCy has made it possible to build an implementation that is stable and fast enough to actually be of use - thank you! At present Holmes supports English and German (I am based in Munich) but could easily be extended to support any other language with a spaCy model.
* Added