From 3c2ce41dd8728dc8ebcfb891d5e10769f0b18127 Mon Sep 17 00:00:00 2001 From: Ayush Chaurasia Date: Thu, 1 Apr 2021 23:06:23 +0530 Subject: [PATCH] W&B integration: Optional support for dataset and model checkpoint logging and versioning (#7429) * Add optional artifacts logging * Update docs * Update spacy/training/loggers.py Co-authored-by: Sofie Van Landeghem * Update spacy/training/loggers.py Co-authored-by: Sofie Van Landeghem * Update spacy/training/loggers.py Co-authored-by: Sofie Van Landeghem * Bump WandbLogger Version * Add documentation of v1 to legacy docs * bump spacy-legacy to 3.0.2 (to be released) Co-authored-by: Sofie Van Landeghem Co-authored-by: svlandeg --- .github/contributors/AyushExel.md | 106 ++++++++++++++++++++++++++++++ requirements.txt | 2 +- setup.cfg | 2 +- spacy/training/loggers.py | 40 ++++++++++- spacy/training/loop.py | 3 +- website/docs/api/legacy.md | 25 ++++++- website/docs/api/top-level.md | 8 ++- website/docs/usage/projects.md | 2 +- 8 files changed, 178 insertions(+), 10 deletions(-) create mode 100644 .github/contributors/AyushExel.md diff --git a/.github/contributors/AyushExel.md b/.github/contributors/AyushExel.md new file mode 100644 index 000000000..281fd0cd0 --- /dev/null +++ b/.github/contributors/AyushExel.md @@ -0,0 +1,106 @@ +# spaCy contributor agreement + +This spaCy Contributor Agreement (**"SCA"**) is based on the +[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf). +The SCA applies to any contribution that you make to any product or project +managed by us (the **"project"**), and sets out the intellectual property rights +you grant to us in the contributed materials. The term **"us"** shall mean +[ExplosionAI GmbH](https://explosion.ai/legal). The term +**"you"** shall mean the person or entity identified below. 
+ +If you agree to be bound by these terms, fill in the information requested +below and include the filled-in version with your first pull request, under the +folder [`.github/contributors/`](/.github/contributors/). The name of the file +should be your GitHub username, with the extension `.md`. For example, the user +example_user would create the file `.github/contributors/example_user.md`. + +Read this agreement carefully before signing. These terms and conditions +constitute a binding legal agreement. + +## Contributor Agreement + +1. The term "contribution" or "contributed materials" means any source code, +object code, patch, tool, sample, graphic, specification, manual, +documentation, or any other material posted or submitted by you to the project. + +2. With respect to any worldwide copyrights, or copyright applications and +registrations, in your contribution: + + * you hereby assign to us joint ownership, and to the extent that such + assignment is or becomes invalid, ineffective or unenforceable, you hereby + grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge, + royalty-free, unrestricted license to exercise all rights under those + copyrights. 
This includes, at our option, the right to sublicense these same + rights to third parties through multiple levels of sublicensees or other + licensing arrangements; + + * you agree that each of us can do all things in relation to your + contribution as if each of us were the sole owners, and if one of us makes + a derivative work of your contribution, the one who makes the derivative + work (or has it made will be the sole owner of that derivative work; + + * you agree that you will not assert any moral rights in your contribution + against us, our licensees or transferees; + + * you agree that we may register a copyright in your contribution and + exercise all ownership rights associated with it; and + + * you agree that neither of us has any duty to consult with, obtain the + consent of, pay or render an accounting to the other for any use or + distribution of your contribution. + +3. With respect to any patents you own, or that you can license without payment +to any third party, you hereby grant to us a perpetual, irrevocable, +non-exclusive, worldwide, no-charge, royalty-free license to: + + * make, have made, use, sell, offer to sell, import, and otherwise transfer + your contribution in whole or in part, alone or in combination with or + included in any product, work or materials arising out of the project to + which your contribution was submitted, and + + * at our option, to sublicense these same rights to third parties through + multiple levels of sublicensees or other licensing arrangements. + +4. Except as set out above, you keep all right, title, and interest in your +contribution. The rights that you grant to us under these terms are effective +on the date you first submitted a contribution to us, even if your submission +took place before the date you sign these terms. + +5. 
You covenant, represent, warrant and agree that: + + * Each contribution that you submit is and shall be an original work of + authorship and you can legally grant the rights set out in this SCA; + + * to the best of your knowledge, each contribution will not violate any + third party's copyrights, trademarks, patents, or other intellectual + property rights; and + + * each contribution shall be in compliance with U.S. export control laws and + other applicable export and import laws. You agree to notify us if you + become aware of any circumstance which would make any of the foregoing + representations inaccurate in any respect. We may publicly disclose your + participation in the project, including the fact that you have signed the SCA. + +6. This SCA is governed by the laws of the State of California and applicable +U.S. Federal law. Any choice of law rules will not apply. + +7. Please place an “x” on one of the applicable statement below. Please do NOT +mark both statements: + + * [X] I am signing on behalf of myself as an individual and no other person + or entity, including my employer, has or will have rights with respect to my + contributions. + + * [ ] I am signing on behalf of my employer or a legal entity and I have the + actual authority to contractually bind that entity. 
+ +## Contributor Details + +| Field | Entry | +|------------------------------- | -------------------- | +| Name | Ayush Chaurasia | +| Company name (if applicable) | | +| Title or role (if applicable) | | +| Date | 2021-03-12 | +| GitHub username | AyushExel | +| Website (optional) | | diff --git a/requirements.txt b/requirements.txt index e09a5b221..f86efff3f 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,5 +1,5 @@ # Our libraries -spacy-legacy>=3.0.0,<3.1.0 +spacy-legacy>=3.0.2,<3.1.0 cymem>=2.0.2,<2.1.0 preshed>=3.0.2,<3.1.0 thinc>=8.0.2,<8.1.0 diff --git a/setup.cfg b/setup.cfg index e928e90a6..92e758aec 100644 --- a/setup.cfg +++ b/setup.cfg @@ -37,7 +37,7 @@ setup_requires = thinc>=8.0.2,<8.1.0 install_requires = # Our libraries - spacy-legacy>=3.0.0,<3.1.0 + spacy-legacy>=3.0.2,<3.1.0 murmurhash>=0.28.0,<1.1.0 cymem>=2.0.2,<2.1.0 preshed>=3.0.2,<3.1.0 diff --git a/spacy/training/loggers.py b/spacy/training/loggers.py index 8acf2783c..ef6c86044 100644 --- a/spacy/training/loggers.py +++ b/spacy/training/loggers.py @@ -101,8 +101,13 @@ def console_logger(progress_bar: bool = False): return setup_printer -@registry.loggers("spacy.WandbLogger.v1") -def wandb_logger(project_name: str, remove_config_values: List[str] = []): +@registry.loggers("spacy.WandbLogger.v2") +def wandb_logger( + project_name: str, + remove_config_values: List[str] = [], + model_log_interval: Optional[int] = None, + log_dataset_dir: Optional[str] = None, +): try: import wandb from wandb import init, log, join # test that these are available @@ -119,9 +124,23 @@ def wandb_logger(project_name: str, remove_config_values: List[str] = []): for field in remove_config_values: del config_dot[field] config = util.dot_to_dict(config_dot) - wandb.init(project=project_name, config=config, reinit=True) + run = wandb.init(project=project_name, config=config, reinit=True) console_log_step, console_finalize = console(nlp, stdout, stderr) + def log_dir_artifact( + path: str, + name: str, + 
type: str, + metadata: Optional[Dict[str, Any]] = {}, + aliases: Optional[List[str]] = [], + ): + dataset_artifact = wandb.Artifact(name, type=type, metadata=metadata) + dataset_artifact.add_dir(path, name=name) + wandb.log_artifact(dataset_artifact, aliases=aliases) + + if log_dataset_dir: + log_dir_artifact(path=log_dataset_dir, name="dataset", type="dataset") + def log_step(info: Optional[Dict[str, Any]]): console_log_step(info) if info is not None: @@ -133,6 +152,21 @@ def wandb_logger(project_name: str, remove_config_values: List[str] = []): wandb.log({f"loss_{k}": v for k, v in losses.items()}) if isinstance(other_scores, dict): wandb.log(other_scores) + if model_log_interval and info.get("output_path"): + if info["step"] % model_log_interval == 0 and info["step"] != 0: + log_dir_artifact( + path=info["output_path"], + name="pipeline_" + run.id, + type="checkpoint", + metadata=info, + aliases=[ + f"epoch {info['epoch']} step {info['step']}", + "latest", + "best" + if info["score"] == max(info["checkpoints"])[0] + else "", + ], + ) def finalize() -> None: console_finalize() diff --git a/spacy/training/loop.py b/spacy/training/loop.py index 55919014b..a1242aea6 100644 --- a/spacy/training/loop.py +++ b/spacy/training/loop.py @@ -96,12 +96,13 @@ def train( log_step, finalize_logger = train_logger(nlp, stdout, stderr) try: for batch, info, is_best_checkpoint in training_step_iterator: - log_step(info if is_best_checkpoint is not None else None) if is_best_checkpoint is not None: with nlp.select_pipes(disable=frozen_components): update_meta(T, nlp, info) if output_path is not None: save_checkpoint(is_best_checkpoint) + info["output_path"] = str(output_path / DIR_MODEL_LAST) + log_step(info if is_best_checkpoint is not None else None) except Exception as e: if output_path is not None: stdout.write( diff --git a/website/docs/api/legacy.md b/website/docs/api/legacy.md index 4b5e8df3a..3e5c7f75f 100644 --- a/website/docs/api/legacy.md +++ b/website/docs/api/legacy.md 
@@ -140,4 +140,27 @@ network has an internal CNN Tok2Vec layer and uses attention.
 | `ngram_size` | Determines the maximum length of the n-grams in the BOW model. For instance, `ngram_size=3` would give unigram, bigram and trigram features. ~~int~~ |
 | `dropout` | The dropout rate. ~~float~~ |
 | `nO` | Output dimension, determined by the number of different labels. If not set, the [`TextCategorizer`](/api/textcategorizer) component will set it when `initialize` is called. ~~Optional[int]~~ |
-| **CREATES** | The model using the architecture. ~~Model[List[Doc], Floats2d]~~ |
\ No newline at end of file
+| **CREATES** | The model using the architecture. ~~Model[List[Doc], Floats2d]~~ |
+
+
+## Loggers {#loggers}
+
+These functions are available from `@spacy.registry.loggers`.
+
+### spacy.WandbLogger.v1 {#WandbLogger_v1}
+
+The first version of the [`WandbLogger`](/api/top-level#WandbLogger) did not yet
+support the `log_dataset_dir` and `model_log_interval` arguments.
+
+> #### Example config
+>
+> ```ini
+> [training.logger]
+> @loggers = "spacy.WandbLogger.v1"
+> project_name = "monitor_spacy_training"
+> remove_config_values = ["paths.train", "paths.dev", "corpora.train.path", "corpora.dev.path"]
+> ```
+| Name                   | Description                                                                                                                           |
+| ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------- |
+| `project_name`         | The name of the project in the Weights & Biases interface. The project will be created automatically if it doesn't exist yet. ~~str~~ |
+| `remove_config_values` | A list of values to exclude from the config before it is uploaded to W&B (default: empty). ~~List[str]~~ |
diff --git a/website/docs/api/top-level.md b/website/docs/api/top-level.md
index eef8958cf..38bc40b11 100644
--- a/website/docs/api/top-level.md
+++ b/website/docs/api/top-level.md
@@ -461,7 +461,7 @@ start decreasing across epochs.
-#### spacy.WandbLogger.v1 {#WandbLogger tag="registered function"}
+#### spacy.WandbLogger.v2 {#WandbLogger tag="registered function"}
 > #### Installation
 >
@@ -493,15 +493,19 @@ remain in the config file stored on your local system.
 >
 > ```ini
 > [training.logger]
-> @loggers = "spacy.WandbLogger.v1"
+> @loggers = "spacy.WandbLogger.v2"
 > project_name = "monitor_spacy_training"
 > remove_config_values = ["paths.train", "paths.dev", "corpora.train.path", "corpora.dev.path"]
+> log_dataset_dir = "corpus"
+> model_log_interval = 1000
 > ```
 | Name                   | Description                                                                                                                           |
 | ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------- |
 | `project_name`         | The name of the project in the Weights & Biases interface. The project will be created automatically if it doesn't exist yet. ~~str~~ |
 | `remove_config_values` | A list of values to exclude from the config before it is uploaded to W&B (default: empty). ~~List[str]~~ |
+| `model_log_interval`   | Steps to wait between logging model checkpoints to the W&B dashboard (default: None). ~~Optional[int]~~ |
+| `log_dataset_dir`      | Directory containing the dataset to be logged and versioned as a W&B artifact (default: None). ~~Optional[str]~~ |
diff --git a/website/docs/usage/projects.md b/website/docs/usage/projects.md
index 97b5b9f28..fc191824a 100644
--- a/website/docs/usage/projects.md
+++ b/website/docs/usage/projects.md
@@ -995,7 +995,7 @@ your results.
 >
 > ```ini
 > [training.logger]
-> @loggers = "spacy.WandbLogger.v1"
+> @loggers = "spacy.WandbLogger.v2"
 > project_name = "monitor_spacy_training"
 > remove_config_values = ["paths.train", "paths.dev", "corpora.train.path", "corpora.dev.path"]
 > ```
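
---

A note on the checkpoint-logging condition the patch adds to `log_step`: a checkpoint is only uploaded when `model_log_interval` is set, the training loop has recorded an `output_path`, and the current step is a non-zero multiple of the interval. The sketch below isolates that gating logic in plain Python; the helper name `should_log_checkpoint` is hypothetical (it does not exist in spaCy), and it only mirrors the condition in the diff above, without any `wandb` calls.

```python
from typing import Optional


def should_log_checkpoint(
    step: int, interval: Optional[int], output_path: Optional[str]
) -> bool:
    """Decide whether a model checkpoint would be uploaded as a W&B artifact.

    Mirrors the gating in the patched log_step: an interval must be
    configured, the training loop must have written an output path, and
    the current step must be a non-zero multiple of the interval (step 0
    is skipped so the untrained pipeline is never uploaded).
    """
    if not interval or not output_path:
        return False
    return step % interval == 0 and step != 0
```

With `model_log_interval = 1000` this logs at steps 1000, 2000, ... but never at step 0, and never when the logger is configured without an interval or before the first checkpoint directory exists.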