Avoiding reloading weights and datasets in the ML edit-compile-run loop

Issue

In machine learning, the edit-compile-run loop can be painfully slow, because every run of your script has to reload large models and datasets before any of your changes take effect.

In the past, I’ve worked around this by loading only a tiny subset of the data and skipping the pre-initialized weights while setting up the training code, as in the sketch below.
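A minimal sketch of that workaround in PyTorch. The names `MyModel`, `load_full_dataset`, and `checkpoint.pt` are hypothetical placeholders standing in for your own project code, not anything from the answer:

```python
import torch
from torch.utils.data import Subset

from my_project import MyModel, load_full_dataset  # hypothetical names

DEBUG = True

dataset = load_full_dataset()
if DEBUG:
    # Keep only a handful of examples so each debug run starts quickly.
    dataset = Subset(dataset, range(32))

model = MyModel()
if not DEBUG:
    # Load the slow pretrained checkpoint only for real runs; debug runs
    # start from random initialization to skip the load entirely.
    model.load_state_dict(torch.load("checkpoint.pt"))
```

This gets the loop fast again, but at the cost of never debugging against the real data and weights.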

Solution

Use a Jupyter notebook or Google Colab.

You can edit and re-run one cell at a time, while the dataset and trained weights loaded in other cells persist in the kernel’s memory between runs.
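A sketch of how the cells might be split, again with hypothetical project names (`MyModel`, `load_full_dataset`, `checkpoint.pt`) standing in for your own code:

```python
# --- Cell 1: run once; the expensive loads stay in kernel memory ---
import torch
from torch.utils.data import DataLoader

from my_project import MyModel, load_full_dataset  # hypothetical names

dataset = load_full_dataset()
model = MyModel()
model.load_state_dict(torch.load("checkpoint.pt"))

# --- Cell 2: edit and re-run this cell alone; `model` and `dataset`
# defined above are still alive, so nothing is reloaded ---
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
for inputs, targets in DataLoader(dataset, batch_size=16):
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    loss.backward()
    optimizer.step()
```

Only the second cell pays the edit-compile-run cost; the model and dataset are loaded exactly once per kernel session.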

Somehow this didn’t click until just now.

Answered By – Tom Huntington

This answer, collected from Stack Overflow, is licensed under CC BY-SA 2.5, CC BY-SA 3.0, and CC BY-SA 4.0.
