Keep track of your machine learning experiments.

ScalarStop is an open-source framework for reproducible machine learning research.

ScalarStop was written and open-sourced at Neocrym, where it is used to train thousands of models every week.

ScalarStop can help you:

organize datasets and models with content-addressable names.

ScalarStop datasets and models are given automatically-generated names based on their hyperparameters, making them easy to version and easy to find.
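As an illustration of the idea only (not ScalarStop's exact naming scheme), a content-addressable name can be derived by hashing a serialized copy of the hyperparameters, so the same hyperparameters always map to the same name:

```python
import hashlib
import json

def content_addressable_name(group_name: str, hyperparams: dict) -> str:
    """Derive a deterministic name from a group name and hyperparameters.

    Serializing the hyperparameters with sorted keys makes the digest
    stable across runs and key orderings, so two datasets or models
    built with identical hyperparameters share one name.
    """
    serialized = json.dumps(hyperparams, sort_keys=True)
    digest = hashlib.sha256(serialized.encode("utf-8")).hexdigest()[:10]
    return f"{group_name}-{digest}"

# The same hyperparameters always produce the same name.
name = content_addressable_name("mnist", {"batch_size": 32, "shuffle": True})
print(name)
```

Because the name is a pure function of the hyperparameters, renaming and lookup never require bookkeeping: recomputing the name from the hyperparameters finds the artifact.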

save/load datasets and models to/from the filesystem.

ScalarStop wraps TensorFlow's existing dataset and model saving logic for safety, correctness, and completeness.

record hyperparameters and metrics to a relational database.

ScalarStop saves dataset and model names, hyperparameters, and training metrics to a SQLite or PostgreSQL database.
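The concept can be sketched with the standard library's sqlite3 module. Note that the table below is a hypothetical simplification; ScalarStop's train store manages its own schema for datasets, models, and training epochs:

```python
import json
import sqlite3

# Hypothetical, simplified schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE epochs (
        model_name TEXT,
        hyperparams TEXT,
        epoch INTEGER,
        loss REAL,
        accuracy REAL
    )"""
)

def record_epoch(model_name, hyperparams, epoch, loss, accuracy):
    """Persist one epoch's metrics alongside the model's hyperparameters."""
    conn.execute(
        "INSERT INTO epochs VALUES (?, ?, ?, ?, ?)",
        (model_name, json.dumps(hyperparams, sort_keys=True), epoch, loss, accuracy),
    )

record_epoch("small_dense_v1", {"hidden_units": 64}, epoch=1, loss=0.92, accuracy=0.71)
record_epoch("small_dense_v1", {"hidden_units": 64}, epoch=2, loss=0.55, accuracy=0.83)

# Querying by model name recovers the full training history.
rows = conn.execute(
    "SELECT epoch, loss FROM epochs WHERE model_name = ?", ("small_dense_v1",)
).fetchall()
print(rows)
```

Keeping hyperparameters next to metrics in one relational store is what makes questions like "which hyperparameters produced the lowest loss?" a single SQL query.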

Getting started

System requirements

ScalarStop is a Python package that requires Python 3.8 or newer.

Currently, ScalarStop only supports tracking tf.data datasets and tf.keras.Model models. As such, ScalarStop requires TensorFlow 2.3.0 or newer.

We encourage anybody who would like to add support for other machine learning frameworks to contribute to ScalarStop. :)


ScalarStop is available on PyPI. You can install it by running:

python3 -m pip install scalarstop

If you would like to make changes to ScalarStop, you can clone the repository from GitHub.

git clone
cd scalarstop
python3 -m pip install .


The best way to learn ScalarStop is to follow the Official ScalarStop Tutorial.

Afterwards, you might want to dig deeper into the ScalarStop documentation. In general, a typical ScalarStop workflow involves four steps:

  1. Organize your datasets with scalarstop.datablob.

  2. Describe your machine learning model architectures using scalarstop.model_template.

  3. Load, train, and save machine learning models with scalarstop.model.

  4. Save hyperparameters and training metrics to a SQLite or PostgreSQL database using scalarstop.train_store.
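Put together, the four steps above look roughly like the following outline. The class and method names follow the modules listed above, but exact signatures may vary between ScalarStop versions, and the model and dataset bodies are elided; consult the tutorial for complete, working code.

```python
import scalarstop as sp
import tensorflow as tf

# 1. Organize a dataset as a DataBlob with named hyperparameters.
class my_datablob_v1(sp.DataBlob):
    @sp.dataclass
    class Hyperparams(sp.HyperparamsType):
        length: int

    def set_training(self) -> tf.data.Dataset: ...
    def set_validation(self) -> tf.data.Dataset: ...
    def set_test(self) -> tf.data.Dataset: ...

# 2. Describe a model architecture as a ModelTemplate.
class my_model_template_v1(sp.ModelTemplate):
    @sp.dataclass
    class Hyperparams(sp.HyperparamsType):
        hidden_units: int

    def new_model(self) -> tf.keras.Model: ...

# 3. and 4. Load or create the model, then train it while logging
# hyperparameters and metrics to the train store database.
datablob = my_datablob_v1(hyperparams=dict(length=1000))
model_template = my_model_template_v1(hyperparams=dict(hidden_units=64))

with sp.TrainStore.from_filesystem(filename="train_store.sqlite3") as train_store:
    model = sp.KerasModel.from_filesystem_or_new(
        datablob=datablob,
        model_template=model_template,
        models_directory="models",
    )
    model.fit(final_epoch=5, train_store=train_store)
```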