Multiscale perturbed gradient descent (MPGD) is an optimization framework where the gradient descent recursion is augmented with chaotic perturbations that evolve via an independent dynamical system.
This repository contains the implementation of MPGD and the code used to produce the results in the paper. Please refer to the paper for an introduction to the optimization tasks and other details.
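To make the idea concrete, here is a minimal sketch of the MPGD recursion described above: a plain gradient descent step whose iterates are perturbed by a signal generated by an independent chaotic dynamical system. The logistic map is used purely for illustration; the actual perturbation dynamics, step sizes, and parameter names below are assumptions, not the paper's implementation.

```python
import numpy as np

def mpgd(grad, x0, lr=0.1, eps=0.01, steps=200, z0=0.7):
    """Sketch of multiscale perturbed gradient descent (MPGD).

    The perturbation z evolves via its own dynamical system (here a
    logistic map, chosen for illustration) independently of the loss,
    and is injected into the gradient descent update at scale eps.
    """
    x = np.asarray(x0, dtype=float)
    z = np.full_like(x, z0)
    for _ in range(steps):
        z = 4.0 * z * (1.0 - z)                        # chaotic logistic-map update
        x = x - lr * grad(x) + eps * (2.0 * z - 1.0)   # perturbed GD step
    return x

# Example: minimize f(x) = ||x||^2, whose gradient is 2x.
x_min = mpgd(lambda x: 2.0 * x, x0=[1.0, -1.5])
```

With a small perturbation scale `eps`, the iterates settle into a neighborhood of the minimizer rather than converging exactly, which is the intended effect of the chaotic forcing.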
- Python 3
- PyTorch 1.9.*
- Hydra 1.* (install via `pip install hydra-core --upgrade`)
- scikit-learn
- NumPy
- SciPy
- pandas
- math (Python standard library)
- statistics (Python standard library)
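The third-party dependencies above can be installed in one step; the exact package pins below are an assumption inferred from the list (`math` and `statistics` ship with Python and need no install):

```shell
pip install "torch==1.9.*" "hydra-core>=1.0" scikit-learn numpy scipy pandas
```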
python minimizing_widening_valley_loss.py
See also the Google Colab version here
python train.py
python ecg_classification_mlps.py
See also the Google Colab version here
Scripts for training runs can be found in `train.sh`. Please check and specify the parameters appropriately before running.