Jiaqing-ASU / loss-landscapes

Approximating neural network loss landscapes in low-dimensional parameter subspaces for PyTorch

Visualizations for Loss Landscapes

Install Instructions

This code has been modified from the original loss-landscapes package, so you need to build and install the Python package manually in order to run the tests in this project.

python3 -m pip install --upgrade pip
python3 -m pip install --upgrade build
apt install python3.8-venv
python3 -m build
cd dist
pip install <tar.gz file>
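
To check that the package installed correctly, you can run a quick smoke test like the one below. It follows the public API of the upstream loss-landscapes package (loss_landscapes.random_plane and loss_landscapes.metrics.Loss); since this project modifies that package, the exact names may differ, so treat it as a sketch rather than part of this repository.

import torch
import torch.nn as nn
import loss_landscapes
import loss_landscapes.metrics as metrics

# Tiny stand-in model and a random batch, used only for the smoke test.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.randn(16, 1, 28, 28)
y = torch.randint(0, 10, (16,))

# Evaluate the loss on a random 2D plane through the current parameters.
metric = metrics.Loss(nn.CrossEntropyLoss(), x, y)
landscape = loss_landscapes.random_plane(model, metric, distance=1, steps=10)
print(landscape.shape)  # expected: (10, 10)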

Running Instructions

To compare how different corruption inputs affect different models, we split model training and visualization into two steps. First, we train the model and save it in the corresponding folder.

python3 train_mnist.py --dataset=<training input dataset>

After training, model_initial.pt and model_final.pt are generated in the corresponding subfolder under the mnist_model folder.
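
For example, python3 train_mnist.py --dataset=original trains on the uncorrupted MNIST data. The sketch below illustrates how a training script can produce the two checkpoints: it is not the actual train_mnist.py, and the train_and_save helper, its arguments, and the output directory layout are assumptions made for illustration.

import copy
import os
import torch
import torch.nn as nn

def train_and_save(model, train_loader, out_dir, epochs=5, lr=1e-3):
    # Hypothetical helper: out_dir would be the per-dataset folder under mnist_model.
    os.makedirs(out_dir, exist_ok=True)

    # Save the untrained parameters as model_initial.pt.
    torch.save(copy.deepcopy(model.state_dict()), os.path.join(out_dir, "model_initial.pt"))

    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()

    # Save the trained parameters as model_final.pt.
    torch.save(model.state_dict(), os.path.join(out_dir, "model_final.pt"))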

To generate the visualization, run the following command:

python3 plot_mnist.py --model=<trained model> --dataset=<corruption dataset>

The trained model options (--model) are original, brightness, canny_edges, dotted_line, fog, glass_blur, identity, impulse_noise, motion_blur, rotate, scale, shear, shot_noise, spatter, stripe, translate, zigzag. The default is original.

The corruption dataset options (--dataset) are brightness, canny_edges, dotted_line, fog, glass_blur, identity, impulse_noise, motion_blur, rotate, scale, shear, shot_noise, spatter, stripe, translate, zigzag. The default is brightness.
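
For example, python3 plot_mnist.py --model=original --dataset=brightness visualizes the model trained on the original MNIST data against the brightness corruption. The sketch below shows one way such a plot can be produced with the upstream loss-landscapes API (loss along the line between the two checkpoints); the plot_interpolation helper and file names are illustrative assumptions, and plot_mnist.py may use a different parameter subspace or plotting style.

import torch
import torch.nn as nn
import matplotlib.pyplot as plt
import loss_landscapes
import loss_landscapes.metrics as metrics

def plot_interpolation(model_initial, model_final, x, y, steps=50, out_file="loss_interpolation.png"):
    # Loss measured on a fixed batch (x, y) drawn from the chosen corruption dataset.
    metric = metrics.Loss(nn.CrossEntropyLoss(), x, y)

    # Loss along the straight line from the initial to the final parameters.
    losses = loss_landscapes.linear_interpolation(model_initial, model_final, metric, steps)

    plt.plot(torch.linspace(0, 1, steps), losses)
    plt.xlabel("interpolation coefficient (initial -> final)")
    plt.ylabel("loss")
    plt.savefig(out_file)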

About

Approximating neural network loss landscapes in low-dimensional parameter subspaces for PyTorch

License: MIT License


Languages

Language: Python 100.0%