sungyoon-lee / LossLandscapeMatters

[NeurIPS 2021] Towards Better Understanding of Training Certifiably Robust Models against Adversarial Examples | ⛰️⚠️

Paper: https://papers.nips.cc/paper/2021/file/07c5807d0d927dcd0980f86024e5208b-Paper.pdf


Towards Better Understanding of Training Certifiably Robust Models against Adversarial Examples

This repository is the official implementation of "Towards Better Understanding of Training Certifiably Robust Models against Adversarial Examples".

This repository is modified from the CROWN-IBP repository (https://github.com/huanzhang12/CROWN-IBP).

Figure

(↑) IBP starts with a higher loss but ends with a relatively lower one, showing smaller loss variations (colored area). Our method uses tight bounds like CROWN-IBP (β=1), yet its loss landscape is as favorable as IBP's, and it achieves the best performance among the four methods.
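For context, IBP propagates an interval around each input through the network. A minimal, illustrative sketch of interval bound propagation through one affine + ReLU layer (pure Python, not the repository's implementation, which operates on PyTorch tensors):

```python
# Illustrative IBP sketch: propagate elementwise bounds [l, u]
# through y = ReLU(W x + b). Not the repo's implementation.

def ibp_affine(l, u, W, b):
    """Bound y = W x + b given x in [l, u] (elementwise)."""
    out_l, out_u = [], []
    for row, bias in zip(W, b):
        # Center-radius form: mu = W * midpoint + b, rad = |W| * half-width.
        mu = sum(w * (lo + hi) / 2 for w, lo, hi in zip(row, l, u)) + bias
        rad = sum(abs(w) * (hi - lo) / 2 for w, lo, hi in zip(row, l, u))
        out_l.append(mu - rad)
        out_u.append(mu + rad)
    return out_l, out_u

def ibp_relu(l, u):
    # ReLU is monotone, so bounds pass through directly.
    return [max(x, 0.0) for x in l], [max(x, 0.0) for x in u]

# Example: x within an L-infinity ball of radius 2/255 around x0.
eps = 2 / 255
x0 = [0.5, -0.5]
l = [v - eps for v in x0]
u = [v + eps for v in x0]
W = [[1.0, -1.0], [0.5, 0.5]]
b = [0.0, 0.1]
l1, u1 = ibp_relu(*ibp_affine(l, u, W, b))
```

Certified training minimizes a loss computed from such worst-case output bounds rather than from a single forward pass.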

Requirements

This code requires PyTorch (torch) >= 1.3.0.

To install requirements:

conda env create -f environment.yml
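A quick way to confirm the installed PyTorch meets the >= 1.3.0 requirement is a simple version-tuple comparison (a minimal sketch; the helper names are ours, not the repository's):

```python
# Sanity-check sketch for the torch >= 1.3.0 requirement.

def version_tuple(v):
    # "1.3.0+cu92" -> (1, 3, 0): drop any local suffix, keep major.minor.patch.
    return tuple(int(p) for p in v.split("+")[0].split(".")[:3])

def meets_requirement(installed, required="1.3.0"):
    return version_tuple(installed) >= version_tuple(required)
```

Usage (after activating the conda environment): `import torch; assert meets_requirement(torch.__version__)`.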

Training (and Evaluation)

To train and evaluate the model(s) in the paper, run this command:

python train.py --config config/cifar10.json 
python train.py --config config/cifar10.json "training_params:epsilon=0.007843" "training_params:train_epsilon=0.007843" 
python train.py --config config/mnist.json
python train.py --config config/svhn.json
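The quoted arguments above override values in the JSON config using a `section:key=value` pattern (colons descend into nested dictionaries). A minimal sketch of how such overrides could be parsed; the function names are hypothetical, and the repository's own config loader does the real work:

```python
# Hedged sketch: parse "section:key=value" override strings into a
# nested config dict, mirroring the command-line syntax shown above.

def _convert(value):
    # Try numeric conversion; fall back to the raw string.
    try:
        return float(value)
    except ValueError:
        return value

def apply_overrides(config, overrides):
    for item in overrides:
        path, value = item.split("=", 1)
        keys = path.split(":")
        node = config
        for key in keys[:-1]:
            node = node.setdefault(key, {})
        node[keys[-1]] = _convert(value)
    return config

cfg = {"training_params": {"epsilon": 0.031}}
apply_overrides(cfg, ["training_params:epsilon=0.007843"])
# cfg["training_params"]["epsilon"] is now 0.007843
```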


python eval.py --config config/cifar10.json "eval_params:model_paths=cifar_medium_8px"
python eval.py --config config/cifar10.json "eval_params:model_paths=cifar_medium_2px" "eval_params:epsilon=0.007843"
python eval.py --config config/mnist.json "eval_params:model_paths=mnist_large_train04"
python eval.py --config config/svhn.json "eval_params:model_paths=svhn_large_001"
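The epsilon values in these commands are perturbation radii on pixels scaled to [0, 1]; for example, 0.007843 ≈ 2/255. The model-path names like `cifar_medium_2px` and `cifar_medium_8px` appear to follow this pixel-count convention (an assumption on our part):

```python
# Convert a pixel-level perturbation budget to the [0, 1] scale
# used by the epsilon arguments above (convention assumed, not
# confirmed by the repo).

def eps_from_pixels(pixels, levels=255):
    return pixels / levels

print(round(eps_from_pixels(2), 6))  # → 0.007843
```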


About


License: BSD 2-Clause "Simplified" License


Languages

Language: Python 100.0%