rrmhearts / PAG-ROB

[ICML 2023 - Oral] Do Perceptually Aligned Gradients Imply Robustness? -- Official Code Repository


Do Perceptually Aligned Gradients Imply Robustness?

Roy Ganz, Bahjat Kawar, Michael Elad

[ICML 2023 - Oral] Unofficial Code Repository of Do Perceptually Aligned Gradients Imply Robustness?

Additional

Additional results based on CM from Ryan (rrmhearts). Re-implementing the original class-mean (CM) method yielded 46% clean accuracy and 54% robust accuracy. Pushing the feature vector of a PGD-perturbed input away from its class mean, even past orthogonality to a negative cosine similarity (180 degrees), improved performance to 66% clean accuracy and 59% robust accuracy. Since being orthogonal to the latent class cluster should be sufficient for the property (as in the original paper), a re-run of the PGD variant that only reduces positive cosine similarities to 0 is in progress. The models and code are available in this repository: the CMmPGD code is in ./TRAIN_CIFAR10_CM.py, along with the weights. A rough sketch of the loss idea appears below.
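The exact objective lives in ./TRAIN_CIFAR10_CM.py; the snippet below is only a minimal sketch of the idea, not the repository's actual code. The function name cm_cosine_penalty, the class_means tensor, and the clamp_at_orthogonal flag are all hypothetical.

import torch
import torch.nn.functional as F

def cm_cosine_penalty(features, labels, class_means, clamp_at_orthogonal=True):
    # Hypothetical sketch of a class-mean (CM) cosine penalty.
    # features:    (B, D) latent features of PGD-perturbed inputs
    # labels:      (B,)   class indices
    # class_means: (C, D) per-class mean feature vectors
    means = class_means[labels]                        # (B, D): each sample's class mean
    cos = F.cosine_similarity(features, means, dim=1)  # values in [-1, 1]
    if clamp_at_orthogonal:
        # Only penalize positive alignment: drive cos down to 0 (orthogonal) and stop there.
        cos = cos.clamp(min=0.0)
    # Minimizing this term pushes features toward orthogonality with the class mean,
    # or all the way to cos = -1 (180 degrees) when clamping is disabled.
    return cos.mean()

With clamp_at_orthogonal=False this corresponds to the "push past orthogonal" variant described above; with True it encodes the hypothesis that orthogonality alone is sufficient.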

Installation

First, clone this repository:

git clone https://github.com/royg27/PAG-ROB.git
cd PAG-ROB

Next, to install the requirements in a new conda environment, run:

conda env create -f environment.yml

or, to install the dependencies directly with pip:

pip install numpy torch torchvision pyyaml wandb

Preparing Perceptually Aligned Gradients Data

The Score-Based Gradients (SBG) realization of Perceptually Aligned Gradients for the CIFAR-10 dataset is provided in the following table:

PAG realization         Data        Labels
Score-Based Gradients   Download    Download

The data should be placed in the data folder, forming the following structure:

PAG-ROB
├── configs
│   ├── ......
├── data
│   ├── c10_sbg_data.pt
│   ├── c10_sbg_label.pt
├── models
│   ├── ......
├── TRAIN_CIFAR10.py
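Both .pt files are ordinary PyTorch tensor files, so a quick sanity check after downloading might look like the following (file paths taken from the tree above; the exact tensor shapes depend on how the data was generated):

import torch

# Load the precomputed Score-Based Gradients data and their labels.
data = torch.load("data/c10_sbg_data.pt")
labels = torch.load("data/c10_sbg_label.pt")
print(data.shape, labels.shape)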

Training

python TRAIN_CIFAR10.py --config_path <config>

where <config> specifies the desired training configuration (e.g., configs/cifar10_sbg_rn18.yaml).
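TRAIN_CIFAR10.py defines its own argument handling and config schema; purely as an illustration of the pattern (only the --config_path flag and the example file name come from above, everything else is hypothetical), a config-driven entry point typically starts like this:

import argparse
import yaml

parser = argparse.ArgumentParser()
parser.add_argument("--config_path", type=str, required=True,
                    help="YAML training configuration, e.g. configs/cifar10_sbg_rn18.yaml")
args = parser.parse_args()

# Parse the YAML into a dict; the valid keys are defined by the files under configs/.
with open(args.config_path) as f:
    cfg = yaml.safe_load(f)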

Trained Checkpoints

We provide pretrained checkpoints on the CIFAR-10 dataset in the table below:

RN18 OI     RN18 CM     RN18 NN     RN18 SBG    ViT SBG
Download    Download    Download    Download    Download
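How a checkpoint loads depends on the architecture class and dict layout it was saved with; as a minimal sketch (the file name rn18_sbg.pt and the torchvision ResNet-18 are assumptions, and the repository's models/ directory likely defines its own CIFAR-10 variant to use instead):

import torch
from torchvision.models import resnet18

# Hypothetical: a ResNet-18 with a 10-way head; swap in the class from models/ if shapes mismatch.
model = resnet18(num_classes=10)

# Load a downloaded checkpoint; the actual file name and key layout depend on the release.
state = torch.load("rn18_sbg.pt", map_location="cpu")
model.load_state_dict(state)
model.eval()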

Citation

If you find this code or data useful for your research, please consider citing the paper:

@misc{ganz2023perceptually,
      title={Do Perceptually Aligned Gradients Imply Adversarial Robustness?}, 
      author={Roy Ganz and Bahjat Kawar and Michael Elad},
      year={2023},
      eprint={2207.11378},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Captum

You may need to comment out the following grid() call in your local Captum install if you get an associated error; newer Matplotlib versions removed the b keyword argument it passes. The grep below locates the line (shown here already commented out):

~/.local/lib/python3.10/site-packages/captum$ grep -Rnw . -e "#.*grid"
./attr/_utils/visualization.py:250:    # plt_axis.grid(b=False)
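Alternatively, instead of commenting the line out, a one-line local patch that replaces the removed keyword should work on current Matplotlib (this is a local edit, not an official Captum fix):

# captum/attr/_utils/visualization.py, line 250
# Before (fails on Matplotlib >= 3.6, where grid() no longer accepts b=):
#     plt_axis.grid(b=False)
# After:
plt_axis.grid(visible=False)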
