Sunshine352 / fast_adversarial

Code for the CVPR 2019 article "Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses"

About

Code for the article "Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses" (https://arxiv.org/abs/1811.09600), to be presented at CVPR 2019 (Oral presentation)

The implementation uses PyTorch 0.4.1 and runs with Python 3.6+. The code of the attack is also provided in TensorFlow. This repository also contains a PyTorch implementation of the C&W L2 attack (ported from Carlini's TensorFlow version).

Installation

This package can be installed via pip as follows:

pip install git+https://github.com/jeromerony/fast_adversarial

Using DDN to attack a model

from fast_adv.attacks import DDN
attacker = DDN(steps=100, device=device)

adv = attacker.attack(model, x, labels=y, targeted=False)

Here, model is a PyTorch nn.Module that takes inputs x and outputs the pre-softmax activations (logits), x is a batch of images (N x C x H x W), and labels are either the true labels (for targeted=False) or the target labels (for targeted=True). Note: x is expected to be in the [0, 1] range; you can use fast_adv.utils.NormalizedModel to wrap any normalization, such as mean subtraction.
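The point of such a wrapper is that the attack can work directly on [0, 1] inputs while the normalization happens inside the model's forward pass. A minimal sketch of the idea in plain Python (hypothetical names and a toy callable model, not the actual fast_adv.utils.NormalizedModel, which operates on PyTorch tensors):

```python
# Sketch of a normalization wrapper (hypothetical; the real
# fast_adv.utils.NormalizedModel works on PyTorch tensors). The attack
# only ever sees inputs in [0, 1]; the wrapper standardizes each channel
# before delegating to the underlying model, so the attack never needs
# to know about the normalization.

class NormalizedWrapper:
    def __init__(self, model, mean, std):
        self.model = model  # any callable taking a list of channel values
        self.mean = mean    # per-channel means
        self.std = std      # per-channel standard deviations

    def __call__(self, x):
        # Standardize each channel, then delegate to the wrapped model.
        normalized = [(xi - m) / s for xi, m, s in zip(x, self.mean, self.std)]
        return self.model(normalized)

# Toy "model": sums its (already normalized) inputs.
wrapped = NormalizedWrapper(model=sum, mean=[0.5, 0.5, 0.5], std=[0.25, 0.25, 0.25])
print(wrapped([0.5, 0.75, 1.0]))  # sum of [0.0, 1.0, 2.0] -> 3.0
```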

See the "examples" folder for a Python script and a Jupyter notebook example.
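The paper's central idea, decoupling the update direction from the perturbation norm, can be illustrated without the package. The toy sketch below (plain Python, a hypothetical 2D linear classifier; not the repository's DDN implementation) steps along the gradient direction while a separate norm budget eps shrinks whenever the current point is already adversarial and grows when it is not:

```python
import math

# Toy sketch of the DDN principle on a linear classifier
# (logit = w . x + b, class 1 if logit > 0). Hypothetical setup, NOT the
# repository's fast_adv.attacks.DDN: it only illustrates decoupling the
# step *direction* (the gradient) from the perturbation *norm* (a radius
# eps that shrinks when the point is adversarial and grows otherwise).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

def ddn_sketch(w, b, x, steps=100, gamma=0.05, alpha=0.5):
    delta = [0.0] * len(x)
    eps = 1.0
    best = None
    for _ in range(steps):
        logit = dot(w, [xi + di for xi, di in zip(x, delta)]) + b
        if logit <= 0:                       # already adversarial for class 1
            eps *= 1 - gamma                 # tighten the norm budget
            if best is None or norm(delta) < norm(best):
                best = list(delta)           # keep the smallest adversarial delta
        else:
            eps *= 1 + gamma                 # not adversarial yet: enlarge the budget
        g = norm(w)                          # gradient of the logit is w (linear model)
        delta = [d - alpha * wi / g for d, wi in zip(delta, w)]   # step along -grad
        n = max(norm(delta), 1e-12)
        delta = [d * eps / n for d in delta]  # project onto the eps-sphere
    return best if best is not None else delta

w, b = [1.0, 2.0], -0.5
x = [1.0, 1.0]                               # logit = 2.5 > 0, i.e. class 1
adv = ddn_sketch(w, b, x)
print(dot(w, [xi + di for xi, di in zip(x, adv)]) + b <= 0)  # -> True: class flipped
```

Because eps is adjusted multiplicatively each step, the norm budget oscillates around the smallest radius at which the decision boundary is reached, which is why the attack finds small L2 perturbations in few iterations.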

Adversarial training with DDN

The following commands were used to adversarially train the models:

MNIST:

python -m fast_adv.defenses.mnist --lr=0.01 --lrs=30 --adv=0 --max-norm=2.4 --sn=mnist_adv_2.4

CIFAR-10 (adversarial training starts at epoch 200):

python -m fast_adv.defenses.cifar10 -e=230 --adv=200 --max-norm=1 --sn=cifar10_wrn28-10_adv_1

Adversarially trained models


License: BSD 3-Clause "New" or "Revised" License


Languages

Language: Python 100.0%