Jeffkang-94 / spbn_adversarial_training


SPBN (Split-BatchNorm) Adversarial Training

This is not an official repository.

Adversarial training using split batch normalization. To test the effectiveness of "AdvProp", I ran a simple study.

Specifically, BN normalizes input features by the mean and variance computed within each mini-batch. An intrinsic assumption of BN is that the input features come from a single distribution (or similar ones). This normalization becomes problematic when the mini-batch contains data from different distributions, resulting in inaccurate statistics estimation.
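As a toy illustration (the two distributions below are synthetic, made up for demonstration), statistics estimated over a mixed mini-batch fit neither sub-distribution:

```python
import torch

# Clean-like features and shifted "adversarial" features
clean = torch.randn(128)             # roughly N(0, 1)
adv = torch.randn(128) * 2.0 + 3.0   # roughly N(3, 4)
mixed = torch.cat([clean, adv])

print(f"clean: mean={clean.mean():.2f} var={clean.var():.2f}")
print(f"adv:   mean={adv.mean():.2f} var={adv.var():.2f}")
# Mixed statistics sit between the two distributions, so normalizing
# either subset with them is inaccurate -- the motivation for split BN.
print(f"mixed: mean={mixed.mean():.2f} var={mixed.var():.2f}")
```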

SPBN Adversarial Training Algorithm

This is the pseudo code for SPBN adversarial training. Note that the model crafts the adversarial examples using the auxiliary BNs and computes each loss through the matching BNs: the loss on adversarial examples goes through the auxiliary BNs, since those are the BNs used to craft the examples, while the loss on clean examples goes through the main BNs.

(Figure: pseudo code for SPBN adversarial training)
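A minimal sketch of one such training step, assuming a `set_bn_mode` helper that switches every split BN layer between its main and auxiliary branch (sketched under Objective Function below) and a PGD-style `attack` callable; this is an illustration, not the repository's exact code:

```python
import torch.nn.functional as F

def spbn_training_step(model, x_clean, y, attack, optimizer):
    # 1. Craft adversarial examples through the auxiliary BNs.
    set_bn_mode(model, 'aux')
    x_adv = attack(model, x_clean, y)

    optimizer.zero_grad()

    # 2. The clean loss goes through the main BNs ...
    set_bn_mode(model, 'main')
    loss_clean = F.cross_entropy(model(x_clean), y)

    # 3. ... while the adversarial loss goes through the auxiliary
    #    BNs, matching the BNs used to craft the examples.
    set_bn_mode(model, 'aux')
    loss_adv = F.cross_entropy(model(x_adv), y)

    # 4. Update the shared weights with the combined loss.
    (loss_clean + loss_adv).backward()
    optimizer.step()
```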

Objective Function

$$\min_{\theta}\ \mathbb{E}_{(x,y)\sim D}\left[\,L\big(x,\,y;\,\theta,\,\mathrm{BN}_{\mathrm{main}}\big)\;+\;\max_{\|\delta\|_{\infty}\le\epsilon} L\big(x+\delta,\,y;\,\theta,\,\mathrm{BN}_{\mathrm{aux}}\big)\right]$$

where $L$ is the classification loss, $\theta$ the shared network weights, and $\mathrm{BN}_{\mathrm{main}}$ / $\mathrm{BN}_{\mathrm{aux}}$ denote forward passes through the main and auxiliary batch norms.

We train the model on both adversarial examples and clean images. The proposed model differs in that it is converted into a split batch norm model, which computes the mean and variance of adversarial inputs with an independent batch normalization (also called the auxiliary batch normalization).
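A minimal sketch of that conversion in PyTorch; `SplitBatchNorm2d`, `convert_to_spbn`, and `set_bn_mode` are illustrative names, not necessarily those used in this repository:

```python
import torch.nn as nn

class SplitBatchNorm2d(nn.Module):
    """A main BN for clean inputs and an auxiliary BN for adversarial
    inputs, each tracking its own mean and variance."""

    def __init__(self, num_features):
        super().__init__()
        self.bn_main = nn.BatchNorm2d(num_features)
        self.bn_aux = nn.BatchNorm2d(num_features)
        self.mode = 'main'  # switched externally via set_bn_mode

    def forward(self, x):
        return self.bn_main(x) if self.mode == 'main' else self.bn_aux(x)

def convert_to_spbn(model):
    """Recursively replace every BatchNorm2d with a SplitBatchNorm2d.
    (Discards the original BN parameters; fine when training from scratch.)"""
    for name, child in model.named_children():
        if isinstance(child, nn.BatchNorm2d):
            setattr(model, name, SplitBatchNorm2d(child.num_features))
        else:
            convert_to_spbn(child)
    return model

def set_bn_mode(model, mode):
    """Route all split BN layers to the 'main' or 'aux' branch."""
    for m in model.modules():
        if isinstance(m, SplitBatchNorm2d):
            m.mode = mode
```

`convert_to_spbn` is applied once after building the model; `set_bn_mode` is the helper used in the training-step sketch above.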

Experiments

We use the basic ResNet introduced in the Madry et al. paper. The baseline model is trained on adversarial data only, without SPBN applied. For training the baseline model, we followed the basic settings from the Madry paper.

PGD Adversarial Training

```
attack_steps = 7
attack_eps = 8.0/255.0
attack_lr = 2.0/255.0
```
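A minimal L∞ PGD sketch matching these settings, assuming inputs in [0, 1]; this is an illustration rather than the repository's exact implementation. Under SPBN it would be called while the auxiliary BNs are active, as in the training-step sketch above:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, steps=7, eps=8.0/255.0, lr=2.0/255.0):
    """L-infinity PGD with a random start, as in Madry et al. (sketch)."""
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(torch.clamp(x + delta, 0.0, 1.0)), y)
        grad, = torch.autograd.grad(loss, delta)
        # Take a signed ascent step, then project back into the eps-ball.
        delta = torch.clamp(delta.detach() + lr * grad.sign(), -eps, eps)
    return torch.clamp(x + delta, 0.0, 1.0).detach()
```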
