xiaofanustc / adversarial

PyTorch - Adversarial Training

Adversarial training with PyTorch. Start training with:

python main.py -a -v
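
The training script itself is not reproduced here, but as a rough sketch of what a PGD-based adversarial training step looks like (in the spirit of the Madry et al. reference below), the following PyTorch snippet may help. The model, data loader, and attack hyperparameters (eps, alpha, steps) are illustrative assumptions, not values taken from this repository:

    # Minimal sketch of one PGD adversarial training step (Madry et al. style).
    # Model, loader, and attack hyperparameters are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=7):
        """Craft L-inf PGD adversarial examples around clean inputs x in [0, 1]."""
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            # signed-gradient ascent step, then project back into the eps-ball
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
        return x_adv.detach()

    def train_epoch(model, loader, optimizer, device):
        model.train()
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x_adv = pgd_attack(model, x, y)          # inner maximization
            optimizer.zero_grad()
            loss = F.cross_entropy(model(x_adv), y)  # outer minimization
            loss.backward()
            optimizer.step()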

Accuracy (WIP)

Model Acc.
VGG16 --.--%
ResNet18 51.99%
ResNet50 --.--%
ResNet101 --.--%
MobileNetV2 --.--%
ResNeXt29(32x4d) --.--%
ResNeXt29(2x64d) --.--%
DenseNet121 --.--%
PreActResNet18 --.--%
DPN92 --.--%

Learning rate adjustment

I manually adjust the learning rate during training:

  • 0.1 for epoch [0,50)
  • 0.01 for epoch [50,60)

Resume training with python main.py -r --lr=0.01 -a -v
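
The schedule above amounts to rewriting the optimizer's learning rate at the start of each epoch. A minimal sketch, assuming a standard torch.optim optimizer:

    # Sketch of the manual schedule: 0.1 for epochs [0, 50), 0.01 for [50, 60).
    def adjust_learning_rate(optimizer, epoch):
        lr = 0.1 if epoch < 50 else 0.01
        for param_group in optimizer.param_groups:
            param_group['lr'] = lr

    # usage inside the training loop:
    # for epoch in range(60):
    #     adjust_learning_rate(optimizer, epoch)
    #     ... train for one epoch ...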

References

  1. Authors' code: MadryLab/cifar10_challenge

  2. Baseline code: kuangliu/pytorch-cifar

Notes

For more on the Projected Gradient Descent (PGD) attack, see the following papers; the core PGD update is summarized after the list:

  1. Towards Deep Learning Models Resistant to Adversarial Attacks

  2. Adversarially Robust Generalization Requires More Data
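
In brief, the PGD attack described in the first paper starts from a (possibly randomized) point near the clean input x and repeatedly takes a signed-gradient step of size alpha, projecting back onto the allowed perturbation set S (typically an l-infinity ball of radius epsilon) after each step:

    x^{t+1} = \Pi_{x + S}\big(x^t + \alpha \,\mathrm{sgn}(\nabla_x L(\theta, x^t, y))\big)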
