adversarial-attacks-pytorch

A PyTorch implementation of adversarial attacks and utils

Home Page: https://adversarial-attacks-pytorch.readthedocs.io/en/latest/index.html


Adversarial-Attacks-Pytorch

This is a lightweight repository of adversarial attacks for PyTorch.

It contains popular attack methods and some utilities.

Here is the documentation for this package.

If you installed torchattacks with a version below 1.3 through pip, please upgrade it to the newest version!

Table of Contents

  1. Usage
  2. Attacks and Papers
  3. Demos
  4. Frequently Asked Questions
  5. Update Records
  6. Recommended Sites and Packages

Usage

Dependencies

  • torch 1.2.0
  • python 3.6

Installation

  • pip install torchattacks or
  • git clone https://github.com/Harry24k/adversairal-attacks-pytorch

import torchattacks

# Build a PGD attack around a trained classification model.
pgd_attack = torchattacks.PGD(model, eps=4/255, alpha=8/255)
# Generate adversarial images from clean images and their labels.
adversarial_images = pgd_attack(images, labels)

Precautions

  • WARNING :: All images should be scaled to [0, 1] with transforms.ToTensor() before being used in attacks.
  • WARNING :: All models should return ONLY ONE vector of shape (N, C), where C = the number of classes.
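
Both precautions are illustrated in the minimal sketch below. The dataset choice (CIFAR10) and the SimpleNet model are assumptions made for this example only; the key points are the ToTensor() scaling and the single (N, C) output.

import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms

# ToTensor() scales pixel values to [0, 1], as the attacks require.
transform = transforms.ToTensor()
test_set = torchvision.datasets.CIFAR10(root='./data', train=False,
                                        download=True, transform=transform)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=128)

# A hypothetical model for illustration: forward() must return exactly one
# (N, C) tensor of class scores, not a tuple or dict of outputs.
class SimpleNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1),
                                      nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(16, num_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.fc(x)  # shape: (N, C)

model = SimpleNet()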

Attacks and Papers

The papers and the corresponding methods are listed below with a brief summary. All attacks in this repository are provided as a CLASS (a short instantiation sketch follows the comparison table below). If you want attacks built as a Function, please refer to the repositories listed below.

  • Explaining and harnessing adversarial examples : Paper, Repo

    • FGSM
  • DeepFool: a simple and accurate method to fool deep neural networks : Paper

    • DeepFool
  • Adversarial Examples in the Physical World : Paper, Repo

    • BIM or iterative-FGSM
    • StepLL
  • Towards Evaluating the Robustness of Neural Networks : Paper, Repo

    • CW(L2)
  • Ensemble Adversarial Training: Attacks and Defenses : Paper, Repo

    • RFGSM
  • Towards Deep Learning Models Resistant to Adversarial Attacks : Paper, Repo

    • PGD(Linf)
  • Comment on "Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network" : Paper

    • APGD(EOT + PGD)
[Image table from the original README: clean vs. adversarial examples for each attack — FGSM, BIM, StepLL, RFGSM, CW, PGD (w/o random starts), PGD (w/ random starts), and DeepFool.]
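
As noted above, every attack is a class that wraps the target model. The sketch below instantiates several of the attacks listed above; model, images, and labels are assumed to exist as in the installation snippet, the hyperparameter values are illustrative, and default values and extra arguments differ between versions, so check the documentation for your installed release.

import torchattacks

# Each attack wraps the model; calling the instance on (images, labels)
# returns adversarial images with the same shape as the inputs.
attacks = [
    torchattacks.FGSM(model, eps=8/255),
    torchattacks.BIM(model, eps=8/255, alpha=2/255),
    torchattacks.CW(model),        # default hyperparameters
    torchattacks.PGD(model, eps=8/255, alpha=2/255),
    torchattacks.DeepFool(model),  # default hyperparameters
]

for attack in attacks:
    adversarial_images = attack(images, labels)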

Demos

  • White Box Attack with Imagenet (code): This demo makes adversarial examples from the Imagenet dataset to fool Inception v3. Because the full Imagenet dataset is too large, only the 'Giant Panda' image is used.

  • Black Box Attack with CIFAR10 (code): This demo provides an example of a black box attack with two different models. First, it builds an adversarial dataset from a holdout model with CIFAR10 and saves it as a torch dataset. Second, it uses that adversarial dataset to attack a target model.

  • Adversarial Training with MNIST (code): This demo shows how to do adversarial training with this repository, using the MNIST dataset and a custom model. Adversarial training is performed with PGD, and FGSM is then applied to test the trained model (a minimal training-loop sketch follows this list).

  • Targeted PGD with Imagenet (code): This demo shows that images can be perturbed with targeted PGD so that they are classified as a label of our choosing.

  • MultiAttack with MNIST (code): This demo shows an example of PGD with N random restarts.
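
As referenced in the adversarial training demo above, here is a minimal training-loop sketch. It assumes a model, an optimizer, and a train_loader that are not defined in this README, uses the PGD argument names from the installation snippet with illustrative values, and omits device handling; the actual demo notebook may differ.

import torch.nn as nn
import torchattacks

criterion = nn.CrossEntropyLoss()
# Reuse the PGD attack as an on-the-fly adversarial example generator.
attack = torchattacks.PGD(model, eps=8/255, alpha=2/255)

model.train()
for images, labels in train_loader:
    # Craft adversarial examples against the current model parameters.
    adv_images = attack(images, labels)
    optimizer.zero_grad()
    loss = criterion(model(adv_images), labels)
    loss.backward()
    optimizer.step()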

Frequently Asked Questions

Update Records

Version 1.2 and below (DON'T USE)

  • The pip packages were corrupted by the accumulation of previous versions.

Version 1.3

  • Pip Package Re-uploaded.

Version 1.4

  • PGD :
    • Now it supports targeted mode.

Version 1.5 (Stable)

  • MultiAttack :
    • MultiAttack added.
    • With it, you can use PGD with N random restarts or combine different attack methods into a stronger attack (see the sketch below).
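
For illustration, a minimal sketch of PGD with N random restarts via MultiAttack follows. It assumes that MultiAttack takes a list of attack instances and that PGD exposes a random_start flag; both signatures can differ between releases, so check the documentation for your installed version.

import torchattacks

N_RESTARTS = 5  # assumed number of restarts for this sketch
# Each PGD instance starts from a different random point inside the eps-ball.
restarts = [torchattacks.PGD(model, eps=8/255, alpha=2/255, random_start=True)
            for _ in range(N_RESTARTS)]
attack = torchattacks.MultiAttack(restarts)
adversarial_images = attack(images, labels)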

Recommended Sites and Packages

About

A PyTorch implementation of adversarial attacks and utils.

https://adversarial-attacks-pytorch.readthedocs.io/en/latest/index.html

License: MIT License


Languages

Language: Python 100.0%