
AEPW-pytorch

A PyTorch implementation of "Adversarial Examples in the Physical World"

Summary

This code is a PyTorch implementation of the basic iterative method (also known as I-FGSM) and the iterative least-likely class method.
In this code, I used the above methods to fool Inception v3.
A 'Giant Panda' image is used as an example.
You can add other pictures by placing them in a folder named with their label under 'data/imagenet'.
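
For reference, the following is a minimal sketch of the basic iterative method described in the paper. The function name, hyperparameter values, and the assumption that pixels lie in [0, 1] are illustrative choices of this sketch, not the repository's API.

```python
import torch
import torch.nn as nn

def basic_iterative_attack(model, image, label, eps=4/255, alpha=1/255, steps=10):
    """Basic iterative method (I-FGSM): repeatedly take a small
    signed-gradient step and clip the result back into an eps-ball
    around the original image. Assumes pixels in [0, 1];
    hyperparameter values here are illustrative.
    """
    loss_fn = nn.CrossEntropyLoss()
    original = image.clone().detach()
    adv = image.clone().detach()

    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), label)
        grad = torch.autograd.grad(loss, adv)[0]
        # Ascend the loss on the true label. For the iterative
        # least-likely class method, pass the model's least-likely
        # prediction as `label` and step in the opposite direction
        # (adv - alpha * grad.sign()) to increase its probability.
        adv = adv.detach() + alpha * grad.sign()
        # Clip the perturbation into the eps-ball, then to valid pixels.
        adv = original + torch.clamp(adv - original, -eps, eps)
        adv = torch.clamp(adv, 0.0, 1.0)
    return adv
```

Depending on the torchvision version, the target model can be loaded with something like `torchvision.models.inception_v3(pretrained=True).eval()`.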

Also, this code shows that an adversarial attack is possible even with a photo of an adversarial image.
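
The paper tests this by printing adversarial images and photographing them with a cell-phone camera. As a rough in-code stand-in (an assumption of this sketch, not the paper's procedure), one can degrade the adversarial image, e.g. by JPEG re-encoding, and check whether it still fools the model:

```python
import io
import torch
from PIL import Image
from torchvision import transforms

@torch.no_grad()
def survives_reencoding(model, adv, true_label, quality=75):
    """Crude stand-in for a printed-and-photographed image: JPEG
    re-encode the adversarial example (shape 1xCxHxW, pixels in
    [0, 1]) and check whether it is still misclassified.
    """
    buf = io.BytesIO()
    transforms.ToPILImage()(adv.squeeze(0).cpu()).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    degraded = transforms.ToTensor()(Image.open(buf)).unsqueeze(0).to(adv.device)
    pred = model(degraded).argmax(dim=1)
    return (pred != true_label).item()  # True if the attack survived
```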

Requirements

  • python==3.6
  • numpy==1.14.2
  • pytorch==1.0.0

Important results not in the code

  • The paper proposes a new metric (called the destruction rate) to measure the influence of arbitrary transformations. (p.6)
    • The destruction rate is the fraction of adversarial images which are no longer misclassified after the transformations; a sketch of this computation follows this list.
    • Adversarial examples generated by FGSM are the most robust to transformations.
    • The iterative least-likely class method is the least robust.
    • Blur, noise, and JPEG encoding have a higher destruction rate than changes in brightness and contrast.
  • The paper shows that although iterative methods obtain very high confidence, their adversarial examples are less likely to survive photo transformations. (p.8-9)
    • The prefiltered case (clean image correctly classified, adversarial image confidently incorrectly classified) could not fool the model after transformations, unlike the average case (randomly chosen images).
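
Below is a minimal sketch of how the destruction rate could be computed over a batch in PyTorch. The helper name, the batch interface, and the zero-division guard are assumptions of this sketch, not code from the repository or the paper.

```python
import torch

@torch.no_grad()
def destruction_rate(model, clean, adv, labels, transform):
    """Fraction of adversarial images that no longer fool the model
    after `transform` (blur, noise, JPEG, brightness, ...), counted
    only over images that were classified correctly when clean and
    misclassified when adversarial, following the paper's definition (p.6).
    """
    pred_clean = model(clean).argmax(dim=1)
    pred_adv = model(adv).argmax(dim=1)
    pred_transformed = model(transform(adv)).argmax(dim=1)

    # Images that were correctly classified clean AND fooled adversarially.
    fooled = (pred_clean == labels) & (pred_adv != labels)
    # Of those, the ones the transformation "destroyed" (correct again).
    destroyed = fooled & (pred_transformed == labels)
    return destroyed.sum().item() / max(fooled.sum().item(), 1)
```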

License

MIT License

