carlini / nn_robust_attacks

Robust evasion attacks against neural networks to find adversarial examples


Why are the adversarial perturbations damaged by saving the adversarial samples with scipy.misc?

shenqixiaojiang opened this issue

Hello, @carlini. Sorry to bother you again. I'm trying to defend against your attack, but something strange happened: the adversarial perturbations were damaged by saving the adversarial samples with scipy.misc.
The image is preprocessed with `image = image / 255.0 - 0.5` before being fed to the pre-trained model, and we find that the final output differs from the original one.
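For reference, a minimal sketch (with a stand-in array; assuming the adversarial examples live in [0, 1] pixel space) of the round trip that loses information: writing the image with an 8-bit writer such as scipy.misc quantizes every pixel to one of 256 levels, so after reloading and applying `image / 255.0 - 0.5` the perturbation is no longer exactly what the attack produced:

```python
import numpy as np

# Stand-in for a real adversarial example in [0, 1] (28x28x1 for MNIST).
adv = np.random.rand(28, 28, 1).astype(np.float32)

# Saving with an 8-bit image writer effectively does this:
quantized = np.round(adv * 255.0).astype(np.uint8)

# Reloading and applying the model's preprocessing:
reloaded = quantized.astype(np.float32) / 255.0 - 0.5
original = adv - 0.5

# Each pixel can shift by up to ~1/510, which can be enough to change the prediction.
print("max round-trip error:", np.abs(reloaded - original).max())
```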

Discretizing the values from a real-numbered value to one of the 256 points degrades the quality of the adversarial examples. This can easily be fixed by performing a second optimization step on the lattice of discretized images (often only a few iterations are necessary). However, if you don't want to have to do this, you can also just save and load it as float32.
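A minimal sketch of the float32 route mentioned above (the file name and the stand-in array are hypothetical):

```python
import numpy as np

# Stand-in for the adversarial batch returned by the attack.
adv = np.random.rand(10, 28, 28, 1).astype(np.float32)

# Save without any quantization:
np.save("adv_examples.npy", adv)

# Later, load the exact float values back:
adv_loaded = np.load("adv_examples.npy")
assert np.array_equal(adv, adv_loaded)
```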

@carlini
Yeah, the adversarial samples are now saved in 'npy' format.
In addition, the non-targeted adversarial samples for the MNIST dataset were used to attack the model from the cleverhans library.
The accuracy was 0.9, which means that 90% of the adversarial samples failed to attack.
Do you think this is normal, and can I defend against your attack by measuring accuracy on the cleverhans test model?
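A sketch of how that number could be measured (the names here are hypothetical: `model` is the cleverhans-trained classifier with a `predict` method, `adv` the non-targeted adversarial examples, `y_true` the original integer labels):

```python
import numpy as np

def attack_success_rate(model, adv, y_true):
    """Fraction of adversarial examples that are misclassified.

    `model` is any classifier whose `predict` returns logits or probabilities,
    `adv` is the batch of adversarial examples, `y_true` the original labels.
    """
    preds = np.argmax(model.predict(adv), axis=1)
    accuracy = np.mean(preds == y_true)  # 0.9 here means 90% are still classified correctly
    return 1.0 - accuracy                # i.e. only 10% of the attacks succeeded
```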

Actually, the model in your 'setup_mnist.py' file was trained with the cleverhans library, and the result agrees with the above.

Sorry, I'm not sure what you're trying to say. Are you attacking and defending using the same model? If they are different, you will need to generate transferable adversarial examples; to do this, set the confidence to 3 or 4 on MNIST.
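For example, the confidence is a constructor argument of the L2 attack in this repository. A sketch, assuming the `CarliniL2`, `MNIST`, and `MNISTModel` interfaces from `l2_attack.py` and `setup_mnist.py` (exact argument names and data preparation may differ across versions; see `test_attack.py`):

```python
import tensorflow as tf

from setup_mnist import MNIST, MNISTModel
from l2_attack import CarliniL2

with tf.Session() as sess:
    data = MNIST()
    model = MNISTModel("models/mnist", sess)

    # Higher confidence (kappa) yields stronger examples that are more likely
    # to transfer to a different model, at the cost of larger distortion.
    attack = CarliniL2(sess, model, batch_size=9, max_iterations=1000,
                       targeted=False, confidence=4)

    # For a non-targeted attack, the "targets" are the true labels to move away from.
    inputs = data.test_data[:9]
    targets = data.test_labels[:9]
    adv = attack.attack(inputs, targets)
```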

I'm not sure what this has to do with cleverhans.