carlini / nn_robust_attacks

Robust evasion attacks against neural networks to find adversarial examples


Unable to reproduce L0 attack on MNIST

VishaalMK opened this issue

The generated adversarial examples end up in the range [-0.5, 0.5] instead of [0, 1].
I tried making some modifications to change the range, but was unsuccessful.
Could you give me some pointers on getting the adversarial images into the range [0, 1]?

I am using the model defined in setup_mnist.py.
Thanks

If you're using the setup_mnist model and the data I supply, it's already in the range [-0.5, 0.5].

If you need it in [0, 1], what I usually do is transform the input ahead of time. The attack operates on inputs in [-0.5, 0.5], so if your data and your model both use the range [0, 1], you can make the first line of your model's predict function shift the attack's inputs back:

def predict(self, xs):
    xs = xs + 0.5  # shift the attack's [-0.5, 0.5] inputs into the model's [0, 1] range
    [...]

and then call the attack by running

attack.attack(xs-0.5, ys)
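
For concreteness, here is the same idea as a standalone wrapper. This is a minimal sketch, not code from the repository: the class name ShiftedModel is hypothetical, and the forwarded attributes (image_size, num_channels, num_labels) are assumptions about what the attack reads from the model object.

    class ShiftedModel:
        # Hypothetical wrapper: lets a model trained on [0, 1] inputs
        # accept the attack's [-0.5, 0.5] inputs.
        def __init__(self, model):
            self.model = model
            # Forward the attributes the attack is assumed to read.
            self.image_size = model.image_size
            self.num_channels = model.num_channels
            self.num_labels = model.num_labels

        def predict(self, xs):
            # Shift from the attack's box back into the model's range.
            return self.model.predict(xs + 0.5)

Note the adversarial examples the attack returns are still in [-0.5, 0.5]; add 0.5 to them afterwards if you want them in [0, 1].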

Or, if you look at l2_attack.py, there are boxmin and boxmax parameters there. That's another way to solve the issue; if you end up doing that, I'd be happy to take a PR.
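
A sketch of that route, assuming the CarliniL2 constructor takes a session and a model followed by keyword arguments (the boxmin/boxmax parameters are the ones mentioned above; the rest of the call reflects the usual setup and should be treated as an assumption):

    from l2_attack import CarliniL2

    # Set the clip box to [0, 1] instead of the default [-0.5, 0.5],
    # so both the inputs you pass in and the adversarial examples you
    # get back stay in [0, 1] with no manual shifting.
    attack = CarliniL2(sess, model, boxmin=0.0, boxmax=1.0)
    adv = attack.attack(xs, ys)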

It works! Thanks a lot for your help.