carlini / nn_robust_attacks

Robust evasion attacks against neural networks to find adversarial examples

Setting confidence=0 produces different adversarial accuracy between different runs

ForeverZyh opened this issue · comments

Hello, I have run the l2_attack for MNIST on GPUs.
Sometimes it produces adversarial accuracy around 1%, and sometimes around 20-30%.
Setting confidence=0.01 (or another small value) resolves the issue.
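For concreteness, my invocation looks roughly like this (a simplified sketch in the spirit of test_attack.py; the repo's MNISTModel stands in here for the model I'm actually attacking, and the target construction is just illustrative):

```python
import numpy as np
import tensorflow as tf

from setup_mnist import MNIST, MNISTModel
from l2_attack import CarliniL2

with tf.Session() as sess:
    data = MNIST()
    model = MNISTModel("models/mnist", sess)

    # confidence=0 is the setting that behaves differently across runs;
    # switching to confidence=0.01 is what made the results stable for me.
    attack = CarliniL2(sess, model, batch_size=100,
                       max_iterations=1000, confidence=0)

    # Attack 100 test images, each targeted at (true label + 1) mod 10,
    # just to keep the sketch self-contained.
    inputs = data.test_data[:100]
    labels = np.argmax(data.test_labels[:100], axis=1)
    targets = np.eye(10)[(labels + 1) % 10]

    adv = attack.attack(inputs, targets)

    # "Adversarial accuracy" below means the fraction of adversarial
    # examples the model still classifies as the original label.
    preds = np.argmax(model.model.predict(adv), axis=1)
    print("adversarial accuracy:", np.mean(preds == labels))
```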

Do you have any idea about why this happened?
Thanks!

And if I run it on CPU, it seems to always produce adversarial accuracy around 1%.

Not sure if I have anything insightful to offer. Have you tried GPU with batch_size=1 (and CPU with larger batch sizes)?
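Something like this would make the comparison concrete (a sketch reusing the variables from your snippet above; `run` is just a throwaway helper):

```python
import numpy as np
from l2_attack import CarliniL2

def run(batch_size):
    # Same attack settings, only the batch size changes.
    attack = CarliniL2(sess, model, batch_size=batch_size,
                       max_iterations=1000, confidence=0)
    adv = attack.attack(inputs, targets)
    preds = np.argmax(model.model.predict(adv), axis=1)
    return np.mean(preds == labels)   # "adversarial accuracy"

for bs in (1, 100):   # 1 vs. whatever batch size you have been using
    print("batch_size=%d: adversarial accuracy = %.3f" % (bs, run(bs)))
```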

Does this happen even on the baseline code provided, without modification?

Another diagnostic would be to plot the loss of every input in the batch.
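A rough external version of that check, if you don't want to instrument l2_attack.py itself (it only looks at the final adversarial examples rather than the loss trajectory, and it assumes the model/inputs/targets/adv variables from the run above):

```python
import numpy as np
import matplotlib.pyplot as plt

logits = model.model.predict(adv)                     # per-example logits
target_logit = np.sum(logits * targets, axis=1)       # logit of the attack target
other_logit = np.max(logits - 1e4 * targets, axis=1)  # best non-target logit
margin = other_logit - target_logit                   # >= 0 means the attack failed
l2 = np.sqrt(np.sum((adv - inputs) ** 2, axis=(1, 2, 3)))

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 3))
ax1.bar(range(len(margin)), margin)
ax1.set_title("per-example margin (other - target)")
ax2.bar(range(len(l2)), l2)
ax2.set_title("per-example L2 distortion")
plt.show()
```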

I have not tried those configurations; I will try them later. The batch size was 100 for all the experiments at that time.

l2_attack is not modified, but I'm trying to attack my own model using CarliniL2.

I checked the losses across iterations; even for runs with quite different adversarial accuracy, the losses are quite similar.

I forgot to mention that this issue only happens when the CPU load is very high. I'm not sure, but I suspect it is GPU/CPU-related.
And now that I'm using confidence=0.01, the adversarial accuracy seems to always be 0.

I tried GPU with batch_size=1, and it works pretty well. But that's too slow.

So you are using a new model with new code? I don't know what could be causing the multi-batch code to behave differently than single-example code, but that should give you something specific to look into. Try 2 or 3 examples: do they behave the same? Does your code behave identically for different batch sizes? Are you using a reduce_sum somewhere? There are a lot of things to try.
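As a generic illustration of the reduce_sum issue I mean (a TF1 sketch, not a claim about your code): anything that should be a per-example quantity but gets reduced over the batch axis will make results depend on the batch size.

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 28, 28, 1])
x_adv = tf.placeholder(tf.float32, [None, 28, 28, 1])

# Collapses the whole batch into one scalar: per-example information is lost,
# so anything built on top of it (per-example success tracking, the binary
# search over const, early stopping) silently couples the examples.
l2_scalar = tf.reduce_sum(tf.square(x_adv - x))

# Per-example distance: reduce only over the image axes and keep the batch axis.
l2_per_example = tf.reduce_sum(tf.square(x_adv - x), axis=[1, 2, 3])
```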

I see. I have tried max_iterations=100, max_iterations=1000, and max_iterations=10000. Strangely, max_iterations=100 produces the best result with 5% accuracy, and max_iterations=10000 produces the worst result with 46% accuracy.

Accidentally confused github issues; didn't want to close this one.

This seems like something that should be easy to diagnose, if you check why some adversarial examples no longer remain adversarial after more iterations of gradient descent.
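A concrete way to run that check, reusing the variables from your snippet above (it re-runs the identical inputs at two iteration budgets and reports which specific examples flip back to non-adversarial):

```python
import numpy as np
from l2_attack import CarliniL2

def adversarial_mask(max_iterations):
    attack = CarliniL2(sess, model, batch_size=100,
                       max_iterations=max_iterations, confidence=0)
    adv = attack.attack(inputs, targets)
    preds = np.argmax(model.model.predict(adv), axis=1)
    return preds != labels   # True where the example fools the model

ok_short = adversarial_mask(100)
ok_long = adversarial_mask(10000)
flipped = np.where(ok_short & ~ok_long)[0]
print("adversarial after 100 iterations but not after 10000:", flipped)
```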

My new guess: does your model have randomness involved? If so, you may be over-optimizing against one choice of randomness and you may need to ensemble over different randomness values.
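A minimal sketch of what I mean by ensembling over the randomness, assuming your model exposes the same interface as the models in this repo (image_size / num_channels / num_labels attributes and a predict method returning logits); the wrapper name here is made up:

```python
import tensorflow as tf

class EnsembledModel:
    """Hypothetical wrapper that averages logits over several independent
    draws of the model's randomness, so the attack optimizes the expected
    logits rather than a single random realization."""

    def __init__(self, base_model, num_samples=10):
        self.base_model = base_model
        self.num_samples = num_samples
        # CarliniL2 reads these attributes from the model it is given.
        self.image_size = base_model.image_size
        self.num_channels = base_model.num_channels
        self.num_labels = base_model.num_labels

    def predict(self, x):
        # Assumes each call to base_model.predict builds ops that re-sample
        # the randomness (e.g. fresh dropout/noise ops); if the randomness
        # is fed in externally, feed a different value per term instead.
        logits = [self.base_model.predict(x) for _ in range(self.num_samples)]
        return tf.add_n(logits) / float(self.num_samples)

# Then attack the averaged model instead of the raw one:
# attack = CarliniL2(sess, EnsembledModel(model), batch_size=100, confidence=0)
```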