carlini / nn_robust_attacks

Robust evasion attacks against neural networks to find adversarial examples


Unable to generate l2 attack examples

ericcheng09 opened this issue · comments

I have a pre-trained Keras MNIST model and I want to use it to generate adversarial examples.

Here is the code I changed in class MNISTModel() in setup_mnist.py:
```python
# Inside MNISTModel.__init__ in setup_mnist.py
# (needs: from keras.models import load_model)
self.num_channels = 1
self.image_size = 28
self.num_labels = 10
K.set_session(session)
self.model = load_model('MNIST_model1.h5')
```
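For context, here is a minimal sketch of how such a modified model is then handed to the attack, following the pattern of the repository's test_attack.py. The constructor call and the target-label construction below are illustrative assumptions, not the reporter's actual code:

```python
# Sketch: run the L2 attack against the modified MNISTModel
# (TF1-style session, matching the repository's code).
import numpy as np
import tensorflow as tf

from setup_mnist import MNIST, MNISTModel
from l2_attack import CarliniL2

with tf.Session() as sess:
    data = MNIST()
    model = MNISTModel(sess)  # modified __init__ from above; signature may differ

    attack = CarliniL2(sess, model, batch_size=9,
                       max_iterations=1000, confidence=0)

    inputs = data.test_data[:9]                         # images scaled to [-0.5, 0.5]
    targets = np.roll(data.test_labels[:9], 1, axis=1)  # target a different class per image

    adv = attack.attack(inputs, targets)
    dist = np.sum((adv - inputs) ** 2, axis=(1, 2, 3)) ** 0.5
    print("mean L2 distortion:", dist.mean())
```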

But when I ran the code, some of the adversarial examples it produced were all zeros.

Should I load the model like this?
Thank you

Yes, I am facing a similar situation. Did you find a workaround for this, ericcheng09?
Loading a model trained with train_models.py also produces some adversarial examples that are all zeros.

Any suggestions?
Thanks

When it outputs all zeros, that's an indication it failed to construct an adversarial example. Most of the time this means the Keras model you are loading includes a softmax activation as its final layer; you should remove it (or otherwise change the predict function to return tf.log(self.model(x))).
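In code, the two fixes look roughly like this. This is a sketch: the layer index in option 1 assumes the softmax is its own final Activation layer, and MNIST_model1.h5 is the file from the original question:

```python
import tensorflow as tf
from keras.models import Model, load_model

full = load_model('MNIST_model1.h5')

# Option 1: rebuild the network to output the pre-softmax logits,
# assuming the softmax is a standalone final Activation layer.
logits_model = Model(inputs=full.input, outputs=full.layers[-2].output)

# Option 2: keep the softmax model but make MNISTModel.predict return
# log-probabilities. Log-softmax differs from the logits only by a
# per-example constant, which cancels in the attack's loss (the loss
# compares logit differences between classes).
def predict(self, data):
    return tf.log(self.model(data))
```

If the softmax is fused into the last Dense layer rather than being a separate Activation, option 1 instead needs that Dense layer rebuilt with no activation and the trained weights copied over.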

Thanks for the feedback. It works for most of the examples now.

Can I generate CIFAR-10 attack examples on Windows?

I've never tested that, but that seems like a different issue than this one.