carlini / nn_robust_attacks

Robust evasion attacks against neural networks to find adversarial examples

Low validation accuracy of CIFAR

HaiQW opened this issue · comments

commented

I ran the script to train the model on CIFAR10 and also the L0 attack on the trained model.

However, the validation accuracy achieved by the script is very low. It is not reasonable to perform adversarial attacks on such a low-accuracy model.

Well, responding a year late is better than not. In the chance you see this: what accuracy do you get? I think I got ~80% accuracy on this.
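For anyone wanting to sanity-check the number on their own run: validation accuracy here is just the fraction of validation samples whose argmax prediction matches the label. A minimal numpy sketch (the logits and labels below are toy stand-ins for the model's outputs, not from this repo):

```python
import numpy as np

def validation_accuracy(logits, labels):
    """Fraction of samples whose argmax prediction matches the label."""
    preds = np.argmax(logits, axis=1)
    return float(np.mean(preds == labels))

# Toy stand-in for CIFAR-10 validation outputs (10 classes).
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=1000)
logits = rng.normal(size=(1000, 10))
# Force roughly 80% of the samples to be predicted correctly,
# mimicking the ~80% figure mentioned above.
hit = rng.random(1000) < 0.8
logits[hit, labels[hit]] += 10.0

acc = validation_accuracy(logits, labels)
print(f"validation accuracy: {acc:.2f}")
```

If your trained model scores far below that on the CIFAR-10 test split, the problem is likely in training (e.g. the model not converging) rather than in the attack code.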

commented

> Well, responding a year late is better than not. In the chance you see this: what accuracy do you get? I think I got ~80% accuracy on this.

thx