MadryLab / cifar10_challenge

A challenge to explore adversarial robustness of neural networks on CIFAR10.


White-box results for madry_lab_challenges in the cleverhans examples.

lepangdan opened this issue

I ran the code in 'cleverhans/examples/madry_lab_challenges/cifar10/attack_model.py' with the default parameter settings to attack the target model using the 'models/adv_trained' checkpoint, and I got the results below, which differ somewhat from those on the white-box leaderboard. I don't understand why the resulting test accuracies are higher. Any help would be appreciated!
PGD: 0.5370
FGSM: 0.6330
CW-L2: 0.5420
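(For context, the numbers above are the model's accuracy on the adversarially perturbed test set. A rough sketch of how such an accuracy is computed is below; the `attack` and `model_predict` callables are placeholders standing in for the actual cleverhans/challenge code paths, not functions from attack_model.py.)

```python
import numpy as np

def adversarial_accuracy(x_test, y_test, attack, model_predict, batch_size=100):
    """Accuracy of a model on adversarially perturbed test data.

    attack(x, y) is assumed to return perturbed inputs and model_predict(x)
    to return predicted class labels; both are hypothetical stand-ins for
    the real attack and model wrappers.
    """
    correct = 0
    for i in range(0, len(x_test), batch_size):
        x, y = x_test[i:i + batch_size], y_test[i:i + batch_size]
        x_adv = attack(x, y)                       # craft adversarial examples
        correct += np.sum(model_predict(x_adv) == y)
    return correct / len(x_test)
```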

Unfortunately, we are not familiar with the code/model in the cleverhans implementation of our challenge. The 20-step PGD attack on the leaderboard uses the default config.json, but with double the number of steps (20) and half the step size (1).
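(For reference, here is a minimal sketch of an L∞ PGD loop with those leaderboard settings: epsilon 8, 20 steps, step size 1 on the [0, 255] pixel scale, with a random start. The `grad_fn` callback, which returns the loss gradient with respect to the input, is an assumed placeholder for whatever model/session plumbing you use; it is not part of the original scripts.)

```python
import numpy as np

def pgd_linf(x_nat, y, grad_fn, epsilon=8.0, num_steps=20, step_size=1.0,
             rand_init=True):
    """L-inf PGD sketch using the leaderboard settings (pixels in [0, 255]).

    grad_fn(x, y) is assumed to return d(loss)/dx for the target model.
    """
    if rand_init:
        # Random start inside the epsilon ball, as in the challenge attack.
        x = x_nat + np.random.uniform(-epsilon, epsilon, x_nat.shape)
    else:
        x = np.copy(x_nat)

    for _ in range(num_steps):
        grad = grad_fn(x, y)
        # Ascend the loss along the sign of the gradient.
        x = x + step_size * np.sign(grad)
        # Project back into the epsilon ball and the valid pixel range.
        x = np.clip(x, x_nat - epsilon, x_nat + epsilon)
        x = np.clip(x, 0.0, 255.0)
    return x
```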