carlini / nn_robust_attacks

Robust evasion attacks against neural networks to find adversarial examples

About the settings for ImageNet

lith0613 opened this issue

I want to know whether the parameter settings for ImageNet are the same as for CIFAR. I load Inception V3 and search for the constant c via 9 binary search steps, which takes a very long time to run. Can you share some ideas on how to accelerate the attack in your method?

Ah, sorry. Missed this email. It's going to be slow with a few thousand iterations of gradient descent regardless. If you set the number of search steps to 3 or 4 it's probably fine if you have a good initial guess. But yeah, it's just slow. "Reasonable" settings are here:
cleverhans-lab/cleverhans#813

cw_params = {'binary_search_steps': 4,
            'max_iterations': 2000,
            'learning_rate': 0.0001,
            'initial_const': .1,
            'clip_min': 0,
            'clip_max': 1,
            'abort_early': True,
            'batch_size': 50}
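
For context, here is a minimal sketch of how those settings could be passed to the CleverHans attack the linked issue refers to. It assumes the TF1-era CleverHans API (cleverhans.attacks.CarliniWagnerL2 and CallableModelWrapper, which may differ in newer releases); inception_logits_fn and x_batch are placeholders for your own Inception V3 forward function and a batch of images scaled to [0, 1]:

import tensorflow as tf
from cleverhans.attacks import CarliniWagnerL2
from cleverhans.model import CallableModelWrapper

# Same settings as the dict above, written compactly.
cw_params = {'binary_search_steps': 4, 'max_iterations': 2000,
             'learning_rate': 0.0001, 'initial_const': .1,
             'clip_min': 0, 'clip_max': 1,
             'abort_early': True, 'batch_size': 50}

sess = tf.Session()
# Wrap a callable that maps images to logits (hypothetical function name).
model = CallableModelWrapper(inception_logits_fn, 'logits')
attack = CarliniWagnerL2(model, sess=sess)

# Untargeted by default; pass y_target=... (one-hot labels) for a targeted attack.
x_adv = attack.generate_np(x_batch, **cw_params)

The main levers for speed are fewer binary_search_steps (with a sensible initial_const so the first guess for c is already close) and abort_early, which stops an iteration block once the loss stops improving.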

Thanks! I have another question, about the definitions of the best-case and worst-case settings in your paper. For example, the paper defines the best case as follows:

Best Case: perform the attack against all incorrect classes,
and report the target class that was least difficult to attack.

But I don't quite understand what "least difficult to attack" means. Do you just attack all the other labels and take, as the best case, the target class whose adversarial example has the smallest distortion (L-norm), as in the sketch below? Looking forward to your reply!
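
Here is a sketch of my current understanding, with a hypothetical run_targeted_attack(x, target) helper standing in for the actual L2 attack (returning the adversarial example, or None if the attack fails):

import numpy as np

def best_and_worst_case(x, true_label, num_classes):
    # Attack every incorrect class and record the L2 distortion of each success.
    distortions = {}
    for target in range(num_classes):
        if target == true_label:
            continue
        x_adv = run_targeted_attack(x, target)  # hypothetical attack helper
        if x_adv is not None:
            distortions[target] = np.linalg.norm((x_adv - x).ravel())
    # Best case: the easiest target (smallest distortion);
    # worst case: the hardest target (largest distortion).
    best = min(distortions, key=distortions.get)
    worst = max(distortions, key=distortions.get)
    return best, distortions[best], worst, distortions[worst]

Is that the intended meaning, or is difficulty measured some other way?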