Harry24k / adversarial-attacks-pytorch

PyTorch implementation of adversarial attacks [torchattacks]

Home Page:https://adversarial-attacks-pytorch.readthedocs.io/en/latest/index.html

[QUESTION] About check_validity function

WWWWWLI opened this issue · comments

❔ Any questions

May I ask what the purpose of the `len(set(ids)) != 1` check in the `check_validity` function is?

There is also a use case where the same attack parameters are applied to several different models, so that the generated adversarial examples fool as many model types as possible at once, yielding more transferable (and therefore more aggressive) adversarial examples. The current code cannot express this because of the `len(set(ids)) != 1` restriction, which I find confusing. Looking forward to a reply, and thanks for your contribution.

Hi @WWWWWLI, I think the purpose of this restriction is to attack the same model with different attack methods and compare the results to obtain a better perturbation, making it more aggressive for certain tasks (e.g. against black-box models).

Do you mean you need a way to apply the same attack method to many different models to get a stronger adversarial example? As the class name `MultiAttack` suggests, it applies multiple attack methods to a single model; perhaps you could implement a `MultiModel` variant on top of it to achieve your goal. 🤪🤪🤪
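One common way to get the multi-model behavior described above without touching `check_validity` is to wrap several models in a single ensemble module that averages their logits, then run any single attack against the ensemble. This is only a sketch: `EnsembleModel` is a hypothetical helper, not part of torchattacks, and the FGSM step below stands in for whatever attack you would actually use.

```python
import torch
import torch.nn as nn

class EnsembleModel(nn.Module):
    """Hypothetical wrapper: averages the logits of several models so that
    a single attack sees them as one model (so a same-model check like
    len(set(ids)) != 1 is trivially satisfied)."""
    def __init__(self, models):
        super().__init__()
        self.models = nn.ModuleList(models)

    def forward(self, x):
        # Stack per-model logits and average over the ensemble dimension.
        return torch.stack([m(x) for m in self.models], dim=0).mean(dim=0)

# Toy demo: one FGSM step against an ensemble of two small classifiers.
torch.manual_seed(0)
models = [nn.Linear(4, 3), nn.Linear(4, 3)]
ensemble = EnsembleModel(models)

x = torch.rand(2, 4, requires_grad=True)   # batch of 2 "images"
y = torch.tensor([0, 2])                   # ground-truth labels
eps = 0.1                                  # L-infinity budget

loss = nn.CrossEntropyLoss()(ensemble(x), y)
loss.backward()

# Perturb in the direction that increases the ensemble loss.
x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()
```

Because the gradient flows through every member model, the resulting perturbation tends to transfer better across the ensemble than one crafted against a single model.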

OK, thank you very much.