Trusted-AI / adversarial-robustness-toolbox

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams

Home Page: https://adversarial-robustness-toolbox.readthedocs.io/en/latest/


PGD attack usage

COD1995 opened this issue

How do I apply a PGD attack if the images are normalized to [0, 1]?

Let's say, a 20-step PGD $\ell_{\infty}$ attack with $\epsilon = 8$ (i.e., $8/255$ on the normalized scale).

Do I do this?


from art.attacks.evasion import ProjectedGradientDescent

attack = ProjectedGradientDescent(
    estimator=classifier,
    eps=8 / 255,        # L-inf budget: 8 pixel levels on [0, 1]-normalized inputs
    eps_step=2 / 255,   # step size per iteration
    max_iter=20,        # 20 PGD steps
    num_random_init=1,  # one random restart inside the eps-ball
)
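
For context, here is a minimal end-to-end sketch of how such an attack could be run with ART. The model, input shape, class count, and data below are placeholders, assuming a PyTorch classifier trained on [0, 1]-normalized inputs; substitute your own.

import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import ProjectedGradientDescent

# Placeholder network; replace with your trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(3, 32, 32),   # assumed input shape
    nb_classes=10,             # assumed number of classes
    clip_values=(0.0, 1.0),    # valid input range for normalized images
)

attack = ProjectedGradientDescent(
    estimator=classifier,
    norm=np.inf,        # L-inf attack (also the default)
    eps=8 / 255,
    eps_step=2 / 255,
    max_iter=20,
    num_random_init=1,
)

# Dummy batch in [0, 1]; use your test images instead.
x_test = np.random.rand(4, 3, 32, 32).astype(np.float32)
x_adv = attack.generate(x=x_test)

Because clip_values=(0.0, 1.0) is set on the classifier, ART clips the perturbed images back into the valid range, so eps=8/255 here corresponds to the usual $\epsilon = 8$ budget on 0-255 pixels.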