Trusted-AI / adversarial-robustness-toolbox

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams

Home Page: https://adversarial-robustness-toolbox.readthedocs.io/en/latest/

Flexible metric function for accuracy and l2 norm [JATIC-I2-IBM]

kieranfraser opened this issue · comments

Is your feature request related to a problem? Please describe.
Having run an attack, I'd like a function that automatically calculates clean and robust accuracy, and also returns details about the perturbation added (e.g. the average perturbation).

Describe the solution you'd like
A function into which I can pass the attack's inputs and outputs, and which calculates and returns the accuracy and L2-norm metrics without rerunning the attack.

Describe alternatives you've considered
ART currently has a number of functions in metrics.py that return accuracy and perturbation, but each of them requires an attack to be passed in. This means attacks would have to be run repeatedly to obtain all the metrics mentioned above, which may be infeasible for long-running attacks. The alternative is to code the calculations manually when needed, but that is repetitive code every time an evaluation is conducted.
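To illustrate the requested API, here is a minimal sketch of what such a function might look like. This is not part of ART; the name `compute_attack_metrics` and its signature are hypothetical. It takes a predict callable plus the precomputed clean/adversarial samples and labels, so no attack is rerun:

```python
import numpy as np


def compute_attack_metrics(predict, x_clean, x_adv, y_true):
    """Hypothetical helper: compute clean/robust accuracy and L2 perturbation
    statistics from an attack's precomputed inputs and outputs.

    predict: callable returning class scores of shape (n_samples, n_classes)
    x_clean: original samples
    x_adv:   adversarial samples produced by an attack on x_clean
    y_true:  ground-truth labels, either class indices or one-hot encoded
    """
    labels = np.argmax(y_true, axis=1) if np.ndim(y_true) > 1 else np.asarray(y_true)

    # Accuracy on clean and adversarial inputs (no attack rerun needed).
    clean_acc = float(np.mean(np.argmax(predict(x_clean), axis=1) == labels))
    robust_acc = float(np.mean(np.argmax(predict(x_adv), axis=1) == labels))

    # Per-sample L2 norm of the perturbation, flattened across feature dims.
    diffs = (np.asarray(x_adv) - np.asarray(x_clean)).reshape(len(x_clean), -1)
    l2 = np.linalg.norm(diffs, ord=2, axis=1)

    return {
        "clean_accuracy": clean_acc,
        "robust_accuracy": robust_acc,
        "mean_l2_perturbation": float(l2.mean()),
        "max_l2_perturbation": float(l2.max()),
    }
```

Passing a predict callable rather than an attack object is the key difference from the existing metrics.py functions: the expensive attack generation step happens once, and the metrics are computed from its stored outputs.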

Additional context
n/a