Trusted-AI / adversarial-robustness-toolbox

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams

Home Page: https://adversarial-robustness-toolbox.readthedocs.io/en/latest/

Printable representation of Attack objects [JATIC-I2-IBM]

kieranfraser opened this issue · comments

Is your feature request related to a problem? Please describe.
Printing an attack object (with the exception of LaserAttack) currently gives no user-friendly descriptive information (e.g. the attack_params used), making it difficult to differentiate between attacks without manually inspecting their parameters.

Describe the solution you'd like
Printing an attack object should print out its metadata,
e.g. using __repr__ (a sketch of one possible approach is included below).
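A minimal sketch of one possible approach, assuming attacks continue to list their parameter names in the existing attack_params attribute; the classes below (a stand-in Attack base class and a hypothetical ExampleEvasionAttack) are illustrative only, not ART's actual implementation:

```python
# Sketch only: a generic __repr__ that could live on the attack base class.
# It assumes each attack lists its parameter names in an `attack_params`
# attribute and reads the current values via getattr.

class Attack:
    """Stand-in for the attack base class, used purely for illustration."""

    attack_params = []

    def __repr__(self):
        params = ", ".join(
            f"{name}={getattr(self, name, None)!r}" for name in self.attack_params
        )
        return f"{self.__class__.__name__}({params})"


class ExampleEvasionAttack(Attack):
    """Hypothetical attack subclass, not an ART class."""

    attack_params = ["norm", "eps"]

    def __init__(self, norm, eps):
        self.norm = norm
        self.eps = eps


# Two attacks of the same class but with different parameters become
# distinguishable at a glance when printed:
print(ExampleEvasionAttack(norm="inf", eps=0.1))  # ExampleEvasionAttack(norm='inf', eps=0.1)
print(ExampleEvasionAttack(norm="inf", eps=0.3))  # ExampleEvasionAttack(norm='inf', eps=0.3)
```

Because the sketch only reads the names already declared in attack_params, each attack's __repr__ would stay in sync with its parameters without per-attack boilerplate.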

Describe alternatives you've considered
Currently, users have to explicitly print each attack parameter one by one.

Additional context
Useful for scenarios where multiple attacks are being evaluated and the user would like a quick way of differentiating between attacks that might be of the same class but have different attack parameters.