Trusted-AI / adversarial-robustness-toolbox

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams

Home Page: https://adversarial-robustness-toolbox.readthedocs.io/en/latest/

Risky values in tests

AryazE opened this issue · comments

While running a dynamic analysis on the test suite, I found that the values compared in this line are in some cases floats that differ by less than floating-point precision (which might cause some cases to miss a genuine improvement), and in some cases inf.
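The exact assertion isn't reproduced here, so as a rough sketch of the concern, the snippet below uses a hypothetical `is_improvement` helper and made-up values to show how a strict comparison behaves when the operands differ only by rounding noise or when one of them is infinite, and how an explicit tolerance/inf check makes the intent of the test unambiguous:

```python
import math

# Hypothetical helper for illustration only; the actual test line and values
# in ART may differ.
def is_improvement(candidate: float, best: float,
                   rel_tol: float = 1e-9, abs_tol: float = 1e-12) -> bool:
    """Return True only if `candidate` is meaningfully smaller than `best`."""
    # Handle non-finite values explicitly, e.g. a `best` initialised to inf.
    if math.isinf(best) and math.isfinite(candidate):
        return True
    if not math.isfinite(candidate):
        return False
    # Differences within floating-point tolerance are treated as "no change"
    # rather than as an improvement caused by rounding noise.
    if math.isclose(candidate, best, rel_tol=rel_tol, abs_tol=abs_tol):
        return False
    return candidate < best

# Rounding noise: mathematically equal values compare as strictly different.
print((0.1 + 0.2) > 0.3)                  # True, purely due to rounding
print(is_improvement(0.3, 0.1 + 0.2))     # False: not a real improvement

# Infinite baseline: the comparison works, but the intent is made explicit.
print(is_improvement(1e6, float("inf")))  # True
```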