Implement dirty label poisoning attacks for speech recognition models
HSTEHSTEHSTE opened this issue · comments
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
https://adversarial-robustness-toolbox.readthedocs.io/en/latest/
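For reference, a dirty-label poisoning attack simply injects training samples whose labels have been deliberately flipped, so that the victim model learns the wrong class boundaries. Below is a minimal, framework-agnostic sketch of such a label-flipping step on a speech dataset (utterances as feature arrays, integer class labels). The helper name `dirty_label_poison` and its signature are illustrative assumptions, not part of ART's API:

```python
import numpy as np


def dirty_label_poison(x, y, num_classes, poison_fraction=0.1, rng=None):
    """Flip the labels of a random subset of samples to a different class.

    This is a hypothetical sketch of dirty-label poisoning, not ART code.
    x: array of utterance features, shape (n_samples, ...).
    y: integer labels, shape (n_samples,).
    Returns (x, poisoned_labels, poisoned_indices).
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(y)
    n_poison = int(poison_fraction * n)
    # Pick a random subset of samples to poison.
    idx = rng.choice(n, size=n_poison, replace=False)
    y_poisoned = y.copy()
    for i in idx:
        # Reassign each poisoned sample to any class other than its true one.
        wrong_classes = [c for c in range(num_classes) if c != y[i]]
        y_poisoned[i] = rng.choice(wrong_classes)
    return x, y_poisoned, idx
```

An implementation inside ART would presumably wrap this logic in the library's poisoning-attack interface and operate on audio inputs accepted by its speech recognition estimators.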