hbaniecki / adversarial-explainable-ai

💡 Adversarial attacks on explanations and how to defend them

Home Page: https://doi.org/10.1016/j.inffus.2024.102303


Focus-Shifting Attack: An Adversarial Attack That Retains Saliency Map Information and Manipulates Model Explanations

hbaniecki opened this issue · comments