hbaniecki / adversarial-explainable-ai

💡 Adversarial attacks on explanations and how to defend them

Home Page: https://doi.org/10.1016/j.inffus.2024.102303

"Is your explanation stable?": A Robustness Evaluation Framework for Feature Attribution

hbaniecki opened this issue
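
For context, the kind of stability the issue title asks about can be probed by recomputing attributions under small input perturbations and measuring how much they move. Below is a minimal, self-contained sketch of such a check; the toy linear model, the gradient-times-input attribution, and the cosine-similarity score are all illustrative assumptions, not the framework proposed in the referenced paper.

```python
# Minimal sketch of a feature-attribution stability check: perturb the input
# slightly, recompute the attribution, and compare it to the original.
# Everything here (model, attribution method, score) is an illustrative
# assumption, not the method from the issue.
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable model f(x) = w . x, explained with gradient * input.
w = rng.normal(size=8)

def attribution(x: np.ndarray) -> np.ndarray:
    """Gradient * input attribution for the toy linear model."""
    return w * x

def stability_score(x: np.ndarray, eps: float = 0.01, n_samples: int = 100) -> float:
    """Mean cosine similarity between attributions of x and of nearby points.

    Values close to 1 suggest the explanation is locally stable; lower
    values indicate sensitivity to small input perturbations.
    """
    base = attribution(x)
    sims = []
    for _ in range(n_samples):
        x_pert = x + eps * rng.normal(size=x.shape)
        a = attribution(x_pert)
        sims.append(
            float(np.dot(base, a) / (np.linalg.norm(base) * np.linalg.norm(a) + 1e-12))
        )
    return float(np.mean(sims))

x = rng.normal(size=8)
print(f"stability (mean cosine similarity) at eps=0.01: {stability_score(x):.4f}")
```

For a linear model this score stays near 1 by construction; the same probe applied to a nonlinear model and a gradient-based explainer is one simple way to surface the instability that adversarial attacks on explanations exploit.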