INK-USC / DIG

Discretized Integrated Gradients for Explaining Language Models (EMNLP 2021)

Plans to contribute DIG to Captum?

MoritzLaurer opened this issue · comments

Hey there, I've recently tested your method via the implementation in the Inseq library and it produced very good preliminary results. Are you considering contributing your method to a library like Captum?

It would be great to have your research code available in a widely used framework like Captum. I'm sure many people would try it, given the rise of Transformers and the fact that there seem to be few attribution methods designed specifically for text models.

Hi @MoritzLaurer, thanks for your comments. My codebase follows the Captum implementations pretty closely, with some core changes to how the attribution calls are made and some preprocessing steps. However, using the DIG attribution method also requires modifying the forward function of the model (e.g., here), which makes it somewhat incompatible with the other algorithms in Captum. Hence, I decided not to push this code to Captum.
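To make the incompatibility concrete: path-based attribution methods like DIG need to evaluate the model at interpolated points between a baseline and the real input, which only works if the forward pass can accept embeddings directly instead of token ids. The toy sketch below (plain Python, not the actual DIG or Captum code; `ToyModel`, `forward_from_ids`, and `forward_from_embeddings` are hypothetical names for illustration) shows the kind of modified entry point being described:

```python
# Toy illustration (no real model or library): why attribution methods
# that walk a path in embedding space need a forward function that
# accepts embeddings directly. All names here are hypothetical.

class ToyModel:
    def __init__(self):
        # Tiny embedding table: token id -> 2-d vector.
        self.embeddings = {0: [0.1, 0.2], 1: [0.4, 0.1], 2: [0.3, 0.5]}
        self.weights = [1.0, -1.0]

    def forward_from_ids(self, token_ids):
        # Standard forward: looks up embeddings internally, then scores.
        embs = [self.embeddings[t] for t in token_ids]
        return self.forward_from_embeddings(embs)

    def forward_from_embeddings(self, embs):
        # Modified entry point: accepts (possibly interpolated) embedding
        # vectors directly, so an attribution method can feed points that
        # lie between a baseline and the real input.
        return sum(w * x for emb in embs for w, x in zip(self.weights, emb))


model = ToyModel()
ids = [0, 2]

# Both entry points agree on real token inputs...
score_ids = model.forward_from_ids(ids)
score_embs = model.forward_from_embeddings([model.embeddings[t] for t in ids])
assert abs(score_ids - score_embs) < 1e-9

# ...but only the embedding entry point can score interpolated points,
# which is what a path-based attribution method steps through.
midpoint = [[(a + b) / 2
             for a, b in zip(model.embeddings[0], model.embeddings[2])]]
mid_score = model.forward_from_embeddings(midpoint)
```

A standard Captum-style attribution call assumes it can wrap the model's ordinary forward; exposing the embedding-level entry point is the per-model modification that makes a plug-and-play contribution awkward.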

Ok, I understand. It's unfortunate that these kinds of implementation differences limit the number of methods in Captum.