cdpierse / transformers-interpret

Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.

[Question] Map attention weights to original input text instead of tokenized input

hardianlawi opened this issue · comments

Thanks for creating this library, it's super useful :). I have some questions and hope you can share some of your insights.

How can I map the attention weights assigned to the tokenized text back to the original input text? Or are there any libraries that could help with this?

The reason is that the tokenized text is not really UX friendly and not necessarily interpretable for non-ML people.

Some examples:

  • "I have a new GPU" would become "i", "have", "a", "new", "gp", "##u".
  • "Don't you love 🤗Transformers Transformers? We sure do." would become don't you love [UNK] transformers? we sure do.. I want to be able to show which part of the original sentence the model looks at i.e. [UNK] is pointing to 🤗Transformers

https://stackoverflow.com/questions/70107997/mapping-huggingface-tokens-to-original-input-text
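For reference, 🤗 fast tokenizers can return character offsets that tie each wordpiece back to a span of the original text. A minimal sketch of what that looks like (the choice of `bert-base-uncased` here is just for illustration):

```python
# Minimal sketch: map each wordpiece to its span in the original text
# using a fast tokenizer's offset mapping.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

text = "I have a new GPU"
encoding = tokenizer(text, return_offsets_mapping=True)

for token, (start, end) in zip(encoding.tokens(), encoding["offset_mapping"]):
    # Special tokens like [CLS]/[SEP] get an empty (0, 0) span.
    print(token, "->", repr(text[start:end]))
# e.g. "gp" -> 'GP' and "##u" -> 'U', both pointing back into "GPU"
```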

Hi @hardianlawi, this is an interesting question. The short answer is that I think what you would like to do is possible. While the tokenized text is not super UX-friendly, it's important to remember that the tokenized text IS what the model is fed as input, so attribution with respect to the tokens is, for me, the best way to represent the model's behaviour.

BERT-like models tend to use the WordPiece method for tokenization, and there are times in the explainer when you can see a model giving a negative attribution to the starting wordpiece and a positive attribution to the following wordpiece. For researchers and ML users interpreting their models, this can be quite useful information, and getting rid of it would be detrimental to a true interpretation of what is happening in the model.

I don't personally think I will implement a feature for mapping the attributions back to the exact text, or at least I'll have to think on it for a bit. But if you wanted to do this yourself, you would just need to write some logic that identifies when a word has been split into wordpieces, e.g. ["gp", "##u"], and then averages the raw attributions for gp and ##u into a single attribution score, which then corresponds back to the original word.
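A minimal sketch of that merging logic is below. It assumes the attributions arrive as (token, score) pairs (for example, the kind of token/score output an explainer produces) and that the tokenizer marks continuation pieces with the BERT-style "##" prefix; the scores in the usage example are made up.

```python
def merge_wordpiece_attributions(token_attributions):
    """Collapse wordpieces back into whole words, averaging their raw scores."""
    merged = []
    for token, score in token_attributions:
        if token.startswith("##") and merged:
            # Continuation piece: append its text to the previous word
            # and collect its score alongside the previous scores.
            prev_word, prev_scores = merged[-1]
            merged[-1] = (prev_word + token[2:], prev_scores + [score])
        else:
            merged.append((token, [score]))
    # Average the raw attributions for each reconstructed word.
    return [(word, sum(scores) / len(scores)) for word, scores in merged]


# Example with made-up scores for "I have a new GPU":
attrs = [("i", 0.02), ("have", 0.10), ("a", 0.01),
         ("new", 0.35), ("gp", -0.20), ("##u", 0.60)]
print(merge_wordpiece_attributions(attrs))
# [('i', 0.02), ('have', 0.1), ('a', 0.01), ('new', 0.35), ('gpu', 0.2)]
```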

Hope this helps.