fhvilshoj / TorchLRP

A PyTorch 1.6 implementation of Layer-Wise Relevance Propagation (LRP).

Relevance of model parameters

francescomalandrino opened this issue

Is there a way to get the relevance of the model parameters, in addition to that of the inputs (and the intermediate results)?

The PLOS ONE paper ("On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation") describes per-neuron relevance conservation, so I assume it would make sense to keep track of the incoming (or outgoing) relevance of each neuron (and, more generally, of each DNN parameter) and make that accessible to users of the library.
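
For concreteness, here is a rough sketch of the kind of bookkeeping I have in mind. It is plain PyTorch, not the TorchLRP API, and it assumes the LRP rules are realized through autograd, so that the per-neuron relevance travels through the backward pass and can be captured with module hooks:

```python
import torch
import torch.nn as nn

# NOTE: hypothetical sketch, not part of the TorchLRP API. It assumes the LRP
# rules are implemented through autograd, so the relevance arriving at each
# layer travels through the backward pass and can be recorded with hooks.
relevance = {}

def make_hook(name):
    def hook(module, grad_input, grad_output):
        # grad_output: relevance flowing *into* the layer from the layer above.
        # grad_input: relevance passed on towards the layer below / the inputs.
        relevance[name] = {
            "incoming": grad_output[0].detach().clone(),
            "outgoing": tuple(g.detach().clone() for g in grad_input if g is not None),
        }
    return hook

# Plain nn.Sequential model, used only for illustration.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        # register_full_backward_hook needs PyTorch >= 1.8; on older versions
        # register_backward_hook plays a similar role.
        module.register_full_backward_hook(make_hook(name))

x = torch.randn(4, 10, requires_grad=True)
y = model(x)
# Start the relevance propagation from the chosen output unit.
y[:, 0].sum().backward()

for name, r in relevance.items():
    print(name, "incoming relevance shape:", tuple(r["incoming"].shape))
```

Exposing per-layer tensors like these would already cover the per-neuron conservation described in the paper; relevance for individual parameters could presumably be derived from them, though I have not thought that part through.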

Does that make sense? Thanks!