BiomedSciAI / histocartography

A standardized Python API with necessary preprocessing, machine learning and explainability tools to facilitate graph-analytics in computational pathology.


GraphGradCAMExplainer use of backpropagation

CarlinLiao opened this issue

When using the GraphGradCAMExplainer, we pass in a pretrained torch GNN model set to eval mode, since we're no longer training it. However, to compute node importances, the Explainer module backpropagates through the model and weights the hooked activation maps by their gradient coefficients, which shouldn't be possible on an eval-mode model instance.
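
For context, here is a minimal, self-contained sketch of the Grad-CAM mechanism described above, in plain torch (no DGL, no histocartography). The module and layer names are hypothetical stand-ins, not histocartography's actual classes:

```python
import torch
import torch.nn as nn

# Toy stand-in for a GNN: node features -> node embeddings -> graph-level logits.
class NodeModel(nn.Module):
    def __init__(self, in_dim=8, hid_dim=16, n_classes=3):
        super().__init__()
        self.embed = nn.Linear(in_dim, hid_dim)   # the hooked "activation map" layer
        self.head = nn.Linear(hid_dim, n_classes)

    def forward(self, x):
        h = torch.relu(self.embed(x))             # node embeddings, shape (N, hid_dim)
        return self.head(h.mean(dim=0))           # mean-pool over nodes -> class logits

model = NodeModel().eval()                        # eval mode, as in the issue

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["h"] = out
    # Capture the gradient flowing back through these activations during backward().
    out.register_hook(lambda g: gradients.update(h=g))

model.embed.register_forward_hook(fwd_hook)

x = torch.randn(10, 8)                            # 10 nodes, 8 features each
logits = model(x)
logits[logits.argmax()].backward()                # backprop from the top class score

# Grad-CAM: channel weights = gradients averaged over nodes;
# node importance = ReLU of the gradient-weighted sum of activations.
alpha = gradients["h"].mean(dim=0)                          # (hid_dim,)
node_importance = torch.relu((activations["h"] * alpha).sum(dim=1))
print(node_importance)                                      # one score per node
```

The point of the sketch: the node importances come from the gradients of the hooked activations, so a backward pass through the model is required even though no training is happening.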


For whatever reason, this doesn't throw an error in the recommended Python 3.7, DGL 0.4.3post2, torch 1.10 environment, but it does in my more up-to-date Python 3.9, DGL 0.9, torch 1.12.1 environment, even though the code is identical.

The only solution I've found so far is to set the model used in the Explainer to training mode before running the explainer, but that's far from ideal.
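
For reference, the workaround amounts to toggling the mode around the explainer call, something like the sketch below. `GraphGradCAMExplainer` is histocartography's class; `model`, `cell_graph`, and the exact `process` call are placeholders/assumptions on my part:

```python
from histocartography.interpretability import GraphGradCAMExplainer

# `model` is a pretrained GNN, `cell_graph` a DGL graph (both placeholders here).
explainer = GraphGradCAMExplainer(model=model)

model.train()                                   # workaround: enable train mode
importance_scores, logits = explainer.process(cell_graph)
model.eval()                                    # restore eval mode afterwards
```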

Is there a way to find the node importances without committing to backpropagation? Is that what the original histocartography environment does instead of backpropagating? If it doesn't, is it not an issue that the model is being updated via backpropagation during the process of explaining node importance?

Did a bit more of my own investigation:

  • The only modification I needed to make to get this to work was to set the model to training mode, after the setup process that creates the model sets it to eval. I suspect this might not be necessary with earlier versions of torch, but couldn't confirm.
  • Doing backprop doesn't change or update the model so long as we don't call step on the optimizer or use the gradient to calculate an update ourselves, so this is fine from a model-use standpoint (see the sketch after this list).
  • Stylistically, best practice is likely to keep the model in eval mode whenever we're not intending to update it. Can we do this importance calculation without leaning on the backprop functionality?
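
A quick plain-torch check of that second point, showing that backward() only populates .grad and leaves the weights untouched:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
before = [p.detach().clone() for p in model.parameters()]

loss = model(torch.randn(3, 4)).sum()
loss.backward()                     # accumulates gradients in p.grad only

after = [p.detach() for p in model.parameters()]
# Parameters are unchanged: weights only move when an optimizer.step()
# (or a manual in-place update) consumes the accumulated gradients.
assert all(torch.equal(a, b) for a, b in zip(before, after))
```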