kazuto1011 / grad-cam-pytorch

PyTorch re-implementation of Grad-CAM (+ vanilla/guided backpropagation, deconvnet, and occlusion sensitivity maps)

Can I use grad-cam in model inference?

yuyijie1995 opened this issue · comments

When I add a backward hook to my model, I find that the conv layer's grad_fn is always None, so the gradient cannot be stored in the grad map. How can I fix this problem?
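For context, the pattern the question describes looks roughly like the following sketch. The toy model and the names `fmap_pool` / `grad_pool` are illustrative, not taken from the repository; the idea is to cache the target conv layer's activations on the forward pass and its gradients on the backward pass, then combine them into a Grad-CAM map.

```python
import torch
import torch.nn as nn

# Toy classifier (illustrative, not the repo's model).
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 5),
)
model.eval()

fmap_pool, grad_pool = {}, {}
target = model[0]  # the conv layer we want a CAM for

# Forward hook caches the activation; backward hook caches d(score)/d(activation).
target.register_forward_hook(lambda m, i, o: fmap_pool.update({"conv": o}))
target.register_full_backward_hook(lambda m, gi, go: grad_pool.update({"conv": go[0]}))

x = torch.randn(1, 3, 16, 16)
logits = model(x)                       # forward pass fills fmap_pool
logits[0, logits.argmax()].backward()   # backward pass fills grad_pool

# Grad-CAM: channel weights = spatially averaged gradients, then weighted sum + ReLU.
weights = grad_pool["conv"].mean(dim=(2, 3), keepdim=True)
cam = torch.relu((weights * fmap_pool["conv"]).sum(dim=1))
```

If `grad_pool` stays empty, the backward pass never reached the hook, which is exactly the symptom discussed below.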

Are you using multiple GPUs? If so, maybe you can try a single GPU.

@yuyijie1995 That's an issue I also ran into recently. Actually, it is not possible to use Grad-CAM in any manner during inference. Once you activate inference mode in PyTorch, all gradient flow through the model graph is shut off and only the forward pass is possible. If you have read the Grad-CAM paper, you know that a backward pass is necessary to compute the sum of gradients up to the target layer, and running the model in inference mode prevents that backward pass. It is better to make a separate class wrapper for Grad-CAM, distinct from your inference script.

Gradients can still be computed even in evaluation mode. In fact, my code calls model.eval(), which only changes the behavior of certain layers such as nn.BatchNorm2d and nn.Dropout.

grad-cam-pytorch/main.py

Lines 140 to 142 in fd10ff7

model = models.__dict__[arch](pretrained=True)
model.to(device)
model.eval()

It is torch.no_grad() or torch.set_grad_enabled(False) that disables gradient computation.
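The distinction can be checked directly. In this minimal sketch (a toy model, not the repo's), model.eval() leaves autograd fully functional, while torch.no_grad() is what actually stops graph construction:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2))
model.eval()  # eval mode: only changes Dropout/BatchNorm behavior

x = torch.randn(1, 8)
out = model(x)
print(out.requires_grad)                  # True: autograd still tracks the graph
out.sum().backward()
print(model[0].weight.grad is not None)   # True: gradients were computed

with torch.no_grad():                     # this is what disables autograd
    out2 = model(x)
print(out2.requires_grad)                 # False: no graph, backward() would fail
```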

Actually, I was trying to integrate the script into PyTorch Lightning by adding the Grad-CAM code to the test loop. My experience was that the Lightning wrapper was freezing the gradients, resulting in the conv layer's grad_fn always being None.
I didn't know that model.eval() still allows the backward pass to work. That's why I assumed the issue might be caused by it in pytorch-lightning. Sorry, my bad.
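If a framework's evaluation loop wraps your code in torch.no_grad() (as described above), one possible workaround is to re-enable autograd locally with torch.enable_grad() inside that loop. This is a general PyTorch sketch, not Lightning-specific code; the outer no_grad() below simulates the framework-level context:

```python
import torch

with torch.no_grad():            # simulates a framework-level no_grad test loop
    with torch.enable_grad():    # locally restores gradient tracking
        w = torch.ones(3, requires_grad=True)
        y = (w * 2).sum()
        y.backward()             # works: the graph was built inside enable_grad

print(w.grad)  # tensor([2., 2., 2.])
```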