kazuto1011 / grad-cam-pytorch

PyTorch re-implementation of Grad-CAM (+ vanilla/guided backpropagation, deconvnet, and occlusion sensitivity maps)

How to use InceptionV3 and densenet161?

tuji-sjp opened this issue · comments

I found the "target_layer_names" for these two models, but when I run the modified code, I get the following error:

RuntimeError: size mismatch, m1: [1 x 277248], m2: [768 x 1000] at /opt/conda/conda-bld/pytorch_1535490206202/work/aten/src/THC/generic/THCTensorMathBlas.cu:249

How do I solve it? Please help me, thank you very much!

I cannot guess what caused the error because:

  • There's no hint related to this repository in the reported error.
  • This repository does not contain the keyword target_layer_names.

Is it about another repository?
https://github.com/jacobgil/pytorch-grad-cam/blob/master/grad-cam.py#L76

Oh, this is indeed code from another repository. My mistake, sorry.
Its "target_layer_names" corresponds to your "target_layer".

In this repository, you can get various maps as follows:

$ python main.py demo1 --arch "inception_v3" --target-layer "Mixed_7c" -i "samples/cat_dog.png"
$ python main.py demo1 --arch "densenet161" --target-layer "features" -i "samples/cat_dog.png"

Please note that:

  • For Grad-CAM, we ideally want to extract the activation maps right after the last convolution, but in the densenet161 case the "features" output is not activated (currently cannot access this line).
  • The gradient maps produced by DeconvNet and Guided Backpropagation are not valid, because torchvision's Inception v3 and DenseNet-161 use F.relu in addition to nn.ReLU, which my code cannot handle.
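The nn.ReLU vs. F.relu distinction matters because hook-based Guided Backpropagation walks the model's modules and attaches backward hooks to nn.ReLU instances; a functional F.relu call never appears in that walk, so its gradient is never rectified. A minimal illustration with two hypothetical toy modules (not code from this repository):

```python
import torch.nn as nn
import torch.nn.functional as F

class ModuleReLUNet(nn.Module):
    # ReLU as a module: visible to .modules(), so a backward hook
    # can rectify its gradient, which is what guided backprop needs.
    def __init__(self):
        super().__init__()
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(x)

class FunctionalReLUNet(nn.Module):
    # ReLU as a function call: invisible to .modules(), so a
    # hook-based implementation silently skips it.
    def forward(self, x):
        return F.relu(x)

hookable = sum(isinstance(m, nn.ReLU) for m in ModuleReLUNet().modules())
missed = sum(isinstance(m, nn.ReLU) for m in FunctionalReLUNet().modules())
print(hookable, missed)  # 1 0
```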

Hello, I fused the logits of three models into an ensemble. Do you know how to compute the Grad-CAM of such an ensemble model?
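This is not something the repository supports out of the box, but one plausible approach is: pick the class from the fused logits, compute a Grad-CAM per ensemble member at that member's target layer, and average the maps. A rough sketch with hypothetical tiny stand-in models; the `grad_cam` helper below is my own simplification via forward/backward hooks, not the repository's implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def grad_cam(model, target_layer, image, class_idx):
    # Capture activations and gradients at target_layer via hooks.
    acts, grads = {}, {}
    h1 = target_layer.register_forward_hook(
        lambda m, i, o: acts.update(a=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(g=go[0]))
    logits = model(image)
    model.zero_grad()
    logits[0, class_idx].backward()
    h1.remove(); h2.remove()
    w = grads["g"].mean(dim=(2, 3), keepdim=True)  # GAP of gradients
    return F.relu((w * acts["a"]).sum(dim=1))      # weighted combination

# Hypothetical tiny models standing in for the real ensemble members.
def make_model():
    return nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(8, 10))

members = [make_model() for _ in range(3)]
x = torch.randn(1, 3, 32, 32)

# Classify with the fused (averaged) logits, then average each
# member's CAM for that class.
with torch.no_grad():
    fused = sum(m(x) for m in members) / len(members)
cls = fused.argmax(1).item()
cams = [grad_cam(m, m[0], x, cls) for m in members]
ensemble_cam = torch.stack(cams).mean(0)
print(ensemble_cam.shape)  # torch.Size([1, 32, 32])
```

Whether averaging per-member CAMs is the "right" attribution for a fused model is a judgment call; if the members share an input resolution, upsampling each CAM to the image size before averaging also works.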