kazuto1011 / grad-cam-pytorch

PyTorch re-implementation of Grad-CAM (+ vanilla/guided backpropagation, deconvnet, and occlusion sensitivity maps)

Can't find my own model's layer name

Sibozhu opened this issue · comments

Dear @kazuto1011, thank you so much for this amazing repo. I have one question about adapting a custom model. When I used a YOLOv3 model with this project, I ran your snippet for listing layer names, print(*list(model.named_modules()), sep='\n'), and I get:

(screenshot of the printed module names)

However, as you can see in the error below, the layer name I pass is not recognized as valid. Do you have any idea why?

Update: when I run
for name, module in model.named_modules(): print(name)
to get the layer names directly, I get:

(screenshot of the printed layer names)

which still doesn't accept the last layer's name. It is so weird.

This code picks up the intermediate values when the module on which the hook function is registered actually calls forward() or backward(). What is the type of module_list.105?

  • If its type is nn.ModuleList, I guess the registered hook function is never called (the module just iterates over its child modules). You may need to specify the innermost module, module_list.105.conv_105, instead (see the sketch after this list).
  • If not, I suspect that the model hasn't called backward() yet, so self.gradient may be an empty dict.
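
A minimal sketch of this behavior (the toy model below is illustrative only, not the actual YOLOv3 layout): a forward hook registered on an nn.ModuleList container never fires, because the container itself is never called in forward(), while a hook on the inner Conv2d does fire. The named_modules() dict also shows one way to check the type of a module such as module_list.105.

    import torch
    import torch.nn as nn

    # Toy model; names are illustrative, not the YOLOv3 module names.
    class Toy(nn.Module):
        def __init__(self):
            super().__init__()
            self.module_list = nn.ModuleList([nn.Conv2d(3, 8, 3, padding=1)])

        def forward(self, x):
            for m in self.module_list:  # children are called directly;
                x = m(x)                # the ModuleList itself is never called
            return x

    model = Toy()
    fired = []
    model.module_list.register_forward_hook(lambda *_: fired.append("module_list"))
    model.module_list[0].register_forward_hook(lambda *_: fired.append("module_list.0"))

    model(torch.randn(1, 3, 16, 16))
    print(fired)                                              # ['module_list.0'] -- the container hook never fires
    print(type(dict(model.named_modules())["module_list"]))  # how to check a module's type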

I'm not sure how to print the type of module_list.105, but no, module_list.105.conv_105 doesn't work either. If the model doesn't call backward(), is there a way I can solve this? Thank you!

Although I don't know how you adapted the code to the detection model, please verify that you run GradCAM's backward and that autograd is enabled (i.e., no torch.set_grad_enabled(False), with torch.no_grad():, etc.). To validate the backward pass, you can print self.gradient or insert any debugging statements into the hook function.
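
As a rough way to validate this (not the repo's exact code; model, images, target_layer, and class_idx below are placeholders for your own objects), you can register a backward hook yourself and confirm that a gradient actually arrives at the target layer:

    import torch

    grads = {}

    def save_grad(module, grad_input, grad_output):
        # Runs only if backward() actually propagates through this module.
        grads["value"] = grad_output[0].detach()

    # target_layer: the module you pass to GradCAM, e.g. the last conv layer
    handle = target_layer.register_full_backward_hook(save_grad)  # register_backward_hook on older PyTorch

    torch.set_grad_enabled(True)           # make sure autograd is on
    scores = model(images)                 # forward pass
    scores[:, class_idx].sum().backward()  # backward pass should trigger the hook

    print(grads.get("value"))              # a tensor means the hook fired; None means it did not
    handle.remove()

If the print shows None, the hook was never reached, which matches the empty self.gradient symptom described above.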

Thank you so much! I eventually solved the issue. Indeed, it was because I lost the backward hook when loading my own model. Now everything works perfectly! Thank you very much!