alexandrosstergiou / Saliency-Tubes-Visual-Explanations-for-Spatio-Temporal-Convolutions

[ICIP 2019] Implementation of Saliency Tubes for 3D convolutions in PyTorch and Keras, to localise the spatio-temporal regions on which 3D CNNs focus.


How to use your code

Fazlik995 opened this issue · comments

Hi @alexandrosstergiou

Thank you for sharing your awesome work.

I would like to get visualization results for my network. However, I do not understand how to run your code.

Should I integrate some part of "heat_tubes_mfnet_pytorch.py" into my network, or should I adapt my pre-trained model to your code?

Please help me clarify how to use your code.

Thank you

Hi @Fazlik995

If you want to use your own custom 3D-CNN, you can replace the import statement at line #11 and the model class call at line #60. Also note that if you have not trained with DataParallel, you should comment out line #61. The parser arguments can be used as normal, i.e.:

  • --num_classes: the number of output units/dataset classes used in the network.
  • --model_weights: path to the model weights file.
  • --frame_dir: directory containing all the image frames that you want to visualise. If there are more than 16, you can adapt the load_images function to hold a larger/smaller number of frames and change the frame_indices variable.
  • --label: integer index of the class to visualise (should correspond directly to a neurone in the output layer).
  • --base_output_dir: string for where to save the output visualisations.
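The model swap described above might look roughly like this. This is only a minimal sketch: MyCustom3DCNN, its layers, and the constructor arguments are illustrative placeholders, not part of the repository.

```python
import torch

# Line #11 equivalent: import/define your own network instead of MFNet.
# `MyCustom3DCNN` is a placeholder name for illustration only.
class MyCustom3DCNN(torch.nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.features = torch.nn.Conv3d(3, 8, kernel_size=3, padding=1)
        self.classifier = torch.nn.Linear(8, num_classes)

    def forward(self, x):
        x = self.features(x)          # (N, 8, T, H, W)
        x = x.mean(dim=[2, 3, 4])     # global average pool over time and space
        return self.classifier(x)

# Line #60 equivalent: instantiate your model.
model = MyCustom3DCNN(num_classes=100)

# Line #61 equivalent: only wrap the model in DataParallel if the
# checkpoint was trained that way; otherwise comment this out.
# model = torch.nn.DataParallel(model)
```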

So the call to the script is something like:

$ python heat_tubes_mfnet_pytorch.py --num_classes 100 --model_weights /my/path/to/the/weights.pth --frame_dir some/frames --label 45
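If your clips contain more or fewer than 16 frames, the frame_indices adaptation mentioned above could be sketched as follows. This helper is hypothetical; the script's actual sampling scheme may differ.

```python
import numpy as np

def make_frame_indices(num_available, num_required=16):
    """Uniformly sample `num_required` frame indices out of `num_available`.

    Hypothetical stand-in for the `frame_indices` variable mentioned in
    the thread; adjust `num_required` to match your clip length.
    """
    return np.linspace(0, num_available - 1, num_required).astype(int).tolist()
```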

In addition, you can change the colours of the OpenCV ColorMap here.