
Convolutional Dynamic Alignment Networks for Interpretable Classifications

Official implementation of the CVPR 2021 (oral) paper: arXiv Paper | GitHub Pages

M. Böhle, M. Fritz, B. Schiele. Convolutional Dynamic Alignment Networks for Interpretable Classifications. CVPR, 2021.

Overview

Comparison to post-hoc explanation methods evaluated on the CoDA-Nets

Evaluated on videos

To highlight the stability of the contribution-based explanations of the CoDA-Nets, we present examples in which the CoDA-Net output for the respective class is linearly decomposed frame by frame; for more information, see interpretability/eval_on_videos.py.
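Because a CoDA-Net computes its class logits as a dynamic linear function of the input, each logit can be decomposed into per-pixel contributions, and this decomposition can be applied to every frame of a video. The snippet below is a minimal sketch of such a frame-wise decomposition, not the repository code: it uses the generic "input × gradient" attribution as a stand-in for the model's dynamic-linear contribution maps, and `model`, `frames`, and `target_class` are placeholder names.

```python
# Minimal sketch (not the repository code): frame-wise decomposition of a class
# logit into per-pixel contribution maps, using input x gradient as a stand-in
# for the CoDA-Net's dynamic-linear decomposition.
import torch

def frame_contribution_maps(model, frames, target_class):
    """frames: tensor of shape (T, C, H, W); returns (T, H, W) contribution maps."""
    maps = []
    for frame in frames:
        x = frame.unsqueeze(0).clone().requires_grad_(True)  # (1, C, H, W)
        logit = model(x)[0, target_class]
        grad, = torch.autograd.grad(logit, x)                # d logit / d x
        # Per-pixel contribution: element-wise product, summed over channels.
        maps.append((grad * x).sum(dim=1).squeeze(0).detach())
    return torch.stack(maps)
```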

Quantitative Interpretability results

To reproduce these plots, check out the Jupyter notebook CoDA-Networks Examples. For more information, see the paper and the code in interpretability/.

Localisation metric
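As a rough illustration of what a localisation score measures (the exact protocol is described in the paper and implemented in interpretability/), one common formulation is: build an image grid in which the target class occupies a single cell, compute the attribution map for that class, and report the fraction of positive attribution that falls inside the correct cell. A minimal sketch with placeholder names:

```python
# Minimal sketch (not the repository protocol) of a grid-based localisation
# score: the fraction of positive attribution that lands in the grid cell
# actually containing the target class.
import torch

def localisation_score(attribution, cell_row, cell_col, grid_size=3):
    """attribution: (H, W) map for the target class; returns a value in [0, 1]."""
    pos = attribution.clamp(min=0)           # only positive evidence counts
    h, w = pos.shape
    ch, cw = h // grid_size, w // grid_size  # height / width of one grid cell
    cell = pos[cell_row * ch:(cell_row + 1) * ch,
               cell_col * cw:(cell_col + 1) * cw]
    total = pos.sum()
    return (cell.sum() / total).item() if total > 0 else 0.0
```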

Pixel removal metric
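Likewise, a pixel-removal evaluation ranks pixels by their attributed importance, removes them step by step, and tracks the model's confidence in the target class; informative attributions should make the confidence drop quickly when the most important pixels are removed first. The sketch below illustrates the general idea only and is not the repository's exact protocol; `model` is a placeholder.

```python
# Minimal sketch (not the repository protocol) of a pixel-removal curve:
# zero out an increasing fraction of pixels, most important first, and
# record the model's confidence in the target class after each step.
import torch

@torch.no_grad()
def pixel_removal_curve(model, image, attribution, target_class, steps=20):
    """image: (C, H, W); attribution: (H, W). Returns a list of confidences."""
    c, h, w = image.shape
    order = attribution.flatten().argsort(descending=True)  # most important first
    confidences = []
    for i in range(steps + 1):
        k = round(i / steps * h * w)                         # pixels removed so far
        masked = image.reshape(c, -1).clone()
        masked[:, order[:k]] = 0.0
        probs = model(masked.reshape(1, c, h, w)).softmax(dim=1)
        confidences.append(probs[0, target_class].item())
    return confidences
```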

Compared to others

Contributions per layer (figure)

Trained w/ different temperatures

Contributions per layer (figure)

Copyright and license

Copyright (c) 2021 Moritz Böhle, Max-Planck-Gesellschaft

This code is licensed under the BSD License 2.0; see the LICENSE file.

Further, if you use any of the code in this repository for your research, please cite:

    @inproceedings{Boehle2021CVPR,
        author    = {Moritz Böhle and Mario Fritz and Bernt Schiele},
        title     = {Convolutional Dynamic Alignment Networks for Interpretable Classifications},
        booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition ({CVPR})},
        year      = {2021}
    }


