
Explainable Deep Learning: A Visual Analytics Approach with Transition Matrices


Pavlo Radiuk, Olexander Barmak, Eduard Manziuk, Iurii Krak

This is the official repository for the paper "Explainable Deep Learning: A Visual Analytics Approach with Transition Matrices", which has been submitted to Mathematics and is currently under review.

Overview

Abstract:
The opacity of artificial intelligence (AI) systems, especially in deep learning (DL), poses significant challenges to their comprehensibility and trustworthiness. This study aims to enhance the explainability of DL models through visual analytics (VA) and human-in-the-loop (HITL) principles, making these systems more transparent and understandable to users. In this work, we propose a novel approach that utilizes a transition matrix to interpret results from DL models through more comprehensible machine learning (ML) models. The methodology involves constructing a transition matrix between the feature spaces of DL and ML models as formal and mental models, respectively, improving the explainability of separating hyperplanes for classification tasks. The effectiveness of our methods is validated using the MNIST and Iris datasets, with quantitative analysis based on the Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR) metrics. The application of the transition matrix to the MNIST and Iris datasets demonstrated significant improvements in model transparency and user comprehension. For the Iris dataset, the separating hyperplane achieved enhanced classification accuracy. Validation results showed notable improvements with average SSIM values of 0.697 and PSNR values reaching 17.94, indicating high-quality reconstruction and interpretation of DL model outcomes. Our study underscores the importance of explainable AI in bridging the gap between the complex decision-making processes of DL models and human understanding. By employing VA and a transition matrix, we have significantly improved the explainability of DL models without compromising their performance, marking a step forward in developing more transparent and trustworthy AI systems.
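The core idea of mapping between the feature spaces of a deep model and a simpler, interpretable model can be illustrated with a minimal sketch. This is a hypothetical least-squares construction for exposition only, not the paper's exact method; all names (`F_dl`, `F_ml`, `T`) and dimensions are assumptions:

```python
import numpy as np

# Hypothetical sketch: given paired feature representations of the same samples
# from a deep model (F_dl) and an interpretable model (F_ml), estimate a linear
# transition matrix T such that F_dl @ T approximates F_ml.
rng = np.random.default_rng(0)
n_samples, dl_dim, ml_dim = 100, 32, 4

F_dl = rng.normal(size=(n_samples, dl_dim))   # deep-model feature space
T_true = rng.normal(size=(dl_dim, ml_dim))    # unknown mapping (for this demo)
F_ml = F_dl @ T_true                          # interpretable feature space

# Least-squares estimate of the transition matrix
T, residuals, rank, _ = np.linalg.lstsq(F_dl, F_ml, rcond=None)

# Map a new deep-feature vector into the interpretable space
x_new = rng.normal(size=(1, dl_dim))
x_interpretable = x_new @ T
print(x_interpretable.shape)
```

Once such a matrix is estimated, any result produced in the DL feature space can be projected into the interpretable space, where separating hyperplanes are easier to explain to users.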

Citation

If you make use of our work, please cite our paper:

@Article{radiuk2024,
  TITLE={Explainable Deep Learning: A Visual Analytics Approach with Transition Matrices},
  AUTHOR={Radiuk, Pavlo and Barmak, Olexander and Manziuk, Eduard and Krak, Iurii},
  JOURNAL={Mathematics},
  YEAR={2024}
}

Getting Started

We recommend using the Anaconda package manager to avoid dependency/reproducibility problems. For Linux systems, you can find a conda installation guide here.

Installation

  1. Clone the repository
git clone https://github.com/radiukpavlo/transition-matrix-dl
  2. Install Python dependencies
conda env create -n my_project -f environment.yml
conda activate my_project

Alternatively, you can create a new conda environment and install the required packages manually:

conda create -n my_project -y python=3.9
conda activate my_project
pip install torch==1.12.1 torchmetrics[image]==0.11.0 opencv-python==4.7.0.68 diffusers==0.12.0 transformers==4.25.1 accelerate==0.15.0 clean-fid==0.1.35

Pre-trained models

The pre-trained model and checkpoints are available in the ./models and ./checkpoints folders.

Datasets

The original datasets used in this research, MNIST and Iris, can be freely downloaded from their official sources.

TODO

  • include additional figures

LICENSE

All material is available under Creative Commons BY-NC 4.0. You can use, redistribute, and adapt the material for non-commercial purposes, as long as you give appropriate credit by citing our paper and indicate any changes you've made.
