There are 17 repositories under the explainability topic.
A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning
Fit interpretable models. Explain black-box machine learning.
[CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize classifications by Transformer based networks.
Responsible AI Toolbox is a suite of user interfaces and libraries for model and data exploration and assessment, enabling a better understanding of AI systems. These tools empower developers and stakeholders of AI systems to develop and monitor AI more responsibly and to take better data-driven actions.
Visualization toolkit for neural networks in PyTorch!
[ICCV 2021 - Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Includes examples for DETR and VQA.
Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, and ImageNet
Papers about explainability of GNNs
Official implementation of Score-CAM in PyTorch
Neural network visualization toolkit for tf.keras
CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks
💡 Adversarial attacks on explanations and how to defend them
CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
Training & evaluation library for text-based neural re-ranking and dense retrieval models built with PyTorch
OpenXAI : Towards a Transparent Evaluation of Model Explanations
[Not Actively Maintained] Whitebox is an open-source E2E ML monitoring platform with edge capabilities that plays nicely with Kubernetes
Can we use explanations to improve hate speech models? Our AAAI 2021 paper explores that question.
Holds code for our CVPR'23 tutorial: All Things ViTs: Understanding and Interpreting Attention in Vision.
P-NET: a biologically informed deep neural network for prostate cancer classification and discovery
Collection of NLP model explanations and accompanying analysis tools
GraphXAI: Resource to support the development and evaluation of GNN explainers
Evaluating ChatGPT’s Information Extraction Capabilities: An Assessment of Performance, Explainability, Calibration, and Faithfulness
Reading list for "The Shapley Value in Machine Learning" (IJCAI 2022)
PyTorch Explain: Interpretable Deep Learning in Python.
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
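Several of the repositories above build on Shapley values for attribution. As a minimal, self-contained sketch (not code from any listed repository; the toy model and feature names are invented for illustration), the exact Shapley value of each feature can be computed by enumerating all coalitions of the remaining features:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values by enumerating all feature coalitions.

    features: list of feature names.
    value_fn: maps a frozenset of features to a scalar model payoff.
    Cost is exponential in len(features) -- toy sizes only.
    """
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                # Shapley weight: |S|! * (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of f to coalition S
                total += weight * (value_fn(s | {f}) - value_fn(s))
        phi[f] = total
    return phi

# Toy additive payoff: each feature contributes independently, so its
# Shapley value should recover exactly its own contribution.
contrib = {"age": 2.0, "income": 5.0, "tenure": -1.0}
v = lambda s: sum(contrib[f] for f in s)
print(shapley_values(list(contrib), v))
```

For real models, libraries in this list approximate this sum by sampling coalitions, since exact enumeration is infeasible beyond a handful of features.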