Repositories under the explainable-machine-learning topic:
📍 Interactive Studio for Explanatory Model Analysis
Explainable Machine Learning in Survival Analysis
Reading list for adversarial perspective and robustness in deep reinforcement learning.
Classification and Object Detection XAI methods (CAM-based, backpropagation-based, perturbation-based, statistic-based) for thyroid cancer ultrasound images
A collection of algorithms of counterfactual explanations.
A utility for generating heatmaps of YOLOv8 using Layerwise Relevance Propagation (LRP/CRP).
[CIKM'2023] "STExplainer: Explainable Spatio-Temporal Graph Neural Networks"
Principal Image Sections Mapping. Convolutional Neural Network Visualisation and Explanation Framework
t-viSNE: Interactive Assessment and Interpretation of t-SNE Projections
Counterfactual SHAP: a framework for counterfactual feature importance
The PyTorch implementation for "DEAL: Disentangle and Localize Concept-level Explanations for VLMs" (ECCV 2024 Strong Double Blind)
Repo of the paper "On the Robustness of Sparse Counterfactual Explanations to Adverse Perturbations"
Counterfactual Shapley Additive Explanation: Experiments
The PyTorch implementation for "Are Data-driven Explanations Robust against Out-of-distribution Data?" (CVPR 2023)
This repository contains the Business Intelligence insights generated as part of the final project challenge for the DTU Data Science course 42578: Advanced Business Analytics
An R package providing functions for interpreting and distilling machine learning models
Measuring galaxy environmental distance scales with GNNs and explainable ML models
A baseline genetic algorithm for the discovery of counterfactuals, implemented in Python for ease of use and heavily leveraging NumPy for speed.
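The idea behind such a baseline can be sketched in a few lines of NumPy: evolve a population of candidate points so that the model's prediction flips while the candidate stays close to the original instance. This is a minimal hypothetical sketch with a toy linear classifier standing in for the black-box model; the repository's actual interface and fitness design may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(X):
    # toy linear classifier standing in for any black-box model
    w, b = np.array([1.0, 1.0]), 1.0
    return (X @ w > b).astype(int)

def fitness(X, x, target):
    # reward reaching the target class; penalize distance from the original instance
    valid = (predict(X) == target).astype(float)
    dist = np.linalg.norm(X - x, axis=1)
    return valid - dist

def find_counterfactual(x, target, pop_size=50, generations=100, sigma=0.3):
    # initial population: Gaussian perturbations of the instance to explain
    pop = x + rng.normal(0.0, sigma, size=(pop_size, x.size))
    for _ in range(generations):
        scores = fitness(pop, x, target)
        # keep the top half, refill with mutated copies of the survivors
        elite = pop[np.argsort(scores)[-pop_size // 2:]]
        children = elite + rng.normal(0.0, sigma, size=elite.shape)
        pop = np.vstack([elite, children])
    return pop[np.argmax(fitness(pop, x, target))]

x = np.array([0.0, 0.0])                 # predicted class 0
cf = find_counterfactual(x, target=1)    # nearby point predicted as class 1
```

Because a valid counterfactual near the boundary scores higher than any invalid point, the elite retains the first label-flipping candidate it finds and then shrinks its distance to `x` over generations.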
[Frontiers in AI Journal] Implementation of the paper "Interpreting Vision and Language Generative Models with Semantic Visual Priors"
Predicting whether an African country will be in recession, using machine learning techniques that address class imbalance, cost-sensitive learning, and explainable machine learning
Code for the School of AI challenge "Explainable AI for Wildfire Forecasting", sponsored by Pi School to help the National Observatory of Athens (NOA) apply explainable deep learning to wildfire forecasting.
How to use SHAP to interpret machine learning models
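The attribution SHAP computes can be illustrated from first principles: a feature's Shapley value is its weighted average marginal contribution over all coalitions of the other features. Below is a minimal from-scratch sketch with a toy linear model and zero baseline (all names here are illustrative); the shap library implements the same quantity far more efficiently, e.g. via `TreeExplainer`.

```python
from itertools import combinations
from math import factorial

import numpy as np

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating every feature coalition.

    f        : model mapping a 1-D feature vector to a scalar
    x        : instance to explain
    baseline : reference point (absent features take these values)
    """
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                # coalition S: features in S come from x, the rest from baseline
                z = baseline.copy()
                z[list(subset)] = x[list(subset)]
                v_without = f(z)        # value without feature i
                z[i] = x[i]
                v_with = f(z)           # value after adding feature i
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (v_with - v_without)
    return phi

# toy linear model: each attribution should equal w_i * (x_i - baseline_i)
w = np.array([2.0, -1.0, 0.5])
f = lambda z: float(w @ z)
x = np.array([1.0, 3.0, 2.0])
baseline = np.zeros(3)
phi = shapley_values(f, x, baseline)
```

The efficiency property holds by construction: the attributions sum exactly to `f(x) - f(baseline)`.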
BBBP Explainer generates structural alerts for blood-brain barrier penetrating and non-penetrating drugs, using Local Interpretable Model-Agnostic Explanations (LIME) of machine learning models trained on the BBBP dataset.
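The core LIME recipe is simple enough to sketch from scratch: perturb the instance, query the black box, weight samples by proximity, and fit a weighted linear surrogate whose coefficients serve as local importances. This is a minimal illustrative sketch with a toy nonlinear model (the lime package adds richer sampling and feature selection on top of this).

```python
import numpy as np

rng = np.random.default_rng(1)

def black_box(X):
    # toy nonlinear model standing in for the classifier being explained
    return np.sin(X[:, 0]) + X[:, 1] ** 2

def lime_explain(f, x, n_samples=500, kernel_width=0.5):
    # 1. perturb the instance and query the black-box model
    Z = x + rng.normal(0.0, kernel_width, size=(n_samples, x.size))
    y = f(Z)
    # 2. weight each sample by its proximity to x (RBF kernel)
    d2 = ((Z - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / kernel_width ** 2)
    # 3. fit a weighted linear surrogate; its slopes are the explanation
    A = np.hstack([Z - x, np.ones((n_samples, 1))])  # intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]  # per-feature local importances

x = np.array([0.0, 1.0])
importances = lime_explain(black_box, x)
```

For this toy model the surrogate slopes approximate the local gradient at `x`, roughly `[cos(0), 2 * 1] = [1, 2]`, which is exactly the "locally faithful" behavior LIME aims for.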
An interpretable/explainable ML model for liquefaction potential assessment of soils, developed using XGBoost and SHAP.
This repository contains the supplemental materials of the paper "Decomposition of Expected Goal Models: Aggregated SHAP Values for Analyzing Scoring Potential of Player/Team".
Explaining sentiment classification by generating synthetic exemplars and counter-exemplars in the latent space
Building a model is just one piece of the puzzle in data science; explaining how it works is just as important, especially in finance, where transparency and explainability are key.
Explanation-guided boosting of machine learning evasion attacks.
This repo lists interesting literature in the domain of XAI.