
SHAP_on_Autoencoder

Explaining Anomalies Detected by Autoencoders Using SHAP

Dataset: Boston Housing Dataset

Machine Learning Methods: Autoencoder, Kernel SHAP
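The notebook's exact model is not reproduced here; the following is a minimal sketch, assuming the Boston Housing features are fetched from OpenML and the autoencoder is approximated with scikit-learn's `MLPRegressor` trained to reproduce its own input (the library choice and layer sizes are illustrative assumptions). The per-feature reconstruction errors it produces feed step 1 below.

```python
# Minimal sketch (illustrative, not the repository's exact code): a small
# dense autoencoder on the Boston Housing data, plus per-feature errors.
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

# The Boston Housing dataset is no longer bundled with scikit-learn,
# but it remains available on OpenML.
boston = fetch_openml(name="boston", version=1, as_frame=False)
X = MinMaxScaler().fit_transform(boston.data.astype(float))

# An MLP regressor fit to reproduce its own input acts as an autoencoder;
# the narrow middle layer (4 units) is the bottleneck.
autoencoder = MLPRegressor(hidden_layer_sizes=(8, 4, 8), activation="relu",
                           max_iter=2000, random_state=0)
autoencoder.fit(X, X)

# Signed per-feature reconstruction errors and an overall anomaly score.
errors = X - autoencoder.predict(X)        # shape (n_samples, n_features)
anomaly_score = (errors ** 2).sum(axis=1)  # larger = more anomalous
```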

Paper: Explaining Anomalies Detected by Autoencoders Using SHAP (https://arxiv.org/pdf/1903.02407.pdf)

The implementation has 3 steps.

  1. Select the top features with the largest reconstruction errors (the per-feature errors computed in the sketch above).
  2. For each feature in this list of top features (see the first sketch after this list):
    • We want to explain which features (other than the feature itself) led to its reconstruction error.
    • Keep all of the autoencoder's trained weights, and explain only the output that reconstructs this feature.
    • Use model-agnostic Kernel SHAP to calculate the Shapley values.
  3. Decide whether each explaining feature is a contributing feature or an offsetting feature, depending on the sign of the reconstruction error (see the second sketch after this list). Here, I made some minor adjustments to the original paper for ease of interpretability: contributing features are marked with positive Shapley values.
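For step 2, a sketch under the same assumptions as above: the function handed to Kernel SHAP returns only the reconstruction of the feature being explained, so all trained weights stay in place. `shap.KernelExplainer` and `shap.sample` are the standard shap APIs; the variable names (`X`, `autoencoder`, `errors`, `anomaly_score`) carry over from the earlier sketch, and zeroing out the explained feature's own value is a simplification of the paper's exclusion of the feature from its own explanation.

```python
# Sketch of step 2: explain the high-error features of the most anomalous
# sample with model-agnostic Kernel SHAP (continues the sketch above).
import shap

i = anomaly_score.argmax()                # index of the most anomalous sample
x = X[i]
top = np.argsort(-np.abs(errors[i]))[:3]  # step 1: top-error features

background = shap.sample(X, 50)           # background set for Kernel SHAP

shap_values = {}
for j in top:
    # The function to explain outputs only feature j's reconstruction;
    # the autoencoder's weights are left untouched.
    f_j = lambda data, j=j: autoencoder.predict(data)[:, j]
    explainer = shap.KernelExplainer(f_j, background)
    sv = explainer.shap_values(x, nsamples=200)
    sv[j] = 0.0                           # a feature does not explain itself
    shap_values[j] = sv
```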
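And a sketch of step 3, with an assumed sign convention matching the note above (contributing features end up positive): a Shapley value pushing the reconstruction in the direction of the error contributes to it, while one pushing against the error offsets it.

```python
# Sketch of step 3: align signs so contributing features are positive and
# offsetting features are negative (this sign convention is an assumption).
contributions = {}
for j, sv in shap_values.items():
    # errors[i, j] = true value - reconstruction. When the reconstruction
    # undershoots (positive error), SHAP values that pulled the output down
    # contributed to the error; flipping by -sign(error) makes them positive.
    contributions[j] = -np.sign(errors[i, j]) * sv

for j in top:
    print(f"feature {j}: contributing features =",
          np.where(contributions[j] > 0)[0])
```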
