shap / shap

A game theoretic approach to explain the output of any machine learning model.

Home Page: https://shap.readthedocs.io

BUG: AssertionError: The SHAP explanations do not sum up to the model's output

tommyfuu opened this issue

Issue Description

AssertionError: The SHAP explanations do not sum up to the model's output! This is either because of a rounding error or because an operator in your computation graph was not fully supported. If the sum difference of %f is significant compared to the scale of your model outputs, please post as a github issue, with a reproducible example so we can debug it. Used framework: pytorch - Max. diff: 0.6907583416792136 - Tolerance: 0.01
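
For context, the assertion enforces SHAP's additivity property: for each sample, the base value plus the sum of the attributions should reconstruct the model output within a fixed tolerance. A rough sketch of what is being checked (variable names and shapes are illustrative, not shap's internal code):

import numpy as np

# Additivity property enforced by _check_additivity (illustrative sketch):
# base value + sum of per-feature attributions should match each model
# output to within TOLERANCE (0.01 in the error above).
reconstructed = explainer.expected_value + shap_values.sum(axis=-1)
max_diff = np.abs(reconstructed - model_output_values).max()
assert max_diff < 0.01, f"Max. diff: {max_diff} - Tolerance: 0.01"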

I was able to get past this error by resetting the tolerance, but it would be good to have these nn.Modules supported (see the sketch after this list):

unrecognized nn.Module: Residual
unrecognized nn.Module: Identity
unrecognized nn.Module: GELU
unrecognized nn.Module: ScaleNorm
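
For anyone hitting the same assertion, the check can also be skipped via the check_additivity argument that is visible in the traceback below; note this only silences the check, so the attributions flowing through the unrecognized modules may still be inaccurate. Here model and background are placeholders for the actual model and background samples:

import shap

# check_additivity=False skips the _check_additivity call that raises the
# AssertionError; it does not make the unsupported modules correct.
explainer = shap.DeepExplainer(model, background)
shap_values = explainer.shap_values(selected_X[:20], check_additivity=False)

If the unsupported modules are simple elementwise ops, it may also be possible to register handlers in DeepExplainer's internal op_handler table. This is private API, so the following is an assumption that should be verified against the installed shap version:

from shap.explainers._deep import deep_pytorch

# Private API (assumption): op_handler maps nn.Module class names to
# gradient handlers such as passthrough and nonlinear_1d.
deep_pytorch.op_handler["Identity"] = deep_pytorch.passthrough
deep_pytorch.op_handler["GELU"] = deep_pytorch.nonlinear_1d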

Minimal Reproducible Example

I am not able to provide one since the dataset is private. I am reporting this because the AssertionError message asks users to post significant differences as a GitHub issue.

Traceback

Traceback (most recent call last):
  File "MSK_labs_explain.py", line 285, in <module>
    shap_values = explainer.shap_values(selected_X[:20])
  File "/lila/home/fuc/mambaforge/envs/DuETT/lib/python3.8/site-packages/shap/explainers/_deep/__init__.py", line 125, in shap_values
    return self.explainer.shap_values(X, ranked_outputs, output_rank_order, check_additivity=check_additivity)
  File "/lila/home/fuc/mambaforge/envs/DuETT/lib/python3.8/site-packages/shap/explainers/_deep/deep_pytorch.py", line 219, in shap_values
    _check_additivity(self, model_output_values.cpu(), output_phis)
  File "/lila/home/fuc/mambaforge/envs/DuETT/lib/python3.8/site-packages/shap/explainers/_deep/deep_utils.py", line 21, in _check_additivity
    assert maxdiff < TOLERANCE, "The SHAP explanations do not sum up to the model's output! This is either because of a " \
AssertionError: The SHAP explanations do not sum up to the model's output! This is either because of a rounding error or because an operator in your computation graph was not fully supported. If the sum difference of %f is significant compared to the scale of your model outputs, please post as a github issue, with a reproducible example so we can debug it. Used framework: pytorch - Max. diff: 0.6907583416792136 - Tolerance: 0.01

Expected Behavior

No response

Bug report checklist

  • I have checked that this issue has not already been reported.
  • I have confirmed this bug exists on the latest release of shap.
  • I have confirmed this bug exists on the master branch of shap.
  • I'd be interested in making a PR to fix this bug

Installed Versions

0.44.1

@connortann This happens quite often. The reason is clear: we do not support a lot of modules, which are then not tracked in the backpropagation. I do not understand how we construct the overridden gradients, e.g. here, or why the tf 2d function is implemented the way it is. I am currently taking a look at captum, which seems to solve all of these issues for pytorch.
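
For comparison, a minimal captum DeepLift sketch (model, inputs, and baselines are placeholders; target selects the output index to attribute):

from captum.attr import DeepLift

# baselines play the role of shap's background samples; the returned
# delta is captum's analogue of the additivity gap asserted above.
dl = DeepLift(model)
attributions, delta = dl.attribute(inputs, baselines=baselines, target=0,
                                   return_convergence_delta=True)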