marcotcr / anchor

Code for "High-Precision Model-Agnostic Explanations" paper

Anchor contains all the features

laramdemajo opened this issue · comments

I am using the HELOC dataset, which can be downloaded from https://community.fico.com/s/explainable-machine-learning-challenge?tabset-3158a=2.

I am using an XGBoost model as the classification function and trying to apply Anchors as an explainability technique on top of it.

I am using the code below to implement Anchors; however, the anchors that are output contain all the features (for most instances in the test data), which is obviously very hard to read (and therefore not very interpretable). Moreover, the precision for the whole anchor, given a threshold of 0.8, is only 0.33.

import numpy as np
import pandas as pd
import xgboost as xgb
from anchor import anchor_tabular

explainer = anchor_tabular.AnchorTabularExplainer(
    class_names=['Bad', 'Good'],
    feature_names=list(dfTrain.columns),
    train_data=np.array(dfTrain),
    categorical_names={})

idx = 100
np.random.seed(1)

# Anchor expects predict_fn to return discrete class labels, so threshold the
# booster's probability output at 0.5. The label argument to DMatrix is not
# needed for prediction (and a length-1 label is wrong for sampled batches).
predict_fn = lambda x: (model.predict(
    xgb.DMatrix(pd.DataFrame(x, columns=list(dfTest.columns)))) > 0.5).astype(int)

print('Prediction: ', explainer.class_names[predict_fn(dfTest.iloc[[idx], :])[0]])
exp = explainer.explain_instance(dfTest.iloc[idx].values, predict_fn, threshold=0.8)

print('Anchor: %s' % (' AND '.join(exp.names())))
print('Precision: %.2f' % exp.precision())
print('Coverage: %.2f' % exp.coverage())

Here is a screenshot of a sample anchor:
[Screenshot: sample anchor containing all features]

Is there something I can do from my end to improve this?

Thanks,
Lara

If precision is still not 1 with all the features, it means the discretization is too coarse. Try using discretizer='decile' or providing your own discretizer.

It may very well be the case that the model is too 'jumpy', in which case a full anchor is the right answer even if it is not useful (this is a limitation of anchors, as discussed in the paper). But it sounds like the problem here is more likely the discretization.
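For concreteness, switching discretizers is a single constructor argument. The sketch below uses synthetic data in place of the HELOC training set (which is not bundled here) to show why decile bins are finer than the default quartile bins; the commented-out explainer call assumes the same variable names as the code above:

```python
import numpy as np

# Synthetic stand-in for np.array(dfTrain); the HELOC data is not bundled here.
rng = np.random.RandomState(0)
train = rng.normal(size=(500, 3))

# The default 'quartile' discretizer cuts each feature at 3 points (4 bins);
# 'decile' cuts at 9 points (10 bins), so each anchor condition covers a
# narrower value range and can reach the precision threshold with fewer rules.
quartile_cuts = np.percentile(train[:, 0], [25, 50, 75])
decile_cuts = np.percentile(train[:, 0], np.arange(10, 100, 10))
print(len(quartile_cuts), len(decile_cuts))  # 3 9

# With the anchor package installed, the equivalent explainer change would be:
# explainer = anchor_tabular.AnchorTabularExplainer(
#     class_names=['Bad', 'Good'],
#     feature_names=list(dfTrain.columns),
#     train_data=np.array(dfTrain),
#     categorical_names={},
#     discretizer='decile')
```

A custom discretizer can likewise be substituted if domain-specific cut points (for instance, known risk-score bands) are more meaningful than equal-frequency bins.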