netsatsawat / model-interpretation

Methods and a walkthrough to help explain and interpret machine learning models, using a banking churn dataset


Model Interpretation

This repository stores a Jupyter notebook and the corresponding HTML export. The notebook explains and walks through methods for explaining and interpreting a machine learning model (a so-called black-box model).


Quick glance

Explainability and interpretability have become hot topics in the real world (see some of the news below). We know what the model predicts, but now we want to know why the prediction is made. Knowing the why can lead us to the source of the problem (the reason we built the model in the first place) and help prevent it from the start.

Interpretation is normally viewed as falling into two main classes: local and global.

  • Local interpretability: provides a detailed explanation of how each individual prediction was made. This helps users trust the model and understand how its recommendations are produced.
  • Global interpretability: provides an overall understanding of the model's structure and how it behaves in general. This matters more for senior stakeholders or sponsors who need to understand the model at a high level.
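The two views can be contrasted with a quick sketch. This is a minimal illustration, not code from the notebook: it uses a synthetic stand-in for the banking churn data and hypothetical feature names, with a decision tree's overall feature importances as the global view and its decision path for one customer as the local view.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a churn dataset (the notebook uses real banking churn data).
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["age", "balance", "tenure", "num_products"]  # hypothetical names

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Global view: which features matter across the whole model.
global_importance = dict(zip(feature_names, tree.feature_importances_))

# Local view: the exact decision path taken for one individual prediction.
node_indicator = tree.decision_path(X[:1])
path_nodes = node_indicator.indices  # tree nodes visited for this single customer
```

The same model supports both kinds of explanation: `global_importance` summarises behaviour over the whole dataset, while `path_nodes` explains one prediction at a time.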

The notebook covers the following methods:

  1. Feature importance
  2. eli5 (permutation importance)
  3. PDP (partial dependence plots)
  4. SHAP (SHapley Additive exPlanations)
  5. LIME (Local Interpretable Model-agnostic Explanations)
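The first two methods compare naturally. Impurity-based feature importance comes for free with tree ensembles, while permutation importance (the quantity eli5's `PermutationImportance` reports) measures how much the score drops when a feature is shuffled on held-out data. A minimal sketch on synthetic data, using scikit-learn's built-in `permutation_importance` rather than eli5 itself:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the churn data.
X, y = make_classification(n_samples=600, n_features=5, n_informative=3, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)

# Impurity-based importance: computed from the training splits, sums to 1.
impurity_importance = model.feature_importances_

# Permutation importance: drop in test accuracy when each feature is shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=42)
perm_importance = result.importances_mean
```

Permutation importance is usually the more trustworthy of the two, since impurity-based scores can inflate high-cardinality features and are computed on training data.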
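A partial dependence plot (method 3) shows the model's average prediction as one feature is swept over a grid while the others are marginalised out. A small sketch with scikit-learn's `partial_dependence` on synthetic data (the notebook's actual features and model may differ):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import partial_dependence

# Synthetic stand-in for the churn data.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Average model response as feature 0 is varied over a 20-point grid,
# averaging over the observed values of the remaining features.
pd_result = partial_dependence(model, X, features=[0], grid_resolution=20)
averaged = pd_result["average"][0]  # one partial-dependence value per grid point
```

Plotting `averaged` against the grid gives the familiar PDP curve; a flat curve suggests the feature has little global effect on the prediction.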
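LIME (method 5) explains a single prediction by perturbing the instance, querying the black box, and fitting a simple weighted linear surrogate in the neighbourhood. The sketch below implements that core idea by hand with numpy and scikit-learn; it is an illustration of the technique, not the `lime` library's API:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Synthetic stand-in for the churn data and a black-box model to explain.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
black_box = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Explain one prediction: sample perturbations around the instance
# and record the black box's predicted probabilities for them.
rng = np.random.default_rng(0)
instance = X[0]
perturbed = instance + rng.normal(scale=0.5, size=(1000, X.shape[1]))
probs = black_box.predict_proba(perturbed)[:, 1]

# Weight perturbed points by proximity to the original instance.
distances = np.linalg.norm(perturbed - instance, axis=1)
weights = np.exp(-(distances ** 2) / 2.0)

# Fit an interpretable local surrogate; its coefficients approximate
# each feature's effect on this one prediction.
surrogate = Ridge(alpha=1.0).fit(perturbed, probs, sample_weight=weights)
local_coefs = surrogate.coef_
```

The sign and magnitude of `local_coefs` read as a local explanation: which features pushed this particular customer's churn probability up or down.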


