akifcinar / Machine_Learning_Interpretability

Overview of machine learning interpretation techniques and their implementations


Machine Learning Interpretability

This GitHub repository describes parts of my master's thesis, titled "Analysis of existing methods for the interpretation of the decision process of machine learning methods regarding their suitability for automotive applications".

For a detailed overview of the most widely used and most cited methods, please refer to the following publication: here


Sometimes the human brain is so complex that it just cannot understand its own creation.

Anonymous


Methods of interpretable machine learning can help you understand how models behave and why they make certain predictions.

Methods of interpretable machine learning can be very helpful for:

  • Identifying bugs in your model and optimizing it
  • Getting insights into how certain predictions were made
  • Getting insights into how models behave globally (feature interaction)
  • Detecting bias in training data
  • Verifying legal requirements for a model
  • Verifying the confidence of a model and its predictions
  • Creating an interface between humans and AI

There are several ways to make machine learning models and their predictions more transparent and interpretable. First of all, a distinction is made between global and local explanations and between model-specific and model-agnostic ones. The following table gives a brief summary of some methods that I am analyzing in my thesis; a minimal LIME sketch follows the table:

Method                  Use             Scope
LIME*                   model-agnostic  Local
DeepLIFT                model-specific  Local
Class Activation Maps   model-specific  Local
Partial Dependence      model-agnostic  Global
ELI5**                  model-agnostic  Global
ICE***                  model-agnostic  Global
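
To illustrate the model-agnostic, local character of LIME, here is a minimal sketch. It assumes the lime Python package and uses a scikit-learn random forest on the iris data purely as a stand-in model; it is an illustration of the technique, not the implementation from the notebook below.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Stand-in model: any classifier exposing predict_proba would do
data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,                        # background data for perturbation statistics
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs the instance, queries the black-box model, and fits a
# weighted linear surrogate around it; the result is a local explanation
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())                  # feature contributions near this instance
```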

You can find an implementation of LIME and Class Activation Maps in this notebook; other implementations will follow.
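
Class Activation Maps apply to networks that end in global average pooling followed by a single linear layer: the map for a class is the sum of the last conv feature maps weighted by that class's linear weights. Below is a hedged PyTorch sketch of the idea, assuming a torchvision ResNet-18 (whose head has exactly this structure) and a random tensor standing in for a preprocessed image; it is an illustration, not the notebook's code.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Assumption: torchvision >= 0.13 for the weights= API
model = models.resnet18(weights="IMAGENET1K_V1").eval()

# Capture the last conv block's output via a forward hook
feature_maps = {}
model.layer4.register_forward_hook(
    lambda module, inp, out: feature_maps.update(conv=out)
)

img = torch.randn(1, 3, 224, 224)         # stand-in for a preprocessed image
with torch.no_grad():
    logits = model(img)
cls = logits.argmax(dim=1).item()

# CAM: weight each 7x7 feature map by the fc weight of the predicted class
weights = model.fc.weight[cls]             # shape (512,)
cam = torch.einsum("c,chw->hw", weights, feature_maps["conv"][0])
cam = F.relu(cam)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
# Upsample to image size so the map can be overlaid as a heatmap
cam = F.interpolate(cam[None, None], size=img.shape[2:], mode="bilinear")[0, 0]
```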

* LIME: Local interpretable model-agnostic explanations

** ELI5: Explain Like I'm 5

*** ICE: Individual Conditional Expectation
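
As a rough illustration of the two global methods in the table, the sketch below uses scikit-learn's PartialDependenceDisplay (available since scikit-learn 1.0) to overlay the partial dependence curve, i.e. the global average, on the individual conditional expectation curves, one per instance. The diabetes dataset and the random forest are stand-ins for a real model.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# kind="both" draws the per-instance ICE curves plus their average (the PDP)
PartialDependenceDisplay.from_estimator(model, X, features=["bmi", "s5"], kind="both")
plt.show()
```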