lucapug / Explainable-AI-Workshop


Explainable Machine Learning Models - Workshop

Making sense of opaque and complex models using Python

This repo contains the code that accompanies the course of the same name on the O'Reilly Learning Platform.

Workshop

AI models are making predictions that affect people’s lives, so ensuring that they’re fair and unbiased must be an industry imperative. One way to ensure fairness is to discover a model’s mispredictions and analyze and fix the underlying causes. Some machine learning methods, like logistic regression and decision trees, are interpretable, but they aren’t highly accurate in their predictions. Others, like boosted trees and deep neural nets, are more accurate, but the logic behind their predictions can’t be clearly identified or explained, making it more difficult to spot and fix bias.
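To make the contrast concrete, here is a minimal sketch of an inherently interpretable model. The dataset, feature names, and labels are invented for illustration and are not taken from the workshop materials:

```python
# Hypothetical example: a logistic regression whose logic can be read
# directly from its learned coefficients. Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                 # 3 made-up features
y = (X[:, 0] - 2 * X[:, 1] > 0).astype(int)   # label driven by features 0 and 1

model = LogisticRegression().fit(X, y)

# Each coefficient directly explains the model: a positive weight
# pushes the prediction toward class 1, a negative one toward class 0.
for name, coef in zip(["income", "debt", "age"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

A boosted tree ensemble or a neural network trained on the same data would offer no such single set of weights to inspect, which is exactly the gap the explanation techniques below try to fill.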

Join to get the lowdown on commonly used techniques like SHAP values, LIME, partial dependence plots, and more that can help you explain the inexplicable in these models and ensure responsible machine learning. You’ll gain an understanding of the intuition behind the techniques and learn how to implement them in Python. Using case studies, you’ll discover how to extract the most important features and values of a model’s predictions to discover why a particular person has been denied a bank loan or is more susceptible to a heart attack. Finally, you’ll examine the vulnerabilities and shortcomings of these methods and discuss the road ahead.
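As a taste of one of the techniques named above, the sketch below computes a partial dependence curve with scikit-learn for an opaque boosted-tree model. The data and the choice of feature are invented for illustration:

```python
# Minimal partial dependence sketch on synthetic data: how does the
# model's average predicted probability change as feature 0 varies,
# marginalizing over the other features?
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5).astype(int)

clf = GradientBoostingClassifier(random_state=0).fit(X, y)

# kind="average" returns the classic PDP: predictions averaged over
# the dataset at each grid value of the chosen feature.
result = partial_dependence(clf, X, features=[0], kind="average")
avg = result["average"][0]
print(f"Partial dependence on feature 0: {avg[0]:.2f} -> {avg[-1]:.2f}")
```

Because the label here grows with feature 0, the curve rises from its left endpoint to its right one, revealing the feature's effect even though the boosted trees themselves are opaque.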

Recommended preparation:

Recommended follow-up:

Languages

Language: Jupyter Notebook 100.0%