fferrara / interpretable-classification

How to interpret prediction models for classification

This repository tells stories. Stories about how different models can achieve high predictive power by capturing completely different patterns in data.

By interpreting the patterns they learn, we can choose the right model without relying exclusively on predictive-power metrics.
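The notebooks explore this in depth; as a rough sketch of the idea only (assuming scikit-learn and a synthetic stand-in dataset, not the repository's actual code or data), one way to inspect which patterns a classifier has learned is permutation importance:

```python
# Minimal sketch: fit a classifier, then measure how much its accuracy drops
# when each feature is shuffled. Large drops identify the patterns it relies on.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a dataset like Titanic survival or census income.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance on held-out data: importance = mean score drop when shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} +/- {result.importances_std[i]:.3f}")
```

If the most important features contradict domain knowledge, or lean on an attribute that encodes bias, high accuracy alone is not a good reason to trust the model.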

Case studies

  • Interpretable Titanic survival: how to be sure a model learned patterns that match reality
  • Interpretable income prediction: how to be sure a model hasn't learned discriminatory bias

How to run the notebooks

  1. Install pipenv
  2. Create a virtualenv with pipenv sync
  3. Access the virtualenv with pipenv shell
  4. Run jupyter notebook from inside the virtualenv
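The same steps as shell commands (the pipenv install in step 1 is one common approach; see the pipenv documentation for alternatives):

```shell
pip install --user pipenv   # 1. install pipenv (one common way)
pipenv sync                 # 2. create the virtualenv from the lockfile
pipenv shell                # 3. drop into the virtualenv
jupyter notebook            # 4. start Jupyter from inside the virtualenv
```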

License

MIT License

