
Google ML Fairness Gym


What is ML-fairness-gym?

ML-fairness-gym is a set of components for building simple simulations that explore the potential long-run impacts of deploying machine learning-based decision systems in social environments. As the importance of machine learning fairness has become increasingly apparent, recent research has focused on potentially surprising long-term behaviors of enforcing measures of fairness that were originally defined in a static setting. Key findings have shown that, under specific assumptions in simplified dynamic simulations, long-term effects may in fact counteract the desired goals. Achieving a deeper understanding of such long-term effects is thus a critical direction for ML fairness research.

ML-fairness-gym implements a generalized framework for studying and probing long-term fairness effects in carefully constructed simulation scenarios where a learning agent interacts with an environment over time. This work fits into a larger push in the fair machine learning literature to design decision systems that induce fair outcomes in the long run, and to understand how these systems might differ from those designed to enforce fairness on a one-shot basis.

This initial version of the ML-fairness-gym (v0.1.0) focuses on reproducing and generalizing environments that have previously been discussed in research papers.

ML-fairness-gym environments implement the environment API from OpenAI Gym.
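Because the environments follow the Gym API, the familiar reset/step loop applies. The sketch below is a minimal, illustrative example only: the import path and the DelayedImpactEnv class are assumptions based on the repository's lending environment and may differ in your checkout; the loop itself follows the gym 0.19 interface (reset returning an observation, step returning observation, reward, done, info).

```python
# Sketch of the standard Gym interaction loop with an ML-fairness-gym
# environment. The module path and environment class below are assumptions;
# substitute whichever environment you are experimenting with.
from environments import lending

env = lending.DelayedImpactEnv()   # assumed environment class

observation = env.reset()          # gym 0.19 API: reset() returns an observation
done = False
while not done:
    # Placeholder policy: sample a random action. In practice an agent
    # (e.g. a threshold classifier) would choose the action here.
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
```

In a full experiment, the random policy above would be replaced by one of the framework's agents, which is what lets the simulation probe how a learned decision rule behaves over many interactions.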

This is not an officially supported Google product.

Contact us

The ML-fairness-gym project discussion group is ml-fairness-gym-discuss@google.com.

Versions

v0.1.0: Initial release.
v0.1.1: Update to use gym 0.19.0.


License

Apache License 2.0

