smaityumich / individual-fairness-testing


Statistical Inference for Individual Fairness

This is the supplementary code accompanying the ICLR 2021 submission "Statistical inference for individual fairness".

Overview

This work proposes an inferential procedure for testing whether an ML model violates individual fairness. The main idea is to perform an adversarial attack on each data point that aims to increase the loss while restricting movement to the sensitive subspace. The average ratio of the loss at the attacked points to the loss at the original points tends to be large when the model is unfair, and the proposed test is built on this property. The adversarial attack uses a finite-time gradient flow, where the continuous-time gradient flow is approximated by Euler's method. A sketch of this idea is given below.
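The following is a minimal NumPy sketch of the idea, not the authors' implementation. It assumes an illustrative logistic-regression model with weights `w` and bias `b`, and a hypothetical matrix `A` whose columns span the sensitive subspace; the step size and number of Euler steps are arbitrary choices for demonstration.

```python
# Minimal sketch (assumption: logistic model, toy sensitive subspace A).
import numpy as np

def loss(x, y, w, b):
    """Per-example logistic loss; labels y are in {-1, +1}."""
    z = x @ w + b
    return np.log1p(np.exp(-y * z))

def grad_x(x, y, w, b):
    """Gradient of the logistic loss with respect to the input x."""
    z = x @ w + b
    return -y * w / (1.0 + np.exp(y * z))

def attack(x, y, w, b, A, step=0.1, n_steps=50):
    """Euler discretization of the gradient flow, restricted to span(A)."""
    P = A @ np.linalg.pinv(A)  # orthogonal projector onto the sensitive subspace
    x_adv = x.copy()
    for _ in range(n_steps):
        # Gradient ascent step to increase the loss, projected onto span(A)
        x_adv = x_adv + step * P @ grad_x(x_adv, y, w, b)
    return x_adv

# Toy data: first coordinate plays the role of the sensitive direction.
rng = np.random.default_rng(0)
d, n = 5, 200
w, b = rng.normal(size=d), 0.0
A = np.eye(d)[:, :1]
X, Y = rng.normal(size=(n, d)), rng.choice([-1.0, 1.0], size=n)

# Average loss ratio between attacked and original points: large values
# indicate the loss is sensitive to movement in span(A), i.e. potential unfairness.
ratios = [loss(attack(x, y, w, b, A), y, w, b) / loss(x, y, w, b)
          for x, y in zip(X, Y)]
print("mean loss ratio:", np.mean(ratios))
```

Projecting the gradient with `P = A @ pinv(A)` keeps each Euler step inside the sensitive subspace, so the attack only moves points along directions that should not change the model's behavior under individual fairness.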

We run simulated experiments to study the properties of the proposed test and apply it to the Adult and COMPAS datasets. Further details are provided in the corresponding folders.

Languages

Jupyter Notebook: 73.7%
Python: 26.2%
Shell: 0.1%