Repositories under the ml-fairness topic:
Source code/webpage/demos for the What-If Tool
AART: AI-Assisted Red-Teaming with Diverse Data Generation for New LLM-powered Applications
Sample project using IBM's AI Fairness 360, an open-source toolkit for detecting, examining, and mitigating discrimination and bias in machine learning (ML) models throughout the AI application lifecycle.
Deep-learning approach for generating fair and accurate input representations for crime-rate estimation with continuous protected attributes and continuous targets.
Tools to assess fairness and mitigate unfairness in sociolinguistic auto-coding
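The AI Fairness 360 entry above concerns group-fairness metrics such as statistical parity difference. As a minimal plain-Python sketch (not the AIF360 API itself; the function and variable names here are illustrative), the metric compares positive-prediction rates between unprivileged and privileged groups:

```python
def statistical_parity_difference(predictions, groups, privileged=1):
    """P(y_hat=1 | unprivileged) - P(y_hat=1 | privileged).

    predictions: 0/1 predicted labels, one per sample.
    groups: protected-attribute value per sample.
    A value of 0 means parity; negative values favor the privileged group.
    """
    priv = [p for p, g in zip(predictions, groups) if g == privileged]
    unpriv = [p for p, g in zip(predictions, groups) if g != privileged]
    rate = lambda xs: sum(xs) / len(xs)
    return rate(unpriv) - rate(priv)

# Toy data: the privileged group (1) receives positive predictions 3/4 of
# the time, the unprivileged group only 1/4 of the time.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = [1, 1, 1, 1, 0, 0, 0, 0]
print(statistical_parity_difference(preds, groups))  # 0.25 - 0.75 = -0.5
```

In AIF360 the corresponding computation is exposed through `BinaryLabelDatasetMetric.statistical_parity_difference()`, which operates on the toolkit's dataset objects rather than raw lists.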