Repositories under the bias-measurement topic:
Introduction to trusted AI: learn to use fairness algorithms to reduce and mitigate bias in data and models with aif360, and to explain models with aix360.
GBDF: Gender Balanced DeepFake Dataset
Sample project using IBM's AI Fairness 360, an open-source toolkit for detecting, examining, and mitigating discrimination and bias in machine learning (ML) models throughout the AI application lifecycle.
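As a rough illustration of the kind of bias measurement these toolkits perform, here is a minimal sketch (plain NumPy, hypothetical toy data) of two group-fairness metrics that AI Fairness 360 computes on binary-label datasets: statistical parity difference and disparate impact.

```python
import numpy as np

# Hypothetical toy data: binary outcome labels and a binary
# protected attribute (favorable outcome = 1, privileged group = 1).
labels = np.array([1, 0, 1, 1, 0, 1, 0, 0])
protected = np.array([1, 1, 1, 1, 0, 0, 0, 0])

# Rate of favorable outcomes within each group.
priv_rate = labels[protected == 1].mean()
unpriv_rate = labels[protected == 0].mean()

# Statistical parity difference: 0 means parity; negative values
# mean the unprivileged group receives fewer favorable outcomes.
spd = unpriv_rate - priv_rate

# Disparate impact: ratio of favorable-outcome rates; values below
# roughly 0.8 are commonly flagged as potentially discriminatory.
di = unpriv_rate / priv_rate

print(spd)  # -0.5
print(di)   # 0.333...
```

aif360 wraps this kind of computation (with dataset abstractions and many more metrics and mitigation algorithms) rather than requiring it by hand; the sketch only shows what is being measured.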
Code and data accompanying the paper "Model-Agnostic Bias Measurement in Link Prediction", published in Findings of EACL 2023.