There are 14 repositories under the fairness topic.
A curated list of awesome responsible machine learning resources.
A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
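Two of the most common dataset fairness metrics in toolkits like this are statistical parity difference and disparate impact. A minimal sketch in plain NumPy (the function names and the 0/1 group encoding here are illustrative, not the toolkit's actual API):

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """P(yhat=1 | unprivileged) - P(yhat=1 | privileged); 0 means parity."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def disparate_impact(y_pred, group):
    """Ratio of selection rates; values near 1 indicate parity."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean() / y_pred[group == 1].mean()

# Toy predictions where the privileged group (1) is selected more often.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
group  = [1, 1, 1, 1, 0, 0, 0, 0]
spd = statistical_parity_difference(y_pred, group)  # -0.5
di = disparate_impact(y_pred, group)                # ~0.33, below the common 0.8 threshold
```

A disparate impact below 0.8 is the usual "four-fifths rule" red flag that mitigation algorithms in such toolkits try to address.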
DALEX: moDel Agnostic Language for Exploration and eXplanation
Responsible AI Toolbox is a suite of user interfaces and libraries for model and data exploration and assessment, enabling a better understanding of AI systems. They empower developers and stakeholders to develop and monitor AI more responsibly and to take better data-driven actions.
A collection of classic and cutting-edge industry papers in recommendation, advertising, and search.
Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
H2O.ai Machine Learning Interpretability Resources
A curated list of trustworthy deep learning papers, updated daily.
A curated list of awesome Fairness in AI resources
Code for reproducing our analysis in the paper titled: Image Cropping on Twitter: Fairness Metrics, their Limitations, and the Importance of Representation, Design, and Agency
Python code for training fair logistic regression classifiers.
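One common way to make a logistic regression classifier fairer is pre-processing by reweighing (Kamiran and Calders): each (group, label) cell gets weight P(group)·P(label)/P(group, label), so group and label look independent under the weighted distribution. A sketch under that assumption, using scikit-learn; this is a generic illustration of the idea, not the repository's actual training code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def reweighing_weights(y, group):
    """Kamiran-Calders reweighing: w(g, c) = P(g) * P(c) / P(g, c)."""
    y, group = np.asarray(y), np.asarray(group)
    w = np.empty(len(y))
    for g in np.unique(group):
        for c in np.unique(y):
            mask = (group == g) & (y == c)
            if mask.any():
                w[mask] = (group == g).mean() * (y == c).mean() / mask.mean()
    return w

# Synthetic data where label correlates with the protected group.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, 500)
X = rng.normal(size=(500, 3)) + 0.5 * group[:, None]
y = (X[:, 0] + 0.8 * group + rng.normal(scale=0.5, size=500) > 0.7).astype(int)

clf = LogisticRegression().fit(X, y, sample_weight=reweighing_weights(y, group))
```

On a dataset where group and label are already independent, every weight comes out as 1, so the method leaves balanced data untouched.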
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
A library that implements fairness-aware machine learning algorithms
Paper List for Fair Graph Learning (FairGL).
A curated list of papers and resources about the distribution shift in machine learning.
Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems 🔎🤖🧰
Identify bias and measure fairness of your data
A toolbox for benchmarking trustworthiness of multimodal large language models (MultiTrust)
Flexible tool for bias detection, visualization, and mitigation
A module which fairly distributes a list of arbitrary objects among a set of targets, considering weights.
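A weight-proportional distribution like this is typically done with the largest-remainder method: give each target the floor of its exact quota, then hand the leftover items to the targets with the largest fractional parts. A minimal sketch (the `distribute` function and its signature are illustrative, not the module's actual interface):

```python
from math import floor

def distribute(items, weights):
    """Split `items` among targets in proportion to `weights` using the
    largest-remainder method; each count differs from its exact quota
    by less than one item."""
    total = sum(weights.values())
    quotas = {t: len(items) * w / total for t, w in weights.items()}
    counts = {t: floor(q) for t, q in quotas.items()}
    leftover = len(items) - sum(counts.values())
    # Hand remaining items to the targets with the largest fractional parts.
    for t in sorted(quotas, key=lambda t: quotas[t] - counts[t], reverse=True)[:leftover]:
        counts[t] += 1
    out, i = {}, 0
    for t, c in counts.items():
        out[t] = items[i:i + c]
        i += c
    return out

buckets = distribute(list(range(10)), {"a": 1, "b": 1, "c": 2})
```

Here "c" has half the total weight, so it always receives exactly 5 of the 10 items, while "a" and "b" split the remaining 5.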
Modular Python Toolbox for Fairness, Accountability and Transparency Forensics
Data and code for the NFT launch guide blog post.
Sidekiq middleware to re-route “greedy” clients’ jobs to slower queues
Fairness Aware Machine Learning. Bias detection and mitigation for datasets and models.
A Python package to facilitate research on building and evaluating automated scoring models.