There are 28 repositories under the fairness-ml topic.
Responsible AI Toolbox is a suite of user interfaces and libraries for model and data exploration and assessment that enable a better understanding of AI systems. These tools empower developers and stakeholders of AI systems to develop and monitor AI more responsibly and to take better data-driven actions.
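A minimal sketch of how an assessment is typically wired up through the toolbox's RAIInsights and ResponsibleAIDashboard entry points, assuming a trained model and pandas train/test frames (model, train_df, and test_df are placeholders); verify argument names against the current release:

    # Hedged sketch of the responsibleai / raiwidgets workflow.
    from responsibleai import RAIInsights
    from raiwidgets import ResponsibleAIDashboard

    insights = RAIInsights(model=model, train=train_df, test=test_df,
                           target_column="label", task_type="classification")
    insights.explainer.add()       # queue model explanations
    insights.error_analysis.add()  # queue error analysis
    insights.compute()             # run all queued analyses
    ResponsibleAIDashboard(insights)  # launch the interactive dashboard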
😎 Everything about class-imbalanced/long-tail learning: papers, code, frameworks, and libraries
A library for generating and evaluating synthetic tabular data for privacy, fairness and data augmentation.
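This description matches the synthcity project; assuming that is the library in question, a generation run looks roughly like the sketch below (plugin name, loader, and method names per its docs; df is a placeholder pandas DataFrame):

    from synthcity.plugins import Plugins
    from synthcity.plugins.core.dataloader import GenericDataLoader

    loader = GenericDataLoader(df, target_column="outcome")  # wrap the real data
    model = Plugins().get("ctgan")     # any registered generator plugin
    model.fit(loader)                  # fit the generator to the real table
    synthetic = model.generate(count=1000).dataframe()  # draw synthetic rows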
TensorFlow's Fairness Evaluation and Visualization Toolkit
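Fairness Indicators plugs into TensorFlow Model Analysis as a metric computed over slices of the evaluation data; a hedged configuration sketch, with the feature and label keys as placeholders:

    import tensorflow_model_analysis as tfma

    eval_config = tfma.EvalConfig(
        model_specs=[tfma.ModelSpec(label_key="label")],
        slicing_specs=[tfma.SlicingSpec(),                          # overall
                       tfma.SlicingSpec(feature_keys=["gender"])],  # per group
        metrics_specs=[tfma.MetricsSpec(metrics=[
            tfma.MetricConfig(class_name="FairnessIndicators",
                              config='{"thresholds": [0.25, 0.5, 0.75]}')])])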
Code for reproducing our analysis in the paper "Image Cropping on Twitter: Fairness Metrics, their Limitations, and the Importance of Representation, Design, and Agency"
Fair Resource Allocation in Federated Learning (ICLR '20)
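The paper's q-FFL objective reweights client losses as f_q(w) = Σ_k p_k/(q+1) · F_k(w)^(q+1), so larger q emphasizes the worst-off clients; a small numpy sketch of the aggregate:

    import numpy as np

    def q_ffl_objective(client_losses, client_weights, q):
        # q-FFL: sum_k p_k / (q+1) * F_k(w)^(q+1); q = 0 recovers the
        # standard weighted-average federated objective.
        F = np.asarray(client_losses, dtype=float)
        p = np.asarray(client_weights, dtype=float)
        return np.sum(p / (q + 1.0) * F ** (q + 1.0))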
WEFE: The Word Embeddings Fairness Evaluation Framework. WEFE standardizes bias measurement and mitigation in word embedding models. Please feel welcome to open an issue if you have any questions, or a pull request if you want to contribute to the project!
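A hedged sketch of running a WEAT query with WEFE over pretrained GloVe vectors (class and method names follow the project's docs; the word sets are illustrative):

    import gensim.downloader as api
    from wefe.word_embedding_model import WordEmbeddingModel
    from wefe.query import Query
    from wefe.metrics import WEAT

    model = WordEmbeddingModel(api.load("glove-wiki-gigaword-100"), "glove")
    query = Query(target_sets=[["she", "woman", "her"], ["he", "man", "his"]],
                  attribute_sets=[["science", "physics"], ["poetry", "art"]],
                  target_sets_names=["Female terms", "Male terms"],
                  attribute_sets_names=["Science", "Arts"])
    print(WEAT().run_query(query, model))  # dict with the WEAT score/effect size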
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
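CDEP trains against an augmented objective that adds an explanation penalty to the prediction loss; a generic PyTorch-style sketch, where cd_penalty is a hypothetical stand-in for the paper's contextual-decomposition score on features marked as irrelevant:

    import torch.nn.functional as F

    def cdep_loss(logits, targets, cd_penalty, lam=1.0):
        # prediction loss + lambda * explanation penalty
        return F.cross_entropy(logits, targets) + lam * cd_penalty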
Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems 🔎🤖🧰
LangFair is a Python library for conducting use-case level LLM bias and fairness assessments
Flexible tool for bias detection, visualization, and mitigation
FairPut: a machine learning fairness framework with LightGBM, covering explainability, robustness, and fairness (by @firmai)
Dataset associated with the paper "BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation"
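A hedged way to pull the prompts, assuming the dataset's Hugging Face mirror id is AlexaAI/bold (check the paper and repository for the canonical source):

    from datasets import load_dataset

    bold = load_dataset("AlexaAI/bold", split="train")
    print(bold[0])  # one record: its domain, category, and prompt strings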
Fairness-aware machine learning: bias detection and mitigation for datasets and models.
Papers and online resources related to machine learning fairness
Talks & Workshops by the CODAIT team
Official implementation of our work "Collaborative Fairness in Federated Learning."
Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central gateway to assessments created in the open source community.
EMNLP 2022: BERTScore is Unfair: On Social Bias in Language Model-Based Metrics for Text Generation
A tool for gender bias identification in text. Part of Microsoft's Responsible AI toolbox.
Evidence-based tools and community collaboration to end algorithmic bias, one data scientist at a time.
Responsible AI Workshop: a series of tutorials and walkthroughs illustrating how to put responsible AI into practice
Julia Toolkit with fairness metrics and bias mitigation algorithms
A curated list of Robust Machine Learning papers/articles and recent advancements.
A fairness library in PyTorch.
[KDD 2021] Federated Adversarial Debiasing for Fair and Transferable Representations: optimizes an adversarial domain-adaptation objective without adversarial or source data.
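Adversarial debiasing objectives of this kind are commonly implemented with a gradient-reversal layer between the feature extractor and the group/domain discriminator; a generic PyTorch sketch of that trick (not the paper's exact code):

    import torch

    class GradReverse(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)    # identity on the forward pass

        @staticmethod
        def backward(ctx, grad_output):
            # negate (and scale) the gradient flowing back to the features
            return -ctx.lam * grad_output, None

    def grad_reverse(x, lam=1.0):
        return GradReverse.apply(x, lam)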
A Prompt Array Keeps the Bias Away: Debiasing Vision-Language Models with Adversarial Learning [AACL 2022]