Repositories under the fairness-ai topic:
🐢 Open-Source Evaluation & Testing for ML models & LLMs
A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
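To make the idea of a group-fairness metric concrete, here is a minimal sketch of one common metric, statistical parity difference (the gap in positive-prediction rates between groups). The function name and toy data are illustrative, not taken from any of the listed libraries:

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between the
    unprivileged (group == 0) and privileged (group == 1) groups.
    A value of 0 indicates parity; negative values mean the
    unprivileged group receives fewer positive predictions."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_unpriv = y_pred[group == 0].mean()
    rate_priv = y_pred[group == 1].mean()
    return rate_unpriv - rate_priv

# Toy example: four predictions per group
preds  = [1, 1, 0, 1,  1, 0, 0, 0]
groups = [1, 1, 1, 1,  0, 0, 0, 0]
print(statistical_parity_difference(preds, groups))  # 0.25 - 0.75 = -0.5
```

Libraries in this list typically expose many such metrics (equal opportunity, disparate impact, and so on) behind a common dataset/model interface rather than as standalone functions like this one.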
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly, and take better data-driven actions.
WEFE: The Word Embeddings Fairness Evaluation Framework. WEFE standardizes bias measurement and mitigation in word embedding models. Please feel welcome to open an issue if you have any questions, or a pull request if you want to contribute to the project!
Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems 🔎🤖🧰
FairPut - Machine Learning Fairness Framework with LightGBM — Explainability, Robustness, Fairness (by @firmai)
Fairness Aware Machine Learning. Bias detection and mitigation for datasets and models.
Papers and online resources related to machine learning fairness
PyTorch package to train and audit ML models for Individual Fairness
[ACL 2020] Towards Debiasing Sentence Representations
👋 Influenciae is a TensorFlow toolbox for influence functions
Talks & Workshops by the CODAIT team
A curated list of awesome academic research, books, code of ethics, data sets, institutes, newsletters, principles, podcasts, reports, tools, regulations and standards related to Responsible AI, Trustworthy AI, and Human-Centered AI.
Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central gateway to assessments created in the open source community.
EMNLP'2022: BERTScore is Unfair: On Social Bias in Language Model-Based Metrics for Text Generation
A tool for gender bias identification in text. Part of Microsoft's Responsible AI toolbox.
Responsible AI Workshop: a series of tutorials & walkthroughs illustrating how to put responsible AI into practice
Counterfactual Local Explanations of AI systems
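A counterfactual explanation answers "what is the smallest change to this input that would flip the model's decision?" The brute-force sketch below is an illustrative assumption, not the method of the repository above; real libraries use optimization or heuristic search instead of enumeration:

```python
import itertools

def counterfactual(instance, predict, candidates):
    """Search for the smallest number of feature changes that flips
    the model's decision. `candidates` maps each mutable feature to
    the alternative values to try (hypothetical helper for illustration)."""
    base = predict(instance)
    for n_changed in range(1, len(candidates) + 1):
        for feats in itertools.combinations(candidates, n_changed):
            for values in itertools.product(*(candidates[f] for f in feats)):
                cf = dict(instance, **dict(zip(feats, values)))
                if predict(cf) != base:
                    return cf  # first hit at the smallest change count
    return None  # no counterfactual found within the candidate grid

# Toy credit model: approve when income is high enough and debt is low.
model = lambda x: x["income"] >= 50 and x["debt"] <= 10
applicant = {"income": 40, "debt": 5}
print(counterfactual(applicant, model, {"income": [50, 60], "debt": [0]}))
# → {'income': 50, 'debt': 5}: raising income to 50 flips the decision
```

Because the search tries one-feature changes before two-feature changes, the returned counterfactual is minimal in the number of features altered, which is what makes it a useful local explanation.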
SIREN: A Simulation Framework for Understanding the Effects of Recommender Systems in Online News Environments
A fairness library in PyTorch.
PyTorch reimplementation of computing Shapley values via Truncated Monte Carlo sampling from "What is your data worth? Equitable Valuation of Data" by Amirata Ghorbani and James Zou [ICML 2019]
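The Truncated Monte Carlo (TMC) Shapley idea mentioned above can be sketched in a few lines: sample random permutations of the data points, credit each point with its marginal contribution to a value function, and truncate a permutation once the running value is close to the full-set value. The sketch below uses a generic set-valued `value` function in place of model retraining; all names are illustrative, not the repository's API:

```python
import random

def tmc_shapley(points, value, rounds=200, tolerance=1e-3, seed=0):
    """Truncated Monte Carlo estimate of data Shapley values.
    `value(subset)` scores a model trained on `subset` (assumption:
    any monotone-ish set function works for this sketch). Marginal
    contributions along random permutations are averaged; a permutation
    is truncated once the running value is within `tolerance` of the
    full-set value, since remaining points then add almost nothing."""
    rng = random.Random(seed)
    full_value = value(points)
    shapley = {p: 0.0 for p in points}
    for _ in range(rounds):
        perm = points[:]
        rng.shuffle(perm)
        prev, subset = value([]), []
        for p in perm:
            if abs(full_value - prev) < tolerance:
                marginal = 0.0  # truncation: skip the costly evaluation
            else:
                subset.append(p)
                cur = value(subset)
                marginal = cur - prev
                prev = cur
            shapley[p] += marginal
    return {p: v / rounds for p, v in shapley.items()}

# Toy additive value function: each point contributes its own weight,
# so the Shapley estimate of a point should recover that weight.
weights = {"a": 1.0, "b": 2.0, "c": 0.0}
vals = tmc_shapley(list(weights), lambda s: sum(weights[p] for p in s))
print(vals)  # ≈ {'a': 1.0, 'b': 2.0, 'c': 0.0}
```

In the real setting `value(subset)` retrains the model and measures validation performance, which is why the truncation (and Monte Carlo sampling over permutations rather than all 2^n subsets) matters for tractability.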
A curated list of Robust Machine Learning papers/articles and recent advancements.
This repository contains demo notebooks (sample code) for the AutoMLx (automated machine learning and explainability) package from Oracle Labs.
Data and Model-based approaches for Mitigating Bias in Machine Learning Applications
Examples of unfairness detection for a classification-based credit model