Repositories under the aif360 topic:
FairPut - Machine Learning Fairness Framework with LightGBM — Explainability, Robustness, Fairness (by @firmai)
Introduction to trusted AI. Learn to use fairness algorithms to detect and mitigate bias in data and models with aif360, and to explain models with aix360
Trying out things on Kaggle's Titanic dataset
Fairness in data and machine learning algorithms is critical to building safe and responsible AI systems from the ground up, by design. Both technical and business AI stakeholders are in constant pursuit of fairness to ensure they meaningfully address problems like AI bias. While accuracy is one metric for evaluating a machine learning model's performance, fairness gives us a way to understand the practical implications of deploying the model in a real-world situation.
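To make the accuracy-vs-fairness distinction concrete, here is a minimal sketch of two group-fairness metrics computed by hand on toy predictions. The data, group labels, and variable names are illustrative assumptions; AIF360's BinaryLabelDatasetMetric class exposes the same quantities as statistical_parity_difference() and disparate_impact().

```python
import numpy as np

# Toy binary predictions with a binary protected attribute
# (1 = privileged group, 0 = unprivileged). All values are made up
# for illustration only.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
protected = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

# Rate of favourable outcomes (prediction == 1) per group
rate_priv = y_pred[protected == 1].mean()    # 0.6
rate_unpriv = y_pred[protected == 0].mean()  # 0.4

# Statistical parity difference:
# P(pred = 1 | unprivileged) - P(pred = 1 | privileged); 0 is ideal.
spd = rate_unpriv - rate_priv

# Disparate impact: ratio of the two rates
# (the "80% rule" flags values below 0.8).
di = rate_unpriv / rate_priv

print(f"SPD: {spd:.2f}, DI: {di:.2f}")  # SPD: -0.20, DI: 0.67
```

A model can score well on accuracy while these group-level metrics reveal a skew in who receives favourable predictions, which is exactly the gap the repositories below explore.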
Responsible AI Masterclass (June 2024 Run)
This repo contains two Jupyter notebooks that demonstrate how to identify and correct algorithmic bias in machine learning models using Python and Julia.
This notebook represents my personal code, notes, and reflections for the Manning liveProject titled "Mitigate Machine Learning Bias: Shap and AIF360" by Michael McKenna. Any citations or references to original course material retain the original author copyright and ownership. Personal code is licensed under the MIT License.
Evaluating Fairness in Machine Learning: Comparative Analysis of Fairlearn and AIF360
Gender classification model that uses a CNN to classify images of faces as male or female. The notebook includes code for data preprocessing, model architecture, training, and evaluation, which is then used for algorithmic bias detection.
Visualising Accuracy vs. Fairness in ML models using AIF360 tools and dataset
Building Fair AI models tutorial at PyData Berlin / REVISION 2018