jolares / ai-ethics-fairness-and-bias

Sample project using IBM's AI Fairness 360, an open source toolkit for detecting, examining, and mitigating discrimination and bias in machine learning (ML) models throughout the AI application lifecycle.


Example AI Ethics Fairness Practices

TODO: Link to Blog Post Workshop

References

  • IBM's AI Fairness 360: This extensible open source toolkit can help you examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle.

  • Google's What-If Tool: Using WIT, you can test model performance in hypothetical situations, analyze the importance of different data features, and visualize model behavior across multiple models, subsets of input data, and different ML fairness metrics.

  • Georgia Institute of Technology's CS 6603: AI, Ethics, and Society.

  • UC Berkeley's Algorithmic Fairness & Opacity lecture series.
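To make the kind of group-fairness measurements these toolkits report concrete, here is a minimal pure-NumPy sketch of two common metrics, statistical parity difference and disparate impact. The function names and toy data are illustrative only and are not the AIF360 API; AIF360 computes these (and many more) through its dataset and metric classes.

```python
import numpy as np

def statistical_parity_difference(labels, protected):
    """P(favorable | unprivileged) - P(favorable | privileged).

    labels: 1 = favorable outcome, 0 = unfavorable.
    protected: 1 = privileged group, 0 = unprivileged group.
    A value of 0 indicates parity; negative values indicate the
    unprivileged group receives favorable outcomes less often.
    """
    labels = np.asarray(labels, dtype=float)
    protected = np.asarray(protected)
    rate_unpriv = labels[protected == 0].mean()
    rate_priv = labels[protected == 1].mean()
    return rate_unpriv - rate_priv

def disparate_impact(labels, protected):
    """Ratio of favorable-outcome rates, unprivileged / privileged.

    A common rule of thumb (the "80% rule") flags values below 0.8.
    """
    labels = np.asarray(labels, dtype=float)
    protected = np.asarray(protected)
    return labels[protected == 0].mean() / labels[protected == 1].mean()

# Toy data: 4 privileged and 4 unprivileged individuals.
protected = [1, 1, 1, 1, 0, 0, 0, 0]
labels    = [1, 1, 1, 0, 1, 0, 0, 0]  # favorable rates: 0.75 vs 0.25

spd = statistical_parity_difference(labels, protected)
di = disparate_impact(labels, protected)
print(f"statistical parity difference: {spd:.2f}")  # -0.50
print(f"disparate impact: {di:.2f}")                # 0.33
```

Mitigation algorithms in AIF360 (e.g. reweighing) then adjust the training data or model so that metrics like these move toward their fair values (0 for statistical parity difference, 1 for disparate impact).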
