k1c / biasly

AI for Social Good Final Project

biasly

We are a team taking part in the Artificial Intelligence for Social Good Lab, a 6-week program that teaches a cohort of 25 women from across Canada the essential machine learning concepts and programming skills we need to build a prototype for a cause we deem important. We have chosen to build a model that detects gender bias in sentences.

Implicit gender bias refers to the automatic assumptions we make about people based on gender. These assumptions are often shaped by societal standards and by the influence of culture and the media. We are investigating the ways implicit gender bias materializes in text. For example, it may appear when someone assumes a person's gender from their profession or from other cues in a sentence, or when a mixed group of people is addressed as “guys.”

Currently, no data set of sentences labeled for gender bias exists. This is a project we would like to see come to fruition, so we have undertaken the task of rapidly crowdsourcing labels for our data, marking each sentence as either gender biased or unbiased.
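To make the labeling task concrete, here is a minimal sketch of what training on such crowdsourced labels could look like. This is an illustration only, not our actual pipeline: the example sentences, their labels, and the TF-IDF + logistic-regression baseline are all assumptions made for demonstration.

```python
# Minimal sketch, NOT the project's actual model: a TF-IDF bag-of-words
# representation feeding a logistic-regression binary classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical crowdsourced labels: 1 = gender biased, 0 = unbiased.
sentences = [
    "The nurse said she would be right back.",   # gender assumed from profession
    "Hey guys, the meeting starts at noon.",     # mixed group addressed as "guys"
    "The engineer reviewed the design.",
    "Everyone is welcome to join the workshop.",
]
labels = [1, 1, 0, 0]

# TF-IDF turns each sentence into a sparse word-weight vector;
# logistic regression then learns a linear decision boundary over it.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(sentences, labels)

print(model.predict(["My doctor told me he was running late."]))
```

A real model would need far more labeled data than this toy set and careful evaluation, which is exactly why the crowdsourced labels matter.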

Thank you for your contribution to our project and the machine learning community! Please feel free to retake the survey as many times as you would like; it takes approximately 3 minutes to complete.

Help us label our data here: http://biaslyAI.com/


Languages

Jupyter Notebook 99.5%, Python 0.5%