sueqian6 / ACL2019-Reducing-Gender-Bias-in-Word-Level-Language-Models-Using-A-Gender-Equalizing-Loss-Function

This is a project for DS-GA 1012 Natural Language Understanding and Computational Semantics at New York University. The paper has been accepted to appear at the ACL 2019 Student Research Workshop.


Reducing-Gender-Bias-in-Language-Models

Introduction

This is a project for DS-GA 1012 Natural Language Understanding and Computational Semantics at New York University. Our goal is to reduce gender bias in word-level language models. The paper has been accepted to appear at the ACL 2019 Student Research Workshop, and we have also been invited to present at the 1st Workshop on Gender Bias in NLP at ACL 2019.

Paper link: https://arxiv.org/abs/1905.12801
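The idea behind the gender-equalizing loss is to penalize the gap between the probabilities a language model assigns to matched gendered word pairs, on top of the usual language-modeling loss. The sketch below is illustrative only (it is not the code from this repo): the pair list, the squared-difference form of the penalty, and the weight `lam` are all assumptions for demonstration.

```python
import math

# Example gendered word pairs; the actual pair list used in the paper may differ.
GENDER_PAIRS = [("he", "she"), ("him", "her"), ("man", "woman")]

def equalizing_loss(word_probs, pairs=GENDER_PAIRS):
    """Mean squared difference of log-probabilities over gendered pairs.

    word_probs: dict mapping a word to the probability the model
    assigns it at some prediction step.
    """
    diffs = [
        (math.log(word_probs[m]) - math.log(word_probs[f])) ** 2
        for m, f in pairs
    ]
    return sum(diffs) / len(diffs)

def total_loss(lm_loss, word_probs, lam=0.5):
    # lam is a placeholder weighting hyperparameter, not the paper's value.
    return lm_loss + lam * equalizing_loss(word_probs)
```

When the model assigns equal probability to each pair, the penalty vanishes and the total loss reduces to the plain language-modeling loss.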

Dataset

We use the Daily Mail stories from the DMQA dataset: https://cs.nyu.edu/~kcho/DMQA/. We use 5% of the full dataset; the subsample can be found in Data/Sample Stories.

Authors

  • Yusu Qian *

  • Urwa Muaz *

  • Ben Zhang

  • Jae Won Hyun

  • '*' denotes equal contribution

Acknowledgments

  • Hat tip to anyone whose code was used
  • Thanks to Professor Sam Bowman and Shikha Bordia who gave us lots of advice on the project and paper
  • Thanks to Tian Liu and Qianyi Fan for proofreading our paper


