NoviScl / BiasInNLP


Bias in NLP paper list

It also includes some work related to Social Media & Computational Social Science.

Some people to follow

Yulia Tsvetkov (CMU) [pubs]

Rachel Rudinger (UMD) [pubs]

Mark Yatskar (UPenn) [pubs]

Kai-Wei Chang (UCLA) [pubs]

Dan Jurafsky (Stanford) [pubs]

David Jurgens (UMich) [pubs]

Hal Daumé III (UMD) [pubs]

Dirk Hovy (Bocconi) [pubs]

Emily Bender (UW) [pubs]

The following list is mainly categorised by type of bias, but many papers cover multiple types of bias, and in some cases the proposed methods are generalizable. Under each category, the papers are listed in chronological order.

Gender Bias

  1. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, Adam Kalai. NeurIPS 2016. [pdf]

  2. Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, Kai-Wei Chang. EMNLP 2017. [pdf]

  3. Gender Bias in Coreference Resolution. Rachel Rudinger, Jason Naradowsky, Brian Leonard, Benjamin Van Durme. NAACL 2018. [pdf]

  4. Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, Kai-Wei Chang. NAACL 2018. [pdf]

  5. Examining Gender and Race Bias in Two Hundred Sentiment Analysis Systems. Svetlana Kiritchenko, Saif M. Mohammad. *SEM 2018. [pdf]

  6. Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns. Kellie Webster, Marta Recasens, Vera Axelrod, Jason Baldridge. TACL 2018. [pdf]

  7. Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them. Hila Gonen and Yoav Goldberg. NAACL 2019. [pdf]

  8. Measuring Bias in Contextualized Word Representations. Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, Yulia Tsvetkov. Workshop@ACL 2019. [pdf]

  9. Gender Bias in Contextualized Word Embeddings. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, Kai-Wei Chang. NAACL 2019. [pdf]

  10. Toward Gender-Inclusive Coreference Resolution. Yang Trista Cao, Hal Daumé III. ACL 2020. [pdf]

  11. Fair Is Better than Sensational: Man Is to Doctor as Woman Is to Doctor. Malvina Nissim, Rik van Noord, Rob van der Goot. CL 2020. [pdf]

  12. Investigating Gender Bias in BERT. Rishabh Bhardwaj, Navonil Majumder, Soujanya Poria. arXiv 2020. [pdf]

Other Social Bias (including Racial, Religious, Name, Demographic, Age, etc.)

  1. Semantics derived automatically from language corpora necessarily contain human biases. Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. Science 2017. [pdf]

  2. Black is to Criminal as Caucasian is to Police: Detecting and Removing Multiclass Bias in Word Embeddings. Thomas Manzini, Lim Yao Chong, Alan W Black, Yulia Tsvetkov. NAACL 2019. [pdf]

  3. Social Bias in Elicited Natural Language Inferences. Rachel Rudinger, Chandler May, Benjamin Van Durme. Workshop @ ACL 2017. [pdf]

  4. On Measuring Social Biases in Sentence Encoders. Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, Rachel Rudinger. NAACL 2019. [pdf]

  5. Understanding the Origins of Bias in Word Embeddings. Marc-Etienne Brunet, Colleen Alkalay-Houlihan, Ashton Anderson, Richard Zemel. ICML 2019. [pdf]

  6. Assessing Social and Intersectional Biases in Contextualized Word Representations. Yi Chern Tan, L. Elisa Celis. NeurIPS 2019. [pdf]

  7. SOCIAL BIAS FRAMES: Reasoning about Social and Power Implications of Language. Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, Yejin Choi. ACL 2020. [pdf]

  8. “You are grounded!”: Latent Name Artifacts in Pre-trained Language Models. Vered Shwartz, Rachel Rudinger, Oyvind Tafjord. EMNLP 2020. [pdf]

  9. Assessing Demographic Bias in Named Entity Recognition. Shubhanshu Mishra, Sijun He, Luca Belli. Workshop @ AKBC 2020. [pdf]

  10. StereoSet: Measuring stereotypical bias in pretrained language models. Moin Nadeem, Anna Bethke, Siva Reddy. arXiv 2020. [pdf]

  11. CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models. Nikita Nangia, Clara Vania, Rasika Bhalerao, Samuel R. Bowman. EMNLP 2020. [pdf]

Survey

  1. The Social Impact of Natural Language Processing. Dirk Hovy, Shannon L. Spruit. ACL 2016. [pdf]

  2. Mitigating Gender Bias in Natural Language Processing: Literature Review. Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, William Yang Wang. ACL 2019. [pdf]

  3. Language (Technology) is Power: A Critical Survey of “Bias” in NLP. Su Lin Blodgett, Solon Barocas, Hal Daumé III, Hanna Wallach. ACL 2020. [pdf]

  4. Predictive Biases in Natural Language Processing Models: A Conceptual Framework and Overview. Deven Shah, H. Andrew Schwartz, Dirk Hovy. ACL 2020. [pdf]
