I am a Ph.D. candidate at the School of Information at the University of Texas at Austin, co-advised by Dr. Matt Lease and Dr. Jessy Li. I am part of the Laboratory for Artificial Intelligence and Human-Centered Computing (AI&HCC) and affiliated with the UT NLP Group. During my Ph.D., I have also interned at Amazon Alexa Responsible AI Research, Cisco Responsible AI Research, and the Max Planck Institute for Informatics, where I worked with Dr. Gerhard Weikum.
Before joining the Ph.D. program, I worked as a Software Engineer at Microsoft and as a Decision Scientist at Mu Sigma. I received my Bachelor of Engineering degree in Computer Science and Technology from IIEST, Shibpur.
I am on the job market, looking for academic or industry postdoc positions for Fall 2024.
I am interested in the intersection of Natural Language Processing and Human-Computer Interaction, specifically focused on developing NLP technologies that complement the capabilities of human experts. My work centers on three key thrusts of research:
- Human-Centered NLP: How can we identify stakeholder needs for the practical adoption of NLP applications? How can we evaluate whether NLP applications are meeting those needs? How can research in human-centered NLP help push forward basic NLP research? How can we align NLP models to effectively complement human experts in critical fields? [Preprint] [IPM Journal]
- Interpretable NLP Models: How can we build NLP models that help stakeholders understand their inner workings? How can we effectively evaluate interpretable models? How can we use insights from interpretable models to steer generative model outputs? How can we build interpretable models that promote responsible and productive human-AI partnerships? [ACL'22] [IPM Journal]
- Responsible Language Technologies: How can we detect and mitigate potential harms caused by language technologies? How can we make these models behave responsibly and avoid perpetuating societal biases? How can we protect workers who contribute to data collection for AI? [FnTIR Journal] [HCOMP'20] [ASIS&T'19]
- I spent Fall 2023 as a research intern on the Cisco Responsible AI research team, working on evaluating interpretable NLP models.
- I spent Summer 2023 on the Amazon Alexa Responsible AI team, working on developing interpretable NLP models.
- Our paper on human-centered NLP for fact-checking was published in a special issue of the Information Processing & Management (IPM) journal (Impact Factor: 6.222). [Arxiv]
- Our paper on explaining black-box NLP models with case-based reasoning was accepted to ACL 2022. [arxiv] [code]
- Our paper on interactive AI for fact-checking was accepted to ACM CHIIR 2022. [arxiv]
- The state of human-centered NLP technology for fact-checking
- Invited talk at the Information Processing & Management Conference 2022 for our journal paper
- ProtoTEx: Explaining Model Decisions with Prototype Tensors [Video]
- Explainable AI Group, 09/29/2022
- Research Colloquium, UT Austin, iSchool, 09/20/2022
- iSchools European Doctoral Seminar Series, 09/16/2022
- Amazon Science Clarify Team, 05/17/2022
- NEC Laboratories Europe, 06/09/2022
- You Are What You Tweet: Profiling Users by Past Tweets to Improve Hate Speech Detection. [Video]
- Presented on behalf of Prateek Chaudhry and Matthew Lease at the iConference 2022.
- ExFacto: An Explainable Fact-Checking Tool [slides] [Video]
- The Knight Research Network (KRN) Demo Day at the Center for Informed Democracy & Social-cybersecurity (IDeaS) at Carnegie Mellon University, 10/13/2021
- Commercial Content Moderation and Worker Well-Being. [Video]
- TxHCI, a seminar organized by HCI researchers across universities in Texas, 10/02/2020
- Invited talk - Amazon AWS Science, 10/14/2020
- Invited talk - Amazon Human-in-the-loop (HILL) services team, 10/23/2020
- AAAI HCOMP 2020
- CobWeb: A Research Prototype for Exploring User Bias in Political Fact-Checking [Slides]
- Presented at the SIGIR 2019 Workshop on Fairness, Accountability, Confidentiality, Transparency, and Safety in Information Retrieval (FACTS-IR)
Full list of publications on Google Scholar. (* = equal contribution)
- Human-centered NLP Fact-checking: Co-Designing with Fact-checkers using Matchmaking for AI
Houjiang Liu*, Anubrata Das*, Alexander Boltz*, Didi Zhou, Daisy Pinaroc, Matthew Lease, Min Kyung Lee
arXiv preprint
- The state of human-centered NLP technology for fact-checking [Arxiv]
Anubrata Das, Houjiang Liu, Venelin Kovatchev, and Matthew Lease.
Information Processing & Management 60, no. 2 (2023): 103219.
- ProtoTEx: Explaining Model Decisions with Prototype Tensors [code] | [slides] | [Talk] | [Poster]
Anubrata Das*, Chitrank Gupta*, Venelin Kovatchev, Matthew Lease, and Junyi Jessy Li.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL), 2022.
- The Effects of Interactive AI Design on User Behavior: An Eye-tracking Study of Fact-checking COVID-19 Claims
Li Shi, Nilavra Bhattacharya, Anubrata Das, Matthew Lease, and Jacek Gwizdka.
In Proceedings of the 7th ACM SIGIR Conference on Human Information Interaction and Retrieval (CHIIR), 2022.
- Fairness in Information Access Systems
Michael D. Ekstrand, Anubrata Das, Robin Burke, Fernando Diaz
Foundations and Trends in Information Retrieval, 2022
- Fast, Accurate, and Healthier: Interactive Blurring Helps Moderators Reduce Exposure to Harmful Content
Anubrata Das, Brandon Dang, and Matthew Lease
AAAI Conference on Human Computation and Crowdsourcing (HCOMP), 2020
- Dataset bias: A case study for visual question answering
Anubrata Das, Samreen Anjum and Danna Gurari
Proceedings of the Association for Information Science and Technology 56, no. 1 (2019): 58-67.
- CobWeb: A Research Prototype for Exploring User Bias in Political Fact-Checking
Anubrata Das, Kunjan Mehta, and Matthew Lease
FACTS-IR Workshop, SIGIR 2019. [slides]
- A Conceptual Framework for Evaluating Fairness in Search
Anubrata Das and Matthew Lease
arXiv preprint arXiv:1907.09328 (2019)
- Interactive information crowdsourcing for disaster management using SMS and Twitter: A research prototype
Anubrata Das, Neeratyoy Mallik, Somprakash Bandyopadhyay, Sipra Das Bit, and Jayanta Basak
IEEE International Conference on Pervasive Computing and Communication Workshops (PerCom Workshops), 2016
- Predicting trends in the Twitter social network: A machine learning approach
Anubrata Das, Moumita Roy, Soumi Dutta, Saptarshi Ghosh, and Asit Kumar Das
In International Conference on Swarm, Evolutionary, and Memetic Computing, Springer, Cham, 2014.