Utah NLP's repositories
layer_augmentation
Implementation of the NLI model in our ACL 2019 paper: Augmenting Neural Networks with First-order Logic.
consistency
Implementation of models in our EMNLP 2019 paper: A Logic-Driven Framework for Consistency of Neural Models
BERT-fine-tuning-analysis
The codebase for the paper: A Closer Look at How Fine-tuning Changes BERT
infotabs-code
Implementation of the semi-structured inference model in our ACL 2020 paper, INFOTABS: Inference on Tables as Semi-structured Data.
knowledge_infotabs
Code for our NAACL 2021 paper: Incorporating External Knowledge to Enhance Tabular Reasoning
structured_tuning_srl
Implementation of our ACL 2020 paper: Structured Tuning for Semantic Role Labeling
therapist-observer
Code for the ACL 2019 paper "Observing Dialogue in Therapy: Categorizing and Forecasting Behavioral Codes"
layer_augmentation_qa
Implementation of the machine comprehension model in our ACL 2019 paper: Augmenting Neural Networks with First-order Logic.
learning-constraints
Experiments for our ACL 2020 paper: Learning Constraints for Structured Prediction Using Rectifier Networks
bert-therapy
Transformer-based observers in psychotherapy
neural-logic
Code for the paper: Evaluating Relaxations of Logic for Neural Networks: A Comprehensive Study (IJCAI 2021)
word-salad
Code for our AAAI 2021 paper: BERT & Family Eat Word Salad: Experiments with Text Understanding
weak-verifiers
This repository contains the data and implementation for the ACL'23 Findings paper: "Verifying Annotation Agreement without Multiple Experts: A Case Study with Gujarati SNACS"
evidence-tabularNLI-code
Code for ACL 2022 paper
nlp.cs.utah.edu
The official website for the UtahNLP group
scaling_robustness
Code for reproducing experiments from our NAACL 2024 paper: Whispers of Doubt Amidst Echoes of Triumph in NLP Robustness (https://arxiv.org/abs/2311.09694)
unqover
Code for our Findings of EMNLP 2020 paper: UnQovering Stereotyping Biases via Underspecified Questions
utahnlp_logo
Logo candidates for the Utah NLP team