acl18_results

Code to reproduce results in our ACL 2018 paper "Did the Model Understand the Question?"

Adversarial Attacks on Question Answering Models

Code to reproduce the results in the following paper:

Mudrakarta, Pramod Kaushik, Ankur Taly, Mukund Sundararajan, and Kedar Dhamdhere. "Did the Model Understand the Question?" ACL 2018.

@article{mudrakarta2018did,
  title={Did the Model Understand the Question?},
  author={Mudrakarta, Pramod Kaushik and Taly, Ankur and Sundararajan, Mukund and Dhamdhere, Kedar},
  journal={arXiv preprint arXiv:1805.05492},
  year={2018}
}

Setup

Clone the repository, including its submodules, using:

git clone https://github.com/pramodkaushik/acl18_results.git --recursive
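The --recursive flag also fetches the repository's git submodules. If you have already cloned without it, the submodules can be initialized afterwards with the standard git command:

git submodule update --init --recursive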

Code for experiments

Attacks on Neural Programmer are in the np_analysis folder, and attacks on visual question answering are in visual_qa_analysis. Code for computing attributions via Integrated Gradients and for reproducing the experiments is provided as Jupyter notebooks in both directories.
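As background, Integrated Gradients (Sundararajan et al., 2017) attributes a model's prediction to its input features by averaging gradients along a straight-line path from a baseline input to the actual input, then scaling by the input difference. The sketch below is a generic NumPy illustration of the method, not code from this repository; the helper name integrated_gradients, the steps parameter, and the toy quadratic model f / grad_f are assumptions made for the example.

import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=50):
    # Riemann-sum approximation of the path integral of gradients
    # along the straight line from `baseline` to the input `x`.
    alphas = np.linspace(0.0, 1.0, steps + 1)[1:]        # interpolation coefficients
    path = baseline + alphas[:, None] * (x - baseline)   # points along the path
    grads = np.stack([grad_f(p) for p in path])          # gradient at each point
    return (x - baseline) * grads.mean(axis=0)           # scale averaged gradient

# Toy differentiable "model": f(x) = sum(x_i^2), with analytic gradient 2x.
# (Illustrative stand-in; the paper applies IG to Neural Programmer and VQA models.)
f = lambda x: np.sum(x ** 2)
grad_f = lambda x: 2.0 * x

x = np.array([1.0, -2.0, 3.0])
baseline = np.zeros_like(x)
attributions = integrated_gradients(grad_f, x, baseline)

# Completeness axiom: attributions sum (approximately) to f(x) - f(baseline).
print(attributions, attributions.sum(), f(x) - f(baseline))

The baseline is typically an all-zero or otherwise "neutral" input, and the approximation quality depends on the baseline choice and the number of interpolation steps.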

Contact

Pramod Kaushik Mudrakarta

pramodkm@uchicago.edu
