meet-cjli / BackdoorNLP-Papers


BackdoorNLP-Papers

Contents

1. Attack Papers

  1. Weight Poisoning Attacks on Pre-trained Models. Keita Kurita, Paul Michel, Graham Neubig. ACL 2020. [pdf] [code]
  2. BadNL: Backdoor Attacks Against NLP Models. Xiaoyi Chen, Ahmed Salem, Michael Backes, Shiqing Ma, Yang Zhang. Preprint. [pdf]
  3. Trojaning Language Models for Fun and Profit. Xinyang Zhang, Zheng Zhang, Ting Wang. Preprint. [pdf]

2. Defense Papers

None yet.

3. Natural Backdoor Papers

  1. Universal Adversarial Triggers for Attacking and Analyzing NLP. Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, Sameer Singh. EMNLP 2019. [pdf] [code]
  2. Universal Adversarial Attacks with Natural Triggers for Text Classification. Liwei Song, Xinwei Yu, Hsuan-Tung Peng, Karthik Narasimhan. Preprint. [pdf] [code]
