adzhua / ReadingList

A list of research resources that I've appreciated.

Reading List

The conference deadline is the primary productive force.

Relational Triple Extraction

  1. GraphRel: Modeling Text as Relational Graphs for Joint Entity and Relation Extraction. Tsu-Jui Fu, Peng-Hsuan Li, and Wei-Yun Ma. ACL 2019. pdf code
  2. Entity-Relation Extraction as Multi-Turn Question Answering. Xiaoya Li, Fan Yin, Zijun Sun, Xiayu Li, Arianna Yuan, Duo Chai, Mingxin Zhou, Jiwei Li. ACL 2019. pdf code
  3. Joint Extraction of Entities and Overlapping Relations Using Position-Attentive Sequence Labeling. Dai Dai, Xinyan Xiao, Yajuan Lyu, Shan Dou, Qiaoqiao She, Haifeng Wang. AAAI 2019. pdf
  4. A Hierarchical Framework for Relation Extraction with Reinforcement Learning. Ryuichi Takanobu, Tianyang Zhang, Jiexi Liu, Minlie Huang. AAAI 2019. pdf code
  5. An Attention-based Model for Joint Extraction of Entities and Relations with Implicit Entity Features. Yan Zhou, Longtao Huang, Tao Guo, Songlin Hu, Jizhong Han. WWW 2019 Companion. pdf
  6. Extracting Relational Facts by an End-to-End Neural Model with Copy Mechanism. Xiangrong Zeng, Daojian Zeng, Shizhu He, Kang Liu, Jun Zhao. ACL 2018. pdf code
  7. Extracting Entities and Relations with Joint Minimum Risk Training. Changzhi Sun, Yuanbin Wu, Man Lan, Shiliang Sun. EMNLP 2018. pdf code
  8. Adversarial training for multi-context joint entity and relation extraction. Giannis Bekoulis, Johannes Deleu, Thomas Demeester, Chris Develder. EMNLP 2018. pdf code
  9. Joint Extraction of Entities and Relations Based on a Novel Graph Scheme. Shaolei Wang, Yue Zhang, Wanxiang Che, Ting Liu. IJCAI 2018. pdf code
  10. Heterogeneous Supervision for Relation Extraction: A Representation Learning Approach. Liyuan Liu, Xiang Ren, Qi Zhu, Shi Zhi, Huan Gui, Heng Ji, Jiawei Han. EMNLP 2017. pdf code
  11. End-to-End Neural Relation Extraction with Global Optimization. Meishan Zhang, Yue Zhang, Guohong Fu. EMNLP 2017. pdf
  12. Joint Extraction of Entities and Relations Based on a Novel Tagging Scheme. Suncong Zheng, Feng Wang, Hongyun Bao, Yuexing Hao, Peng Zhou, Bo Xu. ACL 2017. pdf code
  13. Going out on a limb: Joint Extraction of Entity Mentions and Relations without Dependency Trees. Arzoo Katiyar, Claire Cardie. ACL 2017. pdf
  14. CoType: Joint Extraction of Typed Entities and Relations with Knowledge Bases. Xiang Ren, Zeqiu Wu, Wenqi He, Meng Qu, Clare R. Voss, Heng Ji, Tarek F. Abdelzaher, Jiawei Han. WWW 2017. pdf code
  15. End-to-End Relation Extraction using LSTMs on Sequences and Tree Structures. Makoto Miwa, Mohit Bansal. ACL 2016. pdf code
  16. Improved Relation Extraction with Feature-Rich Compositional Embedding Models. Matthew R. Gormley, Mo Yu, Mark Dredze. EMNLP 2015. pdf
  17. Modeling Joint Entity and Relation Extraction with Table Representation. Makoto Miwa, Yutaka Sasaki. EMNLP 2014. pdf code
  18. Incremental Joint Extraction of Entity Mentions and Relations. Qi Li, Heng Ji. ACL 2014. pdf
  19. Knowledge-Based Weak Supervision for Information Extraction of Overlapping Relations. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, Daniel S. Weld. ACL 2011. pdf code
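Several entries above (notably #12, Zheng et al. 2017) cast joint extraction as sequence tagging, where each tag packs a boundary marker (B/I/E/S), a relation type, and an entity role (1 = subject, 2 = object). A minimal sketch of how such tags might be decoded back into triples — the tag names, the `CP` relation label, and the nearest-pairing heuristic are all illustrative, not the paper's exact scheme:

```python
# Toy decoder for a unified tagging scheme in the style of Zheng et al.
# (ACL 2017): tags look like "B-CP-1" = Begin, relation CP, entity role 1.
def decode_triples(tokens, tags):
    """Pair role-1 and role-2 entity spans that share a relation type."""
    spans = {}                      # relation -> {role: [entity strings]}
    current, rel, role = [], None, None
    for tok, tag in zip(tokens, tags):
        if tag == "O":
            current, rel, role = [], None, None
            continue
        pos, r, k = tag.split("-")
        if pos in ("B", "S"):       # start a fresh entity span
            current, rel, role = [tok], r, k
        elif pos in ("I", "E") and r == rel and k == role:
            current.append(tok)     # extend the current span
        else:                       # inconsistent tag: drop the span
            current, rel, role = [], None, None
            continue
        if pos in ("E", "S"):       # span finished: record it
            spans.setdefault(rel, {}).setdefault(role, []).append(" ".join(current))
            current, rel, role = [], None, None
    # naive pairing heuristic: zip subjects with objects per relation
    return [(s, r, o) for r, d in spans.items()
            for s, o in zip(d.get("1", []), d.get("2", []))]

tokens = ["Chicago", "is", "located", "in", "the", "United", "States"]
tags = ["S-CP-1", "O", "O", "O", "O", "B-CP-2", "E-CP-2"]
print(decode_triples(tokens, tags))  # → [('Chicago', 'CP', 'United States')]
```

The pairing step is where overlapping-triple papers in this list (e.g. the copy-mechanism and position-attentive approaches) differ most.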

General Seq2Seq (NER, POS tagging, slot filling, etc.)

  1. GraphIE: A Graph-Based Framework for Information Extraction. Yujie Qian, Enrico Santus, Zhijing Jin, Jiang Guo, Regina Barzilay. NAACL 2019. pdf code
  2. FAIRSEQ: A Fast, Extensible Toolkit for Sequence Modeling. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli. NAACL 2019. pdf code
  3. BERT for Joint Intent Classification and Slot Filling. Qian Chen, Zhu Zhuo, Wen Wang. Preprint 2019. pdf code
  4. QA4IE: A Question Answering based Framework for Information Extraction. Lin Qiu, Hao Zhou, Yanru Qu, Weinan Zhang, Suoheng Li, Shu Rong, Dongyu Ru, Lihua Qian, Kewei Tu, Yong Yu. ISWC 2018. pdf code
  5. OpenTag: Open Attribute Value Extraction from Product Profiles. Guineng Zheng, Subhabrata Mukherjee, Xin Luna Dong, Feifei Li. SIGKDD 2018. pdf
  6. Deep Active Learning for Named Entity Recognition. Yanyao Shen, Hyokun Yun, Zachary C. Lipton, Yakov Kronrod, Animashree Anandkumar. ICLR 2018. pdf
  7. Semi-Supervised Sequence Modeling with Cross-View Training. Kevin Clark, Minh-Thang Luong, Christopher D. Manning, Quoc V. Le. EMNLP 2018. pdf code
  8. Design Challenges and Misconceptions in Neural Sequence Labeling. Jie Yang, Shuailong Liang, Yue Zhang. COLING 2018. pdf code
  9. Semi-supervised sequence tagging with bidirectional language models. Matthew E. Peters, Waleed Ammar, Chandra Bhagavatula, Russell Power. ACL 2017. pdf code
  10. Semi-supervised Multitask Learning for Sequence Labeling. Marek Rei. ACL 2017. pdf code
  11. Leveraging Sentence-level Information with Encoder LSTM for Semantic Slot Filling. Gakuto Kurata, Bing Xiang, Bowen Zhou, Mo Yu. EMNLP 2016. pdf
  12. Attending to Characters in Neural Sequence Labeling Models. Marek Rei, Gamal K. O. Crichton, Sampo Pyysalo. COLING 2016. pdf code
  13. Neural Architectures for Named Entity Recognition. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer. NAACL 2016. pdf code
  14. End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF. Xuezhe Ma, Eduard Hovy. ACL 2016. pdf code
  15. Supertagging with LSTMs. Ashish Vaswani, Yonatan Bisk, Kenji Sagae, Ryan Musa. NAACL 2016. pdf
  16. Understanding LSTM Networks. Christopher Olah. Blog 2016. blog
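Many of the taggers above (e.g. Lample et al. 2016; Ma & Hovy 2016) put a CRF layer on top of a BiLSTM and decode with Viterbi over a learned transition matrix at inference time. A pure-Python sketch of that decoding step, with made-up scores standing in for a trained model's outputs:

```python
# Viterbi decoding for a linear-chain CRF, as used at inference time in
# BiLSTM-CRF sequence labelers. Scores here are invented for illustration;
# a real model emits them from a BiLSTM plus learned transitions.
def viterbi(emissions, transitions):
    """emissions: [T][K] per-token tag scores; transitions[i][j]: tag i -> tag j."""
    K = len(emissions[0])
    score = list(emissions[0])      # best path score ending in each tag
    back = []                       # backpointers, one list of K per later step
    for emit in emissions[1:]:
        new_score, ptrs = [], []
        for j in range(K):
            # best previous tag i to transition into current tag j
            best_i = max(range(K), key=lambda i: score[i] + transitions[i][j])
            new_score.append(score[best_i] + transitions[best_i][j] + emit[j])
            ptrs.append(best_i)
        score = new_score
        back.append(ptrs)
    # backtrack from the best final tag
    best = max(range(K), key=lambda j: score[j])
    path = [best]
    for ptrs in reversed(back):
        path.append(ptrs[path[-1]])
    path.reverse()
    return path

# Two tags (0 = O, 1 = ENT); transitions discourage O -> ENT jumps, so the
# decoder prefers a consistent all-ENT path over a greedy per-token choice.
emissions = [[2.0, 1.0], [0.5, 1.5], [0.4, 1.6]]
transitions = [[0.5, -0.5], [-0.2, 0.8]]
print(viterbi(emissions, transitions))  # → [1, 1, 1]
```

Note that per-token argmax would pick tag 0 at the first step; the transition scores are exactly what makes the joint decoding differ from independent classification.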

Relation Classification/Extraction

  1. Fine-tuning Pre-Trained Transformer Language Models to Distantly Supervised Relation Extraction. Christoph Alt, Marc Hübner, Leonhard Hennig. ACL 2019. pdf code
  2. Cross-relation Cross-bag Attention for Distantly-supervised Relation Extraction. Yujin Yuan, Liyuan Liu, Siliang Tang, Zhongfei Zhang, Yueting Zhuang, Shiliang Pu, Fei Wu, Xiang Ren. AAAI 2019. pdf code
  3. Hierarchical Relation Extraction with Coarse-to-Fine Grained Attention. Xu Han, Pengfei Yu, Zhiyuan Liu, Maosong Sun, Peng Li. EMNLP 2018. pdf code
  4. Multi-Task Transfer Learning for Weakly-Supervised Relation Extraction. Jing Jiang. ACL 2009. pdf

Event Detection/Extraction/Prediction

  1. COMET: Commonsense Transformers for Automatic Knowledge Graph Construction. Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, Yejin Choi. ACL 2019. pdf code
  2. ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning. Maarten Sap, Ronan LeBras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, Yejin Choi. AAAI 2019. pdf demo
  3. Collective Event Detection via a Hierarchical and Bias Tagging Networks with Gated Multi-level Attention Mechanisms. Yubo Chen, Hang Yang, Kang Liu, Jun Zhao, Yantao Jia. EMNLP 2018. pdf code
  4. Adaptive Scaling for Sparse Detection in Information Extraction. Hongyu Lin, Yaojie Lu, Xianpei Han, Le Sun. ACL 2018. pdf code
  5. Constructing Narrative Event Evolutionary Graph for Script Event Prediction. Zhongyang Li, Xiao Ding, Ting Liu. IJCAI 2018. pdf code
  6. Constructing and Embedding Abstract Event Causality Networks from Text Snippets. Sendong Zhao, Quan Wang, Sean Massung, Bing Qin, Ting Liu, Bin Wang, ChengXiang Zhai. WSDM 2017. pdf

Transformers

  1. XLNet: Generalized Autoregressive Pretraining for Language Understanding. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. Preprint 2019. pdf code
  2. Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context. Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. ACL 2019. pdf code
  3. Cross-lingual Language Model Pretraining. Guillaume Lample, Alexis Conneau. Preprint 2019. pdf code
  4. Language Models are Unsupervised Multitask Learners. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever. Preprint 2019. pdf code
  5. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. NAACL 2019. pdf code
  6. Improving Language Understanding by Generative Pre-Training. Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever. Preprint 2018. pdf code
  7. Attention Is All You Need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin. NIPS 2017. pdf code
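The operation shared by every model in this section is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ / √d_k) V (Vaswani et al. 2017). A dependency-free sketch on toy 2-dimensional vectors; real implementations batch this over multiple heads with optimized matrix kernels:

```python
# Scaled dot-product attention from "Attention Is All You Need",
# written out on plain Python lists for illustration only.
import math

def softmax(xs):
    m = max(xs)                               # subtract max for stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    d_k = len(K[0])
    out = []
    for q in Q:
        # similarity of this query to every key, scaled by sqrt(d_k)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        w = softmax(scores)                   # attention weights over keys
        # weighted sum of the value vectors
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0]]                              # one query, aligned with key 0
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
print(attention(Q, K, V))                     # weight tilted toward V[0]
```

Because the weights are a softmax, each output row is a convex combination of the value rows, which is why the two components above sum to 10.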

Knowledge Graph System

  1. T2KG: An End-to-End System for Creating Knowledge Graph from Unstructured Text. Natthawut Kertkeidkachorn, Ryutaro Ichise. AAAI 2017 Workshop. pdf demo

Representation learning

  1. PyTorch-BigGraph: A Large-scale Graph Embedding System. Adam Lerer, Ledell Wu, Jiajun Shen, Timothee Lacroix, Luca Wehrstedt, Abhijit Bose, Alex Peysakhovich. SysML 2019. pdf code
  2. One-Shot Relational Learning for Knowledge Graphs. Wenhan Xiong, Mo Yu, Shiyu Chang, Xiaoxiao Guo, William Yang Wang. EMNLP 2018. pdf code

General Multi-task Learning

  1. A Hierarchical Multi-task Approach for Learning Embeddings from Semantic Tasks. Victor Sanh, Thomas Wolf, Sebastian Ruder. AAAI 2019. pdf code
  2. An Overview of Multi-Task Learning in Deep Neural Networks. Sebastian Ruder. Preprint 2017. pdf

Others

  1. Awesome-knowledge-graph. BrambleXu. GitHub Repo. github [recommended]
  2. A Lightweight Information Extraction Model Based on DGCNN and Probabilistic Graphs (基于DGCNN和概率图的轻量级信息抽取模型). 苏剑林 (Su Jianlin). Blog 2019. blog code
  3. A List of Chinese NLP Datasets (中文自然语言处理数据集列表). InsaneLife. GitHub Repo. github
  4. 500 Questions on Deep Learning (深度学习500问). Tan Jiyong. GitHub Repo. github
  5. Deep Learning and NLP, Knowledge Graphs, and Dialogue Systems (深度学习与自然语言处理、知识图谱、对话系统). Li Hanghang. GitHub Repo. github
