
LM-Reasoning-Papers

Collection of papers and resources on Reasoning using Language Models.

Author: Armando Fortes @THU

Contents

  • 👋 Introduction
  • 📄 Papers
  • 🎯 Benchmarks
  • 🔧 Other Resources
  • 👥 Contributing

👋 Introduction

Language models have recently revolutionized the landscape of Natural Language Processing, and scaling them up has been shown to confer several benefits, such as improved performance and sample efficiency. However, increasing model size alone has not proved sufficient for achieving high performance on challenging reasoning tasks, such as solving arithmetic problems or answering commonsense questions. This repository collects papers and resources that explore how the reasoning abilities of language models can be unlocked.

📄 Papers

Surveys

  1. Emergent Abilities of Large Language Models. TMLR 2022.

    Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, William Fedus. [Paper] [Blog], 2022.6

  2. Reasoning with Language Model Prompting: A Survey. Preprint.

    Shuofei Qiao, Yixin Ou, Ningyu Zhang, Xiang Chen, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Huajun Chen. [Paper], 2022.12

  3. Towards Reasoning in Large Language Models: A Survey. Preprint.

    Jie Huang, Kevin Chen-Chuan Chang. [Paper], 2022.12

Techniques

  1. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. NeurIPS 2022.

    Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou. [Paper] [Blog], 2022.1
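
    The pattern is simple enough to show inline. Below is a minimal few-shot chain-of-thought prompt in the style of this paper; the first Q/A pair (the paper's canonical tennis-ball example) demonstrates intermediate reasoning before the final answer, and the resulting string would be sent to any sufficiently capable language model.

```python
# A few-shot chain-of-thought prompt: the in-context example shows the
# intermediate reasoning steps, which the model is expected to imitate
# when answering the new question.
cot_prompt = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. \
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis \
balls. 5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 \
more, how many apples do they have?
A:"""

print(cot_prompt)  # feed this to the language model of your choice
```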

  2. Self-consistency improves chain of thought reasoning in language models. Preprint.

    Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou. [Paper], 2022.3
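
    A minimal sketch of the decoding strategy, where `sample_completion` is a hypothetical stand-in for whatever LM sampling API is available: sample several diverse reasoning paths at nonzero temperature, then take a majority vote over the final answers.

```python
from collections import Counter


def sample_completion(prompt: str, temperature: float) -> str:
    """Hypothetical stand-in for an LM sampling call; plug in any API."""
    raise NotImplementedError


def self_consistency(prompt: str, n: int = 20) -> str:
    # Sample diverse chain-of-thought completions at nonzero temperature.
    finals = []
    for _ in range(n):
        completion = sample_completion(prompt, temperature=0.7)
        # Assumes each completion ends with the marker "The answer is ...".
        if "The answer is" in completion:
            finals.append(completion.rsplit("The answer is", 1)[1].strip(" ."))
    # Marginalize out the reasoning paths: majority vote on final answers.
    return Counter(finals).most_common(1)[0][0] if finals else ""
```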

  3. Iteratively Prompt Pre-trained Language Models for Chain of Thought. EMNLP 2022.

    Boshi Wang, Xiang Deng, Huan Sun. [Paper] [Code], 2022.3

  4. Least-to-most prompting enables complex reasoning in large language models. Preprint.

    Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, Ed Chi. [Paper], 2022.5
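
    A rough sketch of the two-stage procedure, with `generate` as a hypothetical stand-in for an LM call: the model first decomposes the problem into subquestions, then answers them in order, each answer being appended to the context for the next.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for an LM completion call."""
    raise NotImplementedError


def least_to_most(question: str) -> str:
    # Stage 1: ask the model to break the problem into simpler subquestions
    # (in the paper this stage is itself driven by a few-shot prompt).
    decomposition = generate(f'To solve "{question}", we need to first answer:')
    subquestions = [s.strip() for s in decomposition.split("\n") if s.strip()]
    # Stage 2: solve the subquestions sequentially, reusing earlier answers,
    # and finish with the original question.
    context, answer = question, ""
    for sub in subquestions + [question]:
        answer = generate(f"{context}\nQ: {sub}\nA:")
        context += f"\nQ: {sub}\nA: {answer}"
    return answer
```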

  5. Large Language Models are Zero-Shot Reasoners. NeurIPS 2022.

    Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa. [Paper], 2022.5
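
    The paper's trick is a two-stage zero-shot prompt, easy to show directly (again with a hypothetical `generate` stand-in): append "Let's think step by step." to elicit reasoning, then issue a second prompt to extract the answer.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for an LM completion call."""
    raise NotImplementedError


def zero_shot_cot(question: str) -> str:
    # Stage 1: the trigger phrase elicits step-by-step reasoning
    # without any in-context examples.
    stem = f"Q: {question}\nA: Let's think step by step."
    reasoning = generate(stem)
    # Stage 2: a second prompt extracts the final answer from the reasoning.
    return generate(f"{stem}{reasoning}\nTherefore, the answer is")
```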

  6. On the Advance of Making Language Models Better Reasoners. Preprint.

    Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, Weizhu Chen. [Paper], 2022.6

  7. Large Language Models Still Can't Plan. NeurIPS 2022.

    Karthik Valmeekam, Alberto Olmo, Sarath Sreedharan, Subbarao Kambhampati. [Paper] [Code], 2022.6

  8. Solving Quantitative Reasoning Problems with Language Models. NeurIPS 2022.

    Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, Vedant Misra. [Paper] [Blog], 2022.6

  9. Rationale-Augmented Ensembles in Language Models. Preprint.

    Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Denny Zhou. [Paper], 2022.7

  10. Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning. Preprint.

    Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, Ashwin Kalyan. [Project] [Paper] [Code], 2022.9

  11. Ask Me Anything: A simple strategy for prompting language models. Preprint.

    Simran Arora, Avanika Narayan, Mayee F. Chen, Laurel Orr, Neel Guha, Kush Bhatia, Ines Chami, Frederic Sala, Christopher Ré. [Paper] [Code], 2022.10

  12. Language Models are Multilingual Chain-of-Thought Reasoners. Preprint.

    Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, Jason Wei. [Paper], 2022.10

  13. Measuring and Narrowing the Compositionality Gap in Language Models. Preprint.

    Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, Mike Lewis. [Paper], 2022.10

  14. Automatic Chain of Thought Prompting in Large Language Models. Preprint.

    Zhuosheng Zhang, Aston Zhang, Mu Li, Alex Smola. [Paper], 2022.10

  15. ReAct: Synergizing Reasoning and Acting in Language Models. Preprint.

    Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao. [Project] [Paper] [Code] [Blog], 2022.10
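
    A rough sketch of the interleaved loop, where `generate` and `run_tool` are hypothetical stand-ins for the LM and an external tool (e.g., a search API); the Thought/Action/Observation format and the Finish[...] convention follow the paper.

```python
def generate(prompt: str, stop: str) -> str:
    """Hypothetical LM call that stops generating at the given string."""
    raise NotImplementedError


def run_tool(action: str) -> str:
    """Hypothetical tool executor, e.g. a Wikipedia search."""
    raise NotImplementedError


def react(question: str, max_steps: int = 6) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        # The model emits a free-form thought followed by an action.
        step = generate(transcript + "Thought:", stop="Observation:")
        transcript += "Thought:" + step
        if "Finish[" in step:  # the model declares a final answer
            return step.split("Finish[", 1)[1].split("]", 1)[0]
        if "Action:" in step:
            action = step.split("Action:", 1)[1].strip()
            # Ground the next thought in real feedback from the tool.
            transcript += f"Observation: {run_tool(action)}\n"
    return transcript  # fall back to the raw trace if no answer emerged
```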

  16. Mind's Eye: Grounded language model reasoning through simulation. Preprint.

    Ruibo Liu, Jason Wei, Shixiang Shane Gu, Te-Yen Wu, Soroush Vosoughi, Claire Cui, Denny Zhou, Andrew M. Dai. [Paper], 2022.10

  17. Language Models of Code are Few-Shot Commonsense Learners. EMNLP 2022.

    Aman Madaan, Shuyan Zhou, Uri Alon, Yiming Yang, Graham Neubig. [Paper] [Code], 2022.10

  18. Challenging BIG-Bench tasks and whether chain-of-thought can solve them. Preprint.

    Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, Jason Wei. [Paper] [Code], 2022.10

  19. Scaling Instruction-Finetuned Language Models. Preprint.

    Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, Jason Wei. [Paper], 2022.10

  20. Large Language Models Can Self-Improve. Preprint.

    Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, Jiawei Han. [Paper], 2022.10

  21. Retrieval Augmentation for Commonsense Reasoning: A Unified Approach. EMNLP 2022.

    Wenhao Yu, Chenguang Zhu, Zhihan Zhang, Shuohang Wang, Zhuosheng Zhang, Yuwei Fang, Meng Jiang. [Paper] [Code], 2022.10

  22. PAL: Program-aided Language Models. Preprint.

    Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, Graham Neubig. [Project] [Paper] [Code], 2022.11
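
    The core idea runs end to end in a few lines: the LM writes a small Python program as its "reasoning", and the interpreter, not the model, computes the answer. The program below is hand-written for illustration; in PAL it would be generated by the model from the word problem.

```python
# Word problem: "I had $23 and bought 5 bagels at $3 each. How much money
# is left?" In PAL, the LM emits this program instead of prose reasoning.
generated_program = """
money_initial = 23
bagels = 5
bagel_cost = 3
money_spent = bagels * bagel_cost
answer = money_initial - money_spent
"""

namespace: dict = {}
exec(generated_program, namespace)  # offload the arithmetic to Python
print(namespace["answer"])  # -> 8
```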

  23. Unsupervised Explanation Generation via Correct Instantiations. AAAI 2023.

    Sijie Cheng, Zhiyong Wu, Jiangjie Chen, Zhixing Li, Yang Liu, Lingpeng Kong. [Paper], 2022.11

  24. Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks. Preprint.

    Wenhu Chen, Xueguang Ma, Xinyi Wang, William W. Cohen. [Paper] [Code], 2022.11

  25. Complementary Explanations for Effective In-Context Learning. Preprint.

    Xi Ye, Srinivasan Iyer, Asli Celikyilmaz, Ves Stoyanov, Greg Durrett, Ramakanth Pasunuru. [Paper], 2022.11

  26. Distilling Multi-Step Reasoning Capabilities of Large Language Models into Smaller Models via Semantic Decompositions. Preprint.

    Kumar Shridhar, Alessandro Stolfo, Mrinmaya Sachan. [Paper], 2022.12

  27. Teaching Small Language Models to Reason. Preprint.

    Lucie Charlotte Magister, Jonathan Mallinson, Jakub Adamek, Eric Malmi, Aliaksei Severyn. [Paper], 2022.12

  28. MURMUR: Modular Multi-Step Reasoning for Semi-Structured Data-to-Text Generation. Preprint.

    Swarnadeep Saha, Xinyan Velocity Yu, Mohit Bansal, Ramakanth Pasunuru, Asli Celikyilmaz. [Paper], 2022.12

  29. Can Retriever-Augmented Language Models Reason? The Blame Game Between the Retriever and the Language Model. Preprint.

    Parishad BehnamGhader, Santiago Miret, Siva Reddy. [Paper] [Code], 2022.12

  30. Large Language Models are Reasoners with Self-Verification. Preprint.

    Yixuan Weng, Minjun Zhu, Shizhu He, Kang Liu, Jun Zhao. [Paper] [Code], 2022.12

  31. Language Models as Inductive Reasoners. Preprint.

    Zonglin Yang, Li Dong, Xinya Du, Hao Cheng, Erik Cambria, Xiaodong Liu, Jianfeng Gao, Furu Wei. [Paper], 2022.12

  32. Rethinking with Retrieval: Faithful Large Language Model Inference. Preprint.

    Hangfeng He, Hongming Zhang, Dan Roth. [Paper], 2023.1

🎯 Benchmarks

  • Arithmetic Reasoning: GSM8K, SVAMP, ASDiv, AQuA, MAWPS, AddSub, MultiArith, SingleEq, SingleOp, Lila
  • Commonsense Reasoning: CommonsenseQA, StrategyQA, ARC, BoolQ, HotpotQA, OpenBookQA, PIQA
  • Symbolic Reasoning: Coin Flip, Last Letter Concatenation
  • Logical Reasoning: ReClor, LogiQA, ProofWriter
  • Multimodal Reasoning: ScienceQA
  • Others: BIG-bench, ALERT, CONDAQA, SCAN

🔧 Other Resources

  • ThoughtSource: Central and open resource for data and tools related to chain-of-thought reasoning in large language models.
  • LogiTorch: PyTorch-based library for logical reasoning on natural language.

👥 Contributing

  • Add a new paper or update an existing one, and consider which category the work belongs to.
  • Use the same format as the existing entries to describe the work.
  • Link to the paper's abstract (the /abs/ URL if it is an arXiv publication).

Don't worry about making a mistake; it will be fixed for you!
