lointain / prompt-in-context-learning

Providing open-source tools for learning prompt engineering

Home Page: https://github.com/EgoAlpha/prompt-in-context-learning


An Open-Source Engineering Guide for Prompt-in-context-learning from EgoAlpha Lab.

📝 Papers | ⚡️ Playground | 🛠 Prompt Zoo | 🌍 ChatGPT Prompt


⭐️ Shining ⭐️: This is a fresh, daily-updated collection of resources for in-context learning and prompt engineering. As Artificial General Intelligence (AGI) approaches, let's take action and become super learners, positioning ourselves at the forefront of this exciting era and striving for personal and professional greatness.

The resources include:

🎉Papers🎉: The latest papers about in-context learning or prompt engineering.

🎉Playground🎉: Large language models that enable prompt experimentation.

🎉Prompt Zoo🎉: Prompt techniques for leveraging large language models.

🎉ChatGPT Prompt🎉: Prompt examples that can be applied in our work and daily lives.

In the future, there will likely be two types of people on Earth (perhaps even on Mars, but that is a question for Musk): those who enhance their abilities through the use of AI, and those whose jobs are replaced by AI automation.

💎EgoAlpha: Hello! human👤, are you ready?

📢 News

  • [2023.3.4] We established this project, organized by Professor Yu Liu from EgoAlpha Lab.

📜Papers


Prompt Engineering

📌 Prompt Design

UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning 👨‍🎓Yuning Mao,Lambert Mathias,Rui Hou,Amjad Almahairi,Hao Ma,Jiawei Han,Wen-tau Yih,Madian Khabsa 2021

HETFORMER: Heterogeneous Transformer with Sparse Attention for Long-Text Extractive Summarization 👨‍🎓Ye Liu,Jianguo Zhang,Yao Wan,Congying Xia,Lifang He,Philip S. Yu 2021

Can Language Models be Biomedical Knowledge Bases? 👨‍🎓Mujeen Sung,Jinhyuk Lee,Sean S. Yi,Minji Jeon,Sungdong Kim,Jaewoo Kang 2021

The SelectGen Challenge: Finding the Best Training Samples for Few-Shot Neural Text Generation 👨‍🎓Ernie Chang,Xiaoyu Shen,Alex Marin,V. Demberg 2021

Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing 👨‍🎓Pengfei Liu,Weizhe Yuan,Jinlan Fu,Zhengbao Jiang,Hiroaki Hayashi,Graham Neubig 2021

On Training Instance Selection for Few-Shot Neural Text Generation 👨‍🎓Ernie Chang,Xiaoyu Shen,Hui-Syuan Yeh,V. Demberg 2021

Template-Based Named Entity Recognition Using BART 👨‍🎓Leyang Cui,Yu Wu,Jian Liu,Sen Yang,Yue Zhang 2021

Enriching Transformers with Structured Tensor-Product Representations for Abstractive Summarization 👨‍🎓Yichen Jiang,Asli Celikyilmaz,P. Smolensky,Paul Soulos,Sudha Rao,H. Palangi,Roland Fernandez,Caitlin Smith,Mohit Bansal,Jianfeng Gao 2021

SciFive: a text-to-text transformer model for biomedical literature 👨‍🎓Long Phan,J. Anibal,Hieu Tran,Shaurya Chanana,Erol Bahadroglu,Alec Peltekian,G. Altan-Bonnet 2021

PTR: Prompt Tuning with Rules for Text Classification 👨‍🎓Xu Han,Weilin Zhao,Ning Ding,Zhiyuan Liu,Maosong Sun 2021

👉Complete paper list 🔗 for "prompt design"👈
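
Many of the papers above cast a downstream task as a fill-in template for a pretrained language model. Below is a minimal, self-contained sketch of this cloze-style prompt design (plain Python, no model call; the template, label words, and `label_from_prediction` helper are illustrative assumptions, not taken from any specific paper):

```python
# Minimal sketch of template-based (cloze-style) prompt design.
# The template and label words below are illustrative examples,
# not the exact ones used in the papers listed above.

TEMPLATE = "Review: {text}\nOverall, the review is [MASK]."

# "Verbalizer": map the word filled in at [MASK] back to a task label.
LABEL_WORDS = {"positive": "great", "negative": "terrible"}

def build_cloze_prompt(text: str) -> str:
    """Wrap a raw input in a task-specific template for a masked LM."""
    return TEMPLATE.format(text=text)

def label_from_prediction(predicted_word: str) -> str:
    """Map the word predicted at [MASK] back to a class label."""
    for label, word in LABEL_WORDS.items():
        if predicted_word.lower() == word:
            return label
    return "unknown"

if __name__ == "__main__":
    prompt = build_cloze_prompt("The plot was thin but the acting was superb.")
    print(prompt)
    # A masked LM would score candidate words at [MASK]; here we only show
    # how its prediction would be mapped back to a label.
    print(label_from_prediction("great"))  # -> "positive"
```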

📌 Automatic Prompt

Active Example Selection for In-Context Learning 👨‍🎓Yiming Zhang,Shi Feng,Chenhao Tan 2022

Large Language Models Can Self-Improve 👨‍🎓Jiaxin Huang,S. Gu,Le Hou,Yuexin Wu,Xuezhi Wang,Hongkun Yu,Jiawei Han 2022

Automatic Chain of Thought Prompting in Large Language Models 👨‍🎓Zhuosheng Zhang,Aston Zhang,Mu Li,Alexander J. Smola 2022

Complexity-Based Prompting for Multi-Step Reasoning 👨‍🎓Yao Fu,Hao-Chun Peng,Ashish Sabharwal,Peter Clark,Tushar Khot 2022

Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning 👨‍🎓Pan Lu,Liang Qiu,Kai-Wei Chang,Y. Wu,Song-Chun Zhu,Tanmay Rajpurohit,Peter Clark,A. Kalyan 2022

Selective Annotation Makes Language Models Better Few-Shot Learners 👨‍🎓Hongjin Su,Jungo Kasai,Chen Henry Wu,Weijia Shi,Tianlu Wang,Jiayi Xin,Rui Zhang,Mari Ostendorf,Luke Zettlemoyer,Noah A. Smith,Tao Yu 2022

Interactive and Visual Prompt Engineering for Ad-hoc Task Adaptation with Large Language Models 👨‍🎓Hendrik Strobelt,Albert Webson,Victor Sanh,Benjamin Hoover,J. Beyer,H. Pfister,Alexander M. Rush 2022

Exploring CLIP for Assessing the Look and Feel of Images 👨‍🎓Jianyi Wang,Kelvin C. K. Chan,Chen Change Loy 2022

Rationale-Augmented Ensembles in Language Models 👨‍🎓Xuezhi Wang,Jason Wei,D. Schuurmans,Quoc Le,E. Chi,Denny Zhou 2022

Initial Images: Using Image Prompts to Improve Subject Representation in Multimodal AI Generated Art 👨‍🎓Han Qiao,Vivian Liu,Lydia B. Chilton 2022

👉Complete paper list 🔗 for "Automatic Prompt"👈
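
A recurring idea in these papers is choosing in-context demonstrations automatically rather than by hand. The sketch below illustrates similarity-based example selection; the bag-of-words `cosine` scorer is a deliberately simple stand-in for the learned retrievers or RL-based selectors studied in the papers above:

```python
# Minimal sketch of automatic demonstration selection for in-context learning.
# The bag-of-words cosine similarity below is a toy scorer so the example
# stays dependency-free; real methods use learned representations.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def select_demonstrations(query: str, pool, k: int = 2):
    """Pick the k labeled examples most similar to the query."""
    q = Counter(query.lower().split())
    scored = sorted(pool, key=lambda ex: cosine(q, Counter(ex[0].lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, demos) -> str:
    lines = [f"Input: {x}\nOutput: {y}" for x, y in demos]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

if __name__ == "__main__":
    pool = [
        ("the movie was wonderful", "positive"),
        ("the battery died after an hour", "negative"),
        ("a wonderful, heartfelt film", "positive"),
    ]
    query = "the film was wonderful and moving"
    print(build_prompt(query, select_demonstrations(query, pool)))
```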

📌 Chain of Thought

Large Language Models Are Reasoning Teachers 👨‍🎓Namgyu Ho,Laura Schmid,Se-Young Yun 2022

Teaching Small Language Models to Reason 👨‍🎓Lucie Charlotte Magister,Jonathan Mallinson,Jakub Adamek,Eric Malmi,Aliaksei Severyn 2022

The Impact of Symbolic Representations on In-context Learning for Few-shot Reasoning 👨‍🎓Hanlin Zhang,Yi-Fan Zhang,Li Erran Li,Eric Xing 2022

Complementary Explanations for Effective In-Context Learning 👨‍🎓Xi Ye,Srini Iyer,Asli Celikyilmaz,V. Stoyanov,Greg Durrett,Ramakanth Pasunuru 2022

PAL: Program-aided Language Models 👨‍🎓Luyu Gao,Aman Madaan,Shuyan Zhou,Uri Alon,Pengfei Liu,Yiming Yang,Jamie Callan,Graham Neubig 2022

Active Example Selection for In-Context Learning 👨‍🎓Yiming Zhang,Shi Feng,Chenhao Tan 2022

Large Language Models Can Self-Improve 👨‍🎓Jiaxin Huang,S. Gu,Le Hou,Yuexin Wu,Xuezhi Wang,Hongkun Yu,Jiawei Han 2022

Scaling Instruction-Finetuned Language Models 👨‍🎓Hyung Won Chung,Le Hou,S. Longpre,Barret Zoph,Yi Tay,W. Fedus,Eric Li,Xuezhi Wang,M. Dehghani,Siddhartha Brahma,Albert Webson,S. Gu,Zhuyun Dai,Mirac Suzgun,Xinyun Chen,Aakanksha Chowdhery,Dasha Valter,Sharan Narang,Gaurav Mishra,A. Yu,Vincent Zhao,Yanping Huang,Andrew M. Dai,Hongkun Yu,Slav Petrov,E. Chi,J. Dean,Jacob Devlin,Adam Roberts,Denny Zhou,Quoc V. Le,Jason Wei 2022

Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them 👨‍🎓Mirac Suzgun,Nathan Scales,Nathanael Scharli,Sebastian Gehrmann,Yi Tay,Hyung Won Chung,Aakanksha Chowdhery,Quoc V. Le,E. Chi,Denny Zhou,Jason Wei 2022

Prompting GPT-3 To Be Reliable 👨‍🎓Chenglei Si,Zhe Gan,Zhengyuan Yang,Shuohang Wang,Jianfeng Wang,Jordan L. Boyd-Graber,Lijuan Wang 2022

👉Complete paper list 🔗 for "Chain of Thought"👈
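
As a minimal illustration of the chain-of-thought style these papers build on, the sketch below constructs a few-shot CoT prompt (with a worked exemplar) and a zero-shot CoT prompt (with a reasoning trigger). The `call_llm` name in the final comment is a hypothetical placeholder for whichever model client you use:

```python
# Minimal sketch of chain-of-thought (CoT) prompting.
# The worked exemplar and the zero-shot trigger "Let's think step by step."
# follow the general CoT recipe; they are illustrative, not tied to any
# single paper above.

FEW_SHOT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def few_shot_cot_prompt(question: str) -> str:
    """Prepend a worked, step-by-step exemplar before the new question."""
    return FEW_SHOT_EXEMPLAR + f"Q: {question}\nA:"

def zero_shot_cot_prompt(question: str) -> str:
    """No exemplars; just append a reasoning trigger."""
    return f"Q: {question}\nA: Let's think step by step."

if __name__ == "__main__":
    q = "A library had 120 books and lent out 45. How many remain?"
    print(few_shot_cot_prompt(q))
    print(zero_shot_cot_prompt(q))
    # answer = call_llm(few_shot_cot_prompt(q))  # hypothetical model call
```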

📌 Evaluation & Reliability

Relay Node Placement in Wireless Sensor Networks With Respect to Delay and Reliability Requirements 👨‍🎓Chaofan Ma,W. Liang,M. Zheng,Bofu Yang 2019

Comparative Analysis of Transmission Power Level and Packet Size Optimization Strategies for WSNs 👨‍🎓Huseyin Ugur Yildiz,Sinan Kurt,B. Tavli 2019

RBD Model-Based Approach for Reliability Assessment in Complex Systems 👨‍🎓M. Catelani,L. Ciani,M. Venzi 2019

Joint Transmission Power Optimization and Connectivity Control in Asymmetric Networks 👨‍🎓Milad Esmacilpour,A. Aghdam,S. Blouin 2018

Reliability Allocation Procedures in Complex Redundant Systems 👨‍🎓M. Catelani,L. Ciani,G. Patrizi,M. Venzi 2018

Device-to-Device Communications: A Performance Analysis in the Context of Social Comparison-Based Relaying 👨‍🎓Young Jin Chun,Gualtiero Colombo,S. Cotton,W. Scanlon,R. Whitaker,S. Allen 2017

MIMO Wireless Communications over Generalized Fading Channels 👨‍🎓B. Kumbhani,R. Kshetrimayum 2017

Energy Saving With Network Coding Design Over Rayleigh Fading Channel 👨‍🎓Shijun Lin,Liqun Fu,Yong Li 2017

Optimal WSN Deployment Models for Air Pollution Monitoring 👨‍🎓Ahmed Boubrima,Walid Bechkit,H. Rivano 2017

Distance distribution between nodes in a 3D wireless network 👨‍🎓J. Nichols,J. Michalowicz 2017

👉Complete paper list 🔗 for "Evaluation & Reliability"👈

In-context Learning

Large Language Models Are Implicitly Topic Models: Explaining and Finding Good Demonstrations for In-Context Learning 👨‍🎓Xinyi Wang,Wanrong Zhu,William Yang Wang 2023

OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization 👨‍🎓S. Iyer,Xi Victoria Lin,Ramakanth Pasunuru,Todor Mihaylov,Daniel Simig,Ping Yu,Kurt Shuster,Tianlu Wang,Qing Liu,Punit Singh Koura,Xian Li,Brian O'Horo,Gabriel Pereyra,Jeff Wang,Christopher Dewan,Asli Celikyilmaz,Luke Zettlemoyer,Veselin Stoyanov 2022

Prompt-Augmented Linear Probing: Scaling Beyond The Limit of Few-shot In-Context Learners 👨‍🎓Hyunsoo Cho,Hyuhng Joon Kim,Junyeob Kim,Sang-Woo Lee,Sang-goo Lee,Kang Min Yoo,Taeuk Kim 2022

Self-adaptive In-context Learning 👨‍🎓Zhiyong Wu,Yaoxiang Wang,Jiacheng Ye,Lingpeng Kong 2022

Is GPT-3 a Good Data Annotator? 👨‍🎓Bosheng Ding,Chengwei Qin,Linlin Liu,Lidong Bing,Shafiq R. Joty,Boyang Li 2022

Reasoning with Language Model Prompting: A Survey 👨‍🎓Shuofei Qiao,Yixin Ou,Ningyu Zhang,Xiang Chen,Yunzhi Yao,Shumin Deng,Chuanqi Tan,Fei Huang,Huajun Chen 2022

Structured Prompting: Scaling In-Context Learning to 1,000 Examples 👨‍🎓Y. Hao,Yutao Sun,Li Dong,Zhixiong Han,Yuxian Gu,Furu Wei 2022

Complementary Explanations for Effective In-Context Learning 👨‍🎓Xi Ye,Srini Iyer,Asli Celikyilmaz,V. Stoyanov,Greg Durrett,Ramakanth Pasunuru 2022

Active Example Selection for In-Context Learning 👨‍🎓Yiming Zhang,Shi Feng,Chenhao Tan 2022

Tuning Language Models as Training Data Generators for Augmentation-Enhanced Few-Shot Learning 👨‍🎓Yu Meng,Martin Michalski,Jiaxin Huang,Yu Zhang,T. Abdelzaher,Jiawei Han 2022

👉Complete paper list 🔗 for "In-context Learning"👈
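
For readers new to the topic, the sketch below assembles a basic few-shot in-context learning prompt: an instruction, a handful of labeled demonstrations, and the test input. The field names and label verbalization are illustrative choices, not prescribed by any paper above:

```python
# Minimal sketch of a few-shot in-context learning prompt.
# Demonstrations are formatted in a fixed schema, followed by the test input.

def icl_prompt(instruction, demonstrations, test_input,
               input_name="Text", label_name="Sentiment"):
    """Build an instruction + k-shot prompt ending at the slot to be completed."""
    blocks = [instruction.strip()]
    for x, y in demonstrations:
        blocks.append(f"{input_name}: {x}\n{label_name}: {y}")
    blocks.append(f"{input_name}: {test_input}\n{label_name}:")
    return "\n\n".join(blocks)

if __name__ == "__main__":
    demos = [
        ("I loved every minute of it.", "positive"),
        ("It broke on the second day.", "negative"),
    ]
    print(icl_prompt(
        "Classify the sentiment of each text as positive or negative.",
        demos,
        "Shipping was slow but the product is great.",
    ))
```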

Multimodal Prompt

📌 Hard Prompt/ Discrete Prompt

RLPrompt: Optimizing Discrete Text Prompts with Reinforcement Learning 👨‍🎓Mingkai Deng,Jianyu Wang,Cheng-Ping Hsieh,Yihan Wang,Han Guo,Tianmin Shu,Meng Song,E. Xing,Zhiting Hu 2022

Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery 👨‍🎓Yuxin Wen,Neel Jain,John Kirchenbauer,Micah Goldblum,Jonas Geiping,T. Goldstein 2023

👉Complete paper list 🔗 for "Hard Prompt"👈
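
Hard-prompt methods search directly over discrete tokens. The sketch below shows the shape of such a search with a toy greedy loop; the `score` function is a stand-in objective (methods like RLPrompt or gradient-based discovery instead score prompts with an actual model on a development set):

```python
# Minimal sketch of discrete (hard) prompt search with a toy objective,
# so the search loop stays runnable without a model.
import random

VOCAB = ["Review", "Sentiment", "Overall", "Rating", "Opinion", "Movie"]

def score(prompt_tokens) -> float:
    """Toy objective: prefer prompts containing task-relevant words.
    In practice this would be a dev-set metric from querying an LLM."""
    return sum(t in {"Sentiment", "Review"} for t in prompt_tokens) + random.random() * 0.1

def greedy_prompt_search(length: int = 3, iters: int = 50):
    prompt = random.choices(VOCAB, k=length)
    best = score(prompt)
    for _ in range(iters):
        pos = random.randrange(length)
        cand = prompt.copy()
        cand[pos] = random.choice(VOCAB)
        s = score(cand)
        if s > best:  # keep the edit only if it improves the objective
            prompt, best = cand, s
    return prompt

if __name__ == "__main__":
    print(greedy_prompt_search())
```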

📌 Soft Prompt/ Continuous Prompt

Prompting through Prototype: A Prototype-based Prompt Learning on Pretrained Vision-Language Models 👨‍🎓

P-Tuning: Prompt Tuning Can Be Comparable to Fine-tuning Across Scales and Tasks 2022

FedPrompt: Communication-Efficient and Privacy Preserving Prompt Tuning in Federated Learning 👨‍🎓

Instance-aware prompt learning for language understanding and generation 👨‍🎓

Learning to Compose Soft Prompts for Compositional Zero-Shot Learning 👨‍🎓

FPT: Improving Prompt Tuning Efficiency via Progressive Training 👨‍🎓

Decomposed Soft Prompt Guided Fusion Enhancing for Compositional Zero-Shot Learning 👨‍🎓

Prompt Distribution Learning 👨‍🎓

Late Prompt Tuning: A Late Prompt Could Be Better Than Many Prompts 👨‍🎓

Scalable Prompt Generation for Semi-supervised Learning with Language Models 👨‍🎓

👉Complete paper list 🔗 for "Soft Prompt"👈
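
Soft-prompt methods keep the pretrained model frozen and train only a short sequence of continuous prompt embeddings prepended to the input. The PyTorch sketch below shows the mechanics; the tiny frozen "model" is a toy stand-in for a real pretrained LM, not any particular method from the papers above:

```python
# Minimal sketch of soft (continuous) prompt tuning: trainable prompt
# embeddings are prepended to frozen input embeddings, and only the
# prompt vectors are updated.
import torch
import torch.nn as nn

VOCAB_SIZE, DIM, PROMPT_LEN, NUM_CLASSES = 100, 16, 5, 2

# Frozen "pretrained" components (toy stand-ins for a real LM).
embed = nn.Embedding(VOCAB_SIZE, DIM)
head = nn.Linear(DIM, NUM_CLASSES)
for p in list(embed.parameters()) + list(head.parameters()):
    p.requires_grad_(False)

# The only trainable parameters: the soft prompt.
soft_prompt = nn.Parameter(torch.randn(PROMPT_LEN, DIM) * 0.02)
optimizer = torch.optim.Adam([soft_prompt], lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def forward(token_ids: torch.Tensor) -> torch.Tensor:
    """Prepend the soft prompt to frozen token embeddings, mean-pool, classify."""
    tok = embed(token_ids)                                   # (batch, seq, dim)
    prompt = soft_prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
    hidden = torch.cat([prompt, tok], dim=1).mean(dim=1)     # (batch, dim)
    return head(hidden)

if __name__ == "__main__":
    x = torch.randint(0, VOCAB_SIZE, (4, 10))  # toy batch of token ids
    y = torch.randint(0, NUM_CLASSES, (4,))    # toy labels
    for _ in range(20):
        optimizer.zero_grad()
        loss = loss_fn(forward(x), y)
        loss.backward()
        optimizer.step()
    print("final toy loss:", loss.item())
```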

Knowledge Augmented Prompts

UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning 👨‍🎓Yuning Mao,Lambert Mathias,Rui Hou,Amjad Almahairi,Hao Ma,Jiawei Han,Wen-tau Yih,Madian Khabsa 2021

HETFORMER: Heterogeneous Transformer with Sparse Attention for Long-Text Extractive Summarization 👨‍🎓Ye Liu,Jianguo Zhang,Yao Wan,Congying Xia,Lifang He,Philip S. Yu 2021

Can Language Models be Biomedical Knowledge Bases? 👨‍🎓Mujeen Sung,Jinhyuk Lee,Sean S. Yi,Minji Jeon,Sungdong Kim,Jaewoo Kang 2021

The SelectGen Challenge: Finding the Best Training Samples for Few-Shot Neural Text Generation 👨‍🎓Ernie Chang,Xiaoyu Shen,Alex Marin,V. Demberg 2021

Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing 👨‍🎓Pengfei Liu,Weizhe Yuan,Jinlan Fu,Zhengbao Jiang,Hiroaki Hayashi,Graham Neubig 2021

On Training Instance Selection for Few-Shot Neural Text Generation 👨‍🎓Ernie Chang,Xiaoyu Shen,Hui-Syuan Yeh,V. Demberg 2021

Template-Based Named Entity Recognition Using BART 👨‍🎓Leyang Cui,Yu Wu,Jian Liu,Sen Yang,Yue Zhang 2021

Enriching Transformers with Structured Tensor-Product Representations for Abstractive Summarization 👨‍🎓Yichen Jiang,Asli Celikyilmaz,P. Smolensky,Paul Soulos,Sudha Rao,H. Palangi,Roland Fernandez,Caitlin Smith,Mohit Bansal,Jianfeng Gao 2021

SciFive: a text-to-text transformer model for biomedical literature 👨‍🎓Long Phan,J. Anibal,Hieu Tran,Shaurya Chanana,Erol Bahadroglu,Alec Peltekian,G. Altan-Bonnet 2021

PTR: Prompt Tuning with Rules for Text Classification 👨‍🎓Xu Han,Weilin Zhao,Ning Ding,Zhiyuan Liu,Maosong Sun 2021

👉Complete paper list 🔗 for "Knowledge Augmented Prompts"👈
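
Knowledge-augmented prompting injects retrieved facts into the prompt before asking the question. The sketch below uses a toy keyword lookup as a stand-in for a real retriever or knowledge base; the facts and lookup logic are illustrative only:

```python
# Minimal sketch of knowledge-augmented prompting: relevant facts are looked up
# in a small knowledge store and prepended to the question.

KNOWLEDGE = {
    "aspirin": "Aspirin is a nonsteroidal anti-inflammatory drug (NSAID).",
    "ibuprofen": "Ibuprofen is a nonsteroidal anti-inflammatory drug (NSAID).",
}

def retrieve_facts(question: str):
    """Toy retriever: return facts whose key appears in the question."""
    q = question.lower()
    return [fact for key, fact in KNOWLEDGE.items() if key in q]

def knowledge_augmented_prompt(question: str) -> str:
    facts = retrieve_facts(question)
    context = "\n".join(f"- {f}" for f in facts) or "- (no relevant facts found)"
    return f"Known facts:\n{context}\n\nQuestion: {question}\nAnswer:"

if __name__ == "__main__":
    print(knowledge_augmented_prompt("Is aspirin an NSAID?"))
```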

Prompt for Knowledge Graph

UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning 👨‍🎓Yuning Mao,Lambert Mathias,Rui Hou,Amjad Almahairi,Hao Ma,Jiawei Han,Wen-tau Yih,Madian Khabsa 2021

HETFORMER: Heterogeneous Transformer with Sparse Attention for Long-Text Extractive Summarization 👨‍🎓Ye Liu,Jianguo Zhang,Yao Wan,Congying Xia,Lifang He,Philip S. Yu 2021

Can Language Models be Biomedical Knowledge Bases? 👨‍🎓Mujeen Sung,Jinhyuk Lee,Sean S. Yi,Minji Jeon,Sungdong Kim,Jaewoo Kang 2021

The SelectGen Challenge: Finding the Best Training Samples for Few-Shot Neural Text Generation 👨‍🎓Ernie Chang,Xiaoyu Shen,Alex Marin,V. Demberg 2021

Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing 👨‍🎓Pengfei Liu,Weizhe Yuan,Jinlan Fu,Zhengbao Jiang,Hiroaki Hayashi,Graham Neubig 2021

On Training Instance Selection for Few-Shot Neural Text Generation 👨‍🎓Ernie Chang,Xiaoyu Shen,Hui-Syuan Yeh,V. Demberg 2021

Template-Based Named Entity Recognition Using BART 👨‍🎓Leyang Cui,Yu Wu,Jian Liu,Sen Yang,Yue Zhang 2021

Enriching Transformers with Structured Tensor-Product Representations for Abstractive Summarization 👨‍🎓Yichen Jiang,Asli Celikyilmaz,P. Smolensky,Paul Soulos,Sudha Rao,H. Palangi,Roland Fernandez,Caitlin Smith,Mohit Bansal,Jianfeng Gao 2021

SciFive: a text-to-text transformer model for biomedical literature 👨‍🎓Long Phan,J. Anibal,Hieu Tran,Shaurya Chanana,Erol Bahadroglu,Alec Peltekian,G. Altan-Bonnet 2021

PTR: Prompt Tuning with Rules for Text Classification 👨‍🎓Xu Han,Weilin Zhao,Ning Ding,Zhiyuan Liu,Maosong Sun 2021

👉Complete paper list 🔗 for "Prompt for Knowledge Graph"👈
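
For knowledge-graph tasks, an incomplete triple can be verbalized into a cloze-style query for the language model. The sketch below shows this with two illustrative relation templates (assumptions for the example, not drawn from any specific paper above):

```python
# Minimal sketch of prompting for knowledge-graph completion: an incomplete
# (head, relation, ?) triple is verbalized into a natural-language query.

RELATION_TEMPLATES = {
    "capital_of": "{head} is the capital of [MASK].",
    "born_in": "{head} was born in [MASK].",
}

def triple_to_prompt(head: str, relation: str) -> str:
    """Turn an incomplete triple into a cloze-style query for an LM."""
    template = RELATION_TEMPLATES[relation]
    return template.format(head=head)

if __name__ == "__main__":
    print(triple_to_prompt("Paris", "capital_of"))    # the LM should fill in "France"
    print(triple_to_prompt("Marie Curie", "born_in"))
```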

🎓 Citation

If you find our work helpful, please star our project and cite our paper. Thanks a lot!

(The survey paper can be placed here.)

✉️ Contact

This repo is maintained by EgoAlpha Lab. Questions and discussions are welcome via cyfedu1024@gmail.com or cyfedu1024@163.com.

We are willing to communicate and cooperate with research teams across a variety of fields.

🙏 Acknowledgements

Thanks to the PhD students from EgoAlpha Lab and all the other contributors who have participated in this repo. We will continue to improve the project and maintain this community. More researchers are welcome to join us and contribute to the community.

👨‍👩‍👧‍👦 Contributors

Main Contributors

📔 License

This project is open source and available under the MIT License.
