An Open-Source Engineering Guide for Prompt-in-context-learning from EgoAlpha Lab.

📝 Papers | ⚡️ Playground | 🛠 Prompt Engineering | 🌍 ChatGPT Prompt
⭐️ Shining ⭐️: This is a fresh, daily-updated collection of resources for in-context learning and prompt engineering. As Artificial General Intelligence (AGI) approaches, let's take action and become super learners, positioning ourselves at the forefront of this exciting era and striving for personal and professional greatness.
The resources include:
🎉Papers🎉: The latest papers about in-context learning or prompt engineering.
🎉Playground🎉: Large language models that enable prompt experimentation.
🎉Prompt Engineering🎉: Prompt techniques for leveraging large language models.
🎉ChatGPT Prompt🎉: Prompt examples that can be applied in our work and daily lives.
In the future, there will likely be two types of people on Earth (perhaps even on Mars, but that's a question for Musk):
- Those who enhance their abilities through the use of AI;
- Those whose jobs are replaced by AI automation.
💎EgoAlpha: Hello, human👤! Are you ready?
- [2023.3.27] Scaling Expert Language Models with Unsupervised Domain Discovery
- [2023.3.26] CoLT5: Faster Long-Range Transformers with Conditional Computation
- [2023.3.23] OpenAI announces 'Plug-ins' for ChatGPT that enable it to perform actions beyond text.
- [2023.3.22] GitHub launches Copilot X, aiming at the future of AI-powered software development.
- [2023.3.21] Google Bard is now available in the US and UK, with more countries to come.
- [2023.3.20] OpenAI's new paper looks at the economic impact of LLMs on the labor market: GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models
You can click directly on a title to jump to the corresponding PDF.
Augmented Language Models: a Survey (2023.02.15)
A Survey for In-context Learning (2022.12.31)
Towards Reasoning in Large Language Models: A Survey (2022.12.20)
Reasoning with Language Model Prompting: A Survey (2022.12.19)
Emergent Abilities of Large Language Models (2022.06.15)
Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing (2021.07.28)
👉Complete paper list 🔗 for "Survey"👈
A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT (2023.02.21)
GraphPrompt: Unifying Pre-Training and Downstream Tasks for Graph Neural Networks (2023.02.16)
Progressive Prompts: Continual Learning for Language Models (2023.01.29)
Batch Prompting: Efficient Inference with Large Language Model APIs (2023.01.19)
One Embedder, Any Task: Instruction-Finetuned Text Embeddings (2022.12.19)
Successive Prompting for Decomposing Complex Questions (2022.12.08)
Promptagator: Few-shot Dense Retrieval From 8 Examples (2022.09.23)
Interactive and Visual Prompt Engineering for Ad-hoc Task Adaptation with Large Language Models (2022.08.16)
Black-box Prompt Learning for Pre-trained Language Models (2022.01.21)
Design Guidelines for Prompt Engineering Text-to-Image Generative Models (2021.09.14)
👉Complete paper list 🔗 for "Prompt Design"👈
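Several of the papers above catalog reusable prompt patterns. As a minimal illustration of the idea (not taken from any specific paper), a pattern can be captured as a parameterized template; the `PERSONA_PATTERN` text and field names below are our own hypothetical example:

```python
# A minimal sketch of a reusable prompt pattern as a parameterized template.
# The pattern text and slot names are illustrative, not from any paper above.
PERSONA_PATTERN = (
    "You are {persona}.\n"
    "Task: {task}\n"
    "Respond in the following format: {output_format}"
)

def render(persona: str, task: str, output_format: str) -> str:
    """Fill the pattern's slots to produce a concrete prompt."""
    return PERSONA_PATTERN.format(
        persona=persona, task=task, output_format=output_format
    )

print(render(
    persona="a senior Python code reviewer",
    task="review the following function for bugs",
    output_format="a bullet list of issues, most severe first",
))
```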
Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data (2023.02.24)
Guiding Large Language Models via Directional Stimulus Prompting (2023.02.22)
Evaluating the Robustness of Discrete Prompts (2023.02.11)
Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery (2023.02.07)
Ask Me Anything: A simple strategy for prompting language models (2022.10.05)
STaR: Bootstrapping Reasoning With Reasoning (2022.03.28)
Making Pre-trained Language Models Better Few-shot Learners (2021.01.01)
Eliciting Knowledge from Language Models Using Automatically Generated Prompts (2020.10.29)
Automatically Identifying Words That Can Serve as Labels for Few-Shot Text Classification (2020.10.26)
👉Complete paper list 🔗 for "Automatic Prompt"👈
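The common thread in this section is searching over candidate prompts instead of hand-writing a single one. Below is a minimal sketch of that selection loop, assuming a hypothetical `llm(prompt)` completion function and a small labeled dev set (both placeholders, not from any paper above):

```python
# A minimal sketch of automatic prompt selection: score each candidate
# prompt template on a small labeled dev set and keep the best one.

def llm(prompt: str) -> str:
    # Placeholder for any text-completion model call.
    raise NotImplementedError("plug in your model call here")

def accuracy(prompt_template: str, dev_set: list[tuple[str, str]]) -> float:
    """Fraction of dev examples the prompt gets exactly right."""
    hits = 0
    for text, label in dev_set:
        prediction = llm(prompt_template.format(input=text)).strip()
        hits += prediction == label
    return hits / len(dev_set)

def select_prompt(candidates: list[str], dev_set: list[tuple[str, str]]) -> str:
    """Return the candidate template that scores highest on the dev set."""
    return max(candidates, key=lambda p: accuracy(p, dev_set))
```

The papers in this section replace this brute-force scoring with smarter search (gradient-based optimization, model-generated candidates), but the evaluate-and-select skeleton is the same.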
Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data (2023.02.24)
Active Prompting with Chain-of-Thought for Large Language Models (2023.02.23)
Multimodal Chain-of-Thought Reasoning in Language Models (2023.02.02)
Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models (2023.02.01)
Faithful Chain-of-Thought Reasoning (2023.01.31)
Large Language Models Are Reasoning Teachers (2022.12.20)
Solving math word problems with process- and outcome-based feedback (2022.11.25)
Complementary Explanations for Effective In-Context Learning (2022.11.25)
Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks (2022.11.22)
Ignore Previous Prompt: Attack Techniques For Language Models (2022.11.17)
👉Complete paper list 🔗 for "Chain of Thought"👈
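At its simplest, chain-of-thought prompting just asks the model to produce intermediate reasoning before its answer. Below is a minimal zero-shot sketch, assuming the 2023-era OpenAI Python package with an API key in the environment; the model name and question are placeholders:

```python
# A minimal sketch of zero-shot chain-of-thought prompting: appending
# "Let's think step by step." elicits intermediate reasoning before the answer.
import openai  # assumes the 2023-era openai package; reads OPENAI_API_KEY from the env

question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": question + "\nLet's think step by step."}],
)
print(response["choices"][0]["message"]["content"])
```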
👉Complete paper list 🔗 for "Knowledge Augmented Prompts"👈
How Robust is GPT-3.5 to Predecessors? A Comprehensive Study on Language Understanding Tasks (2023.03.01)
Bounding the Capabilities of Large Language Models in Open Text Generation with Prompt Constraints (2023.02.17)
Evaluating the Robustness of Discrete Prompts (2023.02.11)
Controlling for Stereotypes in Multimodal Language Model Evaluation (2023.02.03)
Large Language Models Can Be Easily Distracted by Irrelevant Context (2023.01.31)
Emergent Analogical Reasoning in Large Language Models (2022.12.19)
Discovering Language Model Behaviors with Model-Written Evaluations (2022.12.19)
Constitutional AI: Harmlessness from AI Feedback (2022.12.15)
On Second Thought, Let's Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning (2022.12.15)
Solving math word problems with process- and outcome-based feedback (2022.11.25)
👉Complete paper list 🔗 for "Evaluation & Reliability"👈
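A recurring question in this section is how stable a model's behavior is under small prompt changes. Below is a minimal robustness check along those lines, again using a hypothetical `llm(prompt)` placeholder and illustrative paraphrases: run the same labeled examples under each paraphrase and compare accuracies.

```python
# A minimal sketch of a prompt-robustness check: evaluate the same labeled
# examples under paraphrased prompts and compare the resulting accuracies.

def llm(prompt: str) -> str:
    # Placeholder for any text-completion model call.
    raise NotImplementedError("plug in your model call here")

PARAPHRASES = [  # illustrative paraphrases of one sentiment task
    "Classify the sentiment of this review as positive or negative: {input}",
    "Is the following review positive or negative? {input}",
    "Review: {input}\nSentiment (positive/negative):",
]

def robustness(dev_set: list[tuple[str, str]]) -> dict[str, float]:
    """Accuracy per paraphrase; a large spread signals a brittle prompt."""
    scores = {}
    for template in PARAPHRASES:
        hits = sum(
            llm(template.format(input=text)).strip().lower() == label
            for text, label in dev_set
        )
        scores[template] = hits / len(dev_set)
    return scores
```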
Larger language models do in-context learning differently (2023.03.07)
Language Model Crossover: Variation through Few-Shot Prompting (2023.02.23)
How Does In-Context Learning Help Prompt Tuning? (2023.02.22)
PLACES: Prompting Language Models for Social Conversation Synthesis (2023.02.07)
Large Language Models Are Implicitly Topic Models: Explaining and Finding Good Demonstrations for In-Context Learning (2023.01.27)
Transformers as Algorithms: Generalization and Stability in In-context Learning (2023.01.17)
OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization (2022.12.22)
Prompt-Augmented Linear Probing: Scaling Beyond The Limit of Few-shot In-Context Learners (2022.12.21)
👉Complete paper list 🔗 for "In-context Learning"👈
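In-context learning conditions a frozen model on a handful of demonstrations rather than updating its weights. Below is a minimal sketch of assembling such a few-shot prompt; the demonstrations are illustrative, not drawn from any paper above:

```python
# A minimal sketch of few-shot in-context learning: the "training" happens
# entirely inside the prompt, by prepending input/output demonstrations.
demonstrations = [  # illustrative examples
    ("The movie was a delight from start to finish.", "positive"),
    ("I want those two hours of my life back.", "negative"),
]

def few_shot_prompt(query: str) -> str:
    """Prepend the demonstrations, then leave the query's label to the model."""
    shots = "\n\n".join(
        f"Review: {text}\nSentiment: {label}" for text, label in demonstrations
    )
    return f"{shots}\n\nReview: {query}\nSentiment:"

print(few_shot_prompt("Surprisingly moving, with great performances."))
```

Much of the literature above studies which demonstrations to pick and in what order, since both choices can swing downstream accuracy substantially.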
MEDIMP: Medical Images and Prompts for renal transplant representation learning (2023.03.22)
CLIP goes 3D: Leveraging Prompt Tuning for Language Grounded 3D Recognition (2023.03.20)
MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action (2023.03.20)
Visual Prompt Multi-Modal Tracking (2023.03.20)
Audio Visual Language Maps for Robot Navigation (2023.03.13)
Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models (2023.03.08)
Multimodal Parameter-Efficient Few-Shot Class Incremental Learning (2023.03.08)
Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning (2023.03.06)
Multimodal Prompting with Missing Modalities for Visual Recognition (2023.03.06)
Multimodal Chain-of-Thought Reasoning in Language Models (2023.02.02)
👉Complete paper list 🔗 for "Multimodal Prompt"👈
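Many of the multimodal papers above build on CLIP-style models, where the "prompt" is a text template paired with an image. Below is a minimal sketch of zero-shot image classification with prompt templates, assuming OpenAI's open-source `clip` package; the image path and class names are placeholders:

```python
# A minimal sketch of multimodal prompting: CLIP scores an image against
# natural-language prompts built from class names.
import clip  # OpenAI's open-source CLIP package
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

class_names = ["cat", "dog", "car"]  # placeholder classes
prompts = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)  # placeholder path

with torch.no_grad():
    logits_per_image, _ = model(image, prompts)
    probs = logits_per_image.softmax(dim=-1)
print(dict(zip(class_names, probs[0].tolist())))
```

Prompt-tuning variants in the papers above learn the template (or continuous vectors replacing it) instead of hand-writing "a photo of a {c}".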
SpeechPrompt v2: Prompt Tuning for Speech Classification Tasks (2023.03.01)
Soft Prompt Guided Joint Learning for Cross-Domain Sentiment Analysis (2023.03.01)
EvoPrompting: Language Models for Code-Level Neural Architecture Search (2023.02.28)
Grimm in Wonderland: Prompt Engineering with Midjourney to Illustrate Fairytales (2023.02.17)
LabelPrompt: Effective Prompt-based Learning for Relation Classification (2023.02.16)
Prompt Tuning of Deep Neural Networks for Speaker-adaptive Visual Speech Recognition (2023.02.16)
Prompting for Multimodal Hateful Meme Classification (2023.02.08)
Toxicity Detection with Generative Prompt-based Inference (2022.05.24)
QaNER: Prompting Question Answering Models for Few-shot Named Entity Recognition (2022.03.03)
👉Complete paper list 🔗 for "Prompt Application"👈
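Papers like QaNER above recast classic NLP tasks as prompting problems. Below is a minimal sketch of that idea for named entity recognition, again using a hypothetical `llm(prompt)` placeholder: each entity type becomes a question asked over the sentence. The entity types and question phrasings are illustrative.

```python
# A minimal sketch of QA-style prompting for NER, in the spirit of QaNER:
# each entity type becomes a question the model answers over the sentence.

def llm(prompt: str) -> str:
    # Placeholder for any text-completion model call.
    raise NotImplementedError("plug in your model call here")

ENTITY_QUESTIONS = {  # illustrative entity types and question phrasings
    "PERSON": "Which people are mentioned in the text?",
    "ORG": "Which organizations are mentioned in the text?",
    "LOC": "Which locations are mentioned in the text?",
}

def extract_entities(sentence: str) -> dict[str, str]:
    """Ask one question per entity type and collect the raw answers."""
    return {
        entity_type: llm(f"Text: {sentence}\nQuestion: {question}\nAnswer:")
        for entity_type, question in ENTITY_QUESTIONS.items()
    }
```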
Scaling Expert Language Models with Unsupervised Domain Discovery (2023.03.24)
CoLT5: Faster Long-Range Transformers with Conditional Computation (2023.03.17)
Meet in the Middle: A New Pre-training Paradigm (2023.03.13)
High-throughput Generative Inference of Large Language Models with a Single GPU (2023.03.13)
Stabilizing Transformer Training by Preventing Attention Entropy Collapse (2023.03.11)
An Overview on Language Models: Recent Developments and Outlook (2023.03.10)
Foundation Models for Decision Making: Problems, Methods, and Opportunities (2023.03.07)
How Do Transformers Learn Topic Structure: Towards a Mechanistic Understanding (2023.03.07)
LLaMA: Open and Efficient Foundation Language Models (2023.02.27)
Self-Instruct: Aligning Language Model with Self Generated Instructions (2022.12.20)
👉Complete paper list 🔗 for "Foundation Models"👈
This repo is maintained by EgoAlpha Lab. Questions and discussions are welcome via helloegoalpha@gmail.com.
We welcome discussions with friends from academia and industry, and we look forward to exploring the latest developments in prompt engineering and in-context learning together.
Thanks to the PhD students from EgoAlpha Lab and the other contributors to this repo. We will continue to improve the project and maintain this community well. We would also like to express our sincere gratitude to the authors of the relevant resources; your efforts have broadened our horizons and enabled us to perceive a more wonderful world.