
Awesome resources for in-context learning and prompt engineering: master LLMs such as ChatGPT, GPT-3, and FlanT5, with up-to-date, cutting-edge content.

Home Page: https://github.com/EgoAlpha/prompt-in-context-learning


An Open-Source Engineering Guide for Prompt-in-context-learning from EgoAlpha Lab.

📝 Papers | ⚡️ Playground | 🛠 Prompt Engineering | 🌍 ChatGPT Prompt


⭐️ Shining ⭐️: This is a fresh, daily-updated collection of resources for in-context learning and prompt engineering. As Artificial General Intelligence (AGI) approaches, let’s take action and become super learners, positioning ourselves at the forefront of this exciting era and striving for personal and professional greatness.

The resources include:

🎉Papers🎉: The latest papers about in-context learning or prompt engineering.

🎉Playground🎉: Large language models that enable prompt experimentation.

🎉Prompt Engineering🎉: Prompt techniques for leveraging large language models.

🎉ChatGPT Prompt🎉: Prompt examples that can be applied in our work and daily lives.
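
As a small taste of the Playground and Prompt Engineering resources above, here is a minimal sketch of few-shot, in-context prompting. It assumes the `openai` Python package (v1 client), an `OPENAI_API_KEY` environment variable, and a toy sentiment task; the model name and example strings are illustrative assumptions, not recommendations from this repo.

```python
# Minimal few-shot (in-context learning) prompt, assuming the openai v1 client
# and an OPENAI_API_KEY set in the environment. The model name is an assumption.
from openai import OpenAI

client = OpenAI()

# In-context learning: the "training examples" live entirely inside the prompt.
few_shot_prompt = (
    "Classify the sentiment of each review as Positive or Negative.\n\n"
    "Review: The battery lasts all day.\nSentiment: Positive\n\n"
    "Review: The screen cracked within a week.\nSentiment: Negative\n\n"
    "Review: Setup was quick and painless.\nSentiment:"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any chat-capable model can be substituted
    messages=[{"role": "user", "content": few_shot_prompt}],
    max_tokens=5,
    temperature=0,
)
print(response.choices[0].message.content.strip())  # expected: Positive
```

Swapping the demonstrations in `few_shot_prompt` is the quickest way to try out the prompt-design and chain-of-thought techniques collected in the paper lists below.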

In the future, there will likely be two types of people on Earth (perhaps even on Mars, but that's a question for Musk):

  • Those who enhance their abilities through the use of AI;
  • Those whose jobs are replaced by AI automation.

💎EgoAlpha: Hello, human👤! Are you ready?

📢 News

👉 Complete news history 👈

📜 Papers

You can click on a title to jump directly to the corresponding PDF.


Survey

👉Complete paper list 🔗 for "Survey"👈

Prompt Engineering

📌 Prompt Design

Prompt Pre-Training with Twenty-Thousand Classes for Open-Vocabulary Visual Recognition (2023.04.10)

A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT (2023.02.21)

GraphPrompt: Unifying Pre-Training and Downstream Tasks for Graph Neural Networks (2023.02.16)

Progressive Prompts: Continual Learning for Language Models (2023.01.29)

Batch Prompting: Efficient Inference with Large Language Model APIs (2023.01.19)

One Embedder, Any Task: Instruction-Finetuned Text Embeddings (2022.12.19)

Successive Prompting for Decomposing Complex Questions (2022.12.08)

Promptagator: Few-shot Dense Retrieval From 8 Examples (2022.09.23)

Interactive and Visual Prompt Engineering for Ad-hoc Task Adaptation with Large Language Models (2022.08.16)

Black-box Prompt Learning for Pre-trained Language Models (2022.01.21)

👉Complete paper list 🔗 for "Prompt Design"👈

📌 Automatic Prompt

👉Complete paper list 🔗 for "Automatic Prompt"👈

📌 Chain of Thought

REFINER: Reasoning Feedback on Intermediate Representations (2023.04.04)

Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data (2023.02.24)

Active Prompting with Chain-of-Thought for Large Language Models (2023.02.23)

Multimodal Chain-of-Thought Reasoning in Language Models (2023.02.02)

Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models (2023.02.01)

Faithful Chain-of-Thought Reasoning (2023.01.31)

Large Language Models Are Reasoning Teachers (2022.12.20)

Solving math word problems with process- and outcome-based feedback (2022.11.25)

Complementary Explanations for Effective In-Context Learning (2022.11.25)

Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks (2022.11.22)

👉Complete paper list 🔗 for "Chain of Thought"👈

📌 Knowledge Augmented Prompt

Commonsense-Aware Prompting for Controllable Empathetic Dialogue Generation (2023.02.02)

REPLUG: Retrieval-Augmented Black-Box Language Models (2023.01.30)

Self-Instruct: Aligning Language Model with Self Generated Instructions (2022.12.20)

The Impact of Symbolic Representations on In-context Learning for Few-shot Reasoning (2022.12.16)

Don’t Prompt, Search! Mining-based Zero-Shot Learning with Language Models (2022.10.26)

Knowledge Prompting in Pre-trained Language Model for Natural Language Understanding (2022.10.16)

Knowledge Injected Prompt Based Fine-tuning for Multi-label Few-shot ICD Coding (2022.10.07)

DocPrompting: Generating Code by Retrieving the Docs (2022.07.13)

Towards Unified Conversational Recommender Systems via Knowledge-Enhanced Prompt Learning (2022.06.19)

Decoupling Knowledge from Memorization: Retrieval-augmented Prompt Learning (2022.05.29)

👉Complete paper list 🔗 for "Knowledge Augmented Prompt"👈

📌 Evaluation & Reliability

GPTEval: NLG Evaluation using GPT-4 with Better Human Alignment (2023.03.29)

How Robust is GPT-3.5 to Predecessors? A Comprehensive Study on Language Understanding Tasks (2023.03.01)

Bounding the Capabilities of Large Language Models in Open Text Generation with Prompt Constraints (2023.02.17)

Evaluating the Robustness of Discrete Prompts (2023.02.11)

Controlling for Stereotypes in Multimodal Language Model Evaluation (2023.02.03)

Large Language Models Can Be Easily Distracted by Irrelevant Context (2023.01.31)

Emergent Analogical Reasoning in Large Language Models (2022.12.19)

Discovering Language Model Behaviors with Model-Written Evaluations (2022.12.19)

Constitutional AI: Harmlessness from AI Feedback (2022.12.15)

On Second Thought, Let's Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning (2022.12.15)

👉Complete paper list 🔗 for "Evaluation & Reliability"👈

In-context Learning

Self-Refine: Iterative Refinement with Self-Feedback (2023.03.30)

Larger language models do in-context learning differently (2023.03.07)

Language Model Crossover: Variation through Few-Shot Prompting (2023.02.23)

How Does In-Context Learning Help Prompt Tuning? (2023.02.22)

PLACES: Prompting Language Models for Social Conversation Synthesis (2023.02.07)

Large Language Models Are Implicitly Topic Models: Explaining and Finding Good Demonstrations for In-Context Learning (2023.01.27)

Transformers as Algorithms: Generalization and Stability in In-context Learning (2023.01.17)

OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization (2022.12.22)

Prompt-Augmented Linear Probing: Scaling Beyond The Limit of Few-shot In-Context Learners (2022.12.21)

In-context Learning Distillation: Transferring Few-shot Learning Ability of Pre-trained Language Models (2022.12.20)

👉Complete paper list 🔗 for "In-context Learning"👈

Multimodal Prompt

Vita-CLIP: Video and text adaptive CLIP via Multimodal Prompting (2023.04.06)

MEDIMP: Medical Images and Prompts for renal transplant representation learning (2023.03.22)

CLIP goes 3D: Leveraging Prompt Tuning for Language Grounded 3D Recognition (2023.03.20)

MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action (2023.03.20)

Visual Prompt Multi-Modal Tracking (2023.03.20)

Audio Visual Language Maps for Robot Navigation (2023.03.13)

Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models (2023.03.08)

Multimodal Parameter-Efficient Few-Shot Class Incremental Learning (2023.03.08)

Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning (2023.03.06)

Multimodal Prompting with Missing Modalities for Visual Recognition (2023.03.06)

👉Complete paper list 🔗 for "Multimodal Prompt"👈

Prompt Application

SpeechPrompt v2: Prompt Tuning for Speech Classification Tasks (2023.03.01)

Soft Prompt Guided Joint Learning for Cross-Domain Sentiment Analysis (2023.03.01)

EvoPrompting: Language Models for Code-Level Neural Architecture Search (2023.02.28)

More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models (2023.02.23)

Grimm in Wonderland: Prompt Engineering with Midjourney to Illustrate Fairytales (2023.02.17)

LabelPrompt: Effective Prompt-based Learning for Relation Classification (2023.02.16)

Prompt Tuning of Deep Neural Networks for Speaker-adaptive Visual Speech Recognition (2023.02.16)

Prompting for Multimodal Hateful Meme Classification (2023.02.08)

Toxicity Detection with Generative Prompt-based Inference (2022.05.24)

QaNER: Prompting Question Answering Models for Few-shot Named Entity Recognition (2022.03.03)

👉Complete paper list 🔗 for "Prompt Application"👈

Foundation Models

TagGPT: Large Language Models are Zero-shot Multimodal Taggers (2023.04.06)

Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling (2023.04.03)

BloombergGPT: A Large Language Model for Finance (2023.03.30)

Scaling Expert Language Models with Unsupervised Domain Discovery (2023.03.24)

Sparks of Artificial General Intelligence: Early experiments with GPT-4 (2023.03.22)

CoLT5: Faster Long-Range Transformers with Conditional Computation (2023.03.17)

Meet in the Middle: A New Pre-training Paradigm (2023.03.13)

High-throughput Generative Inference of Large Language Models with a Single GPU (2023.03.13)

Stabilizing Transformer Training by Preventing Attention Entropy Collapse (2023.03.11)

An Overview on Language Models: Recent Developments and Outlook (2023.03.10)

👉Complete paper list 🔗 for "Foundation Models"👈

✉️ Contact

This repo is maintained by EgoAlpha Lab. Questions and discussions are welcome via helloegoalpha@gmail.com.

We welcome discussions with friends from the academic and industrial communities, and we look forward to exploring the latest developments in prompt engineering and in-context learning together.

🙏 Acknowledgements

Thanks to the PhD students from EgoAlpha Lab and the other contributors who have participated in this repo. We will continue to improve the project and maintain this community. We would also like to express our sincere gratitude to the authors of the resources referenced here; your efforts have broadened our horizons and enabled us to perceive a more wonderful world.
