alaincoletta / instructor_docs


Instruct Cookbook

All definitions and ideas are taken from The Prompt Report: A Systematic Survey of Prompting Techniques

Table of Contents

Emotion Prompting

It incorporates phrases of psychological relevance to humans (e.g., "This is important to my career") into the prompt, which may lead to improved LLM performance on benchmarks and open-ended text generation.
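
A minimal sketch of the idea (the function name is illustrative, not part of this repo); the appended phrase is the one cited above:

```python
# Emotion prompting: append a psychologically relevant phrase to the task.
def emotion_prompt(task: str) -> str:
    # Any motivational phrase of personal relevance works the same way.
    return f"{task}\nThis is important to my career."


print(emotion_prompt("Write a cover letter for a data engineering role."))
```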

Role Prompting

Also known as persona prompting, it assigns a specific role to the GenAI in the prompt. For example, the user might prompt it to act like "Madonna" or a "travel writer". This can create more desirable outputs for open-ended tasks and, in some cases, improve accuracy on benchmarks.
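
A minimal sketch using the OpenAI Python SDK; the model name and persona are assumptions, not choices made by this repo. The persona is assigned via the system message:

```python
from openai import OpenAI


def role_prompt(persona: str, question: str) -> str:
    # Assign the persona via the system message, then ask the question.
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute your own
        messages=[
            {"role": "system", "content": f"You are {persona}."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


# e.g. role_prompt("a travel writer", "Describe a weekend in Lisbon.")
```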

Style Prompting

It involves specifying the desired style, tone, or genre in the prompt to shape the output of a GenAI. A similar effect can be achieved using role prompting.
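
A minimal sketch; the parameter names and phrasing are illustrative only:

```python
# Style prompting: state the desired style, tone, or genre in the prompt.
def style_prompt(task: str, style: str, tone: str) -> str:
    return f"Write in the style of {style}, with a {tone} tone.\n\n{task}"


print(style_prompt("Explain how photosynthesis works.", "a children's book", "playful"))
```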

System 2 Attention (S2A)

It first asks an LLM to rewrite the prompt and remove any information unrelated to the question therein. Then, it passes this new prompt into an LLM to retrieve a final response.
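
A two-pass sketch of the idea, assuming a hypothetical `complete(prompt) -> str` helper that wraps whatever LLM client you use; the rewrite instruction is illustrative:

```python
from typing import Callable


def system2_attention(prompt: str, complete: Callable[[str], str]) -> str:
    # Pass 1: ask the LLM to strip everything unrelated to the question.
    cleaned = complete(
        "Rewrite the following prompt so that it contains only the "
        "information needed to answer the question. Return only the "
        f"rewritten prompt.\n\n{prompt}"
    )
    # Pass 2: answer the cleaned prompt.
    return complete(cleaned)
```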

SimToM

It deals with complicated questions which involve multiple people or objects. Given the question, it attempts to establish the set of facts one person knows, then answers the question based only on those facts. This is a two-prompt process and can help eliminate the effect of irrelevant information in the prompt.
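
A two-prompt sketch, again assuming a hypothetical `complete(prompt) -> str` helper; the wording of each prompt is illustrative:

```python
from typing import Callable


def simtom(question: str, person: str, complete: Callable[[str], str]) -> str:
    # Prompt 1: establish only the facts this person knows.
    known_facts = complete(
        f"Given the question below, list only the facts that {person} knows. "
        f"Ignore anything {person} could not know.\n\n{question}"
    )
    # Prompt 2: answer using only those facts.
    return complete(
        f"Answer the question using only these facts:\n{known_facts}\n\n"
        f"Question: {question}"
    )
```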

Rephrase and Respond (RaR)

It instructs the LLM to rephrase and expand the question before generating the final answer. For example, it might add the following phrase to the question: "Rephrase and expand the question, and respond". This could all be done in a single pass, or the new question could be passed to the LLM separately. RaR has demonstrated improvements on multiple benchmarks.
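
A single-pass sketch; in the two-pass variant, the rephrased question would be sent to the LLM in a separate call:

```python
# Rephrase and Respond: ask the model to restate and expand the
# question before answering it, all in one prompt.
def rephrase_and_respond(question: str) -> str:
    return f"{question}\nRephrase and expand the question, and respond."
```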

Re-reading (Re2)

It adds the phrase "Read the question again:" to the prompt in addition to repeating the question. Although this is a simple technique, it has shown improvement on reasoning benchmarks, especially with complex questions.
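
A minimal sketch:

```python
# Re-reading: repeat the question after the cue "Read the question again:".
def re2(question: str) -> str:
    return f"{question}\nRead the question again: {question}"
```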

Self-Ask

It prompts LLMs to first decide if they need to ask follow-up questions for a given prompt. If so, the LLM generates these questions, answers them, and finally answers the original question.
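
A sketch of the control flow, assuming the same hypothetical `complete(prompt) -> str` helper; the exact wording and the string check are illustrative:

```python
from typing import Callable


def self_ask(question: str, complete: Callable[[str], str]) -> str:
    # Step 1: let the model decide whether follow-up questions are needed.
    followups = complete(
        "Do you need follow-up questions to answer the question below? "
        "If yes, list them. If no, reply exactly 'No follow-up needed.'\n\n"
        f"Question: {question}"
    )
    if "No follow-up needed" in followups:
        return complete(question)
    # Step 2: answer the follow-up questions.
    intermediate = complete(f"Answer each of these questions:\n{followups}")
    # Step 3: answer the original question using the intermediate answers.
    return complete(
        f"Given these intermediate answers:\n{intermediate}\n\n"
        f"Answer the original question: {question}"
    )
```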

Few Shot Prompting

Example Generation

Example Ordering

Exemplar Selection

Thought Generation

Chain-of-Thought (CoT)

Ensembling

COSP

DENSE

DiVeRSe

Max Mutual Information

MoRE

Self-Consistency

Universal Self-Consistency

USP

Prompt Paraphrasing

Self-Criticism

Chain-of-Verification

Self-Calibration

Self-Refine

Self-Verification

ReverseCoT

Cumulative Reasoning

Decomposition

DECOMP

Faithful CoT

Least-to-Most

Plan-and-Solve

Program-of-Thought

Recursion-of-Thought

Skeleton-of-Thought

Tree-of-Thought
