There are 27 repositories under the in-context-learning topic.
:sparkles::sparkles:Latest Papers and Datasets on Multimodal Large Language Models, and Their Evaluation.
An open-source framework for training large multimodal models.
Painter & SegGPT Series: Vision Foundation Models from BAAI
A summary of Prompt & LLM papers, open-source data & models, and AIGC applications.
A trend that started with "Chain of Thought Prompting Elicits Reasoning in Large Language Models".
Emu Series: Generative Multimodal Models from BAAI
Awesome resources for in-context learning and prompt engineering: mastery of LLMs such as ChatGPT, GPT-3, and FlanT5, with frequent, cutting-edge updates.
Must-read Papers on LLM Agents.
Reasoning in Large Language Models: Papers and Resources, including Chain-of-Thought, Instruction-Tuning and Multimodality.
Awesome-LLM-Robustness: a curated list of resources on uncertainty, reliability, and robustness in Large Language Models
This repository contains a collection of papers and resources on Reasoning in Large Language Models.
Awesome papers about generative Information Extraction (IE) using Large Language Models (LLMs)
Papers and Datasets on Instruction Tuning and Following. ✨✨✨
Research Trends in LLM-guided Multimodal Learning.
An Easy-to-use Instruction Processing Framework for LLMs.
A paper list for recommender-system pre-trained models
A curated list of awesome instruction tuning datasets, models, papers and repositories.
Official Code for "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents"
[ChatGPT4NLU] A Comparative Study on ChatGPT and Fine-tuned BERT
[NeurIPS2023] Official implementation and model release of the paper "What Makes Good Examples for Visual In-Context Learning?"
Grimoire is All You Need for Enhancing Large Language Models
[ICLR 2023] Code for our paper "Selective Annotation Makes Language Models Better Few-Shot Learners"
A curated list of papers and applications on tool learning.
Experiments and code to generate the GINC small-scale in-context learning dataset from "An Explanation for In-context Learning as Implicit Bayesian Inference"
[NeurIPS 2023 Main Track] This is the repository for the paper titled "Don't Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner"
A benchmark for evaluating the capabilities of large vision-language models (LVLMs)
A curated list of resources for graph prompting methods