There are 32 repositories under the instruction-following topic.
Code and documentation to train Stanford's Alpaca models, and generate the data.
:sparkles::sparkles: Latest Advances on Multimodal Large Language Models
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
Must-read Papers on LLM Agents.
An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast.
This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break down KD into Knowledge Elicitation and Distillation Algorithms, and explore the Skill & Vertical Distillation of LLMs.
A collection of open-source datasets to train instruction-following LLMs (ChatGPT, LLaMA, Alpaca)
A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data.
PhoGPT: Generative Pre-training for Vietnamese (2023)
Reading list on instruction tuning. A trend starting from Natural-Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022).
Instruction-based prompts for generating and classifying text.
Code to accompany the Universal Deep Research paper (https://arxiv.org/abs/2509.00244)
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
[NeurIPS'23] "MagicBrush: A Manually Annotated Dataset for Instruction-Guided Image Editing".
EVE Series: Encoder-Free Vision-Language Models from BAAI
[ICLR 2024] Mol-Instructions: A Large-Scale Biomolecular Instruction Dataset for Large Language Models
This framework works as a form of user/machine calibration, focusing on user context and user intent, deconstructing your ideas logically from A to B to Z.
Finetune LLaMA-7B with Chinese instruction datasets
[ACM Multimedia 2025 Datasets Track] EditWorld: Simulating World Dynamics for Instruction-Following Image Editing
This is the official GitHub repository for our survey paper "Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language Models".
[ACL 2024] FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models
MedAlign is a clinician-generated dataset for instruction following with electronic medical records.
WangChanGLM 🐘 - The Multilingual Instruction-Following Model
Awesome Instruction Editing. Image and Media Editing with Human Instructions. Instruction-Guided Image and Media Editing.
Awesome Multimodal Assistant is a curated list of multimodal chatbots/conversational assistants that utilize various modes of interaction, such as text, speech, images, and videos, to provide a seamless and versatile user experience.
Official repository for KoMT-Bench built by LG AI Research
This repo contains a list of channels and sources for learning about LLMs
Instruction Following Agents with Multimodal Transformers
🌱 DreamerGPT: Instruction fine-tuning for Chinese large language models
A benchmark for evaluating the capabilities of large vision-language models (LVLMs)
Code and models of MOCA (Modular Object-Centric Approach) proposed in "Factorizing Perception and Policy for Interactive Instruction Following" (ICCV 2021). We address the task of long-horizon instruction following with a modular architecture that decouples a task into visual perception and action policy prediction.
A Multi-Modal AI Copilot for Single-Cell Analysis with Instruction Following