The following repositories are listed under the instruction-following topic.
Code and documentation to train Stanford's Alpaca models and generate the data.
✨✨ Latest Advances on Multimodal Large Language Models
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
Must-read Papers on LLM Agents.
An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast.
A collection of open-source datasets for training instruction-following LLMs (ChatGPT, LLaMA, Alpaca).
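Several dataset repositories in this list follow the Alpaca record schema (instruction / input / output). Below is a minimal sketch of that format and one conventional way to render a record into a training prompt; the template text follows the published Stanford Alpaca repo, while the `to_prompt` helper is illustrative, not any repo's actual API:

```python
# Minimal sketch of the Alpaca-style instruction record.
# The schema and template text follow Stanford Alpaca's published format;
# the to_prompt helper itself is illustrative, not any repo's real API.

record = {
    "instruction": "Classify the sentiment of the sentence.",
    "input": "The movie was a delightful surprise.",
    "output": "positive",
}

def to_prompt(rec: dict) -> str:
    """Render one record into a supervised-finetuning prompt."""
    if rec.get("input"):
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{rec['instruction']}\n\n"
            f"### Input:\n{rec['input']}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{rec['instruction']}\n\n"
        "### Response:\n"
    )

print(to_prompt(record) + record["output"])  # full training example
```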
A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data.
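As background for what such an RLHF pipeline trains, the usual first stage fits a reward model on pairwise preferences with the Bradley-Terry objective. A minimal PyTorch sketch of that standard loss follows; all names are illustrative, and this is the textbook formulation, not this repository's code:

```python
import torch
import torch.nn.functional as F

# Standard Bradley-Terry pairwise loss for reward-model fitting:
# minimize -log sigmoid(r(chosen) - r(rejected)). Illustrative sketch only.

def reward_model_loss(chosen_rewards: torch.Tensor,
                      rejected_rewards: torch.Tensor) -> torch.Tensor:
    """Each tensor holds one scalar reward per preference pair, shape (batch,)."""
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage: rewards that already rank chosen above rejected give a small loss.
chosen = torch.tensor([1.2, 0.8, 2.0])
rejected = torch.tensor([0.1, -0.3, 1.5])
print(reward_model_loss(chosen, rejected))  # ~0.35
```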
PhoGPT: Generative Pre-training for Vietnamese (2023)
Reading list for instruction tuning, a trend that started with Natural-Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022).
A collection of ChatGPT and GPT-3.5 instruction-based prompts for generating and classifying text.
This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break down KD into Knowledge Elicitation and Distillation Algorithms, and explore the Skill & Vertical Distillation of LLMs.
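For context on the distillation-algorithms side of that survey, the classic white-box objective is a temperature-scaled KL divergence between teacher and student token distributions (Hinton et al., 2015). A generic PyTorch sketch of that objective, not code from the survey:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Temperature-scaled KL(teacher || student) over the vocabulary.

    Logits have shape (batch, seq_len, vocab). The T**2 factor keeps
    gradient magnitudes comparable across temperatures.
    """
    vocab = student_logits.size(-1)
    s = F.log_softmax(student_logits / temperature, dim=-1).view(-1, vocab)
    t = F.softmax(teacher_logits / temperature, dim=-1).view(-1, vocab)
    # "batchmean" averages the per-token KL over all token positions.
    return F.kl_div(s, t, reduction="batchmean") * temperature ** 2

# Toy usage with random logits: one sequence of 4 tokens, vocab of 10.
student = torch.randn(1, 4, 10)
teacher = torch.randn(1, 4, 10)
print(distillation_loss(student, teacher))
```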
[NeurIPS'23] "MagicBrush: A Manually Annotated Dataset for Instruction-Guided Image Editing".
[ICLR 2024] Mol-Instructions: A Large-Scale Biomolecular Instruction Dataset for Large Language Models
Code for "Lion: Adversarial Distillation of Proprietary Large Language Models (EMNLP 2023)"
Finetune LLaMA-7B with Chinese instruction datasets
BigCodeBench: The Next Generation of HumanEval
EditWorld: Simulating World Dynamics for Instruction-Following Image Editing
WangChanGLM 🐘 - The Multilingual Instruction-Following Model
EVE Series: Encoder-Free Vision-Language Models from BAAI
Awesome Multimodal Assistant is a curated list of multimodal chatbots/conversational assistants that utilize various modes of interaction, such as text, speech, images, and videos, to provide a seamless and versatile user experience.
Code for "FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models (ACL 2024)"
Instruction-Following Agents with Multimodal Transformers
🌱 DreamerGPT (梦想家): instruction fine-tuning for Chinese large language models
Code and models of MOCA (Modular Object-Centric Approach) proposed in "Factorizing Perception and Policy for Interactive Instruction Following" (ICCV 2021). We address the task of long horizon instruction following with a modular architecture that decouples a task into visual perception and action policy prediction.
A benchmark for evaluating the capabilities of large vision-language models (LVLMs)
The official repo for "Contrastive Vision-Language Alignment Makes Efficient Instruction Learner".
Collects and maintains high-quality instruction-finetuning datasets across domains and languages.
A better Alpaca model trained with less data (only 9k instructions from the original set)
Is In-Context Learning Sufficient for Instruction Following in LLMs?
[arXiv 2024] Official Implementation of the paper: "Towards Robust Instruction Tuning on Multimodal Large Language Models"
This repo contains a list of channels and sources for learning about LLMs
Awesome Instruction Editing: a curated list on instruction-guided image and media editing.
Code for the Paper "Grounding Hindsight Instructions in Multi-Goal Reinforcement Learning for Robotics"