There are 35 repositories under the instruction-tuning topic.
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
✨✨ Latest Advances on Multimodal Large Language Models
The official GitHub page for the survey paper "A Survey of Large Language Models".
Data processing for and with foundation models! 🍎 🍋 🌽 ➡️ ➡️🍸 🍹 🍷
Aligning pretrained language models with instruction data generated by themselves.
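The Self-Instruct repository above aligns a model with instruction data it generates for itself. A minimal, model-free sketch of that bootstrapping loop follows; `generate`, the seed tasks, and the overlap filter are illustrative stand-ins (the real pipeline prompts an LLM and deduplicates with a ROUGE-L threshold), not the repository's actual code:

```python
import random

# Hedged sketch of the Self-Instruct bootstrapping loop.
# `generate` stands in for a real LLM call; `too_similar` is a crude
# word-overlap filter standing in for ROUGE-L-based deduplication.

SEED_TASKS = [
    "Translate the sentence into French.",
    "Summarize the paragraph in one line.",
    "List three synonyms for the given word.",
]

def generate(prompt_examples):
    # Hypothetical stub: a real implementation would prompt an LLM with
    # the in-context examples and parse a new instruction from the output.
    topic = random.choice(["rivers", "compilers", "chess"])
    return f"Write a question about {topic} and answer it."

def too_similar(candidate, pool, threshold=0.7):
    # Keep only candidates that share few words with existing tasks.
    cand = set(candidate.lower().split())
    return any(
        len(cand & set(t.lower().split())) / max(len(cand), 1) > threshold
        for t in pool
    )

def self_instruct(rounds=10, seed=0):
    random.seed(seed)
    pool = list(SEED_TASKS)
    for _ in range(rounds):
        examples = random.sample(pool, k=min(3, len(pool)))  # in-context seeds
        candidate = generate(examples)
        if not too_similar(candidate, pool):                 # novelty filter
            pool.append(candidate)
    return pool

tasks = self_instruct()
assert len(tasks) > len(SEED_TASKS)  # the pool grew beyond the seeds
```

The filter is what keeps the pool diverse: repeated candidates about the same topic collapse to near-duplicates and are dropped, while genuinely new instructions accumulate.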
Instruction Tuning with GPT-4
Code and models for ICML 2024 paper, NExT-GPT: Any-to-Any Multimodal Large Language Model
🦦 Otter, a multi-modal model based on OpenFlamingo (open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing improved instruction-following and in-context learning ability.
【EMNLP 2024🔥】Video-LLaVA: Learning United Visual Representation by Alignment Before Projection
A summary of prompt & LLM papers, open-source data & models, and AIGC applications
InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions
We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-Tuning) for easy use, building a fine-tuning platform that makes it easy for researchers to get started with large models. We welcome open-source enthusiasts to open any meaningful PR on this repo and integrate as many LLM-related technologies as possible!
[ECCV2024] Video Foundation Models & Data for Multimodal Understanding
Cambrian-1 is a family of multimodal LLMs with a vision-centric design.
Synthetic data curation for post-training and structured data extraction
A collection of open-source datasets to train instruction-following LLMs (ChatGPT, LLaMA, Alpaca)
DataDreamer: Prompt. Generate Synthetic Data. Train & Align Models. 🤖💤
[ICML2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation
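DoRA, listed above, decomposes a pretrained weight into a magnitude vector and a direction, then applies a LoRA-style low-rank update to the direction only. A minimal NumPy sketch of that composition, under the usual column-wise normalization; shapes and names here are illustrative, not the official implementation:

```python
import numpy as np

# Hedged sketch of the DoRA idea (Weight-Decomposed Low-Rank Adaptation):
# the pretrained weight W0 is split into a learned magnitude vector m and
# a direction, and only the direction receives the low-rank update B @ A.

def dora_weight(W0, A, B, m):
    """Compose the adapted weight: magnitude times unit direction of (W0 + B@A)."""
    V = W0 + B @ A                                  # low-rank directional update
    col_norm = np.linalg.norm(V, axis=0, keepdims=True)
    return m * (V / col_norm)                       # rescale each column by m

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 6, 2                            # toy sizes, rank r << min(d_out, d_in)
W0 = rng.standard_normal((d_out, d_in))
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))                            # B starts at zero, as in LoRA
m = np.linalg.norm(W0, axis=0, keepdims=True)       # magnitude initialized from W0

W = dora_weight(W0, A, B, m)
# With B = 0 the adapted weight reproduces W0 exactly.
assert np.allclose(W, W0)
```

Initializing `B` to zero and `m` to the column norms of `W0` makes the adapter a no-op at the start of training, so fine-tuning departs smoothly from the pretrained model.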
DISC-FinLLM, a Chinese financial large language model (LLM) designed to provide users with professional, intelligent, and comprehensive financial consulting services in financial scenarios.
Generative Representational Instruction Tuning
Crosslingual Generalization through Multitask Finetuning
DialogStudio: Towards Richest and Most Diverse Unified Dataset Collection and Instruction-Aware Models for Conversational AI
Papers and Datasets on Instruction Tuning and Following. ✨✨✨
MindSpore online courses: Step into LLM
[ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning
Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey
CIKM 2023 Best Demo Paper Award. HugNLP is a unified and comprehensive NLP library based on Hugging Face Transformers. Start hugging NLP now! 😊
Research Trends in LLM-guided Multimodal Learning.
A curated list of awesome instruction tuning datasets, models, papers and repositories.
Collection of training data management explorations for large language models