
KoLLaVA: Korean Large Language-and-Vision Assistant (feat.LLaVA)


๐Ÿ”๏ธ KoLLaVA

[Dataset] [Model] [Paper Review]

  • Korean Large Language and Vision Assistant (feat. LLaVA)
  • A multimodal model capable of conversing in Korean about images


KoLLaVA Logo

Update Logs

  • 24.01.02
  • 23.11.30
  • 23.08.05
    • 💥 🤗 KoLLaVA-LLaMA-v2-7b-qlora-4bit released: 🤗 Llama-2-ko-7b-Chat trained on KoLLaVA-Instruct-150k with QLoRA for 1 epoch (4x RTX 3090 GPUs, about 10 hours)
    • 💥 Incorporated LLaVA's recent updates. Shared LLaMA-2- and QLoRA-based code and training procedure
  • 23.07.01
    • 💥 🤗 KoLLaVA-KULLM-13B-8bit released: KULLM trained on KoLLaVA-Instruct-150k

      → Removed because its performance fell short of expectations. A better 13B model will be released later.

    • 💥 The demo is temporarily suspended due to cloud GPU rental costs 🥲

  • 23.06.24
    • 💥 🤗 Ko-Otter-9B-LACR-v0 released: Otter trained on the KoLLaVA_Complex_Resoning_77k dataset
  • 23.06.18
    • 💥 Opened a demo built with Gradio! (1x RTX 3090 GPU)
  • 23.06.12
    • 💥 Released 🤗 KoLLaVA-KoVicuna-7B, trained on a Korean visual instruction dataset
    • 💥 Inference example using Colab (Pro): Open In Colab
  • 23.06.09

Visual Chat Example


Contents

Install

The steps below were written for Linux. If you are testing on macOS, see here.

  1. Clone the repository and move into the directory
 git clone https://github.com/tabtoyou/KoLLaVA.git
 cd KoLLaVA
  1. Install packages
 conda create -n kollava python=3.10 -y
 conda activate kollava
 pip install --upgrade pip 
 pip install -e .
  1. Install additional packages if you will run training
pip install -e ".[train]"
pip install flash-attn --no-build-isolation

Inference

You can hold a multi-turn conversation in a terminal with the command below. If you are using an Apple device with an M1/M2 chip, you can point it at the mps device with the --device flag (--device mps). If you are testing on macOS, see here.

python -m llava.serve.cli \
    --model-path tabtoyou/KoLLaVA-v1.5-Synatra-7b \
    --image-file "https://llava-vl.github.io/static/images/view.jpg"

Training

LLaVA/KoLLaVA training proceeds in two stages: (1) Pretraining (feature alignment stage): using a filtered 595K subset of the CC3M dataset, train the projection layer that connects the frozen pretrained vision encoder to the frozen LLM; (2) Finetuning (visual instruction tuning stage): train on multimodal instructions using 150K multimodal instruction-following samples plus about 460K VQA samples obtained from academic-oriented tasks and AI-Hub.

KoLLaVA-v1.5 was trained on 8 A100 GPUs (80GB). To train with fewer GPUs, reduce per_device_train_batch_size and increase gradient_accumulation_steps accordingly. To reproduce our results, keep the global batch size (per_device_train_batch_size x gradient_accumulation_steps x num_gpus) consistent with the hyperparameters below.
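The relationship above can be sketched as a quick check (a hedged example: the per-device batch size of 16 is illustrative, not prescribed by the scripts; the global batch sizes come from the hyperparameter tables in this README):

```python
# Sketch: pick gradient_accumulation_steps so that the global batch size
# (per_device_train_batch_size x gradient_accumulation_steps x num_gpus)
# stays fixed when the GPU count changes.

def grad_accum_steps(global_batch: int, per_device: int, num_gpus: int) -> int:
    assert global_batch % (per_device * num_gpus) == 0, "must divide evenly"
    return global_batch // (per_device * num_gpus)

# Finetuning target for KoLLaVA-v1.5-Synatra-7B is a global batch size of 128.
print(grad_accum_steps(128, per_device=16, num_gpus=8))  # 1
print(grad_accum_steps(128, per_device=16, num_gpus=2))  # 4
```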

Hyperparameters

  1. Pretraining
| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | --- | --- | --- | --- | --- |
| KoLLaVA-v1.5-Synatra-7B | 256 | 1e-3 | 1 | 2048 | 0 |
  1. Finetuning
| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | --- | --- | --- | --- | --- |
| KoLLaVA-v1.5-Synatra-7B | 128 | 2e-5 | 1 | 2048 | 0 |

Download Synatra checkpoints (automatically)

The weights of the base LLM, Synatra-7b, are downloaded automatically when you run the provided training scripts.

Pretrain (feature alignment)

Pretraining took about 4 hours on 8 A100 GPUs (80GB).

Prepare Pretraining Dataset

🤗 KoLLaVA-CC3M-Pretrain-595K: Korean captions extracted from Ko-CC3M, aligned to the indices of the LLaVA pretraining dataset

| Data | English | Korean | Size |
| --- | --- | --- | --- |
| CC3M Concept-balanced 595K | chat.json | ko_chat.json | 211 MB / 229 MB |
Details

     The pretraining dataset was created by filtering CC3M and consists of 595K samples. For a detailed description of the dataset structure and how to download the English version, see here; for the Korean dataset, see here. (Note: this is not a DeepL translation, so quality may be somewhat lower.)

License: subject to the CC-3M license

Image Dataset

images.zip - The LLaVA authors also shared the image files used for pretraining. These images must not be used for any purpose other than research, and their use must comply with the CC3M license. Any image may be removed at any time upon request by the original CC3M dataset owners or the owners of the referenced images.

Training script with DeepSpeed ZeRO-2: pretrain.sh.

  • --mm_projector_type mlp2x_gelu: the two-layer MLP vision-language connector.
  • --vision_tower openai/clip-vit-large-patch14-336: CLIP ViT-L/14 336px.

Run

sh scripts/v1_5/pretrain.sh

Visual Instruction Tuning

1. Prepare data

Instruction tuning data: 🤗 KoLLaVA-Instruct-581k

์œ„์˜ ๋ฐ์ดํ„ฐ๋ฅผ ๋ชจ๋‘ ๋‹ค์šด๋ฐ›์€ ๋’ค, /workspace/data ๋””๋ ‰ํ† ๋ฆฌ๋ฅผ ์•„๋ž˜์™€ ๊ฐ™์ด ๊ตฌ์„ฑํ•˜์„ธ์š”. ์ด๋•Œ workspace๋Š” ๊ฐ์ž์˜ ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ๋ฅผ ์ €์žฅํ•˜๋Š” ๋””๋ ‰ํ† ๋ฆฌ ์ด๋ฆ„์ž…๋‹ˆ๋‹ค.

  • ์ฃผ์˜ : COCO,GQA,VG ๋ฐ์ดํ„ฐ์…‹์€ ๋ชจ๋‘ academic-oriented tasks์ธ ์˜์–ด ๋ฐ์ดํ„ฐ์…‹์ด๋ฉฐ, ์ด๋ฅผ DeepL๋กœ ๋ฒˆ์—ญํ–ˆ์Šต๋‹ˆ๋‹ค. ๋ฒˆ์—ญ ๊ณผ์ •์—์„œ ์˜ค๋ฅ˜๊ฐ€ ์žˆ์„ ์ˆ˜ ์žˆ์œผ๋ฉฐ, VG์˜ ๊ฒฝ์šฐ ์˜์–ด ๋‹จ์–ด OCR ๋ฐ Bounding Box์— ๋Œ€ํ•œ ์ •๋ณด๋„ ํฌํ•จํ•ฉ๋‹ˆ๋‹ค. EKVQA๋Š” AI-Hub์—์„œ ์ œ๊ณตํ•˜๋Š” ์™ธ๋ถ€ ์ง€์‹ ๊ธฐ๋ฐ˜ ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ์งˆ์˜์‘๋‹ต ๋ฐ์ดํ„ฐ์ด๋ฉฐ ์ƒ์‹์ ์ธ ์ง€์‹์ด๋‚˜ ๋ฐฐ๊ฒฝ์ง€์‹์„ ๋ฐ”ํƒ•์œผ๋กœ ์ด๋ฏธ์ง€์— ๊ด€๋ จํ•œ ์งˆ๋ฌธ์— ๋Œ€ํ•ด ๋‹ต์„ ํ•˜๋Š” task๋กœ, Instruction-following data ํ˜•์‹์œผ๋กœ ์žฌ๊ตฌ์„ฑํ–ˆ์Šต๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ์…‹์— ๋Œ€ํ•œ ์ €์ž‘๊ถŒ์€ ๊ฐ ๋ฐ์ดํ„ฐ์…‹์˜ license ๊ทœ์ •์„ ๋”ฐ๋ฆ…๋‹ˆ๋‹ค.
โ”œโ”€โ”€ coco
โ”‚   โ””โ”€โ”€ train2017
โ”œโ”€โ”€ gqa
โ”‚   โ””โ”€โ”€ images
โ”œโ”€โ”€ vg
โ”‚   โ”œโ”€โ”€ VG_100K
โ”‚   โ””โ”€โ”€ VG_100K_2
โ””โ”€โ”€ ekvqa
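The layout above can be created with a small helper (a hedged sketch: the subdirectory names come from the tree, but point `root` at your own data directory; this README uses /workspace/data):

```python
# Sketch: create the image-data layout shown above.
import tempfile
from pathlib import Path

SUBDIRS = ["coco/train2017", "gqa/images", "vg/VG_100K", "vg/VG_100K_2", "ekvqa"]

def make_layout(root: str) -> list:
    """Create each expected subdirectory under root and return the paths."""
    created = []
    for sub in SUBDIRS:
        d = Path(root) / sub
        d.mkdir(parents=True, exist_ok=True)
        created.append(d)
    return created

# Demo against a temporary directory; use "/workspace/data" for a real setup.
dirs = make_layout(tempfile.mkdtemp())
print(len(dirs))  # 5
```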
KoLLaVA-v1 Visual Instruction Dataset

Visual Instruction Dataset

🤗 KoLLaVA-Instruct-150K: LLaVA's instruction-following dataset translated with DeepL

| English | Korean |
| --- | --- |
| llava_instruct_150k.json | ko_llava_instruct_150k.json |
| conversation_58k.json | ko_conversation_58k.json |
| detail_23k.json | ko_detail_23k.json |
| complex_reasoning_77k.json | ko_complex_reasoning_77k.json |
Details

     The instruction-following data used for visual instruction tuning was generated with GPT-4, giving GPT-4 text-only input (no images). Specifically, the instruction-following data was generated using only the text information (captions, bounding boxes) of COCO, an image-text pair dataset. If you are curious about this data-generation pipeline, see the blog post.

License: Attribution-NonCommercial 4.0 International | subject to the OpenAI policy

Image Dataset

The image dataset used for finetuning is COCO-train2014.

wget http://images.cocodataset.org/zips/train2014.zip

2. Start training!

Pretrain์„ ํ†ตํ•ด projection layer๋ฅผ ์ƒ์„ฑํ•˜๊ฑฐ๋‚˜, ์ €ํฌ๊ฐ€ ๋ฏธ๋ฆฌ pretrainํ•œ KoLLaVA-v1.5-mlp2x-336px-pretrain-Synatra-7b๋ฅผ ๋‹ค์šด๋กœ๋“œ ๋ฐ›์œผ์„ธ์š”.

Visual instruction tuning์€ 8x A100 (80G)์—์„œ 7B ๊ธฐ์ค€ ๋Œ€๋žต 13์‹œ๊ฐ„ ํ•™์Šตํ–ˆ์Šต๋‹ˆ๋‹ค.

Training script with DeepSpeed ZeRO-3: finetune.sh.

Run

sh scripts/v1_5/finetune.sh

If you do not have enough GPU memory:

  • LoRA: finetune_lora.sh. Keep the global batch size (per_device_train_batch_size x gradient_accumulation_steps x num_gpus) the same as in the scripts above.

New options to note:

  • --mm_projector_type mlp2x_gelu: the two-layer MLP vision-language connector.
  • --vision_tower openai/clip-vit-large-patch14-336: CLIP ViT-L/14 336px.
  • --image_aspect_ratio pad: this pads the non-square images to square, instead of cropping them; it slightly reduces hallucination.
  • --group_by_modality_length True: this should only be used when your instruction tuning dataset contains both language (e.g. ShareGPT) and multimodal (e.g. LLaVA-Instruct). It makes the training sampler only sample a single modality (either image or language) during training, which we observe to speed up training by ~25%, and does not affect the final outcome.
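The idea behind --group_by_modality_length can be illustrated with a simplified sketch (an assumption-laden toy version: the real LLaVA sampler also groups by sequence length and shuffles; this only shows the single-modality batching):

```python
# Sketch: build batches that each contain only one modality, so a batch is
# either all image samples or all language-only samples.
def modality_batches(samples: list, batch_size: int) -> list:
    image_idx = [i for i, s in enumerate(samples) if "image" in s]
    text_idx = [i for i, s in enumerate(samples) if "image" not in s]
    batches = []
    for idx in (image_idx, text_idx):
        # Chunk each modality's indices into homogeneous batches.
        batches += [idx[i:i + batch_size] for i in range(0, len(idx), batch_size)]
    return batches

demo = [{"image": "a.jpg"}, {}, {"image": "b.jpg"}, {}, {}]
print(modality_batches(demo, 2))  # [[0, 2], [1, 3], [4]]
```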
KoLLaVA-v1 Pretrain

     ํด๋ผ์šฐ๋“œ GPU ๋Œ€์—ฌ ์„œ๋น„์Šค์ธ vast.ai๋ฅผ ์ด์šฉํ•ด ํ•™์Šต์„ ์ง„ํ–‰ํ–ˆ์Šต๋‹ˆ๋‹ค. KoLLaVA-KoVicuna-7b ๋ชจ๋ธ ํ•™์Šต ์‹œ 4๊ฐœ์˜ A100(80GB) GPU๋ฅผ ๋Œ€์—ฌํ–ˆ์œผ๋ฉฐ Disk Space๋Š” 200GB ์ด์ƒ์„ ์ถ”์ฒœ๋“œ๋ฆฝ๋‹ˆ๋‹ค(์‹œ๊ฐ„ ๋‹น ์•ฝ $7.44). ์ธ์Šคํ„ด์Šค ์ƒ์„ฑ ์‹œ Docker image๋กœ pytorch/pytorch:2.0.1-cuda11.7-cudnn8-devel ๋ฅผ ์‚ฌ์šฉํ–ˆ์Šต๋‹ˆ๋‹ค.

Additional note: training the KoLLaVA-LLaMA-v2-7b-qlora model used 4 RTX 3090 (24G) GPUs.

To-do

  • Finetuning ๋ฐ์ดํ„ฐ์…‹ ํ•œ๊ตญ์–ด ๋ฒˆ์—ญ (LLaVA-Instruct-150K)
  • Pretraining ๋ฐ์ดํ„ฐ์…‹ ํ•œ๊ตญ์–ด ๋ฒˆ์—ญ (LLaVA-CC3M-Pretrain-595K)
  • LLaVA ๋ชจ๋ธ์—์„œ Vicuna -> KoVicuna-7B ๋Œ€์ฒด ํ›„ ํ•™์Šต
  • Ko-Otter ๋ชจ๋ธ ํ•™์Šต ๋ฐ ํ—ˆ๊น…ํŽ˜์ด์Šค ๊ณต๊ฐœ
  • KoLLaVA-13B ๋ชจ๋ธ ํ•™์Šต ๋ฐ ํ—ˆ๊น…ํŽ˜์ด์Šค ๊ณต๊ฐœ
  • QLoRA ์ด์šฉํ•ด low GPU memory์—์„œ๋„ ํ•™์Šตํ•  ์ˆ˜ ์žˆ๋„๋ก (RTX 3090 ๋“ฑ)
  • LLaVA-v1.5 ์ฝ”๋“œ ๋ฐ˜์˜ ๋ฐ ๋ชจ๋ธ ๊ณต๊ฐœ
  • KoLLaVA์˜ linear layer๋ฅผ Q-former๋กœ ์—…๋ฐ์ดํŠธ(InstructBLIP)

Team

The KoLLaVA-v1 project was carried out together with the members of a deep-learning study group.

Members: Jeonghyeon, Seongyeon, Seonghwan, Seungwoo, Seonghun, Taebaek

The KoLLaVA-v1.5 project was carried out with the support of 복지이십사.


🌋 LLaVA: Large Language and Vision Assistant

Visual instruction tuning towards large language and vision models with GPT-4 level capabilities.

[Project Page] [Paper] [Demo] [Data] [Model]

Visual Instruction Tuning
Haotian Liu*, Chunyuan Li*, Qingyang Wu, Yong Jae Lee (*Equal Contribution)

Usage and License Notices: The data, code, and checkpoints are intended and licensed for research use only. They are also restricted to uses that follow the license agreements of LLaMA, Vicuna, and GPT-4. The dataset is CC BY NC 4.0 (allowing only non-commercial use), and models trained using the dataset should not be used outside of research purposes.

Contents

Data Download

| Data file name | Size |
| --- | --- |
| llava_instruct_150k.json | 229 MB |
| llava_instruct_80k.json | 229 MB |
| conversation_58k.json | 126 MB |
| detail_23k.json | 20.5 MB |
| complex_reasoning_77k.json | 79.6 MB |

To download our language-image multimodal instruction-following dataset LLaVA-Instruct-150K, please run the following script:

sh download_data.sh

Pretraining Dataset

The pretraining dataset used in this release is a subset of CC-3M dataset, filtered with a more balanced concept coverage distribution. Please see here for a detailed description on the dataset structure and how to download the images.

If you already have CC-3M dataset on your disk, the image names follow this format: GCC_train_000000000.jpg. You may edit the image field correspondingly if necessary.
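As a quick sanity check before editing, a hedged sketch that lists chat.json entries whose image file is missing on disk (it assumes each record carries an "image" field with a GCC_train_*.jpg name, as described above; verify the field name against your own copy):

```python
# Sketch: report chat.json records whose referenced image is missing on disk.
import json
import tempfile
from pathlib import Path

def missing_images(chat_json: str, image_dir: str) -> list:
    records = json.loads(Path(chat_json).read_text())
    root = Path(image_dir)
    return [r["image"] for r in records if not (root / r["image"]).exists()]

# Tiny self-contained demo with a synthetic chat.json:
tmp = Path(tempfile.mkdtemp())
(tmp / "GCC_train_000000000.jpg").touch()
(tmp / "chat.json").write_text(json.dumps(
    [{"image": "GCC_train_000000000.jpg"}, {"image": "GCC_train_000000001.jpg"}]))
print(missing_images(str(tmp / "chat.json"), str(tmp)))  # ['GCC_train_000000001.jpg']
```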

| Data | Chat File | Meta Data | Size |
| --- | --- | --- | --- |
| CC-3M Concept-balanced 595K | chat.json | metadata.json | 211 MB |
| LAION/CC/SBU BLIP-Caption Concept-balanced 558K | blip_laion_cc_sbu_558k.json | metadata.json | 181 MB |

GPT-4 Prompts

We provide our prompts and few-shot samples for GPT-4 queries, to better facilitate research in this domain. Please check out the prompts folder for three kinds of questions: conversation, detail description, and complex reasoning.

They are organized in a format of system_message.txt for system message, pairs of abc_caps.txt for few-shot sample user input, and abc_conv.txt for few-shot sample reference output.

Note that you may find them in different formats. For example, conversation is in jsonl, and detail description is answer-only. The format we selected in our preliminary experiments works slightly better than a limited set of alternatives we tried: jsonl, a more natural format, and answer-only. If interested, you may try other variants or conduct a more careful study of this. Contributions are welcome!
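The layout described above can be assembled into a query roughly like this (a sketch under assumptions: the chat-message structure and the caps/conv pairing logic are inferred from the description, not taken from the repository's actual query code):

```python
# Sketch: build a GPT-4 query from the prompts-folder layout described above:
# system_message.txt, plus paired *_caps.txt (few-shot user input) and
# *_conv.txt (few-shot reference output) files.
import tempfile
from pathlib import Path

def build_messages(prompt_dir: str, query_caps: str) -> list:
    d = Path(prompt_dir)
    messages = [{"role": "system", "content": (d / "system_message.txt").read_text()}]
    for caps in sorted(d.glob("*_caps.txt")):
        # Each few-shot sample pairs a caps file with its conv file.
        conv = caps.with_name(caps.name.replace("_caps", "_conv"))
        messages.append({"role": "user", "content": caps.read_text()})
        messages.append({"role": "assistant", "content": conv.read_text()})
    messages.append({"role": "user", "content": query_caps})
    return messages

# Demo with a synthetic prompts folder:
tmp = Path(tempfile.mkdtemp())
(tmp / "system_message.txt").write_text("You are an AI visual assistant.")
(tmp / "000_caps.txt").write_text("a dog running on a beach")
(tmp / "000_conv.txt").write_text("Q: What animal is shown? A: A dog.")
msgs = build_messages(str(tmp), "two people hiking a snowy ridge")
print(len(msgs))  # 4
```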

Install

  1. Clone this repository and navigate to LLaVA folder
git clone https://github.com/haotian-liu/LLaVA.git
cd LLaVA
  1. Install Package
conda create -n llava python=3.10 -y
conda activate llava
pip install --upgrade pip  # enable PEP 660 support
pip install -e .

NOTE: [Update 4/30/23] We have successfully moved the LLaVA framework to this repo, without needing a specially modified transformers package from us. If you installed our repo before 4/30/23, please reinstall transformers following the instructions here.

  1. Install additional packages for training cases
pip install ninja
pip install flash-attn==1.0.2

Upgrade to v0.1

NOTE: If you install our package before 4/30/23, please make sure to execute the command below to correctly upgrade to v0.1. You may try a clean install as well.

git pull
pip uninstall transformers
pip install git+https://github.com/huggingface/transformers@cae78c46
pip install -e .

LLaVA Weights

We release LLaVA weights as delta weights to comply with the LLaMA model license. You can add our delta to the original LLaMA weights to obtain the LLaVA weights.

Instructions:

  1. Get the original LLaMA weights in the huggingface format by following the instructions here.
  2. Use the following scripts to get LLaVA weights by applying our delta (13b-v0, 7b-v0, lightning-7B-v1-1). It will automatically download delta weights from our Hugging Face account.

LLaVA-13B

This conversion command needs around 60 GB of CPU RAM.

python3 -m llava.model.apply_delta \
    --base /path/to/llama-13b \
    --target /output/path/to/LLaVA-13B-v0 \
    --delta liuhaotian/LLaVA-13b-delta-v0

LLaVA-7B

This conversion command needs around 30 GB of CPU RAM.

python3 -m llava.model.apply_delta \
    --base /path/to/llama-7b \
    --target /output/path/to/LLaVA-7B-v0 \
    --delta liuhaotian/LLaVA-7b-delta-v0

LLaVA pretrained projector weights

The initial release is pretrained on LLaVA-filtered CC3M 595K with 1 epoch. The pretrained weights are released here.

You may perform instruction tuning on our pretrained checkpoints, by using our visual instruction tuning data following the instructions here.

Serving

Web UI

Launch a controller

python -m llava.serve.controller --host 0.0.0.0 --port 10000

Launch a model worker

python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path ./checkpoints/LLaVA-13B-v0 --multi-modal

Wait until the process finishes loading the model and you see "Uvicorn running on ...".

Launch a model worker (Multiple GPUs, when GPU VRAM <= 24GB)

If the VRAM of your GPU is less than 24GB (e.g., RTX 3090, RTX 4090, etc.), you may try running it with multiple GPUs.

python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path ./checkpoints/LLaVA-13B-v0 --multi-modal --num-gpus 2

Wait until the process finishes loading the model and you see "Uvicorn running on ...".

Launch a gradio web server.

python -m llava.serve.gradio_web_server --controller http://localhost:10000

You can open your browser and chat with a model now.

CLI Inference

A starting script for inference with LLaVA without the need for the Gradio interface. The current implementation only supports a single-turn Q&A session, and an interactive CLI is WIP. This also serves as an example for users to build customized inference scripts.

python -m llava.eval.run_llava \
    --model-name /path/to/LLaVA-13B-v0 \
    --image-file "https://llava-vl.github.io/static/images/view.jpg" \
    --query "What are the things I should be cautious about when I visit here?"

Example output (varies in different runs):

When visiting this picturesque location with a serene lake and a wooden pier extending over the water, one should be cautious about various safety aspects. Some important considerations include:

  1. Ensuring that the pier is structurally sound and stable, as old or weakened pier structures might not support the weight of visitors.
  2. Being aware of the water depth around the pier and lake, as sudden drop-offs or strong currents may pose a risk to swimmers, boaters, or those who venture too close to the edge.
  3. Staying vigilant about the presence of wildlife in the area, such as slippery, stealthy fish or other animals that might cause harm or inconvenience.
  4. Maintaining a safe distance from the water's edge, particularly for children, elderly individuals, or those who are not strong swimmers.
  5. Following any posted signs or guidelines related to safety and the use of the pier and surrounding areas.

By considering these safety precautions, visitors can enjoy the natural beauty of the location while minimizing risks and ensuring a safe and pleasant experience.

Evaluation

GPT-assisted Evaluation

Our GPT-assisted evaluation pipeline for multimodal modeling is provided for a comprehensive understanding of the capabilities of vision-language models. Please see our paper for more details.

  1. Generate LLaVA responses
python model_vqa.py \
    --model-name ./checkpoints/LLaVA-13B-v0 \
    --question-file \
    playground/data/coco2014_val_qa_eval/qa90_questions.jsonl \
    --image-folder \
    /path/to/coco2014_val \
    --answers-file \
    /path/to/answer-file.jsonl
  1. Evaluate the generated responses. In our case, answer-file-1.jsonl is the response generated by text-only GPT-4 (0314), with the context captions/boxes provided.
OPENAI_API_KEY="sk-***********************************" python eval_gpt_review_visual.py \
    --question playground/data/coco2014_val_qa_eval/qa90_questions.jsonl \
    --context table/caps_boxes_coco2014_val_80.jsonl \
    --answer-list \
    /path/to/answer-file-1.jsonl \
    /path/to/answer-file-2.jsonl \
    --rule table/rule.json \
    --output /path/to/review.json
  1. Summarize the evaluation results
python summarize_gpt_review.py

ScienceQA

Prepare Data

  1. Please see ScienceQA repo for setting up the dataset.
  2. Generate ScienceQA dataset for LLaVA conversation-style format.
python scripts/convert_sqa_to_llava \
    convert_to_llava \
    --base-dir /path/to/ScienceQA/data/scienceqa \
    --split {train,val,minival,test,minitest}

Evaluation

  1. Download our pretrained LLaVA-13B (delta) weights for ScienceQA dataset here. Convert the delta weights to actual weights.
python -m llava.model.apply_delta \
    --base /path/to/llama-13b \
    --target /path/to/LLaVA-13b-v0-science_qa \
    --delta liuhaotian/LLaVA-13b-delta-v0-science_qa
  1. [Option 1] Multiple-GPU inference You may evaluate this with multiple GPUs, and concatenate the generated jsonl files. Please refer to our script for batch evaluation and results gathering.

  2. [Option 2] Single-GPU inference

(a) Generate LLaVA responses on ScienceQA dataset

python -m llava.eval.model_vqa_science \
    --model-name /path/to/LLaVA-13b-v0-science_qa \
    --question-file /path/to/ScienceQA/data/scienceqa/llava_test.json \
    --image-folder /path/to/ScienceQA/data/scienceqa/images/test \
    --answers-file vqa/results/ScienceQA/test_llava-13b.jsonl \
    --answer-prompter \
    --conv-mode simple

(b) Evaluate the generated responses

python eval_science_qa.py \
    --base-dir /path/to/ScienceQA/data/scienceqa \
    --result-file vqa/results/ScienceQA/test_llava-13b.jsonl \
    --output-file vqa/results/ScienceQA/test_llava-13b_output.json \
    --output-result vqa/results/ScienceQA/test_llava-13b_result.json

For reference, we attach our prediction file test_llava-13b_result.json here for comparison when reproducing our results, as well as for further analysis in detail.

Fine-tuning

Data

The current version of LLaVA is fine-tuned from a Vicuna-13B model. We use approximately 600K filtered CC3M samples in feature-alignment pretraining and 150K GPT-generated multimodal instruction-following samples in finetuning. For a detailed description of the data generation pipeline, please see our paper.

We are working on a more capable model that is pretrained with the data at a larger scale. Stay tuned!

We release all three types of multimodal instruction-following data. The use of these data is subject to OpenAI TOS.

Code and Hyperparameters

We fine-tune the model using the code from FastChat, with a similar set of hyperparameters as Vicuna. The hyperparameters used in both pretraining and finetuning are provided below.

  1. Pretraining
| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | --- | --- | --- | --- | --- |
| LLaVA-13B | 128 | 2e-3 | 1 | 2048 | 0 |
  1. Finetuning
| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | --- | --- | --- | --- | --- |
| LLaVA-13B | 32 | 2e-5 | 3 | 2048 | 0 |

Fine-tuning with Local GPUs

LLaVA is trained on 8 A100 GPUs with 80GB memory with the following code. To train on fewer GPUs, you can reduce the per_device_train_batch_size and increase the gradient_accumulation_steps accordingly to keep the global batch size the same.

  1. Pretraining
Pretrain: LLaVA-13B, 8x A100 (80G). Time: ~4 hours.
torchrun --nnodes=1 --nproc_per_node=8 --master_port=25001 \
    llava/train/train_mem.py \
    --model_name_or_path ./checkpoints/llama-vicuna-13b \
    --data_path /path/to/cc3m_595k.json \
    --image_folder /path/to/cc3m_595k \
    --vision_tower openai/clip-vit-large-patch14 \
    --tune_mm_mlp_adapter True \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end \
    --bf16 True \
    --output_dir ./checkpoints/llava-13b-pretrain \
    --num_train_epochs 1 \
    --per_device_train_batch_size 16 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 1 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 2400 \
    --save_total_limit 1 \
    --learning_rate 2e-3 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --lazy_preprocess True \
    --report_to wandb

You may run this with a single A100 GPU with the following code. Please note that the per_device_train_batch_size * gradient_accumulation_steps should be equal to 128 to keep the global batch size the same.

Pretrain: LLaVA-13B, 1x A100 (80G). Time: ~33 hours.
python llava/train/train_mem.py \
    --model_name_or_path ./checkpoints/llama-vicuna-13b \
    --data_path /path/to/cc3m_595k.json \
    --image_folder /path/to/cc3m_595k \
    --vision_tower openai/clip-vit-large-patch14 \
    --tune_mm_mlp_adapter True \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end \
    --bf16 True \
    --output_dir ./checkpoints/llava-13b-pretrain \
    --num_train_epochs 1 \
    --per_device_train_batch_size 16 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 2400 \
    --save_total_limit 1 \
    --learning_rate 2e-3 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --lazy_preprocess True \
    --report_to wandb
Pretrain: LLaVA-7B, 1x A100 (80G/40G). Time: ~19 hours.
python llava/train/train_mem.py \
    --model_name_or_path ./checkpoints/llama-vicuna-7b \
    --data_path /path/to/cc3m_595k.json \
    --image_folder /path/to/cc3m_595k \
    --vision_tower openai/clip-vit-large-patch14 \
    --tune_mm_mlp_adapter True \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end \
    --bf16 True \
    --output_dir ./checkpoints/llava-7b-pretrain \
    --num_train_epochs 1 \
    --per_device_train_batch_size 16 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 2400 \
    --save_total_limit 1 \
    --learning_rate 2e-3 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --lazy_preprocess True \
    --report_to wandb

Experimental: use FSDP to save memory in pretraining

Learn more

Currently, PyTorch and Huggingface do not yet have stable/native support for FSDP with parameter-efficient tuning (where part of the parameters are frozen). However, the feature is being developed in PyTorch nightly and shall ship in the next release. We provide an experimental script to enable FSDP in pretraining. To use it, please create a new environment (to be safe), install PyTorch nightly (MUST), and install the LLaVA package following the instructions below.

  1. Prepare environment
conda create -n llava_beta python=3.10 -y
conda activate llava_beta
pip install --upgrade pip
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu117
pip install -e .
pip install einops ninja
pip install flash-attn
  1. Run pretraining with FSDP (experimental)
torchrun --nnodes=1 --nproc_per_node=8 --master_port=25001 \
    llava/train/train_mem.py \
    --model_name_or_path ./checkpoints/llama-vicuna-13b \
    --data_path /path/to/cc3m_595k.json \
    --image_folder /path/to/cc3m_595k \
    --vision_tower openai/clip-vit-large-patch14 \
    --tune_mm_mlp_adapter True \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end \
    --bf16 True \
    --output_dir ./checkpoints/llava-13b-pretrain_fsdp \
    --num_train_epochs 1 \
    --per_device_train_batch_size 16 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 1 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 2400 \
    --save_total_limit 1 \
    --learning_rate 2e-3 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --fsdp "full_shard auto_wrap" \
    --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --lazy_preprocess True \
    --report_to wandb
  1. Extract projector features
python scripts/extract_mm_projector.py \
  --model_name_or_path ./checkpoints/llava-13b-pretrain \
  --output ./checkpoints/mm_projector/llava-13b-pretrain.bin
  1. Finetuning
torchrun --nnodes=1 --nproc_per_node=8 --master_port=25001 \
    llava/train/train_mem.py \
    --model_name_or_path /path/to/llama-vicuna-13b \
    --data_path /path/to/llava_instruct_150k.json \
    --image_folder /Data/haotian/coco/train2014 \
    --vision_tower openai/clip-vit-large-patch14 \
    --pretrain_mm_mlp_adapter ./checkpoints/mm_projector/llava-13b-pretrain.bin \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end True \
    --bf16 True \
    --output_dir ./checkpoints \
    --num_train_epochs 3 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 1 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 5000 \
    --save_total_limit 3 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --fsdp "full_shard auto_wrap" \
    --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --lazy_preprocess True \
    --report_to wandb

Train LLaVA Lightning

LLaVA-Lightning can be trained on 8x A100 GPUs in just 3 hours, including both pretraining and finetuning. When using spot instances, it costs just ~$40. We are working on a SkyPilot tutorial to make spot-instance training even easier; stay tuned!

Please make sure to: (1) install or upgrade to the latest code base, and (2) pass the correct model version identifier v0/v1 to ensure the correct conversation template is loaded.

bash ./scripts/train_lightning.sh {v0,v1}

Hyperparameters

  1. Pretraining
| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | --- | --- | --- | --- | --- |
| LLaVA-Lightning-7B | 128 | 2e-3 | 1 | 2048 | 0 |
  1. Finetuning
| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | --- | --- | --- | --- | --- |
| LLaVA-Lightning-7B | 128 | 2e-5 | 1 | 2048 | 0 |

LLaVA-MPT-7b

Thanks to LLaVA-Lightning, we are able to train a checkpoint based on MPT-7b-Chat on 8x A100 GPUs in just 3 hours, including both pretraining and finetuning.

NOTE: This is a research preview of the LLaVA-Lightning based on MPT-7B-chat checkpoint. The usage of the model should comply with MPT-7B-chat license and agreements.

NOTE: Unlike other LLaVA models, this model should be used directly without delta weights conversion!

NOTE: You need to upgrade to our latest code base to use LLaVA-MPT-7b!

  1. Usage

You do not need to download our checkpoint, it will directly load from our Hugging Face model: liuhaotian/LLaVA-Lightning-MPT-7B-preview.

python -m llava.serve.controller --host 0.0.0.0 --port 10000
python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/LLaVA-Lightning-MPT-7B-preview
python -m llava.serve.gradio_web_server --controller http://localhost:10000
  1. Training

We use the same training dataset and hyperparameters as the other Lightning checkpoints.

bash ./scripts/train_lightning_mpt.sh

Fine-tuning on ScienceQA

NOTE: Because the ScienceQA experiments were done earlier, the current checkpoints are trained without <im_start> and <im_end> tokens. Checkpoints with these tokens will be updated later. Here we provide our training scripts for the current checkpoints.

1. Pretraining
torchrun --nnodes=1 --nproc_per_node=8 --master_port=25001 \
    llava/train/train_mem.py \
    --model_name_or_path ./checkpoints/llama-vicuna-13b \
    --data_path /path/to/cc3m_595k.json \
    --image_folder /path/to/cc3m_595k \
    --vision_tower openai/clip-vit-large-patch14 \
    --tune_mm_mlp_adapter True \
    --mm_vision_select_layer -2 \
    --bf16 True \
    --output_dir ./checkpoints/llava-13b-pretrain-no_im_start_end_token \
    --num_train_epochs 1 \
    --per_device_train_batch_size 16 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 1 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 2400 \
    --save_total_limit 1 \
    --learning_rate 2e-3 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --lazy_preprocess True \
    --report_to wandb
2. Extract projector features
python scripts/extract_mm_projector.py \
  --model_name_or_path ./checkpoints/llava-13b-pretrain-no_im_start_end_token \
  --output ./checkpoints/mm_projector/llava-13b-pretrain-no_im_start_end_token.bin
3. Finetuning

You may download our pretrained llava-13b-pretrain-no_im_start_end_token.bin here.

torchrun --nnodes=1 --nproc_per_node=8 --master_port=25001 \
    llava/train/train_mem.py \
    --model_name_or_path /path/to/llama-vicuna-13b \
    --data_path /path/to/scienceqa/llava_train_QCM-LEPA.json \
    --image_folder /path/to/scienceqa/images/train \
    --vision_tower openai/clip-vit-large-patch14 \
    --pretrain_mm_mlp_adapter ./checkpoints/mm_projector/llava-13b-pretrain-no_im_start_end_token.bin \
    --mm_vision_select_layer -2 \
    --bf16 True \
    --output_dir ./checkpoints/llava-13b-pretrain-no_im_start_end_token-finetune_scienceqa \
    --num_train_epochs 12 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 1 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 5000 \
    --save_total_limit 3 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --fsdp "full_shard auto_wrap" \
    --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --lazy_preprocess True \
    --report_to wandb

Acknowledgement

  • Vicuna: the codebase we built upon, and our base model Vicuna-13B that has the amazing language capabilities!

If you find LLaVA useful for your research and applications, please cite using this BibTeX:

@misc{liu2023llava,
      title={Visual Instruction Tuning}, 
      author={Liu, Haotian and Li, Chunyuan and Wu, Qingyang and Lee, Yong Jae},
      publisher={arXiv:2304.08485},
      year={2023},
}

Related Projects

For future project ideas, please check out:

License: Apache License 2.0

