haotian-liu / LLaVA

[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.

Home Page: https://llava.hliu.cc

[Question] Why do I get empty output when I test my LoRA fine-tuned model?

wuwu-C opened this issue · comments

Question

1. I used `finetune_lora` to fine-tune the model for 3 epochs, and I modified the save code so that `non_lora_trainable.bin` is saved after every epoch.
2. I merged the LoRA weights for every epoch.
3. I tested with `model_vqa`, but the output tensor is null.
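For reference, the per-epoch merge step (step 2) can be done with the repo's merge script; the checkpoint paths below are placeholders for your own directories:

```shell
# Merge the LoRA adapter from one epoch back into the base model.
# Paths are placeholders; --model-base must be the same base model
# used for fine-tuning.
python scripts/merge_lora_weights.py \
    --model-path ./checkpoints/llava-lora-epoch1 \
    --model-base lmsys/vicuna-13b-v1.5 \
    --save-model-path ./checkpoints/llava-merged-epoch1
```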

Are your projector weights changing after every epoch?
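One quick way to check this is to load two per-epoch `non_lora_trainable.bin` files with `torch.load` and compare the projector tensors. A minimal sketch, assuming the projector keys contain `mm_projector` (as in LLaVA's state dict) and that `projector_changed` is a hypothetical helper, not part of the repo:

```python
import torch

def projector_changed(state_a, state_b, key_prefix="mm_projector"):
    """Return True if any projector tensor differs between two
    non-LoRA-trainable state dicts (hypothetical helper)."""
    keys = [k for k in state_a if key_prefix in k]
    if not keys:
        # No projector weights found -- the save hook may not be
        # writing them at all.
        return False
    return any(not torch.equal(state_a[k], state_b[k]) for k in keys)

# Usage sketch: load two per-epoch checkpoints and compare.
# epoch1 = torch.load("epoch1/non_lora_trainable.bin", map_location="cpu")
# epoch2 = torch.load("epoch2/non_lora_trainable.bin", map_location="cpu")
# print(projector_changed(epoch1, epoch2))
```

If this returns `False` across epochs, the projector is not being trained (or not being saved), which would explain degenerate output after merging.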