haotian-liu / LLaVA

[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.

Home Page: https://llava.hliu.cc


[Question] Training completes, but only the projector weights are saved.

kouyakamada opened this issue

Question

We would like to build a VLM based on Mistral-7B-v0.3 using our own continual pre-training and SFT.
I followed the tutorial and ran pretrain.sh and finetune.sh (not LoRA), and training appeared to complete successfully. However, only three files were written to the output directory: config.json, mm_projector.bin, and trainer_state.json.
How can I complete the training so that the full model weights are saved?
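For context, the symptom described above (only config.json, mm_projector.bin, and trainer_state.json in the output directory) is what a projector-only save path produces: the pretraining stage trains just the multimodal projector, so only its weights are written, while the finetuning stage is expected to save the full model. Below is a minimal sketch of that kind of conditional save logic, not the repository's actual code; the flag name tune_mm_mlp_adapter and the helper save_checkpoint are assumptions for illustration.

```python
# Sketch only: illustrates why a run can end with just mm_projector.bin.
# The flag and helper names are assumptions, not LLaVA's actual implementation.
import os
import torch


def save_checkpoint(model, output_dir: str, tune_mm_mlp_adapter: bool) -> None:
    os.makedirs(output_dir, exist_ok=True)
    if tune_mm_mlp_adapter:
        # Pretraining-style run: only the projector was trained, so only the
        # projector weights are saved alongside config.json and trainer_state.json.
        projector_state = {
            name: param.detach().cpu()
            for name, param in model.state_dict().items()
            if "mm_projector" in name
        }
        torch.save(projector_state, os.path.join(output_dir, "mm_projector.bin"))
    else:
        # Finetuning-style run: the full model (language model + projector)
        # should be written out, e.g. via the Hugging Face save_pretrained API.
        model.save_pretrained(output_dir)
```

If a finetuning script is still running with the projector-only option enabled (for example because it was copied from the pretraining script), the output would look exactly like the one described in the question; checking that option and pointing the finetuning stage at the pretrained projector weights is a plausible first thing to verify, though the exact flags depend on the repository's scripts.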