OpenGVLab / LLaMA-Adapter

[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters

What dataset is used during pretraining of llama_adapter_v2_multimodal7b?

yabuke opened this issue

Could you tell us which dataset was used to train the parameters of the visual blocks, and how large it is?