haotian-liu / LLaVA

[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.

Home Page: https://llava.hliu.cc

[Question] Fine-tuning with LLaVA-1.5 fails with a tensor size mismatch

hellangleZ opened this issue · comments

Question

[2024-04-29 06:52:01,294] [INFO] [partition_parameters.py:345:__exit__] finished initializing model - num_params = 295, num_elems = 6.76B
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 3.14it/s]
Some weights of LlavaLlamaForCausalLM were not initialized from the model checkpoint at /aml/llama2chat and are newly initialized: ['model.mm_projector.0.bias', 'model.mm_projector.0.weight', 'model.mm_projector.2.bias', 'model.mm_projector.2.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
/aml/llava/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.__get__(instance, owner)()
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:03<00:00, 1.80s/it]
Traceback (most recent call last):
File "/aml/LLaVA-main/llava/train/train_mem.py", line 5, in
train(attn_implementation="flash_attention_2")
File "/aml/LLaVA-main/llava/train/train.py", line 827, in train
model = LlavaLlamaForCausalLM.from_pretrained(
File "/aml/llava/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3850, in from_pretrained
) = cls._load_pretrained_model(
File "/aml/llava/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4335, in _load_pretrained_model
raise RuntimeError(f"Error(s) in loading state_dict for {model.class.name}:\n\t{error_msg}")
RuntimeError: Error(s) in loading state_dict for LlavaLlamaForCausalLM:
size mismatch for model.embed_tokens.weight: copying a param with shape torch.Size([32000, 4096]) from checkpoint, the shape in current model is torch.Size([32001, 4096]).
size mismatch for lm_head.weight: copying a param with shape torch.Size([32000, 4096]) from checkpoint, the shape in current model is torch.Size([32001, 4096]).
You may consider adding ignore_mismatched_sizes=True in the model from_pretrained method.

[2024-04-29 06:52:06,327] [INFO] [launch.py:319:sigkill_handler] Killing subprocess 48707
[2024-04-29 06:52:06,327] [INFO] [launch.py:319:sigkill_handler] Killing subprocess 48708
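
For context, the two shapes in the error differ by exactly one row: the checkpoint at /aml/llama2chat stores a 32000 x 4096 embedding matrix, while the model being constructed expects 32001 x 4096. Below is a minimal sketch of how that extra row typically appears, assuming it corresponds to a pad token added on top of the base 32000-token LLaMA-2 vocabulary; the "[PAD]" string and the resize call are illustrative, not the exact LLaVA train.py code.

```python
# Minimal sketch, not the actual LLaVA training code: assumes the 32001st row
# comes from a pad token added on top of the base 32000-token LLaMA-2 vocab.
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "/aml/llama2chat"  # base checkpoint path taken from the log above

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)  # embed_tokens: 32000 x 4096

# Adding a pad token (the "[PAD]" name here is an assumption) grows the
# vocabulary to 32001 entries ...
num_added = tokenizer.add_special_tokens({"pad_token": "[PAD]"})

# ... so embed_tokens and lm_head must be resized to match. If a config with
# vocab_size=32001 is later loaded against weights that still have 32000 rows,
# from_pretrained raises exactly the size-mismatch error shown above.
if num_added > 0:
    model.resize_token_embeddings(len(tokenizer))  # now 32001 x 4096
```

Note that passing ignore_mismatched_sizes=True, as the error hint suggests, would only reinitialize the two mismatched matrices randomly rather than reconcile the checkpoint with the tokenizer, so the vocabulary sizes still need to agree before fine-tuning.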