OpenBMB / MiniCPM-V

MiniCPM-Llama3-V 2.5: A GPT-4V Level Multimodal LLM on Your Phone


[BUG] resampler frozen while LoRA finetuning

Fr0do opened this issue

Is there an existing issue / discussion for this?

  • I have searched the existing issues / discussions

Is there an existing answer for this in the FAQ?

  • I have searched the FAQ

Current Behavior

With the default config, the resampler is neither wrapped as a LoRA module nor unfrozen for full finetuning, so it receives no gradient updates. This seems fixable by adding modules_to_save=['resampler'] to the LoraConfig.
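A minimal sketch of the proposed fix, assuming the peft library's LoraConfig as used in the finetuning script (the target_modules pattern below is illustrative, not the exact one from finetune.py):

```python
# Sketch: keep the resampler fully trainable alongside the LoRA adapters.
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    # Illustrative regex for the LLM attention/MLP projections that get LoRA adapters.
    target_modules=r"llm\..*layers\.\d+\.self_attn\.(q_proj|k_proj|v_proj|o_proj)",
    # Proposed fix: save and train the resampler as a full (non-LoRA) module.
    modules_to_save=["resampler"],
    task_type="CAUSAL_LM",
)
```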

Expected Behavior

The resampler should be trainable (finetuned) during LoRA training.

Steps To Reproduce

Run finetune_lora.sh with the default configuration and inspect which parameters are trainable; see the snippet below.
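A hypothetical way to confirm the behavior, assuming `model` is the PEFT-wrapped model created in the finetuning script:

```python
# List the resampler parameters and whether they receive gradients.
# With the default LoRA config these are expected to print False (frozen).
for name, param in model.named_parameters():
    if "resampler" in name:
        print(name, param.requires_grad)
```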

Environment

requirements.txt

Anything else?

No response