ymcui / Chinese-LLaMA-Alpaca

Chinese LLaMA & Alpaca large language models + local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)

Home Page: https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki

LoRA fine-tuning: train loss decreases but eval loss does not change

ZoeyChen-lab opened this issue

Required checks before submitting

  • Make sure you are using the latest code from the repository (git pull); a number of issues have already been resolved and fixed.
  • Since the related dependencies are updated frequently, make sure you follow the relevant steps in the Wiki.
  • I have read the FAQ section, searched the existing issues for this problem, and found no similar issue or solution.
  • Third-party plugin issues: e.g. llama.cpp, text-generation-webui, LlamaChat, etc.; it is also recommended to look for solutions in the corresponding projects.
  • Model correctness check: be sure to verify the model against SHA256.md; with a wrong model, correct behavior and results cannot be guaranteed.

Issue type

Model training and fine-tuning

Base model

LLaMA-13B

Operating system

Linux

Detailed description of the problem

At the very start of the run, it reported that the loss had no gradient, so I added loss = loss.requires_grad_() in the model's forward; only then would training run.
However, during training the train loss decreases while the eval loss never changes. Has anyone run into a similar problem?
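
Forcing loss = loss.requires_grad_() usually masks the real problem rather than fixing it: if the loss has no gradient, typically no parameter in the computation graph requires gradients, i.e. the LoRA adapters were never attached, and the optimizer then updates nothing. A minimal diagnostic sketch, assuming a PEFT-wrapped model named model:

# Count trainable parameters; zero here means the "no gradient" error
# comes from the model setup, not from the loss itself.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable params: {trainable} / {total}")
model.print_trainable_parameters()  # PEFT's built-in version of the same check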

Dependencies (required for code-related issues)

peft 0.5.0
torch 2.1.0
torchaudio 0.11.0+cu113
torchvision 0.12.0+cu113
transformers 4.28.1
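
As an aside, these pins look mutually inconsistent: torchaudio 0.11.0+cu113 and torchvision 0.12.0+cu113 are the companion releases of torch 1.11, not torch 2.1.0. A quick way to surface such conflicts, assuming the environment is managed by pip:

pip check  # reports installed packages whose version requirements are not met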

Logs or screenshots

The following code was added to the original code:
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(peft_type="LORA", task_type="SEQ_2_SEQ_LM", r=8,
                         lora_alpha=32, target_modules=["q", "v"],
                         lora_dropout=0.01, inference_mode=False)
model = get_peft_model(model, lora_config)
# Freeze everything except the LoRA adapter weights.
for name, para in model.named_parameters():
    if "lora" in name:
        para.requires_grad_(True)
    else:
        para.requires_grad_(False)
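
Two settings in this config look wrong for a LLaMA model (an observation based on the Hugging Face implementation, not a confirmed fix): the attention projections there are named q_proj and v_proj, so target_modules=["q", "v"] (T5-style names) does not match them, and LLaMA is decoder-only, so task_type should be CAUSAL_LM rather than SEQ_2_SEQ_LM. A sketch under those assumptions:

from peft import LoraConfig, TaskType, get_peft_model

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,          # LLaMA is decoder-only
    r=8,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],   # module names in HF LLaMA
    lora_dropout=0.01,
    inference_mode=False,
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # should report a non-zero trainable count

With matching target modules, the manual requires_grad_ loop and the loss.requires_grad_() workaround should both become unnecessary, since get_peft_model already freezes the base model and leaves only the adapters trainable.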

=========================================================
"log_history": [
{
"epoch": 0.69,
"learning_rate": 5e-05,
"loss": 8.2967,
"step": 500
},
{
"epoch": 1.0,
"eval_loss": 6.520833492279053,
"eval_runtime": 6.7135,
"eval_samples_per_second": 1.341,
"eval_steps_per_second": 0.149,
"step": 720
},
{
"epoch": 1.39,
"learning_rate": 5e-05,
"loss": 8.2952,
"step": 1000
},
{
"epoch": 2.0,
"eval_loss": 6.520833492279053,
"eval_runtime": 8.9157,
"eval_samples_per_second": 1.009,
"eval_steps_per_second": 0.112,
"step": 1440
},
{
"epoch": 2.08,
"learning_rate": 5e-05,
"loss": 8.2847,
"step": 1500
},
{
"epoch": 2.78,
"learning_rate": 5e-05,
"loss": 8.298,
"step": 2000
},
{
"epoch": 3.0,
"eval_loss": 6.520833492279053,
"eval_runtime": 6.7706,
"eval_samples_per_second": 1.329,
"eval_steps_per_second": 0.148,
"step": 2160
}
]
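
Note that eval_loss is bit-identical across all three epochs (6.520833492279053) and the train loss barely moves around 8.29, which is consistent with the evaluated weights never actually changing. A quick sanity check, sketched on the assumption of a Hugging Face Trainer instance named trainer:

import torch

# Snapshot the LoRA weights, train, then check whether anything moved;
# completely unchanged weights would explain the constant eval_loss.
before = {n: p.detach().clone()
          for n, p in model.named_parameters() if "lora" in n}
trainer.train()
moved = [n for n, p in model.named_parameters()
         if "lora" in n and not torch.equal(before[n], p.detach())]
print(f"{len(moved)} LoRA tensors changed during training")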

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.

Closing the issue, since no updates observed. Feel free to re-open if you need any further assistance.