nlpxucan / WizardLM

LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath

OOM halfway through finetuning

yy9996 opened this issue

Training breaks down with an OOM error at epoch 1.57. Why does this happen, and how can I avoid it?
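
Not an answer from the maintainers, just a minimal sketch of memory-saving settings that are commonly checked when a finetune OOMs partway through training rather than at step 0 (for example, when an unusually long batch of sequences shows up mid-epoch). It assumes a Hugging Face `Trainer`-style run, not necessarily the exact WizardLM training script; `output_dir` and the specific values are illustrative.

```python
# Sketch only: generic OOM mitigations for a Trainer-based finetune.
# All paths and numbers below are placeholders, not project defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./finetune-out",      # illustrative output path
    per_device_train_batch_size=1,    # smaller micro-batch lowers peak activation memory
    gradient_accumulation_steps=16,   # keeps the effective batch size unchanged
    gradient_checkpointing=True,      # recompute activations in backward to save memory
    bf16=True,                        # half-precision activations and gradients
    group_by_length=True,             # batch similar-length samples to avoid sudden
                                      # memory spikes from long sequences mid-epoch
    save_total_limit=2,               # cap the number of stored checkpoints
    logging_steps=10,
)
```

If the run uses DeepSpeed or FSDP instead, the analogous levers are the micro-batch size, activation checkpointing, and offload settings in that config.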