Have you tried combining SPIN, SFT, and DPO?
penolove opened this issue
This is an awesome and very interesting piece of work.
Have you ever tried any of the following:
- training with SPIN only
- training with SFT + SPIN + DPO
- mixing SPIN and DPO, i.e. taking the union of the DPO pairs and the SPIN pairs in each iteration
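To make the third variant concrete, here is a minimal sketch of what I mean by the union of pairs per iteration. All function and field names are illustrative (not from the SPIN codebase); the only assumption is that SPIN pairs use the ground truth as "chosen" and the previous checkpoint's own generation as "rejected", so they share the DPO pair schema:

```python
# Hypothetical sketch of the proposed SPIN + DPO mixture: per iteration,
# take the union of human-preference DPO pairs and SPIN pairs whose
# "rejected" response is generated by the previous checkpoint.

def make_spin_pairs(prompts, ground_truths, generate_fn):
    """Build SPIN-style pairs: chosen = ground truth,
    rejected = the previous checkpoint's own generation."""
    return [
        {"prompt": p, "chosen": gt, "rejected": generate_fn(p)}
        for p, gt in zip(prompts, ground_truths)
    ]

def mix_pairs(dpo_pairs, spin_pairs):
    """Union of the two pair sets; since both follow the DPO pair
    schema, a single preference-loss trainer can consume the mixture."""
    return dpo_pairs + spin_pairs

# Toy usage with a stand-in generator for the last checkpoint:
last_ckpt_generate = lambda p: p + " (model output)"
spin = make_spin_pairs(["Q1"], ["gold answer"], last_ckpt_generate)
dpo = [{"prompt": "Q2", "chosen": "good", "rejected": "bad"}]
mixed = mix_pairs(dpo, spin)  # 2 training pairs for this iteration
```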
I'm also wondering about the difference between SFT and SPIN. My high-level intuition from the loss is that SPIN not only pushes the model toward the ground truth, but also pushes it away from the mistakes made by the last checkpoint.
Is this a kind of regularization, or a more aggressive learning strategy?
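To spell out the intuition above, here is a minimal sketch of the two losses at the sequence level, assuming per-sequence log-probabilities are already computed. The logistic form and the `lam` scaling follow the DPO-style preference loss that SPIN builds on; `logp_syn` stands for the previous checkpoint's own generation y', and all names are illustrative:

```python
import math

def sft_loss(logp_gt):
    # SFT: maximize the likelihood of the ground truth only.
    return -logp_gt

def spin_loss(logp_gt, logp_syn, ref_logp_gt, ref_logp_syn, lam=0.1):
    # SPIN (DPO-style logistic form): raise the ground truth's likelihood
    # relative to the last checkpoint, and lower the likelihood of the
    # last checkpoint's own generation y' (the "rejected" side).
    margin = (logp_gt - ref_logp_gt) - (logp_syn - ref_logp_syn)
    return -math.log(1.0 / (1.0 + math.exp(-lam * margin)))

# Before any update, the model equals the reference, so margin = 0
# and the loss starts at log(2); the gradient then pushes logp_gt up
# and logp_syn down simultaneously, whereas SFT only pushes logp_gt up.
```

So compared to SFT, the extra `-logp_syn` direction is exactly the "leave the last checkpoint's mistakes" term I was trying to describe.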
Very impressive work! If you could share your thoughts with us, it would be a huge benefit!