FranxYao / Long-Context-Data-Engineering

Implementation of the paper Data Engineering for Scaling Language Models to 128K Context


When did you perform dynamic-NTK?

Liu-yuliang opened this issue

Hi, I noticed you used dynamic NTK in llama-7b-80k. I'm curious when you applied it: before or after the training phase?
Thank you for your reply.

Before training. Also note that this approach is ultimately equivalent to modifying the base of RoPE, and my take is that as long as you make the RoPE base large enough for context longer than 128K and then continue training the model, you are good to go (i.e., it does not matter whether you use linear / NTK / any other RoPE modification).
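
For concreteness, here is a minimal sketch (not the repo's actual training code) of what "enlarge the RoPE base, then continue training" can look like with Hugging Face transformers, which exposes the RoPE base as `rope_theta` on `LlamaConfig`. The base value of 5e6 and the checkpoint name are illustrative assumptions, not values from the paper.

```python
# Minimal sketch, assuming Hugging Face transformers. Enlarge the RoPE base
# (rope_theta) and the position window before continued training; the exact
# base value (5e6 here) is an illustrative choice, not the paper's setting.
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("meta-llama/Llama-2-7b-hf")
config.rope_theta = 5_000_000.0          # default is 10_000; raise it for long context
config.max_position_embeddings = 131072  # target a 128K context window

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", config=config
)
# ... then continue training on long-sequence data as usual.
```

Because the base change is baked into the config before the continued-training phase, no inference-time dynamic rescaling is needed afterwards.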