Finetuning a Pretrained Model Using MuP
zanussbaum opened this issue · comments
Zach Nussbaum commented
Somewhat of a naive question, but say we have pretrained a model and now want to fine-tune it on a downstream task. Is there any reason we shouldn't replace the muP layers with the equivalent torch layers? I imagine we don't need muP here, but I want to make sure that replacing them doesn't break anything.
Greg Yang commented
If the model is already pretrained, you can replace torch layers with muP layers so that you can use the muP optimizers (which scale the per-layer learning rate using shape information), as long as you keep the model's forward pass invariant when you switch out the layers.
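To make "scale per-layer lr with shape info" concrete, here is a minimal sketch in plain PyTorch of the idea behind muP's optimizers: matrix-like parameters get their Adam learning rate scaled by base fan-in over actual fan-in. The model, `base_lr`, and `base_fan_in` are illustrative assumptions; the exact scaling rules live in the `mup` package (e.g. `mup.optim.MuAdam`), not in this snippet.

```python
import torch
import torch.nn as nn

# Toy model standing in for a pretrained network (hypothetical shapes).
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 10))

base_lr = 1e-3      # lr tuned at the base width (assumed)
base_fan_in = 16    # fan-in of the corresponding layer in the base model (assumed)

# Build per-parameter groups: matrix-like params get lr scaled by
# base_fan_in / fan_in, in the spirit of muP; vectors keep the base lr.
param_groups = []
for name, p in model.named_parameters():
    if p.ndim >= 2:
        fan_in = p.shape[1]
        param_groups.append({"params": [p], "lr": base_lr * base_fan_in / fan_in})
    else:
        param_groups.append({"params": [p], "lr": base_lr})

opt = torch.optim.Adam(param_groups)
```

In the real library, `mup.set_base_shapes` records the shape information and the muP optimizers apply the correct scaling automatically; the loop above only illustrates why the optimizer needs shape info per layer.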
Zach Nussbaum commented
Sorry, I wasn't clear! If we pretrained using muP, we should replace the Readout layers with normal torch layers when fine-tuning, correct?
Greg Yang commented
I think this is up to you. Replacing muP layers with torch layers can make it easier to apply established hyperparameters for fine-tuning. On the other hand, keeping the muP layers can open up better hyperparameter choices for fine-tuning as well.
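If you do decide to swap the readout for a plain torch layer, the key constraint from above is keeping the forward pass invariant. A muP-style readout rescales its input by a scalar before the linear map, so that scalar can be folded into the weights. The `ScaledReadout` class below is a stand-in mirroring that behavior, not the actual `mup.MuReadout` implementation; the fold-in logic is the part that matters.

```python
import torch
import torch.nn as nn

class ScaledReadout(nn.Linear):
    """Stand-in for a muP-style readout: scales the input by
    output_mult / width_mult before the linear map (hypothetical class)."""
    def __init__(self, in_features, out_features, output_mult=1.0, width_mult=1.0):
        super().__init__(in_features, out_features)
        self.scale = output_mult / width_mult

    def forward(self, x):
        return super().forward(self.scale * x)

def to_plain_linear(readout: ScaledReadout) -> nn.Linear:
    """Fold the readout's scalar into the weights so a plain nn.Linear
    reproduces the same forward pass. The bias is added after the linear
    map, so it is copied unchanged."""
    linear = nn.Linear(readout.in_features, readout.out_features)
    with torch.no_grad():
        linear.weight.copy_(readout.weight * readout.scale)
        linear.bias.copy_(readout.bias)
    return linear

# Sanity check: the two layers agree on random input.
torch.manual_seed(0)
readout = ScaledReadout(64, 10, output_mult=2.0, width_mult=4.0)
linear = to_plain_linear(readout)
x = torch.randn(3, 64)
assert torch.allclose(readout(x), linear(x), atol=1e-6)
```

Checking the swapped model's outputs against the original on a few batches, as in the sanity check above, is a cheap way to confirm the forward pass really is invariant before fine-tuning.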