kozistr / pytorch_optimizer

optimizer & lr scheduler & loss function collections in PyTorch

Home Page: https://pytorch-optimizers.readthedocs.io/en/latest/

Versions of the optimizers that work with half-precision models

sjscotti opened this issue

Hi
I just discovered your repo and would like to try it for fine-tuning my ParlAI blenderbot2 model (see https://github.com/facebookresearch/ParlAI). However, I run the model in FP16 precision to make better use of my GPU.

ParlAI ships versions of a few optimizers that can handle FP16 models, and I have installed a couple of other optimizers that support FP16 by casting the state parameters and gradients to FP32 within the optimizer, computing the new state parameters at FP32 accuracy, and recasting them back to FP16 when updating the model. If your library had a version that did this automatically, it would greatly simplify its use with FP16 models.
Thanks!

P.S.
It looks like adabelief, radam, and diffrgrad already do something like this, but not in a consistent way. I've sketched the pattern I mean below.
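
For concreteness, here is roughly what that casting pattern looks like inside an optimizer's `step()`. This is a minimal sketch using a plain momentum-SGD update; the class name and update rule are just illustrative, not code from this repo or from ParlAI:

```python
import torch


class FP16FriendlySGD(torch.optim.Optimizer):
    """Sketch of the FP32-casting pattern around a momentum-SGD update."""

    def __init__(self, params, lr: float = 1e-3, momentum: float = 0.9):
        super().__init__(params, dict(lr=lr, momentum=momentum))

    @torch.no_grad()
    def step(self):
        for group in self.param_groups:
            for p in group['params']:
                if p.grad is None:
                    continue

                # cast the (possibly FP16) gradient and parameter up to FP32
                grad = p.grad.float()
                p_fp32 = p.float()

                state = self.state[p]
                if len(state) == 0:
                    # keep optimizer state in FP32 regardless of model dtype
                    state['momentum_buffer'] = torch.zeros_like(p_fp32)

                # do all of the update math at FP32 accuracy
                buf = state['momentum_buffer']
                buf.mul_(group['momentum']).add_(grad)
                p_fp32.add_(buf, alpha=-group['lr'])

                # recast the result back into the FP16 model parameter
                p.copy_(p_fp32)
```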

@sjscotti

First of all, thanks for your interest in this repo : )

I think it's a good suggestion to support FP16 through a wrapper that can simply be used.

It will take some time, but I'll work on it.
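
Roughly, the direction I have in mind is a generic wrapper that keeps FP32 master copies of the parameters, so any inner optimizer runs its math in FP32. This is just a sketch of the idea; the class name and interface here are placeholders, not the final design:

```python
import torch


class FP32MasterWrapper:
    """Sketch: hold FP32 master weights so the wrapped optimizer steps in FP32.

    Placeholder name and API, for illustration only.
    """

    def __init__(self, optimizer: torch.optim.Optimizer):
        self.optimizer = optimizer
        self.model_to_master = {}

        # replace each (possibly FP16) parameter in the inner optimizer's
        # param groups with an FP32 master copy, so the inner optimizer
        # creates and keeps all of its state in FP32
        for group in optimizer.param_groups:
            masters = []
            for p in group['params']:
                master = p.detach().clone().float().requires_grad_(True)
                self.model_to_master[p] = master
                masters.append(master)
            group['params'] = masters

    def zero_grad(self) -> None:
        for p in self.model_to_master:
            p.grad = None

    @torch.no_grad()
    def step(self) -> None:
        # 1) copy the FP16 gradients up to the FP32 masters
        for p, master in self.model_to_master.items():
            master.grad = None if p.grad is None else p.grad.float()

        # 2) run the real update entirely in FP32
        self.optimizer.step()

        # 3) copy the updated masters back into the FP16 model parameters
        for p, master in self.model_to_master.items():
            p.copy_(master)
```

Usage would then be a one-line change around any existing optimizer, e.g. `optimizer = FP32MasterWrapper(torch.optim.AdamW(model.parameters(), lr=1e-4))` after calling `model.half()`.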

Thanks again for your idea!

Best regards