xiaohongxiao / RAdam

On The Variance Of The Adaptive Learning Rate And Beyond

Home Page: https://arxiv.org/abs/1908.03265


RAdam

In this paper, we study why warmup is needed for Adam and identify that the adaptive learning rate has an undesirably large variance in the early stage of training.
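As a rough illustration of this finding, here is a minimal sketch (not code from this repo) of the variance rectification term defined in the paper (https://arxiv.org/abs/1908.03265): in the first few steps the approximated SMA length rho_t is too small for the variance to be tractable, and the rectification factor r_t only approaches 1 as training progresses.

```python
# Minimal sketch of the rectification term from the paper; beta2 = 0.999 is the usual Adam default.
import math

beta2 = 0.999
rho_inf = 2.0 / (1.0 - beta2) - 1.0  # maximum length of the approximated SMA

for t in [1, 10, 100, 1000, 10000]:
    beta2_t = beta2 ** t
    rho_t = rho_inf - 2.0 * t * beta2_t / (1.0 - beta2_t)
    if rho_t > 4.0:
        # variance of the adaptive learning rate is tractable: apply the rectification factor
        r_t = math.sqrt(((rho_t - 4) * (rho_t - 2) * rho_inf) /
                        ((rho_inf - 4) * (rho_inf - 2) * rho_t))
        print(f"t={t:6d}  rho_t={rho_t:8.2f}  r_t={r_t:.3f}")
    else:
        # earliest steps: variance is too large, so the adaptive term is not used
        print(f"t={t:6d}  rho_t={rho_t:8.2f}  rectification skipped")
```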

We are in an early-release beta. Expect some adventures and rough edges.

A detailed README is still in progress.

Usage guidance:

  1. First, directly replace Adam with RAdam without changing any other settings (if Adam works with a given configuration, RAdam is likely to work with it as well); a drop-in sketch follows this list. It is worth mentioning that, if you are using Adam with warmup, try RAdam with warmup first (instead of RAdam without warmup).
  2. Then further tune the hyper-parameters for better performance.
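A minimal drop-in sketch of step 1 in PyTorch, assuming the optimizer class from this repo is importable as `from radam import RAdam` (adjust the import to wherever radam.py lives in your project):

```python
import torch
import torch.nn as nn
from radam import RAdam  # assumes radam.py from this repo is on your path

model = nn.Linear(10, 2)

# Before: optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
# After:  keep the same hyper-parameters when first switching to RAdam.
optimizer = RAdam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))

criterion = nn.CrossEntropyLoss()
x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))

loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Keeping the Adam settings unchanged on the first try matches step 1 above; only move to step 2 and tune the hyper-parameters once the swap is confirmed to train.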


License: Apache License 2.0


Languages

Python 96.7%, Shell 3.3%