adaptive-inertia-adai

The PyTorch implementation of Adaptive Inertia methods.

Adaptive Inertia Optimization is proposed in our work:

Adaptive Inertia: Disentangling the Effects of Adaptive Learning Rate and Momentum.

This work was accepted as a Long Oral paper (acceptance rate ~2%) at ICML 2022.

In this work, we design a novel adaptive optimization method named Adaptive Inertia (Adai), which uses parameter-wise inertia (the momentum hyperparameter as a vector) to accelerate saddle-point escaping and provably selects flat minima as well as SGD does. Adai thus combines the advantages of Adam and SGD: fast saddle-point escaping and favorable minima selection, respectively.

Our experiments demonstrate that Adai can significantly outperform SGD and existing Adam variants for various DNNs where flat minima are desired.
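
For intuition only, below is a toy heavy-ball step in which the momentum hyperparameter is a vector rather than a scalar. This is an illustrative sketch of the "parameter-wise inertia" idea, not the actual Adai update rule (Adai sets the per-parameter inertia adaptively; see adai_optim.py and the paper for the exact algorithm).

import torch

def toy_vector_momentum_step(param, grad, buf, beta_vec, lr=0.1):
    # beta_vec has the same shape as param: coordinate i keeps a fraction
    # beta_vec[i] of its old momentum, i.e. the inertia is parameter-wise.
    buf.mul_(beta_vec).add_((1.0 - beta_vec) * grad)
    param.sub_(lr * buf)

param = torch.zeros(4)
buf = torch.zeros(4)
beta_vec = torch.tensor([0.0, 0.5, 0.9, 0.99])  # a different inertia per coordinate
toy_vector_momentum_step(param, torch.ones(4), buf, beta_vec)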

The required environment is as below:

Python 3.7.3

PyTorch >= 1.4.0

Usage

You may use Adai like any standard PyTorch optimizer:

import adai_optim

optimizer = adai_optim.Adai(net.parameters(), lr=lr, betas=(0.1, 0.99), eps=1e-03, weight_decay=5e-4)
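
Below is a minimal, self-contained training-loop sketch. The toy model, data, and lr=0.1 are placeholders for illustration only; the adai_optim.Adai constructor and its arguments come from the snippet above.

import torch
import torch.nn as nn
import adai_optim

# Toy model and data, just to show the optimizer inside a standard training loop.
net = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
optimizer = adai_optim.Adai(net.parameters(), lr=0.1, betas=(0.1, 0.99), eps=1e-03, weight_decay=5e-4)

inputs = torch.randn(32, 10)
targets = torch.randint(0, 2, (32,))

for epoch in range(5):
    optimizer.zero_grad()
    loss = criterion(net(inputs), targets)
    loss.backward()
    optimizer.step()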

Hyperparameters

The recommended learning rate for Adai is equal to that of vanilla SGD, i.e. 10 times the learning rate used for SGD with Momentum (beta=0.9).

The recommended weight decay for Adai is equal to that of SGD and SGD with Momentum, usually 1e-4 or 5e-4 for CNNs.

AdaiW adopts decoupled weight decay instead of L2 regularization. Thus, the optimal weight decay of AdaiW depends on the choice of learning rate.

Recommended hyperparameters for Transformers are not available yet.

In principle, the optimal hyperparameters for Adai should be close to the optimal hyperparameters for SGD without Momentum, as illustrated below.
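
A hypothetical illustration of these rules of thumb (the numbers are placeholders, not tuned results): if an SGD-with-Momentum (beta=0.9) baseline uses lr=0.1 and weight_decay=5e-4 on a CNN, the guidance above suggests roughly 10 times that learning rate and the same weight decay for Adai.

# Hypothetical baseline: torch.optim.SGD(net.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
# Corresponding Adai setting suggested by the rules of thumb above:
optimizer = adai_optim.Adai(net.parameters(), lr=1.0, betas=(0.1, 0.99), eps=1e-03, weight_decay=5e-4)
# For AdaiW (decoupled weight decay), re-tune weight_decay together with the learning rate.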

Theoretical Comparison

|                  | SGD    | Adaptive Learning Rate | Adaptive Inertia |
|------------------|--------|------------------------|------------------|
| Saddle-Escaping  | Slow ✗ | Fast ✓                 | Fast ✓           |
| Minima Selection | Flat ✓ | Sharp ✗                | Flat ✓           |

Test performance

Test errors (%), reported as mean ± std:

| Dataset   | Model       | AdaiW      | Adai       | SGD M      | Adam       | AMSGrad    | AdamW      | AdaBound   | Padam      | Yogi       | RAdam      |
|-----------|-------------|------------|------------|------------|------------|------------|------------|------------|------------|------------|
| CIFAR-10  | ResNet18    | 4.59±0.16  | 4.74±0.14  | 5.01±0.03  | 6.53±0.03  | 6.16±0.18  | 5.08±0.07  | 5.65±0.08  | 5.12±0.04  | 5.87±0.12  | 6.01±0.10  |
| CIFAR-10  | VGG16       | 5.81±0.07  | 6.00±0.09  | 6.42±0.02  | 7.31±0.25  | 7.14±0.14  | 6.48±0.13  | 6.76±0.12  | 6.15±0.06  | 6.90±0.22  | 6.56±0.04  |
| CIFAR-100 | ResNet34    | 21.05±0.10 | 20.79±0.22 | 21.52±0.37 | 27.16±0.55 | 25.53±0.19 | 22.99±0.40 | 22.87±0.13 | 22.72±0.10 | 23.57±0.12 | 24.41±0.40 |
| CIFAR-100 | DenseNet121 | 19.44±0.21 | 19.59±0.38 | 19.81±0.33 | 25.11±0.15 | 24.43±0.09 | 21.55±0.14 | 22.69±0.15 | 21.10±0.23 | 22.15±0.36 | 22.27±0.22 |
| CIFAR-100 | GoogLeNet   | 20.50±0.25 | 20.55±0.32 | 21.21±0.29 | 26.12±0.33 | 25.53±0.17 | 21.29±0.17 | 23.18±0.31 | 21.82±0.17 | 24.24±0.16 | 22.23±0.15 |

Citing

If you use Adai or its variants in your work, please cite Adaptive Inertia: Disentangling the Effects of Adaptive Learning Rate and Momentum.

@InProceedings{xie2022adaptive,
  title     = {Adaptive Inertia: Disentangling the Effects of Adaptive Learning Rate and Momentum},
  author    = {Xie, Zeke and Wang, Xinrui and Zhang, Huishuai and Sato, Issei and Sugiyama, Masashi},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {24430--24459},
  year      = {2022},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research}
}


License: MIT License

