
Automatic learning-rate scheduler


AutoLRS

This is the PyTorch implementation of the paper *AutoLRS: Automatic Learning-Rate Schedule by Bayesian Optimization on the Fly*, published at ICLR 2021.

A TensorFlow version will appear in this repo later.

What is AutoLRS?

Finding a good learning-rate schedule for a DNN model is non-trivial. Can the learning rate (LR) be tuned automatically over the course of training, without human involvement? AutoLRS treats the training and validation losses as black-box functions of the LR and uses Bayesian optimization (BO) to search for the best LR for each training stage. AutoLRS does not depend on a pre-defined LR schedule, dataset, or task, and is compatible with almost all optimizers. The LR schedules auto-generated by AutoLRS achieve speedups over highly hand-tuned LR schedules for several state-of-the-art DNNs, including ResNet-50, Transformer, and BERT.
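As a rough illustration of the idea (not the paper's actual algorithm), the sketch below treats the loss as a black-box function of the LR and picks the best candidate LR for a training stage by sampling each candidate a few times. The quadratic proxy loss and the candidate grid are made up for this example; real BO would choose where to sample adaptively instead of sweeping a fixed grid:

```python
import math
import random

def proxy_loss(lr):
    # Hypothetical black-box: loss as an unknown function of the LR.
    # Here, a noisy curve whose minimum sits near lr = 0.1.
    return (math.log10(lr) + 1.0) ** 2 + random.uniform(0.0, 0.01)

def search_stage_lr(candidates, trials_per_lr=3):
    # Evaluate each candidate LR a few times (a crude stand-in for
    # BO's acquisition-driven sampling) and keep the best on average.
    best_lr, best_loss = None, float("inf")
    for lr in candidates:
        avg = sum(proxy_loss(lr) for _ in range(trials_per_lr)) / trials_per_lr
        if avg < best_loss:
            best_lr, best_loss = lr, avg
    return best_lr

random.seed(0)
stage_lr = search_stage_lr([1e-3, 1e-2, 1e-1, 1.0])
```

In AutoLRS the selected LR is then used for the next training stage, after which the search repeats, so the schedule adapts as the loss landscape changes during training.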

Setup

$ pip install --user -r requirements.txt

How to use AutoLRS for your work?

autolrs_server.py is the brain of AutoLRS; it implements the LR-search algorithms, including BO and the exponential forecasting model.
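As a loose sketch of what a loss-forecasting model can look like (the exact model in autolrs_server.py may differ), the snippet below fits a simple exponential decay, loss(t) ≈ a·exp(−b·t), to a few observed losses via a log-linear least-squares fit and extrapolates it to a future training step. All numbers are illustrative:

```python
import math

def fit_exponential(steps, losses):
    # Log-linear least squares: log(loss) = log(a) - b * t.
    n = len(steps)
    ys = [math.log(l) for l in losses]
    mean_x = sum(steps) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(steps, ys))
    var = sum((x - mean_x) ** 2 for x in steps)
    b = -cov / var
    a = math.exp(mean_y + b * mean_x)
    return a, b

def forecast(a, b, t):
    # Extrapolate the fitted curve to a future training step t.
    return a * math.exp(-b * t)

# Observed (step, loss) pairs from a short probe run (made-up data
# generated from an exact exponential, so the fit recovers it).
steps = [10, 20, 30, 40]
losses = [2.0 * math.exp(-0.05 * t) for t in steps]
a, b = fit_exponential(steps, losses)
pred = forecast(a, b, 100)
```

Forecasting like this lets the server estimate the long-run effect of a candidate LR from a short probe run instead of training to completion with each candidate.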

autolrs_callback.py implements a callback that you can plug into your PyTorch training loop. The callback receives commands from the server over a socket and, accordingly, adjusts the learning rate and saves/restores model parameters and optimizer states.
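To illustrate the command pattern the callback follows, here is a hypothetical, self-contained toy version. The class name, command names, and the dict standing in for a PyTorch optimizer are all invented for this sketch; see autolrs_callback.py for the real interface:

```python
import copy

class ToyLRCallback:
    # Hypothetical stand-in for the AutoLRS callback: it reacts to
    # server commands by adjusting the LR or checkpointing state.
    # The command names here are invented for illustration.
    def __init__(self, optimizer):
        self.optimizer = optimizer
        self._checkpoint = None

    def on_command(self, command, value=None):
        if command == "set_lr":
            for group in self.optimizer["param_groups"]:
                group["lr"] = value
        elif command == "save":
            self._checkpoint = copy.deepcopy(self.optimizer)
        elif command == "restore":
            self.optimizer.update(copy.deepcopy(self._checkpoint))

# A plain dict mimicking the shape of a PyTorch optimizer's state.
optimizer = {"param_groups": [{"lr": 0.1}]}
cb = ToyLRCallback(optimizer)
cb.on_command("save")                # checkpoint before probing an LR
cb.on_command("set_lr", 0.01)        # try a candidate LR
lr_after_set = optimizer["param_groups"][0]["lr"]
cb.on_command("restore")             # roll back after the probe run
lr_after_restore = optimizer["param_groups"][0]["lr"]
```

Save/restore matters because AutoLRS probes candidate LRs on short runs: the model and optimizer must be rolled back before each probe so the candidates are compared from the same starting point.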

Example

We provide an example of using AutoLRS to search for LR schedules for various DNNs on CIFAR-10. The models are imported from kuangliu's great and simple pytorch-cifar repository.

Prerequisites: Python 3.6+, PyTorch 1.0+

Run the example

$ bash run.sh

Contact

You can contact us at yuchenj@cs.washington.edu. We would love to hear your questions and feedback!


License: MIT

