dkumazaw / onecyclelr

One cycle policy learning rate scheduler in PyTorch


One cycle policy learning rate scheduler

A PyTorch implementation of the one cycle policy proposed in *Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates*.

Usage

The scheduler exposes an interface similar to other common PyTorch learning rate schedulers: construct it around an optimizer, then call `step()` once per training step.

import torch
from onecyclelr import OneCycleLR

# model, num_steps, epochs, and train_dataloader are placeholders
# for your own model, step budget, and data pipeline.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = OneCycleLR(optimizer, num_steps=num_steps, lr_range=(0.1, 1.))

for epoch in range(epochs):
    for step, X in enumerate(train_dataloader):
        train(...)  # one forward/backward/optimizer step
        scheduler.step()  # advance the schedule after each step
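To make the schedule concrete, here is a minimal sketch of the one cycle policy itself (a standalone illustration of the idea, not this package's internals): the learning rate ramps linearly from the low end of `lr_range` to the high end over roughly the first half of `num_steps`, then anneals back down. The function and parameter names below are hypothetical and merely mirror the constructor arguments.

```python
def one_cycle_lr(step, num_steps, lr_range=(0.1, 1.0)):
    """Return the learning rate at `step` under a one cycle policy (sketch)."""
    lr_min, lr_max = lr_range
    half = num_steps // 2
    if step <= half:
        # Warm-up phase: linear increase from lr_min to lr_max
        return lr_min + (lr_max - lr_min) * step / half
    # Annealing phase: linear decrease from lr_max back to lr_min
    return lr_max - (lr_max - lr_min) * (step - half) / (num_steps - half)
```

Plotting this function over `range(num_steps)` produces the characteristic triangular "one cycle" shape; variants of the policy (including the paper's) also cycle momentum inversely and may add a final low-lr tail.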

About


License: MIT


Languages

Language: Python 100.0%