
learning-rates

In this project, I look at various gradient descent optimization methods, such as Adam, Adagrad, and RMSProp, and examine their impact on training loss and training time (a minimal comparison loop is sketched below). I also implement a cyclical learning rate policy; a sketch of the triangular schedule follows the reference list.
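The optimizer comparison can be illustrated with a short Keras loop. This is a minimal sketch, assuming TensorFlow/Keras and MNIST as a stand-in dataset; the notebook's actual model, data, and hyperparameters may differ.

```python
import time
import tensorflow as tf

# MNIST as an illustrative dataset, flattened for a small dense network.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

def build_model():
    # Re-created fresh for each optimizer so every run starts
    # from a comparable (random) initialization.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(784,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

# Compare training loss and wall-clock training time per optimizer.
for name in ["adam", "adagrad", "rmsprop"]:
    model = build_model()
    model.compile(optimizer=name,
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    start = time.time()
    history = model.fit(x_train, y_train, epochs=5, batch_size=128, verbose=0)
    elapsed = time.time() - start
    print(f"{name}: final loss {history.history['loss'][-1]:.4f}, "
          f"training time {elapsed:.1f}s")
```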

Resources and Citations:

  1. Sebastian Ruder, "An Overview of Gradient Descent Optimization Algorithms." https://ruder.io/optimizing-gradient-descent/

  2. Leslie N. Smith, "Cyclical Learning Rates for Training Neural Networks." https://arxiv.org/abs/1506.01186

  3. "Keras Learning Rate Finder" (cyclical learning rate policy in Keras), PyImageSearch: https://www.pyimagesearch.com/2019/08/05/keras-learning-rate-finder/
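The triangular cyclical learning rate policy from reference 2 can be sketched as a Keras callback that updates the rate before every batch. This is a minimal illustration, not the notebook's exact implementation; base_lr, max_lr, and step_size are placeholder values.

```python
import numpy as np
import tensorflow as tf

def triangular_clr(iteration, base_lr=1e-4, max_lr=1e-2, step_size=2000):
    # step_size is the number of iterations in a half cycle: the rate
    # climbs linearly from base_lr to max_lr, then back down again.
    cycle = np.floor(1 + iteration / (2 * step_size))
    x = np.abs(iteration / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)

class CyclicalLR(tf.keras.callbacks.Callback):
    """Applies the triangular schedule to the optimizer, once per batch."""

    def __init__(self):
        super().__init__()
        self.iteration = 0

    def on_train_batch_begin(self, batch, logs=None):
        # Set the optimizer's learning rate for this batch.
        self.model.optimizer.learning_rate = triangular_clr(self.iteration)
        self.iteration += 1
```

Passing an instance via model.fit(..., callbacks=[CyclicalLR()]) then cycles the rate between base_lr and max_lr every 2 * step_size iterations.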
