shreyansh26 / ML-Optimizers-JAX

Toy implementations of some popular ML optimizers using Python/JAX

ML Optimizers from scratch using JAX

Implementations of some popular optimizers from scratch for a simple model, i.e., linear regression on a dataset with 5 features. The goal of this project was to understand how these optimizers work under the hood and to write toy implementations myself. I also use a bit of JAX magic to differentiate the loss function w.r.t. the weights and the bias without explicitly writing their derivatives as separate functions. This makes it easy to generalize the notebook to other loss functions as well.
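
The snippet below is not the notebook's exact code, just a minimal sketch of the idea: `jax.grad` computes the gradients of an assumed MSE loss w.r.t. the weights and bias of a linear model, so no hand-written derivative functions are needed. The function and variable names are illustrative.

```python
import jax.numpy as jnp
from jax import grad

def predict(w, b, X):
    # Linear model: y_hat = X @ w + b
    return jnp.dot(X, w) + b

def mse_loss(w, b, X, y):
    # Mean squared error between predictions and targets
    return jnp.mean((predict(w, b, X) - y) ** 2)

# argnums=(0, 1) asks JAX for the gradients w.r.t. both w and b
grad_fn = grad(mse_loss, argnums=(0, 1))

# Toy shapes only, for illustration: 10 samples, 5 features
X = jnp.ones((10, 5))
y = jnp.ones((10,))
w = jnp.zeros((5,))
b = 0.0

dw, db = grad_fn(w, b, X, y)  # gradients w.r.t. weights and bias
```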

Kaggle | Open In Colab

The optimizers I have implemented are listed below (a minimal sketch of the momentum variant follows the list) -

  • Batch Gradient Descent
  • Batch Gradient Descent + Momentum
  • Nesterov Accelerated Momentum
  • Adagrad
  • RMSprop
  • Adam
  • Adamax
  • Nadam
  • Adabelief
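
To give a feel for how these updates look in JAX, here is a minimal sketch of batch gradient descent with momentum, assuming a loss function like the `mse_loss` above. It is not the notebook's exact implementation; the hyperparameters and the exponential-moving-average form of the velocity update are illustrative choices.

```python
import jax.numpy as jnp
from jax import grad

def sgd_momentum(w, b, X, y, loss_fn, lr=0.01, beta=0.9, epochs=100):
    # Velocity terms accumulate an exponentially decaying average of past gradients
    # (one common formulation of momentum).
    vw = jnp.zeros_like(w)
    vb = 0.0
    grad_fn = grad(loss_fn, argnums=(0, 1))
    for _ in range(epochs):
        dw, db = grad_fn(w, b, X, y)
        vw = beta * vw + (1 - beta) * dw
        vb = beta * vb + (1 - beta) * db
        w = w - lr * vw
        b = b - lr * vb
    return w, b
```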

