Repositories under the adam-optimizer topic:
On the Variance of the Adaptive Learning Rate and Beyond
Deep learning library in plain NumPy.
This repository contains the results for the paper: "Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers"
CS F425 Deep Learning course at BITS Pilani (Goa Campus)
ADAS (Adaptive Step Size) is an optimizer that, unlike optimizers that merely normalize the gradient, fine-tunes the step size itself, aiming to make step-size scheduling obsolete and to achieve state-of-the-art training performance.
Lion and Adam optimization comparison (see the Lion update sketch after this list)
Classifying the Google Street View House Numbers (SVHN) dataset with a CNN
Reproducing the paper "PADAM: Closing The Generalization Gap of Adaptive Gradient Methods In Training Deep Neural Networks" for the ICLR 2019 Reproducibility Challenge
Toy implementations of some popular ML optimizers using Python/JAX
A collection of various gradient descent algorithms implemented in Python from scratch
This library provides a set of functionalities for different types of deep learning (and ML) algorithms in C
From linear regression towards neural networks...
A compressed adaptive optimizer for training large-scale deep learning models using PyTorch
Modified XGBoost implementation from scratch with NumPy using Adam and RMSProp optimizers.
The project aimed to implement a deep NN / RNN based solution in order to develop flexible methods that can adaptively fill in, backfill, and predict time series using a large number of heterogeneous training datasets.
[Python] [arXiv/cs] Paper "An Overview of Gradient Descent Optimization Algorithms" by Sebastian Ruder
Lookahead optimizer ("Lookahead Optimizer: k steps forward, 1 step back") for TensorFlow (see the Lookahead sketch after this list)
Implementation of the Adam optimization algorithm using NumPy (see the Adam sketch after this list)
📈Implementing the ADAM optimizer from the ground up with PyTorch and comparing its performance on six 3-D objective functions (each progressively more difficult to optimize) against SGD, AdaGrad, and RMSProp.
Learning about Haskell with Variational Autoencoders
A Ray Tracing-Inspired Approach to Neural Network Optimization
Short description for quick search
Improved Hypergradient optimizers for ML, providing better generalization and faster convergence.
Grams: Gradient Descent with Adaptive Momentum Scaling (ICLR 2025 Workshop)
Implementation of DDPG with NumPy only (without TensorFlow)
Nadir: Cutting-edge PyTorch optimizers for simplicity & composability! 🔥🚀💻
Plant disease detection using a convolutional neural network. The model predicts diseases for plants such as potato, tomato, and bell pepper, with more to be added in upcoming versions.
Implementing a contractive autoencoder to encode cloud images and using that encoding for multi-label image classification
A project I made to practice my newfound neural network knowledge - I used Python and NumPy to train a network to recognize MNIST images. Adam and mini-batch gradient descent are implemented.
MetaPerceptron: A Standardized Framework For Metaheuristic-Driven Multi-layer Perceptron Optimization
Implementations of different variants of gradient descent in Python using NumPy
A Siamese neural network is a class of neural network architectures that contain two or more identical subnetworks; 'identical' here means they share the same configuration, parameters, and weights (see the weight-sharing sketch after this list).
An Educational Framework Based on PyTorch for Deep Learning Education and Exploration
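For the Lion-versus-Adam comparison listed above, the essential difference is Lion's sign-based update with decoupled weight decay. Below is a minimal NumPy sketch of one Lion step, following the rule described in "Symbolic Discovery of Optimization Algorithms" (Chen et al., 2023); the function and argument names are illustrative and not taken from that repository.

```python
import numpy as np

def lion_step(theta, grad, m, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    # Interpolate momentum and gradient, then keep only the sign
    # (the update magnitude is controlled entirely by the learning rate).
    update = np.sign(beta1 * m + (1 - beta1) * grad)
    # Decoupled weight decay, applied directly to the parameters.
    theta = theta - lr * (update + wd * theta)
    # Momentum is refreshed after the parameter update.
    m = beta2 * m + (1 - beta2) * grad
    return theta, m
```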
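The Lookahead entry refers to the "k steps forward, 1 step back" scheme: an inner optimizer takes k fast steps, and the slow weights are then pulled part of the way toward the result. The sketch below wraps plain SGD in NumPy rather than the repository's TensorFlow code; `grad_fn` is an assumed callback that returns the gradient at the current parameters.

```python
import numpy as np

def lookahead_sgd(theta, grad_fn, inner_lr=0.1, alpha=0.5, k=5, outer_steps=100):
    slow = theta.astype(float).copy()
    for _ in range(outer_steps):
        fast = slow.copy()
        for _ in range(k):                 # k fast steps with the inner optimizer (plain SGD here)
            fast -= inner_lr * grad_fn(fast)
        slow += alpha * (fast - slow)      # one slow step: interpolate toward the fast weights
    return slow
```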
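Several entries above implement Adam from scratch in NumPy. As a reference point, here is a minimal sketch of the standard bias-corrected Adam update (Kingma & Ba, 2015), followed by a toy usage example on a quadratic; the names are illustrative and not tied to any particular repository.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad          # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment (uncentered variance) estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction; t is the 1-based step count
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(theta) = ||theta||^2, whose gradient is 2 * theta.
theta, m, v = np.array([5.0, -3.0]), np.zeros(2), np.zeros(2)
for t in range(1, 501):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
```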
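The Siamese network description above hinges on weight sharing between the twin branches. A minimal PyTorch sketch, with illustrative layer sizes not taken from that repository, is shown below: both inputs pass through the same encoder module, so the two branches share one set of parameters by construction. A contrastive or triplet loss would then pull embeddings of matching pairs together and push non-matching pairs apart.

```python
import torch
import torch.nn as nn

class SiameseNet(nn.Module):
    def __init__(self, in_dim=784, embed_dim=64):
        super().__init__()
        # A single encoder is reused for both inputs, which is what makes the two
        # branches "identical": same configuration, same parameters, same weights.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, x1, x2):
        z1, z2 = self.encoder(x1), self.encoder(x2)
        return nn.functional.pairwise_distance(z1, z2)   # distance between the two embeddings
```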