There are 29 repositories under the adam topic.
On the Variance of the Adaptive Learning Rate and Beyond
Learning Rate Warmup in PyTorch
RAdam implemented in Keras & TensorFlow
This is a repo for my master's thesis research on the fusion of visual SLAM and GPS. It contains the research paper, code, and other related data.
8-bit systems to ESP32 WiFi Multifunction Firmware
PyTorch LSTM RNN for reinforcement learning on Atari games from OpenAI Universe, using Google DeepMind's Asynchronous Advantage Actor-Critic (A3C) algorithm, which is far more efficient than DQN and effectively supersedes it. It can play many games.
A deep learning and preprocessing framework in Rust with CPU and GPU support.
ADAS (short for Adaptive Step Size) is an optimizer that, rather than merely normalizing the gradient as other optimizers do, adapts the step size itself, aiming to make step-size scheduling obsolete and achieve state-of-the-art training performance.
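A rough, hypothetical sketch of the general idea of adapting the step size itself rather than rescaling the gradient (this is not the ADAS algorithm from the repo; the grow/shrink rule and the `grad_fn`, `grow`, and `shrink` parameters are illustrative assumptions):

```python
import numpy as np

def adaptive_step_sgd(grad_fn, w, lr=0.01, grow=1.2, shrink=0.5, steps=100):
    """Illustrative only: grow each parameter's step size while successive
    gradients agree in sign, shrink it when they flip, instead of dividing
    the gradient by a running norm as Adam-style methods do."""
    step = np.full_like(w, lr)
    prev_grad = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        agree = np.sign(g) == np.sign(prev_grad)
        step = np.where(agree, step * grow, step * shrink)
        w = w - step * np.sign(g)  # step along the sign of the gradient
        prev_grad = g
    return w
```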
A tour of different optimization algorithms in PyTorch.
Easy-to-use AdaHessian optimizer (PyTorch)
Unofficial PyTorch implementation of switching from Adam to SGD during training.
Lion and Adam optimization comparison
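For reference when reading such comparisons, a minimal sketch of the two update rules on a plain NumPy parameter vector (hyperparameter names follow the usual conventions from the papers; this is not code from the repo):

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # Adam: per-parameter step scaled by bias-corrected first/second moments.
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

def lion_step(w, g, m, lr=1e-4, b1=0.9, b2=0.99, wd=0.0):
    # Lion: the update direction is the sign of an interpolated momentum,
    # so every coordinate moves by the same magnitude lr.
    update = np.sign(b1 * m + (1 - b1) * g)
    w = w - lr * (update + wd * w)
    m = b2 * m + (1 - b2) * g
    return w, m
```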
Toy implementations of some popular ML optimizers using Python/JAX
Partially Adaptive Momentum Estimation (Padam) method from the paper "Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep Neural Networks" (accepted at IJCAI 2020).
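A minimal sketch of the partial-adaptivity idea from that paper, assuming the AMSGrad-style non-decreasing second moment; the default `p=0.125` and other hyperparameter names are illustrative, not taken from this repo:

```python
import numpy as np

def padam_step(w, g, m, v, v_max, lr=0.1, b1=0.9, b2=0.999, p=0.125, eps=1e-8):
    # Adam-style moment estimates.
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    # AMSGrad trick: keep the largest second-moment estimate seen so far.
    v_max = np.maximum(v_max, v)
    # Partial adaptivity: exponent p in (0, 0.5]; p = 0.5 behaves like
    # AMSGrad, while p -> 0 approaches SGD with momentum.
    w = w - lr * m / (v_max ** p + eps)
    return w, m, v, v_max
```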
Deep learning projects including applications (face recognition, neural style transfer, autonomous driving, sign language reading, music generation, translation, speech recognition and NLP) and theories (CNNs, RNNs, LSTM, Adam, Dropout, BatchNorm, Xavier/He initialization, hyperparameter tuning, regularization, optimization, Residual Networks). Deep Learning Specialization by Andrew Ng, deeplearning.ai
Quasi-Hyperbolic Rectified DEMON Adam/AMSGrad with AdaMod, Gradient Centralization, Lookahead, iterate averaging, and decoupled weight decay.
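Of the techniques listed, Gradient Centralization is simple enough to show in isolation; a minimal sketch of the operation applied to a raw gradient before the optimizer step (not code from this repo):

```python
import numpy as np

def centralize_gradient(g):
    """For gradients of weight tensors with rank > 1, subtract the mean taken
    over every dimension except the first (output) dimension, so each output
    unit's gradient has zero mean; rank-1 gradients (biases) are left alone."""
    if g.ndim > 1:
        axes = tuple(range(1, g.ndim))
        g = g - g.mean(axis=axes, keepdims=True)
    return g
```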
[Python] [arXiv/cs] Paper "An Overview of Gradient Descent Optimization Algorithms" by Sebastian Ruder
Adam, NAdam and AAdam optimizers
PyTorch implementation of the NovoGrad optimizer.
A simple MATLAB toolbox for deep learning networks (version 1.0.3).
Development repo for pilot3 submission to FDA - ADaM
ADAM Python client and notebooks.
DeepVariant-on-Spark is a germline short variant calling pipeline that runs Google DeepVariant on Apache Spark at scale.
Add-on that enhances all Confluence user profiles and adds an advanced people directory. The whole add-on is configurable via XML, can be localized, supports Velocity templates, and supports view and edit restrictions.
Optimization methods in deep learning, explained in Vietnamese: gradient descent, momentum, NAG, AdaGrad, Adadelta, RMSProp, Adam, Adamax, Nadam, and AMSGrad.
Literature survey of convex optimizers and optimization methods for deep learning; made especially for optimization researchers with ❤️
Orbit propagation, orbit determination, and analysis code
Implementation of a neural network from scratch using only NumPy (conv, FC, and max-pool layers, optimizers, and activation functions).
TensorFlow-Keras callback implementing arXiv:1712.07628.