gradjitta / annotated_deep_learning_paper_implementations

🧠 Implementations/tutorials of deep learning papers with side-by-side notes; including transformers (original, xl, switch, feedback, vit), optimizers (adam, radam, adabelief), gans (dcgan, cyclegan, stylegan2), reinforcement learning (ppo, dqn), capsnet, distillation, etc.

Home Page: https://nn.labml.ai

labml.ai Deep Learning Paper Implementations

This is a collection of simple PyTorch implementations of neural networks and related algorithms. These implementations are documented with explanations, and the website renders them as side-by-side formatted notes. We believe this format helps you understand the algorithms better.

We are actively maintaining this repo and adding new implementations almost weekly. Follow us on Twitter for updates.

Modules

✨ Transformers

✨ Recurrent Highway Networks

✨ LSTM

✨ HyperNetworks - HyperLSTM

✨ ResNet

✨ Capsule Networks

✨ Generative Adversarial Networks

✨ Sketch RNN

✨ Graph Neural Networks

✨ Counterfactual Regret Minimization (CFR)

Solving games with incomplete information, such as poker, with CFR (see the regret-matching sketch after this list).

✨ Reinforcement Learning

✨ Optimizers

✨ Normalization Layers

✨ Distillation
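
CFR builds its strategy at each information set with regret matching: actions are played in proportion to their accumulated positive counterfactual regret, falling back to a uniform strategy when no action has positive regret. Below is a minimal, self-contained sketch of that single update step, not the repo's implementation; the regret values are made up for illustration.

import numpy as np

def regret_matching(cumulative_regret: np.ndarray) -> np.ndarray:
    # Keep only the positive regrets: actions we regret not having played more.
    positive = np.maximum(cumulative_regret, 0.0)
    total = positive.sum()
    if total > 0:
        # Play each action in proportion to its positive regret.
        return positive / total
    # No action has positive regret: fall back to a uniform strategy.
    return np.full_like(cumulative_regret, 1.0 / len(cumulative_regret))

# Hypothetical regrets accumulated for three actions at one information set.
regrets = np.array([4.0, -2.0, 1.0])
print(regret_matching(regrets))  # -> [0.8 0.  0.2]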

Installation

pip install labml-nn
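
Once installed, the annotated modules can be used like ordinary PyTorch layers. Below is a minimal sketch, assuming the multi-head attention implementation is importable as labml_nn.transformers.mha.MultiHeadAttention; module paths and argument names may differ between versions, so check the notes at https://nn.labml.ai for the exact API.

import torch
from labml_nn.transformers.mha import MultiHeadAttention  # assumed module path

# 4 attention heads over a 128-dimensional model.
mha = MultiHeadAttention(heads=4, d_model=128)

# Inputs are shaped [seq_len, batch_size, d_model].
x = torch.randn(10, 2, 128)
out = mha(query=x, key=x, value=x)
print(out.shape)  # expected: torch.Size([10, 2, 128])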

Citing LabML

If you use LabML for academic research, please cite the library using the following BibTeX entry.

@misc{labml,
 author = {Varuna Jayasiri and Nipun Wijerathne},
 title = {LabML: A library to organize machine learning experiments},
 year = {2020},
 url = {https://nn.labml.ai/},
}

License: MIT License


Languages: Jupyter Notebook 66.9%, Python 33.1%, Makefile 0.0%