A Pythonic implementation of PyTorch autograd
- Pythonic Autograd
- Table of Contents
- Introduction
- Walkthrough
- Notebook 1 : Unsupervised Loss Implementation
- Notebook 2 : Unsupervised Closed-Form Loss
- Notebook 3 : Different Gradient Descent Implementations
- Notebook 4 : Backpropagation Algorithm explained
- Notebook 5 : PyTorch Under The Hood
- Notebook 6 : PyTorch vs Numba
- Notebook 7 : Back Propagation details P1
- Notebook 8 : Back Propagation details P2
- Notebook 9 : Testing Implemented Autograd
- Extra - Topological Sort
This repository contains a series of notebooks, each building toward an understanding of the PyTorch computational graph and the development of Pythonic autograd functionality.
Notebook 1 : Unsupervised Loss Implementation
This notebook tackles an unsupervised training problem, which can be broken down as follows:
- Problem: adjust an initially positioned point so that it moves toward the center of a distribution of randomly scattered points.
- Goal: build an understanding of the central objective of gradient descent techniques: reaching the optimal minimum point (a minimal sketch follows below).
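A minimal sketch of the idea, assuming a mean Euclidean distance loss and a numerical (finite-difference) gradient; the names `loss` and `numerical_grad` are illustrative, not the notebook's exact API:

```python
import torch

points = torch.rand(100, 2) * 10      # randomly scattered points
p = torch.tensor([9.0, 1.0])          # initially positioned point to adjust

def loss(p, points):
    # mean Euclidean distance from p to every point in the distribution
    return torch.sqrt(((points - p) ** 2).sum(dim=1)).mean()

def numerical_grad(p, points, eps=1e-4):
    # finite-difference estimate of d(loss)/dp, one coordinate at a time
    grad = torch.zeros_like(p)
    for i in range(len(p)):
        step = torch.zeros_like(p)
        step[i] = eps
        grad[i] = (loss(p + step, points) - loss(p - step, points)) / (2 * eps)
    return grad

lr = 0.5
for _ in range(200):
    p = p - lr * numerical_grad(p, points)   # each step moves p toward the minimum of the loss
```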
Notebook 2 : Unsupervised Closed-Form Loss
Refines the loss established in the previous notebook into a closed-form expression; compared to the numerical-gradient version, this gives faster and more efficient computation.
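As a rough illustration of the difference, assuming the same mean-Euclidean-distance objective as above: the gradient can be written analytically instead of being estimated by finite differences. The name `analytic_grad` is illustrative:

```python
import torch

def analytic_grad(p, points):
    # closed-form gradient of the mean Euclidean distance:
    # d/dp mean_i ||p - x_i|| = mean_i (p - x_i) / ||p - x_i||
    diff = p - points                                        # (N, 2)
    dist = torch.sqrt((diff ** 2).sum(dim=1, keepdim=True))  # (N, 1)
    return (diff / dist).mean(dim=0)                         # (2,)
```

One call replaces the two loss evaluations per coordinate that the numerical estimate needs, which is where the speed-up comes from.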
Notebook 3 : Different Gradient Descent Implementations
- Implementation of full-batch gradient descent
- Implementation of mini-batch gradient descent
- Implementation of stochastic gradient descent (a combined sketch of all three variants follows below)
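A condensed sketch of how the three variants differ, reusing the closed-form gradient idea from the previous section; the learning rate, batch size, and epoch counts are illustrative:

```python
import torch

def grad(p, batch):
    # closed-form gradient of the mean Euclidean distance (see the previous sketch)
    diff = p - batch
    return (diff / diff.norm(dim=1, keepdim=True)).mean(dim=0)

def full_batch_gd(p, points, lr=0.1, epochs=100):
    for _ in range(epochs):
        p = p - lr * grad(p, points)                              # one step uses the entire dataset
    return p

def mini_batch_gd(p, points, lr=0.1, batch_size=16, epochs=100):
    for _ in range(epochs):
        perm = torch.randperm(len(points))
        for i in range(0, len(points), batch_size):
            p = p - lr * grad(p, points[perm[i:i + batch_size]])  # one step per random mini-batch
    return p

def stochastic_gd(p, points, lr=0.1, epochs=100):
    for _ in range(epochs):
        for i in torch.randperm(len(points)).tolist():
            p = p - lr * grad(p, points[i:i + 1])                 # one step per single sample
    return p
```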
Notebook 4 : Backpropagation Algorithm explained
A simplified mathematical breakdown of the backpropagation algorithm, which serves as the fundamental engine behind autograd functionality.
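To make the chain-rule mechanics concrete, here is a tiny hand-worked example (the expression and numbers are illustrative, not taken from the notebook):

```python
# Forward pass: y = (w * x + b) ** 2 with x = 2, w = 3, b = 1
x, w, b = 2.0, 3.0, 1.0
z = w * x + b          # z = 7
y = z ** 2             # y = 49

# Backward pass: apply the chain rule from the output back to the parameters
dy_dz = 2 * z          # dy/dz = 2z      = 14
dz_dw = x              # dz/dw = x       = 2
dz_db = 1.0            # dz/db           = 1
dy_dw = dy_dz * dz_dw  # dy/dw = 14 * 2  = 28
dy_db = dy_dz * dz_db  # dy/db = 14 * 1  = 14
```

Autograd automates exactly this bookkeeping for arbitrarily large expression graphs.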
Notebook 5 : PyTorch Under The Hood
Explanation of some of PyTorch's built-in tensor-related functions.
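For example, a few of the standard attributes that expose the recorded graph (the exact set of functions covered by the notebook may differ):

```python
import torch

x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()

print(y.grad_fn)                 # SumBackward0 -- the node that produced y
print(y.grad_fn.next_functions)  # the graph nodes feeding into it (PowBackward0, ...)
y.backward()                     # run autograd through the recorded graph
print(x.grad)                    # dy/dx = 2x -> tensor([4., 6.])
```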
Notebook 6 : PyTorch vs Numba
Creating the same function using both PyTorch and Numba, followed by a performance comparison between the two approaches.
Note: this notebook is computationally expensive to run, and the comparison results consume a lot of disk memory.
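A minimal illustration of the kind of comparison, assuming a simple reduction as the shared function; the notebook's actual function and benchmark setup may differ:

```python
import time
import numpy as np
import torch
from numba import njit

def torch_sum_of_squares(x):
    return (x ** 2).sum()

@njit
def numba_sum_of_squares(x):
    total = 0.0
    for i in range(x.shape[0]):     # explicit loop that Numba compiles to machine code
        total += x[i] * x[i]
    return total

data = np.random.rand(10_000_000)

t0 = time.perf_counter()
torch_sum_of_squares(torch.from_numpy(data))
print("torch:", time.perf_counter() - t0)

numba_sum_of_squares(data[:10])   # warm-up call so JIT compilation is not timed
t0 = time.perf_counter()
numba_sum_of_squares(data)
print("numba:", time.perf_counter() - t0)
```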
Notebook 7 : Back Propagation details P1
A thorough walkthrough of implementing the backpropagation algorithm with Torch tensors, applied to the moons dataset.
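In spirit, this is a forward and backward pass written with plain tensors, where every derivative is derived by hand. A minimal sketch assuming a single linear layer with a sigmoid and an MSE loss; the notebook's architecture and hyperparameters may differ:

```python
import torch
from sklearn.datasets import make_moons

X, y = make_moons(n_samples=200, noise=0.1)
X = torch.tensor(X, dtype=torch.float32)
y = torch.tensor(y, dtype=torch.float32).unsqueeze(1)

W = torch.randn(2, 1) * 0.1   # parameters as plain tensors, no autograd involved
b = torch.zeros(1)

for _ in range(500):
    # forward pass
    z = X @ W + b
    p = torch.sigmoid(z)
    loss = ((p - y) ** 2).mean()

    # backward pass, chain rule written out by hand (MSE -> sigmoid -> linear)
    dp = 2 * (p - y) / len(y)     # d(loss)/dp
    dz = dp * p * (1 - p)         # d(loss)/dz, using sigmoid'(z) = p(1 - p)
    dW = X.t() @ dz               # d(loss)/dW
    db = dz.sum(0)                # d(loss)/db

    # gradient descent update
    W -= 0.5 * dW
    b -= 0.5 * db
```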
Notebook 8 : Back Propagation details P2
Continuing from the previous notebook, this one provides deeper insight into the backpropagation algorithm. It includes a comprehensive guide to implementing the backward function, leading to the creation and training of a distinct PyTorch model.
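The heart of such a backward function is replaying the recorded operations in reverse topological order (hence the extra notebook on topological sort). A minimal sketch with a scalar `Value` class; the class and method names are illustrative, not the notebook's exact implementation:

```python
class Value:
    """Scalar that records the operations producing it."""
    def __init__(self, data, parents=(), local_grads=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents          # Values this one was computed from
        self._local_grads = local_grads  # d(self)/d(parent) for each parent

    def __add__(self, other):
        return Value(self.data + other.data, (self, other), (1.0, 1.0))

    def __mul__(self, other):
        return Value(self.data * other.data, (self, other), (other.data, self.data))

    def backward(self):
        # topologically sort the graph, then propagate gradients in reverse order
        order, visited = [], set()
        def visit(v):
            if v not in visited:
                visited.add(v)
                for parent in v._parents:
                    visit(parent)
                order.append(v)
        visit(self)

        self.grad = 1.0
        for v in reversed(order):
            for parent, local in zip(v._parents, v._local_grads):
                parent.grad += local * v.grad   # chain rule accumulation

# usage: gradients of y = a * b + a
a, b = Value(2.0), Value(3.0)
y = a * b + a
y.backward()
print(a.grad, b.grad)   # 4.0 (= b + 1), 2.0 (= a)
```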
Notebook 9 : Testing Implemented Autograd
Assessing the implemented autograd feature using the Euclidean distance loss. This involves a comparison of outcomes against both the native PyTorch autograd capability and the previously developed loss and gradient calculation functions.
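The check boils down to computing the same gradient in different ways and confirming they agree. A rough sketch for the Euclidean distance of a single point, comparing native PyTorch autograd against the closed-form expression (the notebook additionally compares against the custom autograd built in the previous notebooks):

```python
import torch

# gradient of the Euclidean distance ||p|| via native PyTorch autograd
p = torch.tensor([3.0, 4.0], requires_grad=True)
dist = torch.sqrt((p ** 2).sum())
dist.backward()
print(p.grad)                       # tensor([0.6000, 0.8000])

# the same gradient in closed form: d||p||/dp = p / ||p||
print(p.detach() / dist.detach())   # tensor([0.6000, 0.8000]) -- should match
```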