
PyTorch Essentials


Welcome to PyTorch Essentials, a comprehensive repository covering PyTorch from fundamentals to advanced topics, with resources for learning and mastering this powerful deep learning framework.

Table of Contents

  • 🚀 Why PyTorch?
  • 🛣️Roadmap
  • 📒Colab-Notebook
  • 🔦Explore
  • 💧PyTorch code
  • ⚡PyTorch APIs
  • 🍰 Contributing
  • 🙇 Acknowledgements
  • ⚖ License
  • ❤️ Support

🚀 Why PyTorch?

PyTorch is not just a library; it's a revolution in the world of deep learning. Here are some reasons why PyTorch stands out:

  • Dynamic Computation Graph: PyTorch's dynamic computation graph allows for intuitive debugging and dynamic neural network architectures, making it ideal for research and experimentation (see the sketch after this list).

  • Efficient GPU Utilization: Leveraging CUDA and cuDNN, PyTorch maximizes GPU performance, accelerating deep learning computations and training times.

  • Native Pythonic Interface: PyTorch's Pythonic syntax makes it easy to learn and use, facilitating rapid prototyping and code readability.

  • Rich Ecosystem: With support for computer vision, natural language processing, reinforcement learning, and more, PyTorch offers a rich ecosystem of tools and libraries for diverse deep learning tasks.
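
To make the first point concrete, here is a minimal sketch of the dynamic graph in action: the graph is built as operations execute, so ordinary Python control flow (here, a loop of arbitrary depth) becomes part of the differentiable computation.

```python
import torch

# requires_grad=True asks autograd to record operations on x.
x = torch.tensor(2.0, requires_grad=True)

y = x
for _ in range(3):   # data-dependent control flow builds the graph on the fly
    y = y * x        # after the loop, y = x**4

y.backward()         # autograd traverses the graph that was just built
print(x.grad)        # dy/dx = 4 * x**3 -> tensor(32.)
```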

🛣️Roadmap

(PyTorch learning roadmap diagram)

📒Colab-Notebook

| #  | Notebook | Link |
|----|----------|------|
| 01 | PyTorch Basics P1 | Open in Colab |
| 02 | PyTorch Basics P2 | Open in Colab |
| 03 | Linear Regression | Open in Colab |
| 04 | Binary Classification P1 (sklearn `make_moons` dataset) | Open in Colab |
| 05 | Binary Classification P2 | Open in Colab |
| 06 | Multi-Class Classification P1 (sklearn `make_blobs` dataset) | Open in Colab |
| 07 | Multi-Class Classification P2 | Open in Colab |
| 08 | Computer Vision: Artificial Neural Network (ANN) | Open in Colab |
| 09 | Computer Vision: LeNet-5 | Open in Colab |
| 10 | Computer Vision: Convolutional Neural Network (CNN) | Open in Colab |
| 11 | Custom Datasets P1 | Open in Colab |
| 12 | Custom Datasets P2 | Open in Colab |
| 13 | PyTorch Going Modular | Open in Colab |

🔦Explore

This repository covers a wide range of topics, including:

  • Fundamentals of PyTorch: Learn about tensors, operations, autograd, and optimization techniques.

  • GPU Usage Optimization: Explore strategies to efficiently utilize GPUs for accelerated deep learning workflows.

  • Advanced Concepts: Delve into topics like:

    • 👁️Computer Vision: Dive into image classification, object detection, image segmentation, and transfer learning using PyTorch.
    • 🔊Natural Language Processing: Discover how PyTorch powers state-of-the-art NLP models for tasks like sentiment analysis, language translation, and text generation.
    • 🖼️Generative Models: Explore how to create entirely new data, like generating realistic images or writing creative text.
    • 🛠️Reinforcement Learning: Train models to learn optimal strategies through interaction with an environment.
  • Custom Datasets and Data Loading: Master the art of creating custom datasets and efficient data loading pipelines in PyTorch (a minimal `Dataset` sketch follows this list).

  • Modular Workflows: Build modular and scalable deep learning pipelines for seamless experimentation and model deployment.

  • Experiment Tracking: Learn best practices for experiment tracking, model evaluation, and hyperparameter tuning.

  • Replicating Research Papers: Replicate cutting-edge research papers and implement state-of-the-art deep learning models.

  • Model Deployment: Explore techniques for deploying PyTorch models in production environments, including cloud deployments and edge devices.

  • Bonus: Dive into the exciting world of PyTorch Lightning, a framework that streamlines the machine learning development process.
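
As referenced in the custom datasets bullet above, here is a minimal sketch of the standard `Dataset`/`DataLoader` pattern. `SquaresDataset` is an invented toy class for illustration; the `__len__`/`__getitem__` interface is the standard PyTorch contract, and any indexed source (files, arrays, databases) can be wrapped the same way.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class SquaresDataset(Dataset):
    """Toy dataset mapping x -> x**2 (hypothetical example)."""

    def __init__(self, n: int = 100):
        self.x = torch.arange(n, dtype=torch.float32).unsqueeze(1)
        self.y = self.x ** 2

    def __len__(self):
        return len(self.x)            # number of samples

    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]  # one (input, target) pair

loader = DataLoader(SquaresDataset(), batch_size=16, shuffle=True)
for xb, yb in loader:
    print(xb.shape, yb.shape)         # torch.Size([16, 1]) twice
    break
```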

💧PyTorch code

| Category | Code Example | Description | See |
|----------|--------------|-------------|-----|
| Imports | `import torch` | Root package | |
| | `from torch.utils.data import Dataset, DataLoader` | Dataset representation and loading | |
| | `import torchvision` | Computer vision tools and datasets | torchvision |
| | `from torchvision import datasets, models, transforms` | Vision datasets, architectures & transforms | torchvision |
| | `import torch.nn as nn` | Neural networks | nn |
| | `import torch.nn.functional as F` | Layers, activations, and more | functional |
| | `import torch.optim as optim` | Optimizers (e.g., SGD, Adam) | optim |
| | `from torch.autograd import Variable` | Autograd `Variable` wrapper (deprecated since PyTorch 0.4; tensors now track gradients directly) | autograd |
| Neural Network API | `from torch import Tensor` | Tensor node in the computation graph | |
| | `import torch.autograd as autograd` | Computation graph | autograd |
| | `from torch.nn import Module` | Base class for all neural network modules | nn |
| | `from torch.nn import functional as F` | Functional interface for neural networks | functional |
| TorchScript and JIT | `from torch.jit import script, trace` | Hybrid frontend decorator and tracing JIT | TorchScript |
| | `torch.jit.trace(model, input)` | Traces the computational steps of `input` through the model | TorchScript |
| | `@script` | Decorator indicating data-dependent control flow | TorchScript |
| ONNX | `import torch.onnx` | ONNX export interface | onnx |
| | `torch.onnx.export(model, dummy_input, "model.onnx")` | Exports a trained model to ONNX format using dummy data and a file name | onnx |
| Data Handling | `x = torch.randn(*size)` | Tensor with independent N(0,1) entries | tensor |
| | `x = torch.ones(*size)` | Tensor filled with ones | tensor |
| | `x = torch.zeros(*size)` | Tensor filled with zeros | tensor |
| | `x = torch.tensor(L)` | Create tensor from [nested] list or ndarray `L` | tensor |
| | `y = x.clone()` | Clone of `x` | tensor |
| | `with torch.no_grad():` | Code wrap that stops autograd from tracking tensor history | tensor |
| | `x.requires_grad_(True)` | In-place operation; when set to `True`, tracks computation history for future derivative calculations | tensor |
| Dimensionality | `x.size()` | Returns a tuple-like object of dimensions | tensor |
| | `x = torch.cat(tensor_seq, dim=0)` | Concatenates tensors along `dim` | tensor |
| | `y = x.view(a, b, ...)` | Reshapes `x` into size `(a, b, ...)` | tensor |
| | `y = x.view(-1, a)` | Reshapes `x` into size `(b, a)` for some inferred `b` | tensor |
| | `y = x.transpose(a, b)` | Swaps dimensions `a` and `b` | tensor |
| | `y = x.permute(*dims)` | Permutes dimensions | tensor |
| | `y = x.unsqueeze(dim)` | Tensor with an added axis | tensor |
| | `y = x.squeeze()` | Removes all dimensions of size 1 | tensor |
| | `y = x.squeeze(dim=1)` | Removes the specified dimension if it has size 1 | tensor |
| Algebra | `ret = A.mm(B)` | Matrix multiplication | math operations |
| | `ret = A.mv(x)` | Matrix-vector multiplication | math operations |
| | `x = x.t()` | Matrix transpose | math operations |
| GPU Usage | `torch.cuda.is_available()` | Check for CUDA availability | cuda |
| | `x = x.cuda()` | Move `x`'s data from CPU to GPU and return a new object | cuda |
| | `x = x.cpu()` | Move `x`'s data from GPU to CPU and return a new object | cuda |
| | `device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')` | Device-agnostic code | cuda |
| | `model.to(device)` | Recursively convert parameters and buffers to device-specific tensors | cuda |
| | `x = x.to(device)` | Copy tensors to a device (GPU, CPU) | cuda |
| Deep Learning | `nn.Linear(m, n)` | Fully connected layer from `m` to `n` units | nn |
| | `nn.Conv2d(m, n, s)` | 2-dimensional conv layer from `m` to `n` channels with kernel size `s` | nn |
| | `nn.MaxPool2d(s)` | 2-dimensional max pooling layer with kernel size `s` | nn |
| | `nn.BatchNorm2d(num_features)` | Batch normalization layer | nn |
| | `nn.RNN(input_size, hidden_size)` | Recurrent neural network layer | nn |
| | `nn.LSTM(input_size, hidden_size)` | Long short-term memory layer | nn |
| | `nn.GRU(input_size, hidden_size)` | Gated recurrent unit layer | nn |
| | `nn.Dropout(p=0.5)` | Dropout layer | nn |
| | `nn.Embedding(num_embeddings, embedding_dim)` | Mapping from indices to embedding vectors | nn |
| Loss Functions | `nn.CrossEntropyLoss()` | Cross-entropy loss | loss functions |
| | `nn.MSELoss()` | Mean squared error loss | loss functions |
| | `nn.NLLLoss()` | Negative log-likelihood loss | loss functions |
| Activation Functions | `nn.ReLU()` | Rectified linear unit activation | activation functions |
| | `nn.Sigmoid()` | Sigmoid activation | activation functions |
| | `nn.Tanh()` | Tanh activation | activation functions |
| Optimizers | `optimizer = optim.SGD(model.parameters(), lr=0.01)` | Stochastic gradient descent optimizer | optimizers |
| | `optimizer = optim.Adam(model.parameters(), lr=0.001)` | Adam optimizer | optimizers |
| | `optimizer.step()` | Update weights | optimizers |
| Learning Rate Scheduling | `scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)` | Create a learning rate scheduler | learning rate scheduler |
| | `scheduler.step()` | Adjust the learning rate | learning rate scheduler |
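
The entries above compose into a standard training loop. Below is a minimal sketch using an invented toy model and random stand-in data; only the PyTorch calls listed in the table are assumed.

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Device-agnostic setup, as in the GPU Usage rows above.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Hypothetical toy model: 10 features in, 2 classes out.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2)).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

X = torch.randn(256, 10)             # random stand-in inputs
y = torch.randint(0, 2, (256,))      # random stand-in labels

for epoch in range(5):
    model.train()
    inputs, targets = X.to(device), y.to(device)
    optimizer.zero_grad()            # clear gradients from the previous step
    loss = criterion(model(inputs), targets)
    loss.backward()                  # autograd computes parameter gradients
    optimizer.step()                 # update weights
    scheduler.step()                 # adjust the learning rate per epoch
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```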
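A companion sketch for the TorchScript and ONNX rows: `model` here is a hypothetical trained module, and `dummy_input` only needs to match the model's expected input shape.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2).eval()      # hypothetical trained model
dummy_input = torch.randn(1, 10)     # example input matching the model

# TorchScript: record the operations executed on the example input.
traced = torch.jit.trace(model, dummy_input)
traced.save("model.pt")

# ONNX: export the same model for use in other runtimes.
torch.onnx.export(model, dummy_input, "model.onnx")
```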

⚡PyTorch APIs

(Overview diagram of the PyTorch API surface)

🍰 Contributing

Contributions are welcome!

🙇 Acknowledgements

⚖ License

This project is licensed under the MIT License. See LICENSE for details.

❤️ Support

If you find this repository helpful, show your support by starring it! For questions or feedback, reach out on Twitter (X).

$\color{skyblue}{\textbf{Connect with me:}}$

➤ If you have questions or feedback, feel free to reach out!

