dolittle007 / Deep-Learning-Experiments

Notes and experiments to understand deep learning concepts


Deep Learning Lecture Notes and Experiments

Code samples link to other repositories that I maintain (Advanced Deep Learning with Keras book) or contribute to (Keras).

2022 Version

Welcome to the 2022 version of the Deep Learning course. We made major changes to the coverage and delivery of this course to reflect recent advances in the field.

What is new in the 2022 version:

  1. Emphasis on tools to use and deploy deep learning models. In the past, we learned how to build and train models to perform certain tasks. However, we often want to use a pre-trained model for immediate deployment, testing, or demonstration. Hence, we will use tools such as HuggingFace, Gradio, and Streamlit in our discussions.

  2. Emphasis on understanding deep learning building blocks. The ability to build, train, and test models is important. However, when we want to optimize and deploy a deep learning model on new hardware or run it in production, we need an in-depth understanding of the code that implements our algorithms. Hence, there will be emphasis on low-level algorithms and their code implementations.

  3. Emphasis on practical applications. Deep learning can do a lot more than recognition. Hence, we will highlight practical applications in vision (detection, segmentation), speech (ASR, TTS) and text (sentiment, summarization).

  4. Various levels of abstraction. We will present deep learning concepts from low-level numpy and einops, to mid-level frameworks such as PyTorch, to high-level APIs such as HuggingFace, Gradio, and Streamlit. This lets us apply deep learning principles at whatever level the problem constraints demand.

  5. Emphasis on individual presentation of assignments, machine exercises and projects. Online learning is hard. To maximize student learning, this course focuses on exchange of ideas to ensure individual student progress.
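To make the "levels of abstraction" point concrete, here is a minimal, illustrative sketch of the lowest rung: a dense (fully connected) layer written directly in numpy. The function and variable names (`dense`, `w`, `b`) are my own choices for illustration, not course code; mid-level frameworks such as PyTorch wrap this same computation in modules like `nn.Linear`.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    """Fully connected layer: y = x @ W^T + b."""
    return x @ w.T + b

x = rng.standard_normal((4, 8))   # batch of 4 samples, 8 features each
w = rng.standard_normal((3, 8))   # 3 output units, 8 weights per unit
b = np.zeros(3)                   # one bias per output unit

y = dense(x, w, b)
assert y.shape == (4, 3)          # one 3-dim output per sample
```

The same operation at the mid level is a single `nn.Linear(8, 3)` call; the point of the course structure is that you can move between these levels as the problem demands.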

Coverage:

  1. Deep Learning Toolkit - Anaconda, venv, VSCode, Python, Numpy, Einops, PyTorch, Timm, HuggingFace, Gradio, Streamlit, Colab, Deepnote, Kaggle, etc.
  2. Datasets - collection, labelling, loading, splitting, feeding
  3. Supervised Learning
  4. Building blocks - MLPs, CNNs, RNNs, Transformers
  5. Backpropagation, Optimization and Regularization
  6. Unsupervised Learning
  7. AutoEncoders and Variational AutoEncoders
  8. Practical Applications
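As an illustrative sketch (my own toy code, not course material) of how the building-blocks and backpropagation items above fit together, here is a one-hidden-layer MLP trained by hand-written backpropagation on a small synthetic regression task:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: learn y = x1 + x2 + x3 (a simple synthetic target)
X = rng.standard_normal((64, 3))
y = X.sum(axis=1, keepdims=True)

# Parameters of a 3 -> 16 -> 1 MLP, small random init
W1 = rng.standard_normal((3, 16)) * 0.1
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.1
b2 = np.zeros(1)

lr = 0.1
for _ in range(2000):
    # Forward pass
    h = np.maximum(0, X @ W1 + b1)        # ReLU hidden layer
    pred = h @ W2 + b2
    loss = ((pred - y) ** 2).mean()       # mean squared error

    # Backward pass: chain rule, written out by hand
    dpred = 2 * (pred - y) / len(X)
    dW2 = h.T @ dpred
    db2 = dpred.sum(axis=0)
    dh = dpred @ W2.T
    dh[h <= 0] = 0                        # gradient of ReLU
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)

    # Plain gradient-descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

assert loss < 0.1                         # the toy target is learned
```

Everything above is what `loss.backward()` and an optimizer step do for you in PyTorch; writing it once by hand is the kind of low-level understanding the 2022 version emphasizes.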

2020 Version

So much has changed since this course was offered, so it is time for a revision. I will keep the original lecture notes at the bottom, but they will no longer be maintained. I am introducing the 2020 version. The big changes are as follows:

  1. Review of Machine Learning - Frustrated with the lack of depth in the ML part, I decided to develop a new course - Foundations of Machine Learning. Before studying DL, a good grasp of ML is of paramount importance. Without ML, it is harder to understand DL and to move it forward.

  2. Lecture Notes w/ Less Clutter - Prior to this version, my lecture notes had too much text. In the 2020 version, I focus on the key concepts while carefully explaining the ideas behind them during lecture. The lecture notes are closely coupled with sample implementations, which enables us to move quickly from concepts to actual code.

Lecture Notes and Experiments

  1. Course Roadmap
  2. Multilayer Perceptron (MLP)
  3. Convolutional Neural Network (CNN)
  4. Recurrent Neural Network (RNN)
  5. Transformer
  6. Regularizer
  7. Optimizer
  8. AutoEncoder
  9. Normalization
  10. Generative Adversarial Network (GAN)
  11. Variational AutoEncoder (VAE)
  12. Object Detection
  13. Object Segmentation
  14. Deep Reinforcement Learning (DRL)
  15. Policy Gradient Methods

Star, Fork, Cite

If you find this work useful, please give it a star, fork, or cite:

@misc{atienza2020dl,
  title={Deep Learning Lecture Notes},
  author={Atienza, Rowel},
  year={2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/roatienza/Deep-Learning-Experiments}},
}

Lecture Notes (Old - will no longer be maintained)

  1. Course Roadmap
  2. Background Materials
  3. Machine Learning Basics
  4. Deep Neural Networks
  5. Regularization
  6. Optimization
  7. Convolutional Neural Networks (CNN)
  8. Deep Networks
  9. Embeddings
  10. Recurrent Neural Networks, LSTM, GRU
  11. AutoEncoders
  12. Generative Adversarial Networks (GAN)
      a. Improved GANs
      b. Disentangled GAN
      c. Cross-Domain GAN
  13. Variational Autoencoder (VAE)
  14. Deep Reinforcement Learning (DRL)
  15. Policy Gradient Methods


License: MIT


Languages: Jupyter Notebook 73.2%, Python 26.8%