kibromhft / Improving-Deep-Neural-Networks

Improving Deep Neural Networks

Objectives:

  • Hyperparameter Tuning, Regularization and Optimization

Specific Objectives

  • Recognize that different weight initializations lead to different training results
  • Set up a machine learning application with train/dev/test splits
  • Diagnose bias and variance issues in deep learning models (the bias-variance trade-off)
  • Apply regularization methods such as dropout and L2 regularization
  • Address numerical issues such as vanishing gradients in deep models
  • Recall optimization methods such as (stochastic) gradient descent (SGD), gradient descent with momentum, RMSProp, and Adam
  • Use random minibatches to accelerate convergence and improve optimization
  • Use gradient checking to verify the correctness of a backpropagation implementation
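As a taste of the last objective, here is a minimal gradient-checking sketch (not from the notebooks themselves): it compares an analytic gradient against a two-sided numerical estimate, and a small relative difference (roughly below 1e-7) suggests the backpropagation code is correct. The cost function `J(theta) = sum(theta**2)` used in the example is an assumption chosen for illustration.

```python
import numpy as np

def grad_check(f, grad_f, theta, eps=1e-7):
    """Compare an analytic gradient with a two-sided numerical estimate."""
    num_grad = np.zeros_like(theta)
    for i in range(theta.size):
        plus = theta.copy();  plus[i] += eps
        minus = theta.copy(); minus[i] -= eps
        # Centered difference: (f(theta+eps) - f(theta-eps)) / (2*eps)
        num_grad[i] = (f(plus) - f(minus)) / (2 * eps)
    ana_grad = grad_f(theta)
    # Relative difference between analytic and numerical gradients
    return np.linalg.norm(ana_grad - num_grad) / (
        np.linalg.norm(ana_grad) + np.linalg.norm(num_grad))

# Illustrative cost: J(theta) = sum(theta**2), so dJ/dtheta = 2*theta
theta = np.array([1.0, -2.0, 3.0])
diff = grad_check(lambda t: np.sum(t**2), lambda t: 2 * t, theta)
```

A `diff` far above 1e-5 would indicate a bug in the gradient computation.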

Part 1:

  • Initialization.ipynb
  • Regularization.ipynb
  • Gradient Checking v1.ipynb
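To illustrate the initialization topic covered in Part 1, below is a minimal sketch (an assumption, not code taken from `Initialization.ipynb`) of He initialization, which scales weights by `sqrt(2 / fan_in)` to keep activation variance stable in ReLU networks:

```python
import numpy as np

def initialize_he(layer_dims, seed=0):
    """He initialization: Gaussian weights scaled by sqrt(2/fan_in), zero biases."""
    rng = np.random.default_rng(seed)
    params = {}
    for l in range(1, len(layer_dims)):
        params[f"W{l}"] = rng.standard_normal(
            (layer_dims[l], layer_dims[l - 1])) * np.sqrt(2.0 / layer_dims[l - 1])
        params[f"b{l}"] = np.zeros((layer_dims[l], 1))
    return params

# A 4-3-1 network: two weight matrices and two bias vectors
params = initialize_he([4, 3, 1])
```

Zero initialization would make all hidden units compute the same function, and large random initialization slows learning; He initialization avoids both.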

Part 2:

  • Optimization methods.ipynb
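A minimal sketch of the Adam update rule explored in Part 2 (a standalone illustration, not code from `Optimization methods.ipynb`): it keeps exponentially weighted estimates of the gradient's first and second moments, applies bias correction, and scales the step accordingly.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update with bias-corrected first and second moment estimates."""
    m = beta1 * m + (1 - beta1) * grad          # first moment (momentum)
    v = beta2 * v + (1 - beta2) * grad**2       # second moment (RMSProp-style)
    m_hat = m / (1 - beta1**t)                  # bias correction
    v_hat = v / (1 - beta2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Illustrative run: minimize J(theta) = theta**2 starting from theta = 5
theta = np.array([5.0]); m = np.zeros(1); v = np.zeros(1)
for t in range(1, 501):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.1)
```

Setting `beta1 = 0` recovers RMSProp, and replacing `v_hat` with 1 recovers plain momentum, which is why Adam is often described as a combination of the two.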

Part 3:

  • Tensorflow Tutorial.ipynb

Languages

Language: Jupyter Notebook 100.0%