wandb / edu

Educational materials on deep learning by Weights & Biases

Home Page: http://wandb.ai

Rewrite MLP colab to focus on batchnorm instead of dropout

charlesfrye opened this issue

The issues with normalization are a great lead-in to introducing batchnorm, while dropout feels kinda tacked on.

Depending on timing, we could do both dropout and batchnorm -- presented as solutions to the two issues you see when you increase network depth: optimization problems and overfitting.
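
A minimal sketch of what the rewritten notebook could expose, assuming the colab's MLP is written in PyTorch (the builder function `make_mlp` and its `use_batchnorm` / `dropout_p` knobs are hypothetical names for illustration, not the colab's actual code). The idea is to let students toggle each intervention independently, so batchnorm can be framed as the fix for optimization trouble in deeper nets and dropout as the fix for overfitting:

```python
import torch.nn as nn


def make_mlp(in_dim, hidden_dims, out_dim, use_batchnorm=False, dropout_p=0.0):
    """Build an MLP; `use_batchnorm` and `dropout_p` are toggles for the exercise."""
    layers = []
    prev = in_dim
    for width in hidden_dims:
        layers.append(nn.Linear(prev, width))
        if use_batchnorm:
            # normalize pre-activations -- the "normalization issues" lead-in
            layers.append(nn.BatchNorm1d(width))
        layers.append(nn.ReLU())
        if dropout_p > 0:
            # randomly zero activations to regularize -- the overfitting fix
            layers.append(nn.Dropout(dropout_p))
        prev = width
    layers.append(nn.Linear(prev, out_dim))
    return nn.Sequential(*layers)


# e.g. a deeper net where students compare the two interventions
model = make_mlp(784, [256, 256, 256], 10, use_batchnorm=True, dropout_p=0.5)
```

With a builder like this, the batchnorm-only section and an optional dropout section could share the same training loop and just flip the flags.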