
SMAI Project: Understanding Deep Learning Requires Rethinking Generalization


Understanding Deep Learning Requires Rethinking Generalization (arXiv:1611.03530)

Aim:

To understand what differentiates neural networks that generalize well from those that do not

Datasets:

CIFAR10, ImageNet

Models:

MLP-512, Inception (tiny), Wide ResNet, AlexNet, Inception_v3

Experiments performed:

  • Effect of explicit regularization: data augmentation, weight decay, dropout
  • Effect of implicit regularization: batch normalization
  • Input data corruption: pixel shuffle, Gaussian pixels, random pixels
  • Label corruption at levels ranging from 1% to 100% (both corruption schemes are sketched after this list)
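As a rough illustration of the corruption experiments, here is a minimal PyTorch-style sketch of injecting random labels and randomized pixels into CIFAR10. The wrapper class, the `corrupt_frac` parameter, and the `pixel_mode` names are illustrative assumptions, not taken from this repo's code:

```python
import numpy as np
from torchvision import datasets, transforms

class CorruptedCIFAR10(datasets.CIFAR10):
    """CIFAR10 wrapper adding label and pixel corruption (illustrative sketch)."""

    def __init__(self, root, corrupt_frac=0.0, pixel_mode=None, **kwargs):
        super().__init__(root, **kwargs)
        rng = np.random.RandomState(0)
        n = len(self.data)
        # Label corruption: relabel a random fraction of examples uniformly at random.
        for i in rng.choice(n, size=int(corrupt_frac * n), replace=False):
            self.targets[i] = int(rng.randint(10))
        if pixel_mode == "shuffle":
            # Pixel shuffle: one fixed pixel permutation applied to every image.
            perm = rng.permutation(32 * 32)
            self.data = self.data.reshape(n, -1, 3)[:, perm].reshape(n, 32, 32, 3)
        elif pixel_mode == "random":
            # Random pixels: an independent permutation for each image.
            flat = self.data.reshape(n, -1, 3)
            for i in range(n):
                flat[i] = flat[i][rng.permutation(32 * 32)]
            self.data = flat.reshape(n, 32, 32, 3)
        elif pixel_mode == "gaussian":
            # Gaussian pixels: replace images with noise matching the data's mean/std.
            noise = rng.normal(self.data.mean(), self.data.std(), self.data.shape)
            self.data = np.clip(noise, 0, 255).astype(np.uint8)

# Example: CIFAR10 training set with 50% of the labels randomized.
train_set = CorruptedCIFAR10("./data", corrupt_frac=0.5, train=True,
                             download=True, transform=transforms.ToTensor())
```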

Results

Data corruption experiments

Label corruption experiments

Regularization experiments

Checkpoint files of models trained on ImageNet (explicit regularization experiments):

  • w/o augmentation; with learning-rate scheduler and dropout: checkpoint
  • w/o augmentation and w/o learning-rate scheduler; with dropout: checkpoint
  • with augmentation, learning-rate scheduler, and dropout: checkpoint
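A minimal sketch of restoring one of these checkpoints in PyTorch follows; the file name and the "model_state" key are assumptions about the checkpoint layout, and may differ from the actual files:

```python
import torch
from torchvision.models import inception_v3

# Load a saved checkpoint onto CPU (file name and dict key are assumptions).
ckpt = torch.load("inception_v3_with_aug.pth", map_location="cpu")
model = inception_v3(num_classes=1000, aux_logits=True)
model.load_state_dict(ckpt["model_state"])
model.eval()  # disables dropout and uses stored BatchNorm statistics
```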

Team Members:


