itsShnik / internal-knowledge-distillation

Internal knowledge distillation in residual networks

Internal Knowledge Distillation

Residual networks can be viewed as ensembles of many shallower sub-networks. The idea is to train a residual network so that the knowledge held by the ensemble is distilled into its sub-networks within a single training procedure (see the sketch after the list below). The advantages of doing so are:

  1. Improved accuracy of the original ResNet
  2. Training residual networks of multiple depths in a single, efficient procedure
  3. A better approach to knowledge distillation compared to traditional distillation methods
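Below is a minimal, hypothetical sketch of how such internal distillation could look in PyTorch. It is not the repository's actual implementation: the module and parameter names (`TinyResNet`, `skip_last`, `temperature`, `alpha`) are illustrative assumptions. A shallower sub-network is obtained by skipping the last few residual blocks, and the full-depth network's softened outputs act as the teacher for that sub-network inside the same training step.

```python
# Illustrative sketch of internal knowledge distillation (not the repo's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        return x + self.conv2(F.relu(self.conv1(x)))

class TinyResNet(nn.Module):
    """Residual network whose forward pass can skip its last few blocks,
    yielding a shallower sub-network that shares all weights."""
    def __init__(self, channels=16, num_blocks=4, num_classes=10):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        self.blocks = nn.ModuleList(ResidualBlock(channels) for _ in range(num_blocks))
        self.head = nn.Linear(channels, num_classes)

    def forward(self, x, skip_last=0):
        x = F.relu(self.stem(x))
        for block in self.blocks[: len(self.blocks) - skip_last]:
            x = block(x)
        x = x.mean(dim=(2, 3))  # global average pooling
        return self.head(x)

def distillation_step(model, optimizer, images, labels, temperature=4.0, alpha=0.5):
    """One training step: the full-depth network learns from the labels, and a
    randomly chosen shallower sub-network is additionally supervised by the
    full network's softened predictions (the "internal" teacher)."""
    full_logits = model(images, skip_last=0)
    loss = F.cross_entropy(full_logits, labels)

    # Pick a sub-network by dropping 1..(num_blocks-1) of the final blocks.
    skip = torch.randint(1, len(model.blocks), (1,)).item()
    sub_logits = model(images, skip_last=skip)

    # Hard-label loss plus soft-label (distillation) loss for the sub-network.
    soft_targets = F.softmax(full_logits.detach() / temperature, dim=1)
    kd_loss = F.kl_div(F.log_softmax(sub_logits / temperature, dim=1),
                       soft_targets, reduction="batchmean") * temperature ** 2
    loss = loss + alpha * F.cross_entropy(sub_logits, labels) + (1 - alpha) * kd_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = TinyResNet()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    images, labels = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
    print(distillation_step(model, optimizer, images, labels))
```

Because every sub-network shares the full network's weights, a single training run produces usable models at several depths, which is the efficiency advantage mentioned in point 2 above.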

Languages

Python 100.0%