
Skin Cancer MNIST: HAM10000 - ResNet50 vs Inception-V3 vs VGG-19 vs VGG-16 vs GoogLeNet (Inception-V1)

Open in Kaggle

Skin Cancer MNIST HAM10000

Residual learning: a building block.

The degradation (of training accuracy) indicates that not all systems are similarly easy to optimize. In [Kaiming He et al., 2015] the degradation problem is addressed by introducing a deep residual learning framework. Instead of hoping each few stacked layers directly fit a desired underlying mapping, these layers are explicitly made to fit a residual mapping. Formally, denoting the desired underlying mapping as H(x), the stacked nonlinear layers fit another mapping F(x) := H(x) − x. The original mapping is recast into F(x) + x. The hypothesis is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. In the extreme case, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.
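The extreme case is easiest to see in a toy example. The following is a minimal NumPy sketch (illustrative only, not taken from this repository): a "plain" block has to learn the full mapping H(x), whereas a residual block only has to learn F(x) = H(x) − x, so recovering the identity mapping amounts to driving its weights to zero.

```python
import numpy as np

def plain_block(x, weights):
    # A "plain" stack of layers must learn the full mapping H(x) directly.
    return np.maximum(0.0, weights @ x)        # H(x) = ReLU(Wx)

def residual_block(x, weights):
    # A residual block only learns F(x) = H(x) - x; the identity is added back.
    f_x = np.maximum(0.0, weights @ x)         # F(x) = ReLU(Wx)
    return f_x + x                             # H(x) = F(x) + x

x = np.array([1.0, 2.0, 3.0])
zero_weights = np.zeros((3, 3))

# If the optimal mapping is the identity, the residual branch only has to
# push its weights toward zero, which is easy to do:
print(residual_block(x, zero_weights))         # -> [1. 2. 3.]  (identity preserved)
print(plain_block(x, zero_weights))            # -> [0. 0. 0.]  (identity must be learned)
```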

The formulation F(x) + x can be realized by feedforward neural networks with "shortcut connections" (see the building-block scheme above). Shortcut connections are those skipping one or more layers. In ResNet, the shortcut connections simply perform identity mapping, and their outputs are added to the outputs of the stacked layers. Identity shortcut connections add neither extra parameters nor computational complexity. The entire network can still be trained end-to-end by SGD with backpropagation, and can be easily implemented using common libraries.
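A residual block with an identity shortcut is straightforward to express in Keras. The sketch below is an illustration only, not this repository's code; it assumes a TensorFlow/Keras setup, and the layer widths and toy input shape are made up.

```python
import tensorflow as tf
from tensorflow.keras import layers

def identity_residual_block(x, filters, kernel_size=3):
    """Two stacked conv layers whose output F(x) is added to the identity shortcut x."""
    shortcut = x                                          # identity mapping: no extra parameters
    y = layers.Conv2D(filters, kernel_size, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, kernel_size, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Add()([y, shortcut])                       # F(x) + x
    return layers.Activation("relu")(y)

# Toy example: channel count of the input must match `filters` for a pure identity shortcut.
inputs = layers.Input(shape=(28, 28, 64))
outputs = identity_residual_block(inputs, filters=64)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="sgd", loss="mse")                # trainable end-to-end with SGD
```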



Example of ResNet50 vs Xception vs Inception-V3 vs VGG-19 vs VGG-16 as reference models.
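Although the actual notebook is not reproduced here, a comparison of this kind can be sketched with tf.keras.applications: the same small classification head is attached to each ImageNet-pretrained backbone and trained on the seven HAM10000 diagnostic classes. Everything in the snippet below (input size, frozen backbone, optimizer, head) is an assumption for illustration, not this repository's implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers, applications

NUM_CLASSES = 7              # seven diagnostic categories in HAM10000
IMG_SHAPE = (224, 224, 3)    # assumed input size; each backbone has its own preferred size

def build_classifier(backbone_fn):
    """Attach a small classification head to an ImageNet-pretrained backbone."""
    backbone = backbone_fn(include_top=False, weights="imagenet", input_shape=IMG_SHAPE)
    backbone.trainable = False                            # train only the head first
    x = layers.GlobalAveragePooling2D()(backbone.output)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = tf.keras.Model(backbone.input, outputs)
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# One model per reference architecture compared in the figure above.
candidates = {
    "ResNet50":    applications.ResNet50,
    "Xception":    applications.Xception,
    "InceptionV3": applications.InceptionV3,
    "VGG19":       applications.VGG19,
    "VGG16":       applications.VGG16,
}
models = {name: build_classifier(fn) for name, fn in candidates.items()}
```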



Detailed example of GoogLeNet, a.k.a. Inception-V1 (Szegedy et al., 2015), as reference model.
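GoogLeNet is built from Inception modules: parallel 1×1, 3×3, and 5×5 convolutions (the larger ones preceded by 1×1 "reduce" convolutions) plus a max-pooled branch, all concatenated along the channel axis. The sketch below is a hedged Keras illustration of a single module using the filter counts of the "inception (3a)" stage from Szegedy et al. (2015); it is not the repository's implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

def inception_module(x, f1, f3_reduce, f3, f5_reduce, f5, pool_proj):
    """Inception module: parallel 1x1, 3x3, 5x5 convolutions and a pooled branch,
    concatenated along the channel axis (Szegedy et al., 2015)."""
    branch1 = layers.Conv2D(f1, 1, padding="same", activation="relu")(x)

    branch3 = layers.Conv2D(f3_reduce, 1, padding="same", activation="relu")(x)
    branch3 = layers.Conv2D(f3, 3, padding="same", activation="relu")(branch3)

    branch5 = layers.Conv2D(f5_reduce, 1, padding="same", activation="relu")(x)
    branch5 = layers.Conv2D(f5, 5, padding="same", activation="relu")(branch5)

    pooled = layers.MaxPooling2D(3, strides=1, padding="same")(x)
    pooled = layers.Conv2D(pool_proj, 1, padding="same", activation="relu")(pooled)

    return layers.Concatenate(axis=-1)([branch1, branch3, branch5, pooled])

# Example: filter counts of the "inception (3a)" block from the GoogLeNet paper.
inputs = layers.Input(shape=(28, 28, 192))
outputs = inception_module(inputs, 64, 96, 128, 16, 32, 32)   # 64+128+32+32 = 256 channels
model = tf.keras.Model(inputs, outputs)
```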

References

Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. "Deep Residual Learning for Image Recognition." arXiv:1512.03385, 2015.
Christian Szegedy, Wei Liu, Yangqing Jia, et al. "Going Deeper with Convolutions." CVPR, 2015.

License: Apache License 2.0
