btgraham / Batchwise-Dropout

Run fully connected artificial neural networks with dropout applied (mini)batchwise, rather than samplewise. Given two hidden layers each subject to 50% dropout, only the retained 50% of input units and 50% of output units take part in the matrix multiplication between them, so the corresponding forward- and back-propagation passes are roughly 75% less work: dropped-out units are simply never calculated.
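The sketch below (not taken from this repository) illustrates the idea for a single fully connected layer; the function names, the row-major dense layout, and the explicit index lists are assumptions for illustration only. Because the same dropout mask is shared by every sample in the minibatch, the inner loops touch only the kept rows and columns of the weight matrix, which with 50% dropout on both sides is a quarter of the full product.

    // Minimal batchwise-dropout forward pass sketch (illustrative, not the repo's code).
    #include <algorithm>
    #include <cstdlib>
    #include <vector>

    // Pick a random subset of n units to keep, each with keep-probability p.
    std::vector<int> sampleKeptUnits(int n, float p) {
      std::vector<int> kept;
      for (int i = 0; i < n; ++i)
        if (std::rand() / (RAND_MAX + 1.0) < p) kept.push_back(i);
      return kept;
    }

    // Forward pass for one minibatch, restricted to the kept units.
    // in:  batchSize x nIn   (row-major)
    // W:   nIn x nOut        (row-major)
    // out: batchSize x nOut  (row-major; dropped units stay zero)
    void forwardBatchwiseDropout(const std::vector<float>& in,
                                 const std::vector<float>& W,
                                 std::vector<float>& out,
                                 int batchSize, int nIn, int nOut,
                                 const std::vector<int>& keptIn,
                                 const std::vector<int>& keptOut) {
      std::fill(out.begin(), out.end(), 0.0f);
      for (int b = 0; b < batchSize; ++b)
        for (int i : keptIn)            // same mask for every sample in the batch
          for (int j : keptOut)
            out[b * nOut + j] += in[b * nIn + i] * W[i * nOut + j];
    }

In samplewise dropout each sample has its own mask, so the full matrix product must still be computed and the mask applied afterwards; sharing the mask across the minibatch is what allows the smaller submatrix multiplication above.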

Batchwise Dropout
Benjamin Graham, University of Warwick, 2015
License: GPLv3

If you use this software, please tell me what you are using it for (b.graham@warwick.ac.uk).

Run "make dataset" for dataset in the list { mnist, cifar10, artificial }


Languages

C++ 82.5%, C 10.9%, Cuda 3.6%, Python 2.0%, Makefile 0.9%