Compute model uncertainty
huevosabio opened this issue
Theory here: http://mlg.eng.cam.ac.uk/yarin/blog_2248.html
It is possible to estimate the uncertainty of a deep learning model by using dropout both during training and at prediction time. The idea is that to estimate the uncertainty of an LSTM network, we sample the network T times with dropout active (i.e., on each forward pass, a random subset of nodes is dropped).
This task is to extend the current approach to one that includes uncertainty. Specifically, the multistep prediction should return both the mean and the standard deviation of the T sampled multistep predictions.
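As a rough illustration, a minimal sketch of what this could look like with tf.keras (the function name `mc_multistep` and the sample count are placeholders, not existing code): passing `training=True` keeps dropout active at prediction time, and the mean and standard deviation are computed across the stochastic forward passes.

```python
import numpy as np
import tensorflow as tf

def mc_multistep(model, x, n_samples=100):
    """Monte Carlo dropout: run n_samples stochastic forward passes
    (dropout kept active via training=True) and return the per-step
    mean and standard deviation of the predictions."""
    # Each call re-samples the dropout masks, giving a different prediction.
    samples = np.stack(
        [model(x, training=True).numpy() for _ in range(n_samples)]
    )
    return samples.mean(axis=0), samples.std(axis=0)

# Example usage (assuming `model` and `x_window` already exist):
# mean_pred, std_pred = mc_multistep(model, x_window, n_samples=50)
```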
Relevant links:
yaringal/BayesianRNN#3
"Alternately, dropout can be applied to the input and recurrent connections of the memory units with the LSTM precisely and separately." - from http://machinelearningmastery.com/sequence-classification-lstm-recurrent-neural-networks-python-keras/