TensorFlow implementation of the paper Marginalized Denoising Auto-encoders for Nonlinear Representations (mDA, ICML 2014). Standard denoising techniques corrupt each training sample explicitly, which lengthens training and raises computational demands. mDA addresses this by implicitly denoising the raw input via marginalization: it is effectively trained on infinitely many corrupted copies of the data without ever generating them explicitly. Related marginalization approaches strip away either the nonlinearity or the latent representation; mDA retains both, and is therefore a generalization of those works.
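To make the marginalization idea concrete, here is a minimal NumPy sketch (not code from this repository) of the special case where the expectation over corruption has a closed form: a linear map with squared reconstruction loss under independent dropout corruption. The names `W`, `p`, and the toy dimensions are illustrative assumptions; the paper's contribution is extending this idea to nonlinear mappings via a Taylor approximation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, p = 5, 0.3          # toy input dimension and dropout (corruption) probability

x = rng.normal(size=d)            # one uncorrupted input
W = 0.1 * rng.normal(size=(d, d)) # toy reconstruction weights

# Corruption model: each feature is zeroed independently with probability p.
mu = (1 - p) * x                  # E[x_tilde]
var = p * (1 - p) * x ** 2        # Var[x_tilde_i] (independent per feature)

# Marginalized squared loss, exact for a linear map:
# E||W x_tilde - x||^2 = ||W mu - x||^2 + sum_i Var[x_tilde_i] * ||W[:, i]||^2
marginal_loss = np.sum((W @ mu - x) ** 2) + np.sum(var * np.sum(W ** 2, axis=0))

# Monte Carlo check: average the loss over many explicitly corrupted copies,
# which is what an ordinary denoising autoencoder approximates by sampling.
n = 200_000
mask = rng.random((n, d)) > p     # n independent dropout masks
xt = mask * x                     # n corrupted versions of x
mc_loss = np.mean(np.sum((xt @ W.T - x) ** 2, axis=1))
```

The closed-form `marginal_loss` matches the Monte Carlo average without drawing a single corrupted sample, which is why mDA avoids the training-time cost of explicit corruption.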
- Python 2.7
- TensorFlow
- NumPy
To train the demo model:

```shell
python mdA.py
```
Filters of the first layer during training:

The filters improve continuously, learning increasingly specialized feature extractors.
- Chen, Minmin, et al. "Marginalized denoising auto-encoders for nonlinear representations." International Conference on Machine Learning. 2014. [Paper]
- Vincent, Pascal, et al. "Extracting and composing robust features with denoising autoencoders." Proceedings of the 25th international conference on Machine learning. ACM, 2008. [Paper]