SETIADEEPANSHU / PaperImplementations

PyTorch implementations of many papers

DEEP LEARNING PAPERS

  • This repository contains implementations of various deep learning papers
  • Models have only been trained briefly and would give better results with longer training; the focus was on understanding the papers rather than on benchmark results
  • An index of the implemented papers, with citations and links, follows

Note: All code is written in PyTorch.

INDEX

[1] Alex Net

  • Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. "ImageNet classification with deep convolutional neural networks." Advances in Neural Information Processing Systems. 2012. Paper
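
A minimal usage sketch (assuming torchvision is available; this is not the repo's notebook code): instantiate the torchvision AlexNet definition and run a forward pass on a dummy ImageNet-sized batch.

```python
import torch
from torchvision.models import alexnet

model = alexnet()                    # randomly initialised AlexNet
x = torch.randn(1, 3, 224, 224)      # dummy 224x224 RGB image
logits = model(x)                    # shape: (1, 1000)
```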

[2] VGG Net

  • Simonyan, Karen, and Andrew Zisserman. "Very deep convolutional networks for large-scale image recognition." arXiv preprint arXiv:1409.1556 (2014). Paper
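
The key design choice in VGG is stacking 3x3 convolutions instead of larger kernels; a minimal illustrative block (channel counts are placeholders, not the repo's code):

```python
import torch.nn as nn

def vgg_block(in_ch, out_ch, n_convs):
    """A VGG stage: n_convs 3x3 conv+ReLU layers followed by 2x2 max pooling."""
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

stage1 = vgg_block(3, 64, 2)   # first VGG-16 stage: two 3x3 convs, then pooling
```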

[3] GoogLe Net

  • Szegedy, Christian, et al. "Going deeper with convolutions." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015. Paper
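
The building block of GoogLeNet is the Inception module: parallel 1x1, 3x3, 5x5 and pooling branches concatenated along the channel axis. A rough sketch with illustrative channel counts (not taken from the repo):

```python
import torch
import torch.nn as nn

class Inception(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 64, 1)                                   # 1x1 branch
        self.b2 = nn.Sequential(nn.Conv2d(in_ch, 96, 1), nn.ReLU(True),
                                nn.Conv2d(96, 128, 3, padding=1))           # 1x1 -> 3x3
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, 16, 1), nn.ReLU(True),
                                nn.Conv2d(16, 32, 5, padding=2))            # 1x1 -> 5x5
        self.b4 = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, 32, 1))                    # pool -> 1x1

    def forward(self, x):
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)

out = Inception(192)(torch.randn(1, 192, 28, 28))   # -> (1, 256, 28, 28)
```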

[4] Dropout (Just notes)

  • Srivastava, Nitish, et al. "Dropout: a simple way to prevent neural networks from overfitting." Journal of Machine Learning Research 15.1 (2014): 1929-1958. Paper
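
Since this entry is notes only, here is the two-line PyTorch illustration of the idea: dropout is active in training mode and becomes a no-op at evaluation time.

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(4)
print(drop(x))          # training mode: ~half the entries zeroed, the rest scaled by 1/(1-p)
print(drop.eval()(x))   # eval mode: identity, tensor([1., 1., 1., 1.])
```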

[5] Mobile Net

  • MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications (2017), Andrew G. Howard et al. Paper
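
MobileNet's core operation is the depthwise-separable convolution: a per-channel (depthwise) 3x3 convolution followed by a 1x1 pointwise convolution. A sketch with placeholder channel counts:

```python
import torch.nn as nn

def depthwise_separable(in_ch, out_ch, stride=1):
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1, groups=in_ch, bias=False),
        nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),       # depthwise 3x3
        nn.Conv2d(in_ch, out_ch, 1, bias=False),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),      # pointwise 1x1
    )
```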

[6] Inceptionism

  • Google Deep Dream Link
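
A hedged sketch of the DeepDream idea (gradient ascent on the input image to amplify the activations of a chosen layer); the layer choice, step size and iteration count are arbitrary, not the repo's:

```python
import torch
import torchvision.models as models

model = models.googlenet().eval()        # randomly initialised here; use pretrained weights in practice
img = torch.rand(1, 3, 224, 224, requires_grad=True)

acts = {}                                # capture an intermediate activation with a forward hook
model.inception4c.register_forward_hook(lambda m, i, o: acts.update(out=o))

for _ in range(20):
    model(img)
    acts["out"].norm().backward()        # push the image to magnify this layer's response
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
```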

[7] DC GAN

  • Radford, A., Metz, L., & Chintala, S. (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434. Paper
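
A compressed sketch of a DCGAN-style generator (transposed convolutions with batch norm and ReLU, Tanh output) mapping a 100-dim latent vector to a 32x32 image; layer sizes are illustrative, not the repo's:

```python
import torch
import torch.nn as nn

G = nn.Sequential(
    nn.ConvTranspose2d(100, 256, 4, 1, 0, bias=False), nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False), nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),  nn.BatchNorm2d(64),  nn.ReLU(True),
    nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False),    nn.Tanh(),
)
fake = G(torch.randn(8, 100, 1, 1))   # -> (8, 3, 32, 32)
```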

[8] Spatial Transformer Networks

  • Jaderberg, M., Simonyan, K., & Zisserman, A. (2015). Spatial transformer networks. In Advances in neural information processing systems (pp. 2017-2025). Paper
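
The differentiable warp at the heart of an STN turns a predicted 2x3 affine matrix into a sampling grid and resamples the input; here theta is fixed to the identity for illustration:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 28, 28)
theta = torch.tensor([[[1., 0., 0.],
                       [0., 1., 0.]]])               # (N, 2, 3) affine parameters
grid = F.affine_grid(theta, x.size(), align_corners=False)
warped = F.grid_sample(x, grid, align_corners=False)
```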

[9] Squeeze Net

  • SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size (2016), F. Iandola et al. Paper
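
SqueezeNet is built from Fire modules: a 1x1 "squeeze" layer feeding parallel 1x1 and 3x3 "expand" layers whose outputs are concatenated. A sketch using the fire2 channel sizes from the paper (not necessarily the repo's code):

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    def __init__(self, in_ch, squeeze, expand):
        super().__init__()
        self.squeeze = nn.Sequential(nn.Conv2d(in_ch, squeeze, 1), nn.ReLU(inplace=True))
        self.e1 = nn.Conv2d(squeeze, expand, 1)
        self.e3 = nn.Conv2d(squeeze, expand, 3, padding=1)

    def forward(self, x):
        s = self.squeeze(x)
        return torch.relu(torch.cat([self.e1(s), self.e3(s)], dim=1))

out = Fire(96, 16, 64)(torch.randn(1, 96, 55, 55))   # -> (1, 128, 55, 55)
```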

[10] VAE (Auto-Encoding Variational Bayes)

  • Kingma, D. P., & Welling, M. (2013). Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. Paper
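
The two pieces this paper contributes can be sketched in a few lines: the reparameterization trick and the ELBO loss (reconstruction term plus KL divergence). Encoder and decoder are omitted; `mu`, `logvar`, `x`, `x_hat` are placeholders:

```python
import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)
    return mu + eps * std                    # z = mu + sigma * eps

def vae_loss(x_hat, x, mu, logvar):
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kld
```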

[11] SRCNN

  • Dong, C., Loy, C. C., He, K., & Tang, X. (2014, September). Learning a deep convolutional network for image super-resolution. In European conference on computer vision (pp. 184-199). Springer, Cham. Paper
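
SRCNN is small enough to sketch in full: three convolutions (9-1-5 kernels, as in the paper) applied to a bicubically upscaled single-channel input. This is a sketch, not the repo's notebook:

```python
import torch.nn as nn

srcnn = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),  # patch extraction
    nn.Conv2d(64, 32, kernel_size=1),           nn.ReLU(inplace=True),  # non-linear mapping
    nn.Conv2d(32, 1, kernel_size=5, padding=2),                         # reconstruction
)
```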

[12] WGAN

  • Arjovsky, M., Chintala, S., & Bottou, L. (2017). Wasserstein GAN. arXiv preprint arXiv:1701.07875. Paper
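
The WGAN training signal in a few lines: the critic maximizes the gap between its mean scores on real and generated samples, and its weights are clipped to roughly enforce the Lipschitz constraint. `critic`, `real`, and `fake` are placeholders:

```python
import torch

def critic_loss(critic, real, fake):
    # minimizing this maximizes E[critic(real)] - E[critic(fake)]
    return -(critic(real).mean() - critic(fake).mean())

def clip_weights(critic, clip=0.01):
    with torch.no_grad():
        for p in critic.parameters():
            p.clamp_(-clip, clip)
```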

[13] One cycle (Just notes)

  • Smith, L. N. (2017, March). Cyclical learning rates for training neural networks. In 2017 IEEE Winter Conference on Applications of Computer Vision (WACV) (pp. 464-472). IEEE. Paper
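
The cyclical learning-rate schedule from this paper is available in PyTorch as `torch.optim.lr_scheduler.CyclicLR`; a minimal sketch with a placeholder model:

```python
import torch

model = torch.nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
sched = torch.optim.lr_scheduler.CyclicLR(opt, base_lr=1e-4, max_lr=1e-2,
                                          step_size_up=2000)
# inside the training loop: opt.step() then sched.step()
```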

[14] A disciplined approach to neural network hyper-parameters (Just notes)

  • Smith, L. N. (2018). A disciplined approach to neural network hyper-parameters: Part 1--learning rate, batch size, momentum, and weight decay. arXiv preprint arXiv:1803.09820. Paper
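
The 1cycle policy discussed in these notes is available in PyTorch as `torch.optim.lr_scheduler.OneCycleLR` (which also cycles momentum); a minimal sketch with placeholder epoch and step counts:

```python
import torch

model = torch.nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
sched = torch.optim.lr_scheduler.OneCycleLR(opt, max_lr=0.1,
                                            epochs=10, steps_per_epoch=100)
# call sched.step() after every optimizer step
```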

[15] Class Imbalance Problem (Just notes)

  • Buda, M., Maki, A., & Mazurowski, M. A. (2018). A systematic study of the class imbalance problem in convolutional neural networks. Neural Networks, 106, 249-259. Paper
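
Two standard PyTorch-level mitigations studied in this line of work: class weights in the loss, and oversampling the minority class with a weighted sampler. The class counts below are a toy two-class example:

```python
import torch
from torch.utils.data import WeightedRandomSampler

class_counts = torch.tensor([900., 100.])           # imbalanced toy dataset
class_weights = class_counts.sum() / class_counts   # inverse-frequency weights
criterion = torch.nn.CrossEntropyLoss(weight=class_weights)

labels = torch.cat([torch.zeros(900, dtype=torch.long), torch.ones(100, dtype=torch.long)])
sampler = WeightedRandomSampler(class_weights[labels], num_samples=len(labels), replacement=True)
# pass `sampler=sampler` to the DataLoader instead of shuffle=True
```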

[16] Perceptual Loss (For super resolution)
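
A minimal sketch of a perceptual (feature-space) loss for super-resolution: compare activations of a frozen VGG network instead of raw pixels. The layer cutoff is an arbitrary choice, and in practice the VGG would be loaded with pretrained ImageNet weights:

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

features = vgg16().features[:16].eval()   # up to relu3_3; randomly initialised here
for p in features.parameters():
    p.requires_grad_(False)

def perceptual_loss(sr, hr):
    """MSE between VGG feature maps of the super-resolved and ground-truth images."""
    return F.mse_loss(features(sr), features(hr))
```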

[17] Semantic segmentation DeepLab

  • Chen, L. C., Papandreou, G., Kokkinos, I., Murphy, K., & Yuille, A. L. (2017). DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4), 834-848. Paper
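
Atrous (dilated) convolution, the key ingredient of DeepLab, is just the `dilation` argument of `nn.Conv2d`; a rough ASPP-style sketch with placeholder channel counts and DeepLab-v2-style sum fusion of the parallel branches:

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        )

    def forward(self, x):
        return sum(b(x) for b in self.branches)   # fuse parallel dilated branches

out = ASPP(512, 21)(torch.randn(1, 512, 32, 32))   # 21 PASCAL VOC classes
```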

[18] Neural Fabrics

  • Saxena, S., & Verbeek, J. (2016). Convolutional neural fabrics. In Advances in Neural Information Processing Systems (pp. 4053-4061).

TODO

  • Spatial pyramid pooling
  • SAGAN
  • Turing paper
