The following is a list of recommended papers in several fields, including sampling, generative models, and inference. Please email Zhihang Xu (xuzhh@shanghaitech.edu.cn) if you have any suggestions.
- Machine Learning: A Probabilistic Perspective, Chapters 23 and 24.
- Generalizing Hamiltonian Monte Carlo with Neural Networks, Daniel Levy, Matthew D. Hoffman and Jascha Sohl-Dickstein, arXiv preprint arXiv:1711.09268, 2017. [Code].
- A-NICE-MC: Adversarial Training for MCMC, Jiaming Song, Shengjia Zhao and Stefano Ermon, 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. [Website]. [Code].
- Learning Deep Latent Gaussian Models with Markov Chain Monte Carlo, Matthew D. Hoffman, Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017.
- Auxiliary Variational MCMC, Raza Habib and David Barber, published as a conference paper at ICLR 2019. [Code].
- Pattern Recognition and Machine Learning, Sections 10.1 and 10.2.
- Machine Learning: A Probabilistic Perspective, Chapters 21 and 22.
- Markov Chain Monte Carlo and Variational Inference: Bridging the Gap, Tim Salimans, Diederik P. Kingma and Max Welling, Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 2015. [Lecture slides].
- Auto-Encoding Variational Bayes, Diederik P. Kingma and Max Welling, The International Conference on Learning Representations (ICLR), Banff, 2014. [Video]. [Slides].
- Hamiltonian Variational Auto-Encoder, Anthony L. Caterini, Arnaud Doucet and Dino Sejdinovic, 32nd Conference on Neural Information Processing Systems (NIPS 2018). [Code]. [Video].
- Stein Variational Gradient Descent: A General Purpose Bayesian Inference Algorithm, Qiang Liu and Dilin Wang, 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. [Code].
- Coupling the Reduced-Order Model and the Generative Model for an Importance Sampling Estimator, Xiaoliang Wan and Shuangqing Wei, arXiv preprint arXiv:1901.07977, 2019.
- Neural Importance Sampling, Thomas Müller, Brian McWilliams, Fabrice Rousselle, Markus Gross and Jan Novák.
Apart from the VAE, the GAN is another typical methodology for generating samples: it draws from an estimated (implicit) distribution that, when training succeeds, approximates the target distribution. Some relevant papers:
- Generative Adversarial Nets, Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville and Yoshua Bengio, Advances in Neural Information Processing Systems (NIPS), 2014. [Code].
- Conditional Generative Adversarial Nets, Mehdi Mirza and Simon Osindero, arXiv preprint arXiv:1411.1784, 2014.
- InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets, Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever and Pieter Abbeel, 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
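As a quick illustration of the minimax objective in the Goodfellow et al. paper above: for a fixed generator density p_g, the optimal discriminator is D*(x) = p_data(x) / (p_data(x) + p_g(x)), and the resulting value function equals -log 4 + 2·JSD(p_data ‖ p_g), so it is minimized exactly when the generator matches the data distribution. A minimal numerical sketch (the 1-D Gaussian densities and the grid quadrature are illustrative choices of ours, not taken from any of the papers):

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Density of a 1-D Gaussian N(mu, sigma^2)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def gan_value(mu_g, sigma_g, mu_d=0.0, sigma_d=1.0):
    """E_data[log D*] + E_g[log(1 - D*)], with the optimal discriminator
    D*(x) = p_data(x) / (p_data(x) + p_g(x)), by simple grid quadrature."""
    x = np.linspace(-10.0, 10.0, 20001)
    dx = x[1] - x[0]
    p_data = gaussian_pdf(x, mu_d, sigma_d)
    p_g = gaussian_pdf(x, mu_g, sigma_g)
    d_star = p_data / (p_data + p_g)
    integrand = p_data * np.log(d_star) + p_g * np.log(1.0 - d_star)
    return np.sum(integrand) * dx

# When the generator matches the data, D* = 1/2 everywhere and the
# value attains its minimum -log 4 (= 2 log 1/2).
print(gan_value(0.0, 1.0))  # ≈ -log 4 ≈ -1.386
# A mismatched generator gives a larger value: the discriminator can
# partially separate the two distributions.
print(gan_value(2.0, 1.0))
```

The second call returns a value strictly above -log 4, which is the quantity the generator is trained to push back down.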
This topic focuses on density estimation (and related problems), a central topic in unsupervised learning. Combining deep neural networks with standard statistical methods has become very popular in recent years. The basic GAN model and its variants are omitted here.
- Density Estimation Using Real NVP, Laurent Dinh, Jascha Sohl-Dickstein and Samy Bengio, published as a conference paper at ICLR 2017.
- Flow-GAN: Combining Maximum Likelihood and Adversarial Learning in Generative Models, Aditya Grover, Manik Dhar and Stefano Ermon, Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
- Glow: Generative Flow with Invertible 1×1 Convolutions, Diederik P. Kingma and Prafulla Dhariwal, Advances in Neural Information Processing Systems, 2018.
- NICE: Non-linear Independent Components Estimation, Laurent Dinh, David Krueger and Yoshua Bengio, accepted as a workshop contribution at ICLR 2015.
- Efficient Monte Carlo Integration Using Boosted Decision Trees and Generative Deep Neural Networks, Joshua Bendavid, Prepared for submission to JHEP.
- Adam: A Method for Stochastic Optimization, Diederik P. Kingma and Jimmy Lei Ba, published as a conference paper at ICLR 2015.
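The NICE and Real NVP entries above are built on coupling layers. A minimal additive-coupling sketch of the NICE construction: split x into (x1, x2), set y1 = x1 and y2 = x2 + m(x1); the map is trivially invertible and its Jacobian is triangular with unit diagonal, so the determinant in the change-of-variables formula is 1. The fixed affine map standing in for the coupling network m(·) is a placeholder of ours, not the architecture from the papers:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 2))
b = rng.normal(size=2)

def m(x1):
    """Stand-in for the coupling network; any function of x1 works."""
    return np.tanh(x1 @ W + b)

def forward(x):
    # Split (n, 4) inputs into two halves; only the second half is shifted.
    x1, x2 = x[:, :2], x[:, 2:]
    return np.concatenate([x1, x2 + m(x1)], axis=1)

def inverse(y):
    # Exact inverse: y1 carries x1 unchanged, so m(y1) can be subtracted off.
    y1, y2 = y[:, :2], y[:, 2:]
    return np.concatenate([y1, y2 - m(y1)], axis=1)

x = rng.normal(size=(5, 4))
print(np.allclose(inverse(forward(x)), x))  # True: exact inversion
```

Because the Jacobian determinant is identically 1, the log-likelihood of a stack of such layers is just the base-density log-likelihood of the inverted sample, which is what makes these flows cheap to train by maximum likelihood.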
- A Practical Randomized CP Tensor Decomposition, Battaglino C, Ballard G and Kolda T G, SIAM Journal on Matrix Analysis and Applications, 39(2): 876-901, 2018.
- Generalized CP Decomposition, Hong D, Kolda T G and Duersch J A, arXiv preprint arXiv:1808.07452, 2018.
- Tensor Analyzers, Tang Y, Salakhutdinov R and Hinton G, International Conference on Machine Learning (ICML), 2013.
- Tensorizing Neural Networks, Novikov A, Podoprikhin D, Osokin A, et al., Advances in Neural Information Processing Systems (NIPS), 2015.
- Uncertainty in Neural Networks: Bayesian Ensembling, Pearce T, Zaki M, Brintrup A, et al., arXiv preprint arXiv:1810.05546, 2018.
- Deep Neural Networks Motivated by Partial Differential Equations, Lars Ruthotto and Eldad Haber, arXiv preprint arXiv:1804.04272, 2018.
- Learning Neural PDE Solvers with Convergence Guarantees, Hsieh J T, Zhao S, Eismann S, et al. [Review Website].
- Inference via Low-Dimensional Couplings, Alessio Spantini, Daniele Bigoni and Youssef Marzouk, Journal of Machine Learning Research 19 (2018) 1-71.