Approximating 1-D distributions through Generative Adversarial Networks.
python Main.py
- numpy
- scipy
- tensorflow
If the noise follows a uniform distribution on (0,1), we can think of the generator network as an approximator of the inverse cdf. Let U ~ Uniform(0,1) and let F be a strictly increasing cdf; then F^{-1}(U) has cdf F. Note that the generator can also learn a mapping other than the inverse cdf.
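The inverse-cdf (inverse transform) fact above can be checked directly with scipy, which is already a dependency. This is a minimal sketch, not code from the repo: `norm.ppf` is scipy's F^{-1} for the standard normal.

```python
import numpy as np
from scipy.stats import norm

# Inverse transform sampling: if U ~ Uniform(0,1) and F is the standard
# normal cdf, then F^{-1}(U) ~ N(0,1). norm.ppf implements F^{-1}.
rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, size=100_000)
samples = norm.ppf(u)  # F^{-1}(U)

# Sample mean and std should be close to 0 and 1.
print(round(samples.mean(), 2), round(samples.std(), 2))
```

A perfectly trained G would implement exactly this mapping from uniform noise to samples.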
Evolution of the generator G, the empirical density (histogram) and the decision boundary, where the true distribution is N(0,1):
- G (approximation of inverse cdf of standard normal)
- Decision boundary (the discriminator's probability of classifying a given point as real, i.e. as coming from the true distribution)
- Histogram (empirical pdf of the generated samples)
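The decision boundary has a known closed form: for a fixed generator, the optimal discriminator is D*(x) = p_data(x) / (p_data(x) + p_g(x)) (from the original GAN paper). A short sketch, assuming a perfectly trained generator, shows that D* then flattens to 1/2 everywhere:

```python
import numpy as np
from scipy.stats import norm

# Optimal discriminator D*(x) = p_data(x) / (p_data(x) + p_g(x)).
x = np.linspace(-3.0, 3.0, 7)
p_data = norm.pdf(x)   # true density N(0,1)
p_g = norm.pdf(x)      # assumption: the generator matches it exactly
d_star = p_data / (p_data + p_g)
print(d_star)  # every entry is 0.5
```

This is why a flat decision boundary at 0.5 in the plots indicates convergence: the discriminator can no longer tell real from generated samples.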
GANs are highly sensitive to hyperparameters, and convergence is not guaranteed.
Problems
- convergence not guaranteed
- mode collapse (the generator learns to fool the discriminator without approximating the full true distribution)
ToDos
- Minibatch discrimination (the discriminator classifies whole batches rather than single samples)
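The idea behind minibatch discrimination can be sketched in a few lines. This is a simplified stand-in, not the repo's implementation: in the simplest variant, a batch-level statistic (here the batch standard deviation) is appended to each sample as an extra feature, so the discriminator can detect collapsed batches whose samples are all nearly identical.

```python
import numpy as np

def with_batch_feature(batch):
    """Append the batch's std as an extra per-sample feature (hypothetical
    minimal variant of minibatch discrimination)."""
    batch = np.asarray(batch, dtype=float)
    spread = batch.std()                    # one scalar for the whole batch
    feat = np.full_like(batch, spread)
    return np.stack([batch, feat], axis=1)  # shape (n, 2)

real = np.random.default_rng(0).normal(size=8)
collapsed = np.zeros(8)  # a mode-collapsed generator emits (near-)identical samples
# The spread feature separates healthy batches from collapsed ones.
print(with_batch_feature(real)[0, 1] > with_batch_feature(collapsed)[0, 1])  # True
```

A collapsed batch gets a spread feature near zero, which the discriminator can learn to flag, penalizing mode collapse at the batch level rather than per sample.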
https://github.com/ericjang/genadv_tutorial
https://github.com/emsansone/GAN
http://blog.aylien.com/introduction-generative-adversarial-networks-code-tensorflow/