jaanli / variational-autoencoder

Variational autoencoder implemented in TensorFlow and PyTorch (including inverse autoregressive flow)

Home Page: https://jaan.io/what-is-variational-autoencoder-vae-tutorial/

I very much hope that you can also provide a code implementation of this paper

CXX1113 opened this issue

Many people have written code based on this paper (https://arxiv.org/pdf/1611.01144.pdf), but none of them implement the Bernoulli-variable part. According to https://arxiv.org/pdf/1611.00712.pdf, a latent variable that follows a Bernoulli distribution is a very important special case of Gumbel-Softmax, and its formula differs from that of a general categorical distribution.
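For what it's worth, the binary special case in that paper reduces to a single sigmoid over logistic noise, rather than a softmax over Gumbel noise as in the K-category case. Here is a minimal PyTorch sketch of that sampler; the function name and argument names are mine, not from either paper's code:

```python
import torch

def sample_binary_concrete(logits, temperature):
    """Reparameterized sample from the binary Concrete (relaxed Bernoulli)
    distribution of Maddison et al., https://arxiv.org/abs/1611.00712.

    logits      -- log-odds alpha of the underlying Bernoulli variable
    temperature -- relaxation temperature lambda > 0
    """
    # Clamp away from {0, 1} so the logs below stay finite.
    u = torch.rand_like(logits).clamp(1e-6, 1.0 - 1e-6)
    # Logistic noise via the inverse-CDF trick: log(u) - log(1 - u).
    noise = torch.log(u) - torch.log1p(-u)
    # The binary special case: a single sigmoid in place of a softmax
    # over K categories.
    return torch.sigmoid((logits + noise) / temperature)

# PyTorch also ships this distribution directly:
# torch.distributions.RelaxedBernoulli(temperature, logits=logits).rsample()
```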

I think the VAE you implement is very authoritative, because you insist on the original form of the ELBO and only sum at the end, rather than taking the empirical average. In https://arxiv.org/pdf/1611.00712.pdf the authors likewise say the ELBO should be written in its expectation form, and that splitting out an analytic KL divergence term is actually unreasonable; I think you and they see it the same way (a sketch of the estimator I mean follows below). So I very much hope that, if you have time, you can also provide a code implementation of that paper (https://arxiv.org/pdf/1611.00712.pdf).
Thanks, :)
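If I am reading the comment right, the estimator being praised is the single-sample Monte Carlo form of the ELBO, evaluated pointwise and summed over the batch, with no analytic KL term. A sketch of that, where `encoder` and `decoder` are hypothetical callables returning `torch.distributions` objects (not names from this repo):

```python
import torch
from torch.distributions import Normal

def elbo_single_sample(x, encoder, decoder):
    """Single-sample Monte Carlo ELBO in its expectation form,
    E_q[log p(x|z) + log p(z) - log q(z|x)], summed over the batch;
    the KL divergence is never computed analytically.
    """
    qz_x = encoder(x)                                     # q(z|x)
    z = qz_x.rsample()                                    # reparameterized z ~ q(z|x)
    pz = Normal(torch.zeros_like(z), torch.ones_like(z))  # standard normal prior p(z)
    log_px_z = decoder(z).log_prob(x).sum(-1)             # log p(x|z)
    log_pz = pz.log_prob(z).sum(-1)                       # log p(z)
    log_qz_x = qz_x.log_prob(z).sum(-1)                   # log q(z|x)
    return (log_px_z + log_pz - log_qz_x).sum()           # sum, not average
```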

Thanks for the kind words @alanlisten!

You can find an implementation of that paper here: https://github.com/ericjang/gumbel-softmax

Hope that helps.

Yes, I know that one. But the paper I am talking about is https://arxiv.org/pdf/1611.00712.pdf.
If you look closely, you will find that none of the code on GitHub is based on this paper.

Ah OK, I think the idea is similar.