Naresh1318 / Adversarial_Autoencoder

A wizard's guide to Adversarial Autoencoders

Trainable variables for the generator optimizer

ankur-manikandan opened this issue

Hi Naresh,

Really appreciate you taking the time to make the AAE tutorial. It is a great read!

I have a question regarding the implementation of generator_optimizer in the code. When I print en_var, I get the following list of variables:

```
e_dense_1/weights:0
e_dense_1/bias:0
e_dense_2/weights:0
e_dense_2/bias:0
e_latent_variable/weights:0
e_latent_variable/bias:0
d_dense_1/weights:0
d_dense_1/bias:0
d_dense_2/weights:0
d_dense_2/bias:0
```
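
To illustrate why I ask: if en_var is collected by filtering tf.trainable_variables() on the substring `'e_'` (I'm only guessing at the exact filter), the decoder names above would match as well, since `dense_` itself contains `e_`. A quick plain-Python check with the printed names:

```python
# Plain-Python sketch of the filtering, using the variable names printed above.
# The substring filter is my assumption about how en_var might be built,
# not necessarily the repo's exact code.
names = [
    "e_dense_1/weights:0", "e_dense_1/bias:0",
    "e_dense_2/weights:0", "e_dense_2/bias:0",
    "e_latent_variable/weights:0", "e_latent_variable/bias:0",
    "d_dense_1/weights:0", "d_dense_1/bias:0",
    "d_dense_2/weights:0", "d_dense_2/bias:0",
]

# Substring match: 'd_dense_1' also contains 'e_' (inside 'dense_'),
# so the decoder variables slip into the list.
loose = [n for n in names if "e_" in n]

# Prefix match keeps only the encoder's variables.
strict = [n for n in names if n.startswith("e_")]

print(loose)   # includes the d_dense_* entries
print(strict)  # only e_dense_* and e_latent_variable
```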

In your post, you mention that:

We’ll backprop only through the encoder weights, which causes the encoder to learn the required distribution and produce output which’ll have that distribution.

Do the decoder weights get updated as well?
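
For reference, my understanding is that a TF 1.x optimizer only updates the variables handed to it via var_list, so the answer probably comes down to which variables end up in that list. Here is a small self-contained sketch of what I mean; the `dense` helper, layer sizes, and the stand-in loss are hypothetical, only the var_list behaviour is the point:

```python
import tensorflow as tf


def dense(x, units, scope):
    # Tiny dense layer whose variable names follow the printout above.
    with tf.variable_scope(scope):
        w = tf.get_variable("weights", [int(x.get_shape()[-1]), units])
        b = tf.get_variable("bias", [units], initializer=tf.zeros_initializer())
        return tf.matmul(x, w) + b


x = tf.placeholder(tf.float32, [None, 4])
z = dense(dense(x, 8, "e_dense_1"), 2, "e_latent_variable")   # encoder
recon = dense(dense(z, 8, "d_dense_1"), 4, "d_dense_2")       # decoder

# Stand-in loss, just to build a train op for the example.
generator_loss = tf.reduce_mean(tf.square(recon - x))

# Only the variables passed through var_list receive gradient updates from
# this train op; everything else in the graph is left untouched by it.
en_var = [v for v in tf.trainable_variables() if v.name.startswith("e_")]
generator_optimizer = tf.train.AdamOptimizer(1e-3).minimize(
    generator_loss, var_list=en_var)
```

So if en_var also contains the d_dense_* variables, the decoder weights would receive updates from the generator step too, which is why I wanted to check.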