Using Gradient reversal for AAE
ssakhavi opened this issue · comments
Hi
For the AAE,
I noticed that instead of using a gradient reversal layer, you backpropagate the loss computed as if the fake samples were real.
Is this correct?
Good question! I originally took the adversarial training bit from soumith/dcgan.torch, so I'd hope that it is fine. It actually corresponds to a small change to the objective which is explained in the original GAN paper (search for "fixed point"). Whether or not this is the best solution is still a matter of active research.
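To illustrate why labelling fakes as real is preferred over reversing the discriminator's gradient, here is a small sketch (my own illustration, not code from this repo) comparing the two generator gradients for a single logistic discriminator output. Gradient reversal on the discriminator loss amounts to minimising log(1 - D(G(z))), whose gradient vanishes when the discriminator easily rejects the fake; labelling fakes as real minimises -log(D(G(z))), which keeps a strong gradient in exactly that regime:

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

# Discriminator logit on a fake sample; very negative means
# the discriminator confidently rejects the fake.
s = -4.0
d = sigmoid(s)  # D(G(z)), close to 0 here

# Gradient reversal / saturating objective: minimise log(1 - D).
# dL/ds = -D, which vanishes when the fake is obvious.
grad_saturating = -d

# Labelling fakes as real: minimise -log(D).
# dL/ds = D - 1, which stays near -1 when the fake is obvious.
grad_nonsaturating = d - 1.0

print(grad_saturating, grad_nonsaturating)
```

Both gradients point the same way (push the logit up, i.e. fool the discriminator), but the non-saturating version has magnitude near 1 while the reversed one is nearly zero early in training, which is why the swap tends to train better in practice.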
If you do notice anything strange with the maths then please let me know, as I hope this is a useful resource for people, both theoretically and practically :)