tkwoo / anogan-keras

Unsupervised anomaly detection with a generative model, Keras implementation


Loss Function for Generator and Discriminator

nishanthballal-9 opened this issue · comments

Why are we using 'mse' as the loss function for both the generator and the discriminator? Shouldn't we use 'binary_crossentropy' when compiling the models?

Another doubt: what is the reason for using Conv2DTranspose layers instead of UpSampling layers?

Of course, you can use cross-entropy (as in the original DCGAN).
I used mse because of LSGAN (Least Squares GAN); the paper argues that a least-squares loss makes training more stable.
Please check the LSGAN paper: https://arxiv.org/abs/1611.04076

From my own experience, Conv2DTranspose works better than upsampling. I think the reason is that a transposed convolution has trainable weights and adds more non-linearity, while upsampling is not trainable... well, I am not sure of the exact reason.
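To make the trainable/non-trainable distinction concrete, here is a small NumPy sketch (an illustration under my own naming, not the repo's code): nearest-neighbor upsampling is a fixed operation, while a stride-2 transposed convolution does the same resolution doubling through a kernel whose weights could be learned.

```python
import numpy as np

def upsample_nearest(x):
    # UpSampling2D-style nearest-neighbor: fixed rule, zero trainable parameters.
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def conv2d_transpose(x, kernel):
    # Minimal stride-2 transposed convolution with a 2x2 kernel:
    # each input pixel "paints" a weighted 2x2 patch into the output.
    h, w = x.shape
    out = np.zeros((h * 2, w * 2))
    for i in range(h):
        for j in range(w):
            out[2 * i:2 * i + 2, 2 * j:2 * j + 2] += x[i, j] * kernel
    return out

x = np.array([[1.0, 2.0],
              [3.0, 4.0]])

print(upsample_nearest(x))
# With an all-ones kernel the transposed conv reproduces nearest upsampling;
# in a GAN the kernel weights are learned by backprop instead of being fixed.
print(conv2d_transpose(x, np.ones((2, 2))))
```

So a Conv2DTranspose layer can learn nearest-neighbor upsampling as one special case, plus many other interpolation patterns, which is one way to read the "more expressive than upsampling" intuition above.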