Zardinality / WGAN-tensorflow

a tensorflow implementation of WGAN


Should I add a tanh as the activation function for the discriminator's output?

WeiJenLee opened this issue · comments

Hi,

I found that both my d_loss and g_loss grow extremely large in magnitude, with a negative sign.
According to the paper, the discriminator should maximize the EM distance between fake and real data.
But this loss function made my g_loss extremely large and my d_loss extremely small, so both of them ended up at very small (large negative) values. (d_loss = EMD(fake) - EMD(real), g_loss = -EMD(fake))
And it seems the generator didn't decrease the EMD between the real and fake data.
So I'm wondering: should I make all the outputs from the discriminator positive?
Or maybe add a tanh in the last layer of the discriminator?
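For reference, here is a minimal TF1-style sketch of the loss formulation described in the question (toy one-layer networks; all names and shapes are placeholders, not the repo's actual code):

```python
import tensorflow as tf

def generator(z):
    # Toy generator: a single dense layer (placeholder architecture).
    with tf.variable_scope("generator", reuse=tf.AUTO_REUSE):
        return tf.layers.dense(z, 784)

def discriminator(x):
    # Toy critic: a single dense layer producing one unbounded scalar.
    # Note: no sigmoid/tanh on the output -- the WGAN critic is real-valued.
    with tf.variable_scope("discriminator", reuse=tf.AUTO_REUSE):
        return tf.layers.dense(x, 1)

real = tf.placeholder(tf.float32, [None, 784])
z = tf.placeholder(tf.float32, [None, 100])

d_real = discriminator(real)
d_fake = discriminator(generator(z))

# Critic minimizes E[D(fake)] - E[D(real)], i.e. maximizes the EM estimate.
d_loss = tf.reduce_mean(d_fake) - tf.reduce_mean(d_real)
# Generator minimizes -E[D(fake)].
g_loss = -tf.reduce_mean(d_fake)
```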

There is no need to restrict the discriminator's output to [-1, 1]; otherwise we would be talking about a sort of total variation (TV) loss rather than the EM distance. You might want to check your training code and see whether it updates g_loss with respect to theta_g (the generator's parameters) correctly.
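As an illustration of that check, a minimal sketch of the update ops (assuming the variable-scope names from the snippet above; the clipping threshold 0.01 and RMSProp learning rate 5e-5 are the defaults reported in the WGAN paper):

```python
# Collect variables per scope so each loss only updates its own network:
# g_loss must be minimized over theta_g (generator variables) only.
d_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="discriminator")
g_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="generator")

d_train = tf.train.RMSPropOptimizer(5e-5).minimize(d_loss, var_list=d_vars)
g_train = tf.train.RMSPropOptimizer(5e-5).minimize(g_loss, var_list=g_vars)

# The original WGAN keeps the critic roughly 1-Lipschitz by clipping its
# weights after each critic update, not by bounding its output with tanh.
clip_d = [v.assign(tf.clip_by_value(v, -0.01, 0.01)) for v in d_vars]
```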