caogang / wgan-gp

A PyTorch implementation of the paper "Improved Training of Wasserstein GANs"


WGAN-GP loss keeps growing

haonanhe opened this issue

haonanhe commented:

Hello, I've run your code on my own dataset. However, d_loss decreases from 10 (which equals lambda) to a very large negative number (around -10000), the Wasserstein distance keeps growing to the order of millions, and the gradient penalty goes from 10 to 0 and then grows to the order of thousands. I've worked on this problem for several days but still can't solve it. Can anyone help me with this?
@caogang
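
For reference, the critic loss being discussed is d_loss = E[D(fake)] - E[D(real)] + lambda * GP. Below is a minimal sketch of the gradient penalty term, following the paper (Gulrajani et al., 2017) rather than this repo's exact code; `critic` and `lambda_gp` are placeholder names, and 4D image batches are assumed:

```python
import torch
from torch import autograd

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """WGAN-GP penalty, following Eq. 3 of the paper (sketch, not the repo's code)."""
    batch_size = real.size(0)
    # One random interpolation weight per sample (assumes 4D image batches).
    eps = torch.rand(batch_size, 1, 1, 1, device=real.device)
    interp = (eps * real + (1.0 - eps) * fake.detach()).requires_grad_(True)

    scores = critic(interp)

    # d(scores)/d(interp), kept in the graph so the penalty is differentiable.
    grads = autograd.grad(
        outputs=scores,
        inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
        retain_graph=True,
        only_inputs=True,
    )[0]

    grads = grads.view(batch_size, -1)
    # Penalize deviation of the per-sample gradient norm from 1.
    return lambda_gp * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()
```

At initialization the gradient norm of an untrained critic is typically near 0, so the penalty starts near lambda * (0 - 1)^2 = lambda, which matches the reported starting value of 10.
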

Hello, I've run into the same problem. Did you find out where the issue is?

@yallien @supermarian Are you using PyTorch version >= 1.4?

@haonanhe @yallien Hello, did you solve this problem?

What about the Wasserstein distance at the start of training? Is it much smaller than lambda?
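
One way to check is to log the Wasserstein estimate and the gradient penalty separately at each critic step. A minimal sketch with a placeholder critic and toy shapes (it reuses the `gradient_penalty` sketch above; adapt the names to your training loop):

```python
import torch
from torch import nn

# Toy critic and batches purely to demonstrate the logging (hypothetical shapes).
critic = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1))
real_batch = torch.randn(8, 3, 32, 32)
fake_batch = torch.randn(8, 3, 32, 32)

d_real = critic(real_batch).mean()
d_fake = critic(fake_batch).mean()
gp = gradient_penalty(critic, real_batch, fake_batch)  # sketch from the earlier comment

w_estimate = d_real - d_fake   # Wasserstein estimate; compare this against lambda = 10
d_loss = d_fake - d_real + gp  # what is typically reported as d_loss

print(f"W-est: {w_estimate.item():.3f}  GP: {gp.item():.3f}  d_loss: {d_loss.item():.3f}")
```

If the Wasserstein estimate dwarfs the penalty term early in training, the critic scores are likely diverging before the penalty can constrain them.
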