akanimax / pro_gan_pytorch

Unofficial PyTorch implementation of the paper "Progressive Growing of GANs for Improved Quality, Stability, and Variation"

Very large d_loss

jiujing23333 opened this issue · comments

Thanks for your great work. I'm training on my own dataset using the default WGAN-GP loss, with depth 5. I notice that the d_loss is always very large, like this:

```
Epoch: 10
Elapsed: [0:16:56.978964] batch: 20 d_loss: 360.280609 g_loss: 1.696454
Elapsed: [0:17:05.162741] batch: 40 d_loss: 1318.774780 g_loss: 46.433090
Elapsed: [0:17:12.699862] batch: 60 d_loss: 369.273987 g_loss: -0.842132
Elapsed: [0:17:20.655461] batch: 80 d_loss: 687.216553 g_loss: 14.159639
Elapsed: [0:17:28.525324] batch: 100 d_loss: 1313.480713 g_loss: 34.156487
Elapsed: [0:17:36.623373] batch: 120 d_loss: 347.785248 g_loss: 4.414964
Elapsed: [0:17:44.439503] batch: 140 d_loss: 689.839966 g_loss: -9.050404
Elapsed: [0:17:51.356449] batch: 160 d_loss: 387.812683 g_loss: 8.951473
Time taken for epoch: 66.575 secs

Epoch: 11
Elapsed: [0:18:03.436184] batch: 20 d_loss: 536.160645 g_loss: 29.115032
Elapsed: [0:18:11.773171] batch: 40 d_loss: 333.525940 g_loss: 7.787774
Elapsed: [0:18:18.837069] batch: 60 d_loss: 265.996277 g_loss: 7.262208
```

I have tried tuning hyperparameters such as the learning rate, but the issue persists. The generated images look like this:
[image: generated samples]

Any suggestions? Thank you.

@jiujing23333,

I presume you are using a multi-GPU setup here.
Yes, this is currently a known issue (bug) with the package. The MinibatchStd layer is not set up to handle the data-parallel multi-GPU case: the minibatch is split across GPUs and there is no synchronization, so the proper minibatch-std values are never gathered back on the first GPU. I am working on a solution that simply moves the last layer of the discriminator out of the DataParallel block, so that everything is synchronized before the last layer's computations are performed. But this is only a temporary fix, since it makes training considerably slower and leaves the other parallel GPUs mostly idle.
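For reference, a minimal sketch of that temporary workaround (not the package's actual code; `dis_body` and `dis_final_block` are hypothetical module names) would look roughly like this:

```python
import torch
import torch.nn as nn


class SplitDiscriminator(nn.Module):
    """Run everything except the final block data-parallel, so that the
    minibatch-std statistic in the final block sees the full batch."""

    def __init__(self, dis_body: nn.Module, dis_final_block: nn.Module):
        super().__init__()
        # body of the discriminator is scattered across GPUs
        self.body = nn.DataParallel(dis_body)
        # final block (containing the minibatch-std layer) stays on one GPU
        self.final_block = dis_final_block

    def forward(self, x):
        features = self.body(x)            # outputs are gathered on the default device
        return self.final_block(features)  # full-batch statistics computed on one GPU
```

The trade-off is the one mentioned above: the final block runs on a single device, so the other GPUs sit idle for that part of the forward and backward pass.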

For now, you could either ignore the very high loss values from the discriminator (they may grow even larger 😆), since the GAN still trains fine, or run the code on only one GPU.
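If you go the single-GPU route, one simple way (just an example, assuming you control the launch environment) is to hide the other devices before any CUDA initialization:

```python
# Restrict PyTorch to a single GPU; must be set before the first CUDA call.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # expose only the first GPU

import torch
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
```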

Please feel free to ask if you have any more questions.

Also, please feel free to suggest if you have a better solution for this problem.

Cheers 🍻!

@akanimax
Thanks a lot, it really works. Now I have another question: the EMA parameter beta is set to 0 in your code, which means EMA is effectively disabled. Why?

@akanimax
I'm so sorry. I notice that the zero beta is only used for initialization...
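(For anyone reading along, here is a rough sketch of how an exponential moving average of the generator weights usually works, and why beta = 0 amounts to a plain copy at initialization. `update_ema` is a hypothetical helper, not necessarily the package's own function.)

```python
# Exponential moving average (EMA) over generator weights.
# With beta = 0 the shadow copy is simply overwritten with the current
# weights (a plain copy, which is what initialization needs); during
# training a value close to 1 (e.g. 0.999) makes the shadow a slow-moving
# average of past generator weights.
import torch

def update_ema(gen, gen_shadow, beta=0.999):
    with torch.no_grad():
        for p_shadow, p in zip(gen_shadow.parameters(), gen.parameters()):
            p_shadow.copy_(beta * p_shadow + (1.0 - beta) * p)
```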