akanimax / pro_gan_pytorch

Unofficial PyTorch implementation of the paper titled "Progressive Growing of GANs for Improved Quality, Stability, and Variation"

Reproducing the paper's results

ArtjomUEA opened this issue · comments

I've been trying to reproduce the 8.80 inception score (IS) on CIFAR-10, but I have only managed to get to 6.60. The following code looks like it uses the same hyperparameters as the ones in the paper:

# some parameters:
depth = 4
# hyper-parameters per depth (resolution)
num_epochs = [16, 32, 32, 32]
fade_ins = [50, 50, 50, 50]
batch_sizes = [16, 16, 16, 16]
latent_size = 512

The only difference is that the lambda in the loss is 10 instead of the 750 used in the paper (I also tested with 750, and it did not produce better results).
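For context, lambda here is the coefficient on the WGAN-GP gradient penalty. A minimal sketch of where it enters the loss (generic WGAN-GP, not taken from this repo; in this codebase the discriminator call would also take the current depth and fade-in alpha):

import torch

def gradient_penalty(disc, real, fake, lam=10.0):
    # interpolate between real and fake samples
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    mixed = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = disc(mixed)
    # gradient of the critic scores w.r.t. the interpolated images
    grads, = torch.autograd.grad(
        outputs=scores.sum(), inputs=mixed, create_graph=True
    )
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    # lam scales how hard the gradient norm is pushed towards 1
    return lam * ((grad_norm - 1) ** 2).mean()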

That also matches the paper's schedule of 800k * 2 images per resolution (fade-in plus stabilization): 32 epochs * 50k images (the CIFAR-10 training set) = 1.6M images.
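For reference, the hyperparameters above plug into the trainer roughly like this (a sketch based on the readme example; the pg.ProGAN class and the exact train() signature are assumptions on my part, so verify them against the current code):

import torch
import pro_gan_pytorch.PRO_GAN as pg
from torchvision import datasets, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
dataset = datasets.CIFAR10(root="./data", download=True,
                           transform=transforms.ToTensor())

pro_gan = pg.ProGAN(depth=depth, latent_size=latent_size, device=device)
pro_gan.train(
    dataset=dataset,
    epochs=num_epochs,            # one entry per resolution
    fade_in_percentage=fade_ins,  # % of each stage spent fading in new layers
    batch_sizes=batch_sizes,
)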

I am wondering if anybody has managed to reproduce the reported results using this code?

@ArtjomUEA, indeed! CIFAR-10 still seems to be out of my grasp for now. Oddly, I was able to reproduce the CelebA-HQ model from the pro-GAN paper, but not the CIFAR-10 one. My inception score also became stagnant around the same value. Perhaps the complete details of the CIFAR-10 experiment are not given in the paper. Please let me know if you manage to do it.
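For anyone comparing numbers: a minimal sketch of measuring the score with torchmetrics (my stand-in, not what the paper used; the paper relied on the original TF implementation, and scores from different implementations are not exactly comparable):

import torch
from torchmetrics.image.inception import InceptionScore

metric = InceptionScore(splits=10)
# generator samples as uint8 tensors of shape (N, 3, H, W) in [0, 255];
# random data here is only a placeholder for real generator output
fake_images = torch.randint(0, 256, (1000, 3, 32, 32), dtype=torch.uint8)
metric.update(fake_images)
is_mean, is_std = metric.compute()
print(f"IS: {is_mean:.2f} +/- {is_std:.2f}")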
Cheers 🍻!
@akanimax

@Cold-Winter commented

Hi, do you have the pre-trained model that reproduces the CelebA-HQ results from the pro-GAN paper? Could you please send me the pretrained model?

@Cold-Winter, please check out the models at the drive link in the readme, under the pretrained models section. You can use either the GAN_GEN_SHADOW model from that directory or the one under the best_model directory. Both perform almost equally well.
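In case it helps, a sketch of loading one of those checkpoints and sampling (the module path, file name, DataParallel wrapping, and forward signature are assumptions based on the readme; adjust them to the actual checkpoint you download):

import torch
import pro_gan_pytorch.PRO_GAN as pg

depth = 9  # the CelebA-HQ generator goes up to 1024x1024
gen = torch.nn.DataParallel(pg.Generator(depth=depth, latent_size=512))
# file name is illustrative; use the one from the drive link
gen.load_state_dict(torch.load("GAN_GEN_SHADOW_8.pth", map_location="cpu"))

noise = torch.randn(4, 512)
with torch.no_grad():
    # forward takes the latent vector, the current depth and the fade-in alpha
    samples = gen(noise, depth - 1, alpha=1.0)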

Best regards,
@akanimax

@Cold-Winter commented

Thanks for your quick response. Can this one reach the same inception score on the CelebA-HQ dataset?

Closing due to inactivity. Please feel free to reopen if something new pops up!