VITA-Group / TransGAN

[NeurIPS 2021] "TransGAN: Two Pure Transformers Can Make One Strong GAN, and That Can Scale Up", Yifan Jiang, Shiyu Chang, Zhangyang Wang


about test

maoshen5 opened this issue

I have experimented with a framework in which the generator is from TransGAN and the discriminator is from AutoGAN, but it doesn't seem to converge. After 320 epochs the FID is 130. What tricks did you use in your experiments? Thank you.


Yeah, training GANs is comparatively tricky. I would suggest you start from AutoGAN's repo and run its experiments first: make sure you get reasonable FIDs, and then replace AutoGAN's generator with the Transformer one. Then you will get the expected output.
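For concreteness, the swap suggested here amounts to keeping AutoGAN's training script and discriminator untouched and replacing only the generator class. A minimal sketch, assuming hypothetical module names (`autogan_models`, `transgan_models`) that stand in for the actual repo layout:

```python
import torch

# Placeholder imports -- the real module paths differ between the two repos.
from autogan_models import Discriminator          # AutoGAN's CNN discriminator
from transgan_models import TransformerGenerator  # TransGAN's transformer generator


def build_models(cfg, device):
    """Keep the working AutoGAN discriminator; swap in the transformer generator."""
    dis_net = Discriminator(cfg).to(device)
    gen_net = TransformerGenerator(cfg).to(device)
    return gen_net, dis_net
```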

OK, but isn't loading the pre-trained weights the same either way? Can we still use AutoGAN's pre-trained model after this change?

I trained AutoGAN's generator and discriminator for 180 epochs; the IS reached 8.3 and the FID reached 15. However, when I resumed from that model with the generator replaced by the Transformer and the discriminator kept as the AutoGAN model, after training from epoch 180 to 200 the IS suddenly dropped to 1.3 and the FID rose above 320. Is this normal?
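If one does warm-start from an AutoGAN checkpoint after swapping the generator, only the discriminator weights can be reused; the old CNN generator's state dict is incompatible with the transformer generator. A sketch, assuming `gen_net`/`dis_net` as built above and a hypothetical checkpoint key `dis_state_dict`:

```python
import torch

# Assumption: the checkpoint stores separate state dicts for G and D under
# keys like 'gen_state_dict' / 'dis_state_dict'; adapt to the real format.
checkpoint = torch.load("autogan_epoch180.pth", map_location="cpu")

# Reuse only the pretrained discriminator weights.
dis_net.load_state_dict(checkpoint["dis_state_dict"])

# The transformer generator starts from random init; do NOT load the old
# CNN generator weights -- the architectures (and state dicts) do not match.
```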

This is not normal. I've tried it before and the IS should be around 8.7-8.8 @maoshen5
I think you should try to tune the hyperparameters and the learning rate.
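Illustrative only (these exact values are not from the thread): a common starting point for this kind of setup is Adam with a small beta1 and a separate, often lower, learning rate for the transformer generator:

```python
import torch

# Hypothetical hyperparameters -- tune for the specific setup.
gen_optimizer = torch.optim.Adam(gen_net.parameters(), lr=1e-4, betas=(0.0, 0.99))
dis_optimizer = torch.optim.Adam(dis_net.parameters(), lr=3e-4, betas=(0.0, 0.99))
```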

OK, then do I need to stop training the AutoGAN discriminator after a certain epoch and only train the Transformer generator?

No, just follow the standard training of AutoGAN.
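"Standard training" here means the usual alternating updates in which both networks are trained every iteration, rather than freezing the discriminator. A minimal sketch with a hinge loss, assuming `gen_net`, `dis_net`, the two optimizers, and a dataloader already exist:

```python
import torch
import torch.nn.functional as F


def train_one_epoch(gen_net, dis_net, gen_optimizer, dis_optimizer,
                    dataloader, latent_dim, device):
    for real_imgs, _ in dataloader:
        real_imgs = real_imgs.to(device)
        batch_size = real_imgs.size(0)

        # --- Discriminator step: real vs. detached fakes (hinge loss) ---
        dis_optimizer.zero_grad()
        z = torch.randn(batch_size, latent_dim, device=device)
        fake_imgs = gen_net(z).detach()
        d_loss = (F.relu(1.0 - dis_net(real_imgs)).mean()
                  + F.relu(1.0 + dis_net(fake_imgs)).mean())
        d_loss.backward()
        dis_optimizer.step()

        # --- Generator step: fool the (non-frozen) discriminator ---
        gen_optimizer.zero_grad()
        z = torch.randn(batch_size, latent_dim, device=device)
        g_loss = -dis_net(gen_net(z)).mean()
        g_loss.backward()
        gen_optimizer.step()
```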

I adjusted the learning rate, but now I can only get to about 100 epochs at most: the IS is 7 and the FID is 30. After 110 epochs, the generator's loss hovers around 0, the discriminator's loss stops dropping, and the FID starts to rise again. I suspect the generator completely overpowers the discriminator after 100 epochs. When you trained to 320 epochs, were the GAN metrics still improving? Thank you.

Yeah, I had the same observation that IS/FID get worse after longer training. But the IS was > 8.0 in my case. Sorry, I don't have the script to run this now.


Hello, did you solve this problem?