sanghoon / prediction_gan

PyTorch Impl. of Prediction Optimizer (to stabilize GAN training)

CelebA experiments

sanghoon opened this issue

Notes

  • These runs are meant only to show some sample outputs. Different random seeds can lead to totally different results, so a proper comparison of the two GAN methods requires inspecting outputs from repeated trials.
  • For faster training, I used only 50k images from CelebA (resized to 64×64).

Large learning rate (0.01)

Vanilla DCGAN

[Sample grids after 2, 10, and 25 epochs: ep02_celeba_base_lr0.01, ep10_celeba_base_lr0.01, ep25_celeba_base_lr0.01]

DCGAN w/ prediction

[Sample grids after 2, 10, and 25 epochs: ep02_celeba_pred_lr0.01, ep10_celeba_pred_lr0.01, ep25_celeba_pred_lr0.01]

Medium learning rate (0.0001)

Vanilla DCGAN

[Sample grids after 2, 10, and 25 epochs: ep02_celeba_base_lr0.0001, ep10_celeba_base_lr0.0001, ep25_celeba_base_lr0.0001]

DCGAN w/ prediction

[Sample grids after 2, 10, and 25 epochs: ep02_celeba_pred_lr0.0001, ep10_celeba_pred_lr0.0001, ep25_celeba_pred_lr0.0001]

Small learning rate (0.00001)

Vanilla DCGAN

[Sample grids after 2, 10, and 25 epochs: ep02_celeba_base_lr0.00001, ep10_celeba_base_lr0.00001, ep25_celeba_base_lr0.00001]

DCGAN w/ prediction

[Sample grids after 2, 10, and 25 epochs: ep02_celeba_pred_lr0.00001, ep10_celeba_pred_lr0.00001, ep25_celeba_pred_lr0.00001]
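For context, the "DCGAN w/ prediction" runs above differ from the vanilla runs only in the look-ahead (prediction) step applied to the opponent's parameters during each update. A minimal sketch of that step, shown on plain scalar parameters rather than the repo's actual tensor parameters (names here are illustrative, not from the repo):

```python
def prediction_step(theta, theta_prev):
    """Extrapolate one update ahead: theta_bar = theta + (theta - theta_prev).

    In the prediction method, the discriminator is updated against this
    predicted copy of the generator's parameters (and vice versa),
    rather than against the current parameters.
    """
    return [2.0 * t - tp for t, tp in zip(theta, theta_prev)]

theta_prev = [1.0, -0.5]   # generator params before its last update
theta = [1.25, -0.25]      # generator params after its last update
print(prediction_step(theta, theta_prev))  # [1.5, 0.0]
```

With a small learning rate the extrapolated parameters stay close to the current ones, which is consistent with the two methods producing similar samples in the 0.00001 runs above.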