mafda / generative_adversarial_networks_101

Keras implementations of Generative Adversarial Networks. GANs, DCGAN, CGAN, CCGAN, WGAN and LSGAN models with MNIST and CIFAR-10 datasets.


Why are random labels used for training d_g?

GussailRaat opened this issue

In 03_CGAN_MNIST
(1) d_loss_real = discriminator.train_on_batch(x=[X_batch, real_labels], y=real * (1 - smooth))
(2) d_loss_fake = discriminator.train_on_batch(x=[X_fake, random_labels], y=fake)
(3) d_g_loss_batch = d_g.train_on_batch(x=[z, random_labels], y=real)

To train the discriminator, you first train on X_batch with real_labels and then on X_fake with random_labels. I think this should be real_labels instead of random_labels in equation (2).

To train the generator, in equation (3), why do you use random_labels for training d_g instead of real_labels?
Thank you.

Hi,

Thank you for your feedback.

To train a GAN, I train the discriminator and the generator in a loop as follows:

  1. Set the discriminator trainable.

  2. Train the discriminator with the real images and the images generated by the generator to classify the real and fake images (half the samples are real, and half are fake).

  3. Set the discriminator non-trainable.

  4. Train the generator as part of the GAN. We feed latent samples into the GAN, let the generator produce images, and use the discriminator to classify them (a rough sketch of this loop follows the list).
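As a minimal sketch of that loop for the unconditional case, assuming Keras models named `generator`, `discriminator`, and a combined model `gan`, plus `latent_dim`, `batch_size`, `smooth`, and `num_steps` defined elsewhere (these names are illustrative, not necessarily the notebook's exact variables):

```python
import numpy as np

for step in range(num_steps):
    # 1. Make the discriminator trainable.
    discriminator.trainable = True

    # 2. Train it on half real, half fake images.
    idx = np.random.randint(0, X_train.shape[0], batch_size)
    X_batch = X_train[idx]
    z = np.random.normal(0, 1, size=(batch_size, latent_dim))
    X_fake = generator.predict(z)

    real = np.ones((batch_size, 1))
    fake = np.zeros((batch_size, 1))
    d_loss_real = discriminator.train_on_batch(X_batch, real * (1 - smooth))
    d_loss_fake = discriminator.train_on_batch(X_fake, fake)

    # 3. Freeze the discriminator inside the combined model.
    discriminator.trainable = False

    # 4. Train the generator through the combined model: feed latent samples
    #    and label the outputs as "real" so the generator learns to fool
    #    the discriminator.
    z = np.random.normal(0, 1, size=(batch_size, latent_dim))
    d_g_loss = gan.train_on_batch(z, real)
```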

According to the author's article Conditional Generative Adversarial Nets, Generative Adversarial Nets can be extended to a conditional model if both the generator and discriminator are conditioned on some extra information y.

  • y could be any auxiliary information, such as class labels or data from other modalities.

We can perform the conditioning by feeding y into both the discriminator and the generator as an additional input layer (a sketch of this wiring follows the list below).

  • Generator: the prior input noise p(z) and y are combined in a joint hidden representation.
  • Discriminator: x and y are presented as inputs to a discriminative function.
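For example, a minimal Keras sketch of this conditioning (the layer sizes, the Embedding-based label encoding, and the flattened 28x28 images are my assumptions, not necessarily the notebook's exact architecture):

```python
from keras.layers import Input, Dense, Embedding, Flatten, Concatenate, LeakyReLU
from keras.models import Model

latent_dim, num_classes, img_dim = 100, 10, 28 * 28

# Generator: noise z and label y are combined in a joint hidden representation.
z = Input(shape=(latent_dim,))
y_g = Input(shape=(1,), dtype='int32')
y_g_emb = Flatten()(Embedding(num_classes, latent_dim)(y_g))
g_hidden = LeakyReLU(0.2)(Dense(256)(Concatenate()([z, y_g_emb])))
g_out = Dense(img_dim, activation='tanh')(g_hidden)
generator = Model([z, y_g], g_out)

# Discriminator: image x and label y are presented together to the classifier.
x = Input(shape=(img_dim,))
y_d = Input(shape=(1,), dtype='int32')
y_d_emb = Flatten()(Embedding(num_classes, img_dim)(y_d))
d_hidden = LeakyReLU(0.2)(Dense(256)(Concatenate()([x, y_d_emb])))
d_out = Dense(1, activation='sigmoid')(d_hidden)
discriminator = Model([x, y_d], d_out)
discriminator.compile(optimizer='adam', loss='binary_crossentropy')
```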

Then, to train the discriminator on real samples, the input is X_batch with real_labels, as in equation (1). To train it on fake samples, the input is X_fake with random_labels, as in equation (2), and not X_fake with real_labels: the other half of the samples must be fake, and those fake images are produced by the generator conditioned on the random labels, which is why we generate random labels and pair them with X_fake.

The same applies to the generator. I use random_labels for training d_g, as in equation (3), instead of real_labels, because I need the generator to produce images and the discriminator to classify them (see the sketch below).
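Putting equations (1)-(3) together, one CGAN training step might look like this sketch (variable names follow the thread; `latent_dim`, `num_classes`, `batch_size`, `smooth`, and the combined model `d_g` are assumed to be defined, and the exact sampling code is illustrative):

```python
import numpy as np

real = np.ones((batch_size, 1))
fake = np.zeros((batch_size, 1))

# Fake images are generated from random labels, so the discriminator is
# shown X_fake together with the same random labels that produced it.
z = np.random.normal(0, 1, size=(batch_size, latent_dim))
random_labels = np.random.randint(0, num_classes, batch_size).reshape(-1, 1)
X_fake = generator.predict([z, random_labels])

# (1) real images with their true labels
d_loss_real = discriminator.train_on_batch(x=[X_batch, real_labels], y=real * (1 - smooth))
# (2) fake images with the random labels that conditioned them
d_loss_fake = discriminator.train_on_batch(x=[X_fake, random_labels], y=fake)

# (3) generator step through the combined model d_g (discriminator frozen):
# fresh noise and random labels, targets set to "real" so the generator
# learns to fool the discriminator for whatever class it is asked for.
z = np.random.normal(0, 1, size=(batch_size, latent_dim))
random_labels = np.random.randint(0, num_classes, batch_size).reshape(-1, 1)
d_g_loss_batch = d_g.train_on_batch(x=[z, random_labels], y=real)
```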

As far as I understand, there should not be any significant difference between using real_labels or random_labels in equations (2) and (3). Since the generator learns to produce samples from a conditional input, it should generate samples of the same quality either way.

Thank you.

Thank you for your reply.