PyTorch implementation of "Precise Recovery of Latent Vectors from Generative Adversarial Networks" https://arxiv.org/abs/1702.04782.
Given a generated image G(z) with z unknown, the goal of ReverseGAN is to find a z_approx that approximates z. To achieve this, z_approx is found by minimizing MSE(G(z_approx), G(z)) through gradient descent.
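Conceptually, the recovery is an optimization over the latent vector only, with the generator weights frozen; since G is differentiable, the gradient of the loss with respect to z_approx comes directly from backpropagation. Below is a minimal sketch of this loop, assuming a pre-trained DCGAN generator netG that maps a latent vector of shape (1, nz, 1, 1) to an image; the function name, optimizer choice, and hyper-parameters are illustrative, not the repo's exact implementation.

```python
import torch

def recover_latent(netG, x_target, nz=100, steps=10000, lr=0.01, device="cpu"):
    """Recover z_approx so that G(z_approx) approximates x_target via gradient descent.
    The function name, optimizer, and hyper-parameters are illustrative assumptions."""
    netG.eval()
    # Initial guess for the latent vector; the right prior depends on how G was trained.
    z_approx = torch.randn(1, nz, 1, 1, device=device, requires_grad=True)
    optimizer = torch.optim.SGD([z_approx], lr=lr)
    criterion = torch.nn.MSELoss()

    for _ in range(steps):
        optimizer.zero_grad()
        loss = criterion(netG(z_approx), x_target)  # MSE(G(z_approx), G(z))
        loss.backward()
        optimizer.step()  # update z_approx only; netG's weights stay fixed
    return z_approx.detach()
```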
Download the "Align&Cropped Images" from http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html and unzip.
python dcgan.py --dataset=folder --dataroot=/path/to/dataset --cuda
By default, the generated images and saved models are written to the dcgan_out directory.
After training, run
python dcgan_reverse.py --clip=stochastic --netG=pre_trained/netG_epoch_10.pth --cuda
where --netG points to the model saved during training.
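The --clip=stochastic option corresponds to the paper's stochastic clipping: after each gradient step, any component of z_approx that has left the valid range (the paper assumes z drawn from [-1, 1]) is re-drawn uniformly from that interval rather than being clamped to the boundary. Here is a minimal sketch of that step; the helper name, signature, and in-place style are assumptions, not the repo's exact code.

```python
import torch

def stochastic_clip_(z_approx, bound=1.0):
    """Re-sample out-of-range components of z_approx uniformly in [-bound, bound],
    instead of pinning them to the boundary as plain clipping would.
    Helper name and in-place style are assumptions, not the repo's exact code."""
    with torch.no_grad():
        mask = z_approx.abs() > bound
        z_approx[mask] = torch.empty_like(z_approx[mask]).uniform_(-bound, bound)
    return z_approx
```

The motivation given in the paper is that plain clipping tends to leave many components stuck at the boundary, whereas re-sampling lets the optimization keep making progress on them.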
The following example uses the pre-trained model pre_trained/netG_epoch_10.pth trained on the CelebA aligned dataset.
- G(z): the generated image with the original z
- G(z_approx): the generated image with the estimated z_approx
- DCGAN implementation from the PyTorch examples.
- The author's TensorFlow implementation.