Why not use Conv2DTranspose rather than Conv2D in the generator?
lestel opened this issue · comments
The related papers all say they use Conv2DTranspose.
Yes. That is one of their key contributions.
They say to:
- use strided convolutions instead of max-pooling layers
- use Conv2DTranspose instead of upsampling
- use batch normalization layers
- use ReLU activations for intermediate layers
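For reference, the guidelines above can be sketched as a minimal Keras generator. This is not the repo's code, just a hypothetical illustration: project-and-reshape from the latent vector, then upsample with strided Conv2DTranspose layers, with BatchNormalization and ReLU between them (and tanh on the output, as in the DCGAN paper). The sizes (latent_dim=100, 28x28 single-channel output) are arbitrary assumptions.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_generator(latent_dim=100):
    """Hypothetical minimal DCGAN-style generator following the guidelines above."""
    return models.Sequential([
        layers.Input(shape=(latent_dim,)),
        # Project the latent vector and reshape into a small feature map.
        layers.Dense(7 * 7 * 128),
        layers.BatchNormalization(),
        layers.ReLU(),
        layers.Reshape((7, 7, 128)),
        # Upsample with strided Conv2DTranspose instead of UpSampling2D.
        layers.Conv2DTranspose(64, kernel_size=4, strides=2, padding="same"),
        layers.BatchNormalization(),
        layers.ReLU(),
        # Final layer: tanh activation, as recommended in the DCGAN paper.
        layers.Conv2DTranspose(1, kernel_size=4, strides=2, padding="same",
                               activation="tanh"),
    ])

generator = build_generator()
fake_images = generator(np.random.normal(size=(2, 100)).astype("float32"))
print(fake_images.shape)  # (2, 28, 28, 1)
```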
@jacobgil You've implemented none of those. So this is not a DCGAN, right?