mingyuliutw / UNIT

Unsupervised Image-to-Image Translation

THEORETICAL QUESTION ABOUT THE MODEL

GZeta95 opened this issue

Hello, first of all thank you for your work. I read the paper and tested the network with good results. However, I have a theoretical question about the model. How is it possible that, starting from the same latent code, each generator can produce two images, the translated one and the reconstructed one, while still respecting cycle consistency?
It is clear to me that each generator can reconstruct an image in its own domain, but it is not clear how two different images can be produced from the same encoding.
Is this possible because of weight sharing?

Thank you very much for your time.

@GZeta95 The two generators are not exactly the same; they differ in their bottom layers. That difference allows them to generate two different images from the same latent code.
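
For illustration, here is a minimal PyTorch-style sketch (not the repository's actual code; the class and layer names such as `SharedLatentGenerators`, `head_a`, and `head_b` are hypothetical) of the weight-sharing idea: the layers that consume the latent code are shared between the two generators, while each generator keeps its own bottom (output) layers, so the same latent code decodes into two different domain-specific images.

```python
import torch
import torch.nn as nn

class SharedLatentGenerators(nn.Module):
    """Sketch: two generators sharing their high-level layers."""
    def __init__(self, latent_ch=256, img_ch=3):
        super().__init__()
        # High-level layers: shared weights, used by both generators.
        self.shared = nn.Sequential(
            nn.ConvTranspose2d(latent_ch, 128, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Bottom (output) layers: separate per domain, so the outputs differ.
        self.head_a = nn.Sequential(
            nn.ConvTranspose2d(64, img_ch, 4, stride=2, padding=1), nn.Tanh())
        self.head_b = nn.Sequential(
            nn.ConvTranspose2d(64, img_ch, 4, stride=2, padding=1), nn.Tanh())

    def forward(self, z):
        h = self.shared(z)                      # same features for both domains
        return self.head_a(h), self.head_b(h)   # two different images

# One latent code yields two images, e.g. the reconstruction in domain A
# and the translation into domain B.
gen = SharedLatentGenerators()
z = torch.randn(1, 256, 8, 8)
img_a, img_b = gen(z)
print(img_a.shape, img_b.shape)  # both torch.Size([1, 3, 64, 64])
```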