mingyuliutw / UNIT

Unsupervised Image-to-Image Translation

different dataset length in both domains

mhusseinsh opened this issue · comments

Hello,
I would like to know how the code handles datasets of different sizes in the two domains.
Say I have 20K images in domainA and 15K in domainB.

Does this mean training uses only 15K images, or does it take the larger domain's size and repeat images from the smaller domain?
That is, if there are 20K minibatches, are images from domainB repeated to cover the 5K difference during training?
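For context, a common way to pair two domains of unequal size is to define one epoch over the larger domain and sample the smaller one with wraparound (modulo indexing), so its images repeat rather than being dropped. The sketch below is purely illustrative (class and parameter names are made up, and this is not the actual UNIT data-loading code):

```python
import random

class PairedDomains:
    """Illustrative pairing of two image lists of unequal length.

    One epoch covers every item of the larger domain once; indices into
    the smaller domain wrap around via modulo, so its items repeat.
    (Hypothetical helper, not part of the UNIT repository.)
    """

    def __init__(self, domain_a, domain_b, shuffle=True):
        self.a = list(domain_a)
        self.b = list(domain_b)
        self.shuffle = shuffle

    def __len__(self):
        # Epoch length follows the larger domain.
        return max(len(self.a), len(self.b))

    def __iter__(self):
        ia = list(range(len(self.a)))
        ib = list(range(len(self.b)))
        if self.shuffle:
            random.shuffle(ia)
            random.shuffle(ib)
        for step in range(len(self)):
            # Modulo wraps the shorter index list, repeating its images.
            yield self.a[ia[step % len(ia)]], self.b[ib[step % len(ib)]]

# With 20 items in A and 15 in B, one epoch yields 20 pairs;
# the first 5 items of B appear twice (20 % 15 == 5) when shuffle is off.
pairs = PairedDomains(range(20), range(15), shuffle=False)
print(len(pairs))  # → 20
```

The alternative strategy is to truncate each epoch to the smaller domain (e.g. iterating `zip(loader_a, loader_b)`); with shuffling between epochs, all images of the larger domain are still seen over the course of training.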

I would point you to a recent research paper from Facebook AI Research that discusses this question in depth: https://arxiv.org/abs/1806.06029