astra-vision / CoMoGAN

CoMoGAN: continuous model-guided image-to-image translation. CVPR 2021 oral.

Linear target dataset structure

cyprian opened this issue

Thank you for your research and for sharing your code!
I want to train on a custom RGB-to-RGB dataset, e.g. blurred image to focused image.
From your paper it seems that I should use the linear target approach.
How would I go about creating the dataset structure? Should it be as simple as trainA (blurred images) and trainB (focused images)?
Can you provide your Linear target dataset loading files?

Thank you!

Thanks for your interest!
Yes, that's correct: the linear approach is appropriate. The dataset structure is exactly like that; you don't need anything else except a custom model, in this case a sharpness model to apply to the blurred images. I believe a simple sharpening filter will do the trick.
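As a minimal sketch of such a sharpening filter (the exact model used in CoMoGAN is not shown here, so all names and the strength mapping are assumptions): an unsharp mask whose amount is driven by cos(phi) only, implemented with NumPy.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple box blur (per-channel low-pass) over an HxWxC float image."""
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def sharpening_model(img, phi):
    """Hypothetical continuous sharpening model depending only on cos(phi).

    phi = 0  -> full unsharp-mask strength (sharpest output)
    phi = pi -> input returned unchanged
    """
    amount = 0.5 * (1.0 + np.cos(phi))  # in [0, 1], invariant to sin(phi)
    low = box_blur(img)
    # Unsharp masking: boost the detail layer (img - low) by `amount`.
    return np.clip(img + amount * (img - low), 0.0, 1.0)
```

The strength mapping 0.5 * (1 + cos(phi)) is just one convenient choice that stays in [0, 1]; any monotone function of cos(phi) would keep the model invariant to sin(phi).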

For the linear model it's suboptimal (see supp.), but you can also simply adapt the cyclic dataset structure to the linear case by providing a sharpening model that is invariant to sin(\phi) (i.e. one that depends only on cos(\phi)); that way you'll be ready for a quick test very fast. The cyclic FIN will simply collapse to a linear one. In other words, just replace the __apply_colormap function in the dataset with a __sharpening one that applies the correct continuous model.
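A rough sketch of what that swap could look like (the real CoMoGAN dataset code is not reproduced here, so the class name, the dictionary keys, and the injected high-pass function are all hypothetical):

```python
import random
import numpy as np

class LinearSharpnessDataset:
    """Hypothetical adaptation of the cyclic dataset: the __apply_colormap
    hook is replaced by a __sharpening one depending only on cos(phi)."""

    def __init__(self, images, high_pass):
        self.images = images        # list of HxWxC float arrays in [0, 1]
        self.high_pass = high_pass  # detail extractor, e.g. img - blur(img)

    def __sharpening(self, img, phi):
        # Strength depends on cos(phi) only (invariant to sin(phi)),
        # so the cyclic FIN collapses to a linear one.
        amount = 0.5 * (1.0 + np.cos(phi))
        return np.clip(img + amount * self.high_pass(img), 0.0, 1.0)

    def __getitem__(self, idx):
        img = self.images[idx]
        phi = random.uniform(0.0, np.pi)  # continuous target, sampled per item
        return {"A": img, "A_model": self.__sharpening(img, phi), "phi": phi}

    def __len__(self):
        return len(self.images)
```

Here the model-guided target is generated on the fly from trainA images, which is the role __apply_colormap plays in the cyclic setup.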

I'm closing the issue; if you need further assistance, don't hesitate to reopen.