astra-vision / CoMoGAN

CoMoGAN: continuous model-guided image-to-image translation. CVPR 2021 oral.


Questions about the tone mapping.

xyIsHere opened this issue · comments

Dear author,

Thank you for this very impressive work. I just visualized the tone-mapping results, and they look very similar to images obtained with color jittering. So, can the tone mapping simply be replaced by color jittering? Also, what do the values in daytime_model_lut.csv represent?

Thanks!

Hi, thanks for your interest!
Well, the tone mapping is indeed a global color modification, so I expect jittering would give similar results. The difference is that the colors we use are meaningful for the physical guidance: every value in the lookup table is the average of a sky dome image rendered with the Hosek model. That is why the system works: we link something simple that provides physical guidance (the Hosek model) to something complex that we want to reorganize (the mixed time-lapse target domain).
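For illustration, here is a minimal sketch of how such a LUT-based global tone mapping could be applied to an image. The column layout assumed for daytime_model_lut.csv (one mean RGB color per time step) is an assumption for this sketch; the actual file layout may differ.

```python
import csv
import numpy as np

def load_lut(path):
    """Load per-time-of-day mean sky colors.

    Assumes one row per time step with three columns (mean R, G, B of a
    Hosek sky dome render); the real CSV layout may differ.
    """
    with open(path) as f:
        rows = [list(map(float, row)) for row in csv.reader(f) if row]
    return np.asarray(rows, dtype=np.float32)  # shape (T, 3)

def tone_map(image, lut, t):
    """Apply a global color modulation taken from the LUT.

    image: float32 array in [0, 1], shape (H, W, 3)
    t:     continuous time in [0, 1], interpolated between LUT rows
    """
    idx = t * (len(lut) - 1)
    lo, hi = int(np.floor(idx)), int(np.ceil(idx))
    # Linear interpolation between neighboring LUT entries.
    color = (1 - (idx - lo)) * lut[lo] + (idx - lo) * lut[hi]
    # Global (per-channel) modulation by the normalized mean sky color.
    return np.clip(image * (color / color.max()), 0.0, 1.0)
```

Here `t` plays the role of the continuous time variable: unlike random jittering, the same `t` always yields the same physically grounded color.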

In brief, if you want to guide the learning process with jittering, you could do that, but the link between your model and the target dataset should still somehow make sense.
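For comparison, a standard color jitter (here via torchvision; the parameters are arbitrary) applies a random shift with no connection to a physical model:

```python
from PIL import Image
import torchvision.transforms as T

# Random color perturbation: unlike the Hosek LUT values, the shift is
# sampled at random rather than tied to a sky model, so it carries no
# notion of time of day.
jitter = T.ColorJitter(brightness=0.3, contrast=0.3,
                       saturation=0.3, hue=0.1)

image = Image.open("frame.png")  # hypothetical input frame
augmented = jitter(image)
```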

Thank you for the clear explanation. I understand the difference now. It is worth trying your numbers to augment my training data, since they come from real-world data.