codeslake / SYNDOF

The official MATLAB implementation of SYNDOF generation used in the paper 'Deep Defocus Map Estimation Using Domain Adaptation', CVPR 2019.

Generating defocus maps on own dataset

SkyeLu opened this issue

Hi, Lee. Thanks for your great work and for sharing your code! I'm trying to use your code to generate defocus maps on my own dataset (RGB images with corresponding depth maps), but the generated defocus maps seem wrong. I guess the reason might be that I set depth_scale and depth_scale_factor incorrectly. Is there anything I should pay attention to when setting these two parameters, or anything else I'm missing?

Hi, @SkyeLu!

I don't remember exactly, but I believe depth_scale is for converting depth values to millimeters.
As for depth_scale_factor: because depth values are sometimes smaller than the focal length, we shift them so that all depth values are larger than the minimum focusing distance; 5 was chosen empirically for this. That way, we can later randomly choose a focused plane from among all the depth planes of an image.
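Read as pseudocode, the preprocessing described above might look like the following Python sketch. The actual implementation is the repo's MATLAB code; the function name, signature, and the shift-based formulation here are assumptions for illustration only:

```python
import numpy as np

def prepare_depth(depth, depth_scale, min_focus_dist_mm):
    """Hypothetical sketch of the depth preprocessing described above.

    depth_scale brings raw depth values into millimeters; the values
    are then shifted so every depth plane lies beyond the minimum
    focusing distance, leaving the generator free to pick any plane
    as the focused one.
    """
    depth_mm = depth.astype(np.float64) * depth_scale   # raw units -> mm
    # Shift so the nearest plane is at least the minimum focusing distance.
    shift = max(0.0, min_focus_dist_mm - depth_mm.min())
    return depth_mm + shift
```

The key point from the answer above is only the invariant: after scaling, every depth value must exceed the minimum focusing distance.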

I am not sure how exactly you generated the wrong defocus maps, but as long as your depth values are in the right unit (millimeters), there shouldn't be a problem.

Thanks!

Hi Lee, thanks for your reply. Actually, I have RGB images and their corresponding disparity maps (no camera intrinsics available). I inverted the disparity maps to depth maps via 1 ./ (disp + 0.0001) and tried different depth_scale values, but I either got a GPU out-of-memory error or an all-black defocus map.
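The inversion described above can be sketched in Python as follows. The function name and default values are illustrative; without camera intrinsics the depth is only known up to an unknown scale, so depth_scale has to be tuned by hand until depths land in a plausible millimeter range:

```python
import numpy as np

def disparity_to_depth(disp, depth_scale=1000.0, eps=1e-4):
    """Sketch of the disparity-to-depth inversion described above.

    1 / (disp + eps) gives depth only up to an unknown scale, so
    depth_scale (an arbitrary default here) must be tuned manually.
    """
    depth = 1.0 / (disp + eps)   # relative depth, unknown units
    return depth * depth_scale   # pseudo-millimeters
```

Sanity check on the behavior: smaller disparity should map to larger depth.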

If you get a GPU out-of-memory error, run the function with is_gpu=false (i.e., generate_blur_by_depth(29, 'data', 'out', false, false, 1)).

I have no clue which value is needed for depth_scale, but since you say you get an all-black defocus map, I guess the depth_scale value is too low.

Thanks a lot!

Hi Lee, sorry to bother you again. If I want to generate defocus maps consistent with the SYNDOF dataset used to train the DMENet shared in this repo, should I set max_coc to 15, 29, or 61, according to these annotations?

No worries!

If you are going to divide the defocus map by 15, the max_coc should be 61.

If you are going to divide the defocus map by 7, the max_coc should be 29, which will render defocus blur with the same CoC used in the original dataset.
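For what it's worth, both pairs quoted above satisfy max_coc = 4 × divisor + 1. This is only an inference from the two examples, not a documented formula:

```python
def max_coc_for_divisor(divisor):
    """Pattern inferred from the two pairs above (divide by 15 ->
    max_coc 61, divide by 7 -> max_coc 29); not a documented rule."""
    return 4 * divisor + 1
```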

Got it. Thanks a lot!

Hi Lee, I'm bothering you again. I previously succeeded in generating defocus maps and blurred images on one of my own datasets. However, today I tried another dataset and failed: the generated defocus maps are normal, but the generated blurred images are all white. This dataset is similar to the previous one, and I have tried different depth_scale values, but the blurred images are still abnormal. Could you provide any insight into which step I might have gotten wrong?

Hi, @SkyeLu! It is good to hear from you.

Are images of the new dataset in 8-bit precision?
Try printing out the max value of the image after reading it.

This is exactly the detail that I missed before! The images are 16-bit; the max value is 65535.

Would you try the following?

    image = im2double(imread(image_file_name));

Just make sure the image is in [0, 1].
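For readers working outside MATLAB: im2double divides an integer image by its type's maximum value, so a Python analogue (to_unit_range is a hypothetical helper, not part of the repo) would be:

```python
import numpy as np

def to_unit_range(image):
    """Python analogue of MATLAB's im2double for integer images:
    divide by the dtype's max so pixel values land in [0, 1].

    A 16-bit image (max 65535) fed to the pipeline unnormalized would
    saturate the output, producing all-white blurred images like those
    described above.
    """
    info = np.iinfo(image.dtype)
    return image.astype(np.float64) / info.max
```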

Thanks so much for your help, Lee! I succeeded in generating correct images.