DGF code in segmentation
hedes1992 opened this issue · comments
Thanks for your excellent work!
When I viewed and ran `./ComputerVision/Deeplab-Resnet/predict_dgf.py` with the released `.pth` file, I observed that the guided filter layer works as follows:
(the breakpoint is in `./ComputerVision/Deeplab-Resnet/deeplab_resnet.py`)
It seems that it first uses the low-resolution image `x` to get a low-resolution `output` and the original high-resolution image `im` to get the guided map `g`, then upsamples `output` to get a coarse high-resolution `output`, and finally applies the guided filter layer to get a fine high-resolution `output`.
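The flow described above can be sketched roughly like this. This is a pure-NumPy stand-in, not the repo's PyTorch code: `nn_resize` is a crude placeholder for the bilinear resizing, and `net`, `guided_map`, and `guided_filter` are passed in as hypothetical callables.

```python
import numpy as np

def nn_resize(x, shape):
    """Nearest-neighbour resize (a crude stand-in for bilinear up/downsampling)."""
    rows = np.arange(shape[0]) * x.shape[0] // shape[0]
    cols = np.arange(shape[1]) * x.shape[1] // shape[1]
    return x[rows][:, cols]

def dgf_forward(im_hr, net, guided_map, guided_filter, scale=4):
    """Sketch of the forward pass described above; all callables are stand-ins."""
    h, w = im_hr.shape
    x_lr = nn_resize(im_hr, (h // scale, w // scale))  # low-resolution input "x"
    out_lr = net(x_lr)                                 # low-resolution "output"
    g = guided_map(im_hr)                              # high-resolution guided map "g"
    out_hr = nn_resize(out_lr, (h, w))                 # coarse high-resolution "output"
    return guided_filter(g, out_hr)                    # fine high-resolution "output"
```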
In fact, the guided filter layer just calculates `A` and `b` from the guided map `g` and the coarse high-resolution `output`, like the following:
(the breakpoint is in `./GuidedFilteringLayer/GuidedFIlter_PyTorch/guided_filter_pytorch/guided_filter.py`)
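For reference, the `A`/`b` step boils down to the standard guided filter equations. A minimal NumPy sketch (not the repo's PyTorch implementation, which uses a box filter built from differentiable ops) could look like:

```python
import numpy as np

def box_filter(x, r):
    """Mean over a (2r+1)x(2r+1) window, edge-padded (naive but clear)."""
    k = 2 * r + 1
    xp = np.pad(x, r, mode='edge')
    out = np.zeros_like(x, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += xp[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out / (k * k)

def guided_filter(g, p, r=1, eps=1e-2):
    """Full-resolution guided filter: A, b from guide g and coarse prediction p."""
    mean_g = box_filter(g, r)
    mean_p = box_filter(p, r)
    cov_gp = box_filter(g * p, r) - mean_g * mean_p
    var_g = box_filter(g * g, r) - mean_g ** 2
    A = cov_gp / (var_g + eps)   # per-pixel linear coefficient
    b = mean_p - A * mean_g      # per-pixel offset
    return box_filter(A, r) * g + box_filter(b, r)
```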
So I cannot tell whether this uses the end-to-end guided filter layer from the paper's Figure 2, like the following:
I guess it's just the DGFs version mentioned in the paper's subsection 4.2, like the following?
It's DGF, not DGFs; the differences are:
| | DGFs | DGFb | DGF |
|---|---|---|---|
| Guidance Map | N | N | Y |
| Joint Training | N | Y | Y |
Figure 2 only shows the guided filter layer used in image processing tasks. In the supplementary material, Algorithm 2 and Algorithm 3 show the details of the guided filter layer used in image processing and computer vision, respectively:
- Image Processing: `hr_y = FastGuidedFilter(r, eps)(lr_x, lr_y, hr_x)`
- Computer Vision: `hr_y = GuidedFilter(r, eps)(hr_x, init_hr_y)`
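The difference between the two calls is where `A` and `b` are computed: `FastGuidedFilter` estimates them on the low-resolution pair and upsamples them, while `GuidedFilter` works directly at full resolution. A hedged NumPy sketch of the fast variant (using nearest-neighbour upsampling as a stand-in for the repo's bilinear interpolation):

```python
import numpy as np

def box_filter(x, r):
    """Mean over a (2r+1)x(2r+1) window, edge-padded (naive but clear)."""
    k = 2 * r + 1
    xp = np.pad(x, r, mode='edge')
    out = np.zeros_like(x, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += xp[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out / (k * k)

def nn_resize(x, shape):
    """Nearest-neighbour resize (stand-in for bilinear upsampling)."""
    rows = np.arange(shape[0]) * x.shape[0] // shape[0]
    cols = np.arange(shape[1]) * x.shape[1] // shape[1]
    return x[rows][:, cols]

def fast_guided_filter(lr_x, lr_y, hr_x, r=1, eps=1e-2):
    """Compute A, b on the low-res pair, upsample them, apply to the high-res guide."""
    mean_x = box_filter(lr_x, r)
    mean_y = box_filter(lr_y, r)
    cov_xy = box_filter(lr_x * lr_y, r) - mean_x * mean_y
    var_x = box_filter(lr_x * lr_x, r) - mean_x ** 2
    A = cov_xy / (var_x + eps)
    b = mean_y - A * mean_x
    A_hr = nn_resize(box_filter(A, r), hr_x.shape)
    b_hr = nn_resize(box_filter(b, r), hr_x.shape)
    return A_hr * hr_x + b_hr
```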
Yes, Algorithm 3 describes deep guided filter for computer vision tasks.