[How to get SR image by spatially variant estimated blur kernels]
CaptainEven opened this issue
Hi, thank you for your excellent and interesting work! After reading your paper, I'm still not clear about what happens after kernel estimation during the SR reconstruction. Could you please explain?
A line of blind SR works takes a two-step strategy: kernel estimation + non-blind SR.
Step 1: estimate the kernel with MANet.
Step 2: given the estimated kernel and the LR image, use a non-blind SR model (RRDB-SFT) to reconstruct the HR image.
See Fig. 1 in the supplementary material for an illustration, and the sketch below.
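As a rough picture, the two steps can be wired together as in the following minimal sketch. The function signature, tensor shapes, and the way the two networks are passed in are assumptions for illustration, not the exact API of this repo:

```python
import torch

@torch.no_grad()
def blind_sr(lr, kernel_estimator, nonblind_sr):
    """Two-step blind SR inference (sketch).

    lr               : (1, 3, H, W) low-resolution image tensor
    kernel_estimator : network mapping LR -> per-pixel kernel map,
                       e.g. MANet, assumed output shape (1, k*k, H, W)
    nonblind_sr      : network mapping (LR, kernel map) -> HR image,
                       e.g. RRDB-SFT, assumed output shape (1, 3, s*H, s*W)
    """
    # Step 1: estimate one blur kernel for every LR pixel.
    kernels = kernel_estimator(lr)

    # Step 2: reconstruct the HR image conditioned on the estimated kernel map.
    return nonblind_sr(lr, kernels)
```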
@JingyunLiang Thanks, got it!
Does this two-step strategy implicitly assume spatially invariant blur kernel estimation (i.e., estimating a single kernel for the whole input image)?
No. It depends on the non-blind SR model. For example, ZSSR may only take one kernel (i.e., spatially invariant) as input, while SRMD can take spatially variant kernels (one kernel per pixel) as input. Note that SRMD can also handle a spatially invariant kernel (a special case) by expanding the same kernel to all positions, as they did in their paper.
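For that spatially invariant special case, the single kernel is simply repeated at every spatial position before being fed to a per-pixel non-blind model. A minimal sketch of that expansion (the shapes and layout are assumptions, not the exact conventions of SRMD or this repo):

```python
import torch

def expand_kernel(kernel, height, width):
    """Expand a single k x k blur kernel to a spatially variant kernel map.

    kernel : (k, k) tensor, one kernel shared by the whole image
    returns: (1, k*k, height, width) tensor, the same kernel at every pixel,
             i.e. the input format a per-pixel non-blind model expects.
    """
    kk = kernel.numel()                          # k * k
    flat = kernel.reshape(1, kk, 1, 1)           # (1, k*k, 1, 1)
    return flat.expand(1, kk, height, width)     # repeat over all positions

# Usage: a uniform 21x21 kernel expanded over a 64x48 LR grid.
kernel = torch.ones(21, 21) / (21 * 21)
kernel_map = expand_kernel(kernel, 48, 64)       # (1, 441, 48, 64)
```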
@JingyunLiang Well, what about the case of RRDB-SFT?
RRDB-SFT is based on the same idea as SRMD, so it can deal with spatially variant kernels. This is why we chose such a model.
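Roughly, an SFT (spatial feature transform) layer modulates the SR features with a per-pixel scale and shift computed from the kernel condition, which is what lets the model react to a different kernel at each location. Below is a minimal sketch of such a layer; the channel sizes and the small conv heads are assumptions for illustration, not the exact architecture used in this repo:

```python
import torch
import torch.nn as nn

class SFTLayer(nn.Module):
    """Spatial feature transform: per-pixel affine modulation of features
    by a condition map (here, kernel features), as used in SFT-based SR nets."""

    def __init__(self, feat_ch=64, cond_ch=64):
        super().__init__()
        # Small conv heads predicting per-pixel scale (gamma) and shift (beta).
        self.scale = nn.Sequential(
            nn.Conv2d(cond_ch, feat_ch, 1), nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 1))
        self.shift = nn.Sequential(
            nn.Conv2d(cond_ch, feat_ch, 1), nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 1))

    def forward(self, feat, cond):
        # feat: (B, feat_ch, H, W) SR features
        # cond: (B, cond_ch, H, W) kernel condition map, spatially variant
        gamma = self.scale(cond)
        beta = self.shift(cond)
        return feat * (gamma + 1) + beta   # per-pixel affine transform

# Usage with random tensors of matching spatial size.
sft = SFTLayer()
feat = torch.randn(1, 64, 48, 64)
cond = torch.randn(1, 64, 48, 64)
out = sft(feat, cond)                      # (1, 64, 48, 64)
```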
@JingyunLiang Thanks for the reply!