# JingyunLiang / MANet

Official PyTorch code for Mutual Affine Network for Spatially Variant Kernel Estimation in Blind Image Super-Resolution (MANet, ICCV2021)

https://arxiv.org/abs/2108.05302


# How to get an SR image from spatially variant estimated blur kernels

CaptainEven opened this issue

commented

Hi, thank you for your excellent and interesting work! After reading your paper, I'm still not clear about the process that follows kernel estimation during SR reconstruction. Could you please explain?

In line with other blind SR works, MANet takes a two-step strategy: kernel estimation + non-blind SR.

Step 1: estimate the blur kernel with MANet.

Step 2: given the estimated kernel and the LR image, use a non-blind SR model (RRDB-SFT) to reconstruct the HR image.

See Fig. 1 in the supplementary material for an illustration.
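The two-step strategy can be sketched as the pipeline below. This is a minimal shape-level illustration, not the repository's code: `estimate_kernels` and `nonblind_sr` are hypothetical stand-ins for MANet and RRDB-SFT (here just uniform kernels and nearest-neighbour upsampling), chosen only to show what each step consumes and produces.

```python
import numpy as np

def estimate_kernels(lr, kernel_size=21):
    """Stand-in for MANet (step 1): predicts one blur kernel per LR
    pixel, returned as a kernel map of shape (H, W, k*k).
    Here we simply emit uniform kernels as a placeholder."""
    h, w = lr.shape[:2]
    k2 = kernel_size * kernel_size
    return np.full((h, w, k2), 1.0 / k2)

def nonblind_sr(lr, kernels, scale=4):
    """Stand-in for RRDB-SFT (step 2): takes the LR image plus the
    per-pixel kernel map as conditioning input and returns an HR
    estimate. Here: nearest-neighbour upsampling as a placeholder."""
    assert kernels.shape[:2] == lr.shape[:2]
    return np.repeat(np.repeat(lr, scale, axis=0), scale, axis=1)

lr = np.random.rand(32, 32)
kernels = estimate_kernels(lr)          # step 1: spatially variant kernel map
hr = nonblind_sr(lr, kernels, scale=4)  # step 2: kernel-conditioned SR
print(hr.shape)  # (128, 128)
```

The key point is the data flow: the kernel map has one flattened kernel per LR pixel, and the non-blind model receives it alongside the LR image.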

commented

@JingyunLiang Thanks, got it!

commented

> In line with other blind SR works, MANet takes a two-step strategy: kernel estimation + non-blind SR.
>
> Step 1: estimate the blur kernel with MANet.
>
> Step 2: given the estimated kernel and the LR image, use a non-blind SR model (RRDB-SFT) to reconstruct the HR image.
>
> See Fig. 1 in the supplementary material for an illustration.

Does this two-step strategy implicitly mean spatially invariant blur kernel estimation (i.e., estimating a single kernel for the whole input image)?

No. It depends on the non-blind SR model. For example, ZSSR may only take one kernel (i.e., spatially invariant) as input, while SRMD can take a spatially variant kernel (one kernel per pixel) as input. Note that SRMD can also deal with a spatially invariant kernel (a special case) by expanding the same kernel to all positions, as they did in their paper.
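The "expanding the same kernel to all positions" trick can be sketched in a few lines. This is an illustrative snippet, not SRMD's code: it shows how a single k×k kernel is flattened and broadcast into the per-pixel (H, W, k*k) kernel-map format that a spatially variant non-blind SR model expects.

```python
import numpy as np

def expand_kernel(kernel, h, w):
    """Turn one spatially invariant k*k kernel into a per-pixel
    kernel map of shape (H, W, k*k), i.e. the same kernel is
    repeated at every spatial position."""
    flat = kernel.reshape(-1)  # flatten to (k*k,)
    return np.broadcast_to(flat, (h, w, flat.size)).copy()

k = 5
kernel = np.full((k, k), 1.0 / (k * k))  # one kernel for the whole image
kmap = expand_kernel(kernel, 32, 32)
print(kmap.shape)  # (32, 32, 25)
```

Every position in `kmap` holds the same flattened kernel, so the spatially invariant setting is just a degenerate case of the spatially variant input format.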

commented

@JingyunLiang I see. And what about RRDB-SFT?

RRDB-SFT is basically built on the idea of SRMD, so it can deal with spatially variant kernels. This is why we chose such a model.
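The "SFT" in RRDB-SFT refers to spatial feature transform layers, which is how the per-pixel kernel map conditions the SR network: the condition is mapped to per-pixel scale (gamma) and shift (beta) maps that modulate the image features. The sketch below is a conceptual illustration of that modulation only; the two random linear projections are stand-ins for the learned convolutions, and none of the names come from the repository.

```python
import numpy as np

def sft_modulate(features, cond, rng):
    """Sketch of an SFT layer: map the condition (e.g. a per-pixel
    kernel map, shape (C_cond, H, W)) to per-pixel gamma and beta,
    then modulate the features: out = features * (1 + gamma) + beta.
    The projections are random stand-ins for learned convolutions."""
    c = features.shape[0]
    cc = cond.shape[0]
    w_gamma = rng.standard_normal((c, cc)) * 0.1
    w_beta = rng.standard_normal((c, cc)) * 0.1
    gamma = np.einsum('oc,chw->ohw', w_gamma, cond)  # per-pixel scale
    beta = np.einsum('oc,chw->ohw', w_beta, cond)    # per-pixel shift
    return features * (1.0 + gamma) + beta

rng = np.random.default_rng(0)
features = rng.standard_normal((8, 16, 16))  # SR feature maps
cond = rng.standard_normal((4, 16, 16))      # kernel-derived condition
out = sft_modulate(features, cond, rng)
print(out.shape)  # (8, 16, 16)
```

Because gamma and beta vary per pixel, the network can apply a different effective transformation wherever the estimated kernel differs, which is what makes the model suitable for spatially variant blur.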

commented

@JingyunLiang Thanks for the reply!