JingyunLiang / FKP

Official PyTorch code for Flow-based Kernel Prior with Application to Blind Super-Resolution (FKP, CVPR2021)

Home Page: https://arxiv.org/abs/2103.15977


image alignment issue

zhihongp opened this issue · comments

It seems that the alignment between the HR and LR images generated by your program is off, and it may be related to the following code (which differs from the original version). But somehow, after USRNet, the alignment is back to normal, which is quite confusing.

wanted_center_of_mass = (np.array(kernel.shape) - sf) / 2.

As stated in Sec. 6.1 of the paper: "Following KernelGAN [3] and USRNet [53], the blur kernel is shifted and the upper-left pixels are kept in downsampling to avoid sub-pixel misalignments." Both our paper and USRNet assume the kernel is shifted. Therefore, when the estimated (shifted) kernel is fed to USRNet, it generates aligned images automatically, since it was trained this way (LR + shifted kernel --> aligned HR).
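The degradation convention described above (blur with the shifted kernel, then keep the upper-left pixel of each sf-by-sf block) can be sketched as follows. This is a minimal illustration, not the repo's actual code: the function name `degrade` and the use of `scipy.ndimage.convolve` with wrap padding are my assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def degrade(hr, kernel, sf):
    """Blur the HR image with the (shifted) blur kernel, then direct-downsample
    by keeping the upper-left pixel of each sf-by-sf block, as described in
    Sec. 6.1 of the paper (padding mode is an assumption here)."""
    blurred = convolve(hr, kernel, mode="wrap")
    return blurred[::sf, ::sf]
```

With a centered delta kernel this reduces to plain subsampling, which makes the upper-left-pixel convention easy to verify.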

@JingyunLiang Yes, the kernel is shifted to align the LR with the HR, as in KernelGAN. But your kernel shift (and maybe USRNet's too) is in the opposite direction compared to KernelGAN's (see the code I referenced above), which leaves your LR misaligned with the HR. If you load your LR and HR (zoomed to the same size) and flip back and forth between the two views, you can see the misalignment.

My guess is that USRNet is able to correct that misalignment. Your result is valid if you prepare the LR data with your own code. But if you input LR images from DIV2KRK, or even bicubic LR images as in the original DIV2K, your SR output will be misaligned and the PSNR will be much lower.

Yes. KernelGAN and USRNet use different center-calculation formulas to shift the kernel: KernelGAN uses line 22 (which I have commented out), while USRNet uses line 24. We follow the USRNet setting in all experiments.
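For concreteness, the shift itself moves the kernel's center of mass to a target position; the two conventions differ only in that target. Below is a hedged sketch: the USRNet/FKP target is the formula quoted earlier in this thread, while the KernelGAN target is my paraphrase of its `kernel_shift` and should be verified against the original code; the function name `shift_kernel` is mine.

```python
import numpy as np
from scipy.ndimage import center_of_mass, shift

def shift_kernel(kernel, sf, convention="usrnet"):
    """Sub-pixel-shift a blur kernel so its center of mass lands at the
    target position chosen by `convention` (illustrative sketch only).

    "usrnet"    -> (kernel.shape - sf) / 2, the formula quoted above (FKP/USRNet)
    "kernelgan" -> shape // 2 + 0.5 * (sf - shape % 2), my paraphrase of
                   KernelGAN's kernel_shift (verify against its code)
    """
    size = np.array(kernel.shape)
    if convention == "usrnet":
        wanted = (size - sf) / 2.0
    else:
        wanted = size // 2 + 0.5 * (sf - size % 2)
    current = np.array(center_of_mass(kernel))
    # Spline interpolation performs the (generally sub-pixel) translation.
    return shift(kernel, wanted - current)
```

Comparing the two `wanted` targets for the same kernel size and scale factor shows the offset between the conventions, which is exactly the HR/LR misalignment discussed above.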

Feel free to reopen it if you have more questions.