image alignment issue
zhihongp opened this issue
As claimed in Sec.6.1 of the paper,
Following KernelGAN and USRNet, the blur kernel is shifted and the upper-left pixels are kept in downsampling to avoid sub-pixel misalignments. Both our paper and USRNet assume the kernel is shifted. Therefore, when the estimated (shifted) kernel is input to USRNet, it can generate aligned images automatically, as it is trained this way (LR + shifted kernel --> aligned HR).
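For reference, the shift-then-downsample convention described above can be sketched roughly as follows. This is a minimal illustration, not the repository's actual code: the helper names `shift_kernel` and `degrade` are hypothetical, and the target center-of-mass formula follows the KernelGAN-style convention as I understand it.

```python
import numpy as np
from scipy.ndimage import center_of_mass, convolve, shift as subpixel_shift

def shift_kernel(kernel, scale):
    # Hypothetical helper: move the kernel's center of mass to the position
    # that makes strided downsampling (keeping upper-left pixels) produce an
    # LR image aligned with the HR image (KernelGAN-style convention).
    size = np.array(kernel.shape)
    wanted_com = size // 2 + 0.5 * (scale - size % 2)
    shift_vec = wanted_com - np.array(center_of_mass(kernel))
    shifted = subpixel_shift(kernel, shift_vec)  # sub-pixel spline shift
    return shifted / shifted.sum()               # renormalize to sum to 1

def degrade(hr, kernel, scale):
    # Blur HR with the (shifted) kernel, then keep the upper-left pixel of
    # each scale-by-scale block.
    blurred = convolve(hr, kernel, mode='wrap')
    return blurred[::scale, ::scale]
```

Whether the shift goes toward the upper-left or lower-right is exactly the sign convention this issue is about, so the formula above should be checked against the code actually used to generate the LR data.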
@JingyunLiang Yes, the kernel is shifted to align LR with HR, as in KernelGAN. But your kernel shift (and maybe USRNet's too) is in the opposite direction compared to KernelGAN (see the code I referenced above), which leaves your LR misaligned with HR. You can load your LR and HR (zoomed to the same size) and flip between the views back and forth to see the misalignment.
My guess is that USRNet is able to correct that misalignment. Your result is valid if you prepare the LR data with your own code. But if you input LR from DIV2KRK, or even bicubic LR as in the original DIV2K, your SR output will be misaligned and the PSNR will be much lower.
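One way to make the flip-back-and-forth check quantitative is to try small integer offsets between the HR image and the upsampled LR image and see which one scores the highest PSNR; a nonzero winner indicates the two shift conventions disagree. A minimal sketch, assuming `hr` and `lr_up` are same-sized float arrays in [0, 1] (the function name `best_alignment` is my own):

```python
import numpy as np

def psnr(a, b):
    # Peak signal-to-noise ratio for images in [0, 1]; the small epsilon
    # avoids division by zero when the images match exactly.
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(1.0 / (mse + 1e-12))

def best_alignment(hr, lr_up, max_shift=2):
    # Try every integer offset up to max_shift in each direction and
    # report the one with the highest PSNR against the HR image.
    best = (0, 0, -np.inf)
    m = max_shift
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            rolled = np.roll(lr_up, (dy, dx), axis=(0, 1))
            # Crop borders so wrapped-around pixels don't bias the score.
            score = psnr(hr[m:-m, m:-m], rolled[m:-m, m:-m])
            if score > best[2]:
                best = (dy, dx, score)
    return best
```

This only detects whole-pixel offsets; a sub-pixel misalignment would show up as a PSNR drop that no integer shift fully recovers.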
Feel free to reopen this issue if you have more questions.