yumingj / C2-Matching

Code for C2-Matching (CVPR 2021). Paper: Robust Reference-based Super-Resolution via C2-Matching.

Home Page: https://yumingj.github.io/projects/C2_matching.html

Question about Correspondence Network training

wdmwhh opened this issue

In ContrasDataset, you resize the input (opt['gt_size'] is 160 in the training config):

        gt_h, gt_w = self.opt['gt_size'], self.opt['gt_size']
        # in case that some images may not have the same shape as gt_size
        img_in = mmcv.imresize(img_in, (gt_w, gt_h), interpolation='bicubic')
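
As an aside for anyone tracing shapes here: mmcv.imresize takes its target size in (width, height) order, while the resulting array's shape is (height, width, channels). A minimal sketch, using a random array as a stand-in for an image:

    import mmcv
    import numpy as np

    gt_h, gt_w = 160, 160  # both equal to opt['gt_size'] here

    # Stand-in for an image slightly smaller than gt_size, e.g. 158 x 158.
    img_in = np.random.randint(0, 256, (158, 158, 3), dtype=np.uint8)

    # Note the (w, h) argument order expected by mmcv.imresize, versus the
    # (h, w, c) order of the resulting array's shape.
    img_in = mmcv.imresize(img_in, (gt_w, gt_h), interpolation='bicubic')
    assert img_in.shape == (gt_h, gt_w, 3)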

However, in ContrasValDataset, there is no such resize:

        gt_h, gt_w, _ = img_in.shape

        H_inverse = self.transform_matrices[index]
        img_in_transformed = cv2.warpPerspective(
            src=img_in, M=H_inverse, dsize=(gt_w, gt_h))
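
For concreteness, this warp step can be reproduced in isolation; the image and homography below are made up for illustration, standing in for an entry of the precomputed transform_matrices:

    import cv2
    import numpy as np

    # Made-up input image and a mild projective transform, standing in
    # for the precomputed H_inverse from transform_matrices.
    img_in = np.random.randint(0, 256, (300, 500, 3), dtype=np.uint8)
    gt_h, gt_w, _ = img_in.shape

    H_inverse = np.array([[1.0,  0.05,  3.0],
                          [0.02, 1.0,  -2.0],
                          [1e-5, 2e-5,  1.0]])

    # Same call as in ContrasValDataset; note that dsize is (width, height).
    img_in_transformed = cv2.warpPerspective(
        src=img_in, M=H_inverse, dsize=(gt_w, gt_h))
    assert img_in_transformed.shape == img_in.shape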

Therefore, my question is: why don't you train the correspondence network on the original HR (GT) images?

Hi, thanks for your interest in our work.

In ContrasDataset, we resize the input just in case some images do not have the shape 160 x 160. In the CUFED dataset (the one used in ContrasDataset), almost all images are 160 x 160; there are only a few exceptions that are slightly smaller, such as 158 x 158. That is the sole purpose of the resize operation here: it does not downsample the input image, so the input is still an HR one.
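
To make that concrete, a quick sanity check (the 158 x 158 image below is hypothetical, standing in for one of those CUFED edge cases):

    import mmcv
    import numpy as np

    gt_size = 160

    # Hypothetical CUFED edge case: an image slightly smaller than 160 x 160.
    img_in = np.random.randint(0, 256, (158, 158, 3), dtype=np.uint8)
    h, w = img_in.shape[:2]

    img_out = mmcv.imresize(img_in, (gt_size, gt_size), interpolation='bicubic')

    # Both factors are >= 1 (here 160/158, roughly 1.013), so the image is a
    # slight upsample rather than a downsample, and the input stays HR.
    print(gt_size / w, gt_size / h)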

As for ContrasValDataset, we use samples from the CUFED5 dataset, whose images come in different sizes, so we do not resize them to a common resolution there.

Hope the above answer addresses your concerns. If you have any other questions, please feel free to let me know.

Thanks for your prompt reply; it resolves my confusion. Thanks a lot.