JingyunLiang / MANet

Official PyTorch code for Mutual Affine Network for Spatially Variant Kernel Estimation in Blind Image Super-Resolution (MANet, ICCV2021)

Home Page: https://arxiv.org/abs/2108.05302

Question about prepare_testset.yml

mrgreen3325 opened this issue · comments

Hi, thanks for your work.
I ran into a problem when running prepare_testset.yml: it produces several outputs for the same input, each with different sig1, sig2, theta settings.
May I know which one I should use to train my SR model, i.e., which gives the best quality?
Thanks.

As indicated by the name, prepare_testset.yml is used for generating the testing set. For training, the HR-LR pairs are generated on-the-fly, which means only the HR path is required.
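For concreteness, here is a minimal sketch of what on-the-fly pair generation can look like: blur an HR crop with a Gaussian kernel, then downsample by striding. The function names, the reflect padding, and the stride-based downsampling are illustrative assumptions, not the repo's actual code.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(ksize: int = 21, sigma: float = 1.6) -> torch.Tensor:
    """Isotropic Gaussian blur kernel, normalized to sum to 1."""
    ax = torch.arange(ksize, dtype=torch.float32) - (ksize - 1) / 2
    g = torch.exp(-0.5 * (ax / sigma) ** 2)
    k = torch.outer(g, g)
    return k / k.sum()

def hr_to_lr(hr: torch.Tensor, kernel: torch.Tensor, scale: int = 4) -> torch.Tensor:
    """Blur the HR batch with `kernel`, then keep every `scale`-th pixel."""
    c = hr.shape[1]
    weight = kernel.expand(c, 1, -1, -1)       # depthwise (per-channel) blur
    pad = kernel.shape[-1] // 2
    hr = F.pad(hr, [pad] * 4, mode="reflect")
    return F.conv2d(hr, weight, stride=scale, groups=c)

hr = torch.rand(1, 3, 256, 256)                # stands in for a loaded HR crop
lr = hr_to_lr(hr, gaussian_kernel(), scale=4)  # -> torch.Size([1, 3, 64, 64])
```

Because the LR side is synthesized from the HR crop at each iteration, the dataset config only needs dataroot_GT.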

Thanks.
Yes, I want to generate HR-LR pairs for my own training program.
May I know which setting of the HR-LR pairs I should use for training?

We generate HR-LR pairs on-the-fly. The degradation parameters are set in the training config (train_stage*.yml).
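For illustration, here is a hypothetical sketch of how the per-sample kernel parameters might be drawn from config keys such as sig_min, sig_max, and rate_iso (the keys appear in the config pasted below). The sampling distributions are an assumption for clarity, not the repo's actual implementation.

```python
import math
import random

def sample_degradation(sig_min: float = 0.7, sig_max: float = 10.0,
                       rate_iso: float = 0.0):
    """Hypothetical per-sample draw of (sig1, sig2, theta)."""
    if random.random() < rate_iso:             # isotropic case: sig1 == sig2
        sig = random.uniform(sig_min, sig_max)
        return sig, sig, 0.0
    sig1 = random.uniform(sig_min, sig_max)    # anisotropic case
    sig2 = random.uniform(sig_min, sig_max)
    theta = random.uniform(0.0, math.pi)       # kernel rotation angle
    return sig1, sig2, theta
```

In training, a fresh triple like this is used for every sample, so no fixed LR set is ever written to disk.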

Thanks for the reply.
I followed the train_stage1.yml settings to configure prepare_testset.yml for 4x downscaling, as follows:

```yaml
name: 001_MANet_prepare_dataset
suffix: ~
model: blind
distortion: sr
scale: ~
gpu_ids: [6]
kernel_size: 21
code_length: 15
sig_min: 0.7
sig_max: 10.0
sig: 1.6
sig1: 6
sig2: 1
theta: 0
rate_iso: 0 # 1 for iso, 0 for aniso
sv_mode: ~
test_noise: False
noise: 15

datasets:
  test1:
    name: Set5
    mode: GT
    dataroot_GT: ../datasets/toy_dataset/HR_si
    dataroot_LQ: ~

network_G:
  which_model_G: MANet_s1
  in_nc: 3
  out_nc: ~
  nf: ~
  nb: ~
  upscale: 0
path:
  strict_load: true
  pretrain_model_G: ../experiments/pretrained_models
```

However, prepare_testset.yml still produces many different versions of the downscaled LR images.
Am I missing something?

As shown in the README, there are three settings:

1. For training, use train_stage*.yml. It generates HR-LR pairs on the fly.

2. For testing, use prepare_testset.yml. It generates and saves different versions of the LR images (one per degradation) for testing; see the kernel sketch after this list.

3. For testing, you can also use test_stage3.yml. It generates HR-LR testing pairs on the fly, but all of them follow the same degradation (unlike case 2).
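For intuition about why each (sig1, sig2, theta) setting yields a distinct LR image, here is a sketch of the anisotropic Gaussian kernel that such a triple describes, using the usual rotated-covariance formulation. The function is illustrative, not MANet's exact code.

```python
import math
import torch

def anisotropic_gaussian_kernel(ksize: int, sig1: float, sig2: float,
                                theta: float) -> torch.Tensor:
    """Gaussian kernel with covariance R @ diag(sig1^2, sig2^2) @ R^T,
    where R rotates by `theta`; normalized to sum to 1."""
    c, s = math.cos(theta), math.sin(theta)
    R = torch.tensor([[c, -s], [s, c]])
    cov = R @ torch.diag(torch.tensor([sig1 ** 2, sig2 ** 2])) @ R.T
    inv_cov = torch.linalg.inv(cov)

    ax = torch.arange(ksize, dtype=torch.float32) - (ksize - 1) / 2
    yy, xx = torch.meshgrid(ax, ax, indexing="ij")
    xy = torch.stack([xx, yy], dim=-1).reshape(-1, 2)   # pixel offsets
    quad = (xy @ inv_cov * xy).sum(dim=1)               # x^T Sigma^{-1} x
    k = torch.exp(-0.5 * quad).reshape(ksize, ksize)
    return k / k.sum()

# sig1=6, sig2=1, theta=0 (as in the config above) gives a kernel stretched
# along the x axis; changing any of the three values changes the LR output.
k = anisotropic_gaussian_kernel(21, sig1=6.0, sig2=1.0, theta=0.0)
```

prepare_testset.yml sweeps such settings and saves one LR per kernel, while test_stage3.yml fixes a single setting for all pairs.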
