Note: This repository is modified from yumingj/C2-Matching.
This repository contains the implementation of the following paper:
Robust Reference-based Super-Resolution via C2-Matching
Yuming Jiang, Kelvin C.K. Chan, Xintao Wang, Chen Change Loy, Ziwei Liu
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021
[Paper] [Project Page] [WR-SR Dataset]
Modified modules:

- `module/C2-Matching/mmsr/models/archs/DCNv2`
- `module/C2-Matching/mmsr/models/archs/dcn/src`
- Anaconda
- Python == 3.7
- PyTorch == 1.7
- torchvision == 0.8.0
- numpy == 1.21.5
- opencv-python-headless
- CUDA 11.1

  ```shell
  # Driver CUDA version
  nvidia-smi
  # Runtime CUDA version
  nvcc --version
  ```

- GCC 5.4.0
- Clone Repo

  ```shell
  git clone https://github.com/mile-zhang/C2-Matching-1.7.0
  ```

- Create Conda Environment

  ```shell
  conda create --name c2_matching python=3.7
  conda activate c2_matching
  ```

- Install Dependencies

  ```shell
  cd C2-Matching
  conda install pytorch=1.7.0 torchvision cudatoolkit=11.1 -c pytorch
  pip install mmcv==0.4.4
  ```

- Install MMSR and DCNv2

  ```shell
  python setup.py develop
  cd mmsr/models/archs/DCNv2
  python setup.py build develop
  ```
Please refer to Datasets.md for pre-processing and more details.
Download the pretrained models from this link and put them under the `experiments/pretrained_models` folder.
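Before running the tests, it can help to verify that the checkpoints are actually in place. A minimal sketch (the file names below are hypothetical placeholders; use whatever names the downloaded archive actually contains):

```python
from pathlib import Path

# Hypothetical checkpoint names -- replace with the actual downloaded files.
EXPECTED = ["feature_extraction.pth", "C2_matching_gan.pth", "C2_matching_mse.pth"]

def missing_models(model_dir="experiments/pretrained_models", names=EXPECTED):
    """Return the expected checkpoint files that are not present yet."""
    root = Path(model_dir)
    return [n for n in names if not (root / n).is_file()]

if __name__ == "__main__":
    gone = missing_models()
    if gone:
        print("Missing checkpoints:", ", ".join(gone))
    else:
        print("All pretrained models found.")
```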
We provide quick test code with the pretrained model.
- Modify the paths to the dataset and pretrained model in the following yaml files for configuration:

  - `./options/test/test_C2_matching.yml`
  - `./options/test/test_C2_matching_mse.yml`
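The fields to edit typically look like the sketch below. The key names and paths here are assumptions based on common MMSR-style option files; verify them against the shipped yml:

```yaml
# Sketch only -- key names and paths are assumptions, check the actual yml
datasets:
  test:
    dataroot_in: ./datasets/test/input      # path to LR input images (assumed layout)
    dataroot_ref: ./datasets/test/ref       # path to reference images (assumed layout)
path:
  pretrain_model_g: ./experiments/pretrained_models/C2_matching_gan.pth  # hypothetical file name
```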
- Run the test code for models trained using GAN loss.

  ```shell
  python mmsr/test.py -opt "options/test/test_C2_matching.yml"
  ```

  Check out the results in `./results`.

- Run the test code for models trained using only reconstruction loss.

  ```shell
  python mmsr/test.py -opt "options/test/test_C2_matching_mse.yml"
  ```

  Check out the results in `./results`.
All logging files generated during training, e.g., log messages, checkpoints, and snapshots, will be saved to the `./experiments` and `./tb_logger` directories.
- Modify the paths to the dataset in the following yaml files for configuration:

  - `./options/train/stage1_teacher_contras_network.yml`
  - `./options/train/stage2_student_contras_network.yml`
  - `./options/train/stage3_restoration_gan.yml`
- Stage 1: Train the teacher contrastive network.

  ```shell
  python mmsr/train.py -opt "options/train/stage1_teacher_contras_network.yml"
  ```
- Stage 2: Train the student contrastive network.

  ```shell
  # Add the path to *pretrain_model_teacher* in the following yaml:
  #   ./options/train/stage2_student_contras_network.yml
  # (*pretrain_model_teacher* is the model obtained in stage 1)
  python mmsr/train.py -opt "options/train/stage2_student_contras_network.yml"
  ```
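The stage-2 yaml addition is a single path entry, sketched here with assumed key placement and a hypothetical checkpoint path (the real option file defines the exact section; stage 3 follows the same pattern with *pretrain_model_feature_extractor*):

```yaml
# Sketch -- section/key placement and checkpoint path are assumptions
path:
  pretrain_model_teacher: ./experiments/stage1_teacher_contras_network/models/net_latest.pth
```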
- Stage 3: Train the restoration network.

  ```shell
  # Add the path to *pretrain_model_feature_extractor* in the following yaml:
  #   ./options/train/stage3_restoration_gan.yml
  # (*pretrain_model_feature_extractor* is the model obtained in stage 2)
  python mmsr/train.py -opt "options/train/stage3_restoration_gan.yml"

  # To train the restoration network with only MSE loss, prepare the dataset
  # path and pretrained model path in the following yaml:
  #   ./options/train/stage3_restoration_mse.yml
  python mmsr/train.py -opt "options/train/stage3_restoration_mse.yml"
  ```
The original author's (yumingj/C2-Matching) results are available on Google Drive.
For more results on the benchmarks, you can directly download our C2-Matching results from here.
Check out our Webly-Reference (WR-SR) SR Dataset through this link! We also provide the baseline results for a quick comparison in this link.
Webly-Reference SR dataset is a test dataset for evaluating Ref-SR methods. It has the following advantages:
- Collected in a more realistic way: reference images are found via Google Image Search.
- More diverse than previous datasets.
If you find our repo useful for your research, please consider citing our paper:
```bibtex
@inproceedings{jiang2021robust,
  title={Robust Reference-based Super-Resolution via C2-Matching},
  author={Jiang, Yuming and Chan, Kelvin CK and Wang, Xintao and Loy, Chen Change and Liu, Ziwei},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={2103--2112},
  year={2021}
}
```