A CNN-Transformer Cooperation Network for Face Image Super-Resolution
Guangwei Gao, Zixiang Xu
Clone this repository
git clone https://github.com/IVIPLab/CTCNet
cd CTCNet
The codes have been tested with the dependencies listed in requirements.txt.
- Install the required packages: pip install -r requirements.txt
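A minimal setup sketch, assuming a conda environment and Python 3.8 (neither is stated by the repository; the environment name and Python version are placeholders, and any environment satisfying requirements.txt should work):

# assumed: create and activate an isolated environment (name and version are placeholders)
conda create -n ctcnet python=3.8
conda activate ctcnet
# install the dependencies pinned in requirements.txt
pip install -r requirements.txt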
We provide example test commands in the script test.sh for both CTCNet and CTCGAN. Two models with different configurations are provided for each of them; refer to the section below to see the differences. Here are some test tips (an example command is sketched after the list):
- CTCNet upsamples a 16x16 bicubic-downsampled face image to 128x128, and there is no need to align the LR face.
- Please specify the test input directory with the --dataroot option.
- Please specify the save path with --save_as_dir, otherwise the results will be saved to the predefined directory results/exp_name/test_latest.
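A minimal test sketch, assuming the entry script is named test.py (an assumption, not confirmed by this README); the exact script name and full option list are in test.sh, and the paths below are placeholders. Only --dataroot and --save_as_dir are options described above.

# hypothetical invocation; check test.sh for the released command and any extra options
# --dataroot points to a directory of 16x16 LR face images (no alignment needed)
# --save_as_dir sets the output directory; omit it to write to results/exp_name/test_latest
python test.py --dataroot ./testsets/lr_faces --save_as_dir ./results/ctcnet_demo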
The commands used to train the released models are provided in the script train.sh. Here are some train tips (an example launch command is sketched after the list):
- You should download CelebA to train CTCNet and CTCGAN. Please change --dataroot to the path where your training images are stored.
- To train CTCNet, we simply crop out faces from CelebA without pre-alignment, because for ultra-low-resolution face SR it is difficult to pre-align the LR images.
- Please change the --name option for different experiments. Tensorboard records with the same name will be moved to check_points/log_archive, and the weight directory will only store the weight history of the latest experiment with the same name.
- --gpus specifies the number of GPUs used for training. The script will use GPUs with more available memory first. To specify the GPU index, uncomment the export CUDA_VISIBLE_DEVICES= line.
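A minimal training sketch, again assuming the entry script is train.py; the authoritative commands are in train.sh, and the experiment name and dataset path below are placeholders. --dataroot, --name and --gpus are the options discussed in the tips above.

# optionally pin a GPU index; otherwise GPUs with more free memory are used first
# export CUDA_VISIBLE_DEVICES=0
# hypothetical invocation; see train.sh for the exact released training commands
python train.py --name ctcnet_celeba --dataroot ./datasets/CelebA_crops --gpus 1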
The pretrained models and test results can be downloaded from Google Drive.
@article{gao2023ctcnet,
title={Ctcnet: a cnn-transformer cooperation network for face image super-resolution},
author={Gao, Guangwei and Xu, Zixiang and Li, Juncheng and Yang, Jian and Zeng, Tieyong and Qi, Guo-Jun},
journal={IEEE Transactions on Image Processing},
year={2023},
publisher={IEEE}
}
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
The codes are based on SPARNet.