GFN-IJCV

Gated Fusion Network for Degraded Image Super-Resolution (IJCV 2020).

"Gated Fusion Network for Degraded Image Super Resolution" by Xinyi Zhang*, Hang Dong*, Zhe Hu, Wei-Sheng Lai, Fei Wang, Ming-Hsuan Yang (Accpeptd by IJCV, first two authors contributed equally).

[arXiv]

You can find more details on the Project Website.

Dependencies

  • Python 3.6
  • PyTorch >= 0.4.0
  • torchvision
  • numpy
  • skimage
  • h5py
  • MATLAB
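
To quickly confirm that the Python-side dependencies are in place, a minimal check such as the following can help. This is only a convenience sketch and not part of the repository; only the PyTorch version floor (>= 0.4.0) comes from the list above, the other packages just need to import.

import importlib

# List the installed version of each Python dependency named in this README.
for name in ("torch", "torchvision", "numpy", "skimage", "h5py"):
    module = importlib.import_module(name)  # raises ImportError if the package is missing
    print(name, getattr(module, "__version__", "version unknown"))

import torch
print("CUDA available:", torch.cuda.is_available())  # test.py/train.py are run with CUDA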

Super-resolving non-uniform blurry images

  1. Clone this repository:
$git clone https://github.com/BookerDeWitt/GFN-IJCV
$cd GFN-IJCV/DBSR
  2. Download the trained model GFN_G3D_4x.pkl from here, then unzip and move GFN_G3D_4x.pkl to the GFN-IJCV/DBSR/models folder (a quick way to inspect the downloaded checkpoint is sketched after this list).

  3. Follow the instructions here to test and train our network with our latest code and pre-trained model.
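
If you want to look inside the downloaded checkpoint before running the test code, the snippet below is a minimal sketch. It assumes the .pkl file can be opened with torch.load; whether it holds a full serialized model or a dict of weights depends on how it was saved, and if it stores a full model object the repository's model definitions must be importable for unpickling. The repository's own test script remains the reference for actually loading and running the model.

import torch

checkpoint_path = "models/GFN_G3D_4x.pkl"  # adjust to wherever you placed the file

# map_location="cpu" lets the checkpoint be opened on a CPU-only machine as well.
checkpoint = torch.load(checkpoint_path, map_location="cpu")

# Print a rough summary, whatever the saved format turns out to be.
if isinstance(checkpoint, dict):
    for key, value in checkpoint.items():
        shape = tuple(value.shape) if torch.is_tensor(value) else type(value).__name__
        print(key, shape)
else:
    print(type(checkpoint).__name__)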

Super-resolving hazy images

How to test:

Test on LR-RESIDE

  1. Clone this repository:
$git clone https://github.com/BookerDeWitt/GFN-IJCV
$cd GFN-IJCV/DHSR
  2. Download the LR-RESIDE dataset (including both the test and training sets) from [Google Drive] or BaiduYun (Code:2tnh) and unzip it.
  3. Download the trained model GFN_epoch_60.pkl from Google Drive or BaiduYun (Code:v01z), then unzip and move GFN_epoch_60.pkl to the GFN-IJCV/DHSR/models folder.
  4. Run GFN-IJCV/DHSR/test.py with CUDA on the command line:
GFN-IJCV/DHSR/$python test.py --dataset your_downloads_directory/LR-RESIDE/Validation_4x

The dehazed and super-resolved images, whose filenames end with GFN_4x.png, are then written to your_downloads_directory/LR-RESIDE/Validation_4x/Results.

  5. Calculate the PSNR with the MATLAB function GFN-IJCV/DHSR/evaluation/test_RGB.m; the average PSNR should be 25.77456 dB. You can also use GFN-IJCV/DHSR/evaluation/test_bicubic.m to compute the PSNR of the bicubic baseline. A Python alternative is sketched after the MATLAB commands below.
>> folder = 'your_downloads_directory/LR-RESIDE/Validation_4x';
>> test_RGB(folder)
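
If you prefer to stay in Python, skimage (already a dependency) can compute an RGB PSNR as well. The sketch below is only an approximation of the MATLAB evaluation: the ground-truth folder name (HR) and the filename pairing are assumptions, so adapt them to the actual layout of the unpacked dataset and treat test_RGB.m as the reference.

import glob
import os

from skimage import io
from skimage.metrics import peak_signal_noise_ratio  # skimage >= 0.16; older versions expose compare_psnr in skimage.measure

folder = "your_downloads_directory/LR-RESIDE/Validation_4x"
result_paths = sorted(glob.glob(os.path.join(folder, "Results", "*GFN_4x.png")))

psnrs = []
for result_path in result_paths:
    # Assumed pairing: strip the _GFN_4x suffix and look for a ground-truth image in an HR folder.
    gt_name = os.path.basename(result_path).replace("_GFN_4x", "")
    gt_path = os.path.join(folder, "HR", gt_name)
    if not os.path.exists(gt_path):
        continue
    result = io.imread(result_path)
    gt = io.imread(gt_path)
    # Both images must have the same shape; crop beforehand if they differ.
    psnrs.append(peak_signal_noise_ratio(gt, result, data_range=255))

if psnrs:
    print("Average PSNR over %d images: %.4f dB" % (len(psnrs), sum(psnrs) / len(psnrs)))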

How to train

Train on the LR-RESIDE dataset. Complete the first two steps of Test on LR-RESIDE before proceeding.

Train from scratch

  1. Generate the training HDF5 files for the RESIDE dataset: run the MATLAB function LR_RESIDE_HDF5_Generator.m in GFN-IJCV/DHSR/h5_generator. The generated HDF5 files are stored in your_downloads_directory/LR-RESIDE/RESIDE/RESIDE_train256_4x_HDF5 (a quick way to inspect them is sketched after this list).
>> folder = 'your_downloads_directory/LR-RESIDE/RESIDE';
>> LR_RESIDE_HDF5_Generator(folder)
  2. Run GFN-IJCV/DHSR/train.py with CUDA on the command line:
GFN-IJCV/DHSR/$python train.py --dataset your_downloads_directory/LR-RESIDE/RESIDE/RESIDE_train256_4x_HDF5
  3. The intermediate models of the three training steps are saved in models/1/, models/2/, and models/3/, respectively. You can also test intermediate results during training by running GFN-IJCV/DHSR/test.py with CUDA on the command line:
GFN-IJCV/DHSR/$python test.py --dataset your_downloads_directory/LR-RESIDE/Validation_4x --intermediate_process models/1/GFN_epoch_30.pkl # Example for step 1, epoch 30; replace it with any other .pkl file in models/.
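
To confirm the generator produced usable files, you can list the datasets in one of them with h5py. The *.h5 extension in the glob pattern below is an assumption; adjust it if the generator writes a different suffix.

import glob

import h5py

h5_dir = "your_downloads_directory/LR-RESIDE/RESIDE/RESIDE_train256_4x_HDF5"
h5_files = sorted(glob.glob(h5_dir + "/*.h5"))  # extension assumed; change if needed
print("Found %d HDF5 files" % len(h5_files))

# Show the dataset names, shapes, and dtypes stored in the first file.
with h5py.File(h5_files[0], "r") as f:
    for key in f.keys():
        print(key, f[key].shape, f[key].dtype)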

Resume training from checkpoints

Since the full training takes 3 to 4 days, you can use the following command to resume from any saved checkpoint. Run GFN-IJCV/DHSR/train.py with CUDA on the command line (a generic sketch of the resume pattern follows the command):

GFN-IJCV/DHSR/$python train.py --dataset your_downloads_directory/LR-RESIDE/RESIDE/RESIDE_train256_4x_HDF5 --resume models/1/GFN_epoch_25.pkl # Example for step 1, epoch 25.
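
For readers unfamiliar with how resuming from a saved epoch usually works, the snippet below shows the generic PyTorch pattern. It is an illustration only, not the repository's train.py: the checkpoint keys (epoch, state_dict, optimizer) and the function name are placeholders, and the actual script may store its checkpoints differently.

import torch

def resume_from(checkpoint_path, net, optimizer):
    # Generic PyTorch resume pattern (illustration; the keys are assumed, not taken from the repo).
    checkpoint = torch.load(checkpoint_path, map_location="cpu")
    net.load_state_dict(checkpoint["state_dict"])       # restore model weights
    optimizer.load_state_dict(checkpoint["optimizer"])  # restore optimizer state
    start_epoch = checkpoint["epoch"] + 1               # continue after the saved epoch
    return start_epoch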

Super-resolving rainy images

How to test:

Test on LR-Rain1200. The released model is the result of the third training step at epoch 37.

  1. Clone this repository:
$git clone https://github.com/BookerDeWitt/GFN-IJCV
$cd GFN-IJCV/DRSR
  2. Download the LR-Rain1200 dataset (including both the test and training sets) from Google Drive or BaiduYun (Code:v7e1) and unzip it.
  3. Download the trained model GFN_epoch_37.pkl from Google Drive or BaiduYun (Code:koeu), then unzip and move GFN_epoch_37.pkl to the GFN-IJCV/DRSR/models folder.
  4. Run GFN-IJCV/DRSR/test.py with CUDA on the command line:
GFN-IJCV/DRSR/$python test.py --dataset your_downloads_directory/LR_Rain1200/Validation_4x

The derained and super-resolved images, whose filenames end with GFN_4x.png, are then written to your_downloads_directory/LR_Rain1200/Validation_4x/Results.

  5. Calculate the PSNR with the MATLAB function GFN-IJCV/DRSR/evaluation/test_RGB.m; the average PSNR should be 25.248834 dB. You can also use GFN-IJCV/DRSR/evaluation/test_bicubic.m to compute the PSNR of the bicubic baseline. (The Python PSNR sketch in the dehazing section applies here as well, with the paths changed.)
>> folder = 'your_downloads_directory/LR_Rain1200/Validation_4x';
>> test_RGB(folder)

How to train

Train on the LR-Rain1200 dataset. Complete the first two steps of Test on LR-Rain1200 before proceeding.

Train from scratch

  1. Generate the training HDF5 files for the LR_Rain1200 dataset: run the MATLAB function rain_hdf5_generator.m in GFN-IJCV/DRSR/h5_generator. The generated HDF5 files are stored in your_downloads_directory/LR_Rain1200/Rain_HDF5.
>> folder = 'your_downloads_directory/LR_Rain1200';
>> rain_hdf5_generator(folder)
  2. Run GFN-IJCV/DRSR/train.py with CUDA on the command line:
GFN-IJCV/DRSR/$python train.py --dataset your_downloads_directory/LR_Rain1200/Rain_HDF5
  3. The intermediate models of the three training steps are saved in models/1/, models/2/, and models/3/, respectively. You can also test intermediate results during training by running GFN-IJCV/DRSR/test.py with CUDA on the command line:
GFN-IJCV/DRSR/$python test.py --dataset your_downloads_directory/LR_Rain1200/Validation_4x --intermediate_process models/1/GFN_epoch_25.pkl # Example for step 1, epoch 25; replace it with any other .pkl file in models/.

Resume training from checkpoints

Since the full training takes 3 to 4 days, you can use the following command to resume from any saved checkpoint. Run GFN-IJCV/DRSR/train.py with CUDA on the command line:

GFN-IJCV/DRSR/$python train.py --dataset your_downloads_directory/LR_Rain1200/Rain_HDF5 --resume models/1/GFN_epoch_25.pkl # Example for step 1, epoch 25.

Citation

If you use these models in your research, please cite:

@article{GFN_IJCV,
	author = {Zhang, Xinyi and Dong, Hang and Hu, Zhe and Lai, Wei-Sheng and Wang, Fei and Yang, Ming-Hsuan},
	title = {Gated Fusion Network for Degraded Image Super Resolution},
	journal = {International Journal of Computer Vision},
	year = {2020},
	pages = {1--23}
}

@inproceedings{GFN_BMVC,
	title = {Gated Fusion Network for Joint Image Deblurring and Super-Resolution},
	author = {Zhang, Xinyi and Dong, Hang and Hu, Zhe and Lai, Wei-Sheng and Wang, Fei and Yang, Ming-Hsuan},
	booktitle = {BMVC},
	year = {2018}
}
