Chongyi Li, Chun-Le Guo, Man Zhou, Zhexin Liang, Shangchen Zhou, Ruicheng Feng, Chen Change Loy
S-Lab, Nanyang Technological University; Nankai University
- 2024.01.10: We provide a low-resolution version of the UHD-LL dataset, called UHD-LL_down. The images in UHD-LL_down have a resolution of 960 × 540.
(The datasets are hosted on both Google Drive and BaiduPan)
| Dataset | Link | Number | Description |
|---|---|---|---|
| UHD-LL_down | Google Drive / BaiduPan (key: 1234) | 2,150 | 2,000 pairs for training and 150 pairs for testing. Resolution: 960 × 540 |
| UHD-LL | Google Drive / BaiduPan (key: 1234) | 2,150 | 2,000 pairs for training and 150 pairs for testing. |
| LOL-v1 | Google Drive / BaiduPan (key: 1234) | 500 | 485 pairs for training and 15 pairs for testing. |
| LOL-v2 | Google Drive / BaiduPan (key: 1234) | 789 | 689 pairs for training and 100 pairs for testing. |
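Since standard UHD is 3840 × 2160, the 960 × 540 variant corresponds to a 4× downscale in each dimension. As an illustration only (the actual resampling filter used to produce UHD-LL_down is not specified here), a block-averaging downscale in `numpy` looks like:

```python
import numpy as np

def box_downscale(img: np.ndarray, factor: int) -> np.ndarray:
    """Downscale an H x W x C image by an integer factor via block averaging.

    This is only an illustrative resampler; the filter actually used to
    build UHD-LL_down may differ (e.g. bicubic).
    """
    h, w, c = img.shape
    assert h % factor == 0 and w % factor == 0, "dimensions must divide evenly"
    # Reshape into (h/f, f, w/f, f, c) blocks and average each f x f block.
    blocks = img.reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3))

# A 4x downscale maps a 2160 x 3840 (UHD) frame to 540 x 960.
uhd = np.zeros((2160, 3840, 3), dtype=np.float32)
low = box_downscale(uhd, 4)
print(low.shape)  # (540, 960, 3)
```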
- Clone Repo

  ```shell
  git clone <code_link>
  cd UHDFour_code/
  ```

- Create Conda Environment and Install Dependencies

  ```shell
  conda env create -f environment.yaml
  conda activate UHDFour
  ```
Before performing the following steps, please download our pretrained models first.
Download Links: [Google Drive] [Baidu Disk (Key: 1234)]
Then unzip the file and place the models in the `ckpts` directory.
The directory structure will be arranged as:

```
ckpts
|- UHD_checkpoint.pt
|- LOLv1_checkpoint.pt
|- LOLv2_checkpoint.pt
```
We provide some classic test images in the `classic_test_image` directory.
Run the following command to process them:

```shell
CUDA_VISIBLE_DEVICES=X python src/test_PSNR.py --dataset-name our_test
```
The enhanced images will be saved in the `results/` directory.
You can also run the following command to process your own images:

```shell
CUDA_VISIBLE_DEVICES=X python src/test_PSNR.py \
    --dataset-name our_test -t path/to/your/test/folder
```
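As the script name suggests, `test_PSNR.py` evaluates enhancement quality by PSNR. For reference, PSNR between an enhanced image and its ground truth can be computed as below; this is a standalone sketch, not the script's own implementation:

```python
import numpy as np

def psnr(pred: np.ndarray, gt: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)

gt = np.full((4, 4), 100.0)
pred = gt + 10.0  # constant error of 10 gray levels -> MSE = 100
print(round(psnr(pred, gt), 2))  # 28.13
```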
The data directory structure will be arranged as (note: please check it carefully):

```
data
|- classic_test_image
|  |- 1.bmp
|  |- 01.jpg
|  |- datalist.txt
|- LOL-v1
|  |- eval15
|  |  |- gt
|  |  |  |- 1.png
|  |  |  |- 22.png
|  |  |- input
|  |  |  |- 1.png
|  |  |  |- 22.png
|  |  |- datalist.txt
|  |- our485
|  |  |- gt
|  |  |  |- 2.png
|  |  |  |- 5.png
|  |  |- input
|  |  |  |- 2.png
|  |  |  |- 5.png
|  |  |- datalist.txt
|- LOL-v2
|  |- Test
|  |  |- gt
|  |  |  |- 00690.png
|  |  |  |- 00691.png
|  |  |- input
|  |  |  |- 00690.png
|  |  |  |- 00691.png
|  |  |- datalist.txt
|  |- Train
|  |  |- gt
|  |  |  |- 00001.png
|  |  |  |- 00002.png
|  |  |- input
|  |  |  |- 00001.png
|  |  |  |- 00002.png
|  |  |- datalist.txt
|- UHD-LL
|  |- testing_set
|  |  |- gt
|  |  |  |- 1_UHD_LL.JPG
|  |  |  |- 7_UHD_LL.JPG
|  |  |- input
|  |  |  |- 1_UHD_LL.JPG
|  |  |  |- 7_UHD_LL.JPG
|  |  |- datalist.txt
|  |- training_set
|  |  |- gt
|  |  |  |- 2_UHD_LL.JPG
|  |  |  |- 3_UHD_LL.JPG
|  |  |- input
|  |  |  |- 2_UHD_LL.JPG
|  |  |  |- 3_UHD_LL.JPG
|  |  |- datalist.txt
```
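Within each split, `input/` and `gt/` hold low-light and ground-truth images with matching filenames. As a hypothetical helper (the exact format of `datalist.txt` is not documented here, so this pairs files by name instead of parsing the list):

```python
import os

def list_pairs(split_dir: str):
    """Return (input_path, gt_path) tuples for a split laid out as
    split_dir/input/* and split_dir/gt/* with matching filenames.
    """
    input_dir = os.path.join(split_dir, "input")
    gt_dir = os.path.join(split_dir, "gt")
    pairs = []
    for name in sorted(os.listdir(input_dir)):
        gt_path = os.path.join(gt_dir, name)
        if os.path.isfile(gt_path):  # keep only inputs with a ground-truth match
            pairs.append((os.path.join(input_dir, name), gt_path))
    return pairs
```

For example, `list_pairs("data/LOL-v1/eval15")` should yield the 15 test pairs of LOL-v1.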
See `python3 src/train.py --h` for a list of optional arguments, or `train.sh` for examples.
```shell
CUDA_VISIBLE_DEVICES=X python src/train.py \
  --dataset-name UHD \
  --train-dir ./data/UHD-LL/training_set/ \
  --valid-dir ./data/UHD-LL/testing_set/ \
  --ckpt-save-path ./ckpts_training/ \
  --nb-epochs 1000 \
  --batch-size 2 \
  --train-size 512 512 \
  --plot-stats \
  --cuda
```
For the perceptual loss used in the paper, you can download the pre-trained VGG19 model from [Google Drive] [Baidu Disk (Key: 1234)].
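The perceptual loss compares images in the feature space of a pretrained VGG19 rather than in pixel space. The sketch below shows only that structure, with a hypothetical gradient-based stand-in in place of real VGG19 activations (loading the actual weights requires a deep-learning framework and the downloaded model, both omitted here):

```python
import numpy as np

def fake_features(img: np.ndarray) -> np.ndarray:
    """Stand-in for VGG19 feature maps: horizontal/vertical gradients.

    In the real loss these would be activations from chosen VGG19 layers.
    """
    gx = img[:, 1:] - img[:, :-1]
    gy = img[1:, :] - img[:-1, :]
    return np.concatenate([gx.ravel(), gy.ravel()])

def perceptual_loss(pred: np.ndarray, gt: np.ndarray) -> float:
    """L1 distance between feature representations of prediction and target."""
    return float(np.mean(np.abs(fake_features(pred) - fake_features(gt))))
```

In the actual loss, `fake_features` would be replaced by activations from one or more VGG19 layers, typically combined across layers with per-layer weights.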
This project is licensed under S-Lab License 1.0. Redistribution and use for non-commercial purposes should follow this license.
If our work is useful for your research, please consider citing:
```bibtex
@InProceedings{Li2023ICLR,
  author    = {Li, Chongyi and Guo, Chun-Le and Zhou, Man and Liang, Zhexin and Zhou, Shangchen and Feng, Ruicheng and Loy, Chen Change},
  title     = {Embedding Fourier for Ultra-High-Definition Low-Light Image Enhancement},
  booktitle = {ICLR},
  year      = {2023}
}
```
If you have any questions, please feel free to reach out to me at lichongyi25@gmail.com.