Code and dataset generation scripts for article: Learning Tone Curves for Local Image Enhancement.
Luxi Zhao, Abdelrahman Abdelhamed, Michael S. Brown
Samsung Artificial Intelligence Center, Toronto, Canada
- Download HDR+ dataset
Run the following commands to process the raw input (20171106/results_20171023/<burst_name>/merged.dng) up to the gamma-correction stage; 20171106/results_20171023/<burst_name>/final.jpg is used as the ground truth:

python3 -m prepare.prep_hdrplus --hdrplus_dir /path/to/20171106 --out_dir <dataset_dir>
mv <dataset_dir>/gt-final <dataset_dir>/gt
mv <dataset_dir>/input-srgb-gamma <dataset_dir>/input
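The prep script stops after the gamma-correction stage. The exact transfer curve prep_hdrplus applies is not shown here; as a point of reference, the standard sRGB encoding curve that this stage typically corresponds to can be sketched as:

```python
import numpy as np

def srgb_gamma(x):
    """Standard sRGB opto-electronic transfer function for linear
    values x in [0, 1]. Illustrative only -- the curve used inside
    prepare.prep_hdrplus may differ."""
    x = np.clip(x, 0.0, 1.0)
    # linear segment near black, power-law segment elsewhere
    return np.where(x <= 0.0031308,
                    12.92 * x,
                    1.055 * np.power(x, 1.0 / 2.4) - 0.055)
```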
- Training data file names: prepare/data/hdrplus_images_train.txt
- Validation data file names: prepare/data/hdrplus_images_val.txt
- Testing data file names: prepare/data/hdrplus_images_test.txt
Train LTMNet:

python3 -m jobs.ltmnet_hdrplus_ds

Train LTMNet with the residual module:

python3 -m jobs.ltmnet_res_hdrplus_ds
Evaluation results are saved to <project_root>/outputs.
Download MIT-Adobe FiveK dataset.
Input:
- Export MIT-Adobe FiveK images from Lightroom with the following settings:
- Collection: Input/InputZeroed with ExpertC WhiteBalance
- Filetype: PNG
- Resize: long edge resized to 1024 pixels, 240 ppi
- Bit depth: 8 bit
- Color Space: sRGB
- Save the exported images to <dataset_dir>/input
Ground truth:
python3 -m jobs.job_prep_mit_adobe_clahe_ds
cp -r <dataset_dir>/mit-adobe-clahe-15v/long-edge-1024 <dataset_dir>/gt
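The jobs.job_prep_mit_adobe_clahe_ds job produces CLAHE-processed ground truth. As a rough illustration of the core operation behind CLAHE — clip-limited histogram equalization of an image tile — here is a NumPy-only sketch; the function name, clip limit, and bin count are illustrative and not taken from the repo:

```python
import numpy as np

def clipped_equalize(tile, clip_limit=0.015, bins=256):
    """Histogram equalization with a clip limit, applied to one tile
    of values in [0, 1]. clip_limit is the maximum fraction of pixels
    any single histogram bin may hold; the excess is redistributed
    uniformly across all bins. Parameters are illustrative."""
    hist, _ = np.histogram(tile, bins=bins, range=(0.0, 1.0))
    hist = hist.astype(float)
    cap = clip_limit * tile.size
    excess = np.maximum(hist - cap, 0.0).sum()
    hist = np.minimum(hist, cap) + excess / bins
    # the normalized cumulative histogram becomes the tone curve
    cdf = np.cumsum(hist)
    cdf /= cdf[-1]
    idx = np.clip((tile * (bins - 1)).astype(int), 0, bins - 1)
    return cdf[idx]
```

Full CLAHE additionally blends the per-tile curves bilinearly between neighbouring tiles to avoid block seams.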
python3 -m jobs.ltmnet_ltm_ds
Evaluation results are saved to <project_root>/outputs.
Download MIT-Adobe FiveK dataset.
Input:
- Export MIT-Adobe FiveK images from Lightroom with the following settings:
- Collection: Input/InputZeroed with ExpertC WhiteBalance
- Filetype: PNG
- Resize: long edge resized to 1024 pixels, 240 ppi
- Bit depth: 8 bit
- Color Space: sRGB
- Save the exported images to <dataset_dir>/input
Ground truth:
- Export MIT-Adobe FiveK images from Lightroom with the following settings:
- Collection: Experts/C
- Filetype: PNG
- Resize: long edge resized to 1024 pixels, 240 ppi
- Bit depth: 8 bit
- Color Space: sRGB
- Save the exported images to <dataset_dir>/gt
Train / Valid / Test split
python3 -m prepare.gen_file_lists_mit_adobe \
--out_dir "~/Data" \
--train_range 101 1100 \
--val_range 1 100 \
--test_range 4501 5000
- Train indices: a0102 - a1101
- Validation indices: a0002 - a0101
- Test indices: a4502 - a5000
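The index ranges above (e.g. --train_range 101 1100 mapping to a0102 - a1101) suggest a one-based mapping onto the FiveK aNNNN file names. A minimal sketch of that mapping — make_list is a hypothetical helper, not the script's API, and the script's exact handling of range endpoints may differ:

```python
def make_list(start, end):
    """Map an inclusive numeric range to MIT-Adobe FiveK style names,
    e.g. 101 -> a0102 (one-based offset, zero-padded to 4 digits).
    Hypothetical helper for illustration only."""
    return [f"a{i + 1:04d}" for i in range(start, end + 1)]
```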
8 x 8 grid:

python3 -m jobs.ltmnet_mit_grid8x8

1 x 1 grid:

python3 -m jobs.ltmnet_mit_grid1x1
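LTMNet predicts one tone curve per grid cell (e.g. 8 x 8) and blends them spatially when rendering the output. A NumPy sketch of one plausible application step — per-tile curves blended bilinearly between neighbouring tiles, CLAHE-style — purely for illustration; the repo's actual interpolation and curve parameterization may differ:

```python
import numpy as np

def apply_tone_curves(img, curves):
    """Apply a grid of per-tile tone curves to a grayscale image in
    [0, 1], blending the four nearest tiles' curves bilinearly.

    img    : (H, W) float array in [0, 1]
    curves : (gh, gw, n) array; curves[i, j] is a tone curve for tile
             (i, j), sampled at n evenly spaced points in [0, 1].
    """
    H, W = img.shape
    gh, gw, n = curves.shape
    # pixel positions measured in tile-centre coordinates
    ys = (np.arange(H) + 0.5) / H * gh - 0.5
    xs = (np.arange(W) + 0.5) / W * gw - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, gh - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, gw - 1)
    y1 = np.clip(y0 + 1, 0, gh - 1)
    x1 = np.clip(x0 + 1, 0, gw - 1)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]   # (H, 1) blend weights
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]   # (1, W) blend weights

    def lookup(iy, ix):
        # evaluate every pixel through the curve of tile (iy, ix),
        # linearly interpolating between the n curve samples
        luts = curves[iy[:, None], ix[None, :]]          # (H, W, n)
        idx = np.clip(img * (n - 1), 0, n - 1)
        lo = np.floor(idx).astype(int)
        hi = np.minimum(lo + 1, n - 1)
        frac = idx - lo
        low = np.take_along_axis(luts, lo[..., None], -1)[..., 0]
        high = np.take_along_axis(luts, hi[..., None], -1)[..., 0]
        return low * (1.0 - frac) + high * frac

    # bilinear blend of the four surrounding tiles' curve outputs
    return ((1 - wy) * (1 - wx) * lookup(y0, x0)
            + (1 - wy) * wx * lookup(y0, x1)
            + wy * (1 - wx) * lookup(y1, x0)
            + wy * wx * lookup(y1, x1))
```

With identity curves the function returns the input unchanged, which is a convenient sanity check.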
- LTMNet trained on the HDR+ dataset: pretrained_models/ltmnet_hdrplus_ds_model
- LTMNet with the residual module trained on the HDR+ dataset: pretrained_models/ltmnet_res_hdrplus_ds_model
- LTMNet trained on our LTM dataset: pretrained_models/ltmnet_ltm_ds_model
Use the following arguments for main.py (for example, to evaluate the model trained on the HDR+ dataset):
--pretrained_model_dir ./pretrained_models/ltmnet_hdrplus_ds_model
--eval
Note: For now, the code only supports bit_depth = 8.
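With bit_depth = 8, each tone curve can be realized as a 256-entry lookup table, so applying a curve is a single indexing operation. A small illustration — the gamma curve here is an arbitrary stand-in for a learned curve:

```python
import numpy as np

# a tone curve as a 256-entry LUT (gamma 1/2.2 as a stand-in)
lut = (255.0 * (np.arange(256) / 255.0) ** (1 / 2.2)).astype(np.uint8)

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
out = lut[img]  # applying the curve = one fancy-indexing op
```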