This is the official TensorFlow implementation of the paper
C. Guo and X. Jiang, "Deep Tone-Mapping Operator Using Image Quality Assessment Inspired Semi-Supervised Learning," in IEEE Access, vol. 9, pp. 73873-73889, 2021.
Tone mapping displays an HDR (High Dynamic Range) image on a traditional SDR (Standard Dynamic Range, a.k.a. Low Dynamic Range, LDR) display, and its result is usually stored as an SDR image. That is to say, tone mapping is the reverse process of single-shot HDR image generation (a.k.a. reverse/inverse tone mapping, or SI-HDR).
There are 2 types of HDR content: photometrically linear HDR, which is used in photography, medicine and image-based lighting, and non-linear HDRTV content with PQ/HLG non-linearity and wide-gamut RGB primaries, which is used in film and television. The dynamic range of linear HDR content is relatively higher than that of HDRTV content.
Specifically, our work deals with linear HDR content. A checkpoint for non-linear HDRTV content has not been trained yet.
- Ubuntu with PyCharm IDE
- Python 2.7
- CUDA 8.0 & CuDNN 6.0.21
- Tensorflow-GPU 1.4.1
- Other packages: opencv-python, imageio, easydict, etc.
Download the checkpoint (model parameters) from BaiduYunNetDisk (password: 9yvv) or GoogleDrive, and make sure the checkpoint (3 files suffixed `.data-00000-of-00001`, `.index` and `.meta` respectively) and a `checkpoint` file indicating the index of the checkpoint are placed under `/checkpoint/ftlayer`.
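For reference, `/test.py` should pick this checkpoint up for you; the snippet below is only a minimal sketch of how a TF 1.x checkpoint in this layout is restored, in case you want to load it yourself:

```python
import tensorflow as tf

# Minimal TF 1.x restore sketch: the `checkpoint` index file lets
# tf.train.latest_checkpoint() resolve the .data/.index/.meta triple.
with tf.Session() as sess:
    ckpt_path = tf.train.latest_checkpoint('checkpoint/ftlayer')
    saver = tf.train.import_meta_graph(ckpt_path + '.meta')
    saver.restore(sess, ckpt_path)
```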
Place your testing HDR images under the `/dataset/test` folder. We recommend using the `.hdr` format; otherwise, you have to go to `/utils/configs.py` and change `config.data.appendix_hdr` to your extension, as long as the `imageio` package supports it.
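For example, assuming you keep your test images as OpenEXR files (the `.exr` extension here is only an illustration), the change in `/utils/configs.py` would look like:

```python
# /utils/configs.py -- switch the expected HDR file extension
# ('.exr' is only an example; any format imageio can read works)
config.data.appendix_hdr = '.exr'
```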
Run `/generate_tfrec.py`. Note that our program will automatically clip some boundary pixels if the image height or width cannot be divided by 8. (Optional) You can set `test_resize_to_half = True` if you later find the GPU out of memory.
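The boundary clipping mentioned above amounts to cropping each image to the largest size divisible by 8; a NumPy sketch of that behaviour (not the repository's exact code):

```python
import numpy as np

def clip_to_multiple_of_8(img):
    # Drop boundary pixels so that height and width are divisible by 8,
    # matching the automatic clipping described above (sketch only).
    h, w = img.shape[:2]
    return img[:h - h % 8, :w - w % 8]
```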
Run `/test.py`; results will be stored under the `/result` folder.
If you want to use the "perceptual loss lp", download `vgg16.npy` here and place it under `/loss/pretrained`; clone Pre-trained VGG-16 and place that repository under `/loss/`.
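For intuition, the perceptual loss lp compares VGG-16 feature maps of the tone-mapped output and the reference; a generic TF 1.x sketch (the layer choice and weighting in the actual code may differ):

```python
import tensorflow as tf

def perceptual_loss(feat_pred, feat_ref):
    # lp as a mean squared distance between VGG-16 feature maps of the
    # network output and the reference image (generic formulation; the
    # repository's exact layers and weights may differ).
    return tf.reduce_mean(tf.square(feat_pred - feat_ref))
```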
Place your HDR and SDR images under `/dataset/train/hdr` and `/dataset/train/sdr`, respectively. Run `/generate_tfrec.py` with `phase = 'training'` & `ft = False` to generate the TFRecord for the separate training of the 2 network branches (NG and NL, i.e. Step 1); run `/generate_tfrec.py` with `phase = 'training'` & `ft = True` to generate the TFRecord for the joint training of the whole network (Step 2).
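In other words, the two runs differ only in the `ft` flag set inside `/generate_tfrec.py` (where exactly these variables live in the script may differ slightly):

```python
# First run -- TFRecord for Step 1 (separate training of NG and NL)
phase = 'training'
ft = False

# Second run -- TFRecord for Step 2 (joint training of the whole network)
phase = 'training'
ft = True
```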
Go to `/utils/configs.py` and change `config.train.train_set_size` to the number of your training pairs, then run the files below with a specific value of `epochs` (see the config sketch after the table):
| file to run | function | saved checkpoint |
|---|---|---|
| `/train_bot.py` | Step 1, training NG | under `/checkpoint/botlayer` |
| `/train_high.py` | Step 1, training NL | under `/checkpoint/highlayer` |
| `/train_ft.py` | Step 2, training the whole network | under `/checkpoint/ftlayer` |
- The program will only save the checkpoints of the 5 latest epochs, and they will be overwritten if training is not stopped.
- You can skip Step 1 at the cost of a potential increase in training difficulty.
- Training can be monitored using TensorBoard by running `/run_tfboard.py`.
We took Deep Reformulated Laplacian Tone Mapping (DRLTM) as the TensorFlow prototype at the early stage of our development; this greatly simplified our coding since we are not experienced in computer science.