ChaoWei0606 / EnlightenGAN

[Preprint] "EnlightenGAN: Deep Light Enhancement without Paired Supervision"


EnlightenGAN

EnlightenGAN: Deep Light Enhancement without Paired Supervision

Representative Results

(figure: representative results)

Overall Architecture

(figure: overall architecture)

Environment Preparing

Python 3.5

You need at least three 1080 Ti GPUs, or reduce the batch size to fit your hardware.
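If you have fewer GPUs than the three the authors assume, the usual fix is to shrink the batch proportionally so the per-GPU memory load stays roughly constant. A minimal sketch of that rule (the helper name and the example batch sizes are ours, not from the repository; use the batch size from the actual training options):

```python
# Hedged sketch, not repository code: scale the total batch size down when
# fewer GPUs are available, keeping the per-GPU batch constant.
def scaled_batch_size(total_batch: int, reference_gpus: int, available_gpus: int) -> int:
    """Return a batch size that keeps per-GPU load constant on fewer GPUs."""
    if available_gpus >= reference_gpus:
        return total_batch
    per_gpu = max(1, total_batch // reference_gpus)  # batch each GPU handles
    return per_gpu * available_gpus
```

For example, a batch of 30 tuned for 3 GPUs becomes 10 on a single GPU.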

pip install -r requirement.txt
mkdir model
Download the VGG pretrained model from [Google Drive 1] or [2], then put it into the directory model.

Training Process

Before starting the training process, launch visdom.server for visualization:

nohup python -m visdom.server -port=8097
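Before launching training, it can be worth confirming that the visdom server started above is actually reachable. A minimal, library-agnostic sketch using a plain TCP probe (the host and default port 8097 mirror the command line; nothing here is specific to visdom's API):

```python
import socket

# Hedged sketch: probe whether any TCP server accepts connections on
# host:port, e.g. the visdom server started with `nohup python -m visdom.server`.
def server_listening(host: str = "localhost", port: int = 8097) -> bool:
    """True if a TCP server accepts connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) == 0
```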

Then run the following command:

python scripts/script.py --train
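A wrapper like scripts/script.py typically just parses a mode flag and assembles the real command to run. The sketch below shows that dispatch pattern; the flag names match the README, but the inner train.py / predict.py entry points are assumptions for illustration, not the repository's actual files:

```python
import argparse

# Hedged sketch of a --train / --predict dispatcher, not the repository's code.
def build_command(argv):
    """Map a mode flag to the command the wrapper would execute."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--train", action="store_true")
    parser.add_argument("--predict", action="store_true")
    opts = parser.parse_args(argv)
    if opts.train:
        return ["python", "train.py"]
    if opts.predict:
        return ["python", "predict.py"]
    raise SystemExit("pass --train or --predict")
```

In a real script the returned command would then be run with subprocess.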

Testing Process

Download the pretrained model and put it into ./checkpoints/enlightening, then run:

python scripts/script.py --predict

Dataset Preparing

Training data [Google Drive] (unpaired images collected from multiple datasets)

Testing data [Google Drive] (including LIME, MEF, NPE, VV, DICP)

If you find this work useful, please cite:

@article{jiang2019enlightengan,
  title={EnlightenGAN: Deep Light Enhancement without Paired Supervision},
  author={Jiang, Yifan and Gong, Xinyu and Liu, Ding and Cheng, Yu and Fang, Chen and Shen, Xiaohui and Yang, Jianchao and Zhou, Pan and Wang, Zhangyang},
  journal={arXiv preprint arXiv:1906.06972},
  year={2019}
}
