
Official PyTorch implementation of our AAAI22 paper: TransMEF: A Transformer-Based Multi-Exposure Image Fusion Framework via Self-Supervised Multi-Task Learning.


Official-PyTorch-Implementation-of-TransMEF

This is a PyTorch/GPU implementation of the AAAI 2022 paper TransMEF: A Transformer-Based Multi-Exposure Image Fusion Framework using Self-Supervised Multi-Task Learning.

For training

  • We use the MS-COCO dataset for self-supervised training and all images are converted to 256 * 256 grayscale images.
  • For a quick start, please run
python train_TransMEF.py --root './coco' --batch_size 24 --save_path './train_result_TransMEF' --summary_name 'TransMEF_quick_start_'
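The 256 * 256 grayscale conversion mentioned above can be sketched as follows (a minimal illustration with Pillow; the function name and paths are placeholders, not the repo's actual preprocessing script):

```python
from PIL import Image

def to_gray_256(src_path, dst_path):
    """Convert one image to a 256 x 256 grayscale PNG (mode 'L')."""
    img = Image.open(src_path).convert("L")       # drop color channels
    img = img.resize((256, 256), Image.BILINEAR)  # fixed training size
    img.save(dst_path, format="PNG")
```

Running this over every MS-COCO image yields the grayscale training set the command above expects under `./coco`.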

For fusion

  • We use the benchmark dataset MEFB for evaluation, and all images are converted to 256 * 256 grayscale PNG images. Note that our test metrics may not be exactly consistent with those reported in the MEFB paper because of this resizing and format conversion.

    We provide a convenient implementation of the transformation. Please refer to resize_all.py for details.

    We provide an example of the dataset here. Please note the data path and format!

  • For a quick start, please run

python fusion_gray_TransMEF.py --model_path './best_model.pth' --test_path './MEFB_dataset/' --result_path './TransMEF_result' 
  • Managing RGB Input

    We use the code of hanna-xu to convert the fused grayscale image into a color image.
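One common way to do this recombination (a sketch of the usual weighted-chroma rule, not necessarily hanna-xu's exact code; all names and paths here are illustrative): keep the network's fused output as the Y channel, then blend the Cb/Cr channels of the two exposures weighted by their distance from the neutral value 128.

```python
import numpy as np
from PIL import Image

def fuse_color(fused_y_path, src1_path, src2_path, out_path):
    """Recombine a fused grayscale (Y) image with chrominance taken
    from the two exposure sources; all three images must share the
    same spatial size."""
    y = np.asarray(Image.open(fused_y_path).convert("L"))
    ycc1 = np.asarray(Image.open(src1_path).convert("YCbCr"), dtype=np.float64)
    ycc2 = np.asarray(Image.open(src2_path).convert("YCbCr"), dtype=np.float64)

    out = np.empty(ycc1.shape, dtype=np.uint8)
    out[..., 0] = y                              # fused luminance
    for c in (1, 2):                             # Cb, Cr channels
        c1, c2 = ycc1[..., c], ycc2[..., c]
        w1, w2 = np.abs(c1 - 128.0), np.abs(c2 - 128.0)
        denom = w1 + w2
        # weighted average; fall back to neutral chroma where both weights are 0
        fused = np.where(denom == 0, 128.0,
                         (c1 * w1 + c2 * w2) / np.where(denom == 0, 1.0, denom))
        out[..., c] = np.round(fused).astype(np.uint8)

    Image.fromarray(out, mode="YCbCr").convert("RGB").save(out_path, format="PNG")
```

Chroma far from 128 carries more color information, so weighting by |C - 128| keeps the saturated channel of whichever exposure is better exposed at each pixel.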

  • Managing arbitrary input-size images

    We recommend using a sliding-window strategy to fuse input images whose size is not 256 * 256, i.e., fusing one 256 * 256 window at a time.

    We will make this part available soon!
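Until that code is released, the strategy can be sketched as below (a hypothetical illustration: `fuse_fn` stands in for a call to the trained TransMEF model on one 256 * 256 window, and the non-overlapping tiling is a simplification of a general sliding window):

```python
import numpy as np

def fuse_tiled(img_a, img_b, fuse_fn, win=256):
    """Fuse two equally-sized grayscale arrays of arbitrary size by
    processing one win x win window at a time (non-overlapping tiles)."""
    assert img_a.shape == img_b.shape
    h, w = img_a.shape
    # pad to a multiple of the window size with edge replication
    ph, pw = (-h) % win, (-w) % win
    a = np.pad(img_a, ((0, ph), (0, pw)), mode="edge")
    b = np.pad(img_b, ((0, ph), (0, pw)), mode="edge")
    out = np.zeros_like(a)
    for i in range(0, a.shape[0], win):
        for j in range(0, a.shape[1], win):
            out[i:i + win, j:j + win] = fuse_fn(a[i:i + win, j:j + win],
                                               b[i:i + win, j:j + win])
    return out[:h, :w]  # crop the padding back off
```

Overlapping windows with blending at the seams would reduce tile-boundary artifacts, at the cost of more forward passes.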

  • Best model in this paper

    Please refer to this link (Google Drive) or this link (Baidu Disk) for the best model used in this paper.

Fusion results of TransMEF

Main fusion results

Supplementary outdoor fusion results

Supplementary indoor fusion results

Evaluation metrics of TransMEF

Main metrics

Citation

If this work is helpful to you, please cite it as:

@article{qu2021transmef,
  title={TransMEF: A Transformer-Based Multi-Exposure Image Fusion Framework using Self-Supervised Multi-Task Learning},
  author={Qu, Linhao and Liu, Shaolei and Wang, Manning and Song, Zhijian},
  journal={arXiv preprint arXiv:2112.01030},
  year={2021}
}

Contact Information

If you have any questions, please email me at lhqu20@fudan.edu.cn.
