
Event-based Video Reconstruction Using Transformer

Official implementation of the following paper:

Event-based Video Reconstruction Using Transformer, by Wenming Weng, Yueyi Zhang, and Zhiwei Xiong. In ICCV 2021.

Dataset

The HQF, MVSEC, and IJRR datasets can be prepared by following the instructions in this repo. Note that MVSEC and IJRR are cut for better evaluation; the exact cut times can be found in the supplementary material.

Pretrained model

The pretrained model, which reproduces the quantitative results in the paper, will be released at this site.

Inference

To reconstruct intensity images with our ET-Net, E2VID, E2VID+, FireNet, or FireNet+, run scripts/eval.py. The paths to the pretrained model, the dataset, and the output files should be specified. An example is provided in scripts/eval.py; see that script for details, and see the sketch below for the kind of path setup it expects.
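As a rough illustration only: the variable names and directory layout below are assumptions, not the actual code in scripts/eval.py, which defines its own paths. The sketch shows the three locations you need to fill in before running evaluation.

```python
# Hypothetical path setup for running evaluation; the names and layout here are
# illustrative -- adapt them to the actual variables used in scripts/eval.py.
from pathlib import Path

PRETRAINED_MODEL = Path("checkpoints/etnet.pth")  # released ET-Net weights (hypothetical filename)
DATASET_ROOT     = Path("data/HQF")               # HQF / MVSEC / IJRR prepared as in the Dataset section
OUTPUT_DIR       = Path("results/HQF")            # where reconstructed intensity images are written

# Basic sanity checks before launching evaluation.
OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
assert PRETRAINED_MODEL.exists(), "download the pretrained model first"
assert DATASET_ROOT.exists(), "prepare the dataset first"
```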

Citation

If you find this work helpful, please consider citing our paper:

@InProceedings{Weng_2021_ICCV,
    author    = {Weng, Wenming and Zhang, Yueyi and Xiong, Zhiwei},
    title     = {Event-based Video Reconstruction Using Transformer},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    year      = {2021},
}

Related Projects

Reducing the Sim-to-Real Gap for Event Cameras, ECCV'20

High Speed and High Dynamic Range Video with an Event Camera, TPAMI'19

Contact

If you have any problems with the released code, please do not hesitate to contact me by email (wmweng@mail.ustc.edu.cn).
