Official PyTorch implementation for the "EvTexture: Event-driven Texture Enhancement for Video Super-Resolution" paper (ICML 2024).
Project | Paper | Poster
Authors: Dachun Kai, Jiayao Lu, Yueyi Zhang, Xiaoyan Sun, University of Science and Technology of China
Feel free to ask questions. If our work helps, please don't hesitate to give us a star ⭐!
- Release training code
- Release details to prepare datasets
- 2024/06/08: Publish docker image
- 2024/06/08: Release pretrained models and test sets for quick testing
- 2024/06/07: Video demos released
- 2024/05/25: Initialize the repository
- 2024/05/02: Our paper was accepted at ICML 2024
Video demos:
- Vid4_City.mp4
- Vid4_Foliage.mp4
- REDS_000.mp4
- REDS_011.mp4
- Dependencies: Miniconda, CUDA Toolkit 11.1.1, torch 1.10.2+cu111, and torchvision 0.11.3+cu111.
- Run in Conda:

  ```bash
  conda create -y -n evtexture python=3.7
  conda activate evtexture
  pip install torch-1.10.2+cu111-cp37-cp37m-linux_x86_64.whl
  pip install torchvision-0.11.3+cu111-cp37-cp37m-linux_x86_64.whl
  git clone https://github.com/DachunKai/EvTexture.git
  cd EvTexture && pip install -r requirements.txt && python setup.py develop
  ```
- Run in Docker:

  Note: before running the Docker image, make sure to install nvidia-docker by following the official instructions.

  [Option 1] Directly pull the published Docker image we provide from Alibaba Cloud:

  ```bash
  docker pull registry.cn-hangzhou.aliyuncs.com/dachunkai/evtexture:latest
  ```

  [Option 2] Build the image yourself from the provided Dockerfile:

  ```bash
  cd EvTexture && docker build -t evtexture ./docker
  ```

  The pulled or self-built Docker image contains a complete conda environment named `evtexture`. After running the image, you can mount your data and operate within this environment:

  ```bash
  source activate evtexture && cd EvTexture && python setup.py develop
  ```
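To mount your data into a running container, an invocation along the following lines can be used; the mount paths and the `--gpus all` flag (which requires the nvidia-docker setup mentioned above) are illustrative placeholders, not commands from the repo:

```shell
# Hypothetical launch command: adjust mount paths and the image tag to your setup.
docker run -it --gpus all \
  -v "$(pwd)/datasets:/EvTexture/datasets" \
  -v "$(pwd)/experiments:/EvTexture/experiments" \
  registry.cn-hangzhou.aliyuncs.com/dachunkai/evtexture:latest /bin/bash
```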
- Download the pretrained models from (Releases / OneDrive / Google Drive / Baidu Cloud (n8hg)) and place them in `experiments/pretrained_models/EvTexture/`. The network architecture code is in `evtexture_arch.py`.
  - `EvTexture_REDS_BIx4.pth`: trained on the REDS dataset with BI degradation for $4\times$ SR scale.
  - `EvTexture_Vimeo90K_BIx4.pth`: trained on the Vimeo-90K dataset with BI degradation for $4\times$ SR scale.
- Download the preprocessed test sets (including events) for REDS4 and Vid4 from (Releases / OneDrive / Google Drive / Baidu Cloud (n8hg)), and place them in `datasets/`.
  - `Vid4_h5`: HDF5 files containing the preprocessed Vid4 test set.
  - `REDS4_h5`: HDF5 files containing the preprocessed REDS4 test set.
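If you want to peek inside the downloaded HDF5 files before running inference, a minimal sketch with `h5py` lists every dataset path they contain (the example file path in the comment is hypothetical; the actual internal layout depends on the release):

```python
import h5py

def list_h5_contents(path: str) -> list:
    """Recursively collect the paths of all datasets in an HDF5 file."""
    entries = []
    with h5py.File(path, "r") as f:
        # visititems walks groups and datasets; keep only datasets.
        f.visititems(lambda name, obj: entries.append(name)
                     if isinstance(obj, h5py.Dataset) else None)
    return entries

# Example (hypothetical path):
# print(list_h5_contents("datasets/Vid4_h5/city.h5"))
```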
- Run the following commands:
  - Test on Vid4 for 4x VSR:

    ```bash
    ./scripts/dist_test.sh [num_gpus] options/test/EvTexture/test_EvTexture_Vid4_BIx4.yml
    ```

  - Test on REDS4 for 4x VSR:

    ```bash
    ./scripts/dist_test.sh [num_gpus] options/test/EvTexture/test_EvTexture_REDS4_BIx4.yml
    ```

  This will generate the inference results in `results/`. The output results on REDS4 and Vid4 can be downloaded from (Releases / OneDrive / Google Drive / Baidu Cloud (n8hg)).
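VSR outputs are typically compared against ground-truth frames with PSNR. A minimal NumPy sketch of that metric (the function name and the 255 peak value for 8-bit images are our choices for illustration, not code from this repo):

```python
import numpy as np

def psnr(img1: np.ndarray, img2: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```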
If you find the code and pre-trained models useful for your research, please consider citing our paper.
@inproceedings{kai2024evtexture,
title={Ev{T}exture: {E}vent-driven {T}exture {E}nhancement for {V}ideo {S}uper-{R}esolution},
author={Kai, Dachun and Lu, Jiayao and Zhang, Yueyi and Sun, Xiaoyan},
booktitle={International Conference on Machine Learning},
year={2024},
organization={PMLR}
}
If you encounter any problems, please open an issue or contact:
- Dachun Kai: dachunkai@mail.ustc.edu.cn
This project is released under the Apache-2.0 license. Our work is built upon BasicSR, an open-source toolbox for image/video restoration tasks. Thanks to the inspirations and code from RAFT and event_utils.