[ICML 2024] EvTexture: Event-driven Texture Enhancement for Video Super-Resolution

Home Page: https://dachunkai.github.io/evtexture.github.io/

Official PyTorch implementation of the paper "EvTexture: Event-driven Texture Enhancement for Video Super-Resolution" (ICML 2024).

🌐 Project | 📃 Paper | 🖼️ Poster

Authors: Dachun Kai 📧, Jiayao Lu, Yueyi Zhang 📧, Xiaoyan Sun (University of Science and Technology of China)

Feel free to ask questions. If our work helps, please don't hesitate to give us a ⭐!

🚀 News

  • To do: release the training code
  • To do: release details on preparing the datasets
  • 2024/06/08: Published the Docker image
  • 2024/06/08: Released pretrained models and test sets for quick testing
  • 2024/06/07: Released video demos
  • 2024/05/25: Initialized the repository
  • 2024/05/02: 🎉🎉 Our paper was accepted at ICML 2024

🔖 Table of Contents

  1. Video Demos
  2. Code
  3. Citation
  4. Contact
  5. License and Acknowledgement

🔥 Video Demos

$4\times$ upsampling results on the Vid4 and REDS4 test sets.

Vid4_City.mp4
Vid4_Foliage.mp4
REDS_000.mp4
REDS_011.mp4

Code

Installation

  • Dependencies: Miniconda, CUDA Toolkit 11.1.1, torch 1.10.2+cu111, and torchvision 0.11.3+cu111.

  • Run in Conda

    conda create -y -n evtexture python=3.7
    conda activate evtexture
    # install the CUDA 11.1 wheels of torch/torchvision (download them first,
    # e.g. from https://download.pytorch.org/whl/torch_stable.html)
    pip install torch-1.10.2+cu111-cp37-cp37m-linux_x86_64.whl
    pip install torchvision-0.11.3+cu111-cp37-cp37m-linux_x86_64.whl
    git clone https://github.com/DachunKai/EvTexture.git
    cd EvTexture && pip install -r requirements.txt && python setup.py develop
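
    A quick optional check (not part of the original instructions) confirms that the CUDA build of torch is visible; it should print the installed versions and True if a GPU is available:

    # prints torch/torchvision versions and CUDA availability
    python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__, torch.cuda.is_available())"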
  • Run in Docker 👍

    Note: before running the Docker image, make sure to install nvidia-docker by following the official instructions.

    [Option 1] Pull the published Docker image we provide from Alibaba Cloud.

    docker pull registry.cn-hangzhou.aliyuncs.com/dachunkai/evtexture:latest

    [Option 2] We also provide a Dockerfile that you can use to build the image yourself.

    cd EvTexture && docker build -t evtexture ./docker

    The pulled or self-built Docker image contains a complete conda environment named evtexture. Start a container from the image (a sketch of the docker run command follows below), mount your data, and then set up the environment inside it:

    source activate evtexture && cd EvTexture && python setup.py develop
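
    For reference, a container can be started with GPU access and a data mount along these lines (a sketch; the host path /path/to/your/data is a placeholder):

    # start an interactive container with all GPUs and your data mounted at /data
    docker run --gpus all -it -v /path/to/your/data:/data registry.cn-hangzhou.aliyuncs.com/dachunkai/evtexture:latest /bin/bash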

Test

  1. Download the pretrained models from (Releases / OneDrive / Google Drive / Baidu Cloud (n8hg)) and place them in experiments/pretrained_models/EvTexture/. The network architecture code is in evtexture_arch.py.

    • EvTexture_REDS_BIx4.pth: trained on REDS dataset with BI degradation for $4\times$ SR scale.
    • EvTexture_Vimeo90K_BIx4.pth: trained on Vimeo-90K dataset with BI degradation for $4\times$ SR scale.
  2. Download the preprocessed test sets (including events) for REDS4 and Vid4 from (Releases / OneDrive / Google Drive / Baidu Cloud (n8hg)), and place them in datasets/.

    • Vid4_h5: HDF5 files containing preprocessed test datasets for Vid4.

    • REDS4_h5: HDF5 files containing preprocessed test datasets for REDS4.

  3. Run the following commands (replace [num_gpus] with the number of GPUs to use; a single-GPU example follows this list):

    • Test on Vid4 for 4x VSR:
      ./scripts/dist_test.sh [num_gpus] options/test/EvTexture/test_EvTexture_Vid4_BIx4.yml
    • Test on REDS4 for 4x VSR:
      ./scripts/dist_test.sh [num_gpus] options/test/EvTexture/test_EvTexture_REDS4_BIx4.yml

    Either command writes its inference results to results/. Our output results on REDS4 and Vid4 can be downloaded from (Releases / OneDrive / Google Drive / Baidu Cloud (n8hg)).
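
    For example, with a single GPU, the Vid4 command from step 3 becomes:

      # 4x VSR on Vid4 with 1 GPU; outputs are written to results/
      ./scripts/dist_test.sh 1 options/test/EvTexture/test_EvTexture_Vid4_BIx4.yml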

😊 Citation

If you find the code and pre-trained models useful for your research, please consider citing our paper. 😃

@inproceedings{kai2024evtexture,
  title={Ev{T}exture: {E}vent-driven {T}exture {E}nhancement for {V}ideo {S}uper-{R}esolution},
  author={Kai, Dachun and Lu, Jiayao and Zhang, Yueyi and Sun, Xiaoyan},
  booktitle={International Conference on Machine Learning},
  year={2024},
  organization={PMLR}
}

Contact

If you run into any problems, please describe them in the issues or reach out to the corresponding authors marked with 📧 above.

License and Acknowledgement

This project is released under the Apache-2.0 license. Our work is built upon BasicSR, an open-source toolbox for image and video restoration tasks. Thanks to the inspiration and code from RAFT and event_utils.
