ercanburak / HyperE2VID

HyperE2VID: Improving Event-Based Video Reconstruction via Hypernetworks (IEEE Transactions on Image Processing, 2024)

Home Page: https://ercanburak.github.io/HyperE2VID.html

HyperE2VID: Improving Event-Based Video Reconstruction via Hypernetworks

This is the official repository for our paper "HyperE2VID: Improving Event-Based Video Reconstruction via Hypernetworks" by Burak Ercan, Onur Eker, Canberk Sağlam, Aykut Erdem, and Erkut Erdem, published in IEEE Transactions on Image Processing, 2024.

In this work, we present HyperE2VID, a dynamic neural network architecture for event-based video reconstruction. Our approach extends existing static architectures with hypernetworks and dynamic convolutions that generate per-pixel adaptive filters, guided by a context fusion module that combines information from event voxel grids and previously reconstructed intensity images. We show that this dynamic architecture generates higher-quality videos than previous state-of-the-art methods while also reducing memory consumption and inference time.

Overview of our proposed HyperE2VID architecture
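
To make the core idea concrete, the sketch below shows one way per-pixel adaptive filtering via a hypernetwork could be implemented in PyTorch. It is a minimal illustration, not the code in this repository: the module names (`ContextFusion`, `PerPixelDynamicConv`), the 5-bin voxel grid, the 1x1-conv hypernetwork head, and the softmax-normalized filters shared across channels are all simplifying assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextFusion(nn.Module):
    """Hypothetical context fusion: mixes the event voxel grid with the
    previously reconstructed intensity frame via a small convolution."""
    def __init__(self, voxel_bins=5, ctx_channels=16):
        super().__init__()
        self.mix = nn.Sequential(
            nn.Conv2d(voxel_bins + 1, ctx_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, voxel_grid, prev_frame):
        # (B, bins, H, W) events + (B, 1, H, W) previous frame -> (B, ctx, H, W)
        return self.mix(torch.cat([voxel_grid, prev_frame], dim=1))

class PerPixelDynamicConv(nn.Module):
    """Hypernetwork branch: predicts a KxK filter at every spatial location
    from the fused context and applies it to the feature map."""
    def __init__(self, ctx_channels=16, kernel_size=3):
        super().__init__()
        self.k = kernel_size
        # 1x1 conv as the hypernetwork head: context -> per-pixel filter weights.
        self.filter_head = nn.Conv2d(ctx_channels, kernel_size ** 2, 1)

    def forward(self, features, context):
        b, c, h, w = features.shape
        k = self.k
        # (B, K*K, H, W) filters, normalized over the kernel window per pixel.
        filters = F.softmax(self.filter_head(context), dim=1)
        # Gather the KxK neighborhood of every pixel: (B, C, K*K, H*W).
        patches = F.unfold(features, k, padding=k // 2).view(b, c, k * k, h * w)
        # Weighted sum over each window = spatially adaptive filtering.
        out = (patches * filters.view(b, 1, k * k, h * w)).sum(dim=2)
        return out.view(b, c, h, w)

# Example: decoder features filtered by context-dependent per-pixel kernels.
voxel = torch.randn(1, 5, 128, 128)   # event voxel grid with 5 temporal bins
prev = torch.zeros(1, 1, 128, 128)    # previously reconstructed frame
feats = torch.randn(1, 32, 128, 128)  # some intermediate feature map
ctx = ContextFusion()(voxel, prev)
out = PerPixelDynamicConv()(feats, ctx)  # shape: (1, 32, 128, 128)
```

Because the filters are predicted from the fused event/image context, they vary from pixel to pixel and from frame to frame; this is the adaptivity that a static convolutional decoder lacks.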

Citations

If you use the code in this repository in an academic context, please cite the following:

@article{ercan2023hypere2vid,
  title={{HyperE2VID}: Improving Event-Based Video Reconstruction via Hypernetworks},
  author={Ercan, Burak and Eker, Onur and Saglam, Canberk and Erdem, Aykut and Erdem, Erkut},
  journal={arXiv preprint arXiv:2305.06382},
  year={2023}
}

Acknowledgements

About

License: MIT License

Languages

Language: Python 100.0%