A simple benchmark for event-based denoising. For any questions, please contact me at KugaMaxx@outlook.com.
To run the project, the following dependencies are needed.
- Install common dependencies.

```bash
# Install compiler
sudo apt-get install git gcc-10 g++-10 cmake
# Install boost, opencv, eigen3, openblas
sudo apt-get install libboost-dev libopencv-dev libeigen3-dev libopenblas-dev
```
- Install third-party dependencies for dv.

```bash
# Add repository
sudo add-apt-repository ppa:inivation-ppa/inivation
# Update package lists
sudo apt-get update
# Install prerequisite dependencies
sudo apt-get install boost-inivation libcaer-dev libfmt-dev liblz4-dev libzstd-dev libssl-dev
# Install dv
sudo apt-get install dv-processing dv-runtime-dev
```
- Initialize our dv-toolkit, which simplifies the processing of event-based data.

```bash
# Recursively initialize our submodule
git submodule update --init --recursive
```
In this section, the models are built as Python packages using pybind11, allowing you to import them directly in your project. If you are using C++, you can copy the header files in `./include` directly and follow the tutorial to see how to use them.
- Install dependencies for building packages.

```bash
sudo apt-get install python3-dev python3-pybind11
```
- Create a new virtual environment:

```bash
# Create virtual environment
conda create -n emlb python=3.8
# Activate virtual environment
conda activate emlb
# Install requirements
pip install -r requirements.txt
# Install dv-toolkit
pip install external/dv-toolkit/.
```
- Compile with the `-DEMLB_ENABLE_PYTHON` option:

```bash
# Create build folder
mkdir build && cd build
# Configure with Python bindings enabled
CC=gcc-10 CXX=g++-10 cmake .. -DEMLB_ENABLE_PYTHON=ON
# Generate library
cmake --build . --config Release
```
- Run `demo.py` to test:

```bash
python3 demo.py
```
By following the steps below, you will obtain a series of `.so` files in the `./modules` folder; these are third-party modules that can be called by DV software. For how to use them, please refer to the "set up for dv" section in the tutorial.
- Compile with the `-DEMLB_ENABLE_MODULES` option:

```bash
# Create build folder
mkdir build && cd build
# Configure with DV modules enabled
CC=gcc-10 CXX=g++-10 cmake .. -DEMLB_ENABLE_MODULES=ON
# Generate library
cmake --build . --config Release
```
Assuming that libtorch is installed, you can include `-DTORCH_DIR=/path/to/libtorch/` to compile the deep learning models. For example:

```bash
CC=gcc-10 CXX=g++-10 cmake .. \
    -DEMLB_ENABLE_PYTHON=ON \
    -DTORCH_DIR=<path/to/libtorch>/share/cmake/Torch/
```

NOTE: Download the pretrained models here and paste them into the `./modules/net/` folder.
At present, we have implemented the following event-based denoising algorithms.
| Algorithms | Full Name | Year | Language | DV | Cuda |
| --- | --- | --- | --- | --- | --- |
| TS | Time Surface | 2016 | C++ | ✓ | |
| KNoise | Khodamoradi's Noise | 2018 | C++ | ✓ | |
| EvFlow | Event Flow | 2019 | C++ | ✓ | |
| YNoise | Yang's Noise | 2020 | C++ | ✓ | |
| EDnCNN | Event Denoising CNN | 2020 | C++ | ✓ | |
| DWF | Double Window Filter | 2021 | C++ | ✓ | |
| MLPF | Multilayer Perceptron Filter | 2021 | C++ | ✓ | |
| EvZoom | Event Zoom | 2021 | Python | ✓ | |
| GEF | Guided Event Filter | 2021 | Python | ✓ | |
| RED | Recursive Event Denoisor | - | C++ | ✓ | |
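To give a feel for how lightweight these filters can be, here is a simplified, single-window sketch in the spirit of DWF (Guo and Delbruck, 2021). This is not the benchmark's implementation: the class name and the window size, radius, and support threshold below are illustrative choices, and the real double window filter additionally keeps a second FIFO of past noise events.

```python
from collections import deque

class WindowFilter:
    """Simplified single-window sketch of a DWF-style background activity filter.

    An incoming event is kept as signal when at least `support` of the most
    recent events fall within `radius` pixels (L1 distance) of it; otherwise
    it is treated as background-activity noise.
    """

    def __init__(self, window=8, radius=9, support=1):
        self.window = deque(maxlen=window)  # FIFO of recent (x, y) positions
        self.radius = radius
        self.support = support

    def accept(self, x, y):
        # Count recent events spatially close to the incoming one.
        near = sum(1 for (wx, wy) in self.window
                   if abs(wx - x) + abs(wy - y) <= self.radius)
        self.window.append((x, y))
        return near >= self.support
```

Spatially correlated activity passes while isolated events are rejected; note that the very first event after a cold start is also rejected, which practical implementations handle with a burn-in period.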
You can run `eval_denoisor.py` to test one of the above denoising algorithms:

```bash
python eval_denoisor.py \
    --file './data/demo/samples/demo-01.aedat4' \
    --denoisor 'ynoise'
```

- `--file` / `-f`: path of the sequence data.
- `--denoisor`: select a denoising algorithm. You can revise the denoisor's parameters in `./configs/denoisors.py`.
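These flags could be wired up with `argparse` roughly as follows; this is a hypothetical sketch of the interface, not the actual contents of `eval_denoisor.py`:

```python
import argparse

def build_parser():
    """Illustrative sketch of an eval_denoisor.py-style command line."""
    parser = argparse.ArgumentParser(
        description="Test one event-based denoising algorithm")
    parser.add_argument("--file", "-f", required=True,
                        help="path of the sequence data (.aedat4)")
    parser.add_argument("--denoisor", default="ynoise",
                        help="algorithm name; parameters live in ./configs/denoisors.py")
    return parser
```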
NOTE: Some algorithms require libtorch to be installed in advance and the project to be compiled with CUDA.
You can run `eval_benchmark.py` to test all sequences stored in the `./data` folder.

```bash
python eval_benchmark.py \
    --input_path './data' \
    --output_path './result' \
    --denoisor 'ynoise' --store_result --store_score
```

- `--input_path` / `-i`: path of the dataset folder.
- `--output_path` / `-o`: path for saving denoising results.
- `--denoisor`: select a denoising algorithm. You can revise the denoisor's parameters in `./configs/denoisors.py`.
- `--store_result`: turn on storing of denoising results.
- `--store_score`: turn on mean ESR score calculation.
NOTE: The structure of the dataset folder must follow the layout described below.
Download our Event Noisy Dataset (END), including D-END (daytime part) and N-END (night part), then unzip and paste them into the `./data` folder:

```
./data/
├── D-END
│   ├── nd00
│   │   ├── Architecture-ND00-1.aedat4
│   │   ├── Architecture-ND00-2.aedat4
│   │   ├── Architecture-ND00-3.aedat4
│   │   ├── Bicycle-ND00-1.aedat4
│   │   ├── Bicycle-ND00-2.aedat4
│   │   ├── ...
│   ├── nd04
│   │   ├── Architecture-ND04-1.aedat4
│   │   ├── Architecture-ND04-2.aedat4
│   │   ├── ...
│   ├── ...
├── N-END
│   ├── nd00
│   │   ├── ...
│   ├── ...
├── ...
```
You can also paste your customized datasets into the `./data` folder (only aedat4 files are supported at present). They should be arranged in the following structure:

```
./data/
├── <Your Dataset Name>
│   ├── Subclass-1
│   │   ├── Sequences-1.*
│   │   ├── Sequences-2.*
│   │   ├── ...
│   ├── Subclass-2
│   │   ├── Sequences-1.*
│   │   ├── Sequences-2.*
│   │   ├── ...
│   ├── ...
├── ...
```
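A layout like this is straightforward to traverse programmatically. The helper below is an illustrative sketch, not part of the benchmark; the function name and the returned dictionary shape are assumptions:

```python
from pathlib import Path

def collect_sequences(root, suffix=".aedat4"):
    """Group <root>/<Dataset>/<Subclass>/<Sequence> files by dataset and subclass.

    Returns a nested dict {dataset: {subclass: [paths...]}}, keeping only
    files whose name ends with `suffix`.
    """
    tree = {}
    for seq in sorted(Path(root).glob(f"*/*/*{suffix}")):
        dataset, subclass = seq.parts[-3], seq.parts[-2]
        tree.setdefault(dataset, {}).setdefault(subclass, []).append(seq)
    return tree
```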
We provide a general template to facilitate building your own denoising algorithm; see `./configs/denoisors.py`:
```python
from typing import Dict

class your_denoisor:
    def __init__(self, resolution,
                 modified_params: Dict,
                 default_params: Dict) -> None:
        # /*-----------------------------------*/
        # initialize parameters
        # /*-----------------------------------*/
        pass

    def accept(self, events):
        # /*-----------------------------------*/
        # receive noise sequence and process
        # /*-----------------------------------*/
        pass

    def generateEvents(self):
        # /*-----------------------------------*/
        # perform denoising and return result
        # /*-----------------------------------*/
        pass
```
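As a concrete toy example of filling in this template, the class below applies a simple per-pixel refractory period. It is only a sketch: the class name, parameter names, and event layout (a NumPy structured array with `t`, `x`, `y`, `p` fields, resolution given as a `(width, height)` tuple) are illustrative assumptions, not the benchmark's actual conventions.

```python
import numpy as np

class refractory_denoisor:
    """Toy template instance: drop events that re-fire at the same pixel
    within `refractory_us` microseconds of the previous event there."""

    def __init__(self, resolution, modified_params=None, default_params=None):
        # initialize parameters, letting modified_params override defaults
        params = dict(default_params or {"refractory_us": 1000})
        params.update(modified_params or {})
        self.resolution = resolution
        self.refractory_us = params["refractory_us"]
        self._events = None

    def accept(self, events):
        # receive noisy sequence and cache it for processing
        self._events = events

    def generateEvents(self):
        # perform denoising and return the surviving events
        last = np.full(self.resolution, -np.inf)  # last firing time per pixel
        keep = []
        for ev in self._events:
            t, x, y = ev["t"], ev["x"], ev["y"]
            if t - last[x, y] > self.refractory_us:
                keep.append(ev)
            last[x, y] = t
        return np.array(keep, dtype=self._events.dtype)
```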
This repository is derived from E-MLB: Multilevel Benchmark for Event-Based Camera Denoising.
```bibtex
@article{ding2023mlb,
  title     = {E-MLB: Multilevel Benchmark for Event-Based Camera Denoising},
  author    = {Ding, Saizhe and Chen, Jinze and Wang, Yang and Kang, Yu and Song, Weiguo and Cheng, Jie and Cao, Yang},
  journal   = {IEEE Transactions on Multimedia},
  year      = {2023},
  publisher = {IEEE}
}
```
**Time Surface**: Hots: a hierarchy of event-based time-surfaces for pattern recognition

```bibtex
@article{lagorce2016hots,
  title     = {Hots: a hierarchy of event-based time-surfaces for pattern recognition},
  author    = {Lagorce, Xavier and Orchard, Garrick and Galluppi, Francesco and Shi, Bertram E and Benosman, Ryad B},
  journal   = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  pages     = {1346--1359},
  year      = {2016},
  publisher = {IEEE}
}
```
**KNoise**: O(N)-Space Spatiotemporal Filter for Reducing Noise in Neuromorphic Vision Sensors

```bibtex
@article{khodamoradi2018n,
  title     = {O(N)-Space Spatiotemporal Filter for Reducing Noise in Neuromorphic Vision Sensors},
  author    = {Khodamoradi, Alireza and Kastner, Ryan},
  journal   = {IEEE Transactions on Emerging Topics in Computing},
  year      = {2018},
  publisher = {IEEE}
}
```
**EvFlow**: EV-Gait: Event-based robust gait recognition using dynamic vision sensors

```bibtex
@inproceedings{wang2019ev,
  title     = {EV-Gait: Event-based robust gait recognition using dynamic vision sensors},
  author    = {Wang, Yanxiang and Du, Bowen and Shen, Yiran and Wu, Kai and Zhao, Guangrong and Sun, Jianguo and Wen, Hongkai},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages     = {6358--6367},
  year      = {2019}
}
```
**YNoise**: Event density based denoising method for dynamic vision sensor

```bibtex
@article{feng2020event,
  title     = {Event density based denoising method for dynamic vision sensor},
  author    = {Feng, Yang and Lv, Hengyi and Liu, Hailong and Zhang, Yisa and Xiao, Yuyao and Han, Chengshan},
  journal   = {Applied Sciences},
  year      = {2020},
  publisher = {MDPI}
}
```
**EDnCNN**: Event probability mask (EPM) and event denoising convolutional neural network (EDnCNN) for neuromorphic cameras

```bibtex
@inproceedings{baldwin2020event,
  title     = {Event probability mask (epm) and event denoising convolutional neural network (edncnn) for neuromorphic cameras},
  author    = {Baldwin, R and Almatrafi, Mohammed and Asari, Vijayan and Hirakawa, Keigo},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages     = {1701--1710},
  year      = {2020}
}
```
**DWF & MLPF**: Low Cost and Latency Event Camera Background Activity Denoising

```bibtex
@article{guo2022low,
  title     = {Low Cost and Latency Event Camera Background Activity Denoising},
  author    = {Guo, Shasha and Delbruck, Tobi},
  journal   = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year      = {2022},
  publisher = {IEEE}
}
```
**EvZoom**: EventZoom: Learning to denoise and super resolve neuromorphic events

```bibtex
@inproceedings{duan2021eventzoom,
  title     = {EventZoom: Learning to denoise and super resolve neuromorphic events},
  author    = {Duan, Peiqi and Wang, Zihao W and Zhou, Xinyu and Ma, Yi and Shi, Boxin},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages     = {12824--12833},
  year      = {2021}
}
```
**GEF**: Guided event filtering: Synergy between intensity images and neuromorphic events for high performance imaging

```bibtex
@article{duan2021guided,
  title     = {Guided event filtering: Synergy between intensity images and neuromorphic events for high performance imaging},
  author    = {Duan, Peiqi and Wang, Zihao W and Shi, Boxin and Cossairt, Oliver and Huang, Tiejun and Katsaggelos, Aggelos K},
  journal   = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year      = {2021},
  publisher = {IEEE}
}
```
Special thanks to Yang Wang.