gupta-abhay / DeepEvent-VO

Fusing Event Frame Streams in DeepVO

Build: failing | Status: alpha

DeepEvent-VO: Fusing Intensity and Event Frames for End-to-End Deep Visual Odometry

This is the project that @epiception, @SuhitK, and I worked on for the Robot Localization and Mapping (16-833) course at CMU in Spring 2019. The motivation of the project is to see whether DeepVO can be enhanced by fusing event frames with standard intensity frames. To read more about event cameras and event-based SLAM, see this link.

Installation Instructions

This is a PyTorch implementation. The code has been tested with PyTorch 0.4.1, CUDA 9.1, and cuDNN 7.1.2.
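
To confirm that your environment matches, you can query the installed versions directly from Python (a quick sanity check, not specific to this repository):

```python
import torch

print(torch.__version__)               # expect 0.4.1
print(torch.version.cuda)              # expect 9.1
print(torch.backends.cudnn.version())  # expect 7102, i.e. cuDNN 7.1.2
print(torch.cuda.is_available())       # should be True for GPU training
```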

Dependencies

We have a dependency on a few packages, in particular tqdm, scikit-image, tensorboardX, and matplotlib. They can be installed using standard pip as below:

pip install scipy scikit-image matplotlib tqdm tensorboardX

To replicate the conda environment that we used for our training and evaluation, run:

conda env create -f requirements.yml

The model assumes that FlowNet pre-trained weights are available for training. You can download the weights from @ClementPinard's implementation; specifically, we need the weights for FlowNetS (flownets_EPE1.951.pth.tar). Instructions for downloading the weights are in the README given there.
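
As a rough sketch of how such a checkpoint is typically consumed (the 'state_dict' layout is the usual convention in @ClementPinard's checkpoints; the commented load_state_dict call is illustrative, not this repository's exact code):

```python
import torch

# The .pth.tar file is a torch-serialized dict, not a tar archive.
checkpoint = torch.load('flownets_EPE1.951.pth.tar', map_location='cpu')
print(sorted(checkpoint.keys()))    # typically includes 'state_dict'

weights = checkpoint['state_dict']  # the FlowNetS convolutional weights
# A VO model would seed its encoder from these, e.g.
#   model.load_state_dict(weights, strict=False)
# with strict=False so layers absent from FlowNetS (the recurrent and
# pose-regression layers) keep their fresh initialization.
```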

Datasets

This model assumes the MVSEC datasets, available from the Daniilidis Group at the University of Pennsylvania. The code to sync the dataframes for event and intensity frames, along with poses, will be released soon.
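
Until then, here is a minimal sketch of one common way to align the streams: match each intensity frame to the event frame and pose sample with the nearest timestamp. The array names and the nearest-neighbour policy are assumptions, not the repository's released code:

```python
import numpy as np

def nearest_indices(reference_ts, query_ts):
    """For each reference timestamp, return the index of the closest
    timestamp in the sorted array query_ts."""
    idx = np.searchsorted(query_ts, reference_ts)
    idx = np.clip(idx, 1, len(query_ts) - 1)
    left, right = query_ts[idx - 1], query_ts[idx]
    # Step back one index wherever the left neighbour is closer.
    return idx - (reference_ts - left < right - reference_ts)

# Example: align event frames and ground-truth poses to the image clock.
image_ts = np.array([0.00, 0.05, 0.10])
event_ts = np.array([0.01, 0.04, 0.06, 0.11])
pose_ts = np.linspace(0.0, 0.12, 13)
print(nearest_indices(image_ts, event_ts))  # -> [0 2 3]
print(nearest_indices(image_ts, pose_ts))
```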

Running Code

To run the code for the base DeepVO results (without any fusion), run the following from the base directory of the repository:

sh exec.sh

To run the fusion model, from the base directory of the repository, run:

sh exec_fusion.sh
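
The script above drives the repository's actual fusion network. Purely as a hedged illustration of the general idea, two FlowNet-style encoders whose features are concatenated before a recurrent pose regressor in the spirit of DeepVO, here is a toy sketch; the class name, layer sizes, and concatenation strategy are all assumptions, not the repository's architecture:

```python
import torch
import torch.nn as nn

class FusionVO(nn.Module):
    """Toy two-stream DeepVO-style model (illustrative only)."""

    def __init__(self, feat_dim=512, hidden=1000):
        super(FusionVO, self).__init__()
        # Stand-ins for the FlowNetS encoders; each consumes a stacked
        # pair of consecutive frames (2 x 3 channels).
        self.intensity_enc = nn.Sequential(
            nn.Conv2d(6, feat_dim, 7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.event_enc = nn.Sequential(
            nn.Conv2d(6, feat_dim, 7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.rnn = nn.LSTM(2 * feat_dim, hidden, num_layers=2,
                           batch_first=True)
        self.pose = nn.Linear(hidden, 6)  # 3 translation + 3 rotation

    def forward(self, intensity_pairs, event_pairs):
        # Both inputs: (batch, seq, 6, H, W) stacked frame pairs.
        b, s = intensity_pairs.shape[:2]
        fi = self.intensity_enc(
            intensity_pairs.view(b * s, *intensity_pairs.shape[2:]))
        fe = self.event_enc(
            event_pairs.view(b * s, *event_pairs.shape[2:]))
        fused = torch.cat([fi.view(b, s, -1), fe.view(b, s, -1)], dim=2)
        out, _ = self.rnn(fused)
        return self.pose(out)  # (batch, seq, 6) relative pose per pair
```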
