
MOTSFusion: Track to Reconstruct and Reconstruct to Track


Method Overview

Introduction

This repository contains the source code for the paper "Track to Reconstruct and Reconstruct to Track" (arXiv preprint).

Requirements

The code was tested on:

  • CUDA 9, cuDNN 7
  • Tensorflow 1.13
  • Python 3.6

Note: Some of the external code invoked by the precompute script has its own requirements (see References). Please refer to the corresponding repositories for the requirements of those components.
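
To verify that your environment matches the tested setup, you can print the TensorFlow build information. This is a minimal sketch; it only inspects the versions listed above.

# Sanity check for the tested setup (TensorFlow 1.13, CUDA 9, cuDNN 7).
import tensorflow as tf

print("TensorFlow:", tf.__version__)                 # expected: 1.13.x
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("GPU available:", tf.test.is_gpu_available())  # requires a visible CUDA device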

Instructions

First, download the datasets listed under References (stereo image pairs) as well as the detections, and adapt the config files in ./configs to your desired setup. The file "config_default" will run both 2D and 3D tracking on the KITTI MOTS validation set. Next, download our pretrained segmentation network and extract it into './external/BB2SegNet'. Before running the main script, run:

python precompute.py -config ./configs/config_default
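
Once the script finishes, you can optionally check that the expected outputs were written. The sketch below is hypothetical: the directory names are illustrative assumptions only, since the actual output locations are defined by your config file.

# Hypothetical check that the precompute step produced output.
# The directory names below are assumptions for illustration; use the paths
# set in your config file (e.g. ./configs/config_default) instead.
import os

precomputed_dirs = [
    "./data/segmentations",   # assumed location of the segmentation masks
    "./data/optical_flow",    # assumed location of the optical flow
    "./data/disparity",       # assumed location of the disparity maps
    "./data/point_clouds",    # assumed location of the point clouds
]

for d in precomputed_dirs:
    status = "ok" if os.path.isdir(d) and os.listdir(d) else "missing or empty"
    print(d, "->", status)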

At this point, all necessary information such as segmentations, optical flow, disparity and the corresponding point clouds should have been computed. Now you can run the tracker using:

python main.py -config ./configs/config_default

After the tracker has completed all sequences, results will be evaluated automatically.
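
If you want to run both steps back to back, a small driver script can chain them. This is a minimal sketch built only on the two commands above; CONFIG is an assumption (use whichever config file you adapted).

# Hypothetical end-to-end driver: precompute, then track and evaluate.
import subprocess

CONFIG = "./configs/config_default"  # replace with your adapted config

# Step 1: compute segmentations, optical flow, disparity and point clouds.
subprocess.run(["python", "precompute.py", "-config", CONFIG], check=True)

# Step 2: run the tracker; results are evaluated automatically afterwards.
subprocess.run(["python", "main.py", "-config", CONFIG], check=True)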

References

Citation

@article{luiten2019track,
  title={Track to Reconstruct and Reconstruct to Track},
  author={Luiten, Jonathon and Fischer, Tobias and Leibe, Bastian},
  journal={arXiv:1910.00130},
  year={2019}
}

License

MIT License
