
UnVIO: Unsupervised Monocular Visual-Inertial Odometry Network

IJCAI2020 paper: Unsupervised Monocular Visual-inertial Odometry Network.

[Trajectory results on KITTI 09 and KITTI 10]

Introduction

This repository is the official PyTorch implementation of the IJCAI2020 paper Unsupervised Monocular Visual-inertial Odometry Network.

Installation

UnVIO has been tested on Ubuntu with PyTorch 1.4 and Python 3.7.10. For installation, it is recommended to use a conda environment:

conda create -n unvio_env python=3.7.10
conda activate unvio_env
pip install -r requirements.txt

Other system dependencies should also be installed:

sudo apt install gnuplot

Data Preparing

The datasets used in this paper are the KITTI raw dataset and the Malaga dataset. Please refer to Data preparing for detailed instructions.
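For orientation, a minimal sketch of sanity-checking the prepared KITTI data is shown below. The frame extension and the 256-pixel image height (suggested by the KITTI_rec_256 folder name) are assumptions here; the Data preparing instructions remain authoritative.

# check_data.py -- illustrative only; layout assumptions noted above
from pathlib import Path
from PIL import Image

DATA_ROOT = Path("DATA_ROOT_HERE") / "KITTI_rec_256"
# Scene name taken from the odometry example below.
scene = DATA_ROOT / "2011_09_30_drive_0033_sync_02"

frames = sorted(scene.glob("*.jpg"))  # assumed extension
print(f"{len(frames)} frames in {scene.name}")

# The folder name suggests frames were rescaled to a height of 256 px.
width, height = Image.open(frames[0]).size
assert height == 256, f"expected height 256, got {height}"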

Validation

Validation can be performed for depth estimation and odometry estimation. First, specify the model path and dataset path:

ROOT='MODEL_ROOT_HERE'
DATA_ROOT='DATA_ROOT_HERE'

Depth Estimation

For depth estimation on KITTI 09 (to test on KITTI 10, change --dataset-list to .eval/kitti_10.txt; the same applies to the Malaga dataset), run the following command:

ROOT=$ROOT/kitti_ckpt
#ROOT=$ROOT/malaga_ckpt
DATA_ROOT=$DATA_ROOT/KITTI_rec_256/ 
#DATA_ROOT=$DATA_ROOT/Malaga_down/
python test_disp.py \
   --pretrained-dispnet $ROOT/UnVIO_dispnet.pth.tar \
   --dataset-dir $DATA_ROOT \
   --dataset-list .eval/kitti_09.txt \
   --output-dir $ROOT/results_disp \
   --save-depth

The predictions.npy file that stores all the predicted depth values will be saved in $ROOT/results_disp. If --save-depth is added, the colored depth maps will also be saved in $ROOT/results_disp/disp.
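To inspect the saved depth values, a minimal sketch is given below. It assumes predictions.npy holds a (num_frames, height, width) array; only the file location comes from the command above, the rest is illustrative.

# inspect_depth.py -- assumes predictions.npy is (N, H, W)
import numpy as np
import matplotlib.pyplot as plt

preds = np.load("MODEL_ROOT_HERE/kitti_ckpt/results_disp/predictions.npy")
print(preds.shape)

# Show the first depth map as inverse depth so near objects appear bright.
depth = preds[0]
plt.imshow(1.0 / np.clip(depth, 1e-3, None), cmap="magma")
plt.axis("off")
plt.savefig("depth_000.png", bbox_inches="tight")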

Visual Odometry

For odometry estimation on KITTI 09 (to test on KITTI 10, change --testscene to 2011_09_30_drive_0034_sync_02), run the following command:

ROOT=$ROOT/kitti_ckpt
DATA_ROOT=$DATA_ROOT
python test_pose.py \
 --pretrained-visualnet $ROOT/UnVIO_visualnet.pth.tar \
 --pretrained-imunet $ROOT/UnVIO_imunet.pth.tar \
 --pretrained-posenet $ROOT/UnVIO_posenet.pth.tar \
 --dataset_root $DATA_ROOT \
 --dataset KITTI \
 --testscene 2011_09_30_drive_0033_sync_02 \
 --show-traj

This will create a .csv file representing $T_{wc} \in \mathbb{R}^{3 \times 4}$ in the $ROOT directory. If --show-traj is added, a scale-aligned trajectory will be plotted against the ground truth.
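To work with the exported trajectory, a minimal sketch is shown below. It assumes each row of the .csv stores one flattened 3x4 $T_{wc}$ as 12 comma-separated values (KITTI-style); the file name is a placeholder.

# plot_traj.py -- assumes one flattened 3x4 pose per row
import numpy as np
import matplotlib.pyplot as plt

poses = np.loadtxt("MODEL_ROOT_HERE/kitti_ckpt/UnVIO_traj.csv", delimiter=",")
poses = poses.reshape(-1, 3, 4)

# The last column of T_wc is the camera position in world coordinates.
xyz = poses[:, :, 3]

# Bird's-eye view: x against z, the usual KITTI convention.
plt.plot(xyz[:, 0], xyz[:, 2])
plt.xlabel("x [m]")
plt.ylabel("z [m]")
plt.axis("equal")
plt.savefig("trajectory.png")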

Train

Run the following command to train the UnVIO from scratch:

DATA_ROOT=$DATA_ROOT
python train.py --dataset_root $DATA_ROOT --dataset KITTI

Specify --dataset (KITTI or Malaga) as needed.
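Training needs no ground-truth depth or poses; like other unsupervised depth/odometry methods, UnVIO derives its training signal from view synthesis. Below is a simplified sketch of the photometric part of such an objective. It is not the repository's actual loss code; the L1/SSIM mix with alpha=0.85 follows common practice in Monodepth2-style methods and is an assumption here.

# photometric_loss.py -- simplified view-synthesis loss sketch
import torch
import torch.nn.functional as F

def photometric_loss(target, warped, alpha=0.85):
    # L1 + SSIM error between the target frame and a neighboring frame
    # warped into the target view with predicted depth and pose
    # (the warping step itself is omitted here).
    l1 = (target - warped).abs().mean(1, keepdim=True)

    # SSIM computed over 3x3 neighborhoods via average pooling.
    mu_x = F.avg_pool2d(target, 3, 1, 1)
    mu_y = F.avg_pool2d(warped, 3, 1, 1)
    sigma_x = F.avg_pool2d(target ** 2, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(warped ** 2, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(target * warped, 3, 1, 1) - mu_x * mu_y
    ssim_n = (2 * mu_x * mu_y + 1e-4) * (2 * sigma_xy + 1e-3)
    ssim_d = (mu_x ** 2 + mu_y ** 2 + 1e-4) * (sigma_x + sigma_y + 1e-3)
    ssim = ((1 - ssim_n / ssim_d) / 2).clamp(0, 1).mean(1, keepdim=True)

    return (alpha * ssim + (1 - alpha) * l1).mean()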

Citation

@inproceedings{2020Unsupervised,
  title={Unsupervised Monocular Visual-inertial Odometry Network},
  author={Wei, P. and Hua, G. and Huang, W. and Meng, F. and Liu, H.},
  booktitle={Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}},
  year={2020},
}

License

This project is licensed under the terms of the MIT license.

References

This repository borrows some code from SC, Monodepth2, and SfMLearner; thanks for their great work.
