GITSHOHOKU / stereo-vo

Visual Odometry using Stereo and Pointcloud Alignment

Visual Odometry using Stereo


Introduction

Odometry refers to the use of data from motion sensors to estimate change in position over time. One of the most reliable ways of estimating 3-D structure with cameras is to use a calibrated stereo pair. Given the sequence of 3-D structures generated by the stereo camera, we can estimate the motion of the camera with respect to its environment and simultaneously build a 3-D map of that environment. This is usually referred to as visual SLAM (simultaneous localisation and mapping), which has wide applications in robotics and remote sensing.

We implement a 6-DoF pose estimation algorithm using a calibrated stereo pair and simultaneously generate a 3-D map of the environment. We assume that scene illumination does not change much and that most of the camera's field of view is occupied by static parts of the environment.

We model each point set as a Gaussian mixture and then minimise a measure of divergence between the two distributions to optimise the rotation and translation that align the point sets.

Requirements

The best way to run the code is on a system running a Linux distribution. The code was written and tested on Ubuntu 14.04 LTS. In any case, you need the following:

  • A C++ compiler that supports C++11
  • The OpenCV library
  • The Point Cloud Library (PCL)
  • The Eigen template library
  • The CMake build system
  • GNU Make

These should be fairly easy to obtain on a standard Linux distribution.

Running the code

Once you have the system set up, create a folder for the executables (an out-of-source build directory), navigate to it and run
cmake <path to CMakeLists.txt in src>
This prints a fair amount of configuration output; if everything is set up correctly, CMake writes the build files into your binaries folder. After that, type
make
to compile the code. Once compilation succeeds, you can run the executables. Now:

  • Images -> Pointclouds: Typically you'd start by converting a set of image pairs (left and right) to pointclouds. The syntax is
    ./stereo-pointcloud <images.list> <Q.mat>
    where images.list is a file listing the names of the images in the images/ folder; the file names follow the pattern images/_<left/right>.jpg.
    Q.mat is a text file that contains the vectorised 4x4 stereo reprojection matrix for the calibrated pair.

  • Registering Pointclouds: This is typically the second step.
    ./pointset-matching <pointcloud_static> <pointcloud_moving> <rfile> <tfile>
    where pointcloud_static and pointcloud_moving are the static (reference) and moving point clouds, rfile is the file in which the vectorised rotation matrix will be stored, and tfile is the file in which the translation vector will be stored.

  • Aligning Pointclouds: Do this once you have the transformations from step 2.
    ./pointcloud-stitching args
    (The arguments to pointcloud-stitching are not documented here yet.)

Contact

Me (Alankar Kotwal, alankarkotwal13@gmail.com).
