
RidgeSfM: Structure from Motion via Robust Pairwise Matching Under Depth Uncertainty

Benjamin Graham, David Novotny
3DV 2020

This is the official implementation of RidgeSfM: Structure from Motion via Robust Pairwise Matching Under Depth Uncertainty in PyTorch.

Link to paper | Poster

ScanNet reconstruction

RidgeSfM applied to the ScanNet test set

Scene 0707_00, frame skip rate k=1

Scene 0708_00, frame skip rate k=3

Scene 0709_00, frame skip rate k=10

Scene 0710_00, frame skip rate k=30

Below we illustrate the depth uncertainty factors of variation for a frame from scene 0708.

ScanNet Depth Factors of variation
Top left: an input image.
Bottom left: the predicted depth.
Middle and right: We use SVD to reduce the 32 FoV planes down to 12 planes, and display them as 4 RGB images; each of the 4x3 color planes represents one factor of variation.
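The plane-reduction step described above can be sketched with plain NumPy. This is an illustrative reconstruction, not the repository's code: the shapes and variable names are my own, and I use the right singular vectors of the flattened planes as the 12 dominant planes before packing them into 4 RGB images.

```python
import numpy as np

# Hypothetical shapes: 32 factor-of-variation planes over an H x W depth map.
H, W, n_planes, n_keep = 48, 64, 32, 12
rng = np.random.default_rng(0)
planes = rng.standard_normal((n_planes, H, W))

# Flatten each plane to a row vector and keep the top singular vectors.
flat = planes.reshape(n_planes, -1)          # (32, H*W)
U, S, Vt = np.linalg.svd(flat, full_matrices=False)
reduced = Vt[:n_keep].reshape(n_keep, H, W)  # 12 dominant planes

# Pack the 12 planes into 4 RGB images: 3 planes -> 3 color channels each.
rgb_images = reduced.reshape(4, 3, H, W).transpose(0, 2, 3, 1)
print(rgb_images.shape)  # (4, 48, 64, 3)
```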

RidgeSfM applied to a video taken on a mobile phone

We applied RidgeSfM to a short video taken using a mobile phone camera. There is no ground truth pose, so the bottom right hand corner of the video is blank.

Living room - skip rate k=3

RidgeSfM applied to the KITTI odometry dataset

We trained a depth prediction network on the KITTI depth prediction training set. We then processed videos from the KITTI Visual Odometry dataset. We used the 'camera 2' image sequences, cropping the input to RGB images of size 1216x320. We used R2D2 as the keypoint detector. We used a frame skip rate of k=3. The scenes are larger spatially, so for visualization we increased the number of K-Means centroids to one million.
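The K-Means visualization step can be sketched as follows. This is a toy illustration under my own assumptions (the repository's actual clustering code may differ): plain Lloyd's algorithm on a random point cloud, with far fewer points and centroids than the one-million-centroid KITTI scenes.

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: return k centroids summarizing the cloud."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            members = points[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids

rng = np.random.default_rng(1)
points = rng.uniform(-10, 10, size=(5000, 3))  # toy stand-in for a dense cloud
centroids = kmeans(points, k=100)              # large scenes use far more centroids
print(centroids.shape)  # (100, 3)
```

Replacing millions of reconstructed points with centroids keeps the rendered cloud small while preserving the scene's overall shape.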

Scene 6, skip rate k=3

Scene 7, skip rate k=3

Setup

  • Download the SuperPoint weights and network definition:
    wget https://github.com/magicleap/SuperGluePretrainedNetwork/blob/master/models/weights/superpoint_v1.pth?raw=true -O ridgesfm/weights/superpoint_v1.pth
    wget https://raw.githubusercontent.com/magicleap/SuperGluePretrainedNetwork/master/models/superpoint.py -O ridgesfm/superpoint.py
  • Run bash prepare_scannet.sh in ridgesfm/data/
  • Run python ridgesfm.py scene.n=0 scene.frameskip=10

To process your own video:

  • calibrate your camera using calibrate/calibrate.ipynb
  • then run python ridgesfm.py scenes=calibrate/ scene.n=0 scene.frameskip=10
Videos are scaled and/or cropped to a resolution of 640x480. The notebook calculates a camera intrinsic matrix for the rescaled video. RidgeSfM works best when the intrinsic matrix is similar to that of the depth prediction network's training data, i.e. [[578, 0, 319.5], [0, 578, 239.5], [0, 0, 1]].
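When a video is resized, the intrinsic matrix scales with the per-axis resize factors. The sketch below is not taken from the calibrate notebook; the function name and the 1280x960 example calibration are my own, chosen so the rescaled matrix lands on the training intrinsics quoted above.

```python
import numpy as np

def rescale_intrinsics(K, old_size, new_size):
    """Scale a 3x3 intrinsic matrix when the image is resized.

    old_size and new_size are (width, height); focal lengths and the
    principal point scale by the per-axis resize factors.
    """
    sx = new_size[0] / old_size[0]
    sy = new_size[1] / old_size[1]
    return np.diag([sx, sy, 1.0]) @ K

# Hypothetical 1280x960 calibration rescaled to RidgeSfM's 640x480 input.
K_full = np.array([[1156.0, 0.0, 639.0],
                   [0.0, 1156.0, 479.0],
                   [0.0, 0.0, 1.0]])
K_640 = rescale_intrinsics(K_full, (1280, 960), (640, 480))
print(K_640)  # [[578, 0, 319.5], [0, 578, 239.5], [0, 0, 1]]
```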

Dependencies:

License

RidgeSfM is CC-BY-NC licensed, as found in the LICENSE file.

Citations

If you find this code useful in your research then please cite:

@InProceedings{ridgesfm2020,
    author       = "Benjamin Graham and David Novotny",
    title        = "Ridge{S}f{M}: Structure from Motion via Robust Pairwise Matching Under Depth Uncertainty",
    booktitle    = "International Conference on 3D Vision (3DV)",
    year         = "2020",
}
