self-supervised-depth-completion

This repo contains the PyTorch implementation of our ICRA'19 paper on "Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and Monocular Camera" by Fangchang Ma, Guilherme Venturelli Cavalheiro, and Sertac Karaman at MIT. A video demonstration is available on YouTube.

Contents

  1. Notes
  2. Requirements
  3. Trained Models
  4. Training and Testing
  5. Questions
  6. Citation

Notes

Our network is trained on the KITTI dataset alone, without pretraining on Cityscapes or any other similar driving dataset (synthetic or real). The use of additional data would very likely improve the accuracy further.

Requirements

This code was tested with Python 3 and PyTorch 1.0 on Ubuntu 16.04.

  • Install PyTorch on a machine with CUDA GPU.
  • The code for self-supervised training requires OpenCV along with the contrib modules. For instance,
pip3 uninstall opencv-contrib-python
pip3 install opencv-contrib-python==3.4.2.16
  • Download the KITTI Depth Dataset and the corresponding RGB images. Please refer to the scripts in the download folder.
  • The directory structure for code, data, and results is as follows (a short sanity-check sketch follows the tree):
.
├── self-supervised-depth-completion
├── data
|   ├── kitti_depth
|   |   ├── train
|   |   ├── val
|   |   ├── val_selection_cropped
|   |   ├── ...
|   └── kitti_rgb
|       ├── train
|       |   ├── 2011_09_26_drive_0001_sync
|       |   |   ├── image_02
|       |   |   |   ├── data
|       |   |   |   |   ├── 0000000000.png
|       |   |   |   |   ├── ...
|       |   |   ├── image_03
|       ├── val
|       ├── val_selection_cropped
|       |   ├── 2011_09_26_drive_0002_sync_0000000005_image_02.png
|       |   ├── ...
├── results
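
Before training, it can help to verify the environment and data layout. The following is a minimal sanity-check sketch (not part of the repo); the paths simply mirror the tree above and assume you run it from the directory containing both the repo and data.

import os
import cv2
import torch

# opencv-contrib-python 3.4.2.16 ships the contrib modules (e.g. cv2.xfeatures2d)
assert cv2.__version__.startswith("3.4"), "expected opencv-contrib-python 3.4.x"
assert hasattr(cv2, "xfeatures2d"), "contrib modules missing; reinstall opencv-contrib-python"

# training requires a machine with a CUDA GPU
print("CUDA available:", torch.cuda.is_available())

# these paths mirror the directory tree above
for d in ("data/kitti_depth/train",
          "data/kitti_depth/val_selection_cropped",
          "data/kitti_rgb/train"):
    print(d, "OK" if os.path.isdir(d) else "MISSING")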

Trained Models

Download our trained models at http://datasets.lids.mit.edu/self-supervised-depth-completion to a folder of your choice.
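
If you prefer to script the download, here is a minimal Python sketch. The file name model_best.pth.tar is a hypothetical example; browse the URL above for the actual checkpoint names.

import os
import urllib.request

os.makedirs("results", exist_ok=True)
# hypothetical file name -- check the server listing for the real ones
url = "http://datasets.lids.mit.edu/self-supervised-depth-completion/model_best.pth.tar"
urllib.request.urlretrieve(url, "results/model_best.pth.tar")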

Training and Testing

A complete list of training options is available with

python main.py -h

For instance,

python main.py --train-mode dense -b 1 # train with the KITTI semi-dense annotations and batch size 1
python main.py --train-mode sparse+photo # train with the self-supervised framework, not using ground truth
python main.py --resume [checkpoint-path] # resume previous training
python main.py --evaluate [checkpoint-path] # test the trained model
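
Before pointing --resume or --evaluate at a checkpoint, you can inspect it with plain PyTorch. This is a sketch under the assumption that the checkpoint is a dict saved with torch.save(); the key names printed are whatever the file actually contains, and the path below is a hypothetical example.

import torch

# note: if the checkpoint pickles whole model objects, this must run with the
# repo on the import path so the model classes can be unpickled
checkpoint = torch.load("results/model_best.pth.tar", map_location="cpu")  # hypothetical path
if isinstance(checkpoint, dict):
    print("checkpoint keys:", sorted(checkpoint.keys()))
else:
    print("loaded object of type", type(checkpoint))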

Questions

Please create a new issue for code-related questions. Pull requests are welcome.

Citation

If you use our code or method in your work, please cite the following:

@inproceedings{ma2018self,
	title={Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and Monocular Camera},
	author={Ma, Fangchang and Cavalheiro, Guilherme Venturelli and Karaman, Sertac},
	booktitle={ICRA},
	year={2019}
}
@inproceedings{Ma2017SparseToDense,
	title={Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image},
	author={Ma, Fangchang and Karaman, Sertac},
	booktitle={ICRA},
	year={2018}
}

License

MIT License

