Deep RGB-D Canonical Correlation Analysis for Sparse Depth Completion

This is the official PyTorch implementation of our NeurIPS 2019 paper by Yiqi Zhong*, Cho-Ying Wu*, Suya You, and Ulrich Neumann (*equal contribution) at USC.

Paper: [arXiv].

Check out the full video demo: [YouTube].

Also check out our latest work on depth estimation/completion using sensor fusion, SCADC!

Prerequisites

Linux
Python 3
PyTorch 1.0+ (originally developed under v1.0; also tested and working on v1.5)
NVIDIA GPU + CUDA CuDNN
Other common libraries: matplotlib, OpenCV (cv2), PIL

Getting Started

Data Preparation: Please refer to [KITTI] or [NYU Depth V2] and process them into h5 files. Preprocessed data is also provided here.
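
For a quick sanity check of prepared data, the sketch below opens one .h5 sample and prints its contents. The file path and the 'rgb'/'depth' key names are assumptions based on the sparse-to-dense style preprocessing; check dataloaders/dataloader.py for the exact keys the loader expects.

```python
# Inspect one prepared .h5 sample (path and key names are illustrative assumptions).
import h5py

sample_path = "./kitti/val/00001.h5"  # hypothetical sample file

with h5py.File(sample_path, "r") as f:
    for key in f.keys():
        print(key, f[key].shape, f[key].dtype)
    # Expected (roughly): 'rgb' as a 3 x H x W uint8 image and
    # 'depth' as an H x W float32 map of metric depth.
```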

Tutorial:

  1. Create a folder 'checkpoint' with a subfolder 'checkpoint/kitti'
  2. Download the pretrained weights: [NYU-Depth 500 points training] [KITTI 500 points training] and put the .pth files under 'checkpoint/kitti/'
  3. Prepare the data as described in the "Data Preparation" step above
  4. Run "python3 evaluate.py --name kitti --checkpoints_dir ./checkpoint/ --test_path [path to the testing file]"
  5. The completed depth maps are saved under 'vis/'

Train/Evaluation:

For training, please run

python3 train_depth_complete.py --name kitti --checkpoints_dir [path to save_dir] --train_path [train_data_dir] --test_path [test_data_dir]

If you use the preprocessed data from here, the train/test data paths should be ./kitti/train/ or ./kitti/val/ under your data directory.

If you want to use your own data, please convert it into an h5 dataset (see dataloaders/dataloader.py and the sketch below).
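
As a rough starting point, a minimal packing script might look like the following. The 'rgb'/'depth' keys, array layout, and the helper name write_sample are assumptions for illustration; align them with whatever dataloaders/dataloader.py actually reads.

```python
# Minimal sketch: pack an RGB image and its depth map into one .h5 sample.
# Key names and layout are assumptions -- verify against dataloaders/dataloader.py.
import h5py
import numpy as np
from PIL import Image

def write_sample(rgb_path, depth, out_path):
    """rgb_path: image file; depth: H x W array in meters; out_path: output .h5 file."""
    rgb = np.array(Image.open(rgb_path).convert("RGB"))           # H x W x 3, uint8
    with h5py.File(out_path, "w") as f:
        f.create_dataset("rgb", data=rgb.transpose(2, 0, 1))      # store as 3 x H x W
        f.create_dataset("depth", data=depth.astype(np.float32))  # dense ground-truth depth

# write_sample("0001.png", depth_array, "./mydata/train/0001.h5")  # hypothetical usage
```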

Other options: --continue_train loads the latest saved checkpoint; set --epoch_count as well to specify the next epoch number, otherwise training starts from epoch 0. Set hyperparameters with --lr, --batch_size, --weight_decay, and so on. Please refer to options/base_options.py and options/options.py for the full list.

Note that the default batch size is 4 and training runs on gpu:0 by default. You can set a larger batch size (--batch_size=xx) across more GPUs (--gpu_ids="0,1,2,3") for larger-batch training.
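
For example, a multi-GPU run with a larger batch size could look like this (the batch size and GPU ids are illustrative, and paths are placeholders):

python3 train_depth_complete.py --name kitti --checkpoints_dir ./checkpoints --batch_size 16 --gpu_ids="0,1,2,3" --train_path './kitti/train/' --test_path './kitti/val/'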

Example command:

python3 train_depth_complete.py --name kitti --checkpoints_dir ./checkpoints --lr 0.001 --batch_size 4 --train_path './kitti/train/' --test_path './kitti/val/' --continue_train --epoch_count [next_epoch_number]

For evaluation, please run

python3 evaluate.py --name kitti --checkpoints_dir [path to save_dir to load ckpt] --test_path [test_data_dir] [--epoch [epoch number]]

This loads the latest checkpoint for evaluation. Add --epoch to specify which epoch's checkpoint to load.

Update: 02/10/2020

  1. Fixed several bugs and removed redundant options
  2. Released the ORB sparsifier
  3. Released pretrained models: [NYU-Depth 500 points training] [KITTI 500 points training]

Update: 04/19/2021

  1. Revised the README and added a tutorial
  2. Several minor revisions

If you find our work useful, please consider citing it:

@inproceedings{zhong2019deep,
  title={Deep rgb-d canonical correlation analysis for sparse depth completion},
  author={Zhong, Yiqi and Wu, Cho-Ying and You, Suya and Neumann, Ulrich},
  booktitle={Advances in Neural Information Processing Systems},
  pages={5332--5342},
  year={2019}
}
License: MIT

