xchk138 / DepthCompletionNet

A Conv Net to complete depth maps

DepthCompletionNet, or DCNet (not DeformConvNet!)

A conv net to complete depth maps generated by StereoBM or StereoSGBM, or obtained directly from a ToF or LiDAR sensor.
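
A raw disparity map of the kind described above can be produced with OpenCV's StereoSGBM, as in the sketch below; the parameter values and file names are illustrative assumptions, not taken from this repo.

```python
import cv2
import numpy as np

# Hypothetical rectified stereo pair (grayscale).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # must be a multiple of 16
    blockSize=5,
    P1=8 * 5 * 5,
    P2=32 * 5 * 5,
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)

# OpenCV returns disparities as int16 scaled by 16; convert to float pixels.
raw_disp = sgbm.compute(left, right).astype(np.float32) / 16.0
# raw_disp is the rough, hole-ridden map the network is meant to refine.
```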

Basic idea

  1. Input #0: Raw and rough depth (disparity) map;
    Input #1: Corresponding RGB image (may or may not be undistorted);
    Output #0: Refined and smooth depth (disparity) map;
    (Optional*) Output #1: Confidence map for the corresponding depth.
    (A minimal sketch of this input/output contract follows this list.)

  2. Basic trial: Model: a U-Net based on MobileNetV3. Dataset: simulated scenes built in 3ds Max 2016 to obtain ground-truth depth maps and corresponding RGB images; raw depth maps are generated by the BM (or SGBM) algorithm from the rendered main/sub stereo image pairs. Training: Adam (large learning rate) followed by SGD (small learning rate); a training-schedule sketch follows this list.

  3. Advanced trial: Training with real-world samples: obtain ground-truth depth with Panoptic Segmentation Depth Optimization. MSConv: multi-scale convolution, a newly designed cross-scale convolution op that matches patterns from the whole image across all scales (one possible reading is sketched after this list).
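
For item 1, a minimal sketch of the model's input/output contract is given below, assuming a PyTorch implementation (the framework is not specified in this README). The class name and layers are placeholders; the actual model would be the MobileNetV3-based U-Net described in item 2.

```python
import torch
import torch.nn as nn

class DepthCompletionStub(nn.Module):  # hypothetical name, placeholder backbone
    def __init__(self, predict_confidence: bool = True):
        super().__init__()
        self.predict_confidence = predict_confidence
        out_ch = 2 if predict_confidence else 1
        # Stand-in for the real U-Net encoder/decoder.
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, out_ch, 3, padding=1),
        )

    def forward(self, raw_depth: torch.Tensor, rgb: torch.Tensor):
        # Input #0: raw_depth (N, 1, H, W); Input #1: rgb (N, 3, H, W).
        x = torch.cat([raw_depth, rgb], dim=1)
        out = self.net(x)
        refined = out[:, :1]  # Output #0: refined depth (disparity) map
        conf = torch.sigmoid(out[:, 1:]) if self.predict_confidence else None  # Output #1
        return refined, conf
```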
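
The two-stage optimization from item 2 (Adam with a large learning rate, then SGD with a small one) could look roughly like the following; the epoch counts, learning rates, loss, and loader format are illustrative assumptions.

```python
import torch

def train_two_stage(model, loader, epochs_adam=30, epochs_sgd=10):
    loss_fn = torch.nn.L1Loss()

    def run(optimizer, epochs):
        for _ in range(epochs):
            for raw_depth, rgb, gt_depth in loader:  # hypothetical batch layout
                refined, _ = model(raw_depth, rgb)
                loss = loss_fn(refined, gt_depth)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()

    # Stage 1: Adam with a comparatively large learning rate.
    run(torch.optim.Adam(model.parameters(), lr=1e-3), epochs_adam)
    # Stage 2: SGD with a small learning rate to fine-tune.
    run(torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9), epochs_sgd)
```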
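
The MSConv op in item 3 is not yet specified; one possible reading, sketched below, shares a single convolution across several rescaled copies of the feature map and fuses the results, so the same filter can match patterns at different scales. Treat the class name and design as assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MSConvSketch(nn.Module):  # hypothetical interpretation of MSConv
    def __init__(self, in_ch, out_ch, scales=(1.0, 0.5, 0.25)):
        super().__init__()
        self.scales = scales
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)  # weights shared across scales
        self.fuse = nn.Conv2d(out_ch * len(scales), out_ch, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = []
        for s in self.scales:
            xs = x if s == 1.0 else F.interpolate(x, scale_factor=s, mode="bilinear",
                                                  align_corners=False)
            ys = self.conv(xs)
            if s != 1.0:
                ys = F.interpolate(ys, size=(h, w), mode="bilinear", align_corners=False)
            feats.append(ys)
        return self.fuse(torch.cat(feats, dim=1))
```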

TODO-list

  1. Prepare 3D scenes for rendering;
  2. Construct the U-Net based on MobileNetV3;
  3. Train the network;
  4. Test pretrained panoptic segmentation models to see whether they are worth the trial;
  5. Implement the MSConv idea.

About

A Conv Net to complete depth maps

License: MIT License


Languages

Python: 97.2%
MAXScript: 2.8%