Fast-MVSNet

PyTorch implementation of our CVPR 2020 paper:

Fast-MVSNet: Sparse-to-Dense Multi-View Stereo With Learned Propagation and Gauss-Newton Refinement

Zehao Yu, Shenghua Gao

How to use

git clone git@github.com:svip-lab/FastMVSNet.git
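The commands below assume you are working from the repository root after cloning:

cd FastMVSNet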

Installation

pip install -r requirements.txt
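If you prefer an isolated environment, a typical setup might look like the following; the environment name and Python version are illustrative assumptions, not requirements stated by this repository:

conda create -n fastmvsnet python=3.7
conda activate fastmvsnet
pip install -r requirements.txt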

Training

  • Download the preprocessed DTU training data from MVSNet and unzip it to data/dtu.

  • Train the network

    python fastmvsnet/train.py --cfg configs/dtu.yaml

    You can change the batch size in the configuration file according to your GPU memory; see the note below this list for a command-line alternative.
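The test command below overrides TEST.WEIGHT as a KEY VALUE pair on the command line, which suggests a yacs-style configuration. Assuming the training configuration exposes a corresponding batch-size key (the name TRAIN.BATCH_SIZE here is an assumption), the same override pattern would look like:

python fastmvsnet/train.py --cfg configs/dtu.yaml TRAIN.BATCH_SIZE 2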

Testing

  • Download the rectified images from the DTU benchmark and unzip them to data/dtu/Eval.

  • Test with the pretrained model

    python fastmvsnet/test.py --cfg configs/dtu.yaml TEST.WEIGHT outputs/pretrained.pth
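MVSNet-style pipelines typically write per-view depth maps as .pfm files before fusion. Assuming Fast-MVSNet follows that convention (the output path below is hypothetical), a minimal sketch for loading one of them in Python:

import re
import numpy as np

def read_pfm(path):
    # Minimal PFM reader (sketch): header "Pf"/"PF", a "width height" line,
    # a scale line (negative means little-endian float data), then float32
    # rows stored bottom-to-top.
    with open(path, "rb") as f:
        header = f.readline().rstrip()
        if header not in (b"PF", b"Pf"):
            raise ValueError("Not a PFM file: %s" % path)
        color = header == b"PF"
        match = re.match(rb"^(\d+)\s+(\d+)\s*$", f.readline())
        if match is None:
            raise ValueError("Malformed PFM header in %s" % path)
        width, height = map(int, match.groups())
        scale = float(f.readline().rstrip())
        endian = "<" if scale < 0 else ">"
        data = np.fromfile(f, endian + "f")
        shape = (height, width, 3) if color else (height, width)
        return np.flipud(data.reshape(shape))

depth = read_pfm("outputs/scan1_flow2.pfm")  # hypothetical path; actual naming may differ
print(depth.shape, float(depth.min()), float(depth.max()))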

Depth Fusion

We need to apply depth fusion with tools/depthfusion.py to get the complete point cloud. Please refer to MVSNet for more details.

python tools/depthfusion.py -f dtu -n flow2

Acknowledgements

Most of the code is borrowed from PointMVSNet. We thank Rui Chen for his great work and repositories.

Citation

Please cite our paper if you find this work useful for your research.

@inproceedings{Yu_2020_fastmvsnet,
  author    = {Zehao Yu and Shenghua Gao},
  title     = {Fast-MVSNet: Sparse-to-Dense Multi-View Stereo With Learned Propagation and Gauss-Newton Refinement},
  booktitle = {CVPR},
  year      = {2020}
}

License

MIT License

