StriveZs / DSC-MVSNet

README for DSC-MVSNet

PyTorch implementation of “DSC-MVSNet: Attention Aware Cost Volume Regularization Based On Depthwise Separable Convolution for Multi-View Stereo”.

Installation

pip install -r requirments.txt
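
Since the model is implemented in PyTorch, a quick sanity check that the environment (and a CUDA-capable GPU, if present) is set up correctly is the minimal snippet below; it is only an illustration, not part of the repository:

import torch

# Print the installed PyTorch version and whether a CUDA device is visible.
print(torch.__version__)
print("CUDA available:", torch.cuda.is_available())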

Testing

Download the preprocessed DTU testing data (from the original MVSNet) and unzip it as the DTU_TESTING folder, which should contain a cams folder, an images folder, and a pair.txt file.

Test with the pretrained model

python dscmvsnet/test.py --cfg configs/dtu.yaml TEST.WEIGHT outputs/pretrained.pth
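
If loading the pretrained weights fails, it can help to inspect the checkpoint file directly with PyTorch. This is a minimal sketch assuming outputs/pretrained.pth is a standard .pth checkpoint (the path comes from the command above; the keys printed depend on how the checkpoint was saved):

import torch

# Load the checkpoint on the CPU just to inspect it; this does not build the network.
checkpoint = torch.load("outputs/pretrained.pth", map_location="cpu")

# A checkpoint is usually either a raw state_dict or a dict wrapping one
# (e.g. under a "model" or "state_dict" key); print the top-level keys to see which.
if isinstance(checkpoint, dict):
    print(list(checkpoint.keys())[:10])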

dtu.yaml

Please set the following configuration in configs/dtu.yaml:

OUTPUT_DIR: ""  # logfile and .pth save path
DATA:
  TRAIN:
    ROOT_DIR: ""  # training set path
    NUM_VIRTUAL_PLANE:
  VAL:
    ROOT_DIR: ""  # validation set path
  TEST:
    ROOT_DIR: ""  # testing set path
    NUM_VIEW:
    IMG_HEIGHT:
    IMG_WIDTH:
    NUM_VIRTUAL_PLANE:
    INTER_SCALE: 2.13
    MODE: "dtu"  # dtu or tanks
TEST:
  WEIGHT: ""  # .pth path
  BATCH_SIZE: 1
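
The test command above combines --cfg configs/dtu.yaml with trailing KEY VALUE pairs (TEST.WEIGHT outputs/pretrained.pth), which is the usual yacs-style override pattern. Whether this repository actually uses yacs is an assumption; the sketch below only illustrates how such dotted keys map onto the nested fields listed above:

from yacs.config import CfgNode as CN

# Defaults mirroring a few of the fields above (values are placeholders).
cfg = CN()
cfg.OUTPUT_DIR = ""
cfg.TEST = CN()
cfg.TEST.WEIGHT = ""
cfg.TEST.BATCH_SIZE = 1
cfg.DATA = CN()
cfg.DATA.TEST = CN()
cfg.DATA.TEST.ROOT_DIR = ""

# In the real script, the YAML file passed via --cfg would be merged first,
# e.g. cfg.merge_from_file("configs/dtu.yaml"), and then the trailing
# KEY VALUE pairs from the command line are applied on top:
cfg.merge_from_list(["TEST.WEIGHT", "outputs/pretrained.pth"])
print(cfg.TEST.WEIGHT)  # -> outputs/pretrained.pth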

Depth Fusion

We need to apply depth fusion with tools/depthfusion.py to get the complete point cloud. Please refer to MVSNet for more details. Then use tools/rename_ply.py to rename the fused results.

python tools/depthfusion.py -f dtu -n flow2

python tools/rename_ply.py
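
tools/rename_ply.py performs the renaming in this repository. Purely as an illustration of what such a step typically involves, the sketch below copies each fused point cloud to a DTU-evaluation-style name; the fusibile output layout (consistencyCheck*/final3d_model.ply), the directories, and the dscmvsnet prefix are all assumptions, not paths taken from this repo:

import shutil
from pathlib import Path

OUT_ROOT = Path("outputs/dtu/fusion")    # hypothetical fusion output folder
EVAL_DIR = Path("outputs/dtu/eval_ply")  # hypothetical folder for the evaluation scripts
EVAL_DIR.mkdir(parents=True, exist_ok=True)

for scan_dir in sorted(OUT_ROOT.glob("scan*")):
    # fusibile typically writes each fused cloud as consistencyCheck-*/final3d_model.ply.
    plys = sorted(scan_dir.glob("consistencyCheck*/final3d_model.ply"))
    if not plys:
        continue
    scan_id = int(scan_dir.name.replace("scan", ""))
    # Rename to a "<method><scanID:03d>_l3.ply" style name; the exact name the DTU
    # evaluation code expects here is an assumption.
    shutil.copy(plys[-1], EVAL_DIR / f"dscmvsnet{scan_id:03d}_l3.ply")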

To obtain fusibile:

  • Check out the modified version of fusibile: git clone https://github.com/YoYo000/fusibile
  • Install fusibile with cmake . and make, which will generate the executable at FUSIBILE_EXE_PATH

Evaluation

We need the official STL point clouds for evaluation. Please download the STL Point Clouds, which are the STL reference point clouds for all the scenes, and download the observability masks and evaluation code from the SampleSet.
