MachinePerceptionLab / Attentive_DFPrior

[NeurIPS'23] Learning Neural Implicit through Volume Rendering with Attentive Depth Fusion Priors

Home Page: https://machineperceptionlab.github.io/Attentive_DF_Prior/

Learning Neural Implicit through Volume Rendering with Attentive Depth Fusion Priors

Pengchong Hu · Zhizhong Han

NeurIPS 2023

Table of Contents
  1. Installation
  2. Dataset
  3. Run
  4. Evaluation
  5. Acknowledgement
  6. Citation

Installation

Please install all dependencies by following the instructions here. You can use Anaconda to complete the installation easily.

You can build a conda environment called df-prior. Note that Linux users need to install libopenexr-dev before building the environment.

git clone https://github.com/MachinePerceptionLab/Attentive_DFPrior.git
cd Attentive_DFPrior

sudo apt-get install libopenexr-dev

conda env create -f environment.yaml
conda activate df-prior

Dataset

Replica

Please download the Replica dataset generated by the authors of iMAP into the ./Datasets/Replica folder.

bash scripts/download_replica.sh # Released by authors of NICE-SLAM

ScanNet

Please follow the data downloading procedure on ScanNet website, and extract color/depth frames from the .sens file using this code.
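For reference, a typical invocation of the reader.py exporter from the ScanNet SensReader (python) tools looks like the following; the scene ID and output path are placeholders, and you should check the script's --help for the exact flags in your checkout:

python reader.py --filename scene0050_00.sens \
    --output_path Datasets/scannet/scans/scene0050_00/frames \
    --export_depth_images --export_color_images \
    --export_poses --export_intrinsics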

Directory structure of ScanNet:

DATAROOT is ./Datasets by default. If a sequence (sceneXXXX_XX) is stored elsewhere, change the input_folder path in the config file or on the command line; see the example after the directory tree.

  DATAROOT
  └── scannet
      └── scans
          └── scene0000_00
              └── frames
                  ├── color
                  │   ├── 0.jpg
                  │   ├── 1.jpg
                  │   ├── ...
                  │   └── ...
                  ├── depth
                  │   ├── 0.png
                  │   ├── 1.png
                  │   ├── ...
                  │   └── ...
                  ├── intrinsic
                  └── pose
                      ├── 0.txt
                      ├── 1.txt
                      ├── ...
                      └── ...
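For example, assuming run.py keeps the NICE-SLAM-style --input_folder override from the codebase it adapts (an assumption; otherwise edit input_folder directly in the YAML config), a sequence stored elsewhere can be passed like this:

python -W ignore run.py configs/ScanNet/scene0050.yaml --input_folder /mnt/data/scannet/scans/scene0050_00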

Run

To run our code, you first need to generate the TSDF volume and its corresponding bounds. We provide pre-generated TSDF volumes and bounds for Replica and ScanNet: replica_tsdf_volume.tar, scannet_tsdf_volume.tar.
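If you use the provided archives, a plain tar extraction is enough; the destination is an assumption here, so place the contents wherever your config expects the TSDF volume and bounds:

tar -xvf replica_tsdf_volume.tar
tar -xvf scannet_tsdf_volume.tar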

You can also generate the TSDF volume and corresponding bounds with the following commands:

CUDA_VISIBLE_DEVICES=0 python get_tsdf.py configs/Replica/room0.yaml --space 1 # For Replica
CUDA_VISIBLE_DEVICES=0 python get_tsdf.py configs/ScanNet/scene0050.yaml --space 10 # For ScanNet

You can run DF-Prior by using the following code:

CUDA_VISIBLE_DEVICES=0 python -W ignore run.py configs/Replica/room0.yaml # For Replica
CUDA_VISIBLE_DEVICES=0 python -W ignore run.py configs/ScanNet/scene0050.yaml # For ScanNet

The mesh for evaluation is saved as $OUTPUT_FOLDER/mesh/final_mesh_eval_rec.ply, where the unseen regions are culled using all frames.
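For example, after a Replica run with the default output folder used in the evaluation example below, the evaluation mesh should appear at:

ls output/Replica/room0/mesh/final_mesh_eval_rec.ply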

Evaluation

Average Trajectory Error

To evaluate the average trajectory error (ATE), run the command below with the corresponding config file:

python src/tools/eval_ate.py configs/Replica/room0.yaml
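For reference, ATE is the standard trajectory metric: the estimated camera positions are rigidly aligned to the ground truth, and the RMSE of the residual translations is reported. Below is a minimal NumPy sketch of that standard definition, not the repository's implementation in src/tools/eval_ate.py:

import numpy as np

def ate_rmse(gt, est):
    # gt, est: (N, 3) arrays of time-synchronized camera positions.
    gt_c = gt - gt.mean(axis=0)
    est_c = est - est.mean(axis=0)
    # Kabsch alignment: rotation mapping the estimate onto the ground truth.
    U, _, Vt = np.linalg.svd(est_c.T @ gt_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    residuals = gt_c - est_c @ R.T
    return np.sqrt((residuals ** 2).sum(axis=1).mean())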

Reconstruction Error

Replica

To evaluate the reconstruction error on Replica, first download the ground-truth Replica meshes, where unseen regions have been culled.

bash scripts/download_cull_replica_mesh.sh # Released by authors of NICE-SLAM

Then run the command below. The 2D metric requires rendering 1000 depth images, which takes some time (~9 minutes). Use -2d to enable the 2D metric and -3d to enable the 3D metric.

# assign any output folder and GT mesh you like; this is just an example
OUTPUT_FOLDER=output/Replica/room0
GT_MESH=cull_replica_mesh/room0.ply
python src/tools/eval_recon.py --rec_mesh $OUTPUT_FOLDER/mesh/final_mesh_eval_rec.ply --gt_mesh $GT_MESH -2d -3d

We also provide code to cull a mesh given camera poses. Here we take culling the ground-truth mesh of Replica room0 as an example.

python src/tools/cull_mesh.py --input_mesh Datasets/Replica/room0_mesh.ply --traj Datasets/Replica/room0/traj.txt --output_mesh cull_replica_mesh/room0.ply

ScanNet

To evaluate the reconstruction error on ScanNet, first download the ground-truth ScanNet meshes (meshes.zip) into the ./Datasets/scannet folder. Then run the command below.

python src/tools/evaluate_scannet.py configs/ScanNet/scene0050.yaml 

We also provide our reconstructed meshes in Replica and ScanNet for evaluation purposes: meshes.zip.

Acknowledgement

We adapt code from several awesome repositories, including NICE-SLAM, NeuralRGBD, tsdf-fusion, manhattan-sdf, and MonoSDF. Thanks for making the code available. We also thank Zihan Zhu of NICE-SLAM for prompt responses to our inquiries about the details of their method.

Citation

If you find our code or paper useful, please cite

@inproceedings{Hu2023LNI-ADFP,
  title = {Learning Neural Implicit through Volume Rendering with Attentive Depth Fusion Priors},
  author = {Hu, Pengchong and Han, Zhizhong},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year = {2023}
}


License

MIT License

