ArXiv | Get Started

Deep-Geometry-Post-Processing

The source code for our paper "Deep Geometry Post-Processing for Decompressed Point Clouds" (ICME2022 oral)

We propose a novel learning-based post-processing method to enhance decompressed point clouds. By predicting the occupancy probability of each voxel, our model significantly improves the geometry quality of the decompressed point clouds.
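For intuition only, the following minimal sketch (not the actual network or inference code of this repository) shows how per-voxel occupancy probabilities predicted for one cube could be thresholded and converted back into point coordinates; the function name, cube size, and threshold are assumptions.

import numpy as np

def voxels_to_points(occupancy_prob, cube_origin, threshold=0.5):
    """Turn a cube of predicted per-voxel occupancy probabilities into points.

    occupancy_prob : (D, D, D) array of probabilities in [0, 1]
    cube_origin    : (3,) integer offset of the cube in the full point cloud
    threshold      : voxels with probability above this value are kept
    """
    occupied = np.argwhere(occupancy_prob > threshold)  # (N, 3) local voxel indices
    return occupied + np.asarray(cube_origin)           # shift back to global coordinates

# Hypothetical prediction for a 64^3 cube located at (0, 64, 128)
prob = np.random.rand(64, 64, 64)
points = voxels_to_points(prob, cube_origin=(0, 64, 128))
print(points.shape)  # (number_of_occupied_voxels, 3)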

  • Display:

Left: the ground truth point clouds; Middle: the decompressed point clouds obtained by G-PCC; Right: the refined point clouds obtained by our model.

Experimental results show that the proposed method significantly improves the quality of the decompressed point clouds, achieving an average BD-PSNR gain of 9.30 dB across three representative datasets.
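The BD-PSNR figure refers to the Bjøntegaard delta metric computed over rate-distortion curves. As a generic reference (not the evaluation script used for the paper), a compact sketch of BD-PSNR with cubic fits in the log-rate domain is shown below; all rate and PSNR values in the example are made up.

import numpy as np

def bd_psnr(rates_ref, psnr_ref, rates_test, psnr_test):
    """Average PSNR gain of the test RD curve over the reference RD curve
    across their overlapping bit-rate range (Bjontegaard delta PSNR)."""
    lr_ref, lr_test = np.log10(rates_ref), np.log10(rates_test)
    p_ref = np.polyfit(lr_ref, psnr_ref, 3)    # cubic fit: PSNR = f(log10 rate)
    p_test = np.polyfit(lr_test, psnr_test, 3)
    lo, hi = max(lr_ref.min(), lr_test.min()), min(lr_ref.max(), lr_test.max())
    area_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    area_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    return (area_test - area_ref) / (hi - lo)

# Made-up rate (bpp) / PSNR (dB) points for four rate settings
rates = np.array([0.05, 0.1, 0.2, 0.4])
psnr_gpcc = np.array([58.0, 62.0, 66.0, 70.0])
psnr_ours = np.array([60.5, 64.5, 68.5, 72.0])
print(f"BD-PSNR: {bd_psnr(rates, psnr_gpcc, rates, psnr_ours):.2f} dB")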

Get Started

1) Installation

Requirements

  • Python 3
  • PyTorch (1.7.1)
  • CUDA

Conda installation

# 1. Create a conda virtual environment.
conda create -n torch17 python=3.6
source activate torch17
pip install torch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2

# 2. Install dependency
pip install -r requirement.txt

2) Running

Generating training dataset

The longdress and loot sequences from the 8iVFB dataset are used for training. We randomly select 60 frames from these two sequences to construct the training set. MPEG-TMC13 (v14.0), the latest version at the time of writing, is used to obtain the decompressed point clouds at different bit rates.

The decoded point clouds are first divided into small non-overlapping cubes.

python util/split_point_cloud.py --source_dir=your_path_of_decoded_point_clouds --save_dir=./traindata
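The partitioning itself is handled by util/split_point_cloud.py. Purely as an illustration of the non-overlapping cube split described above, a minimal NumPy sketch might look like the following; the cube size of 64 and the function name are assumptions rather than the script's actual interface.

import numpy as np

def split_into_cubes(points, cube_size=64):
    """Partition an (N, 3) voxelized point cloud into non-overlapping cubes.

    Returns a dict mapping each cube origin (3-tuple) to the local
    coordinates of the points that fall inside that cube.
    """
    points = np.asarray(points, dtype=np.int64)
    origins = (points // cube_size) * cube_size       # origin of the cube each point falls in
    cubes = {}
    for origin in np.unique(origins, axis=0):
        mask = np.all(origins == origin, axis=1)
        cubes[tuple(origin)] = points[mask] - origin  # store cube-local coordinates
    return cubes

# Example with random 10-bit coordinates
pts = np.random.randint(0, 1024, size=(10000, 3))
print(len(split_into_cubes(pts)), "non-empty cubes")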

Training

CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_port 12345 train.py \
--config ./config/multi_scale.yaml \
--name baseline

We provide the pre-trained weights of our model here.

Testing

We evaluate the performance of the proposed model on the 8iVFB, MVUB, and ODHM datasets. Except for the two sequences used for training, all remaining sequences from these three datasets are used for testing.

CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 --master_port 6789 test.py \
--config ./config/multi_scale_all.yaml \
--name baseline \
--test_list ./test/example.txt

The geometry-refined point clouds will be saved in the eval_result folder by default.
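Geometry quality is typically reported as point-to-point (D1) PSNR. For reference, below is a simplified symmetric D1 PSNR sketch using SciPy's KD-tree; the peak argument is an assumption that depends on the geometry bit depth, and the exact convention of the MPEG pc_error tool may differ.

import numpy as np
from scipy.spatial import cKDTree

def d1_psnr(reference, distorted, peak):
    """Simplified symmetric point-to-point (D1) geometry PSNR.

    reference, distorted : (N, 3) and (M, 3) arrays of XYZ coordinates
    peak                 : peak signal value, e.g. 1023 for 10-bit geometry
    """
    def one_way_mse(src, dst):
        dists, _ = cKDTree(dst).query(src)  # nearest-neighbour distances src -> dst
        return np.mean(dists ** 2)

    mse = max(one_way_mse(reference, distorted), one_way_mse(distorted, reference))
    return 10.0 * np.log10(peak ** 2 / mse)

# Synthetic example: ground truth vs. a slightly perturbed reconstruction
gt = np.random.randint(0, 1024, size=(5000, 3)).astype(float)
rec = gt + np.random.normal(scale=0.5, size=gt.shape)
print(f"D1 PSNR: {d1_psnr(gt, rec, peak=1023):.2f} dB")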

Citation

@article{fan2022deep,
  title={Deep Geometry Post-Processing for Decompressed Point Clouds},
  author={Fan, Xiaoqing and Li, Ge and Li, Dingquan and Ren, Yurui and Gao, Wei and Li, Thomas H},
  journal={arXiv preprint arXiv:2204.13952},
  year={2022}
}

Acknowledgement

Some dataset preprocessing methods are derived from PCGCv1.
