# Pointnet2.ScanNet
PointNet++ semantic segmentation on ScanNet in PyTorch with CUDA acceleration, based on the original PointNet++ repo and a PyTorch implementation of PointNet++ with CUDA.
## Performance
The semantic segmentation results (in percent) on the ScanNet train/val split in `data/`:

| Avg | Floor | Wall | Cabinet | Bed | Chair | Sofa | Table | Door | Window | Bookshelf | Picture | Counter | Desk | Curtain | Refrigerator | Bathtub | Shower | Toilet | Sink | Others |
|-----|-------|------|---------|-----|-------|------|-------|------|--------|-----------|---------|---------|------|---------|--------------|---------|--------|--------|------|--------|
| 50.62 | 90.96 | 63.87 | 35.21 | 56.75 | 62.43 | 68.46 | 47.15 | 36.12 | 34.12 | 25.62 | 23.58 | 41.46 | 42.73 | 32.38 | 44.12 | 64.93 | 63.90 | 74.04 | 58.13 | 46.40 |
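As a quick sanity check on the table, the reported average is consistent with an unweighted mean over the 20 per-class scores (assuming the Avg column is an unweighted mean):

```python
# Per-class scores from the table above, in column order.
scores = [90.96, 63.87, 35.21, 56.75, 62.43, 68.46, 47.15, 36.12, 34.12,
          25.62, 23.58, 41.46, 42.73, 32.38, 44.12, 64.93, 63.90, 74.04,
          58.13, 46.40]

avg = sum(scores) / len(scores)
print(round(avg, 2))  # matches the reported 50.62
```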
Pretrained models: SSG, MSG
## Installation
### Requirements
- Linux (tested on Ubuntu 14.04/16.04)
- Python 3.6+
- PyTorch 1.0
- TensorBoardX
### Install
Install this library by running the following commands:

```shell
cd pointnet2
python setup.py install
```
### Configure
Change the path configurations for the ScanNet data in `lib/config.py`.
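A minimal sketch of what that edit might look like (the option names below, e.g. `SCANNET_DIR`, are assumptions for illustration; check `lib/config.py` for the actual names in your copy):

```python
# lib/config.py (sketch; actual option names may differ)
SCANNET_DIR = "/path/to/ScanNet"          # raw ScanNet release
SCANNET_FRAMES_ROOT = "/path/to/frames"   # extracted frames (optional, multiview)
```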
### Prepare multiview features (optional)
- Download the ScanNet frames here (~13GB) and unzip them.
- Extract the multiview features from ENet:

```shell
python compute_multiview_features.py
```

- Generate the projection mapping between image space and the point cloud:

```shell
python compute_multiview_projection.py
```

- Project the multiview features from image space onto the point cloud:

```shell
python project_multiview_features.py
```
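The projection step boils down to mapping each 3D point into a frame's image plane and gathering the 2D features at the landing pixel. A minimal NumPy sketch of that idea (the function name and the conventions here, e.g. camera-to-world poses and out-of-frustum points getting zeros, are assumptions, not the repo's actual code):

```python
import numpy as np

def project_points(points, intrinsics, pose, feat_map):
    """Project 3D points into one frame and gather per-point features.

    points:     (N, 3) world-space xyz
    intrinsics: (3, 3) camera matrix
    pose:       (4, 4) camera-to-world transform (assumed convention)
    feat_map:   (C, H, W) per-pixel features (e.g. from ENet)
    Returns (N, C) features; points outside the frustum get zeros.
    """
    C, H, W = feat_map.shape
    n = len(points)
    # World -> camera coordinates via the inverse pose.
    world_to_cam = np.linalg.inv(pose)
    homo = np.concatenate([points, np.ones((n, 1))], axis=1)
    cam = (world_to_cam @ homo.T).T[:, :3]
    # Camera -> pixel coordinates; keep only points in front of the camera.
    uvw = (intrinsics @ cam.T).T
    z = uvw[:, 2]
    valid = z > 1e-6
    u = np.zeros(n, dtype=np.int64)
    v = np.zeros(n, dtype=np.int64)
    u[valid] = np.round(uvw[valid, 0] / z[valid]).astype(np.int64)
    v[valid] = np.round(uvw[valid, 1] / z[valid]).astype(np.int64)
    valid &= (u >= 0) & (u < W) & (v >= 0) & (v < H)
    # Gather features at the landing pixels.
    feats = np.zeros((n, C), dtype=feat_map.dtype)
    feats[valid] = feat_map[:, v[valid], u[valid]].T
    return feats
```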
## Usage
### Preprocess ScanNet scenes
Parse the ScanNet data into `*.npy` files and save them in `preprocessing/scannet_scenes/`:

```shell
python preprocessing/collect_scannet_scenes.py
```
### Sanity check
Don't forget to visualize the preprocessed scenes to check their consistency:

```shell
python preprocessing/visualize_prep_scene.py --scene_id <scene_id>
```

The visualized `<scene_id>.ply` is stored in `preprocessing/label_point_clouds/`.
### Train
Train the PointNet++ semantic segmentation model on ScanNet scenes:

```shell
python train.py
```

The trained models and logs will be saved in `outputs/<time_stamp>/`.

Note: please refer to `train.py` for more training settings.
### Eval
Evaluate the trained models and report the segmentation performance in point accuracy, voxel accuracy, and calibrated voxel accuracy:

```shell
python eval.py --folder <time_stamp>
```
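For intuition: point accuracy is a per-point hit rate, while voxel accuracy scores each occupied voxel by comparing majority labels. A rough sketch of both metrics (the repo's actual metric code, including the calibrated variant that reweights classes, lives in `eval.py`; the voxel size below is an assumption):

```python
import numpy as np

def point_accuracy(preds, labels):
    # Fraction of points whose predicted label matches ground truth.
    return float((preds == labels).mean())

def voxel_accuracy(points, preds, labels, voxel_size=0.05):
    # Bucket points into voxels; a voxel counts as correct when the
    # majority predicted label equals the majority ground-truth label.
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    n_voxels = inverse.max() + 1
    correct = 0
    for v in range(n_voxels):
        mask = inverse == v
        if np.bincount(preds[mask]).argmax() == np.bincount(labels[mask]).argmax():
            correct += 1
    return correct / n_voxels
```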
### Vis
Visualize the semantic segmentation results on the points of a given scene:

```shell
python visualize.py --folder <time_stamp> --scene_id <scene_id>
```

The generated `<scene_id>.ply` is stored in `outputs/<time_stamp>/preds`. See the class palette here.
## Acknowledgement
- charlesq34/pointnet2: the paper authors' official code repo.
- sshaoshuai/Pointnet2.PyTorch: initial PyTorch implementation of PointNet++ with CUDA acceleration.