robustness_pc_detector

We propose the first robustness benchmark of point cloud detectors against common corruption patterns. We first introduce the corruption patterns collected for this benchmark and the dataset, then the evaluation metrics used in our benchmark, and finally the subject object detection methods and robustness enhancement methods selected for evaluation.

Installation

The code has been tested on Ubuntu 20.04 (though it is not limited to it) with Python 3.8 and the following tools:

  • PyGeM and PyMieScatt
  • SciPy and other basic tools, such as NumPy, os, argparse, and glob
  • For the weather simulation, please refer to LISA and Lidar_fog_simulation.

Most tools can be installed by pip install ${tool}.

Data Preparation

This kit of corruption simulation applies to KITTI data. Please download the KITTI 3D object detection dataset and organize the files as below:

data
├── kitti
│   │── ImageSets
│   │── training
│   │   ├──calib & velodyne & label_2 & image_2 
│   │── testing (optional)
│   │   ├──calib & velodyne & image_2

More details of the implementation can be found in the object and scene folders.

Corruption Simulation

We formulate 25 corruptions covering 2 affecting ranges $\times$ 4 corruption categories, i.e., {object, scene} $\times$ {weather, noise, density, transformation}.
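The two affecting ranges and four corruption categories combine into eight range-category groups, over which the 25 corruptions are distributed. The snippet below enumerates only this grouping grid, not the individual corruption names:

```python
from itertools import product

AFFECTING_RANGES = ["object", "scene"]
CATEGORIES = ["weather", "noise", "density", "transformation"]

# Each of the 25 corruptions falls into one of these 8 (range, category) groups.
GROUPS = list(product(AFFECTING_RANGES, CATEGORIES))
```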

For the implementation of the 25 corruptions, the README files under the object and scene folders detail the Python commands used to generate the simulated data.

We show some corruption examples based on the KITTI LiDAR example with ID = 000008 in the figures below. In addition, we provide the ground-truth annotations of objects and the detection results obtained by PVRCNN in the format of bounding boxes.

Clean

clean example

Snow

snow example

Scene-level uniform noise

uniform_rad example

Scene-level layer missing

layer_del example
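For intuition, a scene-level uniform noise corruption like the one shown above can be sketched with NumPy. This is a simplified illustration that perturbs xyz coordinates directly; the benchmark's actual noise model, ranges, and severity levels are defined in the scene folder, and the function name and severity value here are illustrative:

```python
import numpy as np

def uniform_noise_scene(points, severity=0.1, rng=None):
    """Perturb every point's xyz coordinates with uniform noise.

    points: (N, 4) array of KITTI velodyne points (x, y, z, intensity).
    severity: half-width of the uniform noise interval in meters
              (illustrative value, not the benchmark's exact setting).
    """
    rng = np.random.default_rng(rng)
    noisy = points.copy()
    # Add independent uniform noise to x, y, z; leave intensity untouched.
    noisy[:, :3] += rng.uniform(-severity, severity, size=(points.shape[0], 3))
    return noisy
```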

Tested Detection Methods

As in the paper, we tested 7 voxel-based methods, 3 point-voxel-based methods, and 2 point-based methods (we re-shaped PartA2 with a point-based data representation and a PointNet++ backbone) on OpenPCDet (https://github.com/open-mmlab/OpenPCDet). We also extended CenterPoint to versions with different data representations and different proposal architectures for a relatively fair evaluation.

Acknowledgement

Parts of the code are adapted from the officially released code of the following methods:

We would like to thank the authors for their proposed methods and official implementations.

Citation

If you find this project useful in your research, please consider citing:

@article{li2022common,
  title={Common Corruption Robustness of Point Cloud Detectors: Benchmark and Enhancement},
  author={Li, Shuangzhi and Wang, Zhijie and Juefei-Xu, Felix and Guo, Qing and Li, Xingyu and Ma, Lei},
  journal={arXiv preprint arXiv:2210.05896},
  year={2022}
}
