Pointcept / Pointcept

Pointcept: a codebase for point cloud perception research. Latest works: PTv3 (CVPR'24 Oral), PPT (CVPR'24), OA-CNNs (CVPR'24), MSC (CVPR'23)

About the ScanNetPP preprocessing

RayYoh opened this issue

Hi authors,

I find that the segments for the ScanNet++ val set differ from the results produced by the official toolkit. Here is an example.

[Screenshot: an example scene where the two sets of segments differ]
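For reference, the comparison amounts to something like the sketch below. The file paths and dict keys ('segment', 'vtx_labels') are placeholders for illustration; the actual key names depend on each pipeline's output format.

import numpy as np
import torch

pointcept = torch.load('/path/to/pointcept/scannetpp/val/scene.pth')  # placeholder path
official = torch.load('/path/to/scannetpp_pth/val_vtx/scene.pth')     # placeholder path

seg_a = np.asarray(pointcept['segment'])    # key name assumed
seg_b = np.asarray(official['vtx_labels'])  # key name assumed

print('vertices:', len(seg_a), len(seg_b))
print('label agreement: {:.2%}'.format(float((seg_a == seg_b).mean())))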

Hi, good point. Could you provide the configuration you used to create the processed dataset with the official toolkit? (I wrote the preprocessing code based on my own understanding of this dataset.)

Hi Xiaoyang, the YAML file I used for preprocessing is:

data:
  data_root: '/path/to/scannetpp/data'

  labels_path: '/path/to/scannetpp/metadata/semantic_benchmark/top100.txt'
  # for instance segmentation
  use_instances: true
  instance_labels_path: '/path/to/scannetpp/metadata/semantic_benchmark/top100_instance.txt'

  ## save multiple labels per vertex/point? ##
  # multilabel:
  #   max_gt: 3
  #   multilabel_only: false

  mapping_file: '/path/to/scannetpp/metadata/semantic_benchmark/map_benchmark.csv'

  list_path: '/path/to/scannetpp/splits/nvs_sem_val.txt'
  # list_path: '/mnt/nvme1/ray/dataset/scannetpp/splits/sem_test.txt'

  ignore_label: -1

  sample_factor: 1.0

  transforms:
    # read the mesh 
    - add_mesh_vertices
    # map raw labels to benchmark classes
    - map_label_to_index
    # use segments info to get labels on the vertices, handle multilabels
    - get_labels_on_vertices
    - add_normals
    # sample points on the mesh and transfer all vertex info to the points
    # - sample_points_on_mesh

# dir to save pth training data
out_dir: '/path/to/scannetpp_pth/val_vtx'

I then run the official toolkit with:
python -m semantic.prep.prepare_training_data semantic/configs/prepare_training_data.yml
Note that I commented out sample_points_on_mesh, because the code you provided does not seem to include this operation (is that correct?).
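For clarity, a sample_points_on_mesh step would typically do something like the sketch below: sample points uniformly on the mesh surface and transfer per-vertex attributes to each sample. This is my own illustration (using trimesh and a nearest-vertex transfer, both assumptions), not the toolkit's actual implementation; commenting the step out keeps one point per mesh vertex instead.

import numpy as np
import trimesh

mesh = trimesh.load('/path/to/scene/mesh.ply', process=False)  # placeholder path
points, face_idx = trimesh.sample.sample_surface(mesh, count=100_000)

# Transfer a per-vertex label to each sampled point via the nearest vertex
# of the face the point was sampled from (one common, simple strategy).
vertex_labels = np.load('/path/to/vertex_labels.npy')  # placeholder
faces = mesh.faces[face_idx]                                   # (N, 3) vertex ids
dists = np.linalg.norm(mesh.vertices[faces] - points[:, None, :], axis=-1)
nearest = faces[np.arange(len(points)), dists.argmin(axis=1)]
point_labels = vertex_labels[nearest]                          # (N,) labels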

The coord values are identical, and the colors match once the val_vtx colors are scaled by 255.
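For concreteness, that check is along the lines of the sketch below (paths and dict keys are again placeholders):

import numpy as np
import torch

a = torch.load('/path/to/pointcept/scannetpp/val/scene.pth')  # placeholder path
b = torch.load('/path/to/scannetpp_pth/val_vtx/scene.pth')    # placeholder path

# Coordinates agree exactly; colors agree once the val_vtx colors
# (stored in [0, 1]) are scaled by 255.
assert np.allclose(np.asarray(a['coord']), np.asarray(b['vtx_coords']))  # key names assumed
assert np.allclose(np.asarray(a['color']), np.asarray(b['vtx_colors']) * 255, atol=1)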