Yang7879 / 3D-BoNet

🔥3D-BoNet in Tensorflow (NeurIPS 2019, Spotlight)

Home Page: https://arxiv.org/abs/1906.01140

scannet processing

lapetite123 opened this issue · comments

Hello, I am very interested in your great work, and I want to use ScanNet to train your network. I followed #6 to process the dataset, but I still have some questions: is *_vh_clean_2.ply the raw point cloud? And when I use PointNet for processing, what classes should I set in indoor3d_util.py?

Hi @lapetite123, the *_vh_clean_2.ply file is the raw point cloud. To divide the raw point cloud into blocks, you need to keep all points. When training 3D-BoNet, you can use either the 20 classes or the full 40 classes (i.e., you may kick out uninterested points during training, but you have to use all points for online testing). If you need to train SparseConvNet for semantic prediction, please follow their settings.
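
For reference, a minimal sketch of reading the xyz/rgb values from a *_vh_clean_2.ply file. This is not part of the 3D-BoNet code and assumes the plyfile package is installed:

# Minimal sketch (not from this repo): read xyz + rgb from a ScanNet *_vh_clean_2.ply file.
# Assumes the plyfile package (pip install plyfile).
import numpy as np
from plyfile import PlyData

def load_raw_point_cloud(ply_path):
    ply = PlyData.read(ply_path)
    v = ply['vertex']
    xyz = np.stack([v['x'], v['y'], v['z']], axis=1).astype(np.float32)
    rgb = np.stack([v['red'], v['green'], v['blue']], axis=1).astype(np.float32)
    return xyz, rgb

xyz, rgb = load_raw_point_cloud('scene0000_00_vh_clean_2.ply')  # (N, 3), (N, 3)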

Thanks for your kind reply! I have successfully generated the sem and ins ground truth. Before dividing the raw point cloud into blocks, should the raw point cloud already include the sem/ins ground-truth labels?

@lapetite123 Yes, when dividing the point cloud into blocks, you also need to divide the sem/ins labels into each corresponding block.

The sem labels I generated look like this:
16
16
16
16
0
16
16
0
16
16
0
2
0
and the ins label file looks like this:
pred_mask/scene0000_00_1.txt 9 1.000000
pred_mask/scene0000_00_2.txt 9 1.000000
pred_mask/scene0000_00_3.txt 7 1.000000
pred_mask/scene0000_00_4.txt 12 1.000000
pred_mask/scene0000_00_5.txt 38 1.000000
pred_mask/scene0000_00_6.txt 16 1.000000
pred_mask/scene0000_00_7.txt 16 1.000000
pred_mask/scene0000_00_8.txt 14 1.000000
pred_mask/scene0000_00_9.txt 3 1.000000
Are they right? If so, I still don't know how to divide the sem/ins labels into the corresponding blocks, because they don't have any x, y, z attributes. In other words, I don't know which label belongs to which point, so I don't know how to split the sem/ins labels.

@lapetite123, simply append the sem and ins labels to each point {x, y, z, r, g, b, sem, ins} and then divide the point cloud.
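
A rough sketch of that idea (this is not the actual preprocessing helper of this repo; the block size and stride below are just placeholders), assuming xyz/rgb arrays and per-point sem/ins label arrays of the same length:

# Rough sketch (not the repo's helper): append sem/ins labels to each point and
# split the scene into blocks on the xy-plane. block_size/stride are placeholders.
import numpy as np

def divide_into_blocks(xyz, rgb, sem, ins, block_size=1.0, stride=1.0):
    # Each point becomes {x, y, z, r, g, b, sem, ins}.
    points = np.concatenate(
        [xyz, rgb, sem[:, None], ins[:, None]], axis=1).astype(np.float32)

    blocks = []
    xy_min, xy_max = xyz[:, :2].min(0), xyz[:, :2].max(0)
    for x0 in np.arange(xy_min[0], xy_max[0], stride):
        for y0 in np.arange(xy_min[1], xy_max[1], stride):
            mask = ((xyz[:, 0] >= x0) & (xyz[:, 0] < x0 + block_size) &
                    (xyz[:, 1] >= y0) & (xyz[:, 1] < y0 + block_size))
            if mask.any():
                blocks.append(points[mask])  # labels stay aligned with their points
    return blocks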

@lapetite123 Hi. I met the error "IOError: [Errno 2] No such file or directory: 'pred_mask/scene0000_00_1.txt'" when using the script:

python export_train_mesh_for_evaluation.py --scan_path /media/rose/Doc/Document/cv_projects/ScanNetDatav2/scans/scene0000_00 --output_file /media/rose/Doc/Document/cv_projects/prossed_scannet/scene0000_00.txt --label_map_file /media/rose/Doc/Document/cv_projects/ScanNetDatav2/scannetv2-labels.combined.tsv --type instance

How do you process ScanNet? Could you give me some advice?

You should create a directory named 'pred_mask' under '/ScanNet/BenchmarkScripts/3d_helpers'.
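
If it helps, the same thing can be done from Python before running the export command (just a convenience snippet, not part of the ScanNet scripts):

# Make sure the pred_mask output directory exists before running
# export_train_mesh_for_evaluation.py with --type instance.
import os
os.makedirs('pred_mask', exist_ok=True)  # relative to the directory you run the script from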

@lapetite123 Thanks for your help. By the way, I have some other questions.
First, should I export the other scenes, such as scene0000_01, as well? There are hundreds of scenes, so it will cost a lot of time.
Second, how do I organize the processed data for training, like S3DIS? Thank you in advance :)

Yes, all the scenes need to be processed. I used a bash script to handle them all:

#! /bin/bash
# Export the ground truth for every scene under the ScanNet scans folder.
pth="/home/data/scannetv2/scans/"
files=$(ls $pth)
for file in $files
do
    echo $file
    # Output file name: <scene_id>.txt
    a="$file.txt"
    echo $a
    python export_train_mesh_for_evaluation.py \
        --scan_path /home/data/scannetv2/scans/$file \
        --output_file /home/data/scannetv2/sem_gt/$a \
        --label_map_file /home/data/scannetv2/scannetv2-labels.combined.tsv \
        --type1 label
done

Note that the script above uses "--type1", so you should also change "--type" to "--type1" in the export code.

@lapetite123 Thanks for your help:)

@DRosemei Have you finished processing the ScanNet data? Can you tell me how to get the xyz/rgb values from the mesh?

@bonbonjour Sorry, I have switched to RandLA-Net, because I need data processing for KITTI.