Partial SPG calculation for custom_dataset
sandeepnmenon opened this issue · comments
I ran the partition code with my custom dataset paths and `--voxel_width 0.05`, and added the pruning code:
```python
xyz, rgb, labels = read_custom_data(data_file, label_file)
if args.voxel_width > 0:
    xyz, rgb, labels, _ = libply_c.prune(xyz.astype('f4'), args.voxel_width,
                                         rgb.astype('uint8'), labels.astype('uint8'),
                                         np.zeros(1, dtype='uint8'), n_labels, 0)
```
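For readers without the compiled `libply_c` at hand, here is what the pruning step does conceptually: collapse the cloud onto a voxel grid, keeping one averaged point and a majority-vote label per occupied voxel. This is only an illustrative numpy sketch of the idea, not the actual `libply_c.prune` implementation (which is compiled C++ and returns additional outputs):

```python
import numpy as np

def voxel_prune(xyz, rgb, labels, voxel_width):
    """Keep one averaged point per occupied voxel, with a majority-vote
    label. Illustrative numpy stand-in for libply_c.prune, not its
    actual implementation."""
    # Integer voxel index for every point
    idx = np.floor((xyz - xyz.min(0)) / voxel_width).astype(np.int64)
    # `inverse` maps each point to its voxel id
    _, inverse = np.unique(idx, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    n_vox = inverse.max() + 1
    counts = np.bincount(inverse, minlength=n_vox).astype(np.float64)[:, None]
    # Average coordinates and colors within each voxel
    xyz_out = np.zeros((n_vox, 3))
    rgb_out = np.zeros((n_vox, 3))
    np.add.at(xyz_out, inverse, xyz)
    np.add.at(rgb_out, inverse, rgb.astype(np.float64))
    xyz_out /= counts
    rgb_out /= counts
    # Majority label per voxel
    labels_out = np.zeros(n_vox, dtype=labels.dtype)
    for v in range(n_vox):
        vals, cnt = np.unique(labels[inverse == v], return_counts=True)
        labels_out[v] = vals[cnt.argmax()]
    return xyz_out.astype('f4'), rgb_out.round().astype('uint8'), labels_out
```

One thing worth checking with a sketch like this: `labels.astype('uint8')` silently wraps label ids above 255, so make sure your custom labels fit the dtype before pruning.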
I am using the features below (stacking rgb with the geometric features gave worse results):
```python
elif args.dataset == 'custom_dataset':
    # choose here which features to use for the partition
    features = geof
    geof[:, 3] = 2. * geof[:, 3]
```
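Since the partition here runs on `geof` alone, it may help to recall what those four columns are: per-point linearity, planarity, scattering, and verticality derived from the eigenvalues of each point's k-NN covariance (the line above doubles the weight of verticality). Below is a rough numpy sketch; the real values come from `libply_c.compute_geof`, and its exact verticality convention differs, so treat this purely as an illustration:

```python
import numpy as np

def geometric_features(xyz, k=20):
    """Per-point [linearity, planarity, scattering, verticality] from the
    eigenvalues of each point's k-NN covariance. Illustrative only; SPG's
    geof comes from libply_c.compute_geof, whose verticality convention
    differs."""
    n = xyz.shape[0]
    # Brute-force k nearest neighbors (fine for small clouds)
    d2 = ((xyz[:, None, :] - xyz[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d2, axis=1)[:, :k]
    geof = np.zeros((n, 4), dtype='f4')
    for i in range(n):
        pts = xyz[nn[i]]
        w, v = np.linalg.eigh(np.cov(pts.T))  # ascending eigenvalues
        l1, l2, l3 = w[2], w[1], w[0]         # l1 >= l2 >= l3
        s = max(l1, 1e-12)
        geof[i, 0] = (l1 - l2) / s            # linearity
        geof[i, 1] = (l2 - l3) / s            # planarity
        geof[i, 2] = l3 / s                   # scattering
        geof[i, 3] = 1.0 - abs(v[2, 0])       # verticality: normal vs. z axis
    return geof
```

If one half of the cloud partitions sensibly and the other does not, it can be worth visualizing these features directly on the bad region: degenerate neighborhoods (duplicated points, very uneven density) produce meaningless eigenvalues and hence noisy partitions.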
The arguments:

```
k_nn_geof: 20
k_nn_adj: 5
lambda_edge_weight: 1
reg_strength: 0.1
d_se_max: 0
use_voronoi: 0, 1 (tried both)
```
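For context on two of these knobs: `lambda_edge_weight` shapes the edge weights of the k-NN adjacency graph before the cut-pursuit partition, so that closer neighbors get stronger edges. The sketch below mirrors the `1 / (lambda + d / mean(d))` form used in SPG's partition script, but verify the exact formula against your checkout:

```python
import numpy as np

def edge_weights(distances, lambda_edge_weight=1.0):
    """Turn k-NN edge distances into adjacency-graph edge weights:
    closer neighbors -> heavier edges. Mirrors the 1/(lambda + d/mean(d))
    form in SPG's partition script; treat as an assumption."""
    d = np.asarray(distances, dtype='f4')
    return (1.0 / (lambda_edge_weight + d / d.mean())).astype('f4')
```

`reg_strength` then sets the coarseness of the cut-pursuit partition: larger values produce fewer, larger superpoints, so 0.1 gives a fairly fine partition.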
The result from the top view is as follows. You can see that one half of the point cloud has superpoints, but the other is completely random. Is there a range parameter that I am missing?
Hi!
We are releasing a new version of SuperPoint Graph called SuperPoint Transformer (SPT).
It is better in every way:
| ✨ SPT in numbers ✨ |
|---|
| 📊 SOTA results: 76.0 mIoU on S3DIS 6-Fold, 63.5 mIoU on KITTI-360 Val, 79.6 mIoU on DALES |
| 🦋 212k parameters only! |
| ⚡ Trains on S3DIS in 3h on 1 GPU |
| ⚡ Preprocessing is 7× faster than SPG! |
| 🚀 Easy install (no more boost!) |
If you are interested in lightweight, high-performance 3D deep learning, you should check it out. In the meantime, we will finally retire SPG and stop maintaining this repo.