loicland / superpoint_graph

Large-scale Point Cloud Semantic Segmentation with Superpoint Graphs

about superpoint embedding

LeanderWayne opened this issue · comments

Hi, I've been reading your paper and trying to understand the superpoint embedding part. As I understand it, each superpoint should be subsampled to np=128 points before being embedded by the PointNet, am I right? I can't find the corresponding code in main.py.
You define run_full_monger(self, model, clouds_meta, clouds_flag, clouds, clouds_global) in pointnet.py,
but call it as embeddings = ptnCloudEmbedder.run(model, *clouds_data). Does *clouds_data include clouds_flag, clouds and clouds_global?

I also wonder where --ptn_npts=128 actually takes effect and subsamples the superpoints. Looking forward to your reply. @loicland

each superpoint should be subsampled to np=128 points before being embedded by the PointNet

Yes

does *clouds_data include clouds_flag, clouds and clouds_global?

Yes, as a tuple.
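
To make the unpacking concrete, here is a minimal, self-contained sketch of what the star operator does. The function below is a hypothetical stand-in with the same signature as run_full_monger; only the argument passing matters here.

```python
# Hypothetical stand-in for CloudEmbedder.run_full_monger, used only to show
# how *clouds_data is spread across the positional parameters.
def run_full_monger(model, clouds_meta, clouds_flag, clouds, clouds_global):
    return (clouds_flag, clouds, clouds_global)

model = "model"
# The loader packs the per-batch data into one tuple:
clouds_data = ("meta", "flag", "clouds", "clouds_global")

# run(model, *clouds_data) therefore passes clouds_meta, clouds_flag, clouds
# and clouds_global as separate positional arguments:
assert run_full_monger(model, *clouds_data) == ("flag", "clouds", "clouds_global")
```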

where does --ptn_npts=128 actually take effect and subsample the superpoints?

See loader and load_superpoint in /learning/spg.py.
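
For intuition, here is a minimal sketch of the fixed-size subsampling that --ptn_npts controls. The function name and sampling details are illustrative, not the repo's exact load_superpoint.

```python
import numpy as np

def subsample_superpoint(points, num_points=128, rng=None):
    """Illustrative stand-in for load_superpoint: return exactly num_points points,
    sampling without replacement when the superpoint has enough points and with
    replacement otherwise, so every superpoint yields a fixed-size np x d matrix
    for the PointNet."""
    rng = rng if rng is not None else np.random.default_rng()
    n = points.shape[0]
    idx = rng.choice(n, num_points, replace=(n < num_points))
    return points[idx]  # shape (num_points, d), e.g. (128, d) for --ptn_npts=128

subsample_superpoint(np.random.rand(37, 3)).shape  # -> (128, 3)
```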

After being processed by the MLP, an np*dp matrix becomes a width*dp matrix. I suppose that in every layer there is a weight matrix of size width*dp? Are those weights also trainable parameters?

The weights of the MLP are all trained end-to-end, and the sizes of the MLPs are given by the --ptn_widths option. The superpoint embedding is almost exactly PointNet; I recommend starting with that paper if it is not clear how we embed the superpoints.
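
If it helps, here is a minimal PyTorch sketch of that kind of shared-MLP embedding. It is an illustration rather than the repo's ptnCloudEmbedder: widths plays the role of --ptn_widths, and every Linear layer's weight matrix is an ordinary trainable parameter learned end-to-end with the rest of the network.

```python
import torch
import torch.nn as nn

class TinyPointNetEmbedder(nn.Module):
    """PointNet-style superpoint embedder (illustrative only)."""
    def __init__(self, d_in=3, widths=(64, 64, 128)):
        super().__init__()
        layers, prev = [], d_in
        for w in widths:                       # one Linear+ReLU per entry of widths
            layers += [nn.Linear(prev, w), nn.ReLU()]
            prev = w
        self.mlp = nn.Sequential(*layers)      # shared across all np input points

    def forward(self, pts):                    # pts: (np, d_in), e.g. np = 128
        feats = self.mlp(pts)                  # (np, widths[-1]) per-point features
        return feats.max(dim=0).values         # (widths[-1],) one vector per superpoint

embedding = TinyPointNetEmbedder()(torch.randn(128, 3))  # shape: (128,)
```

The max pooling over the np points makes the embedding invariant to the order of the points, which is the core PointNet idea.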