loicland / superpoint_graph

Large-scale Point Cloud Semantic Segmentation with Superpoint Graphs

what is "induced graph" in the paper?

largeword opened this issue

Hi Dr. Landrieu, your work is excellent.

I'm not a native English speaker, though, so I have some trouble with parts of your paper. In Section 3.5, in the "Training" paragraph, you write: "Note that as the induced graph is a union of small neighborhoods, relationships over many hops may still be formed and learned."

What is "induced graph" and what do you mean "many hops"?

Hi,
This describes our graph-augmentation strategy. There are two reasons why we do this:

  • large graphs can be challenging to handle during training. By considering only a subgraph, we can use larger batch sizes.
  • in a dataset such as Semantic3D, there are only 15 training clouds, and hence only 15 superpoint graphs. Training on these graphs alone runs the risk that the graph-convolution scheme overfits to particular graph configurations. By sampling random subgraphs, we mitigate this risk and learn from a greater variety of graph configurations.

Now, how does this work (a rough sketch in code follows the list):
(i) we sample a random superpoint in the graph
(ii) we add its 3-hop neighborhood with respect to the SPG (its neighbors, their neighbors, and the neighbors of those)
(iii) if we have fewer than a fixed number (typically 512) of true superpoints (i.e. superpoints with more than n_min = 40 points in them), we go back to (i).
The resulting "induced graph" is simply the subgraph of the SPG restricted to the selected superpoints and the edges between them.
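
To make the three steps concrete, here is a minimal sketch of the sampling loop in Python using networkx. The graph `spg`, the node attribute `point_count`, and the function name `sample_training_subgraph` are illustrative assumptions, not the actual repository code.

```python
import random
import networkx as nx


def sample_training_subgraph(spg, n_min=40, target_true=512, hops=3):
    """Sample a random induced subgraph of the superpoint graph (SPG).

    Repeatedly pick a random superpoint, add its `hops`-hop neighborhood,
    and stop once the selection contains at least `target_true` 'true'
    superpoints (those with more than `n_min` points).
    """
    selected = set()

    def n_true(nodes):
        # a 'true' superpoint is one containing more than n_min points
        # ('point_count' is an assumed node attribute for this sketch)
        return sum(1 for n in nodes if spg.nodes[n]["point_count"] > n_min)

    while n_true(selected) < target_true and len(selected) < spg.number_of_nodes():
        # (i) sample a random superpoint
        seed = random.choice(list(spg.nodes))
        # (ii) add its 3-hop neighborhood w.r.t. the SPG
        reachable = nx.single_source_shortest_path_length(spg, seed, cutoff=hops)
        selected.update(reachable)
        # (iii) the while-condition sends us back to (i) while we still have
        # fewer than target_true true superpoints

    # the induced graph: the selected superpoints plus every SPG edge
    # whose endpoints are both selected
    return spg.subgraph(selected)
```

Because the result is a union of 3-hop neighborhoods around several seeds, message passing on it can still propagate information over many hops, which is what the quoted sentence in the paper refers to.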

Clearer?

Clear, thanks a lot!