ma-xu / pointMLP-pytorch

[ICLR 2022 poster] Official PyTorch implementation of "Rethinking Network Design and Local Geometry in Point Cloud: A Simple Residual MLP Framework"

About Geometric Affine Module.

popopochan opened this issue · comments

Hi, @ma-xu
I understand that the Geometric Affine Module overcomes the drawback of applying an affine transformation to the input point cloud via a shared MLP.
Would this module still be effective when the input is a neighborhood patch rather than the entire point cloud?
Also, you mention that α and β are learnable parameters.
Thank you.

commented

Probably not. When applied to a neighborhood patch, there would be only k·d values (k is the number of neighbors, and d is the feature dimension), which may not be enough to obtain reliable statistical information.

But I didn't try it. Any results are welcome.
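To illustrate the statistical point above, here is a small, hypothetical demo (the array sizes are stand-ins, not values from the repo): a standard deviation estimated over a whole point set is a single stable number, while per-patch estimates over only k points fluctuate from patch to patch.

```python
import torch

torch.manual_seed(0)
full = torch.randn(1024, 64)   # stand-in for a whole point cloud's features
k = 24                          # stand-in neighborhood size

# std over the entire set: one estimate from 1024*64 values
full_std = full.std()

# std per k-point "patch": each estimate uses only k*64 values,
# so the estimates vary noticeably across patches
patch_stds = torch.stack([full[i:i + k].std() for i in range(0, 1024, k)])
spread = patch_stds.std()  # how much the patch-level estimates disagree
```

The larger `spread` is relative to `full_std`, the less trustworthy a single patch's statistics are.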

OK, I will try using the dataloader to divide the input point cloud into patches (each of about 500 points) and apply this affine transformation.
Also, how are the alpha and beta parameters learned?
Thank you.

commented

@popopochan

self.affine_alpha = nn.Parameter(torch.ones([1,1,1,channel + add_channel]))
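For context, a minimal sketch of how that line fits into the module (the class name here is hypothetical; shapes follow the repo's convention, and `add_channel` is omitted for simplicity). Because α and β are registered via `nn.Parameter`, they receive gradients in the backward pass and are updated by the optimizer like any other network weight:

```python
import torch
import torch.nn as nn

class GeometricAffine(nn.Module):
    """Sketch of the Geometric Affine Module: normalize grouped
    neighborhood features by their mean/std, then rescale and shift
    with learnable alpha/beta (trained by ordinary backprop)."""

    def __init__(self, channel):
        super().__init__()
        # registered as Parameters -> updated by the optimizer
        self.affine_alpha = nn.Parameter(torch.ones([1, 1, 1, channel]))
        self.affine_beta = nn.Parameter(torch.zeros([1, 1, 1, channel]))

    def forward(self, grouped_points):
        # grouped_points: [B, G, K, C] -- batch, groups, neighbors, channels
        mean = grouped_points.mean(dim=2, keepdim=True)       # per-group mean
        # one std per sample, computed over all of its grouped values
        std = torch.std((grouped_points - mean).reshape(grouped_points.shape[0], -1),
                        dim=-1, keepdim=True).unsqueeze(-1).unsqueeze(-1)
        normalized = (grouped_points - mean) / (std + 1e-5)
        return self.affine_alpha * normalized + self.affine_beta
```

So there is no special training procedure for α and β: any loss that flows through the module's output produces gradients for them.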

Thank you.
In Figure 6 of the paper, the Local Grouper maps [1024, 64] ⇨ [512, 24, 64].
However, in the actual code, the Local Grouper's input is [1024, 64] and its output is [512, 24, 128].
Is this handled differently from the paper? (The number of channels changes.)

commented

Same as the paper.
The k·d I mentioned refers to a local neighborhood patch. For example, in [512, 24, 128], 512 is the number of groups, 24 is the number of k-nearest neighbors in the local kNN, and 128 is the feature dimension. So within one local neighborhood patch there are k·d = 24 × 128 values. Hope the explanation makes it clear. Let me know if you have any further questions.

Thank you for your response, that clears things up.