ma-xu / pointMLP-pytorch

[ICLR 2022 poster] Official PyTorch implementation of "Rethinking Network Design and Local Geometry in Point Cloud: A Simple Residual MLP Framework"


Maxpooling not performed within each stage?

thatgeeman opened this issue · comments

Hello,

Congrats on this cool work! My question is why the max-pool operation takes place outside of the for loop in the forward pass of the model, at the line referenced here:

    for i in range(self.stages):
        # Give xyz[b, p, 3] and fea[b, p, d], return new_xyz[b, g, 3] and new_fea[b, g, k, d]
        xyz, x = self.local_grouper_list[i](xyz, x.permute(0, 2, 1))  # [b, g, 3], [b, g, k, d]
        x = self.pre_blocks_list[i](x)  # [b, d, g]
        x = self.pos_blocks_list[i](x)  # [b, d, g]
    x = F.adaptive_max_pool1d(x, 1).squeeze(dim=-1)

Specifically, it seems that the per-stage aggregation doesn't take place, i.e., each stage only computes $\phi_{pos}(\phi_{pre}(f_{i,j}))$.

According to how it is defined in the paper, $\phi_{pos}(\mathcal{A}(\phi_{pre}(f_{i,j})))$, I would have expected to do this instead:

    def forward(self, ...):
        ...
        for i in range(self.stages):
            xyz, x = self.local_grouper_list[i](xyz, x.permute(0, 2, 1))
            x = self.pre_blocks_list[i](x)
            x = F.adaptive_max_pool1d(x, 1).squeeze(dim=-1)  # pooling inside the loop, after the pre block
            x = self.pos_blocks_list[i](x)
        ...

Thanks for clarifying this.

Ah, just saw that it's done within the Pre block:

    x = F.adaptive_max_pool1d(x, 1).view(batch_size, -1)
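
For anyone else who lands here, a minimal sketch of that idea (the class name `SimplePreBlock` and the single conv layer are illustrative, not the repo's actual `PreExtraction` code): the pre-block runs a shared MLP over every neighbor and then max-pools over the $k$ neighbors of each group, so each stage effectively computes $\phi_{pos}(\mathcal{A}(\phi_{pre}(f_{i,j})))$.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class SimplePreBlock(nn.Module):
        """Illustrative pre-block: a shared MLP over each neighbor, followed by
        max-pooling over the k neighbors of every group (the A(.) step)."""

        def __init__(self, in_channels, out_channels):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Conv1d(in_channels, out_channels, kernel_size=1),
                nn.BatchNorm1d(out_channels),
                nn.ReLU(inplace=True),
            )

        def forward(self, x):
            # x: [b, g, k, d] -- batch, groups, neighbors per group, channels
            b, g, k, d = x.size()
            x = x.permute(0, 1, 3, 2).reshape(-1, d, k)      # [b*g, d, k]
            x = self.mlp(x)                                  # phi_pre on every neighbor
            x = F.adaptive_max_pool1d(x, 1).view(b * g, -1)  # A(.): max over the k neighbors
            return x.reshape(b, g, -1).permute(0, 2, 1)      # [b, d, g]


    # Shape check: 2 batches, 64 groups, 24 neighbors, 32 channels in, 64 out
    blk = SimplePreBlock(32, 64)
    print(blk(torch.randn(2, 64, 24, 32)).shape)  # torch.Size([2, 64, 64]) -> [b, d, g]

With the aggregation folded in like this, `self.pos_blocks_list[i]` already receives the pooled [b, d, g] tensor, and the `adaptive_max_pool1d` after the loop looks like a separate, global pooling over the remaining g groups to build the final feature.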

Thanks!