ma-xu / pointMLP-pytorch

[ICLR 2022 poster] Official PyTorch implementation of "Rethinking Network Design and Local Geometry in Point Cloud: A Simple Residual MLP Framework"

Some questions about global_context and cls_token

mmiku1 opened this issue · comments

Hi @ma-xu
```python
gmp_list.append(F.adaptive_max_pool1d(self.gmp_map_list[i](x_list[i]), 1))
global_context = self.gmp_map_end(torch.cat(gmp_list, dim=1))  # [b, gmp_dim, 1]
```

```python
x = torch.cat([x, global_context.repeat([1, 1, x.shape[-1]]), cls_token.repeat([1, 1, x.shape[-1]])], dim=1)
```

The features from each encoder stage are max-pooled and concatenated into a global context, which is then concatenated only with the output of the last decoder layer (together with the cls_token).
Why not concatenate it with each decoder layer instead?
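
For reference, here is a minimal, self-contained sketch of the path I am asking about; the class name, module layout, and dimensions (`GlobalContextSketch`, `enc_dims`, `gmp_dim`, `cls_dim`) are my own assumptions for illustration, not the exact repo code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch of the global-context path (assumed shapes and module names,
# not the exact repo implementation).
class GlobalContextSketch(nn.Module):
    def __init__(self, enc_dims=(128, 256, 512, 1024), gmp_dim=64, cls_dim=64, num_classes=16):
        super().__init__()
        # one 1x1 conv per encoder stage, mapping its channels to gmp_dim
        self.gmp_map_list = nn.ModuleList([
            nn.Sequential(nn.Conv1d(d, gmp_dim, 1), nn.BatchNorm1d(gmp_dim), nn.ReLU())
            for d in enc_dims
        ])
        # fuse the pooled per-stage vectors into a single global descriptor
        self.gmp_map_end = nn.Sequential(
            nn.Conv1d(gmp_dim * len(enc_dims), gmp_dim, 1), nn.BatchNorm1d(gmp_dim), nn.ReLU()
        )
        # embed the one-hot object category into a cls token
        self.cls_map = nn.Sequential(nn.Conv1d(num_classes, cls_dim, 1), nn.ReLU())

    def forward(self, enc_feats, dec_feat, cls_label):
        # enc_feats: list of [B, C_i, N_i] features, one per encoder stage
        # dec_feat:  [B, C, N] output of the last decoder layer
        # cls_label: [B, num_classes, 1] one-hot object category
        gmp_list = [F.adaptive_max_pool1d(m(f), 1)                       # [B, gmp_dim, 1] each
                    for m, f in zip(self.gmp_map_list, enc_feats)]
        global_context = self.gmp_map_end(torch.cat(gmp_list, dim=1))   # [B, gmp_dim, 1]
        cls_token = self.cls_map(cls_label)                              # [B, cls_dim, 1]
        n = dec_feat.shape[-1]
        # broadcast both length-1 tokens over all N points and concat once, only at the last decoder layer
        return torch.cat([dec_feat,
                          global_context.repeat([1, 1, n]),
                          cls_token.repeat([1, 1, n])], dim=1)
```

So the output has C + gmp_dim + cls_dim channels per point, and the global context and cls_token enter the decoder only at this final stage.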

Thanks.

commented

@mmiku1 It could be done, but it is unnecessary. Empirically, this setting already achieves promising performance.

Thank you for your answer. Wish you a happy life!