lucidrains / point-transformer-pytorch

Implementation of the Point Transformer layer, in Pytorch

The layer structure and mask

ayushais opened this issue · comments

Hi,

Thanks for this contribution. In the implementation of attn_mlp, the first linear layer increases the dimension. Is this a standard practice? I could not find any details about it in the paper. The paper also does not describe the use of a mask; is that likewise a standard practice for attention layers?
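Roughly, this is the structure I mean (my own paraphrase to illustrate the question, not copied verbatim from the repo):

```python
import torch.nn as nn

dim = 128                  # feature dimension of the layer
attn_mlp_hidden_mult = 4   # the expansion factor I am referring to

# the first Linear expands the dimension by attn_mlp_hidden_mult,
# the second projects it back down to dim
attn_mlp = nn.Sequential(
    nn.Linear(dim, dim * attn_mlp_hidden_mult),
    nn.ReLU(),
    nn.Linear(dim * attn_mlp_hidden_mult, dim)
)
```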

Thanks!!

I think the mask is used for cases similar to the Transformer in NLP, e.g. masking out padded or invalid points, if you need that.
If you don't have any special purpose, just set the mask to all ones.
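For example, something along these lines should work (a minimal sketch; the constructor arguments and call signature follow the README usage as I understand it, so double-check against the current code):

```python
import torch
from point_transformer_pytorch import PointTransformerLayer

layer = PointTransformerLayer(
    dim = 128,
    pos_mlp_hidden_dim = 64,
    attn_mlp_hidden_mult = 4
)

feats = torch.randn(1, 16, 128)   # per-point features, (batch, num_points, dim)
pos   = torch.randn(1, 16, 3)     # xyz coordinates,    (batch, num_points, 3)
mask  = torch.ones(1, 16).bool()  # all ones: every point is treated as valid

out = layer(feats, pos, mask = mask)  # (1, 16, 128)
```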