lucidrains / point-transformer-pytorch

Implementation of the Point Transformer layer, in Pytorch

The shape of vector attention map

Liu-Feng opened this issue

Firstly, thanks for your awesome work!!!
I have a point of confusion: if vector attention can modulate individual feature channels, the attention map should have an axis whose dimension is the same as the feature dimension of the corresponding value (V). Based on Eq. (2) in the paper, if the shape of the query, key, and value is [batch_size, num_points, num_dim],
then the shape of the vector attention map should be [batch_size, num_points, num_points, num_dim].
Looking forward to your reply!
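
For concreteness, here is a minimal PyTorch sketch (my own illustration, not the repository's actual code) of the shape argument above, assuming q, k, v of shape [batch_size, num_points, num_dim] and a pairwise subtraction relation as in Eq. (2):

```python
import torch

batch, num_points, dim = 2, 16, 64
q = torch.randn(batch, num_points, dim)
k = torch.randn(batch, num_points, dim)
v = torch.randn(batch, num_points, dim)

# pairwise subtraction q_i - k_j broadcasts to [batch, num_points, num_points, dim]
rel = q.unsqueeze(2) - k.unsqueeze(1)
print(rel.shape)  # torch.Size([2, 16, 16, 64])

# an MLP mapping dim -> dim (rather than dim -> 1) applied to `rel` keeps this last
# axis, so the resulting attention map can modulate each feature channel of v separately
```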

@Liu-Feng Hi Feng! Admittedly, I wrote this repository and forgot to follow up and double-check whether it is correct

Do you mean to say it should be like https://github.com/lucidrains/point-transformer-pytorch/pull/3/files ?

@lucidrains Thanks!! If the last dimension of attn_mlp's output is dim rather than 1, I think it could be used to modulate individual feature channels. But I cannot confirm which axis the softmax should be applied along. Thanks for your reply!
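
To make the difference concrete, here is a hedged sketch (my own illustration, not the exact layers in the PR) contrasting an attn_mlp whose final layer outputs 1, which gives a single scalar weight per (query, key) pair, with one whose final layer outputs dim, which gives one weight per feature channel:

```python
import torch
import torch.nn as nn

dim = 64
scalar_attn_mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))
vector_attn_mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

rel = torch.randn(2, 16, 16, dim)     # hypothetical q_i - k_j relative features
print(scalar_attn_mlp(rel).shape)     # torch.Size([2, 16, 16, 1])  -> one weight per pair
print(vector_attn_mlp(rel).shape)     # torch.Size([2, 16, 16, 64]) -> one weight per channel
```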

@Liu-Feng i'm pretty sure the softmax will be across the similarities of queries against all the keys (dimension j)
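
As a small illustration of that axis choice (assuming attention logits of shape [batch, i, j, dim], where i indexes queries, j indexes keys, and dim is the feature channel), the softmax over the keys would be taken along the j axis:

```python
import torch
import torch.nn.functional as F

batch, i, j, d = 2, 16, 16, 64
sim = torch.randn(batch, i, j, d)   # hypothetical pre-softmax attention logits
attn = F.softmax(sim, dim=-2)       # normalize over the key axis j
print(attn.sum(dim=-2)[0, 0, :5])   # each channel's weights over the keys sum to 1
```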

I've merged the PR, so do let me know whether this works or doesn't work in your training

Thank you!

Thanks for your reply!!! I have not trained the model yet. I am trying to construct the Point Transformer in TensorFlow.

@Liu-Feng hello! how's your tensorflow port going? are you more certain that you had the correct hunch here?

Hello, I carefully read the Point Transformer paper, and the vector attention is conducted on a local patch of the point cloud (as pointed out in the paper). Applying vector attention within a local patch reduces the memory cost, since the number of points in a local patch is much smaller than in the whole point cloud.

If the number of down-sampled points is N, the dimension of the learned feature is D, and B is the batch size, then the feature shape is BND and the shape of the grouped feature is BNKD, where K is the number of points in each local patch (the K of KNN).

Thus the attention may be conducted between BND and BNKD: the shapes of the Query, Key, and Value are BND, BNKD, and BNKD, respectively. The attention weight after subtraction and mapping is BNKD (BND - BNKD, like the relative coordinates used for point-cloud grouping in PointNet++). Then the Hadamard product is taken between the attention weight (BNKD) and the Value (BNKD), followed by a reduce_sum over the K dimension (axis=2). Therefore, the shape of the output feature is BND (after the squeeze along dim 2).

In this way, the vector weight can refine every channel of the Value. But it works like cross-attention rather than self-attention. All of these steps are based on my understanding of the paper.
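
Here is a rough PyTorch sketch of the flow described above, based only on my reading of the paper (the kNN grouping step and the attn_mlp layers are assumptions for illustration, not taken from any official code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# B = batch, N = down-sampled points, K = neighbors per local patch, D = feature dim
B, N, K, D = 2, 128, 16, 64
feats = torch.randn(B, N, D)          # per-point features, B x N x D
grouped = torch.randn(B, N, K, D)     # hypothetical grouped neighbor features, B x N x K x D

attn_mlp = nn.Sequential(             # maps the subtraction result to per-channel weights
    nn.Linear(D, D),
    nn.ReLU(),
    nn.Linear(D, D),
)

# query (B x N x 1 x D) minus grouped keys (B x N x K x D) -> relative features B x N x K x D
rel = feats.unsqueeze(2) - grouped
weights = F.softmax(attn_mlp(rel), dim=2)   # normalize over the K neighbors in each patch

# Hadamard product with the grouped values, then sum over K -> B x N x D
out = (weights * grouped).sum(dim=2)
print(out.shape)  # torch.Size([2, 128, 64])
```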

The real workflow of the Point Transformer may differ from my understanding, and the truth could be uncovered once the authors open-source the code.

By the way, some interesting things may be found in the original vector attention paper (whose code is open-sourced), which was written by the same authors as the Point Transformer.

Have a good day!!