hehefan / P4Transformer

Implementation of the "Point 4D Transformer Networks for Spatio-Temporal Modeling in Point Cloud Videos" paper.

Questions about frame-level self-attention

create7859 opened this issue · comments

Hello.
The ablation study section of the paper reports results for performing self-attention at the frame level.
When you implemented this in code, did you use a for loop to apply the transformer to the tokens of each frame one by one, or did you have another way to compute all frames at once?
I'm curious about your approach.
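For context, a common way to avoid a per-frame loop (this is an assumption about a typical implementation strategy, not the authors' confirmed code) is to fold the frame axis into the batch axis, so one call to the attention module processes every frame independently in parallel. The stdlib-only sketch below uses a toy self-attention with identity projections just to show that the looped and folded versions are equivalent; in a real framework the fold is a single reshape, e.g. `(B, L, N, C) -> (B*L, N, C)`, followed by one transformer call.

```python
import math

def attention(tokens):
    # Toy self-attention with identity Q/K/V projections, only to
    # illustrate the reshaping argument; not the paper's module.
    # tokens: list of C-dimensional vectors (one frame's tokens).
    scale = math.sqrt(len(tokens[0]))
    out = []
    for q in tokens:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / scale for k in tokens]
        m = max(scores)                       # stabilized softmax
        w = [math.exp(s - m) for s in scores]
        z = sum(w)
        w = [wi / z for wi in w]
        out.append([sum(wi * v[c] for wi, v in zip(w, tokens))
                    for c in range(len(q))])
    return out

def per_frame_loop(video):
    # video: (frames, tokens, channels) as nested lists.
    # Explicit Python loop: one attention call per frame.
    return [attention(frame) for frame in video]

def folded_batch(video):
    # "All at once" view: each frame becomes an independent element of
    # the folded batch. Here the fold is trivial because we use nested
    # lists; with tensors it is a reshape, then ONE batched attention
    # call handles every frame, and a second reshape unfolds the result.
    flat = list(video)                 # fold frame axis into batch axis
    return [attention(frame) for frame in flat]

# 2 frames, 2 tokens per frame, 2 channels per token (made-up data).
video = [[[1.0, 0.0], [0.0, 1.0]],
         [[2.0, 1.0], [1.0, 2.0]]]
assert per_frame_loop(video) == folded_batch(video)
```

Since self-attention at the frame level never mixes tokens across frames, the two formulations are mathematically identical; the folded version is simply friendlier to GPU parallelism.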