XLechter / SDT

Code for Point Cloud Completion via Skeleton-Detail Transformer

About Skeleton-Detail Transformer architecture

yugang-guo opened this issue

commented

Hello, thanks for open-sourcing such excellent work. I have a question after reading your paper. Fig. 3 describes the 'Skeleton-Detail Transformer architecture. It consists of a self-attention layer, a cross-attention layer, and an optional global self-attention layer.'
When I read your code, however, the self-attention layer does not seem to be applied before the cross-attention layer. Have I misunderstood something?

Hi, @yugang-guo. I checked the code and found you are correct; I am sorry for this. It seems I uploaded the code used for an ablation study, in which I commented out the self-attention layers:

```python
# self.sa_coarse = SA_Layer(out_channel_size_1)
# x1_coarse = self.sa_coarse(x1_coarse)
```

It should work without any other changes if you just uncomment these lines, but the corresponding trained models seem to have been deleted, since I have graduated.
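For anyone running into the same question, here is a minimal self-contained sketch of the intended ordering: self-attention over the skeleton (coarse) features first, then cross-attention against the detail features. Note that `SA_Layer` below is a toy multi-head stand-in, and `SkeletonDetailBlock`, `cross_attn`, `x_coarse`, and `x_detail` are illustrative names for this sketch, not the exact implementation in this repo.

```python
import torch
import torch.nn as nn

# Toy stand-in for the repo's SA_Layer; only the call signature matters here.
class SA_Layer(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads=4, batch_first=True)

    def forward(self, x):                       # x: (B, N, C)
        out, _ = self.attn(x, x, x)             # self-attention over point tokens
        return x + out                          # residual connection

class SkeletonDetailBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.sa_coarse = SA_Layer(channels)     # the layer commented out in the ablation code
        self.cross_attn = nn.MultiheadAttention(channels, num_heads=4, batch_first=True)

    def forward(self, x_coarse, x_detail):
        x_coarse = self.sa_coarse(x_coarse)     # self-attention first, as in Fig. 3
        out, _ = self.cross_attn(x_detail, x_coarse, x_coarse)  # then cross-attention
        return x_detail + out

# Quick shape check: 128 skeleton tokens are self-attended, then guide 512 detail tokens.
block = SkeletonDetailBlock(64)
y = block(torch.randn(2, 128, 64), torch.randn(2, 512, 64))
print(y.shape)  # torch.Size([2, 512, 64])
```

The query/key assignment here (detail features as queries, self-attended skeleton features as keys and values) is one plausible reading of Fig. 3; check the repo's actual forward pass for the exact direction.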

commented

OK, thanks for your reply.