megvii-research / MOTR

[ECCV2022] MOTR: End-to-End Multiple-Object Tracking with TRansformer

Attention Map

amindehnavi opened this issue

Hi, is there any way to generate the output attention maps of the `model.transformer.decoder.layers[i].cross_attn` layer? When I follow the referenced functions, I eventually get stuck at the `MSDA.ms_deform_attn_forward` call in the `forward` method of the `MSDeformAttnFunction` class, located in `./models/ops/functions/ms_deform_attn_func.py`, and I couldn't find any argument I could set to `True` to get the attention map in the output.
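As far as I can tell from Deformable-DETR-style implementations, `MSDeformAttn.forward` computes the per-sampling-point weights with a small `nn.Linear` projection (`self.attention_weights`) followed by a softmax, and only then calls the fused CUDA op, which returns just the output tensor. So rather than looking for a flag on `MSDA.ms_deform_attn_forward`, you can capture the logits with a forward hook on that `Linear` submodule and re-apply the softmax yourself. A minimal sketch; the head/level/point counts below are assumed defaults (check your config), and the demo uses a stand-in `Linear` instead of the real model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed MSDeformAttn defaults -- verify against your model config.
N_HEADS, N_LEVELS, N_POINTS, D_MODEL = 8, 4, 4, 256

captured = {}

def attn_weight_hook(module, inputs, output):
    """Re-apply the softmax that MSDeformAttn.forward applies to these logits."""
    n, len_q, _ = output.shape
    w = output.view(n, len_q, N_HEADS, N_LEVELS * N_POINTS)
    w = F.softmax(w, -1).view(n, len_q, N_HEADS, N_LEVELS, N_POINTS)
    captured["attention_weights"] = w.detach()

# Demo with a stand-in Linear shaped like MSDeformAttn.attention_weights.
# On the real model you would instead register the hook on the layer path
# from the question (hypothetical, untested against MOTR itself):
#   layer = model.transformer.decoder.layers[i].cross_attn
#   handle = layer.attention_weights.register_forward_hook(attn_weight_hook)
proj = nn.Linear(D_MODEL, N_HEADS * N_LEVELS * N_POINTS)
handle = proj.register_forward_hook(attn_weight_hook)
query = torch.randn(2, 10, D_MODEL)  # (batch, num_queries, d_model)
_ = proj(query)
handle.remove()

w = captured["attention_weights"]
print(w.shape)  # torch.Size([2, 10, 8, 4, 4])
```

Note that these are per-sampling-point weights, not a dense H×W attention map: each query attends to only `N_POINTS` sampled locations per head and level. To visualize them spatially you would also need the `sampling_locations` computed in the same `forward`, and scatter the weights onto the feature-map grid.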

Referenced code (each shown as a screenshot in the original issue):

- `./models/deformable_transformer_plus` / `DeformableTransformerDecoderLayer`
- `./models/ops/modules/ms_deform_attn.py`
- `./models/ops/functions/ms_deform_attn_func.py`