frank-xwang / InstanceDiffusion

[CVPR 2024] Code release for "InstanceDiffusion: Instance-level Control for Image Generation"

Home Page: https://people.eecs.berkeley.edu/~xdwang/projects/InstDiff/


question about Instance-Masked Attention

jsg921019 opened this issue · comments

Thank you for sharing this interesting work.

I have a question about Instance-Masked Attention.
The current code does not seem to apply Instance-Masked Attention (return_att_masks = False).
Is this because skipping Instance-Masked Attention yields better generation quality?

Secondly, is Instance-Masked Attention applied during training, or only at inference?

Thank you in advance.
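For context, the sketch below illustrates one way an instance-level attention mask could be constructed, so that image tokens inside an instance's region attend only to that instance's text tokens. The helper name, shapes, and token layout are hypothetical, for illustration only, not the repository's actual implementation:

```python
import torch

def build_instance_attn_mask(inst_masks: torch.Tensor, tokens_per_inst: int) -> torch.Tensor:
    # inst_masks: (num_inst, num_img_tokens), 1 where an image token lies
    # inside that instance's region (hypothetical layout, for illustration).
    # Returns: (num_img_tokens, num_inst * tokens_per_inst) boolean mask,
    # where True means this image token may attend to this instance text token.
    allowed = inst_masks.t().bool()                           # (num_img_tokens, num_inst)
    return allowed.repeat_interleave(tokens_per_inst, dim=1)  # expand per text token

# Two instances over four image tokens, two text tokens per instance.
inst_masks = torch.tensor([[1, 1, 0, 0],
                           [0, 0, 1, 1]])
print(build_instance_attn_mask(inst_masks, tokens_per_inst=2))
```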

Hi, thank you for your interest. We currently set return_att_masks to False because Flash Attention does not yet support attention masks (check it here). However, if speed and memory usage are not primary concerns for your application, you can set return_att_masks to True. Note that we did have this option enabled during training. Hope it helps!
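To illustrate the tradeoff described above: in PyTorch, passing an attention mask to scaled_dot_product_attention makes the input ineligible for the Flash Attention kernel, so the call falls back to a slower, more memory-hungry path. This is a minimal PyTorch sketch of that effect, not InstanceDiffusion's code:

```python
import torch
import torch.nn.functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

# (batch, heads, seq_len, head_dim)
q = torch.randn(1, 8, 64, 32, device=device, dtype=dtype)
k, v = torch.randn_like(q), torch.randn_like(q)

# No mask: on CUDA this can dispatch to the fast Flash Attention kernel.
out_fast = F.scaled_dot_product_attention(q, k, v)

# With a boolean mask (True = attend), the flash kernel cannot be used and
# PyTorch falls back to a slower implementation that materializes the mask.
attn_mask = torch.ones(64, 64, dtype=torch.bool, device=device)
out_masked = F.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask)
```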

Thank you for the precise and fast feedback!