facebookresearch / xformers

Hackable and optimized Transformers building blocks, supporting a composable construction.

Home Page: https://facebookresearch.github.io/xformers/

Request for tutorial on how to modify an attention processor into its xformers version

JWargrave opened this issue · comments

Hi there. I am new to xformers and want to fine-tune IP-Adapter with xformers. To do this, I need to implement an xformers version of IPAdapterAttnProcessor2_0. Comparing XFormersAttnProcessor with AttnProcessor2_0, I found two key lines of code:

hidden_states = xformers.ops.memory_efficient_attention(
    query, key, value, attn_bias=attention_mask, op=self.attention_op, scale=attn.scale
) # in XFormersAttnProcessor

hidden_states = F.scaled_dot_product_attention(
    query, key, value, attn_mask=attention_mask, dropout_p=0.0, is_causal=False
) # in AttnProcessor2_0

But it seems the conversion is not as simple as swapping this one call, since the two processors also reshape their tensors differently around the attention call.
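One concrete difference (a sketch, not a full answer): `F.scaled_dot_product_attention` expects tensors laid out as `[batch, heads, seq, head_dim]`, while `xformers.ops.memory_efficient_attention` expects `[batch, seq, heads, head_dim]` (or a 3D `[batch*heads, seq, head_dim]`). So the surrounding head-splitting reshapes must change along with the call itself. A minimal illustration using only PyTorch (the xformers-layout path is emulated here with SDPA plus transposes, since `memory_efficient_attention` performs the equivalent computation internally):

```python
import torch
import torch.nn.functional as F

def sdpa_layout(x, heads):
    # [B, M, H*K] -> [B, H, M, K], as AttnProcessor2_0 prepares tensors for SDPA
    b, m, inner = x.shape
    return x.view(b, m, heads, inner // heads).transpose(1, 2)

def xformers_layout(x, heads):
    # [B, M, H*K] -> [B, M, H, K], the 4D layout memory_efficient_attention expects
    b, m, inner = x.shape
    return x.view(b, m, heads, inner // heads)

q = torch.randn(2, 16, 8 * 64)  # [batch, seq, heads * head_dim]

# SDPA-style call: heads come before the sequence dimension
ref = F.scaled_dot_product_attention(
    sdpa_layout(q, 8), sdpa_layout(q, 8), sdpa_layout(q, 8)
)  # -> [2, 8, 16, 64]

# xformers-style layout, emulated with SDPA by transposing in and out
out = F.scaled_dot_product_attention(
    xformers_layout(q, 8).transpose(1, 2),
    xformers_layout(q, 8).transpose(1, 2),
    xformers_layout(q, 8).transpose(1, 2),
).transpose(1, 2)  # -> [2, 16, 8, 64]

# Same attention result, just a different head/sequence layout
assert torch.allclose(ref.transpose(1, 2), out)
```

This is why a processor conversion touches more than the one attention line: every `view`/`transpose` that splits and merges heads has to match the layout the replacement kernel expects.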

Is there any tutorial on how to modify an attention processor into its xformers version?

Thanks a lot.