PKU-YuanGroup / LanguageBind

【ICLR 2024🔥】 Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment

Home Page: https://arxiv.org/abs/2310.01852


Combination of multiple modalities

anthony-mendil opened this issue

First of all, congrats on the paper and thanks for providing the code!

In the paper, under 'Zero-shot language-based multi-modal joint retrieval', you mention that integrating/combining multiple embeddings improves performance. I am specifically referring to the sentence:

'Similar trends have been observed in other modalities, where each modality has the potential to enhance the performance when combined with other modalities.'

However, the paper does not clarify how the embeddings for the different modalities are actually combined. If, for instance, the input modalities are text, audio, video, and depth, the model produces an individual embedding for each modality. How do you then combine these embeddings to obtain the results you report?
Do you simply average the different embeddings?

Thanks in advance,
Anthony Mendil.

Yes, we just average the logits of the two modalities.
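
For concreteness, a minimal sketch of what averaging the logits of two modalities could look like for retrieval, assuming precomputed, L2-normalized embeddings from the respective encoders (all names and the `logit_scale` value here are illustrative, not the repository's actual API):

```python
import torch

def combined_retrieval_logits(text_emb, modality_embs, logit_scale=100.0):
    """Average the text-vs-modality similarity logits over several modalities.

    text_emb:      (num_texts, dim)  L2-normalized text embeddings
    modality_embs: list of (num_items, dim) L2-normalized embeddings,
                   one tensor per modality (e.g. RGB video and infrared)
    Returns a (num_texts, num_items) logit matrix.
    """
    logits_per_modality = [
        logit_scale * text_emb @ emb.t() for emb in modality_embs
    ]
    # Combine the modalities by simple averaging of their logits.
    return torch.stack(logits_per_modality).mean(dim=0)

# Example: Infrared+RGB -> Text retrieval with random placeholder embeddings.
dim, n_items, n_texts = 768, 8, 8
rgb_emb = torch.nn.functional.normalize(torch.randn(n_items, dim), dim=-1)
ir_emb = torch.nn.functional.normalize(torch.randn(n_items, dim), dim=-1)
text_emb = torch.nn.functional.normalize(torch.randn(n_texts, dim), dim=-1)

logits = combined_retrieval_logits(text_emb, [rgb_emb, ir_emb])
ranks = logits.argsort(dim=-1, descending=True)  # retrieval ranking per text
```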

Is the code for this available? I cannot seem to locate it in the repository. If not, could you perhaps provide it? For example, for the Infrared+RGB -> Text task.

Thanks in advance,
Anthony Mendil.

And is there a specific reason to average the logits rather than directly averaging the embeddings produced for the modalities? For the retrieval task in particular, no logits are computed if I understand correctly. How would this be done without the logits?
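
For reference, the two alternatives being asked about could be sketched as follows (placeholder tensors as in the sketch above; this is not code from the repository):

```python
import torch
import torch.nn.functional as F

def avg_logits(text_emb, rgb_emb, ir_emb):
    # Option A: compute one similarity matrix per modality, then average them.
    return 0.5 * (text_emb @ rgb_emb.t() + text_emb @ ir_emb.t())

def avg_embeddings(text_emb, rgb_emb, ir_emb):
    # Option B: average (and re-normalize) the embeddings, then compute a
    # single similarity matrix against the text embeddings.
    fused = F.normalize(0.5 * (rgb_emb + ir_emb), dim=-1)
    return text_emb @ fused.t()
```

In CLIP-style retrieval the "logits" are just the cosine similarities scaled by a temperature, so averaging logits amounts to averaging these similarity matrices. Because the dot product is linear, Option A is identical (up to the scale factor) to using the un-normalized mean embedding; the only difference Option B introduces is the per-item re-normalization of the fused embedding.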