There are 10 repositories under the multimodal-fusion topic.
This repository contains the official implementation code of the paper Improving Multimodal Fusion with Hierarchical Mutual Information Maximization for Multimodal Sentiment Analysis, accepted at EMNLP 2021.
Multimodal Co-Attention Transformer for Survival Prediction in Gigapixel Whole Slide Images - ICCV 2021
Code for selecting an action based on multimodal inputs; in this case, the inputs are voice and text.
Creating multimodal multitask models
Multimodal sentiment analysis using hierarchical fusion with context modeling
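The hierarchical-fusion idea above (fusing modality pairs first, then combining the pairwise representations at a higher level) can be sketched minimally as follows. This is an illustrative toy, not any listed repository's architecture: the gating weight, feature dimension, and `fuse` function are assumptions standing in for learned fusion layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy utterance-level features for three modalities (dimension 8 is arbitrary).
text = rng.standard_normal(8)
audio = rng.standard_normal(8)
video = rng.standard_normal(8)

def fuse(a, b, w=0.5):
    # Simple fixed-weight combination standing in for a learned fusion layer.
    return np.tanh(w * a + (1 - w) * b)

# Level 1: fuse each pair of modalities.
ta = fuse(text, audio)
tv = fuse(text, video)
av = fuse(audio, video)

# Level 2: fuse the pairwise representations into one trimodal vector.
trimodal = fuse(fuse(ta, tv), av)

print(trimodal.shape)  # (8,)
```

A learned variant would replace the fixed weight with trainable projections and feed `trimodal` into a sentiment classifier head.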
[CVAMD 2021] "End-to-End Learning of Fused Image and Non-Image Feature for Improved Breast Cancer Classification from MRI"
Few-shot malware classification using fused features from static and dynamic analysis.
FusionBrain Challenge 2.0: creating a multimodal multitask model
This repository contains the dataset and baselines described in the paper: M2H2: A Multimodal Multiparty Hindi Dataset For Humor Recognition in Conversations
Deep-HOSeq: Deep Higher-Order Sequence Fusion for Multimodal Sentiment Analysis.
Multimodal sentiment analysis
A generalized self-supervised training paradigm for unimodal and multimodal alignment and fusion.
[FR|EN - Trio] 2023-2024 Centrale Méditerranée AI Master | Multimodal transcription with text, audio, and video
Repo for "Centaur: Robust Multimodal Fusion for Human Activity Recognition"
A Transferability-guided Protein-Ligand Interaction Prediction Method
🖼️ Latest Papers on Visually (Imagination)-Augmented NLP
Repository for context-based emotion recognition
We propose Multi-Modal Segmentation TransFormer (MMSFormer) that incorporates a novel fusion strategy to perform multimodal material segmentation.