There are 22 repositories under the multimodal-sentiment-analysis topic.
MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation
This repository contains various models targeting multimodal representation learning and multimodal fusion for downstream tasks such as multimodal sentiment analysis.
Multimodal sentiment analysis: multiple fusion methods based on BERT + ResNet
This repository contains the official implementation code of the paper Improving Multimodal Fusion with Hierarchical Mutual Information Maximization for Multimodal Sentiment Analysis, accepted at EMNLP 2021.
Context-Dependent Sentiment Analysis in User-Generated Videos
The code for our IEEE Access (2020) paper Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion.
CM-BERT: Cross-Modal BERT for Text-Audio Sentiment Analysis (MM 2020)
Code for the paper "VistaNet: Visual Aspect Attention Network for Multimodal Sentiment Analysis", AAAI'19
This repository contains the implementation of the paper "Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis"
😎 Awesome lists about Speech Emotion Recognition
Multimodal sentiment analysis using hierarchical fusion with context modeling
A survey of deep multimodal emotion recognition.
NAACL 2022 paper on Analyzing Modality Robustness in Multimodal Sentiment Analysis
This paper list is about multimodal sentiment analysis.
Research on improving text sentiment analysis with facial features extracted from video using machine learning.
[EMNLP 2022] This repository contains the official implementation of the paper "MM-Align: Learning Optimal Transport-based Alignment Dynamics for Fast and Accurate Inference on Missing Modality Sequences"
DeepCU: Integrating Both Common and Unique Latent Information for Multimodal Sentiment Analysis, IJCAI-19
Bimodal and Unimodal Sentiment Analysis of Internet Memes (Image+Text)
Emotion recognition methods using facial expressions, speech, audio, and multimodal data
Multimodal sentiment analysis
Code and splits for the paper "A Fair and Comprehensive Comparison of Multimodal Tweet Sentiment Analysis Methods", In Proceedings of the 2021 Workshop on Multi-Modal Pre-Training for Multimedia Understanding (MMPT '21), August 21, 2021, Taipei, Taiwan
This repository contains the code for the paper "Sentiment-driven statistical causality in multimodal systems", by Ioannis Chalkiadakis, Anna Zaremba, Gareth W. Peters and Michael J. Chantler.
Sentiment analysis, summarization, and tagging with MongoDB Atlas and Gemini, Google Cloud's AI model
Multimodal emotion recognition on two benchmark datasets, RAVDESS and SAVEE, from audio-visual information using CNNs (Convolutional Neural Networks)
Multimodal Emotion Recognition using ClipBERT.
Multimodal Sentiment Analysis of video reviews on social media platform, using a supervised fuzzy rule-based system.
Official Git repository for "Hakimov, S., and Schlangen, D., (2023). Images in Language Space: Exploring the Suitability of Large Language Models for Vision & Language Tasks. Findings of the Association for Computational Linguistics (ACL 2023 Findings)"