Repositories under the multimodal-emotion-recognition topic:
MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation
Lightweight and Interpretable ML Model for Speech Emotion Recognition and Ambiguity Resolution (trained on IEMOCAP dataset)
A collection of datasets for the purpose of emotion recognition/detection in speech.
The code for our INTERSPEECH 2020 paper "Jointly Fine-Tuning 'BERT-like' Self Supervised Models to Improve Multimodal Speech Emotion Recognition".
The code for our IEEE Access (2020) paper "Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion".
Human emotion understanding using a multimodal dataset.
😎 Awesome lists about Speech Emotion Recognition
A survey of deep multimodal emotion recognition.
The repo contains an audio emotion detection model, a facial emotion detection model, and a combined model that predicts emotions from video.
A Tensorflow implementation of Speech Emotion Recognition using Audio signals and Text Data
SERVER: Multi-modal Speech Emotion Recognition using Transformer-based and Vision-based Embeddings
Emotion recognition from speech and text using different heterogeneous ensemble learning methods; all experiments classify multimodal data. Published in the Springer journal Multimedia Tools and Applications.
An audio-text multimodal emotion recognition model that is robust to missing data.
This API uses a pre-trained model for emotion recognition from audio files: it accepts an audio file as input, runs the model on it, and returns the predicted emotion along with a confidence score. It is built on the FastAPI framework for easy development and deployment.
Official repo for "Multi-Corpus Emotion Recognition Method based on Cross-Modal Gated Attention Fusion" in INTERSPEECH 2024
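Several of the repos above combine per-modality predictions (e.g. the audio + facial video model, or the gated attention fusion work). As a rough illustration of the simplest variant, late fusion, here is a minimal sketch; the emotion label set, weights, and function name are illustrative assumptions, not taken from any of these repositories:

```python
# Illustrative late-fusion sketch (hypothetical, not from any listed repo):
# average per-modality emotion probabilities and pick the top label.

EMOTIONS = ["angry", "happy", "neutral", "sad"]  # assumed label set

def late_fusion(audio_probs, face_probs, audio_weight=0.5):
    """Weighted average of two per-modality probability vectors.

    Returns the fused top label and its fused probability.
    """
    assert len(audio_probs) == len(face_probs) == len(EMOTIONS)
    w = audio_weight
    fused = [w * a + (1 - w) * f for a, f in zip(audio_probs, face_probs)]
    total = sum(fused)  # renormalise in case inputs were not exact distributions
    fused = [p / total for p in fused]
    best = max(range(len(fused)), key=fused.__getitem__)
    return EMOTIONS[best], fused[best]

# Example: audio leans "happy", face agrees -> fused prediction is "happy".
label, confidence = late_fusion([0.1, 0.6, 0.2, 0.1], [0.2, 0.5, 0.2, 0.1])
```

More sophisticated approaches in the listed papers replace this averaging with learned fusion, e.g. transformer-based feature fusion or cross-modal gated attention.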