There are 33 repositories under the affective-computing topic.
:computer: :robot: A summary of our attempts at using Deep Learning approaches for Emotional Text-to-Speech :speaker:
Emotion-LLaMA: Multimodal Emotion Recognition and Reasoning with Instruction Tuning
Official implementation of the paper "Estimation of continuous valence and arousal levels from faces in naturalistic conditions", Antoine Toisoul, Jean Kossaifi, Adrian Bulat, Georgios Tzimiropoulos and Maja Pantic, Nature Machine Intelligence, 2021
Video2Music: Suitable Music Generation from Videos using an Affective Multimodal Transformer model
A curated list of awesome affective computing 🤖❤️ papers, software, open-source projects, and resources
This is my reading list for my PhD in AI, NLP, Deep Learning and more.
A machine learning application for emotion recognition from speech
From Pixels to Sentiment: Fine-tuning CNNs for Visual Sentiment Prediction
This repository contains the source code for our paper: "Husformer: A Multi-Modal Transformer for Multi-Modal Human State Recognition". For more details, please refer to our paper at https://arxiv.org/abs/2209.15182.
😎 Awesome lists about Speech Emotion Recognition
Spatial Temporal Graph Convolutional Networks for Emotion Perception from Gaits
🚀 Pre-process, annotate, evaluate, and train on your Affective Computing (e.g., Multimodal Emotion Recognition, Sentiment Analysis) datasets ALL within MER-Factory! (LangGraph-based agent workflow)
This is the official implementation of the paper "Speech2AffectiveGestures: Synthesizing Co-Speech Gestures with Generative Adversarial Affective Expression Learning".
personal repository
ABAW3 (CVPRW): A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition
IEEE T-BIOM : "Audio-Visual Fusion for Emotion Recognition in the Valence-Arousal Space Using Joint Cross-Attention"
Multimodal Deep Learning Framework for Mental Disorder Recognition @ FG'20
FG2021: Cross Attentional AV Fusion for Dimensional Emotion Recognition
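The ABAW3, T-BIOM, and FG2021 entries above all center on joint cross-attention for audio-visual fusion. A minimal sketch of the core idea, with hypothetical feature dimensions and no claim to match any listed paper's exact formulation: queries come from one modality while keys and values come from the other, so each stream attends to the other before fusion.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Minimal audio-visual cross-attention sketch (hypothetical dimensions)."""
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.a2v = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.v2a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(2 * dim, 2)  # predict (valence, arousal)

    def forward(self, audio: torch.Tensor, video: torch.Tensor) -> torch.Tensor:
        # audio: (B, Ta, dim), video: (B, Tv, dim)
        a_att, _ = self.a2v(query=audio, key=video, value=video)  # audio attends to video
        v_att, _ = self.v2a(query=video, key=audio, value=audio)  # video attends to audio
        # Pool each attended stream over time, then fuse by concatenation.
        fused = torch.cat([a_att.mean(dim=1), v_att.mean(dim=1)], dim=-1)
        return self.head(fused)  # (B, 2) continuous valence-arousal

model = CrossAttentionFusion()
va = model(torch.randn(2, 50, 256), torch.randn(2, 30, 256))
print(va.shape)  # torch.Size([2, 2])
```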
ABAW6 (CVPR-W): We achieved second place in the valence-arousal challenge.
This is the official implementation of the paper "Text2Gestures: A Transformer-Based Network for Generating Emotive Body Gestures for Virtual Agents".
EmoInt provides a high-level wrapper for combining various word embeddings and creating ensembles from multiple trained models.
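EmoInt's two ideas (stacking word-embedding features, then ensembling trained models) can be sketched independently of the library's actual API, which is not reproduced here; the toy embedding tables and scikit-learn ensemble below are illustrative stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for two pre-trained word-embedding tables (word -> vector).
# A real pipeline would load e.g. GloVe and word2vec files here.
EMB_A = {"joy": np.array([0.9, 0.1]), "fear": np.array([0.1, 0.8])}
EMB_B = {"joy": np.array([0.7, 0.2, 0.3]), "fear": np.array([0.2, 0.9, 0.1])}

def featurize(text: str) -> np.ndarray:
    """Concatenate the mean vector from each embedding table (zeros if no hit)."""
    tokens = text.lower().split()
    feats = []
    for emb in (EMB_A, EMB_B):
        hits = [emb[t] for t in tokens if t in emb]
        dim = len(next(iter(emb.values())))
        feats.append(np.mean(hits, axis=0) if hits else np.zeros(dim))
    return np.concatenate(feats)

X = np.stack([featurize(t) for t in ["joy joy", "fear", "joy fear", "fear fear"]])
y = np.array([1, 0, 1, 0])  # toy labels: 1 = positive emotion, 0 = negative

# Soft-voting ensemble over two independently trained models.
ensemble = VotingClassifier(
    [("lr", LogisticRegression()), ("rf", RandomForestClassifier(n_estimators=10))],
    voting="soft",
).fit(X, y)
print(ensemble.predict([featurize("joy")]))
```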
Using deep recurrent networks to recognize horses' pain expressions in video.
Supplementary code for the K-EmoCon dataset
Diploma thesis analyzing emotion recognition in conversations using physiological signals (ECG, HRV, GSR, TEMP) and an attention-based LSTM network
Valence-Arousal-Dominance (VAD) analysis of text using affective lexicons (ANEW, SentiWordNet, and VADER)
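The lexicon-based approach named in the entry above reduces to a simple baseline: score a text by averaging the per-word Valence-Arousal-Dominance ratings from an affective lexicon such as ANEW. A minimal sketch, using hypothetical stand-in ratings rather than the actual ANEW data file:

```python
# Tiny stand-in lexicon: word -> (valence, arousal, dominance) on ANEW's 1-9 scale.
# Values below are hypothetical, for illustration only.
VAD_LEXICON = {
    "happy":  (8.2, 6.5, 6.6),
    "calm":   (6.9, 2.8, 6.4),
    "afraid": (2.0, 6.4, 3.2),
    "angry":  (2.9, 7.2, 5.5),
}

def vad_score(text: str):
    """Average the VAD ratings of all lexicon words found in `text`."""
    hits = [VAD_LEXICON[w] for w in text.lower().split() if w in VAD_LEXICON]
    if not hits:
        return None  # no covered words; real systems back off or smooth here
    n = len(hits)
    return tuple(sum(dim) / n for dim in zip(*hits))

print(vad_score("I feel happy but a little afraid"))  # -> (5.1, 6.45, 4.9)
```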
IEEE Transactions on Affective Computing, 2022
This is an official PyTorch implementation of "Gesture2Vec: Clustering Gestures using Representation Learning Methods for Co-speech Gesture Generation" (IROS 2022).
PyTorch code for "M³T: Multi-Modal Multi-Task Learning for Continuous Valence-Arousal Estimation"
Real-time emotion recognition in e-learning using various physiological signals
Facial expression recognition package built on PyTorch and Microsoft's FER+ dataset.
Artemis Speaker Tools B
[TVCG 2024] ReactFace: Online Multiple Appropriate Facial Reaction Generation in Dyadic Interactions
[ACII Demo] Emolysis: A Multimodal Open-Source Group Emotion Analysis and Visualization Toolkit