PyTorch implementation for the paper:

Title: Multi-Scale Receptive Field Graph Model for Emotion Recognition in Conversations

Authors: Jie Wei, Guanyu Hu, Luu Anh Tuan, Xinyu Yang, Wenjing Zhu

Submitted to: ICASSP 2023
Clone the repository:

    git clone https://github.com/Janie1996/MSRFG.git

You can create and activate an Anaconda environment with:

    conda env create -f environment.yaml
    conda activate MSRFG
a. Download the dataset from Google Drive, unzip it, and put the files under ./data/
b. Download the model checkpoints from Google Drive, unzip them, and put the files under ./checkpoints/
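If the archives do not already create them, the two target directories named above can be made up front. A minimal sketch; only the data and checkpoints paths are taken from the steps above:

```shell
# Pre-create the directories the dataset and checkpoint archives unpack into
mkdir -p data checkpoints
```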
To evaluate the pretrained models:

Run IEMOCAP:

    python eval_iemocap.py

Run MELD:

    python eval_meld.py
To train the proposed model:

Run IEMOCAP:

    python train/train_iemocap.py

Run MELD:

    python train/train_meld.py
To fine-tune the utterance encoders:

- Wav2Vec 2.0 (audio)
- RoBERTa-Large (text)
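As a rough illustration of what utterance-level encoding with these two models looks like, here is a minimal sketch using the HuggingFace transformers library. This is not the repository's fine-tuning code: the function names, the mean-pooling choice, and the Wav2Vec 2.0 checkpoint name (facebook/wav2vec2-base-960h) are assumptions; only RoBERTa-Large and Wav2Vec 2.0 themselves come from the list above.

```python
# Hypothetical sketch of utterance encoding; the repo's actual code may differ.

def encode_text_utterance(text, model_name="roberta-large"):
    """Mean-pool RoBERTa token embeddings into one utterance vector."""
    # transformers/torch imported lazily so this module loads without them
    import torch
    from transformers import RobertaModel, RobertaTokenizer

    tokenizer = RobertaTokenizer.from_pretrained(model_name)
    model = RobertaModel.from_pretrained(model_name)
    model.eval()
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, 1024)
    return hidden.mean(dim=1).squeeze(0)            # (1024,)


def encode_audio_utterance(waveform_16khz,
                           model_name="facebook/wav2vec2-base-960h"):
    """Mean-pool Wav2Vec 2.0 frame features into one utterance vector."""
    import torch
    from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

    extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name)
    model = Wav2Vec2Model.from_pretrained(model_name)
    model.eval()
    inputs = extractor(waveform_16khz, sampling_rate=16000,
                       return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, num_frames, 768)
    return hidden.mean(dim=1).squeeze(0)            # (768,)
```

For fine-tuning rather than feature extraction, the same models would be left in training mode and optimized jointly with a classification head.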
If you have questions, feel free to contact weijie_xjtu@stu.xjtu.edu.cn
Related resources:

- IEMOCAP: Interactive Emotional Dyadic Motion Capture Database
- MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations
- Directed Acyclic Graph Network for Conversational Emotion Recognition