
TensorFlow implementation of "Multimodal Speech Emotion Recognition using Audio and Text", IEEE SLT-18

Paper: https://arxiv.org/abs/1810.04635

multimodal-speech-emotion

This repository contains the source code used in the following paper,

Multimodal Speech Emotion Recognition using Audio and Text, IEEE SLT-18, [paper]


[requirements]

tensorflow==1.4 (tested on cuda-8.0, cudnn-6.0)
python==2.7
scikit-learn==0.20.0
nltk==3.3

[download data corpus]

  • IEMOCAP [link] [paper]
  • download the IEMOCAP data from its original web page (a license agreement is required)

[preprocessed-data schema (our approach)]

  • for preprocessing, refer to the code in "./preprocessing"
  • If you want to download the preprocessed corpus from us directly, please send us an email after obtaining the license from the IEMOCAP team.
  • We cannot publish the ASR-processed transcriptions due to license issues (commercial API); however, it should be moderately easy to extract ASR transcripts from the audio signal yourself. (we used the Google Cloud Speech API)
  • Examples

    MFCC : MFCC features of the audio signal (ex. train_audio_mfcc.npy)
    MFCC-SEQN : valid length of each audio sequence (ex. train_seqN.npy)
    PROSODY : prosody features of the audio signal (ex. train_audio_prosody.npy)
    LABEL : target label of the audio signal (ex. train_label.npy)
    TRANS : indexed transcription sequence of each example (ex. train_nlp_trans.npy)
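As a sketch of how the preprocessed arrays fit together, the snippet below builds tiny dummy arrays following the schema above and loads them back with NumPy. All shapes (the MFCC dimension, prosody dimension, sequence lengths, and number of emotion classes) are illustrative assumptions, not the repository's actual configuration.

```python
import numpy as np

# Illustrative sizes only -- the real feature dimensions depend on the
# preprocessing configuration in ./preprocessing.
N_EXAMPLES, MAX_AUDIO_LEN, N_MFCC = 4, 50, 39
MAX_TRANS_LEN, VOCAB_SIZE = 20, 1000

# Build dummy arrays following the schema above
# (MFCC / MFCC-SEQN / PROSODY / LABEL / TRANS) and save them as .npy files.
np.save("train_audio_mfcc.npy",
        np.random.randn(N_EXAMPLES, MAX_AUDIO_LEN, N_MFCC).astype(np.float32))
np.save("train_seqN.npy",
        np.random.randint(1, MAX_AUDIO_LEN + 1, size=N_EXAMPLES))
np.save("train_audio_prosody.npy",
        np.random.randn(N_EXAMPLES, 35).astype(np.float32))  # 35 is an assumed prosody dim
np.save("train_label.npy",
        np.random.randint(0, 4, size=N_EXAMPLES))  # 4 assumed emotion classes
np.save("train_nlp_trans.npy",
        np.random.randint(0, VOCAB_SIZE, size=(N_EXAMPLES, MAX_TRANS_LEN)))

# Load everything back and sanity-check that the arrays line up per example.
mfcc = np.load("train_audio_mfcc.npy")
seq_n = np.load("train_seqN.npy")
prosody = np.load("train_audio_prosody.npy")
label = np.load("train_label.npy")
trans = np.load("train_nlp_trans.npy")

assert mfcc.shape[0] == seq_n.shape[0] == prosody.shape[0] \
       == label.shape[0] == trans.shape[0]
print(mfcc.shape, trans.shape)
```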

[source code]

  • this repository contains code for the following models:

    Audio Recurrent Encoder (ARE)
    Text Recurrent Encoder (TRE)
    Multimodal Dual Recurrent Encoder (MDRE)
    Multimodal Dual Recurrent Encoder with Attention (MDREA)

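To make the fusion idea behind MDRE concrete, here is a minimal NumPy sketch (not the repository's TensorFlow code): each modality is reduced to a fixed-size encoding by taking the hidden state at the last valid timestep of its recurrent encoder, the two encodings are concatenated, and a softmax layer produces emotion probabilities. The random arrays stand in for RNN state sequences, and all dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes -- not the paper's actual hyperparameters.
BATCH, T_AUDIO, T_TEXT = 4, 50, 20
H_AUDIO, H_TEXT, N_CLASSES = 16, 16, 4

# Stand-ins for the per-timestep hidden states that the audio RNN and the
# text RNN would produce (in MDRE these come from two recurrent encoders).
audio_states = rng.standard_normal((BATCH, T_AUDIO, H_AUDIO))
text_states = rng.standard_normal((BATCH, T_TEXT, H_TEXT))
audio_len = rng.integers(1, T_AUDIO + 1, size=BATCH)  # cf. MFCC-SEQN
text_len = rng.integers(1, T_TEXT + 1, size=BATCH)

def last_valid_state(states, lengths):
    # Pick the hidden state at the last valid timestep of each sequence,
    # ignoring the padded tail.
    return states[np.arange(states.shape[0]), lengths - 1]

audio_enc = last_valid_state(audio_states, audio_len)  # (BATCH, H_AUDIO)
text_enc = last_valid_state(text_states, text_len)     # (BATCH, H_TEXT)

# MDRE-style fusion: concatenate the two encodings, then classify.
fused = np.concatenate([audio_enc, text_enc], axis=1)  # (BATCH, H_AUDIO + H_TEXT)
W = rng.standard_normal((H_AUDIO + H_TEXT, N_CLASSES)) * 0.1
b = np.zeros(N_CLASSES)
logits = fused @ W + b

# Softmax over the class dimension -> one emotion distribution per utterance.
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
print(probs.shape)
```

MDREA adds an attention step on top of this: instead of taking only the last text state, the audio encoding is used to weight every text timestep before pooling.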

[training]

  • refer to "reference_script.sh"
  • the final result will be stored in "./TEST_run_result.txt"

[cite]

  • Please cite our paper when you use our code, models, or dataset:

    @inproceedings{yoon2018multimodal,
    title={Multimodal Speech Emotion Recognition Using Audio and Text},
    author={Yoon, Seunghyun and Byun, Seokhyun and Jung, Kyomin},
    booktitle={2018 IEEE Spoken Language Technology Workshop (SLT)},
    pages={112--118},
    year={2018},
    organization={IEEE}
    }

[license]

  • MIT License

