This repository contains PyTorch implementations of 4 different models for speech emotion classification (a rough architecture sketch of one of them follows the list):
- Stacked Time Distributed 2D CNN - LSTM
- Stacked Time Distributed 2D CNN - Bidirectional LSTM with attention
- Parallel 2D CNN - Bidirectional LSTM with attention
- Parallel 2D CNN - Transformer Encoder
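As a rough illustration of the parallel CNN / Transformer idea (the 4th model), the sketch below runs a small 2D CNN branch and a Transformer-encoder branch over the mel spectrogram in parallel and concatenates their features before classification. This is not the repository's actual architecture: the layer counts, channel sizes, attention heads, and the `ParallelCNNTransformer` name are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class ParallelCNNTransformer(nn.Module):
    """Rough sketch of a parallel 2D CNN + Transformer-encoder classifier.
    Layer sizes are illustrative only (8 classes = RAVDESS emotions)."""

    def __init__(self, num_emotions=8, n_mels=128):
        super().__init__()
        # CNN branch: treats the spectrogram as a 1-channel image
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (batch, 32, 1, 1)
        )
        # Transformer branch: treats each time frame (n_mels values) as a token
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=n_mels, nhead=4, dim_feedforward=256, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.classifier = nn.Linear(32 + n_mels, num_emotions)

    def forward(self, spec):  # spec: (batch, n_mels, time)
        cnn_feat = self.cnn(spec.unsqueeze(1)).flatten(1)             # (batch, 32)
        trans_feat = self.transformer(spec.transpose(1, 2)).mean(1)   # (batch, n_mels)
        return self.classifier(torch.cat([cnn_feat, trans_feat], dim=1))
```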
The models are trained on the RAVDESS Emotional Speech Audio dataset, which consists of 1440 speech audio-only files (16-bit, 48 kHz, .wav).
The dataset is balanced across the emotion classes:
Each emotion is recorded at 2 intensities: normal and strong (except for the neutral emotion, which has only normal intensity).
Signals are loaded at a sample rate of 48 kHz and cropped to the [0.5, 3] second range. If a signal is shorter than 3 s, it is padded with zeros.
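A minimal loading sketch, assuming librosa is used. The `offset`/`duration` arguments are one possible reading of the [0.5, 3] second crop described above (skip the first 0.5 s, keep at most 3 s); they mirror the description rather than the repository's actual code.

```python
import numpy as np
import librosa

SAMPLE_RATE = 48000            # 48 kHz, as stated above
MAX_SAMPLES = 3 * SAMPLE_RATE  # 3 seconds

def load_signal(path):
    """Load a .wav file, skip the first 0.5 s, keep at most 3 s, zero-pad to 3 s."""
    signal, _ = librosa.load(path, sr=SAMPLE_RATE, offset=0.5, duration=3.0)
    if len(signal) < MAX_SAMPLES:
        signal = np.pad(signal, (0, MAX_SAMPLES - len(signal)))  # pad with zeros at the end
    return signal
```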
A mel spectrogram is computed and used as the input to the models (for the 1st and 2nd models the spectrogram is split into 7 chunks).
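A sketch of the spectrogram step, assuming librosa's mel routines; the FFT size, hop length, and number of mel bands are illustrative choices, not necessarily the values used in the repository. The chunking helper (for models 1 and 2) does a simple non-overlapping split along the time axis.

```python
import numpy as np
import librosa

def mel_spectrogram(signal, sr=48000, n_mels=128):
    """Log-scaled mel spectrogram of a 1-D signal (parameter values are illustrative)."""
    mel = librosa.feature.melspectrogram(y=signal, sr=sr, n_fft=1024,
                                         hop_length=256, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)

def split_into_chunks(mel, num_chunks=7):
    """Split the spectrogram into equal chunks along the time axis."""
    frames = mel.shape[1] - mel.shape[1] % num_chunks  # trim so it divides evenly
    return np.split(mel[:, :frames], num_chunks, axis=1)
```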
Example of a mel spectrogram:
The dataset is split into train, validation, and test sets with an (80, 10, 10)% ratio.
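One way to obtain the (80, 10, 10)% split is to call scikit-learn's `train_test_split` twice; the stratification by label and the fixed seed below are assumptions, not necessarily what the repository does.

```python
from sklearn.model_selection import train_test_split

def split_80_10_10(X, y, seed=42):
    """80% train, 10% validation, 10% test, stratified by emotion label (assumed)."""
    X_train, X_rest, y_train, y_rest = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=seed)
    X_val, X_test, y_val, y_test = train_test_split(
        X_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=seed)
    return X_train, X_val, X_test, y_train, y_val, y_test
```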
Data augmentation is performed by adding additive white Gaussian noise (AWGN, with an SNR in the range [15, 30]) to the original signal. This substantially improved accuracy and eliminated overfitting.
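A minimal AWGN augmentation sketch; it assumes the SNR range is given in dB and draws a random value per signal.

```python
import numpy as np

def add_awgn(signal, snr_low_db=15, snr_high_db=30):
    """Add white Gaussian noise at a random SNR drawn from [15, 30] dB (assumed unit)."""
    snr_db = np.random.uniform(snr_low_db, snr_high_db)
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))  # SNR = 10*log10(Ps/Pn)
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise
```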
The datasets are scaled with a standard scaler (zero mean, unit variance).
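A sketch of the scaling step, assuming scikit-learn's `StandardScaler` fitted on the training set only; spectrograms are flattened for scaling and reshaped back afterwards.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

def scale_datasets(X_train, X_val, X_test):
    """Standardize all splits using statistics computed from the training set."""
    scaler = StandardScaler()
    flat = lambda X: X.reshape(X.shape[0], -1)  # flatten each spectrogram to a vector
    X_train_s = scaler.fit_transform(flat(X_train)).reshape(X_train.shape)
    X_val_s = scaler.transform(flat(X_val)).reshape(X_val.shape)
    X_test_s = scaler.transform(flat(X_test)).reshape(X_test.shape)
    return X_train_s, X_val_s, X_test_s
```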
The architectures of all 4 models are shown below, from left to right respectively:
1. Model (Stacked Time Distributed 2D CNN - LSTM):
Accuracy: 94.02%
Confusion Matrix | Influence of Emotion intensity on correctness |
---|---|
2. Model (Stacked Time Distributed 2D CNN - Bidirectional LSTM with attention):
Accuracy: 96.55%
Confusion Matrix | Influence of Emotion intensity on correctness |
---|---|
3. Model (Parallel 2D CNN - Bidirectional LSTM with attention):
Accuracy: 95.40%
Confusion Matrix | Influence of Emotion intensity on correctness |
---|---|
4. Model (Parallel 2D CNN - Transformer Encoder):
Accuracy: 96.78%
Confusion Matrix | Influence of Emotion intensity on correctness |
---|---|