
A-Project-on-OMG-Emotion-Challenge-2018

This is the code repository for the OMG Emotion Challenge 2018 from team HKUST-NISL2018.
arXiv paper: Multimodal Utterance-level Affect Analysis using Visual, Audio and Text Features

Prerequisites

To run this code, you need to install the required libraries first.

Instructions

In data preparation, all videos are downloaded and split into utterances under /Videos/Train, /Videos/Validation, and /Videos/Test (the CSV files for the train, validation, and test sets can be requested from the OMG Emotion Challenge organizers).

  1. data_preparation: run python create_videoset.py (a sketch of the splitting step follows below)
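
create_videoset.py itself is not reproduced here, but the splitting step presumably looks something like the sketch below. The CSV column names (video, utterance, start, end), the paths, and the use of ffmpeg are assumptions about the OMG annotation format, not the repository's exact code:

```python
# Hypothetical sketch of splitting downloaded videos into utterance clips.
# Column names and paths are assumptions; adapt them to the challenge CSVs.
import csv
import os
import subprocess

def split_utterances(csv_path, video_dir, out_dir):
    """Cut each annotated utterance out of its full-length video with ffmpeg."""
    os.makedirs(out_dir, exist_ok=True)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            src = os.path.join(video_dir, row["video"] + ".mp4")
            dst = os.path.join(out_dir, row["video"] + "_" + row["utterance"])
            subprocess.run(
                ["ffmpeg", "-y", "-i", src,
                 "-ss", row["start"],   # utterance start time
                 "-to", row["end"],     # utterance end time
                 "-c", "copy",          # stream copy: fast, no re-encoding
                 dst],
                check=True,
            )

# Illustrative paths only; the real CSVs come from the challenge organizers.
split_utterances("omg_TrainVideos.csv", "Videos/full", "Videos/Train")
```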

In feature extraction, features for the three modalities (visual, audio, and text) are extracted.

  1. feature_extraction: extract the features for each modality (an illustrative audio sketch follows below)
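
The extractors actually used for each modality live in the feature_extraction scripts. Purely as an illustration, a fixed-length audio descriptor per utterance could be computed with librosa; the library choice and MFCC settings here are assumptions, not necessarily what this repository uses:

```python
# Illustrative per-utterance audio features: mean and std of MFCC frames.
# librosa and the settings below are assumptions, not this repo's choices.
import numpy as np
import librosa

def audio_features(wav_path, n_mfcc=40):
    """Return a fixed-length descriptor for one utterance's audio track."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
```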

In experiment:

  • data.py provides normalized features and labels.
  • models.py defines the unimodal models and the trimodal models with early and late fusion.
  • functions.py defines custom functions used as loss functions or metrics (a reference metric sketch follows this list).
  • train.py trains the models and runs evaluation.
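
The OMG Emotion Challenge scores arousal and valence predictions with the concordance correlation coefficient (CCC), so functions.py plausibly implements a CCC-based loss or metric. For reference, a plain NumPy version (an illustration, not the repository's exact code):

```python
# Reference concordance correlation coefficient (CCC), the official challenge
# metric; whether functions.py matches this exactly is an assumption.
import numpy as np

def ccc(y_true, y_pred):
    """CCC = 2*cov / (var_true + var_pred + (mean_true - mean_pred)**2)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mt, mp = y_true.mean(), y_pred.mean()
    vt, vp = y_true.var(), y_pred.var()
    cov = ((y_true - mt) * (y_pred - mp)).mean()
    return 2 * cov / (vt + vp + (mt - mp) ** 2)
```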

Multimodal Fusion

Early Fusion

(figure: early fusion architecture)
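
In early fusion, the visual, audio, and text feature vectors are concatenated before any prediction is made, and a single network regresses affect from the joint representation. A minimal Keras sketch, assuming simple dense layers (the real architectures are defined in models.py):

```python
# Minimal early-fusion sketch; layer sizes are placeholders, not the
# repository's actual architecture (see models.py for that).
from tensorflow.keras import layers, Model

def early_fusion_model(visual_dim, audio_dim, text_dim):
    vis = layers.Input(shape=(visual_dim,))
    aud = layers.Input(shape=(audio_dim,))
    txt = layers.Input(shape=(text_dim,))
    x = layers.Concatenate()([vis, aud, txt])  # fuse at the feature level
    x = layers.Dense(256, activation="relu")(x)
    out = layers.Dense(2)(x)                   # [arousal, valence]
    return Model([vis, aud, txt], out)
```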

Late Fusion

(figure: late fusion architecture)
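
In late fusion, each modality first gets its own regressor, and only the per-modality predictions are combined, here by simple averaging (again a hedged sketch, not the exact definition in models.py):

```python
# Minimal late-fusion sketch: one regressor per modality, predictions averaged.
# Layer sizes and the averaging rule are placeholder assumptions.
from tensorflow.keras import layers, Model

def unimodal_head(dim):
    inp = layers.Input(shape=(dim,))
    x = layers.Dense(128, activation="relu")(inp)
    return inp, layers.Dense(2)(x)             # per-modality [arousal, valence]

def late_fusion_model(visual_dim, audio_dim, text_dim):
    heads = [unimodal_head(d) for d in (visual_dim, audio_dim, text_dim)]
    inputs = [inp for inp, _ in heads]
    preds = [pred for _, pred in heads]
    out = layers.Average()(preds)              # fuse at the prediction level
    return Model(inputs, out)
```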
