SoyeonHH / MMDA

Multi-modal Multi-label Dynamic Adaptation

1. CMU-MOSEI Variables for Emotion Classification

  • Dataset: CMU-MOSEI
  • Source: textual, visual, and acoustic features
  • Target: sentiment label in [-3, 3] and a 6-class emotion label
  • Emotion 6-class: {happiness, sadness, anger, fear, disgust, surprise} (see the label sketch after this list)
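
The label layout these variables describe can be summarized as follows. This is only an illustrative sketch, not code from this repo; the field names are hypothetical.

EMOTION_CLASSES = ['happiness', 'sadness', 'anger', 'fear', 'disgust', 'surprise']

# One CMU-MOSEI sample pairs a continuous sentiment score in [-3, 3]
# with a label vector over the six emotion classes above.
example_sample = {
    'text': None,                   # textual feature sequence (transcript)
    'visual': None,                 # visual feature sequence
    'acoustic': None,               # acoustic feature sequence
    'sentiment': 1.8,               # continuous score in [-3, 3]
    'emotion': [1, 0, 0, 0, 0, 1],  # one entry per class in EMOTION_CLASSES
}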

Before Running

  1. Clone the repo.
git clone git@github.com:SoyeonHH/MMDA.git
  2. Configure your CUDA environment:
os.environ['CUDA_LAUNCH_BLOCKING'] = "1"
  3. Set the path of the downloaded GloVe file:
word_emb_path = '/data1/multimodal/glove.840B.300d.txt'
  4. Set the CMU-Multimodal SDK path:
sdk_dir = Path('/data1/multimodal/CMU-MultimodalSDK')
  5. Set the MOSI and MOSEI paths to the datasets downloaded from Google Drive (a quick path check is sketched after this list):
data_dir = Path('/data1/multimodal')
data_dict = {'mosi': data_dir.joinpath('MOSI'), 'mosei': data_dir.joinpath('MOSEI')}
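
Since all of the steps above just point the code at local files, it can save a failed run to verify the paths first. This is an optional, minimal check that is not part of this repo; it reuses the example paths shown above.

from pathlib import Path

required = {
    'GloVe embeddings': Path('/data1/multimodal/glove.840B.300d.txt'),
    'CMU-Multimodal SDK': Path('/data1/multimodal/CMU-MultimodalSDK'),
    'MOSI data': Path('/data1/multimodal/MOSI'),
    'MOSEI data': Path('/data1/multimodal/MOSEI'),
}
for name, path in required.items():
    # Fail early with a readable message instead of a mid-training crash.
    assert path.exists(), f'{name} not found at {path}'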

Run

bash train.sh

If you want to add the confidence network 'ConfidNet' on top of the previous architecture, run:

bash train_confid.sh
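
ConfidNet here refers to an auxiliary network that learns to estimate how confident the main model's predictions are. The repo's actual implementation is launched by train_confid.sh; the snippet below is only a generic PyTorch sketch of the idea, with hypothetical names and sizes.

import torch
import torch.nn as nn

class ConfidNetHead(nn.Module):
    """Illustrative confidence head (hypothetical, not this repo's code):
    maps a fused multimodal representation to a scalar confidence in (0, 1)."""
    def __init__(self, hidden_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim // 2),
            nn.ReLU(),
            nn.Linear(hidden_dim // 2, 1),
            nn.Sigmoid(),
        )

    def forward(self, fused_features: torch.Tensor) -> torch.Tensor:
        return self.net(fused_features)

# Example: score a batch of 8 fused feature vectors.
head = ConfidNetHead(hidden_dim=256)
confidence = head(torch.randn(8, 256))  # shape (8, 1), values in (0, 1)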

Acknowledgement

The dataset sources are CMU-Multimodal SDK, kniter1/TAILOR, and declare-lab/Multimodal-Infomax.
