yangli-hub / CMMT-Code


Cross-Modal Multitask Transformer for End-to-End Multimodal Aspect-Based Sentiment Analysis

Author: Li YANG, yang0666@e.ntu.edu.sg

The Corresponding Paper:

Cross-modal multitask transformer for end-to-end multimodal aspect-based sentiment analysis
The framework of the CMMT model:

(Figure: CMMT framework overview — Screenshot 2024-04-10 at 10 38 04 AM)

Data

Requirements

  • PyTorch 1.0.0
  • Python 3.7
  • pytorch-crf 0.7.2

Code Usage

Training for CMMT

  • This is the training code: it tunes parameters on the dev set and evaluates on the test set. Note that you can change `CUDA_VISIBLE_DEVICES=2` based on your available GPUs.
sh run_cmmt_crf.sh
  • We provide our running logs on Twitter-2015, Twitter-2017, and political Twitter in the folder "log files". Note that the results are slightly lower than those reported in our paper, since the experiments were re-run on different servers.
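As a sketch of the GPU selection mentioned above: the `CUDA_VISIBLE_DEVICES` environment variable controls which GPU PyTorch sees. Prefixing the launch command is one way to set it (this is only effective if `run_cmmt_crf.sh` does not export the variable itself; in that case, edit the corresponding line inside the script instead):

```shell
# Make only GPU 0 visible to PyTorch, then launch training.
# Adjust the index (or give a comma-separated list) to match
# the GPUs available on your machine.
CUDA_VISIBLE_DEVICES=0 sh run_cmmt_crf.sh
```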

Acknowledgements

  • Using these two datasets means you have read and accepted the copyrights set by Twitter and dataset providers.
  • Most of the codes are based on the codes provided by huggingface: https://github.com/huggingface/transformers.

Citation Information:

Yang, L., Na, J. C., & Yu, J. (2022). Cross-modal multitask transformer for end-to-end multimodal aspect-based sentiment analysis. Information Processing & Management, 59(5), 103038.
