zerohd4869 / SACL

The repository for the ACL 2023 paper "Supervised Adversarial Contrastive Learning for Emotion Recognition in Conversations" and the SemEval@ACL 2023 paper "UCAS-IIE-NLP at SemEval-2023 Task 12: Enhancing Generalization of Multilingual BERT for Low-resource Sentiment Analysis".

How to apply your code to other datasets

hyxie2023 opened this issue · comments

This work is exceptionally insightful. As a newcomer to this field, how should I go about applying your code to other datasets?

Our SACL-LSTM work uses a two-stage training process:

  1. Extract RoBERTa features for the target dataset to obtain sentence-level (utterance) text features. The scripts and code for this step are available in the COSMIC repository.
  2. Using the obtained utterance features, run the scripts in this repository for training and prediction (see the sketch after this list for the hand-off between the two steps).
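
To make the hand-off concrete, here is a minimal sketch of packaging extracted utterance features for step 2. The dictionary layout, key names, and file name are hypothetical placeholders; the format actually expected is whatever this repository's dataloader and the COSMIC extraction scripts use, so match those.

```python
import pickle
import numpy as np

# Hypothetical layout: one list of utterance feature vectors per
# conversation, in utterance order. Replace the zero placeholders with
# the features produced by the COSMIC extraction scripts.
features = {
    "dialogue_0001": [np.zeros(1024, dtype=np.float32) for _ in range(5)],
}

with open("my_dataset_features_roberta.pkl", "wb") as f:  # hypothetical name
    pickle.dump(features, f)
```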

Additionally, if your performance requirements are not particularly high, you can skip the first step (fine-tuning RoBERTa to obtain utterance features): load an un-fine-tuned RoBERTa model directly to obtain utterance embeddings (see the "How to use" section of the RoBERTa model card), and then replace the input features in this repository's code.
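For reference, here is a minimal sketch of this shortcut using the Hugging Face transformers library (the "How to use" snippet on the roberta-large model card follows the same pattern). Taking the first-token hidden state as the utterance embedding is one common choice here, not necessarily the exact pooling used in our released feature files.

```python
import torch
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained("roberta-large")
model = RobertaModel.from_pretrained("roberta-large")
model.eval()

utterances = ["I can't believe you did that!", "Sorry, I didn't mean to."]

with torch.no_grad():
    batch = tokenizer(utterances, padding=True, truncation=True,
                      return_tensors="pt")
    outputs = model(**batch)
    # Use the <s> (CLS-equivalent) token state as the utterance embedding;
    # mean pooling over non-padding tokens is a common alternative.
    features = outputs.last_hidden_state[:, 0]  # shape: (num_utts, 1024)
```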

In addition, if you are interested in SACL itself rather than contextual modeling, you can refer to another of our papers, SACL-XLMR. That paper directly combines a RoBERTa backbone with the SACL optimization objective, so it can be adapted to new datasets more easily.
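For orientation only, below is an illustrative sketch of the supervised contrastive term (in the style of Khosla et al., 2020) that a SACL-style objective builds on. This is not the exact loss from our papers, which additionally involves adversarially perturbed representations and other details described there.

```python
import torch
import torch.nn.functional as F

def sup_con_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss over a batch: samples sharing a label
    act as positives for each other (Khosla et al., 2020). Assumes at
    least one anchor in the batch has a positive."""
    features = F.normalize(features, dim=-1)        # cosine similarities
    sim = features @ features.T / temperature       # (B, B)
    eye = torch.eye(len(labels), dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(eye, float("-inf"))       # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    # Mean log-probability of each anchor's positives.
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts
    # Average over anchors that have at least one positive in the batch.
    return loss[pos_mask.any(dim=1)].mean()

# Toy usage: 8 utterance representations, 6 emotion classes.
h = torch.randn(8, 1024)
y = torch.randint(0, 6, (8,))
loss = sup_con_loss(h, y)
```

In a SACL-style setup this term would typically be added to the cross-entropy loss and also applied to representations computed from adversarially perturbed inputs; see the papers for the precise formulation.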

Thank you for your interest in our work. If you have any questions while using the code, please feel free to reach out; we will do our best to help. If there is a delay in response, you can contact me via email at hudou@iie.ac.cn.