The official repository for the ACL 2022 main conference paper: Other Roles Matter! Enhancing Role-Oriented Dialogue Summarization via Role Interactions.
We propose two role interaction methods to enhance role-oriented dialogue summarization: cross-attention interaction and decoder self-attention interaction. Cross-attention interaction adopts an attention divergence loss that lets each role's decoder attend to the most useful utterances from other roles. Decoder self-attention interaction adopts an interactive decoding strategy that considers other roles' summaries when generating each summary.
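To make the attention divergence idea concrete, here is a minimal sketch. This is a hypothetical illustration, not the paper's exact formulation: it assumes the divergence is a symmetric KL term between the two role decoders' attention distributions over the same set of utterances.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two attention distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def attention_divergence_loss(user_attn, agent_attn):
    """Symmetrized divergence between the user decoder's and the
    agent decoder's attention over the same utterances (sketch only;
    the paper may use a different divergence or direction)."""
    return 0.5 * (kl_divergence(user_attn, agent_attn)
                  + kl_divergence(agent_attn, user_attn))
```

Identical attention distributions give a loss near zero, while decoders attending to disjoint utterances are penalized.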
We experiment on two datasets (CSDS and MC) and two baseline methods (PGN and BERTAbs).
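The decoder self-attention interaction decodes the role summaries in lockstep, so each decoder can condition on the other role's partial summary. The following is a hypothetical sketch of such interactive decoding, not the repository's actual implementation; `step_user` and `step_agent` stand in for the real per-step decoder calls.

```python
def interactive_decode(step_user, step_agent, max_len=20, eos="<eos>"):
    """Generate two role summaries in lockstep. Each step function
    receives (own_partial_summary, other_partial_summary) and returns
    the next token, so the roles can attend to each other's outputs."""
    user_out, agent_out = [], []
    for _ in range(max_len):
        done = True
        if not user_out or user_out[-1] != eos:
            user_out.append(step_user(user_out, agent_out))
            done = False
        if not agent_out or agent_out[-1] != eos:
            agent_out.append(step_agent(agent_out, user_out))
            done = False
        if done:  # both summaries have ended
            break
    return user_out, agent_out
```

In the actual models, the step functions would run one decoder step whose self-attention also covers the other role's generated prefix.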
- CSDS dataset: Please refer to the original repository to download the dataset.
- MC dataset: Please follow the processing steps provided by the original repository. (Due to the website's policy, we cannot directly provide the processed data; however, we provide the train/val/test URL splits in data/MC/.)
- Pretrained BERT model: We use the base version of Chinese BERT-wwm, available here.
- Tencent embeddings: If you want to train a PGN-based model on other Chinese datasets, you need to extract pretrained embeddings from the Tencent embeddings, available here.
- python == 3.7
- pytorch == 1.8
- files2rouge == 2.1.0
- jieba == 0.42.1
- numpy == 1.19.1
- tensorboard == 2.3.0
- tensorboardx == 2.1
- cytoolz == 0.11.0
- nltk == 3.5
- bert-score == 0.3.6
- moverscore == 1.0.3
- Go to the models/PGN_interact/ directory.
- Download the CSDS/MC dataset, and put the data under the folder data/CSDS or data/MC.
- If you want to extract the pretrained embeddings yourself, download the Tencent embeddings and put them under the ../pretrained/ folder; otherwise, you can use our provided extracted embeddings and skip this step. For the MC dataset, the extracted embeddings are fairly large, so you can download them through:
  - google drive: https://drive.google.com/file/d/1KsaY0ErkyJwJY1hndbX7XzJp5lrQkeYm/view?usp=sharing
  - baidudisk: https://pan.baidu.com/s/1_LDNFvd5AGocalWkYcv57Q password: 5d1o

  After downloading, please put them under the models/PGN_interact/data_utils/embeddings folder.
- Run the bash file run_CSDS.sh or run_MC.sh to train and test.
- Go to the models/BERT_interact/ directory.
- Download the CSDS/MC dataset, and put the data under the folder data/CSDS or data/MC.
- Download the Chinese BERT-wwm pretrained model, create a new folder named bert_base_chinese/, and put it under the folder ../pretrained/.
- Run the bash file run_CSDS.sh or run_MC.sh to train and test.
- We put the outputs of our trained models in the results/ folder. If you have trained your own models, you can also put their outputs into this folder.
- Run evaluate/evaluate.py to evaluate with automatic metrics. Remember to change the file names if you want to test your own outputs.
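As a rough illustration of what the automatic evaluation computes, here is a toy ROUGE-1 F1. The actual script relies on files2rouge and additional metrics such as BERTScore and MoverScore; this sketch assumes the summaries are already segmented into tokens (e.g., with jieba).

```python
from collections import Counter

def rouge1_f(candidate, reference):
    """Toy ROUGE-1 F1: unigram overlap between a candidate summary
    and a reference, both given as token lists."""
    cand, ref = Counter(candidate), Counter(reference)
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge1_f(["a", "b", "c"], ["a", "b", "d"])` matches two of three unigrams on each side, giving an F1 of 2/3.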
We also provide some checkpoints for PGN-both and BERT-both. You can download them through the following links:
- PGN-both for CSDS:
- google drive: https://drive.google.com/file/d/1kIj7saeTXtM0ekMtLZFfFC09nSX3xBvk/view?usp=sharing
- baidudisk: https://pan.baidu.com/s/18jAuOn8feWZfkDozQvpAjA password: otkj
- BERT-both for CSDS:
- google drive: https://drive.google.com/file/d/1w9tDeP5WnJFXBdRV6KjCyrf33n0Tw_bv/view?usp=sharing
- baidudisk: https://pan.baidu.com/s/1YZRVfo3ToUG33DryJumw1A password: bjiu
- PGN-both for MC:
- google drive: https://drive.google.com/file/d/1PyYqiTh8mSbHBfkZZbbGEbh9Clhmo5Dy/view?usp=sharing
- baidudisk: https://pan.baidu.com/s/1GIHHPwx33rs3xOOzvGR6xQ password: rdyk
- BERT-both for MC:
- google drive: https://drive.google.com/file/d/1kRM11BBIj1QR4A5Eo444BzQ3J_iagCwN/view?usp=sharing
- baidudisk: https://pan.baidu.com/s/1BeBBikZk3m7nuIP5jgIMVg password: grkc
The reference code for the provided methods:
We thank all the researchers who have made their code publicly available.
We will update the citation format soon.
If you have any issues, please contact haitao.lin@nlpr.ia.ac.cn.