LightChen233 / CGIM


CGIM: A Cycle Guided Interactive Learning Model for Consistency Identification in Task-oriented Dialogue

License: MIT

This repository contains the PyTorch implementation and the data of the paper: CGIM: A Cycle Guided Interactive Learning Model for Consistency Identification in Task-oriented Dialogue. Libo Qin, Qiguang Chen, Tianbao Xie, Qian Liu, Shijue Huang, Wanxiang Che, Yu Zhou. COLING 2022. [PDF]

This code was written with PyTorch >= 1.1. If you find it useful for your research, please consider citing the following paper:

@misc{xxx,
      title={CGIM: A Cycle Guided Interactive Learning Model for Consistency Identification in Task-oriented Dialogue}, 
      author={Libo Qin and Qiguang Chen and Tianbao Xie and Qian Liu and Shijue Huang and Wanxiang Che and Yu Zhou},
      year={2022},
      eprint={xxx},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

Network Architecture

Prerequisites

This codebase was developed and tested with the following settings:

- scikit-learn==0.23.2
- numpy==1.19.1
- pytorch==1.1.0
- fitlog==0.9.13
- tqdm==4.49.0
- sklearn==0.0
- transformers==3.2.0

We strongly suggest using Anaconda to manage your Python environment. If you do, you can run the following command in the terminal to create the environment:

conda env create -f py3.6pytorch1.1_.yaml

How to run it

The script train.py is the entry point of the project; you can run the experiments with the following command:

python -u train.py --cfg KBRetriver_DC_BERT_INTERACTIVE/KBRetriver_DC_BERT_INTERACTIVE.cfg

The parameters we use are set in the configuration files (*.cfg). If you need to adjust them, you can modify them in the relevant files or append parameters to the command.

Finally, you can check the results in the logs folder. You can also run the fitlog command to visualize the results:

fitlog log logs/

Model Performance

| Model | QI F1 | HI F1 | KBI F1 | Overall Acc |
| --- | --- | --- | --- | --- |
| BERT (Devlin et al., 2019) | 0.691 | 0.555 | 0.740 | 0.500 |
| RoBERTa (Liu et al., 2019) | 0.715 | 0.472 | 0.715 | 0.500 |
| XLNet (Yang et al., 2020) | 0.725 | 0.487 | 0.736 | 0.509 |
| Longformer (Beltagy et al., 2020) | 0.717 | 0.500 | 0.710 | 0.497 |
| BART (Lewis et al., 2020) | 0.744 | 0.510 | 0.761 | 0.513 |
| CGIM (Ours) | 0.764 | 0.567 | 0.772 | 0.563 |
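As a rough illustration of how these metrics relate (a sketch under assumptions, not the repository's evaluation code): consistency identification can be viewed as three binary decisions per dialogue (QI, HI, KBI), each scored with F1 via scikit-learn, while Overall Acc counts a sample as correct only when all three decisions are right. The toy arrays below are hypothetical.

```python
# Hypothetical sketch of the metric computation (not the repo's evaluation code):
# each sample carries three binary inconsistency labels -- QI, HI, KBI.
import numpy as np
from sklearn.metrics import f1_score

# Toy gold labels and predictions, one row per dialogue: [QI, HI, KBI].
gold = np.array([[1, 0, 1],
                 [0, 1, 0],
                 [1, 1, 1],
                 [0, 0, 0]])
pred = np.array([[1, 0, 1],
                 [0, 0, 0],
                 [1, 1, 1],
                 [0, 1, 0]])

qi_f1 = f1_score(gold[:, 0], pred[:, 0])   # per-slot binary F1
hi_f1 = f1_score(gold[:, 1], pred[:, 1])
kbi_f1 = f1_score(gold[:, 2], pred[:, 2])

# Overall Acc: a sample counts only if all three slots are predicted correctly.
overall_acc = (gold == pred).all(axis=1).mean()
```

Under this reading, a model can post high per-slot F1 while Overall Acc stays low, since one wrong slot invalidates the whole sample, which matches the gap between the F1 columns and the Overall Acc column above.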
