Code Repository For AAAI-2022 Paper - MDD-Eval: Self-Training on Augmented Data for Multi-Domain Dialogue Evaluation

MDD-Eval

Installation

conda env create -f environment.yml
conda activate tf1-nv

Resources

Download the data, checkpoints, library, and tools from
https://www.dropbox.com/s/r5eu8tvlmqclyko/resources.zip?dl=0
Unzip the archive and put everything under the current folder.
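If you prefer to script this step, a minimal Python sketch is shown below (it assumes the resources.zip link above; appending ?dl=1 to a Dropbox link usually requests a direct download):

import urllib.request
import zipfile

# Fetch the archive (the ?dl=1 suffix asks Dropbox for a direct download).
url = "https://www.dropbox.com/s/r5eu8tvlmqclyko/resources.zip?dl=1"
urllib.request.urlretrieve(url, "resources.zip")

# Unpack everything into the current folder, as expected by the scripts below.
with zipfile.ZipFile("resources.zip") as zf:
    zf.extractall(".")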

Setup Tool

cd tools/best_checkpoint_copier
python setup.py install

Train

bash train.sh

Score the Evaluation Data

bash eval.sh

Correlation Analysis

See the code in evaluation_notebook.ipynb.
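For reference, a minimal sketch of this kind of correlation analysis, assuming a hypothetical scores.csv with columns metric_score (MDD-Eval scores) and human_score (human ratings); the actual file names and columns used in the notebook may differ:

import pandas as pd
from scipy.stats import pearsonr, spearmanr

# Hypothetical input: one row per response, with a metric score and a human rating.
df = pd.read_csv("scores.csv")

pearson_r, pearson_p = pearsonr(df["metric_score"], df["human_score"])
spearman_rho, spearman_p = spearmanr(df["metric_score"], df["human_score"])

print(f"Pearson  r   = {pearson_r:.3f} (p = {pearson_p:.3g})")
print(f"Spearman rho = {spearman_rho:.3f} (p = {spearman_p:.3g})")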

Full Data

The full machine-annotated data is available at https://www.dropbox.com/s/4bnha62u8uuj8ak/mdd_data.zip?dl=0

Please cite the respective dataset paper if you use the corresponding split of MDD-Data: DailyDialog (Li et al., 2017), EmpatheticDialogues (Rashkin et al., 2019), TopicalChat (Gopalakrishnan et al., 2019), and ConvAI2 (Dinan et al., 2020).

Please cite the following if you use the code or resources in this repo:

@inproceedings{zhang-etal-2021-mdd,
    title = "{MDD}-{E}val: Self-Training on Augmented Data for Multi-Domain Dialogue Evaluation",
    author = "Zhang, Chen  and
      D{'}Haro, Luis Fernando  and
      Friedrichs, Thomas  and
      Li, Haizhou",
    booktitle = "Proceedings of the 36th AAAI Conference on Artificial Intelligence",
    month = "March",
    year = "2022",
    address = "Online",
    publisher = "Association for the Advancement of Artificial Intelligence",
}
