yxuansu / SimCTG

[NeurIPS'22 Spotlight] A Contrastive Framework for Neural Text Generation

Home Page: https://arxiv.org/abs/2202.06417


Could you provide the implementations of baselines?

Jxu-Thu opened this issue · comments

Thanks for your great work!

I would like to re-train the model with different training methods for comparison, such as MLE training and unlikelihood training versus contrastive training.

Could you provide the implementations of MLE and unlikelihood training to reproduce the experimental results in the paper?


Hi,

Thank you for your interest in our work. For the MLE baseline, you can simply set the margin to 0 in the training scripts. We discuss the reason in Section 3.1 of the paper.
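To illustrate why margin 0 recovers plain MLE, here is a minimal NumPy sketch of a pairwise contrastive term over token representations (the function name, shapes, and toy inputs are my own, not the repo's actual code): cosine similarity never exceeds the diagonal self-similarity of 1, so with margin 0 every hinge term is clipped to zero and only the cross-entropy (MLE) loss would remain.

```python
import numpy as np

def contrastive_loss(hidden, margin):
    """Simplified sketch of a pairwise contrastive term over
    token representations, in the spirit of SimCTG Section 3.1."""
    h = hidden / np.linalg.norm(hidden, axis=1, keepdims=True)  # L2-normalize rows
    sim = h @ h.T                        # cosine similarities; diagonal is ~1.0
    n = sim.shape[0]
    total, count = 0.0, 0
    for i in range(n):
        for j in range(n):
            if i != j:
                # hinge: penalize off-diagonal similarity above (sim[i,i] - margin)
                total += max(0.0, margin - sim[i, i] + sim[i, j])
                count += 1
    return total / count

# three toy token representations: tokens 0 and 1 are identical
h = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(contrastive_loss(h, 0.0))  # -> 0.0, the contrastive term vanishes
print(contrastive_loss(h, 0.5))  # -> ~0.1667, a positive margin activates the hinge
```

With margin 0 the total training objective collapses to the MLE cross-entropy alone, which is why no separate baseline script is needed.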

For the unlikelihood baseline, we used the official code here (https://github.com/facebookresearch/unlikelihood_training). Please refer to the official implementation for more details.
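For intuition before diving into the official code, here is a small NumPy sketch of the token-level unlikelihood penalty from Welleck et al. (2020), which pushes down the probability the model assigns to "negative" candidates (typically tokens already seen in the preceding context). The function name and toy numbers are illustrative, not taken from either repo.

```python
import numpy as np

def unlikelihood_term(probs, negative_ids, eps=1e-9):
    """Token-level unlikelihood penalty: -sum over negative
    candidates c of log(1 - p(c)). Added on top of the usual
    MLE cross-entropy during training."""
    neg_probs = probs[negative_ids]
    return -np.sum(np.log(1.0 - neg_probs + eps))

# toy next-token distribution over a 5-word vocabulary
probs = np.array([0.5, 0.2, 0.1, 0.1, 0.1])
# suppose tokens 1 and 2 already appeared in the context
print(unlikelihood_term(probs, [1, 2]))  # -> ~0.3285
```

The penalty grows as the model puts more mass on repeated tokens, which is how unlikelihood training discourages degenerate repetition.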

Feel free to ask if you have more questions :-)