qfzhu / COPT


Counterfactual Off-Policy Training for Neural Dialogue Generation

A PyTorch implementation of Counterfactual Off-Policy Training for Neural Dialogue Generation (COPT).

Requirements

Python 2.7

OpenNMT-py 0.3

Quickstart

Prepare the data

Download the data from the following link and place it under the root of the project.

https://drive.google.com/drive/folders/1IDjn5f7mILBCAsfbqyLwzGdWRKiGigxO?usp=sharing
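Before training, it can help to verify that the downloaded data landed where the commands below expect. A minimal sketch follows; the `data/daily.train.pt`, `data/daily.valid.pt`, and `data/daily.vocab.pt` names are an assumption based on OpenNMT-py's `-data` prefix convention (the `-data data/daily` flag used below), while the two embedding files are taken from the Pre-Train G command. Adjust the list to match the actual archive contents.

```shell
# check_files: report which of the expected preprocessed files exist.
# The *.train.pt / *.valid.pt / *.vocab.pt names are assumed from
# OpenNMT-py's "-data" prefix convention; verify against the archive.
check_files() {
  for f in "$@"; do
    if [ -e "$f" ]; then
      echo "found: $f"
    else
      echo "missing: $f"
    fi
  done
}

check_files data/daily.train.pt data/daily.valid.pt data/daily.vocab.pt \
            data/daily.emb.enc.pt data/daily.emb.dec.pt
```

If any file is reported missing, re-check that the archive was extracted into `data/` under the project root.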

Pre-Train G

python train.py -data data/daily -save_model model/daily -word_vec_size 300 -dropout 0.2 -gpu 1 -epochs 15 -training_mode pre_g -pre_word_vecs_enc data/daily.emb.enc.pt -pre_word_vecs_dec data/daily.emb.dec.pt

Pre-Train D

This step is optional, depending on the adversarial learning model that COPT is applied to; StepGAN, for example, does not require it.

python train.py -data data/daily -gpu 1 -training_mode pre_d -epochs 20 -train_from checkpoint

Adversarial Learning

python train.py -data data/daily -gpu 1 -training_mode adv -epochs 25 -train_from checkpoint

Inference

python translate.py -src src-test.txt -tgt src-test.txt -ref src-test.txt -verbose -gpu 1 -model checkpoint

Citation

@inproceedings{copt,
  author    = {Qingfu Zhu and
               Weinan Zhang and
               Ting Liu and
               William Yang Wang},
  title     = {Counterfactual Off-Policy Training for Neural Dialogue Generation},
  booktitle = {Proc. EMNLP},
  year      = {2020}
}

License

MIT License

