pschwllr / RL4NMT

Reinforcement Learning for Neural Machine Translation


EMNLP 2018 paper: A Study of Reinforcement Learning for Neural Machine Translation

The dataset and models will be released if needed.

Reinforcement Learning for Neural Machine Translation (RL4NMT), based on the Transformer.

Take WMT17 Chinese-English translation as an example:

Different training strategies are provided.

  1. Different RL training strategies for NMT, evaluated on the bilingual dataset (see the sketch after this list):
    (1) HPARAMS=zhen_wmt17_transformer_rl_total_setting: terminal reward + beam search
    (2) HPARAMS=zhen_wmt17_transformer_rl_delta_setting: reward shaping + beam search
    (3) HPARAMS=zhen_wmt17_transformer_rl_delta_setting_random: reward shaping + multinomial sampling
    (4) HPARAMS=zhen_wmt17_transformer_rl_total_setting_random: terminal reward + multinomial sampling
    (5) HPARAMS=zhen_wmt17_transformer_rl_delta_setting_random_baseline: reward shaping + multinomial sampling + reward baseline
    (6) HPARAMS=zhen_wmt17_transformer_rl_delta_setting_random_mle: reward shaping + multinomial sampling + objectives combination
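
The six settings differ along three axes: how the reward is computed (one terminal reward vs. shaped per-step rewards), how translations are drawn from the model (beam search vs. multinomial sampling), and whether a reward baseline or the MLE objective is mixed in. The following is a minimal, framework-agnostic sketch of those ingredients, using a toy unigram-precision score in place of real sentence-level BLEU; all names are illustrative placeholders, not code from this repository.

```python
import numpy as np

def toy_bleu(hyp, ref):
    """Toy sentence-level reward: unigram precision as a stand-in for BLEU."""
    if not hyp:
        return 0.0
    return sum(1 for tok in hyp if tok in ref) / len(hyp)

def terminal_rewards(hyp, ref):
    """Settings (1)/(4): a single terminal reward for the finished
    translation, broadcast to every decoding step."""
    r = toy_bleu(hyp, ref)
    return [r] * len(hyp)

def shaped_rewards(hyp, ref):
    """Settings (2)/(3): reward shaping; step t receives the increment
    BLEU(y_1..t) - BLEU(y_1..t-1) of the growing partial hypothesis."""
    rewards, prev = [], 0.0
    for t in range(1, len(hyp) + 1):
        cur = toy_bleu(hyp[:t], ref)
        rewards.append(cur - prev)
        prev = cur
    return rewards

def reinforce_loss(step_logprobs, rewards, baseline=0.0):
    """Setting (5): subtracting a reward baseline b reduces gradient
    variance. Policy-gradient loss: -sum_t (r_t - b) * log p(y_t | y_<t, x)."""
    return -sum(lp * (r - baseline) for lp, r in zip(step_logprobs, rewards))

def combined_loss(rl_loss, mle_loss, alpha=0.5):
    """Setting (6): interpolate the RL objective with the MLE objective."""
    return alpha * rl_loss + (1.0 - alpha) * mle_loss

if __name__ == "__main__":
    ref = "the cat sat on the mat".split()
    hyp = "a cat sat on mat".split()       # e.g. drawn by multinomial sampling
    logp = np.log(np.full(len(hyp), 0.2))  # stand-in per-token log-probs
    print(reinforce_loss(logp, terminal_rewards(hyp, ref)))
    print(reinforce_loss(logp, shaped_rewards(hyp, ref), baseline=0.3))
```

Note that the shaped per-step rewards telescope back to the terminal reward, so the two reward settings distribute the same total credit differently across decoding steps.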

  2. Different monolingual data combination training in RL4NMT (see the schedule sketch after this list):
    (1) zhen_src_mono: source monolingual data RL training based on the bilingual data MLE model
    (2) zhen_tgt_mono: target monolingual data RL training based on the bilingual data MLE model
    (3) zhen_src_tgt_mono: sequential mode [target monolingual data RL training based on the (bilingual + source monolingual data) MLE model]
    (4) zhen_tgt_src_mono: sequential mode [source monolingual data RL training based on the (bilingual + target monolingual data) MLE model]
    (5) zhen_bi_src_tgt_mono: unified model
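
A rough schedule-level sketch of how the sequential and unified modes above differ; every helper here is a hypothetical stand-in that only prints the stage it would run, not this repository's API.

```python
def mle_train(model, corpora):
    print(f"MLE-train {model} on: {', '.join(corpora)}")  # placeholder

def rl_train(model, corpora):
    print(f"RL-train {model} on: {', '.join(corpora)}")   # placeholder

def zhen_src_mono(model):
    # (1): RL on source monolingual data, starting from the bilingual MLE model.
    mle_train(model, ["bilingual"])
    rl_train(model, ["source monolingual"])

def zhen_src_tgt_mono(model):
    # (3) sequential mode: first an MLE model trained on bilingual plus
    # source monolingual data, then RL on target monolingual data.
    mle_train(model, ["bilingual", "source monolingual"])
    rl_train(model, ["target monolingual"])

def zhen_bi_src_tgt_mono(model):
    # (5) unified model: a single RL run over all three corpora together.
    mle_train(model, ["bilingual"])
    rl_train(model, ["bilingual", "source monolingual", "target monolingual"])

if __name__ == "__main__":
    zhen_src_tgt_mono("transformer")
```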

The code also supports MRT (minimum risk training) for NMT.
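
For reference, a minimal sketch of the MRT objective of Shen et al. (2016): the expected cost under a distribution Q that renormalizes the model probabilities over a sampled candidate set, sharpened by a hyperparameter alpha. This is an illustration, not this repository's implementation.

```python
import numpy as np

def mrt_loss(logprobs, costs, alpha=5e-3):
    """Expected risk over a sampled subset S(x) of candidate translations.

    logprobs: log p(y|x; theta) for each candidate y in S(x)
    costs:    Delta(y, y*) for each candidate, e.g. 1 - sentence-level BLEU
    alpha:    sharpness of the renormalized distribution Q
    """
    scaled = alpha * np.asarray(logprobs, dtype=float)
    q = np.exp(scaled - scaled.max())
    q /= q.sum()                     # Q(y|x) = p(y|x)^alpha / sum_y' p(y'|x)^alpha
    return float(np.dot(q, costs))   # R(theta) = sum_y Q(y|x) * Delta(y, y*)

# Example: three sampled candidates with their log-probabilities and costs.
print(mrt_loss(logprobs=[-2.0, -5.0, -9.0], costs=[0.2, 0.5, 0.9]))
```

Because Q is renormalized only over the sampled subset, the expected cost is differentiable in the model scores, and a small alpha keeps Q smooth across candidates.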
