xinyadu / nqg

Neural question generation for reading comprehension

Home Page: https://arxiv.org/abs/1705.00106


Tensorflow version

zhezhaoa opened this issue · comments

I am really happy to see question generation work like this. However, I am not familiar with Torch. Is there any resource written in TensorFlow that can reproduce your work?

I see that you implemented a seq2seq model in TF. Could you provide the source code, or say which toolkit you used? I have tried many times with various seq2seq models, but the generated questions are strange and unrelated to the source sentence; they do not even share words with the source sentence. Could you give me some suggestions?

Hi, thank you very much for your kind suggestion~
I am a rookie in seq2seq, and now I am confused about the evaluation. I directly use OpenNMT-tf, and multi-bleu-detok.perl is used for evaluation. The returned BLEU result is 2.21 (or 0.0221?). However, with the same output, the results returned by eval.py (the evaluation script in this toolkit) are Bleu1: 0.263, Bleu2: 0.10017, Bleu3: 0.04796, Bleu4: 0.026. The results from your evaluation script seem much higher than those from OpenNMT-tf, and I wonder why there is such a big gap between the two. I look forward to your reply~ This toolkit is extremely useful and important to me!