xinyadu / nqg

Neural question generation for reading comprehension

Home Page: https://arxiv.org/abs/1705.00106


About BLEU scores with default settings

ganymedetitan opened this issue

Hi! Thank you for open-sourcing your QG work.

I tried running with the default parameters, but I got errors when running qgevalcap, so I evaluated BLEU with coco-caption instead:
https://github.com/XgDuan/coco-caption/
I got 34.79/19.04/12.29/8.44 for BLEU-1 through BLEU-4, which is a bit behind the scores reported in the ACL paper.
I am not sure whether I accidentally did something wrong, since I am not familiar with Lua code, or whether the difference is just due to a different BLEU implementation or parameters (if there is such a difference).

Sorry for my poor English. :/
Thank you very much!
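For reference, discrepancies like this between BLEU implementations usually come down to tokenization, how zero n-gram counts are smoothed, and whether scores are averaged at the corpus or sentence level. The sketch below is a minimal unsmoothed BLEU for a single candidate/reference pair, only to illustrate the mechanics (clipped n-gram precision plus brevity penalty); it is not the qgevalcap or coco-caption code, and real evaluation should use those scorers.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, references, max_n=4):
    """Unsmoothed BLEU for one candidate against a list of references.

    candidate / references are lists of tokens; returns a float in [0, 1].
    """
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        # Clip each n-gram count by its maximum count in any reference.
        max_ref = Counter()
        for ref in references:
            for g, c in Counter(ngrams(ref, n)).items():
                max_ref[g] = max(max_ref[g], c)
        clipped = sum(min(c, max_ref[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(clipped / total)

    # Without smoothing, any zero precision drives the geometric mean to 0 --
    # this is one common source of differences between implementations.
    if min(precisions) == 0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n

    # Brevity penalty against the closest-length reference.
    closest = min(references, key=lambda r: abs(len(r) - len(candidate)))
    if len(candidate) > len(closest):
        bp = 1.0
    else:
        bp = math.exp(1 - len(closest) / len(candidate))
    return bp * math.exp(log_avg)
```

A perfect match scores 1.0, while a single changed token already lowers every n-gram precision, so small tokenization differences between scorers can visibly shift BLEU-3/4.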

Thank you for the quick reply!