nlpyang / PreSumm

code for EMNLP 2019 paper Text Summarization with Pretrained Encoders

Getting the same sequence for all input candidates during generation

samanenayati opened this issue · comments

Hello,
I was using the PreSumm code on a custom dataset, with the data formatted to match the model's expected input.
I trained the Transformer baseline (a simple encoder-decoder) and stopped training once perplexity was low, around 2.
However, at inference time I get a very low ROUGE score. When I inspected the actual generated candidates, I saw that the model produces the same output sequence for every input. I could not figure out the issue.
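For reference, this is how I quantified the symptom: counting the unique sequences in the generated candidates (a minimal sketch; the `candidate_diversity` helper and the example strings are my own, not from the PreSumm codebase):

```python
from collections import Counter

def candidate_diversity(candidates):
    """Return (num_unique, total) for a list of generated summaries."""
    counts = Counter(c.strip() for c in candidates)
    return len(counts), len(candidates)

# Hypothetical decoder outputs illustrating the symptom: every input
# yields the same candidate sequence.
generated = [
    "the model repeats this summary",
    "the model repeats this summary",
    "the model repeats this summary",
]
unique, total = candidate_diversity(generated)
print(f"{unique} unique candidate(s) out of {total}")
```

On my real outputs the unique count is 1 across the whole test set, which is why ROUGE is so low.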
Any help is greatly appreciated.