batra-mlp-lab / visdial-challenge-starter-pytorch

Starter code in PyTorch for the Visual Dialog challenge

Home Page: https://visualdialog.org/challenge/2019

The 'answer' would be 0 if the answer is one word

KingAndQueen opened this issue

[dialog_round["answer"][:-1] for dialog_round in dialog]

Hi, this piece of code confuses me. Since dialog_round["answer"][:-1] and dialog_round["answer"][1:] drop the last and the first word respectively, a one-word answer would leave answers_in and answers_out containing only padding (0). In that case, the model would not learn anything from this sample.
I am not sure whether I am reading this correctly; looking forward to your reply.
Thank you.
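
To spell out the concern, here is a minimal sketch, assuming answer is a plain list of token IDs with no special boundary tokens (the token ID 42 is just an illustrative value):

```python
answer = [42]              # a single-word answer, e.g. "yes"

answers_in = answer[:-1]   # [] -- drops the last (and only) token
answers_out = answer[1:]   # [] -- drops the first (and only) token

# After padding to a fixed length, both sequences would be all zeros,
# so this dialog round would contribute no learning signal.
```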

I think the implementation is correct.

In the case of generative decoding, we prepend and append start and end tokens respectively here:


so dialog_round["answer"] will have 3 tokens (<START>, <ANSWER>, <END>) for single-word answers.
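
A rough sketch of what this gives for a single-word answer; the token IDs below are illustrative placeholders, not the exact vocabulary indices used in the codebase:

```python
SOS, EOS = 1, 2                   # illustrative IDs for <START> / <END>
answer = [42]                     # single-word answer

# The generative reader prepends <START> and appends <END>:
answer = [SOS] + answer + [EOS]   # [<START>, <ANSWER>, <END>]

answers_in = answer[:-1]          # [<START>, <ANSWER>]
answers_out = answer[1:]          # [<ANSWER>, <END>]
# The decoder is still trained to predict the answer token and <END>,
# so single-word answers do provide a learning signal.
```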

Discriminative decoding doesn't use answers_in and answers_out; it works with options:

options, option_lengths = self._pad_sequences(

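For context, padding the candidate options into a fixed-size tensor looks roughly like the sketch below. This is a simplified stand-in, not the repo's actual _pad_sequences helper:

```python
import torch

def pad_sequences(sequences, max_len, pad_id=0):
    """Pad a list of token-ID lists to max_len; also return true lengths."""
    lengths = torch.tensor([min(len(s), max_len) for s in sequences])
    padded = torch.full((len(sequences), max_len), pad_id, dtype=torch.long)
    for i, seq in enumerate(sequences):
        seq = seq[:max_len]
        padded[i, : len(seq)] = torch.tensor(seq, dtype=torch.long)
    return padded, lengths

# Example: three candidate answers of different lengths.
options = [[5, 6, 7], [8], [9, 10]]
padded, lengths = pad_sequences(options, max_len=4)
# padded -> tensor([[ 5,  6,  7,  0],
#                   [ 8,  0,  0,  0],
#                   [ 9, 10,  0,  0]])
```
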
Let me know if this answers your query.

I get it, thank you so much! Your work is very cool!