ematvey / tensorflow-seq2seq-tutorials

Dynamic seq2seq in TensorFlow, step by step

what does _init_decoder_train_connectors actually do?

wolfshow opened this issue · comments

Can you please explain a little bit? Thanks!

In the training phase you have to feed the decoder its input at each time step along with the corresponding target. We also have to modify the output sequences by adding padding and end-of-sequence (EOS) tokens.
You might wonder why these padding and EOS tokens are needed.
During training, the decoder input is the target sequence with an EOS token prepended, and the training targets are the same sequence shifted one time step ahead.
Let's take an example:
a, b, c, d (input) -> p, q, r (output)
So the decoder input sequence should be EOS, p, q, r.
The targets should be p, q, r, 0 (where 0 is the padding token).
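
Concretely, a minimal sketch of what such train connectors typically build looks like this (assuming time-major int32 tensors with EOS = 1 and PAD = 0; the tensor names here are illustrative and not necessarily the exact ones used in `_init_decoder_train_connectors`):

```python
import tensorflow as tf

PAD = 0
EOS = 1

# Time-major targets, e.g. the "p, q, r" sequence: shape [max_time, batch_size]
decoder_targets = tf.placeholder(tf.int32, [None, None], name='decoder_targets')

sequence_size, batch_size = tf.unstack(tf.shape(decoder_targets))

EOS_SLICE = tf.ones([1, batch_size], dtype=tf.int32) * EOS
PAD_SLICE = tf.ones([1, batch_size], dtype=tf.int32) * PAD

# Decoder inputs for training: EOS prepended -> EOS, p, q, r
decoder_train_inputs = tf.concat([EOS_SLICE, decoder_targets], axis=0)

# Training targets: same sequence one step ahead, padded at the end -> p, q, r, PAD
decoder_train_targets = tf.concat([decoder_targets, PAD_SLICE], axis=0)
```

With this layout the decoder is fed `decoder_train_inputs` at training time, and the loss at each time step is computed against `decoder_train_targets`, i.e. the token one step ahead.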