ematvey / tensorflow-seq2seq-tutorials

Dynamic seq2seq in TensorFlow, step by step

Potential bug of the dynamic_decoder

opened this issue · comments

If I set the decoder cell's num_units to twice the encoder's (as the tutorial does), everything works fine. But if I give the encoder and decoder cells the same num_units, an incompatible-shape error occurs. Is this related to the LSTM issue? Thanks.

Not a bug.
The encoder is bidirectional, so its final state is the concatenation of the forward and backward states, i.e. twice the cell dimensionality, and it is passed to the decoder as-is. You can instead put a linear layer between the encoder final state and the decoder initial state and use whatever dimensionality you like.
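To make the shape arithmetic concrete, here is a minimal NumPy sketch (not the repository's actual code; the sizes and projection weights are hypothetical) of why the concatenated bidirectional state is twice the encoder cell size, and how a linear layer bridges it to a decoder of a different size:

```python
import numpy as np

units_enc = 128  # hypothetical encoder cell size
units_dec = 128  # decoder with the SAME size -> mismatch without a bridge

# Bidirectional encoder: the forward and backward final states are
# concatenated, so the state handed to the decoder has 2 * units_enc dims.
fw_state = np.random.randn(1, units_enc)
bw_state = np.random.randn(1, units_enc)
encoder_state = np.concatenate([fw_state, bw_state], axis=-1)  # (1, 256)

# Feeding this directly as the decoder initial state only works when
# units_dec == 2 * units_enc. A linear projection decouples the two sizes:
W = np.random.randn(2 * units_enc, units_dec)  # hypothetical bridge weights
b = np.zeros(units_dec)
decoder_init_state = encoder_state @ W + b  # (1, 128), matches the decoder
```

The same idea applies per state tensor when the cell is an LSTM (project both `c` and `h`).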

@ematvey Oh yes, sorry I made a stupid mistake.