ematvey / tensorflow-seq2seq-tutorials

Dynamic seq2seq in TensorFlow, step by step

Attempted an implementation

scottleith opened this issue

I really enjoyed your "Advanced dynamic seq2seq with TensorFlow" tutorial and decided to try it out myself. I wanted to take a corpus of English quotes and build an encoder-decoder that could reconstruct each quote from its meaning vector (the encoder's final hidden state).
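
For context, here's a stripped-down sketch of what I'm building, following the time-major conventions from your tutorial (the hyperparameters and variable names here are illustrative, not my actual ones):

```python
import tensorflow as tf

# Illustrative hyperparameters -- my real script uses different values.
vocab_size = 27994
embedding_dim = 128
hidden_units = 256

# Time-major int32 token ids: [max_time, batch_size]
encoder_inputs = tf.placeholder(tf.int32, [None, None], name='encoder_inputs')
decoder_inputs = tf.placeholder(tf.int32, [None, None], name='decoder_inputs')
decoder_targets = tf.placeholder(tf.int32, [None, None], name='decoder_targets')

embeddings = tf.Variable(tf.random_uniform([vocab_size, embedding_dim], -1.0, 1.0))
encoder_inputs_embedded = tf.nn.embedding_lookup(embeddings, encoder_inputs)
decoder_inputs_embedded = tf.nn.embedding_lookup(embeddings, decoder_inputs)

# Encoder: keep only the final state as the "meaning vector".
encoder_cell = tf.contrib.rnn.LSTMCell(hidden_units)
_, encoder_final_state = tf.nn.dynamic_rnn(
    encoder_cell, encoder_inputs_embedded,
    dtype=tf.float32, time_major=True, scope='encoder')

# Decoder: initialized from the encoder's final state, so each quote
# has to be reconstructed from that single vector.
decoder_cell = tf.contrib.rnn.LSTMCell(hidden_units)
decoder_outputs, _ = tf.nn.dynamic_rnn(
    decoder_cell, decoder_inputs_embedded,
    initial_state=encoder_final_state,
    time_major=True, scope='decoder')

# Project to vocabulary logits: [max_time, batch_size, vocab_size]
decoder_logits = tf.layers.dense(decoder_outputs, vocab_size)

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    labels=tf.one_hot(decoder_targets, depth=vocab_size, dtype=tf.float32),
    logits=decoder_logits))
```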

I've run into an error in tf.nn.softmax_cross_entropy_with_logits:

InvalidArgumentError (see above for traceback): logits and labels must be same size: logits_size=[1000,27994] labels_size=[500,27994]

(My sequences have 5 timesteps, the batch size is 100, and the vocab size is 27994.)
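
If I'm reading the shapes right, 1000 = 10 × 100 while 500 = 5 × 100, so the logits seem to cover 10 decoder timesteps while the labels cover only 5, i.e. the decoder-side sequence somehow comes out twice as long as the targets. The kind of shape probe I've been running looks like this (it reuses the illustrative names from the sketch above with a dummy batch, not my real feeds):

```python
import numpy as np

# Dummy time-major batch: 5 timesteps, batch size 100. In this toy probe the
# time axes agree; in my real script the logits' time axis comes out at 10.
batch = np.random.randint(0, vocab_size, size=(5, 100))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    logits_shape, targets_shape = sess.run(
        [tf.shape(decoder_logits), tf.shape(decoder_targets)],
        {encoder_inputs: batch, decoder_inputs: batch, decoder_targets: batch})
    print('logits  [time, batch, vocab]:', logits_shape)
    print('targets [time, batch]:', targets_shape)
```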

I've been looking over my code for hours now, but can't find the mistake. I know it's a long shot, but would you be willing to take a look to see where I've gone wrong?

The code is here; the problem is probably around line 246:
https://github.com/scottleith/lstm/blob/master/Attempted%20encoder-decoder%20LSTM.py

The raw data can be downloaded here: https://github.com/alvations/Quotables/blob/master/author-quote.txt

I also apologize if this is an inappropriate place to ask - I wanted to contact you directly, but GitHub doesn't make it easy!