EderSantana / X

X is a temporary name, but here lies RL

Respect batch_size everywhere to support RNN?

gw0 opened this issue · comments

commented

I tried to use a Keras model with a stateful RNN, but did not manage to get it to work. The idea was to introduce the previous internal state as an additional recurrent input. Unfortunately, stateful recurrent units require a fixed batch_input_shape.
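
A minimal sketch of what I was trying (the layer sizes, state dimensionality, and number of actions here are made up): a stateful recurrent layer in Keras only accepts a fully specified batch_input_shape, so every batch fed to the model must contain exactly batch_size samples.

```python
from keras.models import Sequential
from keras.layers import LSTM, Dense

batch_size = 32    # must be identical for every training/prediction call
timesteps = 1
nb_features = 4    # hypothetical size of one environment state

model = Sequential()
# stateful=True keeps the hidden state across batches, but Keras then
# requires a fully specified batch_input_shape, i.e. a fixed batch size.
model.add(LSTM(16, batch_input_shape=(batch_size, timesteps, nb_features),
               stateful=True))
model.add(Dense(2, activation='linear'))  # e.g. one Q-value per action
model.compile(optimizer='rmsprop', loss='mse')
```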

One part of the problem is that ExperienceReplay.get_batch() does not always return batches of a fixed shape (and setting batch_mem = batch_size alone is not enough). On the other hand, the dimensions of the states coming from the environment are not handled in a consistent shape either (maybe in KerasModel.values()).

commented

I believe you will actually have to write your own memory class to fix that problem.

You can make your batch size always the same by initializing the memory with zeros. You can do that, for example, by calling remember in a for loop that adds zero states, as in the sketch below.
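
Something along these lines (just a sketch, not tested against this repo; the exact signature of ExperienceReplay.remember may differ, so the (state, action, reward, next_state, terminal) arguments below are an assumption):

```python
import numpy as np

state_dim = 4     # hypothetical size of one environment state
batch_size = 32
# `memory` is assumed to be the ExperienceReplay instance used by the agent.

zero_state = np.zeros(state_dim)
for _ in range(batch_size):
    # Fill the replay memory with dummy zero transitions so that get_batch()
    # can always return a full batch of `batch_size` samples, even before
    # the agent has collected that many real experiences.
    memory.remember(zero_state, 0, 0.0, zero_state, False)
```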

But note that the memory samples previous experiences randomly. So even though you are using statefulness, you may not get the continuity you are expecting.

In any case, I hope that helps.

commented

Oh, you are right. This won't work the way I thought and has to be approached differently. Thanks!