joschu / modular_rl

Implementation of TRPO and related algorithms

Will dropout break the final loss of the PPO algorithm?

ppaanngggg opened this issue · comments

Is it a bad idea to add a dropout layer to the model?

Are there any experiments on this?

I use the model in eval mode when exploring the environment, and in train mode for the policy, old policy, and value model during training.
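The setup described above can be sketched as follows. This is a minimal, hypothetical PyTorch example (the network sizes and the `Policy` class are illustrative assumptions, not code from modular_rl): dropout is disabled in eval mode during rollouts, so the sampled actions come from a deterministic network, while train mode re-enables it during updates. Note the caveat this creates for PPO: in train mode the ratio pi_new(a|s) / pi_old(a|s) is computed through randomly sampled dropout masks, which adds noise to the surrogate loss that was not present when the actions were collected.

```python
import torch
import torch.nn as nn

# Hypothetical policy network with a dropout layer (illustrative only).
class Policy(nn.Module):
    def __init__(self, obs_dim=4, act_dim=2, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Dropout(p),
            nn.Linear(64, act_dim),
        )

    def forward(self, obs):
        return self.net(obs)

torch.manual_seed(0)
policy = Policy()
obs = torch.randn(1, 4)

# Rollout phase: eval mode disables dropout, so two forward passes
# on the same observation give identical action logits.
policy.eval()
with torch.no_grad():
    logits_a = policy(obs)
    logits_b = policy(obs)
assert torch.equal(logits_a, logits_b)

# Training phase: train mode re-enables dropout, so each forward pass
# samples a fresh mask -- the log-probs entering the PPO ratio become
# stochastic even for a fixed observation and fixed weights.
policy.train()
```

Because the old-policy log-probs were recorded with dropout off, evaluating the new policy with dropout on makes the two sides of the ratio inconsistent; running the update-time forward passes in eval mode as well (or dropping dropout entirely) avoids that mismatch.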