google-deepmind / open_spiel

OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games.

NeuRD Clip Instability [RNaD]

spktrm opened this issue · comments

In the paper and the OpenSpiel implementation, the NeuRD clip value is set to 10,000.

From my testing, this is a major source of instability: the v-trace operator frequently produces very large Q-value targets, and these flow directly into the advantage estimates. With a clip bound this large, the clipping does little to prevent extremely large policy-loss values.
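To illustrate, here is a minimal, self-contained sketch of a NeuRD-style clip applied to the advantages. This is not the actual open_spiel code; the function and variable names (`clipped_neurd_advantages`, `q_targets`, `neurd_clip`) are mine. It just shows that when targets reach the order of a few thousand, a clip of 10,000 essentially never binds, whereas a clip of 10 keeps the advantages, and hence the policy loss, bounded:

```python
# Illustrative sketch only (not the open_spiel implementation).
import jax.numpy as jnp


def clipped_neurd_advantages(q_targets: jnp.ndarray,
                             policy: jnp.ndarray,
                             neurd_clip: float) -> jnp.ndarray:
  """Per-action advantage against the policy value, clipped to +/- neurd_clip."""
  baseline = jnp.sum(policy * q_targets, axis=-1, keepdims=True)
  advantages = q_targets - baseline
  return jnp.clip(advantages, -neurd_clip, neurd_clip)


# Toy numbers: v-trace occasionally returns Q targets in the thousands.
q_targets = jnp.array([[2500.0, -1800.0, 40.0]])
policy = jnp.array([[0.2, 0.3, 0.5]])

print(clipped_neurd_advantages(q_targets, policy, 10_000.0))  # clip never binds
print(clipped_neurd_advantages(q_targets, policy, 10.0))      # bounded to [-10, 10]
```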

Setting this value to something lower, e.g. 10, makes training noticeably more consistent.
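For reference, this is roughly the change I tested. Treat the module path, config field names, and `step()` call as assumptions about the current rnad layout rather than an exact recipe:

```python
# Sketch of lowering the NeuRD clip when building an RNaD solver.
# Module path, config fields, and the game name are assumptions; check
# them against your open_spiel checkout before running.
from open_spiel.python.algorithms.rnad import rnad

config = rnad.RNaDConfig(
    game_name="kuhn_poker",
    nerd=rnad.NerdConfig(beta=2.0, clip=10.0),  # default clip is 10_000
)
solver = rnad.RNaDSolver(config)

for _ in range(10):
  solver.step()  # one learner update with the lower clip
```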

Could the authors comment on why such a large clip value was used, and why it did not introduce this instability in their experiments?

Thank you.

P.S. The README incorrectly reports a clip value of 100k instead of the 10k that is actually used.

@perolat any ideas?