awjuliani / DeepRL-Agents

A set of Deep Reinforcement Learning Agents implemented in Tensorflow.

How to set hyper-parameters? "The right recipe!"

IbrahimSobh opened this issue · comments

Hi

@DMTSource
@awjuliani

Is there a good way to set hyperparameters?

  • Reward value
  • Parameter initialization method
  • LSTM length
  • Learning rate
  • Optimizer (Adam or RMSProp)
  • Gradient clipping value

After tens of experiments, I found that even a tiny change to any one of these affects the whole training run dramatically, usually for the worse.

It is also not practical to conduct a grid search over the different parameters, because a single experiment may take hours or days and cost a lot of money.

One trick I usually use is a large network with dropout to reduce or eliminate overfitting (see the sketch below), but what about all of the above?
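For reference, a minimal TensorFlow 1.x sketch of that large-network-plus-dropout trick; the placeholder names, layer sizes, and action count are made up for illustration and not taken from this repo:

```python
import tensorflow as tf

num_actions = 4  # hypothetical action-space size
inputs = tf.placeholder(tf.float32, shape=[None, 84])
keep_prob = tf.placeholder(tf.float32, shape=[])  # feed <1.0 in training, 1.0 at eval

# An intentionally oversized hidden layer, regularized by dropout.
hidden = tf.layers.dense(inputs, 512, activation=tf.nn.relu)
hidden = tf.nn.dropout(hidden, keep_prob)
logits = tf.layers.dense(hidden, num_actions)
```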

Another trick: try to keep learning rate × gradient ≈ 1e-3 × parameter value. (In other words, make each parameter update around 1/1000 of the parameter's value, to prevent updates that are too large or too small.)
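As a rough check for that heuristic, here is a small NumPy sketch (plain SGD, made-up tensor values) that monitors the update-to-parameter ratio:

```python
import numpy as np

def update_ratio(param, grad, learning_rate):
    # Norm of the SGD update relative to the norm of the parameter.
    # The heuristic above aims for a ratio of roughly 1e-3.
    update = learning_rate * grad
    return np.linalg.norm(update) / (np.linalg.norm(param) + 1e-8)

# Made-up tensors for illustration only.
param = 0.1 * np.random.randn(256, 128)
grad = 0.01 * np.random.randn(256, 128)

for lr in (1e-2, 1e-3, 1e-4):
    print("lr=%g  ratio=%.2e" % (lr, update_ratio(param, grad, lr)))
```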

What do you recommend?

This is a very common issue in machine learning in general. Some quick reading will show you suggestions like Bayesian > random > grid search. That is a SUPER simple view, though, because I have seen many people try Bayesian optimization and fail, more so in architecture search than in hyperparameter optimization.

I would DEF not tune things by hand or use a grid search. Just let it run via Spearmint for a few days to save your sanity.
https://github.com/HIPS/Spearmint

Of course this is just one quick, old idea. I'm sure there are many more cutting-edge ways to explore the parameter space.

@DMTSource thank you, helpful as usual :)

Moreover, I have seen the following:

This short video
and this nice article

What are some simple ways to perform optimization like Spearmint does? I mean something that can be applied easily to this TensorFlow "DeepRL-Agents" code.

The Spearmint examples show how to place your code inside a function and then call it from another. Spearmint uses this to automatically determine the best values for whatever you let it optimize.
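For concreteness, a hedged sketch of that wrapper pattern, following the shape of the Spearmint examples: Spearmint imports your module, calls main(job_id, params) once per experiment, and minimizes the returned float. The file name, variable names, and training stub below are hypothetical; the variables and their ranges would be declared in the accompanying config.json.

```python
# experiment.py -- hypothetical wrapper module for Spearmint.

def train_agent(learning_rate, gamma):
    # Hypothetical stand-in for this repo's training loop. Replace the body
    # with a real run and return, e.g., mean episode reward over the last
    # 100 episodes. The toy expression below just keeps the sketch runnable.
    return -(learning_rate - 1e-3) ** 2 - (gamma - 0.99) ** 2

def main(job_id, params):
    # params is a dict with one array per variable declared in config.json.
    score = train_agent(learning_rate=float(params['learning_rate'][0]),
                        gamma=float(params['gamma'][0]))
    return -score  # Spearmint minimizes, so negate a higher-is-better score
```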

For example, you could set ranges for your learning rate, skip rate, gamma, and max experience buffer length, and then have Spearmint automatically ("intelligently", via Bayesian optimization) explore the parameter space more quickly than a grid or random search might.

The trick would be evaluation: you would have to take the max of the average score, or some other grading factor, to judge the experiments against each other in a meaningful way. If you use multiple factors, you could try minimizing (-top score, time to score) and try to discover a network that works best for you. The idea being that even if it took a month, it's totally automatic. Smartish brute force, woooo!
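A minimal sketch of one way to fold multiple factors into a single number for the tuner to minimize; the 90% threshold and the trade-off weight are assumptions for illustration, not anything from this repo:

```python
def grade_experiment(episode_rewards, episode_seconds):
    # Combine two factors into one value to minimize:
    # how high the agent scored, and how quickly it got there.
    top_score = max(episode_rewards)
    # First episode whose reward reaches 90% of the best reward seen.
    hit = next(i for i, r in enumerate(episode_rewards) if r >= 0.9 * top_score)
    time_to_score = sum(episode_seconds[:hit + 1])
    return -top_score + 0.01 * time_to_score  # made-up trade-off weight

# Toy usage with made-up numbers.
print(grade_experiment([1.0, 3.0, 8.0, 9.5, 9.0], [60, 60, 60, 60, 60]))
```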

If you have the resources, Spearmint can speed up its efforts by running multiple experiments at once.