odow / SDDP.jl

Stochastic Dual Dynamic Programming in Julia

Home page: https://sddp.dev

Free up resources in rolling horizon

pauleseifert opened this issue

Hi Oscar,

I have implemented a larger model and want to investigate a rolling-horizon version of it.

The model is trained in serial mode, reserving (stages * states_per_stage + 1) Gurobi instances. I use a loop that loads the new data and creates a model = SDDP.MarkovianPolicyGraph() in each iteration. The model is then trained, the policy is simulated, and the transition variables are saved to disk, after which a new model = SDDP.MarkovianPolicyGraph() is created for the next period.

However, the Gurobi instances are not freed in between iterations, so the model eventually runs out of available licences or kills the Gurobi token server.

I tried setting model = nothing at the end of each loop iteration, without the expected result. Do you have any ideas on how to resolve this?

Best,
Paul

You can force the garbage collector with GC.gc(), but you should first make sure that no references to the model remain. The best way is to wrap the rolling-horizon step in a function that takes the current state and returns the first few output steps. Then call the function from within the loop, and optionally force GC.gc() after each call.
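For example, something along these lines (a minimal sketch: the toy three-stage Markov chain, the trivial subproblem, the iteration limit, and the returned quantity are placeholders for the real data loading, training, and saving to disk):

```julia
using SDDP, JuMP, Gurobi

function solve_rolling_step(initial_state::Float64)
    # `model` is local to this function: once the function returns, no
    # reference to it survives, so the garbage collector can reclaim it
    # and run the finalizers that free the underlying Gurobi models.
    model = SDDP.MarkovianPolicyGraph(;
        transition_matrices = [ones(1, 1), [0.5 0.5], [0.5 0.5; 0.5 0.5]],
        sense = :Min,
        lower_bound = 0.0,
        optimizer = Gurobi.Optimizer,
    ) do sp, node
        t, markov_state = node
        @variable(sp, x >= 0, SDDP.State, initial_value = initial_state)
        @stageobjective(sp, markov_state * x.out)
    end
    SDDP.train(model; iteration_limit = 20)
    simulations = SDDP.simulate(model, 1, [:x])
    # Return plain data only; returning `model` (or anything that
    # references it) would keep the Gurobi instances alive.
    return simulations[1][end][:x].out
end

state = 0.0
for period in 1:10
    global state = solve_rolling_step(state)
    GC.gc()  # run finalizers so the Gurobi licences are actually released
end
```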

You can also create a single env = Gurobi.Env() object and pass it with optimizer = () -> Gurobi.Optimizer(env).
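A minimal sketch of that pattern, reusing the same placeholder model as above:

```julia
using SDDP, JuMP, Gurobi

# One environment for the whole session: a single licence checkout that
# every subproblem's optimizer attaches to.
const GRB_ENV = Gurobi.Env()

model = SDDP.MarkovianPolicyGraph(;
    transition_matrices = [ones(1, 1), [0.5 0.5], [0.5 0.5; 0.5 0.5]],
    sense = :Min,
    lower_bound = 0.0,
    # Attach every Gurobi.Optimizer to the shared environment instead of
    # creating (and licensing) a fresh one per subproblem.
    optimizer = () -> Gurobi.Optimizer(GRB_ENV),
) do sp, node
    t, markov_state = node
    @variable(sp, x >= 0, SDDP.State, initial_value = 0.0)
    @stageobjective(sp, markov_state * x.out)
end
```

Because every optimizer attaches to the same environment, only one licence token is checked out, no matter how many subproblems or rolling-horizon iterations you create.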

That solved the problem, thanks!