odow / SDDP.jl

Stochastic Dual Dynamic Programming in Julia

Home Page: https://sddp.dev

Accessing Optimal Decision

SolidAhmad opened this issue

Once we have trained our model and convergence has been achieved, we end up with a policy graph in which the subproblem at each node contains all the cuts generated in the backward passes. However, we don't have direct access to the variable values that produced the lower bound; instead, we have to simulate to get an idea of how the variables interact with the policy graph. I am only interested in the optimal first-stage state variables, namely the state variables used to calculate the lower bound in the last iteration. Is there a way to access or calculate these directly, as opposed to inferring them through simulations?

You can get a decision rule for a node:

https://sddp.dev/stable/tutorial/first_steps/#Obtaining-the-decision-rule
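The approach in that tutorial looks roughly like the following sketch. The state name `:x` and the incoming value are placeholders; substitute the state names and initial state of your own model:

```julia
using SDDP

# Build a decision rule for node 1 from the trained policy.
rule = SDDP.DecisionRule(model; node = 1)

# Evaluate the rule at an incoming state. `:x => 0.0` is a placeholder;
# if the node is stochastic, also pass `noise = ...`.
solution = SDDP.evaluate(rule; incoming_state = Dict(:x => 0.0))

# The optimal first-stage state values.
solution.outgoing_state
```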

If your first stage is deterministic, you can get the JuMP model from node 1 as follows:

sp = model[1].subproblem
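From there you can re-solve the subproblem and query values with JuMP. A minimal sketch, assuming the model declares a state variable named `x` (replace it with your own state's name):

```julia
using SDDP, JuMP

# After training, the node-1 subproblem contains all the cuts.
sp = model[1].subproblem

# Re-solve it; assuming a deterministic first stage, the incoming state
# is still fixed to the initial state from the last forward pass.
JuMP.optimize!(sp)

# Query the outgoing (first-stage) value of the state variable `x`.
x_first_stage = JuMP.value(sp[:x].out)
```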

But if your first stage is deterministic, then just do a single simulation and look at the values.
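For instance, a single simulation might look like this sketch, again assuming a state variable named `x`:

```julia
using SDDP

# Run one simulation replication, recording the state `x`.
simulations = SDDP.simulate(model, 1, [:x])

# First replication, first stage; `:x` is an SDDP.State with .in/.out fields.
stage_1 = simulations[1][1]
stage_1[:x].out  # the first-stage decision for x
```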

> But if your first stage is deterministic, then just do a single simulation and look at the values.

I get that you meant stochastic. That makes sense, thank you!