odow / SDDP.jl

Stochastic Dual Dynamic Programming in Julia

Home Page: https://sddp.dev

Is it mandatory to create state variables for all nodes?

Thiago-NovaesB opened this issue

It seems to me that SDDP.jl currently requires state variables in all nodes.
In the case of a simple linear graph, where each node has only one incoming and one outgoing edge, it would be useful to be able to stop creating a state variable once it is no longer needed, for example if a reservoir ceases to exist after a certain node.

Is there any theoretical or practical reason to require state variables on all nodes?

When I try not to create a state variable for a node, the code breaks when it checks the bounds of the state variables:

x_out = policy_graph[child].states[k].out

for (child, probability) in graph.nodes[graph.root_node]
    push!(policy_graph.root_children, Noise(child, probability))
    # We check the feasibility of the initial point here. It is a really
    # tricky feasibility bug to diagnose otherwise. See #387 for details.
    for (k, v) in policy_graph.initial_root_state
        x_out = policy_graph[child].states[k].out
        if JuMP.has_lower_bound(x_out) && JuMP.lower_bound(x_out) > v
            error("Initial point $(v) violates lower bound on state $k")
        elseif JuMP.has_upper_bound(x_out) && JuMP.upper_bound(x_out) < v
            error("Initial point $(v) violates upper bound on state $k")
        end
    end
end

Is it mandatory to create state variables for all nodes?

Yes

Is there any theoretical or practical reason to require state variables on all nodes?

No. In theory, the nodes can have different state variables, but there are a few practical considerations:

  • What is the incoming value of a state variable that appears for the first time?
  • How should the user keep track of which state variables appear at which nodes?

I chose to require the union of all state variables in every node for simplicity, and because I think it is the "right" thing to do. I accept that this is a subjective choice, and various people have commented on it. I like it because it forces the user to think in terms of stochastic optimal control: there is one state vector that we optimize over time.
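
As an illustration of that convention, here is a minimal sketch (not from this thread; the stage count, the bounds, and the HiGHS optimizer are my own assumptions): the reservoir state is declared in every node of a three-stage linear policy graph, even though it is only meaningfully used in the first two stages.

using SDDP, HiGHS

model = SDDP.LinearPolicyGraph(;
    stages = 3,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    # The union of all state variables is declared in every node.
    @variable(sp, 0 <= reservoir <= 100, SDDP.State, initial_value = 50)
    @variable(sp, spill >= 0)
    if t <= 2
        # The reservoir is only active in the first two stages.
        @constraint(sp, reservoir.out == reservoir.in - spill)
    else
        # Afterwards it is simply carried along unused.
        @constraint(sp, reservoir.out == reservoir.in)
    end
    @stageobjective(sp, spill)
end

The model builds and trains as usual (for example, SDDP.train(model; iteration_limit = 5)); the stage-3 reservoir just rides along in the state vector.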

I understand; I agree that the user interface would be much more complicated if state variables could disappear and reappear.
For my application, I am not using state variables as quantities governed by some kind of balance equation. They are really information that I need to pass from one node to the next. In this case, when a new state variable appears, I don't care about its .in, only the .out that will be passed forward.

One last question, is this union of state variables that you mentioned something very intrinsic to SDDP.jl?
Or could I try to adapt this into a fork?

For my application, I am not using state variables as quantities governed by some kind of balance equation. They are really information that I need to pass from one node to the next. In this case, when a new state variable appears, I don't care about its .in, only the .out that will be passed forward.

Sure. But you could still create all the state variables in every node. If they are unused, then the corresponding dual is 0, so they won't appear in the cut calculations. There should be a very minimal performance difference.

You can also do the trick of just fixing the unused variables to 0 and they'll get resolved out by the solver.
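
A minimal sketch of that trick, reusing the toy model above (the variable names and the stage at which the reservoir disappears are illustrative assumptions): in the stages where the state is unused, fix its outgoing copy to zero so the solver can presolve it away.

using SDDP, JuMP, HiGHS

model = SDDP.LinearPolicyGraph(;
    stages = 3,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variable(sp, 0 <= reservoir <= 100, SDDP.State, initial_value = 50)
    @variable(sp, spill >= 0)
    if t <= 2
        @constraint(sp, reservoir.out == reservoir.in - spill)
    else
        # The reservoir no longer exists here: pin the unused outgoing
        # state to zero (force = true overrides its bounds).
        JuMP.fix(reservoir.out, 0.0; force = true)
    end
    @stageobjective(sp, spill)
end

The incoming copy reservoir.in is still created and fixed by SDDP.jl during the solve, but since nothing uses it at that node, its dual is zero and it does not contribute to the cuts, as described above.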

is this union of state variables that you mentioned something very intrinsic to SDDP.jl? Or could I try to adapt this into a fork?

It would require quite a few engineering changes. I can't stop you trying this in a fork 😄, but I don't want to add it to the main SDDP.jl.

I'm creating the variables in all nodes for now. It works well in small tests; I haven't run large tests yet.
I'll also try fixing the unused variables to zero as you suggested.
Thanks!

I'll also point out that this is a problem with MSPFormat as well: #693