odow / SDDP.jl

Stochastic Dual Dynamic Programming in Julia

Home Page: https://sddp.dev

Here and now solution

hghayoom opened this issue

Hi Oscar,
I hope you are doing well.
I am working on a here-and-now problem. A course I took taught me that "a here-and-now problem has only one solution" (I am not sure if that is exactly correct?).
To validate that, I used your example Here. Then I plotted the results of the code, expecting to see a single line in each graph because the answers should be the same for all simulations, but instead I get bands in all my variables. Could you please clarify this for me? This is the picture I get.

Here is the Code:

using SDDP, Plots, Plots.PlotMeasures
import HiGHS
decision_hazard_2 = SDDP.LinearPolicyGraph(
    stages = 5,  # <-- changed
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, node
    @variables(sp, begin
        0 <= x_storage <= 8, (SDDP.State, initial_value = 6)
        u_thermal >= 0, (SDDP.State, initial_value = 0)
        u_hydro >= 0
        u_unmet_demand >= 0
    end)
    if node == 1                                        # <-- new
        @constraint(sp, x_storage.out == x_storage.in)  # <-- new
        @stageobjective(sp, 0)                          # <-- new
    else
        @constraint(sp, u_thermal.in + u_hydro == 9 - u_unmet_demand)
        @constraint(sp, c_balance, x_storage.out == x_storage.in - u_hydro + 0)
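        # The trailing "+ 0" is a placeholder: SDDP.parameterize samples
        # ω ∈ {2, 3} at each stage and set_normalized_rhs overwrites the
        # constant term of c_balance, so ω acts as the random inflow.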
        SDDP.parameterize(sp, [2, 3]) do ω
            return set_normalized_rhs(c_balance, ω)
        end
        @stageobjective(sp, 500 * u_unmet_demand + 20 * u_thermal.in)
    end
end

SDDP.train(decision_hazard_2; iteration_limit = 100)
simulations = SDDP.simulate(
    # The trained model to simulate.
    decision_hazard_2,
    # The number of replications.
    100,
    # A list of names to record the values of.
    [:x_storage, :u_thermal, :u_hydro, :u_unmet_demand];
    sampling_scheme = SDDP.InSampleMonteCarlo(
        terminate_on_cycle = false,
        terminate_on_dummy_leaf = true,
    ),
)

Plots.plot(
    SDDP.publication_plot(simulations, title = "x_storage") do data
        return data[:x_storage].in
    end,
    SDDP.publication_plot(simulations, title = "u_thermal") do data
        return data[:u_thermal].out
    end,
    SDDP.publication_plot(simulations, title = "u_hydro") do data
        return data[:u_hydro]
    end,
    SDDP.publication_plot(simulations, title = "u_unmet_demand") do data
        return data[:u_unmet_demand]
    end;
    margin=20mm,
    left_margin=100mm,
    guidefontsize=100,
    xtickfontsize=60,
    ytickfontsize=60,
    xticks = 0:5:52,
    titlefontsize=100,
    xlabel = "Stage",
    ylims = (0,10),
    layout = (1, 4),
    size = (5000, 1000),
)
Plots.savefig("HereNow.pdf")

Your insights are appreciated.
Thanks,
Hadi

Your model looks correct.

A course I took taught me that "a here-and-now problem has only one solution" (I am not sure if that is exactly correct?)

This isn't true in multistage. Here-and-now just means that you choose the value of a control in period t-1 but use it in period t. It's not the case that the control needs to be the same in every time period (that's a different model again).
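
As a rough sketch of that pattern in SDDP.jl (the names and numbers below are illustrative, not taken from your model): declaring the control as a state variable is what makes it here-and-now, because its value is fixed as .out in stage t-1 and consumed as .in in stage t, before the stage-t noise is applied.

using SDDP
import HiGHS

sketch = SDDP.LinearPolicyGraph(
    stages = 3,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variables(sp, begin
        # Here-and-now control: u.out is chosen in stage t (for stage t + 1),
        # and u.in is the value that was committed in stage t - 1.
        0 <= u <= 10, (SDDP.State, initial_value = 0)
        # Recourse variable chosen after the stage-t noise is observed.
        slack >= 0
    end)
    @constraint(sp, c_demand, u.in + slack >= 0)
    # Random demand: the RHS of c_demand is replaced by the sampled ω.
    SDDP.parameterize(sp, [4, 6]) do ω
        return set_normalized_rhs(c_demand, ω)
    end
    @stageobjective(sp, 20 * u.in + 500 * slack)
end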

I don't find the literature on stochastic programming helpful. Just try to model your problem as the agent would make decisions, and ignore the theory around hazard-decision/wait-and-see/non-anticipativity.

What decisions does an agent make? When do they make those decisions? What information do they have available to them?

I expected to see a single line in each graph because the answers should be the same for all simulations

This is not what you should expect. See previous answer.

Instead of

SDDP.publication_plot(simulations, title = "u_thermal") do data
    return data[:u_thermal].out
end

you probably want

SDDP.publication_plot(simulations, title = "u_thermal") do data
    return data[:u_thermal].in
end

So that the plot is showing the quantity of thermal used in period t, not the quantity that was decided in period t for use in period t+1.
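
For example, each replication returned by SDDP.simulate is a vector of per-stage dictionaries, so (a small illustration, assuming the simulations object from the script above with at least one replication and two stages):

# First replication, stage 2: thermal used in stage 2 versus the amount
# committed in stage 2 for use in stage 3.
used_in_stage_2 = simulations[1][2][:u_thermal].in
committed_for_stage_3 = simulations[1][2][:u_thermal].out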

Hi Oscar,
Thanks for the explanation.
It is now clear to me what happens in the multistage case and how to model it.
I think that in a classic two-stage stochastic problem, because there are only two stages, here-and-now problems have only one solution, but in the multistage case that is not necessarily true.

Thanks,
Hadi

here-and-now problems have only one solution

I still dislike this idea.

You could have a problem where some of the controls are chosen in the previous step and some aren't. For example, maybe you need to choose your coal generation in t-1, but you can choose OCGT generation in stage t. Is that a here-and-now or wait-and-see problem?

I find it better to just ignore the labels. Everything is a graph and you can make some variables "here-and-now" with an explicit modeling choice.
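
A concrete sketch of that mixed case (the names, costs, and demand values below are made up for illustration): coal is committed one stage ahead by declaring it a state variable, while OCGT is an ordinary control chosen after the demand noise is revealed, all within the same stage problem.

using SDDP
import HiGHS

mixed = SDDP.LinearPolicyGraph(
    stages = 5,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variables(sp, begin
        # Here-and-now: coal output for stage t is committed in stage t - 1.
        0 <= u_coal <= 10, (SDDP.State, initial_value = 0)
        # Wait-and-see: OCGT output is chosen in stage t, after demand is known.
        u_ocgt >= 0
        u_unmet >= 0
    end)
    @constraint(sp, c_demand, u_coal.in + u_ocgt + u_unmet >= 0)
    # Random demand on the RHS of c_demand.
    SDDP.parameterize(sp, [7, 9, 11]) do ω
        return set_normalized_rhs(c_demand, ω)
    end
    @stageobjective(sp, 20 * u_coal.in + 100 * u_ocgt + 500 * u_unmet)
end

SDDP.train(mixed; iteration_limit = 50)

The only modeling difference between the two controls is the SDDP.State annotation on u_coal; there is no separate "here-and-now" or "wait-and-see" machinery.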

Dear Oscar,
You are correct, and I have learned a lot.
Thanks for explaining everything.

No problem. Closing because this seems resolved.