odow / SDDP.jl

Stochastic Dual Dynamic Programming in Julia

Home Page: https://sddp.dev

Regarding file saving issues of SDDP model

wrx1990 opened this issue · comments

I have two problems with saving a model.

One:

I use read_from_file to open the saved model, then train, and then simulate; the result contains a lot of NaNs.
Here is a comparison of the two results.

1. model is generated by running the code directly

The SDDP.simulate() result of this model is correct.

# get_model is defined in a separate .jl file to make it easier to pass parameters.
model = get_model(pre_num, g_loaded, lambda, beta, start_price, suohui_usd)
SDDP.train(
    model;
    iteration_limit = 400,
    log_frequency = 5,
    print_level = 0,
    risk_measure = SDDP.EAVaR(lambda = lambda, beta = beta),
)
simulations = SDDP.simulate(
    model,
    1,
    [:all_value, :USD_volume, :CNH_volume, :change_CNH, :change_USD, :spot_node, :change_Value];
    skip_undefined_variables = true,
)

[screenshot: simulation results without NaN]

2. model_ is obtained via read_from_file

The SDDP.simulate() result of this model_ is wrong: it contains NaN.

write_to_file(model, "my_model.sof.json.gz"; validation_scenarios = 10)
model_, validation_scenarios = read_from_file("my_model.sof.json.gz")  # model_ is the model read back from the file
set_optimizer(model_, HiGHS.Optimizer)
SDDP.train(
    model_;
    iteration_limit = 400,
    log_frequency = 5,
    print_level = 0,
    risk_measure = SDDP.EAVaR(lambda = lambda, beta = beta),
)
simulations = SDDP.simulate(
    model_,
    1,
    [:all_value, :USD_volume, :CNH_volume, :change_CNH, :change_USD, :spot_node, :change_Value];
    skip_undefined_variables = true,
)

[screenshot: simulation results containing NaN]

Two:

The node type of the model read via read_from_file is Dict{String, SDDP.Node{String}}, which differs from the original model's Dict{Tuple{Int64, Float64}, SDDP.Node{Tuple{Int64, Float64}}}. This causes the SDDP.DecisionRule method to throw an error.

The code is:

rule = SDDP.DecisionRule(model_; node = (j, closest_price))
solution = SDDP.evaluate(
    rule;
    incoming_state = Dict(
        :USD_volume => USD_volume,
        :CNH_volume => CNH_volume,
        :all_change_USD_value => 0.0,
    ),
    noise = nothing,
    controls_to_record = [:all_value, :USD_volume, :change_USD, :change_CNH, :spot_node, :change_Value, :CNH_volume, :xingquan, :buxingquan],
)

The error is:

TypeError: in keyword argument node, expected String, got a value of type Tuple{Int64, Float64}

Stacktrace:
 [1] top-level scope
   @ ./In[34]:12

I changed the node into a string, but still got an error saying the variables do not exist.

rule = SDDP.DecisionRule(model_; node = string((j, closest_price)))  # string(node)
solution = SDDP.evaluate(
    rule;
    incoming_state = Dict(
        :USD_volume => USD_volume,
        :CNH_volume => CNH_volume,
        :all_change_USD_value => 0.0,
    ),
    noise = nothing,
    controls_to_record = [:all_value, :USD_volume, :change_USD, :change_CNH, :spot_node, :change_Value, :CNH_volume, :xingquan, :buxingquan],
)

The error is:

KeyError: key :all_value not found

Stacktrace:
 [1] getindex(m::Model, name::Symbol)
   @ JuMP ~/.julia/packages/JuMP/D44Aq/src/JuMP.jl:918
 [2] (::SDDP.var"#115#116"{SDDP.DecisionRule{String}})(c::Symbol)
   @ SDDP ./none:0
 [3] iterate
   @ ./generator.jl:47 [inlined]
 [4] _all(f::Base.var"#372#374", itr::Base.Generator{Vector{Symbol}, SDDP.var"#115#116"{SDDP.DecisionRule{String}}}, #unused#::Colon)
   @ Base ./reduce.jl:1282
 [5] all
   @ ./reduce.jl:1278 [inlined]
 [6] Dict(kv::Base.Generator{Vector{Symbol}, SDDP.var"#115#116"{SDDP.DecisionRule{String}}})
   @ Base ./dict.jl:111
 [7] evaluate(rule::SDDP.DecisionRule{String}; incoming_state::Dict{Symbol, Float64}, noise::Nothing, controls_to_record::Vector{Symbol})
   @ SDDP ~/.julia/packages/SDDP/ZJfQL/src/algorithm.jl:1427
 [8] top-level scope
   @ ./In[35]:13

But it works fine when I run the originally generated model.

rule = SDDP.DecisionRule(model; node = (j, closest_price))  # this works
solution = SDDP.evaluate(
    rule;
    incoming_state = Dict(
        :USD_volume => USD_volume,
        :CNH_volume => CNH_volume,
        :all_change_USD_value => 0.0,
    ),
    noise = nothing,
    controls_to_record = [:all_value, :USD_volume, :change_USD, :change_CNH, :spot_node, :change_Value, :CNH_volume, :xingquan, :buxingquan],
)

Tip:

The graph of the model is passed in as a parameter:

model = SDDP.PolicyGraph(
    g;  # the graph is passed in as a parameter
    sense = :Max,
    upper_bound = 10.0,
    optimizer = HiGHS.Optimizer,
) do sp, node
    # subproblem definition as in get_model above
end

Both of these are expected behavior.

  1. We cannot recover the original JuMP structure of the subproblem, so you cannot use [:all_value, :USD_volume, :CNH_volume, :change_CNH, :change_USD, :spot_node, :change_Value]. You'd need to use a custom recorder and JuMP.variable_by_name (see the sketch below).
  2. JSON cannot store tuples, so all nodes are written to file as the string representation. We do not attempt to convert back to the original form.

We could write documentation to clarify these points, but I won't be fixing them. Note that write_to_file is still very experimental. See the warnings in the docstrings: https://sddp.dev/stable/apireference/#SDDP.write_to_file

In general, I would not write the model to file. Just use the .jl file to reconstruct it if needed.
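
For reference, a minimal sketch of the custom-recorder approach, assuming the variable names from the snippets above (all_value, USD_volume) and a trained model_; the custom_recorders keyword of SDDP.simulate and JuMP.variable_by_name are documented APIs, but the rest is illustrative:

using SDDP, JuMP

# Look each control up by its string name, because the model read back from
# file no longer carries the original Symbol-based variable registry.
simulations = SDDP.simulate(
    model_,
    1;
    custom_recorders = Dict{Symbol, Function}(
        :all_value => sp -> JuMP.value(JuMP.variable_by_name(sp, "all_value")),
        :USD_volume => sp -> JuMP.value(JuMP.variable_by_name(sp, "USD_volume")),
    ),
)
# simulations[1][t][:all_value] then holds the recorded value at stage t.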

I get it. I save the model for two purposes. One is to keep the model consistent between runs and to reproduce results, as in Python. The second is to make it more convenient for others to use without leaking the code.

I also found a way to save it: save the graph to a file. I have tested that if the graph is the same, the results of multiple training runs are almost the same.
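
For reference, a minimal sketch of this graph-file approach, assuming the graph is stored as plain nodes-and-edges data in JSON (the file name and layout are illustrative; SDDP.Graph, SDDP.add_node, and SDDP.add_edge are SDDP.jl's graph-building API):

using JSON, SDDP, Random

data = JSON.parsefile("graph.json")  # hypothetical layout: {"nodes": [...], "edges": [[from, to, p], ...]}

g = SDDP.Graph("root")  # string node names survive a JSON round-trip, unlike tuples
for node in data["nodes"]
    SDDP.add_node(g, node)
end
for (from, to, p) in data["edges"]
    SDDP.add_edge(g, from => to, p)
end

# Fix the seed so repeated training runs are comparable, assuming SDDP.jl
# samples scenarios from Julia's global RNG.
Random.seed!(1234)
model = get_model(pre_num, g, lambda, beta, start_price, suohui_usd)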

The second is to make it more convenient for others to use without leaking the code.

What are you using SDDP.jl for that this is a concern??? Feel free to email me o.dowson@gmail.com.

I also found a way to save it: save the graph to a file.

Yes, this is also a way.

I generally structure my models like this:
https://github.com/odow/SDDP.jl/blob/master/papers/policy_graph/paper.jl
https://github.com/odow/SDDP.jl/blob/master/papers/policy_graph/powder_data.json
where there is one external data file and one script. The script is useless without the data file that goes with it.
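
A minimal sketch of that one-data-file-plus-one-script pattern (the field names, the build_graph helper, and the trivial placeholder subproblem are illustrative, not the contents of powder_data.json):

# run_model.jl -- useless without the data file that accompanies it.
using JSON, SDDP, HiGHS

data = JSON.parsefile(ARGS[1])  # e.g. julia run_model.jl my_data.json

model = SDDP.PolicyGraph(
    build_graph(data);  # hypothetical helper that turns the data into an SDDP.Graph
    sense = :Max,
    upper_bound = data["upper_bound"],
    optimizer = HiGHS.Optimizer,
) do sp, node
    # Build the real subproblem from `data` here; trivial placeholder:
    @variable(sp, 0 <= x <= 1, SDDP.State, initial_value = 0.0)
    @stageobjective(sp, x.out)
end

SDDP.train(model; iteration_limit = 400)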

Thank you. I will tell you what I did later via email.

Closing this as won't fix. write_to_file is experimental and not intended to be a lossless representation of the JuMP subproblems.