odow / MathOptSymbolicAD.jl


Speed-up ideas

odow opened this issue · comments

Carleton has some nice results showing a ~15% improvement on OPF cases with >1000 buses.

  • Compare RuntimeGeneratedFunctions with `eval` and `Base.invokelatest`. `eval` is slower.
  • Currently, all constants are passed as parameters. This includes things like the 2 in x[i]^2. We could be faster if we inlined constants that never change. Partially done in #7; the remaining cases are less critical.
  • Different parallelism. I should update the rosetta-opf to test the ThreadedBackend.
  • Currently, we update the x coefficients on every function call: https://github.com/odow/SymbolicAD.jl/blob/8ad6488cfc58a8fc9f2deb12b761feea5497ad48/src/nonlinear_oracle.jl#L207-L209
    We should do this only if needed.
    Tried; this isn't a win: it requires extra state and some vector comparisons for minimal gain.
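To illustrate the first item, here is a minimal sketch (not the package's internals) of the two ways to turn a runtime expression into a callable function. A function created with `eval` lives in a newer world age, so calling it from already-compiled code needs `Base.invokelatest`, which adds dynamic-dispatch overhead on every call; RuntimeGeneratedFunctions sidesteps the world-age restriction.

```julia
using RuntimeGeneratedFunctions
RuntimeGeneratedFunctions.init(@__MODULE__)

# An expression built at runtime, e.g. from a parsed constraint.
ex = :((x,) -> x[1]^2 + 2 * x[2])

# Option 1: `eval` + `Base.invokelatest`. Works, but every call pays a
# dynamic dispatch penalty because of the world-age barrier.
f_eval = eval(ex)
y1 = Base.invokelatest(f_eval, [2.0, 3.0])

# Option 2: RuntimeGeneratedFunctions. No world-age problem, so the call
# site can be direct and type-stable.
f_rgf = @RuntimeGeneratedFunction(ex)
y2 = f_rgf([2.0, 3.0])

@assert y1 == y2 == 10.0
```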
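The constants-as-parameters trade-off can be sketched as follows (hypothetical function names, not the package's API): passing every constant through a parameter vector lets one compiled function serve many constraints, while baking fixed constants in lets the compiler specialize on them.

```julia
# Constants as parameters: generic, but every call reads the parameter
# vector, and the exponent is a runtime value.
f_param(x, p) = x[1]^p[1] + p[2] * x[2]

# Constants baked in: the literal exponent 2 lets the compiler
# specialize (e.g. lower x^2 to x*x), at the cost of one compiled
# function per distinct expression structure.
f_baked(x, c) = x[1]^2 + c * x[2]

@assert f_param([3.0, 1.0], [2.0, 5.0]) == f_baked([3.0, 1.0], 5.0) == 14.0
```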
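For the parallelism item, a ThreadedBackend could look something like this sketch (illustrative only, not the actual backend): since each constraint evaluation is independent, they can be distributed across threads.

```julia
using Base.Threads

# Toy stand-ins for compiled constraint functions.
constraints = [x -> sum(x), x -> x[1] * x[2], x -> x[2]^2]
x = [2.0, 3.0]

# Evaluate each constraint on its own thread; writes go to disjoint
# slots of g, so no locking is needed.
g = zeros(length(constraints))
Threads.@threads for i in eachindex(constraints)
    g[i] = constraints[i](x)
end

@assert g == [5.0, 6.0, 9.0]
```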
```julia
import Ipopt
import JuMP
import PowerModels
import SymbolicAD

function power_model(case::String)
    pm = PowerModels.instantiate_model(
        joinpath(@__DIR__, "data", case),
        PowerModels.ACPPowerModel,
        PowerModels.build_opf,
    )
    return pm.model
end
```

```julia
import ProfileView

model = power_model("pglib_opf_case118_ieee.m")
JuMP.set_optimizer(model, Ipopt.Optimizer)
JuMP.set_optimize_hook(model, SymbolicAD.optimize_hook)
ProfileView.@profview JuMP.optimize!(model)
```

[ProfileView flame graph omitted]

The left block is parsing the expression tree, deduplicating, etc. The middle spiky block is Symbolics computing derivatives. The right-hand block is Ipopt solving: the three spikes are the Hessian, constraint, and Jacobian evaluations, and the whitespace is time spent inside Ipopt itself.

Closing because I think we're pretty good for now. In OPF, we're 3-5x faster than JuMP.