Speed-up ideas
odow opened this issue · comments
Oscar Dowson commented
Carleton has some nice results showing a ~15% improvement on OPF cases with >1000 buses.
- Compare `RuntimeGeneratedFunctions` with `eval` and `Base.invokelatest`. (Update: `eval` is slower.)
- Currently, all constants are passed as parameters. This includes things like the `2` in `x[i]^2`. We could be faster if we kept constants which don't change. (Update: partially done in #7. The other cases are less critical.)
- Different parallelism. I should update rosetta-opf to test the `ThreadedBackend`.
- Currently, we update the `x` coefficients every function call: https://github.com/odow/SymbolicAD.jl/blob/8ad6488cfc58a8fc9f2deb12b761feea5497ad48/src/nonlinear_oracle.jl#L207-L209. We should do this only if needed. (Update: this isn't a win. It requires extra state and some vector comparisons for minimal gain.)
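For context on the first item, here is a minimal sketch (a hypothetical example, not from this issue) of the world-age problem that forces `Base.invokelatest` when calling `eval`-generated functions; avoiding this per-call dispatch overhead is the motivation for `RuntimeGeneratedFunctions`:

```julia
# Assumed illustration: a function built with eval at runtime lives in a
# newer "world age" than code that was compiled before it existed.
ex = :((x,) -> x[1]^2 + 2.0 * x[2])

function evaluate_runtime(ex, x)
    f = eval(ex)
    # A direct call f(x) here would throw a world-age MethodError, because
    # evaluate_runtime was compiled before f existed. invokelatest
    # dispatches in the latest world, at a dynamic-dispatch cost per call.
    return Base.invokelatest(f, x)
end

evaluate_runtime(ex, [3.0, 4.0])  # 17.0
```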
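To illustrate the constants item (a hypothetical sketch, not the package's actual code path): passing a literal like the `2` in `x[i]^2` through a runtime parameter vector prevents the compiler from specializing on it, whereas a constant kept in the expression lets Julia lower `x^2` via `Base.literal_pow`:

```julia
# Hypothetical sketch of "constants as parameters" vs. baked-in constants.
f_param(x, p) = x^p[1]   # the exponent 2 arrives in a parameter vector
f_const(x) = x^2         # the constant stays in the expression, so Julia
                         # can specialize the power at compile time

f_param(3.0, [2]) == f_const(3.0)  # both give 9.0
```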
Oscar Dowson commented
import Ipopt
import JuMP
import PowerModels
import SymbolicAD
function power_model(case::String)
    pm = PowerModels.instantiate_model(
        joinpath(@__DIR__, "data", case),
        PowerModels.ACPPowerModel,
        PowerModels.build_opf,
    )
    return pm.model
end
import ProfileView
model = power_model("pglib_opf_case118_ieee.m")
JuMP.set_optimizer(model, Ipopt.Optimizer)
JuMP.set_optimize_hook(model, SymbolicAD.optimize_hook)
ProfileView.@profview JuMP.optimize!(model)
The left block is parsing the expression tree, deduplicating, etc. The middle spiky block is Symbolics computing derivatives. The right-hand side is Ipopt solving; its three spikes are the Hessian, the constraints, and the Jacobian. The whitespace is time spent inside Ipopt itself.
Oscar Dowson commented
Closing because I think we're pretty good for now. In OPF, we're 3-5x faster than JuMP.