Release 2.19.0 breaks my code
ufechner7 opened this issue · comments
I am running a ModelingToolkit simulation multithreaded, same model, different parameters.
This worked fine before 2.19.0, but not any longer.
Luckily I found out I can just add MKL to my project and it works again.
But why is multithreading only working with MKL and not with OpenBLAS?
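As a sanity check, the active BLAS backend can be inspected from the standard library, which makes it easy to confirm whether MKL or OpenBLAS is actually the one being used:

```julia
using LinearAlgebra

# Lists the BLAS/LAPACK libraries currently loaded via libblastrampoline.
# With the default setup this shows libopenblas; after `using MKL` it
# should show libmkl_rt instead.
cfg = BLAS.get_config()
for lib in cfg.loaded_libs
    println(lib.libname)
end
```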
OpenBLAS is multithreaded, but its multithreading is poor enough that it's almost always worse. We don't change anything about OpenBLAS's parameters; it's just the defaults.
What size model is it? What linear solver is chosen? Is it sparse?
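One thing worth trying (an assumption on my part, not something verified on this model): when the outer loop already uses Threads.@threads, OpenBLAS's own thread pool can oversubscribe the machine, so pinning BLAS to a single thread sometimes restores the expected scaling:

```julia
using LinearAlgebra

# Restrict BLAS to one thread so it does not compete with the
# Julia-level threads of the outer @threads loop.
BLAS.set_num_threads(1)
@show BLAS.get_num_threads()
```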
Can you share the stack trace and error message?
Well, there is no stack trace. It works, but only single threaded... The simplified model looks like this:
Model sys with 5 equations
States (5):
ω(t)
Uest(t)
Pgc(t)
y(t) [defaults to 7.0]
yd(t) [defaults to 0]
Parameters (15):
J [defaults to 4.047e7]
Ku [defaults to 2.25e-7]
λnom [defaults to 8.0]
R [defaults to 63.0]
Γest [defaults to 1.0]
Δα_est [defaults to 0.0]
⋮
Incidence matrix: 5×10 SparseArrays.SparseMatrixCSC{Num, Int64} with 16 stored entries:
× ⋅ × ⋅ ⋅ × ⋅ ⋅ ⋅ ⋅
× × × ⋅ ⋅ ⋅ × ⋅ ⋅ ⋅
× × × ⋅ ⋅ × ⋅ × ⋅ ⋅
⋅ ⋅ ⋅ ⋅ × ⋅ ⋅ ⋅ × ⋅
⋅ ⋅ ⋅ ⋅ × ⋅ ⋅ ⋅ ⋅ ×
I am calling the solver like this:
sol = solve(prob, Rodas5(), dt=dt, tstops=tstops, abstol=tol, dtmax=1, reltol = tol, saveat=ts)
This code gives me a 5.5x speedup with MKL, and it slows the simulation down with OpenBLAS:
dict = Dict()           # results per Δα0, shared across threads
lk = ReentrantLock()    # guards writes to dict
Threads.@threads for Δα0 in Δαs
    println("Δα = $(Δα0)")
    Eps = ep2alpha(se, Δα_ests; Δα0=Δα0)
    lock(lk)
    try
        # writing to a Dict is not thread safe, so we need a lock here
        dict[Δα0] = Eps
    finally
        unlock(lk)
    end
end
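A lock-free variant of the same pattern (a sketch, assuming the iteration order doesn't matter; `compute` is a hypothetical stand-in for the real ep2alpha call): since each iteration writes to a distinct slot, the results can be collected into a preallocated vector and turned into a Dict afterwards, avoiding the shared-Dict bottleneck entirely:

```julia
# Hypothetical stand-in for the real per-parameter computation.
compute(x) = x^2

Δαs = [0.1, 0.2, 0.3]
results = Vector{Float64}(undef, length(Δαs))  # one slot per iteration, no lock needed
Threads.@threads for i in eachindex(Δαs)
    results[i] = compute(Δαs[i])   # distinct indices, so this is race-free
end
dict = Dict(zip(Δαs, results))
```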
function ep2alpha(se, Δα_ests; Δα0=0.0)
    Uests = zeros(length(Δα_ests))
    m = load_mdc()
    for (i, Δα_est0) in pairs(Δα_ests)
        Γ0 = 1.0
        Γest0 = 1.0
        eq = Equilibrium(se, m, Γ0, Γest0, Δα0, Δα_est0)
        params = Dict([m.U => se.U0, m.rel_ex => 1, m.w_ex => se.w_ex,
                       m.Γ => Γ0, m.Γest => Γest0, m.Δα => Δα0 + OFFSET, m.Δα_est => Δα_est0 + OFFSET])
        sol = simulate(se, m, merge_eq(m, eq, params))
        X = sol.t
        EP = sol(X, idxs=m.e_p)
        signal = do_fft(se, EP.u[1:end-1], beta=90-se.phase_shift-0.6, plt=false, prn=false)
        Uests[i] = signal
    end
    Uests
end
function simulate(se, mod, p, tstops=t_stops, duration=nothing)
    u0 = [mod.ω => p[mod.ω],       # this is a fixed initial value
          mod.D(mod.ω) => 0.0]     # the value for the derivative is an initial estimate
    # simulation parameters
    if isnothing(duration)
        duration = se.t_stop
    end
    tspan = (0.0, duration)
    dt = se.dt
    ts = 0:dt:duration
    # run the simulation
    prob = ODEProblem(mod.sys_simple, u0, tspan, p)
    tol = 1e-7
    sol = solve(prob, Rodas5(), dt=dt, tstops=tstops, abstol=tol, dtmax=1, reltol=tol, saveat=ts)
    sol
end
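Since the release in question only changed LinearSolve's default-solver heuristics, one way to take BLAS out of the picture for a 5-state system is to pin the linear solver explicitly. This is a sketch, not tested on this model: RFLUFactorization is the pure-Julia LU from LinearSolve.jl (backed by RecursiveFactorization.jl), which avoids the BLAS call entirely for small dense systems:

```julia
using OrdinaryDiffEq, LinearSolve

# Pass an explicit linear solver instead of relying on the default choice;
# RFLUFactorization does not call into BLAS, so OpenBLAS vs. MKL
# should no longer matter for the factorization step.
solver = Rodas5(linsolve = RFLUFactorization())
# sol = solve(prob, solver, dt=dt, tstops=tstops, abstol=tol, dtmax=1, reltol=tol, saveat=ts)
```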
Okay, that's not "breaks my code". Please be precise about that in the future. We know MKL is faster. I want to make it the default, but right now using MKL as the default is actually breaking.
Well, if an indirect dependency of my project gets updated and multithreading stops working, I would call that breaking. MKL is not 7 times faster than OpenBLAS. There must be something else going on.
Sorry, but that was literally the only thing changed in that release. See https://github.com/SciML/LinearSolve.jl/releases/tag/v2.19.0. The entirety of the code changes is v2.18.0...v2.19.0
Well, but this has a huge impact...
Then advocate for LinearAlgebra to improve by shipping MKL instead of OpenBLAS. I'm with you on that, but it needs to get fixed.