JuliaPhysics / Measurements.jl

Error propagation calculator and library for physical measurements. It supports real and complex numbers with uncertainty, arbitrary precision calculations, operations with arrays, and numerical integration.

Home page: https://juliaphysics.github.io/Measurements.jl/stable/

Is Measurements.jl compatible with Optim.jl and ForwardDiff.jl, or am I facing a gotcha?

Boxylmer opened this issue · comments

Forgive me for any etiquette and formatting issues; this is my first real GitHub issue post. Also, if this isn't the appropriate place to post problems like this, please let me know.

What I'm trying to do

I'm trying to optimize a function that has error associated with it.
I'm aware there is no meaningful way to propagate error through the optimization process out of the box, since we're guessing with a pure number, but my strategy would be to optimize twice: once for the actual value, and a second time to arrive at the same error as the target variable. I'm running into some very strange issues when trying to accomplish the former, however, and cannot tell whether they're due to my being new to Julia or to something fundamental in one of the packages involved.

Tl;dr: for all intents and purposes, right now I'm just trying to optimize a pure number, where Measurements.jl types might be fed in as well. I don't care if errors aren't propagated correctly in this case; I just want this to happen without errors, i.e., I want ForwardDiff and Measurement types to play nicely.

The nature of the problem (context)

The code in question is a struct capable of solving for any one of the following three variables, given the other two: P, V, and T.
E.g., if P and T are specified, V is solvable.

If V and T are specified, then solving P is an analytical problem (and thus does not need Optim.jl); Measurements.jl currently works wonderfully in this case.

If P and T are specified, we need to guess V until we minimize the loss function (P_calculated - P_given)^2. This is done with Optim.jl using automatic differentiation (I've also tried finite differencing). This is where Measurements.jl (or something else) seems to fail.
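
The setup can be sketched roughly as follows. Everything here is a hypothetical stand-in: an ideal-gas relation plays the role of the actual Peng-Robinson equation of state, and the function names and search range are made up for illustration.

```julia
using Optim

# Hypothetical stand-in for the real equation of state: ideal gas, P = R*T/V.
const R = 8.31446            # J/(mol*K)
pressure(v, t) = R * t / v

# Given P and T, guess V until the loss (P_calculated - P_given)^2 is minimized.
function solve_volume(p_given, t)
    loss(v) = (pressure(v, t) - p_given)^2
    res = optimize(loss, 1e-6, 1.0)   # bounded univariate search (Brent's method)
    return Optim.minimizer(res)
end

v = solve_volume(101325.0, 273.15)    # ≈ R*T/P ≈ 0.0224 m^3/mol
```

The trouble described below starts as soon as p_given or t is a Measurement rather than a plain Float64.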

Errors and information

When feeding a function into Optim.jl where any variable involved is a Measurement type, ambiguity errors like the following occur with simple arithmetic methods:

ERROR: LoadError: MethodError: /(::Measurement{Float64}, ::ForwardDiff.Dual{ForwardDiff.Tag{Main.PengRobinson.var"#2#3"{SVector{2, Main.PengRobinson.PengRobinsonChemical{Float64, Float64, Float64, Float64, Float64}}, SVector{2, Float64}, SMatrix{2, 2, Float64, 4}, Float64, Measurement{Float64}}, Float64}, Float64, 1}) is ambiguous. Candidates:
  /(x::AbstractFloat, y::ForwardDiff.Dual{Ty, V, N} where {V, N}) where Ty in ForwardDiff at C:\Users\Will\.julia\packages\ForwardDiff\sqhTO\src\dual.jl:140
  /(x::Real, y::ForwardDiff.Dual{Ty, V, N} where {V, N}) where Ty in ForwardDiff at C:\Users\Will\.julia\packages\ForwardDiff\sqhTO\src\dual.jl:140
  /(a::Measurement{T}, b::Real) where T<:AbstractFloat in Measurements at C:\Users\Will\.julia\packages\Measurements\PVbNf\src\math.jl:250
Possible fix, define
  /(::Measurement{T}, ::ForwardDiff.Dual{Ty, V, N} where {V, N}) where {T<:AbstractFloat, Ty}

If we instead forego automatic differentiation and have Optim.jl use finite differencing on a function containing any Measurement type, we arrive at

ERROR: LoadError: MethodError: no method matching Float64(::Measurement{Float64})
Closest candidates are:
  (::Type{T})(::Real, ::RoundingMode) where T<:AbstractFloat at rounding.jl:200
  (::Type{T})(::T) where T<:Number at boot.jl:760
  (::Type{T})(::AbstractChar) where T<:Union{AbstractChar, Number} at char.jl:50
  ...

This last error seems to be more my fault than anything else. However, it isn't traced back to my own code, but rather to something in Optim.jl. The full stack trace is below:

Stacktrace:
  [1] convert(#unused#::Type{Float64}, x::Measurement{Float64})
    @ Base .\number.jl:7
  [2] setindex!(A::Vector{Float64}, x::Measurement{Float64}, i1::Int64)
    @ Base .\array.jl:839
  [3] finite_difference_gradient!(df::Vector{Float64}, f::Main.PengRobinson.var"#2#3"{SVector{2, Main.PengRobinson.PengRobinsonChemical{Float64, Float64, Float64, Float64, Float64}}, SVector{2, Float64}, SMatrix{2, 2, Float64, 4}, Float64, Measurement{Float64}}, x::Vector{Float64}, cache::FiniteDiff.GradientCache{Nothing, Nothing, Nothing, Vector{Float64}, Val{:central}(), Float64, Val{true}()}; relstep::Float64, absstep::Float64, dir::Bool)
    @ FiniteDiff ~\.julia\packages\FiniteDiff\blirf\src\gradients.jl:277
  [4] finite_difference_gradient!(df::Vector{Float64}, f::Function, x::Vector{Float64}, cache::FiniteDiff.GradientCache{Nothing, Nothing, Nothing, Vector{Float64}, Val{:central}(), Float64, Val{true}()})
    @ FiniteDiff ~\.julia\packages\FiniteDiff\blirf\src\gradients.jl:224
  [5] (::NLSolversBase.var"#g!#44"{Main.PengRobinson.var"#2#3"{SVector{2, Main.PengRobinson.PengRobinsonChemical{Float64, Float64, Float64, Float64, Float64}}, SVector{2, Float64}, SMatrix{2, 2, Float64, 4}, Float64, Measurement{Float64}}, FiniteDiff.GradientCache{Nothing, Nothing, Nothing, Vector{Float64}, Val{:central}(), Float64, Val{true}()}})(storage::Vector{Float64}, x::Vector{Float64})
    @ NLSolversBase ~\.julia\packages\NLSolversBase\geyh3\src\objective_types\twicedifferentiable.jl:113
  [6] (::NLSolversBase.var"#fg!#45"{Main.PengRobinson.var"#2#3"{SVector{2, Main.PengRobinson.PengRobinsonChemical{Float64, Float64, Float64, Float64, Float64}}, SVector{2, Float64}, SMatrix{2, 2, Float64, 4}, Float64, Measurement{Float64}}})(storage::Vector{Float64}, x::Vector{Float64})
    @ NLSolversBase ~\.julia\packages\NLSolversBase\geyh3\src\objective_types\twicedifferentiable.jl:117
  [7] value_gradient!!(obj::TwiceDifferentiable{Float64, Vector{Float64}, Matrix{Float64}, Vector{Float64}}, x::Vector{Float64})
    @ NLSolversBase ~\.julia\packages\NLSolversBase\geyh3\src\interface.jl:82
  [8] initial_state(method::Newton{LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}}, options::Optim.Options{Float64, Nothing}, d::TwiceDifferentiable{Float64, Vector{Float64}, Matrix{Float64}, Vector{Float64}}, initial_x::Vector{Float64})
    @ Optim ~\.julia\packages\Optim\uwNqi\src\multivariate\solvers\second_order\newton.jl:45
  [9] optimize
    @ ~\.julia\packages\Optim\uwNqi\src\multivariate\optimize\optimize.jl:35 [inlined]
 [10] #optimize#87
    @ ~\.julia\packages\Optim\uwNqi\src\multivariate\optimize\interface.jl:142 [inlined]
 [11] optimize(f::Function, initial_x::Vector{Float64}, method::Newton{LineSearches.InitialStatic{Float64}, LineSearches.HagerZhang{Float64, Base.RefValue{Bool}}}, options::Optim.Options{Float64, Nothing}) (repeats 2 times)
    @ Optim ~\.julia\packages\Optim\uwNqi\src\multivariate\optimize\interface.jl:141
 [12] Main.PengRobinson.PengRobinsonState(components::Vector{Main.PengRobinson.PengRobinsonChemical{Float64, Float64, Float64, Float64, Float64}}, mole_fractions::Vector{Float64}, kijmatrix::Matrix{Float64}; p::Float64, v::Nothing, t::Measurement{Float64}, minimal_calculations::Bool)
    @ Main.PengRobinson c:\git\julia-polymer-membranes\PolymerMembranes.jl\StaticPengRobinson.jl:75
 [13] top-level scope
    @ c:\git\julia-polymer-membranes\PolymerMembranes.jl\test.jl:26
in expression starting at c:\git\julia-polymer-membranes\PolymerMembranes.jl\test.jl:26

I was going to provide a minimum working example, but it turned out to be a bit too hefty to post here. If this issue is truly a potential compatibility problem with Measurements and someone needs to reproduce it easily, please run test.jl in this repo and be sure to add the measurement syntactic sugar somewhere in one of the constructor methods to cause the error (otherwise it will work!).

I'm aware of some issues with Optim.jl, but those are due to a bug in that package: JuliaNLSolvers/Optim.jl#823. I believe Measurements.jl can't work with ForwardDiff because of limitations of the latter package.

In the end, I don't think there is anything I can do here.

I can't believe I missed that post on the Optim page! Thanks for pointing this out! It's helpful, as it tells me that this is a fundamental bug and not something I'm doing incorrectly when feeding things in.

For anyone else struggling with this: I'm attempting the following workaround (as long as your main case allows for analytical solutions that work with Measurements.jl):

  1. Define a second constructor for use with Measurement types.
  2. Strip all Measurement types to only their values via measurement.val
  3. Optimize the value normally (no measurement types)
  4. Define a new optimizer for the uncertainty component only (now that you've solved the actual value)
    a. The optimized function should only take in the uncertainty; the value should be static now.
  5. Use a black box optimizer to solve the uncertainty. I'm going to attempt this with BlackBoxOptim.jl
  6. Reconstruct the solved Measurement with the solved value and uncertainty
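
The steps above can be sketched roughly as follows. The model, function names, and search ranges are hypothetical (an ideal-gas stand-in instead of the real EOS), and a bounded Optim search stands in for BlackBoxOptim.jl in step 5; only Measurements.value, Measurements.uncertainty, and ± are the package's actual API.

```julia
using Measurements, Optim

# Hypothetical stand-in model; the real code uses the Peng-Robinson EOS.
const R = 8.31446
pressure(v, t) = R * t / v

# Steps 2-3: strip the uncertainties and optimize the plain value.
function solve_volume_value(p, t)
    pval, tval = Measurements.value(p), Measurements.value(t)
    loss(v) = (pressure(v, tval) - pval)^2
    return Optim.minimizer(optimize(loss, 1e-6, 1.0))
end

# Steps 4-5: with the value fixed, solve for the uncertainty alone.
# (A bounded Optim search stands in for BlackBoxOptim.jl here.)
function solve_volume_uncertainty(p, t, vval)
    target = Measurements.uncertainty(p)
    loss(verr) = (Measurements.uncertainty(pressure(vval ± verr, Measurements.value(t))) - target)^2
    return Optim.minimizer(optimize(loss, 0.0, 1.0))
end

p = 101325.0 ± 100.0
t = 273.15 ± 0.0
vval = solve_volume_value(p, t)
verr = solve_volume_uncertainty(p, t, vval)
v = vval ± verr   # step 6: reconstruct the solved Measurement
```

The inner optimizations only ever see plain Float64 values, so neither ForwardDiff nor finite differencing ever touches a Measurement.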

@giordano A few of us from the Humans of Julia Discord noticed that line 92 in src/Measurements.jl is the culprit with ForwardDiff dual numbers. Did you know about this?

measurement(val::Real, err::Real) = measurement(promote(float(val), float(err))...)

When using autodiff=true in Optim.jl, the following infinite recursion occurs:

ERROR: LoadError: StackOverflowError:
Stacktrace:
 [1] measurement(val::ForwardDiff.Dual{ForwardDiff.Tag{Main.PengRobinson.PRVolumeSolver.var"#target_function#1"{StaticArrays.SVector{2, Main.PengRobinson.PengRobinsonChemical{Float64, Float64, Float64, Float64, Float64}}, StaticArrays.SVector{2, Measurement{Float64}}, StaticArrays.SMatrix{2, 2, Float64, 4}, Float64, Float64, Measurement{Float64}}, Float64}, Float64, 1}, err::ForwardDiff.Dual{ForwardDiff.Tag{Main.PengRobinson.PRVolumeSolver.var"#target_function#1"{StaticArrays.SVector{2, Main.PengRobinson.PengRobinsonChemical{Float64, Float64, Float64, Float64, Float64}}, StaticArrays.SVector{2, Measurement{Float64}}, StaticArrays.SMatrix{2, 2, Float64, 4}, Float64, Float64, Measurement{Float64}}, Float64}, Float64, 1}) (repeats 79984 times)
   @ Measurements ~\.julia\dev\Measurements\src\Measurements.jl:92
in expression starting at c:\git\julia-polymer-membranes\PolymerMembranes.jl\test.jl:30

When using Optim.jl's finite differencing the issue doesn't occur, so it seems that line is calling itself until the StackOverflowError happens. If you want to know more about what I was doing, or to talk to us over at HoJ, I'm more than happy to explain!
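
One way to see how that line can loop on dual numbers (a minimal sketch, assuming ForwardDiff is loaded):

```julia
using ForwardDiff

d = ForwardDiff.Dual(1.0, 1.0)

# float on a Dual returns another Dual: a Real, but not an AbstractFloat.
float(d) isa ForwardDiff.Dual   # true
float(d) isa AbstractFloat      # false

# So a method like measurement(val::Real, err::Real) that re-calls
# measurement on float(val) and float(err) keeps hitting itself: the
# arguments never become AbstractFloats, so dispatch never reaches a
# more specific method, and the recursion only ends in a StackOverflowError.
```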

That's not the problem; the culprit is ForwardDiff being unable to deal with anything more specific than Real, and the StackOverflowError you see is only a symptom of that limitation. Quoting from the documentation of ForwardDiff:

The target function must be written generically enough to accept numbers of type T<:Real as input (or arrays of these numbers). The function doesn't require a specific type signature, as long as the type signature is generic enough to avoid breaking this rule. This also means that any storage assigned within the function must be generic as well (see this comment for an example).
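
A minimal illustration of that rule (the function names here are hypothetical):

```julia
using ForwardDiff

# Follows the rule: accepts any T<:Real, no concrete storage needed.
good_target(x::AbstractVector{T}) where {T<:Real} = sum(abs2, x)

# Breaks the rule: Float64 storage cannot hold ForwardDiff's Dual numbers,
# producing a MethodError much like the one in the trace above.
function bad_target(x)
    buf = zeros(Float64, length(x))
    buf .= x            # fails when x contains Duals
    return sum(abs2, buf)
end

ForwardDiff.gradient(good_target, [1.0, 2.0])   # [2.0, 4.0]
# ForwardDiff.gradient(bad_target, [1.0, 2.0])  # MethodError: no method matching Float64(::Dual...)
```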

But Measurement <: AbstractFloat and I'm not going to change the subtyping.

That's completely understandable, and thanks for the quick responses!

So far, it seems that using Optim.jl while breaking the components into uncertainties and values and solving them independently works well. Do you have any advice about how I might go about splitting a measurement object like this without running into issues?

Applying the same technique to other optimization libraries like BlackBoxOptim.jl, which shouldn't do any autodiff-like things with the functions, creates very strange results. There, the same stack-overflow issue happens without doing anything other than using Measurements in the same code. Here's a tiny(ish) example, run on Julia 1.6.0:

using BlackBoxOptim
# using Measurements  # uncomment this after confirming it runs correctly
function solve_volume_uncertainty_bbox(p_uncertainty, v_value, t)
    function target_function(volume_uncertainty_guess_l_mol)
        resulting_pressure = 4
        return (resulting_pressure - p_uncertainty)^2
    end
    res = bboptimize(target_function; SearchRange=(0.0, 10.0), NumDimensions=1, Method=:de_rand_1_bin)  
    return best_candidate(res)[1]
end

bbox = solve_volume_uncertainty_bbox(0.1, 20.08405, 273.15)
println(bbox)

It may also just be something that happens on my machine. I'm hoping this might prove useful to you in some way, because I'm super excited that this library exists. In my field, errors are basically unseen in most literature, as the analytical calculations are too lengthy to do by hand (I'm planning on citing your library in mine as well, once I get my underlying calculations working!). The ultimate version of this would be to somehow extend it to optimization generally, as direct (for lack of a better word: analytical, as opposed to iterative) solutions are only a subset of the calculations done.

That's extremely bizarre, but since Measurements isn't directly involved apart from being loaded (and thus adding a new data type), my guess is that BlackBoxOptim is doing something fishy with type inference (looking at the output, it also goes crazy when inferring the type of some containers). I'd report this issue to them (and then we'd be up to three packages not working correctly out of the box because of their limitations 🙂)

BTW, in JuliaNLSolvers/Optim.jl#823 I had some suggestions about how to fix Optim.jl (those are genuine issues in the package, unrelated to Measurements.jl). If you're interested in using Optim.jl together with Measurements.jl, I'd recommend commenting on that issue to make the maintainers aware that there is interest in getting it fixed 🙂

For the record, BlackBoxOptim seems to go haywire because of this log(::Real, ::Measurement) method:

Measurements.jl/src/math.jl

Lines 619 to 622 in e71f520

function Base.log(a::Real, b::Measurement)
    bval = b.val
    return result(log(a, bval), inv(log(a) * bval), b)
end

but frankly I don't see how that would be relevant, since it isn't called at all in your example.

Thanks @giordano! This is extremely helpful. How did you track down the BlackBoxOptim.jl issue to those lines? I was never quite able to narrow it down further than things happening in my own code, mainly because the stack-overflow error caused by importing Measurements was just a whole bunch of spam in the terminal, followed by the correct answer (Measurements propagated through correctly!). Is that how it showed up for you, or did you get a formal exception followed by the process exiting early?

I'll send it over to them and see if they'd be willing to work on it! Ideally we'd have at least one optimization library that works with Measurements without tomfoolery. BlackBoxOptim would be a nice one: they have the BorgMOEA algorithm implemented, which can handle multiple objective functions and would be perfect for optimizing both the value and uncertainty in a single optimization process.

how did you track down the BlackBoxOptim.jl issue to those lines?

Binary search: I commented out half of the code of Measurements.jl until I couldn't trigger the error, then repeated 🙂

Whelp, I'm adding that to my bag of tricks. Thanks again!

OK, I'm trying to make Optim work with Measurements, and the main show problem comes from the use of the @sprintf macro for printing.