JuliaNLSolvers / NLsolve.jl

Julia solvers for systems of nonlinear equations and mixed complementarity problems

Function argument cannot demand Float64

adamglos92 opened this issue

Currently nlsolve cannot use a function that demands the Float64 type. Please consider the code:

using NLsolve

f_any(x) = (x-2)^4
f_any!(F, x) = F[1] = f_any(x[1])
println("######### any #########")
println(nlsolve(f_any!, [0.]))

println("######### float #########")
f_float(x::Float64) = (x-2)^4
f_float!(F, x) = F[1] = f_float(x[1])
println(nlsolve(f_float!, [0.]))

The output is the following:

######### any #########
Results of Nonlinear Solver Algorithm
 * Algorithm: Trust-region with dogleg and autoscaling
 * Starting Point: [0.0]
 * Zero: [1.99137]
 * Inf-norm of residuals: 0.000000
 * Iterations: 22
 * Convergence: true
   * |x - x'| < 0.0e+00: false
   * |f(x)| < 1.0e-08: true
 * Function Calls (f): 23
 * Jacobian Calls (df/dx): 23
######### float #########
ERROR: LoadError: MethodError: no method matching f_float(::ForwardDiff.Dual{ForwardDiff.Tag{#f_float!,Float64},Float64,1})
Closest candidates are:
  f_float(!Matched::Float64) at /home/adam/nlsolve_test.jl:9
Stacktrace:
 [1] f_float! at /home/adam/nlsolve_test.jl:10 [inlined]
 [2] vector_mode_dual_eval(::#f_float!, ::Array{Float64,1}, ::Array{Float64,1}, ::ForwardDiff.JacobianConfig{ForwardDiff.Tag{#f_float!,Float64},Float64,1,Tuple{Array{ForwardDiff.Dual{ForwardDiff.Tag{#f_float!,Float64},Float64,1},1},Array{ForwardDiff.Dual{ForwardDiff.Tag{#f_float!,Float64},Float64,1},1}}}) at /home/adam/.julia/v0.6/ForwardDiff/src/apiutils.jl:42
 [3] vector_mode_jacobian!(::DiffResults.MutableDiffResult{1,Array{Float64,1},Tuple{Array{Float64,2}}}, ::#f_float!, ::Array{Float64,1}, ::Array{Float64,1}, ::ForwardDiff.JacobianConfig{ForwardDiff.Tag{#f_float!,Float64},Float64,1,Tuple{Array{ForwardDiff.Dual{ForwardDiff.Tag{#f_float!,Float64},Float64,1},1},Array{ForwardDiff.Dual{ForwardDiff.Tag{#f_float!,Float64},Float64,1},1}}}) at /home/adam/.julia/v0.6/ForwardDiff/src/jacobian.jl:161
 [4] jacobian!(::DiffResults.MutableDiffResult{1,Array{Float64,1},Tuple{Array{Float64,2}}}, ::Function, ::Array{Float64,1}, ::Array{Float64,1}, ::ForwardDiff.JacobianConfig{ForwardDiff.Tag{#f_float!,Float64},Float64,1,Tuple{Array{ForwardDiff.Dual{ForwardDiff.Tag{#f_float!,Float64},Float64,1},1},Array{ForwardDiff.Dual{ForwardDiff.Tag{#f_float!,Float64},Float64,1},1}}}, ::Val{false}) at /home/adam/.julia/v0.6/ForwardDiff/src/jacobian.jl:74
 [5] (::NLsolve.#fg!#4{#f_float!})(::Array{Float64,1}, ::Array{Float64,2}, ::Array{Float64,1}) at /home/adam/.julia/v0.6/NLsolve/src/objectives/autodiff.jl:24
 [6] value_jacobian!!(::NLSolversBase.OnceDifferentiable{Array{Float64,1},Array{Float64,2},Array{Float64,1},Val{false}}, ::Array{Float64,1}, ::Array{Float64,2}, ::Array{Float64,1}) at /home/adam/.julia/v0.6/NLSolversBase/src/interface.jl:89
 [7] trust_region_(::NLSolversBase.OnceDifferentiable{Array{Float64,1},Array{Float64,2},Array{Float64,1},Val{false}}, ::Array{Float64,1}, ::Float64, ::Float64, ::Int64, ::Bool, ::Bool, ::Bool, ::Float64, ::Bool) at /home/adam/.julia/v0.6/NLsolve/src/solvers/trust_region.jl:102
 [8] #nlsolve#38(::Symbol, ::Float64, ::Float64, ::Int64, ::Bool, ::Bool, ::Bool, ::Function, ::Float64, ::Bool, ::Int64, ::Float64, ::NLsolve.#nlsolve, ::NLSolversBase.OnceDifferentiable{Array{Float64,1},Array{Float64,2},Array{Float64,1},Val{false}}, ::Array{Float64,1}) at /home/adam/.julia/v0.6/NLsolve/src/nlsolve/nlsolve.jl:26
 [9] (::NLsolve.#kw##nlsolve)(::Array{Any,1}, ::NLsolve.#nlsolve, ::NLSolversBase.OnceDifferentiable{Array{Float64,1},Array{Float64,2},Array{Float64,1},Val{false}}, ::Array{Float64,1}) at ./<missing>:0
 [10] #nlsolve#39(::Symbol, ::Float64, ::Float64, ::Int64, ::Bool, ::Bool, ::Bool, ::Function, ::Float64, ::Bool, ::Int64, ::Float64, ::Symbol, ::Bool, ::NLsolve.#nlsolve, ::#f_float!, ::Array{Float64,1}) at /home/adam/.julia/v0.6/NLsolve/src/nlsolve/nlsolve.jl:59
 [11] nlsolve(::Function, ::Array{Float64,1}) at /home/adam/.julia/v0.6/NLsolve/src/nlsolve/nlsolve.jl:53
 [12] include_from_node1(::String) at ./loading.jl:569
 [13] include(::String) at ./sysimg.jl:14
 [14] process_options(::Base.JLOptions) at ./client.jl:305
 [15] _start() at ./client.jl:371
while loading /home/adam/nlsolve_test.jl, in expression starting on line 11

While in this case removing the type annotation is the solution, the problem remains when using an external function. Is this a bug or a feature? The code seems to have worked with an older version of NLsolve (it actually worked a few days ago).

Hm, this seems to be ForwardDiff-related somehow (though it may very well be our fault!). Thanks for the simple example; I will have a look at it as soon as possible.

I don't see how the function can be expected to be auto-differentiable if it limits its input arguments to Float64.
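
For concreteness, a minimal sketch (an editorial illustration, not code from the thread) of why the annotation breaks forward-mode AD: ForwardDiff evaluates the function at ForwardDiff.Dual numbers, which are Real but not Float64.

using ForwardDiff

g_generic(x) = (x - 2)^4
g_typed(x::Float64) = (x - 2)^4

ForwardDiff.derivative(g_generic, 0.0)  # works, returns -32.0
ForwardDiff.derivative(g_typed, 0.0)    # MethodError: no method matching g_typed(::ForwardDiff.Dual{...})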

So the solution is to update the function so that it accepts and returns Dual numbers?

Just change ::Float64 to ::Real (ForwardDiff's Dual numbers are a subtype of Real, not of AbstractFloat). Done.
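
A minimal sketch of this fix applied to the failing example above (f_real is just f_float renamed, with the relaxed signature):

using NLsolve

f_real(x::Real) = (x - 2)^4         # Float64 and ForwardDiff.Dual are both Real
f_real!(F, x) = F[1] = f_real(x[1])

println(nlsolve(f_real!, [0.]))     # no MethodError under autodiff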

What if the function is not implemented by me and I cannot change the type annotations? For example, hcubature from the Cubature package.

Ah, of course... Sorry. But it is interesting that @adamglos92 says that it worked in the previous version of NLsolve.

Can you live with finite differencing in the meantime?

I think the default was finite differencing before.

> I think the default was finite differencing before.

Yeah, this transition was a bit messy due to time constraints on my part.

As @ChrisRackauckas said, according to the README I used finite differencing before, and it didn't work.
I am new to GitHub; what do you mean by PR? Pull request? I am not sure how to do this properly.

> and it didn't work.

did or didn't work?

Before the NLsolve update (probably to version 0.13.0) it worked; it does not work anymore.

Still works.

julia> println(nlsolve(f_float!, [0.], autodiff=:central))
Results of Nonlinear Solver Algorithm
 * Algorithm: Trust-region with dogleg and autoscaling
 * Starting Point: [0.0]
 * Zero: [1.99137]
 * Inf-norm of residuals: 0.000000
 * Iterations: 22
 * Convergence: true
   * |x - x'| < 0.0e+00: false
   * |f(x)| < 1.0e-08: true
 * Function Calls (f): 23
 * Jacobian Calls (df/dx): 23

OK, it works in my example with hcubature as well (actually it outputs NaN, but that seems to be a different problem); it is just strange that I did not pass the keyword before.
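
For reference, a hedged sketch of that pattern (the residual and constants here are invented for illustration, not taken from the actual code): with autodiff=:central the solver only evaluates the residual at Float64 points, so the Float64-only machinery inside hcubature never sees a Dual number.

using NLsolve, Cubature

# Solve for the upper limit b such that the integral of t^2 from 0 to b
# equals 9 (the true answer is b = 3). hcubature returns a (value, error) tuple.
function g!(F, x)
    val, err = hcubature(t -> t[1]^2, [0.0], [x[1]])
    F[1] = val - 9.0
end

println(nlsolve(g!, [1.0], autodiff = :central))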

It used to default to finite differencing; now it defaults to AD.

Maybe that should be reverted - I didn't intend it... I think.

I think it's a bad idea though, but with DiffEqDiffTools it's not terrible anymore: the timing is almost the same for AD and finite differencing here now. But I think consistency with Optim is a good choice too.

I suggest improving the README, if possible. Since the Function type (to my knowledge) cannot be parametrized, it would be helpful to document what the supplied functions need to satisfy.
For now I am satisfied with the solution proposed by @ChrisRackauckas. From my side the issue can be closed.

> I suggest improving the README, if possible.

Agreed.

> Since the Function type (to my knowledge) cannot be parametrized, it would be helpful to document what the supplied functions need to satisfy.

It can, and you should almost never restrict function arguments to concrete types, since Julia specializes on the actual argument types anyway. There is no performance benefit to ::Float64 here, but it does cause issues like this.
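
A small demonstration of both points, as a sketch (the names here are invented): annotations do not affect specialization, and a parametric method keeps AD working when a constraint is genuinely wanted.

# Julia compiles a specialized method per concrete argument type, so the
# unannotated version is just as fast as a ::Float64-only one:
h_any(x) = (x - 2)^4

# If a constraint is desired, dispatch on an abstract type or a parameter:
h_real(x::T) where {T<:Real} = (x - 2)^4

h_any(0.0) == h_real(0.0)  # true, and both accept ForwardDiff.Dual inputs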

I've reverted the default.
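
With the finite-differencing default restored, forward-mode AD stays available on request; assuming the autodiff keyword shown above, something like:

println(nlsolve(f_any!, [0.]))                      # finite differencing (the default again)
println(nlsolve(f_any!, [0.], autodiff = :forward)) # opt back in to ForwardDiff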