JuliaNLSolvers / NLsolve.jl

Julia solvers for systems of nonlinear equations and mixed complementarity problems

How to solve equations given in vector form?

homocomputeris opened this issue

Let's have a simple example which can be coded directly:

using NLsolve
function f!(F, x)
    F[1] = x[1]^2 - 4.0
    F[2] = x[2]^2 - 4.0
end

nlsolve(f!, [4.0; 4.0], autodiff=:forward)

is as expected:

Results of Nonlinear Solver Algorithm

  • Algorithm: Trust-region with dogleg and autoscaling
  • Starting Point: [4.0, 4.0]
  • Zero: [2.0, 2.0]
  • Inf-norm of residuals: 0.000000
  • Iterations: 5
  • Convergence: true
    • |x - x'| < 0.0e+00: false
    • |f(x)| < 1.0e-08: true
  • Function Calls (f): 6
  • Jacobian Calls (df/dx): 6

If my vector x has dimension, say, 100, I don't want to code the function elementwise. Unless I'm missing something in Julia syntax, it would be

function g!(G, x)
    G = x.^2 - [4.0; 4.0]
end

nlsolve(g!, [4.0; 4.0], autodiff=:forward)

which is incorrect:

Results of Nonlinear Solver Algorithm

  • Algorithm: Trust-region with dogleg and autoscaling
  • Starting Point: [4.0, 4.0]
  • Zero: [4.0, 4.0]
  • Inf-norm of residuals: 0.000000
  • Iterations: 0
  • Convergence: true
    • |x - x'| < 0.0e+00: false
    • |f(x)| < 1.0e-08: true
  • Function Calls (f): 1
  • Jacobian Calls (df/dx): 1

How do I define the equation in vector form and get a valid result?

Mutate: G .= instead of G =, so the result is written into the output array rather than rebinding the local variable.
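
For concreteness, a minimal sketch of the corrected in-place version (the 100-element starting point is only illustrative):

using NLsolve

function g!(G, x)
    G .= x.^2 .- 4.0   # .= writes into the preallocated residual G; plain = would only rebind the local name
end

nlsolve(g!, fill(4.0, 100), autodiff=:forward)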

@ChrisRackauckas Thanks, the syntax keeps surprising me.

An opposite question: can the package solve scalar equations like

function g!(G, x)
    G = 4.0 - x*2
end
nlsolve(g!, 1.0, autodiff=:forward)

I get

MethodError: no method matching nlsolve(::typeof(g!), ::Float64; autodiff=:forward)

NLsolve.jl doesn't do scalar equations, though Roots.jl is all about scalar rootfinding.
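
For the scalar route, a minimal sketch using Roots.jl's find_zero with a function and an initial guess (the equation here is just the one from f! above):

using Roots

# solve x^2 - 4 = 0 starting from the guess x = 1.0
find_zero(x -> x^2 - 4.0, 1.0)   # returns 2.0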

Thanks, Chris!

You can put your scalar equation in a length-1 vector and solve though.
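
A minimal sketch of that workaround, reusing the equation from f! above (the name h! is just illustrative):

using NLsolve

function h!(F, x)
    F[1] = x[1]^2 - 4.0   # scalar equation wrapped in a length-1 residual vector
end

nlsolve(h!, [1.0], autodiff=:forward)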

> You can put your scalar equation in a length-1 vector and solve though.

Yeah, but that's not really using scalars well. It's a bit more expensive, and it doesn't use methods tailored to 1D, which is a much simpler problem. If someone is actually doing a bunch of 1D rootfinding problems, it's probably better to point them in another direction. If it's just a 1D test case before going straight to ND, then sure, a length-1 vector is fine.