JuliaMath / SpecialFunctions.jl

Special mathematical functions in Julia

Home Page: https://specialfunctions.juliamath.org/stable/

gamma_inc_inv fails for valid arguments

Deduction42 opened this issue · comments

While using quantile(d::Gamma, q::Real), I ran into an error from gamma_inc_inv with the following inputs:

k = 0.0016546512046778552
p = 0.29292929292929293
x = gamma_inc_inv(k, p, 1-p)

ERROR: DomainError with -9.5893e-320:
log will only return a complex result if called with a complex argument. Try log(Complex(x)).
Stacktrace:
 [1] throw_complex_domainerror(f::Symbol, x::Float64)
   @ Base.Math .\math.jl:33
 [2] _log(x::Float64, base::Val{:ℯ}, func::Symbol)
   @ Base.Math .\special\log.jl:304
 [3] log
   @ .\special\log.jl:269 [inlined]
 [4] __gamma_inc_inv(a::Float64, minpq::Float64, pcase::Bool)
   @ SpecialFunctions C:\Users\user\.julia\packages\SpecialFunctions\oPGFg\src\gamma_inc.jl:1048
 [5] _gamma_inc_inv
   @ C:\Users\user\.julia\packages\SpecialFunctions\oPGFg\src\gamma_inc.jl:1012 [inlined]
 [6] gamma_inc_inv(a::Float64, p::Float64, q::Float64)
   @ SpecialFunctions C:\Users\user\.julia\packages\SpecialFunctions\oPGFg\src\gamma_inc.jl:994

This is such a small number, and it's BARELY negative, so it looks like some sort of round-off error in an internal iteration around gamma_inc.jl:1048. I see there's an abs(x) expression on line 1086,

t = abs(x/x0 - 1.0)

Would it be safe to add one on line 1088 as well?
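
For reference, a minimal sketch of the failure mode (the value is the one from the stack trace above; the clamping shown is only one possible guard, not the package's actual fix):

# Round-off can leave an iterate barely below zero, and log then throws.
x = -9.5893e-320           # tiny negative value from the stack trace
# log(x)                   # ERROR: DomainError with -9.5893e-320
log(abs(x))                # finite, ≈ -734.6
max(x, floatmin(Float64))  # alternative guard: clamp to a tiny positive value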

Isn't this a duplicate of #390? I assume the problem here is the same as the one described by @andreasnoack in #390 (comment): The algorithm takes a step that leads to slightly negative values. It seems a possible fix would be to translate Amparo Gil's Fortran code: #390 (comment)

Oh shoot, it is a duplicate. It's been a while since I tested the code that used this; it progressed further than I remembered, but it got hung up whenever this particular case came about.

Now that I see this is a Newton step, do you think a box constraint on the step would be appropriate? When doing univariate Newton steps, I typically limit the step size to
flipsign(min(abs(Δx), abs((1-ϵ)*(x - constraint))), Δx)
which is particularly convenient when the constraints are (0, Inf), because I think it would result in

x0 = max(ϵ*x, x + Δx) 
Δx = x0 - x

There's a bit of overhead, but I don't think it's much (very small values of ϵ speed up convergence but have to be tuned to be safe, whereas the (0, Inf) shortcut is more forgiving since it's less prone to round-off error). Otherwise, I see your changes help mitigate the round-off error that occurs (thanks for putting it in terms of Δx; that makes it more obvious this is a Newton step).
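
For illustration, a minimal sketch of the safeguarded Newton step described above on a generic scalar problem; the function, names, and tolerances here are hypothetical and are not the internals of __gamma_inc_inv.

# Box-constrained Newton iteration on (0, Inf): never step below ϵ*x,
# so the iterate stays strictly positive even when a raw step overshoots.
function newton_positive(f, f′, x; ϵ = 1e-4, tol = 1e-12, maxiter = 100)
    for _ in 1:maxiter
        Δx = -f(x) / f′(x)       # raw Newton step
        x0 = max(ϵ * x, x + Δx)  # safeguarded update, stays in (0, Inf)
        abs(x0 / x - 1.0) < tol && return x0
        x = x0
    end
    return x
end

# Example: the first raw step from x = 5 would land at ≈ -0.5; the safeguard
# keeps the iterate positive and it still converges to the root ≈ 0.567.
newton_positive(x -> log(x) + x, x -> 1/x + 1, 5.0)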