JuliaMath / HypergeometricFunctions.jl

A Julia package for calculating hypergeometric functions

pFq values different from Mathematica

DylanMMarques opened this issue

Hi

I found that this implementation of the generalized hypergeometric function and Mathematica 12.0 give different values when z < 0:

using HypergeometricFunctions:
pFq([1/2], [1., 3/2], -1000) = 1.5267456019790377e7
pFq([1/2], [1., 3/2], -10000) = -5.471268327327299e65

using Mathematica 12.0
N[HypergeometricPFQ[{1/2}, {1, 3/2}, -1000]] = 0.0152208
N[HypergeometricPFQ[{1/2}, {1, 3/2}, -10000]] = 0.00472887

Using positive values of z, the functions give the same result:
pFq([1/2], [1., 3/2], 10000) = 1.0224158131796494e83

N[HypergeometricPFQ[{1/2}, {1, 3/2}, 10000]] = 1.02242*10^83

Is there any reason for the results to be different?

Hypergeometric functions require many different algorithms and techniques to converge properly; no single algorithm or technique converges for all input values.
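To see why, consider the Maclaurin series of 1F2(1/2; 1, 3/2; z): it converges for every finite z in exact arithmetic, but at z = -1000 its terms grow to roughly 10^25 in magnitude before decaying toward a limit of about 0.015, so summing in Float64 loses everything to cancellation. A naive sketch (not the package's pFqmaclaurin, just a term-by-term sum):

```julia
# Naive Maclaurin summation of 1F2(1/2; 1, 3/2; z).  Successive terms satisfy
#   t(k+1)/t(k) = (k + 1/2) z / ((k + 1)^2 (k + 3/2)) = (2k + 1) z / ((k + 1)^2 (2k + 3)),
# written with integer coefficients so the recurrence is exact in any float type.
function f12_series(z; nterms=400)
    t = one(z)   # k = 0 term
    s = t        # running partial sum
    for k in 0:nterms-1
        t *= (2k + 1) * z / ((k + 1)^2 * (2k + 3))
        s += t
    end
    return s
end

f12_series(-1000.0)                 # huge garbage: catastrophic cancellation
Float64(f12_series(big(-1000.0)))   # ≈ 0.015220788412455796
```

The identical code gives the correct value in BigFloat because intermediate terms of size ~10^25 can still cancel down to ~10^-2 without exhausting 77 digits of precision; Float64's ~16 digits cannot survive that.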

Looks like if we removed the parameter length check here

elseif abs(z) ≤ ρ || length(α) ≤ length(β)
return pFqmaclaurin(α, β, float(z); kwds...)
else
return pFqweniger(α, β, float(z); kwds...)
end
end
then the rational approximation algorithm would do quite a bit better than the Maclaurin series

julia> HypergeometricFunctions.pFqweniger([1/2], [1, 3/2], -1000)
0.015220788412071432

julia> HypergeometricFunctions.pFqweniger([1/2], [1, 3/2], big(-1000))
0.01522078841245579637174821040509930075759919800570185853201355946706413542259123

julia> HypergeometricFunctions.pFqweniger([1/2], [1, 3/2], -10000)
0.004728869998571868

julia> HypergeometricFunctions.pFqweniger([1/2], [1, 3/2], big(-10000))
0.004728870002692929177025108485606062277803713334109646653656286413090573959288139
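Repeating a computation in BigFloat, as above, is a handy way to gauge how many digits of a Float64 result to trust. A throwaway helper (hypothetical, not part of the package) to count agreeing significant digits:

```julia
# Hypothetical helper: estimate how many leading decimal digits of a Float64
# result agree with a higher-precision reference value.
function matching_digits(approx::Float64, reference::BigFloat)
    approx == reference && return 16   # cap at Float64's ~16 significant digits
    return floor(Int, -log10(abs(approx - reference) / abs(reference)))
end

# Using the pFqweniger values quoted above:
matching_digits(0.015220788412071432,
    big"0.01522078841245579637174821040509930075759919800570185853201355946706413542259123")
# → 10: the Float64 run kept about ten significant digits
```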

I don't think Mathematica uses finite-precision floating-point, so it's tough to draw comparisons. This package applies generic algorithms using the data types you supply: no gimmicks or tricks. On the other hand, there's no guarantee of correctness.

It appears that Float64s run out of precision for these particular arguments + implementation. Fortunately, we can use ArbNumerics and choose how many digits we want to use:

julia> using HypergeometricFunctions

julia> using ArbNumerics

julia> x = ArbReal(10000, digits=64)
10000.0

julia> pFq([1/2], [1., 3/2], x)
1.022415813179649929832430194462446585213636487335708107714185911e+83

julia> pFq([1/2], [1., 3/2], -x)
2618540298.085577180549025083135444297331203397443340991447512761

julia> x = ArbReal(10000, digits=256)
10000.0

julia> Float64(pFq([1/2], [1., 3/2], -x))
0.004728870002692929

I imagine that Mathematica is adaptively increasing the working precision.
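One way to mimic that idea in Julia (purely a sketch; `adaptive_eval` and `f12_series` are made-up names, not anything from this package or Mathematica) is to re-evaluate at doubling BigFloat precision until two successive results agree after rounding to Float64:

```julia
# Hypothetical adaptive-precision driver: evaluate f(big(z)) at increasing
# BigFloat working precision until the Float64-rounded result stops changing.
function adaptive_eval(f, z; startbits=64, maxbits=1 << 14)
    prev = NaN          # NaN never compares equal, so the loop runs at least twice
    bits = startbits
    while bits <= maxbits
        cur = setprecision(BigFloat, bits) do
            Float64(f(big(z)))
        end
        cur == prev && return cur
        prev = cur
        bits *= 2
    end
    error("result did not stabilize up to $maxbits bits")
end

# A naive Maclaurin sum of 1F2(1/2; 1, 3/2; z) to drive the loop with:
function f12_series(z; nterms=400)
    t = one(z); s = t
    for k in 0:nterms-1
        t *= (2k + 1) * z / ((k + 1)^2 * (2k + 3))
        s += t
    end
    return s
end

adaptive_eval(f12_series, -1000.0)   # ≈ 0.015220788412455796
```

At 64 and 128 bits the cancellation described above still corrupts the result, so the loop keeps doubling; by 256 bits the Float64 rounding is stable and the driver returns.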

Versions:

  Version 1.5.3-pre.0 (2020-09-24)
  [7e558dbc] ArbNumerics v1.2.1
  [34004b35] HypergeometricFunctions v0.3.3

The referenced commit fixes this by switching algorithms for 1F2 outside the disk centred at 0 with radius 0.72.