Arbitrary Precision BesselK (ArbNumerics.jl)


I am fairly new to Julia and have been working with the besselk function in ArbNumerics.jl as part of evaluating a complicated response function that can be expanded in a series of BesselK functions. The arguments it ends up taking sometimes cause it to underflow at 64-bit precision. Although the function decays to very small values, these cancel with other, large parts of my equation to give finite values between 0 and 1000. So, as long as I extend the precision of the BesselK functions, everything comes out nicely. However, I have noticed that the ArbNumerics implementation sometimes gives the wrong value for certain arguments over a range of precisions. For example:

setworkingprecision(ArbFloat, bits = p)
bessel = ArbNumerics.besselk(ArbFloat(2.0), ArbFloat(57.966855579194714))

is one term in my code that returns the wrong value for precisions p between 116 and 175 bits which ultimately gives unphysical results.

I admit I do not fully understand what is going on here. How do I deal with this numerically volatile expression? Is there a way to correct my usage of ArbNumerics.jl, or any suggestions to tame these extreme floats?

Many thanks!

Part of your problem here might be that in the expression ArbFloat(57.966855579194714), the literal 57.966855579194714 is a Float64, so rounding occurs there.


You can use parse(ArbFloat, "57.966855579194714"), which avoids parsing 57.966855579194714 as a double-precision floating-point literal and instead parses it at the extended working precision.
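A minimal sketch of the difference (the exact size of the discrepancy depends on the working precision; the variable names are just illustrative):

```julia
using ArbNumerics

setworkingprecision(ArbFloat, bits = 200)

# Here the literal is rounded to the nearest Float64 *before* ArbFloat sees it:
x_rounded = ArbFloat(57.966855579194714)

# parse() converts the decimal string directly at the 200-bit working precision:
x_parsed = parse(ArbFloat, "57.966855579194714")

# The two inputs differ by roughly the Float64 rounding error (~1e-16 relative):
x_rounded - x_parsed
```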


Ah, my bad, I missed some details that might help more. The actual value of the argument being put in here is ArbFloat(57.966855579194711170011206453920230004). I have essentially wrapped every single number and variable in ArbFloat() to try to ensure that every float is at the precision p specified by setworkingprecision(ArbFloat, bits = p) at the beginning. At least that’s how I think it works.

In this example p = 128 bits, which gives:

ArbNumerics.besselk(ArbFloat(2), ArbFloat(57.966855579194711170011206453920230004)) 

which gives


when the actual value (evaluated to 64 bit precision using SpecialFunctions.jl version of besselk) is:


which is a very large difference.

Thank you for your suggestion! I had a go testing it like this:

for p in 105:1:120
    setworkingprecision(ArbFloat, bits = p)
    bessel = ArbNumerics.besselk(parse(ArbFloat, "2.0"), parse(ArbFloat, "57.9668555791947111700112064539202300037282162"))
    @show(p, bessel)
end

where I think this increases the precision of the inputs and the result from 105 bits to 120 bits. The output I get is:

p = 105
bessel = 1.13687286812830688844382558995e-26
p = 106
bessel = 1.13687286812830688844382558995e-26
p = 107
bessel = 1.13687286812830688844382558995e-26
p = 108
bessel = 1.1368728681283068884438255899501e-26
p = 109
bessel = 1.1368728681283068884438255899501e-26
p = 110
bessel = 1.13687286812830688844382558995009e-26
p = 111
bessel = 1.13687286812830688844382558995009e-26
p = 112
bessel = 1.13687286812830688844382558995009e-26
p = 113
bessel = 1.136872868128306888443825589950093e-26
p = 114
bessel = 1.136872868128306888443825589950094e-26
p = 115
bessel = 1.136872868128306888443825589950094e-26
p = 116
bessel = -1.7007550909743124634425486022552948e-10
p = 117
bessel = -2.1191226283967460416203995155093783e-10
p = 118
bessel = -9.7202262714151653324244948144179731e-12
p = 119
bessel = -5.9685591068233579184885859007671116e-12
p = 120
bessel = -9.62074844699578945371020680839924215e-12

where the result starts off gaining precision toward the true answer until it drastically changes. At higher precisions (> 180 bits), the correct result returns again.

Note that the quoted input float is not the only one: it is generated by preceding calculations and is one of many. For most arguments ArbNumerics.besselk gives the correct, high-precision answer; for others (like my example above) it doesn’t.

Oh, this is really weird. Can you check if BigFloat gives the same results?


Sadly, I am not sure how! I do not think ArbNumerics.besselk() accepts BigFloat types. Nor does SpecialFunctions.besselk(), which is what brought me to ArbNumerics.jl in the first place, to get around SpecialFunctions.besselk() returning 0 for the vast majority of my Float64 input arguments.
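For what it’s worth, one way to get an independent BigFloat check without a BigFloat besselk is the standard large-argument asymptotic series K_ν(x) ~ sqrt(π/(2x)) e^(−x) Σ_{k≥0} a_k/x^k with a_k = Π_{j=1}^{k} (4ν² − (2j−1)²) / (k! 8^k). For x ≈ 58 the terms keep shrinking well past k = 30, so truncating there should confirm many leading digits. A sketch (besselk_asymptotic and nterms are just illustrative names/choices, not library functions):

```julia
# Independent sanity check in BigFloat via the large-x asymptotic series for K_ν(x).
setprecision(BigFloat, 256)

function besselk_asymptotic(ν::Integer, x::BigFloat; nterms = 30)
    fourν2 = 4 * big(ν)^2
    term = one(x)      # the k = 0 term, a_0/x^0 = 1
    s = term
    for k in 1:nterms
        # a_k/x^k = a_{k-1}/x^{k-1} * (4ν² - (2k-1)²) / (8k·x)
        term *= (fourν2 - (2k - 1)^2) / (8 * k * x)
        s += term
    end
    return sqrt(big(pi) / (2x)) * exp(-x) * s
end

x = parse(BigFloat, "57.9668555791947111700112064539202300037282162")
besselk_asymptotic(2, x)   # should agree with the ≈ 1.13687e-26 values above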

ArbNumerics is a wrapper around certain types and functions of Nemo.jl

It’s probably possible to get this inside ArbNumerics too, but in Nemo you get an upper bound on the accumulated numerical error in your computations.

I do not understand why you see this strange pattern, but at least the error bounds in Nemo give off a warning when this happens:

using Nemo
function ff(p)
    CC = ComplexField(p)
    Nemo.besselk(CC(2.0), CC(57.9668555791947111700112064539202300037282162))
end

ff.(105:120)
16-element Array{acb,1}:
 [1.13687286812830340096154382324e-26 +/- 7.62e-56] + i*0
 [1.13687286812830340096154382325e-26 +/- 6.09e-56] + i*0
 [1.136872868128303400961543823245e-26 +/- 7.62e-57] + i*0
 [1.136872868128303400961543823245e-26 +/- 4.85e-57] + i*0
 [1.136872868128303400961543823245e-26 +/- 3.49e-57] + i*0
 [1.136872868128303400961543823245e-26 +/- 3.08e-57] + i*0
 [1.1368728681283034009615438232452e-26 +/- 7.86e-58] + i*0
 [1.1368728681283034009615438232452e-26 +/- 6.68e-58] + i*0
 [1.1368728681283034009615438232453e-26 +/- 5.82e-58] + i*0
 [1.13687286812830340096154382324525e-26 +/- 6.75e-59] + i*0
 [1.13687286812830340096154382324525e-26 +/- 4.61e-59] + i*0
 [+/- 1.25e-8] + i*0
 [+/- 6.24e-9] + i*0
 [+/- 3.14e-9] + i*0
 [+/- 1.56e-9] + i*0
 [+/- 7.79e-10] + i*0

Results suddenly become much less precise, so there is definitely something strange going on…


I see a similar result with the ArbNumerics.jl wrapper using midpoint, radius = ball(x::ArbReal):

using ArbNumerics
function ff(p)
    setworkingprecision(ArbReal, bits = p)
    bessel = ArbNumerics.besselk(ArbReal(2.0), ArbReal(57.9668555791947111700112064539202300037282162))
    range = ball(bessel)[2]
    bessel, range
end

ff.(105:120)
16-element Array{Tuple{ArbReal,ArbReal},1}:
 (1.13687286812830340096154382324e-26, 2.679502940531405325989203892668e-56)   
 (1.13687286812830340096154382325e-26, 1.2394343483740873033922786967486e-56)  
 (1.136872868128303400961543823245e-26, 5.749354746790453596318149480275e-57)  
 (1.136872868128303400961543823245e-26, 2.6609659571549163789967918088654e-57) 
 (1.136872868128303400961543823245e-26, 1.1764036150206523255726348083687e-57) 
 (1.136872868128303400961543823245e-26, 6.00512852635812629561535982982253e-58)
 (1.1368728681283034009615438232452e-26, 3.00321790245430220575421066988444e-58)
 (1.1368728681283034009615438232452e-26, 1.67558249568207836829333913982191e-58)
 (1.1368728681283034009615438232453e-26, 9.386672930935215841429672157461206e-59)
 (1.13687286812830340096154382324525e-26, 4.694067564836733014219649346677595e-59)
 (1.13687286812830340096154382324525e-26, 2.372838892813769527936901327749736e-59)
 ([+/- 1.25e-8], 1.2239969926497451524483039975166321e-8)
 ([+/- 6.24e-9], 6.2016155266686467939507565461099148e-9)
 ([+/- 3.14e-9], 3.0694941102749062622478959383442998e-9)
 ([+/- 1.56e-9], 1.5405000573787130946357137872837484e-9)
 ([+/- 7.79e-10], 7.77541440954987450595581321977078915e-10)

I agree that this is very bizarre… I’m guessing it’s something in the source code for Nemo.jl then? Perhaps in Nemo.besselk()?

Nemo.besselk() is itself an interface to a C function in a library called Arb. I guess that if no one comes up with a better explanation, you could post an issue at GitHub - Nemocas/Nemo.jl: Julia bindings for various mathematical libraries (including flint2)


With more bits:

for p in 100:10:250
    setworkingprecision(ArbFloat, bits = p)
    bessel = ArbNumerics.besselk(2.0, parse(ArbReal, "57.9668555791947111700112064539202300037282162"))
    @show((p, bessel))
end
(p, bessel) = (100, 1.13687286812830688844382559e-26)
(p, bessel) = (110, 1.13687286812830688844382558995e-26)
(p, bessel) = (120, [+/- 1.19e-9])
(p, bessel) = (130, [+/- 1.17e-12])
(p, bessel) = (140, [+/- 1.18e-15])
(p, bessel) = (150, [+/- 1.15e-18])
(p, bessel) = (160, [+/- 1.14e-21])
(p, bessel) = (170, [+/- 1.13e-24])
(p, bessel) = (180, 1e-26)
(p, bessel) = (190, 1.137e-26)
(p, bessel) = (200, 1.136873e-26)
(p, bessel) = (210, 1.136872868e-26)
(p, bessel) = (220, 1.136872868128e-26)
(p, bessel) = (230, 1.136872868128307e-26)
(p, bessel) = (240, 1.136872868128306888e-26)
(p, bessel) = (250, 1.136872868128306888444e-26)

It looks like the garbage in the middle could be very low-precision estimates of the correct value, but the precision somehow “starts over”.
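Until that’s resolved, one defensive workaround is to exploit exactly those error bounds: compute with ArbReal, inspect the enclosure radius via ball(), and double the precision until the result is certified tight. A rough sketch (besselk_certified, rtol, and maxbits are illustrative names and choices, not part of the package):

```julia
using ArbNumerics

function besselk_certified(ν, xstr::AbstractString; bits = 128, maxbits = 4096, rtol = 1e-30)
    while bits <= maxbits
        setworkingprecision(ArbReal, bits = bits)
        x = parse(ArbReal, xstr)
        y = ArbNumerics.besselk(ArbReal(ν), x)
        mid, rad = ball(y)
        # Accept only if the rigorous enclosure is tight relative to the midpoint;
        # the garbage results above all come with huge radii, so they get rejected.
        if rad <= rtol * abs(mid)
            return y
        end
        bits *= 2
    end
    error("could not certify besselk to the requested tolerance")
end

besselk_certified(2, "57.9668555791947111700112064539202300037282162")
```

This relies on Arb’s guarantee that the true value always lies inside the returned ball, so a wide radius flags the bad precisions instead of silently returning a wrong midpoint.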