How to increase precision of solution using Roots.jl and ArbNumerics.jl

I am inverting it, because I need to solve some differential equations.

I have some value of f, say f_0, and I need to find the x that solves f(x) = f_0. So basically, I am trying to implement the root-finding function to give me this inverse. I am not sure this answers your question…

Thanks! You are indeed correct, as I wrote.

I fixed my example with the proper Mathematica result, but I do not get the same results as you. I do not understand why the deviation in my case is so large compared to yours. Do you?

Thank you also for this last comment.

I do not exactly understand what “more capable” vs. “less capable” means here. I guess it would be nice if the root-finding function would return an ArbReal with the error estimation from the root-finding algorithm.

In my other functions, I try to use ArbReal. Do you think that, since this is “less capable”, it is a bad choice?
If I were to keep everything ArbReal, would you just convert the value that is returned by solve to ArbReal?

> I fixed my example with the proper Mathematica result, but I do not get the same results as you. I do not understand why the deviation in my case is so large compared to yours. Do you?

You are missing the quotation marks around the value you give to ArbReal. If you replace it with

mat_sol = ArbReal("0.5970995261535437979056290157285245402529726426867049646646376914313944654874563187306730863627235781")

it works fine!

> I do not exactly understand what “more capable” vs. “less capable” means here. I guess it would be nice if the root-finding function would return an ArbReal with the error estimation from the root-finding algorithm.

With “more capable” I mean that ArbReal is more capable than ArbFloat: it can do anything ArbFloat can do, and at the same time it keeps track of the error. To say a bit more about this: the promotion system in Julia typically returns the more “general” of the input types, for example promote_type(Int32, Int64) = Int64 and promote_type(Float64, BigFloat) = BigFloat. In base Julia I believe it always holds that promote_type(T, float(T)) = float(T). But for ArbReal this is not the case: float(ArbReal) = ArbFloat, yet promote_type(ArbReal, ArbFloat) = ArbReal.
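
Concretely, with ArbNumerics loaded, the promotions described above look like this:

julia> promote_type(Int32, Int64)
Int64

julia> promote_type(Float64, BigFloat)
BigFloat

julia> float(ArbReal)
ArbFloat

julia> promote_type(ArbReal, ArbFloat)
ArbReal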

In your case I believe you can just as well use ArbFloat. The only benefit of ArbReal is that you would notice if you lose too much precision in the numerical calculations, but it won’t tell you anything about the error bound for the root anyway. If you want to get error bounds for the root, you have to use some other algorithm, for example the interval Newton method, which is what the refine_root function I used in my previous post makes use of.
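
To see what “keeping track of the error” means concretely: an inexact operation on an ArbReal produces a ball with nonzero radius (a small sketch, assuming ArbNumerics’ radius accessor):

julia> radius(ArbReal(1) / 3) > 0   # the ball records the rounding error of the division
true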


This is the same error I mentioned early on. To understand what’s going on here: the Julia parser reads the number as a Float64, thereby ignoring all the excess digits, then passes that Float64 to ArbReal. If you put the quote marks in, the parser reads it as a string and passes the string to ArbReal, which converts it to a full-precision value. You simply can’t give a naked string of digits to the parser and get anything other than a native float type; the parser doesn’t know anything about Arb types.
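
To illustrate (digits truncated here for brevity; only the second line preserves them all):

mat_sol_lossy = ArbReal(0.5970995261535437979056290157285245)    # literal is parsed as a Float64 first, excess digits are dropped
mat_sol_full  = ArbReal("0.5970995261535437979056290157285245")  # string reaches ArbReal intact and is parsed at full precision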


Thanks so much for the answer! It is very clear.

It would be great to get the error. This refine_root method is part of your package ArbExtras, right? Do you know if there is an analog in Roots?

And thank you both @joeldahne and @dlakelan for pointing out the issue with the string parsing!

Thanks for this informative response. Roots uses float for the input values, since it seems like the right generic choice for different number types.

When I try to solve this with A42 (which might be the recommended method here, since there is a known bracket; its convergence is quite quick for smooth enough functions, and the tolerance between iterates is roughly 2abs(r)*eps(one(r))), I get an answer that is off from mat_sol but yields an exact zero for f:

julia> setprecision(ArbFloat, bits=1000)
1000

julia> f(x) = elliptic_e(x) - 13 // 10
f (generic function with 1 method)

julia> r = find_zero(f, ArbReal.( (0.0, 1.0) ), A42())
0.5970995261535437979056290157285245402529726426867049646646376914313944654874563187306730863627235781167012473182082471941890091509579328579999681942247871362902116886510527010793499181168551920918664052964368404265225864463002416030871302985698796181604247464352127876246164265529899386558022253688676

julia> f(r)
0

julia> r - mat_sol
-2.496873060882422448783512389107976765165905935558302891285924378533353483003135167996405765830182705071755548296901317683611453726541415832228563754139160817061255022802266605200253564787212375383573447010061344197774631132438257432457875535296194985152788292500142950589819644532161239691396597893509e-78

I didn’t check with other zero-finding methods, and I am not sure this happens with ArbReals, which should be guaranteed intervals (as I understand it), but over floating-point numbers some functions can have 0, 1, or more “zeros” near a mathematical zero.
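
As a toy illustration of that last point: over Float64, x^3 evaluates to an exact 0.0 for a whole range of tiny nonzero inputs, because the result underflows:

julia> g(x) = x^3
g (generic function with 1 method)

julia> g(1.0e-110)   # underflows to 0.0, as do many nearby inputs
0.0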

Yes, that seems reasonable. Is there a way to get the error estimate of the found solution back from Roots?

The refine_root is indeed part of my ArbExtras.jl package, which uses Arblib.jl instead of ArbNumerics.jl for the interface to Arb. The code is fairly short and self-contained, so you could take a look at it.

In general you need different algorithms to get rigorous error bounds, and I think that would be outside the scope of Roots.jl. I’m only aware of one package in the general repository that does this, IntervalRootFinding.jl. But that package makes use of IntervalArithmetic.jl for the arithmetic, and that doesn’t support the elliptic functions that you use. So if you want rigorous error bounds you will either have to use ArbExtras.jl, which is maybe not optimal since it is not in the general repository, or implement your own version of the interval Newton method, which is in principle a very simple method but might be hard if you are not used to working with rigorous numerics.
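
For what it’s worth, here is a minimal sketch of what the interval Newton method can look like with ArbNumerics’ ball arithmetic, not refine_root itself. The name enclose_root is hypothetical, I am assuming the midpoint and setball accessors from ArbNumerics, and a full implementation would also intersect each Newton image with the previous ball:

using ArbNumerics

setprecision(ArbReal, bits = 256)

# Newton operator: N(x) = mid(x) - f(mid(x)) / f'(x). If N(x) is contained
# in the ball x, then x contains exactly one root of f, and N(x) is a
# tighter enclosure of that root.
function enclose_root(f, df, x::ArbReal; iterations = 10)
    for _ in 1:iterations
        m = ArbReal(midpoint(x))   # exact midpoint as a zero-radius ball
        x = m - f(m) / df(x)       # here one would intersect with the old x
    end
    return x
end

# Toy example with a known derivative; substitute your own f and df.
f(x)  = x^2 - 2
df(x) = 2x
x0 = setball(ArbReal(1.4), ArbReal(0.1))   # a ball believed to contain sqrt(2)
enclose_root(f, df, x0)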


Thanks so much for the information! I will have a look at refine_root.