Detecting low precision contamination of DoubleFloats.jl

For many months I’ve very happily used DoubleFloats.jl to compute series akin to 1 + 1/2 + 1/3 − 1/4 − 1/5 … with varying sign patterns and ranges up to (with fractions as small as) 1/10^15, or smaller. As these summations grow large, I recognize the precision will fall off in the last decimal places, say in the 18th place; I’m comfortable with that.
I’ve gone through my code and believe every number that touches the final summation is a Double64. When I print the final result, typeof(result) == Double64. It’s all good.
Yet… would I bet my life that the final results are absolutely free of contamination in the 15th digit? No, I wouldn’t.
So, my question is this: if the final typeof is Double64, should I feel confident the result is contamination-free?
If not, is there some tool that might show where a result lost precision and catch something I’m overlooking?

I suppose one check would be to redo the calculation with all numbers in extended precision (e.g. BigFloat) and then see whether you get the same result.
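A hedged sketch of that cross-check, using a toy alternating series as a stand-in for the actual summation (the function name and the series itself are illustrative, not from the original code):

```julia
# Cross-check sketch: run the same (toy) summation in Double64 and in
# higher-precision BigFloat, then count the digits on which they agree.
using DoubleFloats

# Toy stand-in for the real series: the alternating harmonic sum.
function toysum(::Type{T}, n) where {T}
    s = zero(T)
    for k in 1:n
        s += isodd(k) ? one(T) / T(k) : -one(T) / T(k)
    end
    return s
end

n = 10_000
d = toysum(Double64, n)                            # ~32 significant digits
b = setprecision(() -> toysum(BigFloat, n), 256)   # ~77 digits, as referee

# Digits of agreement; if this falls well below ~30, something in the
# Double64 pipeline lost precision.
err = abs(BigFloat(d) - b) / abs(b)
agreement = err == 0 ? Inf : -log10(Float64(err))
```

If a stray Float64 sneaks into the pipeline, `agreement` drops to around 16 and the contamination becomes visible immediately.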

If your code uses only basic operations, you could look through the type promotion rules in Base and/or DoubleFloats.jl to make certain that you are hitting the right methods and that they promote to Double64 appropriately.

I should have been clearer: the only times non-Double64 values appear are as loop counters or in prime generation (the primes are then converted to Double64), that sort of thing.

Bueller? Bueller? Anyone?

I don’t know of any existing tools. But one idea is to define your own type, say Double64Canary, which wraps a Double64. You could then define the basic mathematical operators so that they throw an error whenever they are combined with a lower-precision type.
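A minimal sketch of that canary idea (the type name, the set of trapped operations, and the promotion trick are my own choices, not from any package):

```julia
using DoubleFloats

# A wrapper that behaves like Double64 among its own kind but refuses
# to be silently combined with lower-precision machine floats.
struct Double64Canary <: AbstractFloat
    x::Double64
end
Double64Canary(v::Real) = Double64Canary(Double64(v))

# Canary-with-canary arithmetic passes straight through to Double64.
for op in (:+, :-, :*, :/)
    @eval Base.$op(a::Double64Canary, b::Double64Canary) =
        Double64Canary(($op)(a.x, b.x))
end

# Mixed arithmetic falls back to Base's promotion machinery, so any
# attempt to promote a canary alongside a machine float trips it.
Base.promote_rule(::Type{Double64Canary}, ::Type{T}) where
        {T<:Union{Float16,Float32,Float64}} =
    error("precision contamination: Double64Canary mixed with $T")
```

Usage: `Double64Canary(1) / Double64Canary(3)` works fine, while `Double64Canary(1) + 1.0` throws, pointing straight at the line where lower precision leaked in.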


Thank you, Jondea. I’ll give that some thought. Would you have any thoughts on “If the final typeof = Double64, should I feel confident it is contamination-free?” Or will a value claim high precision even if lower precision crept in along the way?

Even if the final type is Double64, I suspect you cannot be confident that no lower-precision number was used at some point in the calculation. To test this, multiply a Double64 by a Float32 (or Float64) and see what comes out.
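That experiment, sketched out (assuming DoubleFloats.jl’s usual promotion rules), shows why typeof alone proves nothing:

```julia
using DoubleFloats

x = Double64(1) / Double64(3)   # a full ~32-digit third
y = Float32(1) / Float32(3)     # only ~7 correct digits
z = x * y                       # y is promoted to Double64 first

typeof(z)          # Double64 -- the type gives no hint of the Float32
abs(z - x * x)     # but the error is ~1e-9, far above Double64's eps
```

The result type is Double64 either way; the Float32’s rounding error simply rides along inside it.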

Would this help?

Hi @HexSpin,

Somehow I missed your March 4th post.
If you are allowed to share the code in question, please do.
I am happy to examine it and see what is up.
If you are losing that much precision, either you have encountered a bug or your approach is likely to perform better reorganized.
You may DM me or email the code in question (my email is easy to find).


Thanks for the response. Directly mixing values gives a result whose accuracy is limited by the lower-precision operand. I think I’m worried about something more subtle, like an undetected illegal operation, etc. But thanks!

Very interesting… I’ll have to fiddle with that and check it out.

I have nothing too top secret in here. I’ll shoot you a copy. No snickering - the beauty of my code can only be described as brutalist.

I’d suggest using interval arithmetic (GitHub - JuliaIntervals/IntervalArithmetic.jl: Rigorous floating-point calculations using interval arithmetic in Julia) to bound the actual precision of the result.
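A small sketch of how that looks in practice (constructor and accessor names follow recent IntervalArithmetic.jl releases; treat the exact API as an assumption):

```julia
using IntervalArithmetic

# Carry each term as an interval; the final interval is guaranteed to
# enclose the true sum, so its diameter bounds the accumulated
# rounding error.
s = sum(interval(1) / interval(k) for k in 1:1_000)

inf(s), sup(s)   # certified lower/upper bounds on the harmonic sum
diam(s)          # width of the enclosure = worst-case rounding error
```

If the diameter stays small relative to the result, the computation provably did not lose meaningful precision, regardless of which types touched it.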


IntervalArithmetic.jl (from JuliaIntervals) is very useful for this sort of thing … as is ArbNumerics.jl


Thanks, cjdoris! I’ll investigate.


Thank you all, but especially @JeffreySarnoff, for your thoughts. While I’m not sure there was a definitive answer on whether a Double64 result can be shown contamination-free (and whether my thousands of computer hours are valid), you’ve given me some interesting ideas for moving forward and for coming work!
In the meantime, I will take comfort in my results, since they tie out to BigFloat at high-ish values, and since 32-digit working precision assures that my first 10 digits are accurate at the ranges I’m investigating.