Precision issue: (A.+R.-A.-R) do not vanish ?!

Also, when Manifest.toml is part of the final sources, Pkg3 brings a level of reproducibility to a project that only a few modern languages share (e.g. Rust with Cargo.lock). In contrast, even when the source is available, most papers written in other commonly used scientific programming languages become very costly or impossible to reproduce after a few years (some fields care more about this than others).

4 Likes

I ran part of your example with IntervalArithmetic.jl, since it gives you bounds on the errors.

using IntervalArithmetic

Iabs(z) = sqrt(real(z)^2 + imag(z)^2)  # the abs function currently behaves strangely with complex intervals

N = 1000  # matrix size (not fixed in the original example)
A = rand(1000:20000, (N, N))
R = rand(Complex128, N, N)  # I use Julia 0.6.4

# turn these into intervals
a = Interval.(A)
r = Interval.(R)

reduc = sum(a .+ r, 1)[1, :]
println(sum(Iabs.(reduc .- reduc)))
# prints [0, 0.000942274]

So you cannot guarantee that the results of two different methods differ by less than 0.000942274. Since imprecisions only add up and are never reduced, the millions of operations performed result in an absurdly low guaranteed precision.

While there are some possibilities to mitigate the issue (prefer multiplications over additions, subtractions are the devil, find a way to do the same with fewer operations), in some cases the instability is a property of the problem and not of the algorithm (matrix inversion, for example, is unstable for matrices with determinant close to 0). In these cases not much can be done besides using greater precision or using regularization (i.e. restricting your possible results to a reasonable set of solutions).
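One concrete instance of "do the same with fewer effective rounding errors" is compensated (Kahan) summation, which carries a correction term so the accumulated error stays roughly constant instead of growing with the number of additions. A minimal sketch in Python (the idea is language-agnostic; `kahan_sum` is just an illustrative name):

```python
def kahan_sum(xs):
    """Compensated (Kahan) summation: keep a running correction
    term that recovers the low-order bits lost by each addition."""
    total = 0.0
    comp = 0.0  # running compensation for lost low-order bits
    for x in xs:
        y = x - comp          # apply the pending correction
        t = total + y         # low-order bits of y may be lost here...
        comp = (t - total) - y  # ...but we can recover them exactly
        total = t
    return total

xs = [0.1] * 10**6            # exact sum is 100000
naive = sum(xs)               # error grows with the number of terms
compensated = kahan_sum(xs)   # error stays at roundoff level
print(abs(naive - 100000.0), abs(compensated - 100000.0))
```

This is the same family of tricks that pairwise summation (used by Julia's `sum`) and Python's `math.fsum` rely on to keep the error from growing linearly with the number of terms.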

2 Likes

This is the dependency problem: a well-known case in which the error bounds generated by interval arithmetic are grossly pessimistic. In general, floating-point errors can often be much smaller than the bounds provided by interval operations, so they aren’t a substitute for numerical analysis.

(In this particular case, the answer 0 is exact!)
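To see why the bound cannot collapse to zero, here is a toy sketch of what interval subtraction does, hand-rolled in Python for illustration (no outward rounding, which a real interval package would also perform; all names are made up):

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    # Interval subtraction must cover every pair (u, v) with
    # u in self and v in other, so the result is widened:
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def width(self):
        return self.hi - self.lo

x = Interval(2.999999, 3.000001)  # some value known to about 1e-6
d = x - x
print(d.lo, d.hi)  # an interval straddling 0 with width ~4e-6,
                   # even though the true value of x - x is exactly 0
```

The subtraction has no way of knowing that both operands are the same variable, so the dependency is lost and the width doubles instead of the result collapsing to [0, 0].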

1 Like

Interval arithmetic is a worst case analysis, but more importantly look how dang easy that was to do! That’s insane!! And it’s just as easy to try any number of bits of precision to get a sense of whether the interval bound is unduly pessimistic.

3 Likes

Is there a reference for this? By how many orders of magnitude (base 2 or base 10) can it be smaller?

Thanks @Kolaru, it is very helpful for a quick estimate of error bounds!

I ran the package’s tests on Julia 0.7:

Test Summary:          | Pass  Total
Constructing intervals |   92     92
Test Summary: | Pass  Broken  Total
Big intervals |    6       1      7
Complex intervals: Test Failed at /home/yunlong/.julia7/packages/IntervalArithmetic/dkW4/test/interval_tests/construction.jl:199
  Expression: a == Interval(3) + im * Interval(4)
   Evaluated: Interval(2.999999999999999, 3.000000000000001) + Interval(3.999999999999999, 4.000000000000002)*im == Interval(2.9999999999999996, 3.0000000000000004) + Interval(3.9999999999999996, 4.000000000000001)*im

Maybe this is relevant?

This is a problem with one of the dependencies of IntervalArithmetic.jl, which should be fixed soon.

The bounds computed by interval arithmetic can be too large by an arbitrary factor. For example, they are too large by a factor of infinity for x-x. Therefore you should think of them as upper bounds but not as “error estimates” in general.

This is all well known, e.g. the Wikipedia article on interval arithmetic discusses it.

1 Like

Hello,

this is probably not a constructive answer, and I already canceled writing the post once.

But from a strictly mathematical viewpoint you can in fact analyse the rounding and stability behaviour of a given mathematical algorithm; cf., for example, N. J. Higham, Accuracy and Stability of Numerical Algorithms, SIAM, 2002. But please note that this is fundamental mathematics: it will not give you any help when actually coding the algorithm, and it will certainly not tell you why your implementation of a numerically stable algorithm fails. Also, the analysis presented only covers the basic building-block algorithms, and the error bounds are of course usually upper (backward) bounds. If your library has this book or a similar one, it may be worth browsing through it anyway.

1 Like

I first ran into interval analysis about 18 years ago using a language that did not have operator overloading. In comparison, using Julia is like winning the LOTTO twice in one week.

6 Likes

I tried the package following your example and found this way to show the possible error of A.+R.-A.-R:

using IntervalArithmetic
setprecision(Interval,64)
setformat(:full)
a = rand(-100:2000,1024,1024);
r = rand(1024,1024);
A = Interval.(a);
R = Interval.(r);

RES = A.+R.-A.-R;
res = a.+r.-a.-r;

#NOTE to test the results are in the intervals
all(res .∈ RES)

#NOTE the quick way to estimate error 
radius.(RES)

The result of radius(), I think, could be used as an estimate of the error.


The CUDA kernel I posted earlier basically “folds” the first dimension, halving it each time. The corresponding CPU code for a 1024 × 1024 array could be:

M = rand(ComplexF64,1024,1024);
for k ∈ 9:-1:0
    global M  # needed at top level in Julia ≥ 0.7
    M = M[1:2^k, :] .+ M[2^k+1:2^(k+1), :]
end
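The fold-by-half pattern is the standard parallel-reduction shape; a minimal sequential Python sketch of the same idea on a 1-D array may make it clearer (`fold_sum` is just an illustrative name; a real GPU kernel performs the inner loop in parallel):

```python
def fold_sum(xs):
    """Sum 2^k values by repeatedly adding the second half of the
    array onto the first half, halving the length each pass."""
    n = len(xs)
    assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
    xs = list(xs)  # work on a copy
    while n > 1:
        n //= 2
        for i in range(n):
            xs[i] += xs[i + n]  # fold the upper half onto the lower half
    return xs[0]

vals = [float(i) for i in range(1024)]
print(fold_sum(vals))  # 523776.0 — exact here, since the data are
                       # integer-valued; for general floats the fold
                       # rounds differently than left-to-right sum()
```

For floating-point data the folded order generally produces a different (often smaller) rounding error than a left-to-right loop, which is exactly why comparing the GPU result against the interval enclosure is a sensible check.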

The goal is then to test whether the GPU results are in the intervals:

using IntervalArithmetic
setprecision(Interval,64)
setformat(:full)

M = rand(ComplexF64,1024,1024);
Mi = Interval.(M);
for k ∈ 9:-1:0
    global M, Mi  # needed at top level in Julia ≥ 0.7
    M = M[1:2^k, :] .+ M[2^k+1:2^(k+1), :]
    Mi = Mi[1:2^k, :] .+ Mi[2^k+1:2^(k+1), :]
end

#NOTE the following gives true (as it should be)
all( M .∈ Mi )

#NOTE to test
all( M_from_GPU  .∈ Mi )

I do not have access to GPUs right now; I will check that tomorrow :smiley:

This is an amazing community! Big thanks, everybody!

Yes indeed! IntervalArithmetic gives the anticipated result:

code:

using IntervalArithmetic
setprecision(Interval,Float64)
setformat(:full)

x = rand()
X = Interval(x)
println(X-X)
println(@interval x-x)

output:

Interval(0.0, 0.0)
Interval(-1.1102230246251568e-16, 1.1102230246251568e-16)

It is helpful, and the bibliography in the documentation of IntervalArithmetic.jl is also helpful:
https://juliaintervals.github.io/IntervalArithmetic.jl/stable/index.html#Bibliography-1

That is awesome for Julia.

There is recent development of a specialized debugging tool for C/C++ and Fortran:

1 Like