While trying out the R packages JuliaCall and XRJulia for calling Julia functions from R, I got interested in determining, as accurately as possible, the determinant of the 12-by-12 Hilbert matrix.

julia> using LinearAlgebra
julia> A = [1/(i+j-1) for i in 1:12, j in 1:12];
julia> det(A)
2.732614032613825e-78

Let’s compare this with some other systems I can get my hands on.

Is this good enough? Pari/GP gets it exactly right even if we define the matrix in floating point, probably because it calculates with 128-bit precision by default (I know we could do this in Julia, too). Though the first four systems all use Float64, the differences between them seem significant. Are there systematic differences in the way determinants are computed?
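As a sketch of "doing this in Julia, too" (my own experiment, not from the thread): BigFloat's default 256-bit precision stands in for Pari/GP's higher default precision. Note the matrix entries must be constructed in extended precision from the start, not converted from Float64.

```julia
using LinearAlgebra

# Build the Hilbert matrix directly in BigFloat so no entry is ever
# rounded to Float64 first.
A = [1 / BigFloat(i + j - 1) for i in 1:12, j in 1:12]
d = det(A)
```

With 256 bits of working precision, the 12-by-12 case is far from the edge of representability, so the leading digits of `d` should be trustworthy.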

I get a different result with just a generic Julia binary on Windows 10. The compiler can optimize differently on different architectures, which can change the result at this level! I had heard about weird stuff like this, but first encountered it while doing some precision debugging of DelayDiffEq.jl, where different OSes would give slightly different answers:

The moral of the story is that 64-bit floating-point numbers are not trustworthy at this level, no matter what you use. Different BLAS/LAPACK backends can order the summations differently (the compiler can sometimes change the ordering too, SIMD can affect it, etc.), and some orderings will be better or worse on a given problem + architecture combo, but that can flip depending on the problem you’re looking to solve. The only way to really make this accurate is to embrace the fact that Julia gives you generic algorithms over abstract types, and compute it with a number type that is efficient yet accurate enough for your problem. For example, Julia can do it with rationals as well:
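A minimal sketch of that rational-arithmetic computation (exact, since `Rational{BigInt}` never rounds):

```julia
using LinearAlgebra

# Rational{BigInt} entries; big() keeps the intermediate products in the
# elimination from overflowing a machine integer.
A = [1 // big(i + j - 1) for i in 1:12, j in 1:12]
d = det(A)   # exact Rational{BigInt}
```

The generic LU factorization works over any field, so `det` on a `Rational{BigInt}` matrix returns the determinant exactly; the reciprocal of the result is an integer, a known property of Hilbert matrices.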

Furthermore, since the matrix is so ill-conditioned, different ways of computing the determinant in floating-point arithmetic give quite different results.
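To illustrate (my example; the thread doesn’t show this comparison), here are two mathematically equivalent routes to the determinant in Float64:

```julia
using LinearAlgebra

A = [1 / (i + j - 1) for i in 1:12, j in 1:12]

d_lu = det(A)                     # via LAPACK's LU factorization
d_qr = abs(prod(diag(qr(A).R)))   # |det(A)| from the QR factorization

# With cond(A) on the order of 1e16, the two need not agree
# to more than a few digits.
```

Both are "correct" algorithms; the spread between them is a symptom of the conditioning, not a bug in either factorization.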

The discussion here is very interesting. In https://github.com/JuliaLang/julia/issues/24568 we had a long discussion on the problems/merits of -ffast-math. My understanding from it is that if you don’t use -ffast-math or other value-changing optimizations, which Julia deliberately does not, then you get predictable results from Julia. That is the motivation for keeping the pi sum benchmark without -ffast-math.

However, from the discussion here it seems that Julia’s determinant is not predictable and depends on the platform. Isn’t that inconsistent with the conclusion we reached in https://github.com/JuliaLang/julia/issues/24568?

Because if you cannot predict what exact floating-point number you get for a well-defined problem (even an ill-conditioned one), then we might as well give up on that kind of predictability and enable -ffast-math for the pi sum benchmark, I would think.
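To make the reassociation point concrete, here is a minimal sketch (my example, not from the benchmark suite) of the kind of transformation -ffast-math licenses: merely changing the summation order of the pi-sum series can perturb a Float64 result.

```julia
# Both directions approximate pi^2/6, but the Float64 results
# need not be bitwise identical, since floating-point addition
# is not associative.
s_fwd = sum(1 / (k * k) for k in 1:10_000)
s_rev = sum(1 / (k * k) for k in 10_000:-1:1)
```

Summing small terms first (`s_rev`) is generally the more accurate order here; a compiler allowed to reassociate may silently pick either.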