Inaccurate Matrix Multiplication

I think MKL.jl should give the same result as MKL in MATLAB (assuming MATLAB does use MKL, and the exact same version on the same processor). I think you’re not actually getting 0.0 there; MATLAB is just displaying the value as such. Some languages, MATLAB included, show a more user-friendly (well, that’s debatable) version of your number by default.
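
As an illustration in Julia (mimicking a short display format, not what MATLAB actually does internally), printing the value with only a few decimal places makes it look like zero:

julia> using Printf

julia> @printf("%.4f", -3.583133801120832e-16)
-0.0000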

The answer you get is because of floating-point arithmetic (and more…), but the difference between the two results isn’t strictly because of it, so a plain “yes” would be a simplification.
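
The classic illustration of the floating-point part is that most decimal values can’t be represented exactly:

julia> 0.1 + 0.2
0.30000000000000004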

As you can see in PSA: floating-point arithmetic - #8 by simonbyrne:

Across machines, you “might get non-deterministic results” for “BLAS or LAPACK operations” (and also for e.g. “muladd”), not just with the default BLAS, i.e. OpenBLAS, but also with MKL. And not just in Julia: that’s to be expected in any language using these dependencies, such as Python and MATLAB.
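
For example, a fused multiply-add rounds once, while a separate multiply and add round twice, so the two can give different answers (here using fma, which is guaranteed to fuse; muladd may or may not fuse, depending on hardware):

julia> a, b, c = 1.0 + 2.0^-52, 1.0 - 2.0^-52, -1.0;

julia> fma(a, b, c)  # single rounding: the tiny exact result survives
-4.930380657631324e-32

julia> a * b + c     # a*b rounds to 1.0 first, so the sum is exactly zero
0.0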

The reason is differences like “skylake” vs. “tigerlake”: even though floating-point math is deterministic, and should, I believe, be the same across those (and all x86) architectures, that only applies to individual scalar operations (or to the exact same SIMD instructions), not to the full code using them when that code isn’t the same (or doesn’t run in the same sequence). OpenBLAS simply has separate code, different assembly, for each architecture. You can use @Elrod’s pure-Julia code, but I’m not sure it’s NOT special-cased per architecture too (or if it isn’t yet, it might be later). Another thing I can think of is threading: with many threads the order of operations can vary between runs, which will likely add non-determinism, though I doubt it matters for such small matrices. You could try disabling it, as shown below.
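
Two things you can check yourself: floating-point addition isn’t associative, so any change in the order of operations can change the result, and you can disable BLAS threading with a documented call:

julia> (0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)  # order of operations matters
false

julia> using LinearAlgebra

julia> BLAS.set_num_threads(1)  # single-threaded BLAS, to rule out thread effects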

So in short, this is to be expected, and isn’t strictly considered a bug. But:

If you need your answer to contain the right value, you could look into interval arithmetic (Julia has packages for it, e.g. IntervalArithmetic.jl) and use it with Chris Elrod’s code; a sketch follows below. I’m not sure the end-points will be deterministic, actually. Another thing to look into is posits (unums) instead of traditional floating point.
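
A minimal sketch, assuming a recent IntervalArithmetic.jl (the constructor names have changed between releases, so check the package docs):

using IntervalArithmetic        # API varies between versions; this assumes a recent one

x = interval(1) / interval(3)   # an interval guaranteed to contain the true 1/3
println((inf(x), sup(x)))       # the end-points bracket the exact value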

Note that while -3.583133801120832e-16 isn’t exactly 0.0, it is close enough for most purposes, yet ≈ disagrees:

julia> -3.583133801120832e-16 ≈ 0.0
false

It’s been enough for me to use that operator (when porting from MATLAB); it’s an alias for isapprox with the default tolerances. Comparing against exactly 0.0 that way always fails for nonzero values, though: the default tolerance is relative, and no nonzero number is within any relative tolerance of zero, so an absolute tolerance is needed here.

Note, however:

julia> isapprox(-3.583133801120832e-16, 0.0; atol=0.05)
true

I’ve never had to use absolute tolerance (atol) before, or relative tolerance (rtol), so I’m not sure which values are most appropriate; I just took that one from the docs.
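
For what it’s worth, a much tighter absolute tolerance also works here; taking a small multiple of machine epsilon (my own rule of thumb, not an official recommendation):

julia> isapprox(-3.583133801120832e-16, 0.0; atol = 10 * eps())
true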