Matrix multiplication precision issue

Why do A1 and A2 have different precision, so that A1 != A2, even though mathematically they should give the same result? I'm wondering: is there anything special about matrix slicing that would affect the precision, or are the results caused by something else?

Example (Julia 0.6.4):
A = randn(Float64, 200, 100)
A = [ones(Float64, 200) A]
B = randn(Float64, 200, 1)

ApB = A'B
idx = [1, 2]
A1 = ApB[idx]

A2 = (A[:, idx]' * B)[:]
A1 == A2

A2[1] == sum(B)

Also, please quote your code (format it as a code block).

I think you're just hitting rounding errors (as already said above). You may find it interesting to try the same thing using DoubleFloats (see below). Note that if you use ≈ instead of == in your example it also "works", but that doesn't really answer your question, I guess.

julia> using DoubleFloats
julia> using Random
julia> Random.seed!(123);
julia> A = hcat(ones(200), randn(200, 100));
julia> B = randn(200);
julia> # let's declare objects in double double precision
julia> A2 = convert(Matrix{Double64}, A);
julia> B2 = convert(Vector{Double64}, B);
julia> # Now let's do the computations as you did
julia> ApB = A'B;
julia> idx = [1, 2];
julia> A_1 = ApB[idx];
julia> A_2 = (A[:, idx]' * B)[:];
julia> # the results are not identical (floating point error)
julia> A_1 - A_2
2-element Array{Float64,1}:
 0.0                   
 1.9539925233402755e-14
julia> # let's do the same thing in double double...
julia> ApB2 = A2'B2;
julia> A_12 = ApB2[idx];
julia> A_22 = (A2[:, idx]' * B2)[:];
julia> # the rounding error goes away
julia> A_12 - A_22
2-element Array{Double64,1}:
 0.0
 0.0

link to the awesome DoubleFloats package repo
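Following up on the ≈ suggestion: with the plain Float64 variables from the session above, the exact comparison fails while the approximate one passes (exact outputs depend on your seed, but with a difference of ~2e-14 as shown above you'd expect):

julia> A_1 == A_2
false

julia> A_1 ≈ A_2
true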

Rounding issues, as others have said. Use this to compute the relative error (in norm):

julia> using LinearAlgebra  # for norm on Julia ≥ 0.7

julia> norm(A1 - A2) / max(norm(A1), norm(A2))
2.0277114478533147e-15

Seems small enough. And here’s what you can use instead of ==:

julia> isapprox(A1, A2; rtol = 1e-10)
true

julia> isapprox(A1, A2; rtol = 1e-15)
false

Or, shorter but perhaps less readable:

julia> ≈(A1, A2; rtol = 1e-10)
true

julia> ≈(A1, A2; rtol = 1e-15)
false
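For reference, when no rtol is given, isapprox defaults to a relative tolerance of sqrt(eps) of the element type, which for Float64 is far looser than the error above, so the bare operator already does the right thing here:

julia> sqrt(eps(Float64))  # default rtol used by isapprox for Float64
1.4901161193847656e-8

julia> A1 ≈ A2
true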

I've read the previous post already, but I still don't understand. Aren't the numbers in the first two columns of matrix A that are used in the calculation the same in both cases? If they are the same, the results should be the same, following the rules of floating-point arithmetic.

BTW, thanks for the tip regarding code quotation.

Thanks for the tip on DoubleFloats. Really helps!

I think the reason lies in the ordering of operations. Julia may reorder operations to get optimal performance, and if you sum floating-point numbers in different orders you may get different results.

Small experiment below:

using Random
using LinearAlgebra

Random.seed!(125)
A = hcat(ones(200), randn(200, 100));
B = randn(200);

ApB = A'B;
idx = [1, 2];

# -- let's unroll the operations partially

ApB = Vector{Float64}(undef, 101)
@inbounds for i ∈ 1:101
    ApB[i] = dot(view(A, :, i), B)
end

A2 = Vector{Float64}(undef, 2)
@inbounds for i ∈ idx
    A2[i] = dot(view(A, :, i), B)
end

ApB[idx] == A2 # true as expected

# -- now let's do the same thing but using
# -- a different ordering for ApB and for A2

rp1 = randperm(200)
rp2 = randperm(200)

ApB = Vector{Float64}(undef, 101)
@inbounds for i ∈ 1:101
    ApB[i] = dot(view(A, rp1, i), B[rp1])
end

A2 = Vector{Float64}(undef, 2)
@inbounds for i ∈ idx
    A2[i] = dot(view(A, rp2, i), B[rp2])
end

ApB[idx] == A2 # false !
ApB[idx] ≈ A2  # still true
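The root cause is simply that floating-point addition is not associative, so summing the same 200 products in a different order can change the last few bits. A minimal illustration (standard Float64 behaviour, independent of the matrices above):

(0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)   # false
(0.1 + 0.2) + 0.3 - (0.1 + (0.2 + 0.3))  # 1.1102230246251565e-16, one ulp at 0.6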

Thanks for the help, man!

That may explain the issue. It seems related to the Julia implementation of floating-point arithmetic.
I tried it in Python and A1 == A2 is true.

import numpy as np
import numpy.random as nprd

In [27]: A = np.c_[np.ones((200, 1)), nprd.randn(200, 100)]

In [28]: B = nprd.randn(200, 1)

In [29]: ApB = np.dot(A.T, B)

In [30]: idx = [1, 2]

In [31]: A1 = ApB[idx]

In [32]: A2 = np.dot(A[:, idx].T, B)

In [33]: A1 == A2
Out[33]:
array([[ True],
       [ True]])

In [34]: type(A1[0][0])
Out[34]: numpy.float64

That's misleading (or you got lucky). I just gave it a shot and, unsurprisingly:

>>> A1 - A2
array([[-1.77635684e-15],
       [-5.32907052e-15]])

OK. That’s weird.

Mine is:

In [27]: A = np.c_[np.ones((200, 1)), nprd.randn(200, 100)]

In [28]: B = nprd.randn(200, 1)

In [29]: ApB = np.dot(A.T, B)

In [30]: idx = [1, 2]

In [31]: A1 = ApB[idx]

In [32]: A2 = np.dot(A[:, idx].T, B)

In [33]: A1 == A2
Out[33]:
array([[ True],
       [ True]])

In [34]: type(A1[0][0])
Out[34]: numpy.float64

In [36]: A1 - A2
Out[36]:
array([[0.],
       [0.]])

Have you tried it several times? I set the seed to 1515 (at least doing that here gives me a standard machine-epsilon-sized error), anyway.

Another thing to note is that your Python might use MKL as a BLAS backend (mine does), whereas your Julia would use OpenBLAS; slight differences would then be expected.

PS: as a side note, I'm definitely not an expert, but I'm pretty sure the Julia devs didn't roll their own flavor of floating-point arithmetic. There is a standard (IEEE 754) which is almost certainly followed scrupulously. However, the order in which the operations of a matrix-vector multiplication are actually carried out can depend (I think) on which BLAS backend you use.
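If you want to check which backend each side is using: NumPy reports its linked BLAS/LAPACK via np.show_config(), and on the Julia side you can ask the LinearAlgebra standard library (the exact call depends on your Julia version). A sketch:

julia> using LinearAlgebra

julia> BLAS.vendor()   # older Julia versions; typically :openblas64 for the official binaries
:openblas64

julia> # on Julia >= 1.7, prefer BLAS.get_config()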


Yes, I tried it several times; the results are the same in Python.

BTW, if the different results are caused by the order of computation, does that mean the implementations of full-matrix multiplication and sliced (block) matrix multiplication are different in Julia?

No, that's what I was trying to say (probably clumsily) above. These core linear algebra operations (typically matrix-matrix and matrix-vector products, as far as I know) are handled by a BLAS backend. See also the LinearAlgebra docs, in particular the "BLAS Functions" section.

For instance, in the LinearAlgebra source code you'll see lines like GC.@preserve x y BLAS.dot(...) etc. These call into your BLAS backend, and results might differ depending on which implementation of BLAS you're using.
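You can actually see the ordering effect without any slicing. The first column of A is all ones, so mathematically (A' * B)[1] is exactly sum(B); but BLAS accumulates that dot product in a different order than Julia's own pairwise sum, so the two results may differ in the last bits (reusing A and B from the experiment above; whether == holds depends on the data):

(A' * B)[1] == sum(B)  # same value mathematically, but may be false in Float64
(A' * B)[1] ≈ sum(B)   # true up to rounding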

OK. I got it. Thx man!