Floating-point matrices: equal as a whole, but not equal column by column

norm(Y - B * A, Inf) is 0, but when comparing individual columns, such as Y[:,4] - B * A[:,4], the result is not zero. Why?

using TranscodingStreams, CodecXz, Serialization, Downloads
using LinearAlgebra


# Download `url` into a temp file (cached between runs), transparently
# decompress if the filename ends in ".xz", and deserialize the contents.
function urlData(url, fn)
    tFile = joinpath(tempdir(), fn)
    if !isfile(tFile)
        Downloads.download(url, tFile)
    end
    io = open(tFile, "r")
    if lowercase(last(fn, 3)) == ".xz"
        io = TranscodingStream(XzDecompressor(), io)
    end
    t = deserialize(io)
    close(io)
    return t
end

function main()

    url = "https://github.com/maxchendt/FuturesHedge/raw/master/data/YA4.jls.xz"
    fn = "YA4.jls.xz"
    B, Y, A = urlData(url, fn)

    #display(Y - B * A)
    display(norm(Y - B * A, Inf))
    display(Y[:,4] - B * A[:,4])
    #y = B * A[:,4]
    #display(Y[:,4] - y)

    nothing

end

main()
nothing

The output:

0.0
3-element Vector{Float64}:
 -1.3877787807814457e-17
  0.0
  2.7755575615628914e-17

That is probably because the floating-point operations are performed in a slightly different order in the full product B * A than when you compute column by column. If you test isapprox(Y[:,4], B * A[:,4]) to disregard these kinds of numerical errors, you will get true.
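The effect is easy to reproduce without any matrices at all; this is a minimal sketch (the particular numbers are arbitrary) showing that == is sensitive to the last bit while isapprox is not:

```julia
# Summing the same three numbers with a different association changes
# the rounding, so the results differ in the last bit.
a = 0.1 + 0.2 + 0.3     # left-to-right: (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)   # different association

println(a == b)          # false: the results differ by one ulp
println(isapprox(a, b))  # true: equal within the default relative tolerance
```

isapprox (also written `≈`) uses a relative tolerance of sqrt(eps) by default, which comfortably absorbs this kind of roundoff.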


Due to rounding errors, B * A gives only approximately the same result as multiplying B separately by each column of A.

julia> A, B = randn(1000,1000), randn(1000,1000);

julia> (B * A)[:,4] == B * A[:,4]
false

julia> (B * A)[:,4] ≈ B * A[:,4]
true

The reason is that B * A uses a different (faster) algorithm for multiplying the matrices than column-by-column, which does the operations in a different order, and floating-point arithmetic is not associative. Even just summing the same numbers in a different order produces a different result due to roundoff errors accumulating differently.
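The summation-order point can be demonstrated directly. A small sketch (the vector length and seed are arbitrary): Base's sum uses pairwise summation, while foldl(+, x) accumulates strictly left to right, so the two typically differ in the last bits even though they add exactly the same numbers:

```julia
using Random
Random.seed!(0)            # arbitrary seed, just for reproducibility
x = randn(10_000)

s1 = sum(x)                # pairwise summation
s2 = foldl(+, x)           # strict left-to-right accumulation

println(s1 == s2)          # typically false: roundoff accumulates differently
println(isapprox(s1, s2))  # true: the difference is at the roundoff level
```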


Thanks so much!

I have another big matrix where the difference is about 10^-8. I double-checked my code, and now I am no longer puzzled.
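A difference of 10^-8 can still be pure roundoff: the absolute error scales with the magnitude of the entries and the number of terms summed, while the relative error stays near machine precision. A hedged sketch (the sizes and scale below are made up for illustration) comparing a column of a large-entry product against the corresponding matrix-vector product:

```julia
using LinearAlgebra

B = 1e4 .* randn(2000, 2000)  # large entries inflate the absolute roundoff
A = 1e4 .* randn(2000, 2000)
C = B * A

d = norm(C[:,1] - B * A[:,1], Inf)  # absolute difference: can look "big"
r = d / norm(C[:,1], Inf)           # relative difference: near machine eps

println(d)   # absolute error, amplified by the entry magnitudes
println(r)   # relative error stays tiny
```

So the meaningful check is the relative error (or isapprox, which uses a relative tolerance by default), not the raw absolute difference.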