I’ve run into some confusion about rounding in linear algebra, which is causing trouble in an image reconstruction algorithm I’m working on. When I run the example code below, the trace and the two sums are sometimes equal, as I expect, but just as often one, or all three, of the values differ. I’m confused as to why this is happening — can anyone enlighten me?

```julia
using LinearAlgebra

A = rand(Float32, 100);
T = zeros(Float32, 100, 100)
for i = 1:100
    T[i, i] = A[i]
end
println("-----------------")
println("sum T: ", sum(T))
println("tr T: ", tr(T))
println("sum A: ", sum(A))
```

Summing floating-point numbers is not guaranteed to be exact: information is lost when the magnitudes of two numbers differ too much (1e10 + 1e-10, for example). If you depend on the comparison, you can use the ≈ operator.
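A tiny demo of both points — magnitude loss and order dependence (values shown hold for the default `Float64`):

```julia
# A tiny addend below the resolution of the larger number simply vanishes:
x = 1.0e10 + 1.0e-10
x == 1.0e10                              # true — the 1e-10 was lost

# Order (associativity) of summation also matters:
(0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)   # false — different roundoff

# ≈ (isapprox) tolerates such small roundoff differences:
(0.1 + 0.2) + 0.3 ≈ 0.1 + (0.2 + 0.3)    # true
```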

Basically, what is happening is that the sum and tr calculations compute the sum in a different order (more precisely, with a different associativity), which gives slightly different results in floating-point arithmetic because roundoff error accumulates differently.
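You can see the order dependence directly by comparing a strict left-to-right accumulation (`foldl`) with the built-in `sum` (which uses a different grouping):

```julia
A = rand(Float32, 100)

left_to_right = foldl(+, A)   # accumulate strictly in loop order
builtin = sum(A)              # pairwise (divide-and-conquer) order

# The two orders can disagree in the last bits, but agree approximately:
left_to_right ≈ builtin       # true
```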

(The sum function uses a fancier summation algorithm than a simple loop, called “pairwise” summation, that reduces the accumulated roundoff error.)
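For illustration, here is a minimal sketch of the pairwise idea — recursively split the array, sum each half, and add the two partial sums. This is not Base’s actual implementation (which is more carefully tuned, with a larger block-size cutoff), just the shape of the algorithm:

```julia
# Minimal pairwise summation sketch: error grows like O(log n)
# rather than O(n) for a naive left-to-right loop.
function pairwise_sum(x, lo=firstindex(x), hi=lastindex(x))
    if hi - lo < 8                  # small block: plain loop
        s = zero(eltype(x))
        for i in lo:hi
            s += x[i]
        end
        return s
    else
        mid = (lo + hi) ÷ 2         # split in half and recurse
        return pairwise_sum(x, lo, mid) + pairwise_sum(x, mid + 1, hi)
    end
end
```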

Thank you all for your clear responses. This all makes sense to me. I’m working on an iterative image reconstruction algorithm based on ADMM. I am in the process of streamlining some of the math and encountered this when vectorizing some matrix math involving scaled identity matrices. I was checking to make sure that the results matched and was puzzled when they didn’t, differing only by small amounts. To be clear, the algorithm doesn’t rely on any equalities in this way — this was just my lazy way of checking my math.

That’s good to hear. In that case, you can follow @longemen3000’s suggestion and check out the isapprox function (can also be written as ≈). It is well suited to this sort of checking.
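Applied to the original snippet, it looks like this (by default `isapprox` uses a relative tolerance based on the square root of the element type’s `eps`; you can also pass `rtol`/`atol` explicitly):

```julia
using LinearAlgebra

A = rand(Float32, 100)
T = zeros(Float32, 100, 100)
for i = 1:100
    T[i, i] = A[i]
end

sum(T) ≈ tr(T)                        # true, despite bit-level differences
isapprox(sum(T), sum(A); rtol=1f-4)   # with an explicit relative tolerance
```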