I found that matrix-vector multiplication in Julia uses a different algorithm when the matrix is wrapped in Transpose or Adjoint, compared to when it is a regular Matrix. As a result, the multiplication can sometimes produce slightly different results. You can see this by running the following small script:
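A minimal example of this kind (a random A and x stand in here, since the exact data doesn't matter; the materialized Matrix path is compared against the lazy Transpose wrapper) could look like:

using LinearAlgebra, Random

Random.seed!(0)
A = randn(200, 200)
x = randn(200)

B = Matrix(transpose(A))           # materialized copy: an ordinary Matrix holding Aᵀ
y1 = B * x                         # plain-Matrix code path
y2 = transpose(A) * x              # lazy Transpose-wrapper code path (different BLAS routine)
println(maximum(abs.(y1 .- y2)))   # typically a few ULPs, usually not exactly 0.0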
Neither. Those are just floating-point rounding errors.
As a result of their finite precision, floating-point operations are not associative, e.g.:

a = 1
b = 1e-20
@assert (b + a) - a != b + (a - a)   # left side is 0.0 because b is absorbed into a; right side is 1e-20
This means that results naturally depend on the exact order of operations you perform, and they are generally expected to change across algorithms. However, results shouldn't change much if the algorithms are well designed. What you see here is at the level of the accuracy of Float64, so basically a difference of one unit in the last place (ULP).
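For reference, one ULP at the scale of values around 1.0 is:

eps(1.0)   # ≈ 2.220446049250313e-16, the spacing between 1.0 and the next larger Float64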
If your algorithm/application is sensitive to these tiny deviations, then you should consider:
- being very, very careful with every operation
- using bigger/more precise floats (if suitable)
- using exact arithmetic, e.g. fixed-point numbers or rationals (see the sketch after this list)
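For example, here is a quick sketch of the last two options, reusing the a/b example from above (the 256-bit BigFloat default and the operand values are just illustrative):

# Higher precision: with BigFloat's default 256-bit mantissa this particular sum
# is represented exactly, so the discrepancy disappears here (in general more
# precision only shrinks the error, it does not vanish).
a = big(1.0)
b = big(1e-20)
@assert (b + a) - a == b + (a - a)

# Exact arithmetic: rationals never round, so associativity always holds.
a = big(1) // 1
b = big(1) // big(10)^20
@assert (b + a) - a == b + (a - a)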
And the reason things are done in a different order when using a transposed matrix is probably due to questions of memory locality, i.e. it's usually good for performance to access memory in sequential order.
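To illustrate (this is only a rough sketch, not what LinearAlgebra/BLAS actually does): since Julia arrays are column-major, a cache-friendly A * x accumulates column by column, while a cache-friendly transpose(A) * x forms one dot product per column of A. Both walk A in memory order, but they sum the terms in different orders, which is exactly where ULP-level differences come from.

function matvec_plain(A, x)                 # y = A * x
    y = zeros(promote_type(eltype(A), eltype(x)), size(A, 1))
    for j in axes(A, 2), i in axes(A, 1)    # walk A down each column
        y[i] += A[i, j] * x[j]
    end
    return y
end

function matvec_transposed(A, x)            # y = transpose(A) * x
    # one dot product per column of A, again walking A down each column
    [sum(A[i, j] * x[i] for i in axes(A, 1)) for j in axes(A, 2)]
end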
I already knew the difference came from floating-point arithmetic, but I wanted to check whether it was expected behavior or something unintended. From your answers, I now understand it’s neither an intentional spec nor a bug, and that I should handle it by reordering calculations or making the system more stable.
If your algorithm/application is sensitive to these tiny deviations…
Your guess was spot on. When I posted the first message, I was struggling to port a Python implementation to Julia and was trying to figure out why the results weren't matching. After digging into it, I found that the issue was caused by the large condition numbers of the design matrix and the Jacobian.
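For anyone landing here later, a tiny made-up example (not the actual design matrix) of how a large condition number amplifies ULP-sized input differences:

using LinearAlgebra

A = [1.0 1.0; 1.0 1.0 + 1e-12]       # nearly singular toy matrix, cond(A) ≈ 4e12
b = [2.0, 2.0 + 1e-12]
b2 = copy(b)
b2[1] += eps(b2[1])                  # nudge one entry by a single ULP

x1 = A \ b
x2 = A \ b2
println(cond(A))                     # ≈ 4e12
println(norm(x2 - x1) / norm(x1))    # on the order of 1e-4: the 1-ULP nudge is amplified by ~cond(A)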