I am testing two packages in Julia 1.6.2 (TSAnalysis.jl and a new private package in the same area), as I have been using them with the LTS version until now. They seem to be working fine and pass all unit tests except one. In the problematic test, I have noticed that there are small differences (in the order of 1e-16 to 1e-8) in the output.

I cannot post a reproducible example for now, since the relevant code is still not available on Git and might be too complicated to describe in detail here. However, broadly speaking, I am processing a model with large Symmetric block diagonal covariance matrices with the Kalman filter in TSAnalysis.jl. At the second iteration of the filter (t=2), the covariance matrix P_post already shows 1e-14 differences compared to the LTS output. Over time, this gap increases.

I am unsure what is creating these differences. I suppose it could be the inv function for symmetric matrices, the use of Symmetric(...) itself, changes in the lyap function, or changes in more standard matrix operations (e.g., matrix products, adjoints). Therefore, I was wondering whether it is possible to pin LinearAlgebra to the LTS version, or whether there exists a list of critical changes.

The standard libraries are fixed for each Julia version (for now). So if you want different versions of LinearAlgebra, you need different Julia versions.

Thanks, will do. I was actually looking for a list of critical changes for the functions listed above, but I can certainly look into the list of commits. [I have edited the OP to explain it upfront.]

It's annoying, but the amplification of floating-point errors is kind of a fundamental issue in numerical computation. It's better to have your tests check whether the error is under some bound than to require exact replication of previous results.

For example, consider solving this Ax=b system by two different algorithms, SVD and QR. Both methods produce approximate x's that satisfy the equation to roughly 1e-15. But the x's they produce differ from each other, and from the "true" x, by about 1e-6. Neither solved x is better than the other. (Similarly, if there's a 1e-8 difference between the answers produced for your problem by Julia 1.5 linear algebra and 1.6, it's unlikely that either answer is better than the other.) The issue is that x cannot be computed to more than six digits of precision in 64-bit floating-point arithmetic for an Ax=b problem with the 8x8 Hilbert matrix A. A unit test for this problem should simply check that norm(Ax - b) < 1e-15 and that the computed x differs from the original by less than 1e-05.
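The example can be reproduced with a short sketch (my own construction: the 8x8 Hilbert matrix with an assumed true solution of all ones):

```julia
using LinearAlgebra

# 8x8 Hilbert matrix: a classic ill-conditioned example (cond(A) ≈ 1.5e10)
A = [1 / (i + j - 1) for i in 1:8, j in 1:8]
x_true = ones(8)        # assumed "true" solution
b = A * x_true

x_qr  = qr(A) \ b       # solve via QR factorization
x_svd = svd(A) \ b      # solve via SVD

# Both residuals are tiny (on the order of 1e-15)...
@show norm(A * x_qr - b) norm(A * x_svd - b)
# ...but both solutions differ from x_true at roughly the 1e-6 level
@show norm(x_qr - x_true) norm(x_svd - x_true)
```

Both solvers are backward stable, so their residuals sit at machine-precision level even though the solutions themselves carry errors amplified by the condition number.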

Are the 1e-5 and 1e-6 specific to this example, or do they generalise to a wider class of linear systems? If they do generalise, where can I read more about it?

The norm sounds like a good idea for handling floating-point errors in unit tests, and I will use it. For now, I was simply using the round function with digits=10, following a similar logic.
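As a side note, round-based comparisons can be brittle right at a rounding boundary, whereas a norm/tolerance comparison makes the accepted error explicit. A small illustration with made-up numbers:

```julia
using LinearAlgebra

# Two values that differ by only 2e-11 can round to *different* values
# at digits=10, because they straddle a rounding boundary:
@show round(6e-11; digits = 10)   # rounds up to 1.0e-10
@show round(4e-11; digits = 10)   # rounds down to 0.0

# A norm-based tolerance has no such cliff: the accepted error is explicit.
x_ref = [1.0, 2.0, 3.0]
x_new = x_ref .+ 1e-11            # illustrative cross-version drift
@show norm(x_new - x_ref) < 1e-8
@show isapprox(x_new, x_ref; atol = 1e-8)
```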

The approach based on the round function is enough for a battery of tests, except for the one described in my first post. Do you think that the block-diagonal structure of the covariance matrix used in this particular test might be the cause of the numerical instability?

I tried all the versions listed in Julia Downloads (Old releases). It works fine up to v1.4.2 (i.e., matching the LTS output to the same floating-point precision). It stops working with v1.5.4. I will edit this post later today if I get to the bottom of it.

You're welcome; I'm glad that helps. To your questions:

The 1e-06 error was specific to the given example, but it does follow a general theory for linear systems and, even more broadly, for all numerical computation. The theory for Ax=b problems is that, for appropriately stable solution algorithms, the error in x obeys the bound

|| x - \hat{x} ||/||x|| < O(\epsilon_m \; \kappa(A))

where x is the true solution of Ax=b, \hat{x} is the solution computed in finite-precision arithmetic, || \cdot || is the 2-norm, \epsilon_m is the so-called machine precision, equal to half the distance from 1.0 to the next-highest floating-point number, \kappa(A) is the condition number of the matrix, equal to the ratio between its largest and smallest singular values, and O(\cdot) means "order of," i.e. a small positive scaling factor times the enclosed quantity.

In my example, the machine precision of 64-bit floats is \epsilon_m = 2^{-52} \approx 10^{-16} and the condition number of the given matrix is \kappa(A) \approx 10^{10}, giving an error bound of about 10^{-6}. (Note I reported unnormalized errors || x - \hat{x} || rather than || x - \hat{x} ||/||x|| in my previous message but the example had ||x|| \approx 1 anyway.)

```julia
julia> eps(Float64)
2.220446049250313e-16

julia> 2^-52
2.220446049250313e-16

julia> cond(A)
1.5257575516147259e10

julia> eps(Float64)*cond(A) # bound on norm of error for Ax=b solve
3.3878623275967487e-6
```

Trefethen and Bau's "Numerical Linear Algebra" textbook gives a really nice, succinct introduction to this theory. More generally, any kind of computational problem has a condition number, which measures how much it amplifies the floating-point errors inherent in finite-precision arithmetic.

Norms are great for unit tests!

I bet if you compute the condition number of the block-diagonal matrix in your example, you'll find it is about 10^8. That follows from \kappa(A) \approx \epsilon_m^{-1} \; || x - \hat{x} ||/||x|| \approx 10^{16} \cdot 10^{-8}.
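A quick back-of-the-envelope check, using a toy block-diagonal matrix of my own construction (not your actual covariance):

```julia
using LinearAlgebra

# Infer the condition number implied by the observed drift:
drift = 1e-8                       # observed relative difference (from the post)
κ_est = drift / eps(Float64)       # ≈ 4.5e7, i.e. about 10^8
@show κ_est

# Sanity check on a small Symmetric block-diagonal matrix with one
# nearly singular block (illustrative only):
a = 1.0 - 1e-8
B = [1.0 a; a 1.0]                 # eigenvalues ≈ 2 and 1e-8
P = Symmetric([B zeros(2, 2); zeros(2, 2) Matrix(1.0I, 2, 2)])
@show cond(P)                      # large: dominated by the worst block
```

Note that the condition number of a block-diagonal matrix is set by the extreme singular values across all blocks, so a single near-singular block drives the whole matrix's conditioning.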

Thanks again, I will definitely take a look at this textbook.

The issue I described above appears when moving from 1.4.2 to 1.5.0. I am comparing these versions of Julia on GitHub, and I suppose that the differences in floating-point output stem from changes in LinearAlgebra such as:

the inclusion of spmv! (for which I cannot find official documentation);

the new promotion rules for Diagonal and Symmetric matrix sums and subtractions.

I think these changes are non-breaking per se, but when used within a loop they might amplify floating-point differences.
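The promotion-rule change is easy to observe directly; a minimal sketch (the exact return type depends on the Julia version, which is the point):

```julia
using LinearAlgebra

D = Diagonal([1.0, 2.0])
S = Symmetric([1.0 0.5; 0.5 2.0])

# On v1.5+ the sum keeps a structured (Symmetric) wrapper; earlier
# versions promoted differently. The numerical entries agree either way,
# but a different wrapper type can route subsequent operations
# (inv, *, etc.) through different methods, which changes rounding
# at the 1e-16 level inside an iterative filter.
@show typeof(D + S)
@show D + S == S + D
```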

Do you happen to use random numbers in the test? The way random numbers are generated changed with v1.5, so if you do use random numbers that could be the reason.
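If the tests do draw random numbers, one mitigation (a sketch; StableRNGs.jl is a third-party package I am suggesting, not something from the thread) is to pass an explicit RNG so the dependence is at least visible:

```julia
using Random

# An explicit, seeded RNG makes the draw deterministic *within* a Julia
# version. The stream produced by the default generator changed in v1.5,
# so for cross-version stability the usual recommendation is the
# StableRNGs.jl package, whose streams do not change between releases.
rng = MersenneTwister(1234)
data = randn(rng, 3)

# Re-seeding reproduces the same draw in the same Julia version:
rng2 = MersenneTwister(1234)
@show randn(rng2, 3) == data
```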

I don't think that you're getting a less precise answer with 1.5.0. I think instead you get equally imprecise answers in 1.4.2 and 1.5.0 (as in my example, where xsvd and xqr are equally imprecise). But if you use the 1.4.2 answer as the standard, then it appears that the 1.5.0 answer is in error.

Neither answer is better than the other; they're equally imprecise. The error shows up in different ways depending on how the algorithm is carried out, perhaps due to changes in promotion rules. But the change does not mean a worsening, just a difference.

This is expected behavior from a theory-of-numerics perspective. Maybe there's more going on in your example than I know about, but what you're seeing is expected. The fix is to make your unit tests check that the computation aligns with theory (error bounded as discussed above), rather than to expect higher accuracy than is possible.
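Concretely, a theory-aligned unit test might look like this sketch (the matrix and the safety factor C are my own illustrative choices):

```julia
using LinearAlgebra, Test

A = [1 / (i + j - 1) for i in 1:8, j in 1:8]   # ill-conditioned 8x8 Hilbert matrix
x_true = ones(8)
b = A * x_true
x̂ = A \ b

C = 100.0   # modest safety factor standing in for the O(⋅) in the bound

@testset "Ax = b solve respects theoretical error bounds" begin
    # backward error: residual at the machine-precision level
    @test norm(A * x̂ - b) ≤ C * eps() * norm(A) * norm(x̂)
    # forward error: bounded by ε_m ⋅ κ(A), not by ε_m alone
    @test norm(x̂ - x_true) / norm(x_true) ≤ C * eps() * cond(A)
end
```

Tests written this way pass on any Julia version whose linear algebra is backward stable, instead of being pinned to one version's exact bit pattern.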