Pinning LinearAlgebra in Julia 1.6.2

Hi,

I am testing two packages on Julia 1.6.2 (TSAnalysis.jl and a new private package in the same area), having used them with the LTS version until now. They seem to work fine and pass all unit tests except one. In the problematic test, I have noticed small differences (on the order of 1e-16 to 1e-8) in the output.

I cannot post a reproducible example for now, since the relevant code is not yet available on Git and might be too complicated to describe in detail here. Broadly speaking, however, I am running a model with large Symmetric block-diagonal covariance matrices through the Kalman filter in TSAnalysis.jl. At the second iteration of the filter (t=2), the covariance matrix P_post already shows differences of about 1e-14 compared to the LTS output, and this gap grows over time.

I am unsure what is creating these differences. I suppose it could be the inv function for symmetric matrices, the use of Symmetric(...) itself, changes in the lyap function, or changes in more standard matrix operations (e.g., matrix products, adjoints). I was therefore wondering whether it is possible to pin LinearAlgebra to the version shipped with the LTS release, or whether there is a list of critical changes.

Thank you.

The standard libraries are fixed for each Julia version (for now), so if you want a different version of LinearAlgebra you need a different Julia version.

Not sure I get what you mean here. A list of changes to LinearAlgebra? The code is here: https://github.com/JuliaLang/julia/tree/master/stdlib/LinearAlgebra, so you could use the various git tools to dig into it.

1 Like

Thanks, will do. I was actually looking for a list of critical changes to the functions listed above, but I can certainly look through the list of commits. [I have edited the OP to explain this upfront.]

This is quite annoying for comparisons :confused:

It’s annoying, but the amplification of floating-point errors is a fundamental issue of numerical computation. It’s better to have your tests check whether the error is under some bound than to require exact replication of previous results.

For example, consider solving the Ax=b system below with two different algorithms, SVD and QR. Both methods produce approximate x’s that satisfy the equation to roughly 1e-15, but the x’s they produce differ from each other, and from the “true” x, by about 1e-6. Neither computed x is better than the other. (Similarly, if there’s a 1e-8 difference between the answers produced for your problem by Julia 1.5 linear algebra and 1.6, it’s unlikely that either answer is better than the other.) The issue is that, for an Ax=b problem with the 8x8 Hilbert matrix A, x cannot be computed to more than six digits of precision in 64-bit floating-point arithmetic. A unit test for this problem should simply check that norm(A*x-b) < 1e-15 and that the computed x differs from the original by less than 1e-5.

julia> using LinearAlgebra

julia> A = [1/(i+j-1) for i in 1:8, j in 1:8];  # the 8x8 Hilbert matrix

julia> x = rand(8)
8-element Vector{Float64}:
 0.1278532352020687
 0.7676493945865812
 0.00913463981518925
 0.6222238155162669
 0.10323435913906587
 0.9791508421975381
 0.6803781015368064
 0.9531836668412241

julia> b = A*x;

julia> norm(A*x-b)
0.0

julia> Asvd = svd(A);

julia> Aqr = qr(A);

julia> xsvd = Asvd\b
8-element Vector{Float64}:
 0.12785323521152347
 0.7676493940367002
 0.009134647388002199
 0.6222237729310713
 0.10323447719153722
 0.979150671248731
 0.6803782255013682
 0.953183631315329

julia> xqr = Aqr\b
8-element Vector{Float64}:
 0.12785323514108654
 0.7676493978160822
 0.009134598049168947
 0.6222240397918233
 0.10323375917314824
 0.9791516866231814
 0.6803775032989913
 0.9531838349809921

julia> norm(A*xsvd-b)
1.1511037319626838e-15

julia> norm(A*xqr-b)
4.1910000110727263e-16

julia> norm(x-xsvd)
2.4831498398415594e-7

julia> norm(x-xqr)
1.229319971970093e-6

julia> norm(xsvd-xqr)
1.4775841522815336e-6

5 Likes

Thank you, John. That’s helpful.

Just a few points:

  1. Are the 1e-5 and 1e-6 bounds specific to this example, or do they generalise to a wider class of linear systems? If they do generalise, where can I read more about it?
  2. The norm sounds like a good idea for handling floating-point errors in unit tests, and I will use it :slight_smile: (see the sketch after this list). For now, I was simply using the round function with digits=10, with a similar logic in mind.
  3. The approach based on the round function is enough for the rest of the test battery; it fails only for the test described in my first post. Do you think that the block-diagonal structure of the covariance matrix used in that particular test might be the cause of this numerical instability?
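
For instance, I am thinking of something along these lines, reusing your Hilbert example as a stand-in for my model (the bounds are illustrative):

using Test, LinearAlgebra

# Hypothetical stand-in for the real test case:
A = [1/(i+j-1) for i in 1:8, j in 1:8]   # 8x8 Hilbert matrix, cond(A) ≈ 1.5e10
x = ones(8)
b = A * x
xhat = A \ b   # solution computed in floating point

@test norm(A * xhat - b) < 1e-13   # residual stays near machine precision
@test norm(xhat - x) < 1e-4        # forward error within a conditioning-based bound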

There is also https://github.com/JuliaLang/julia/blob/master/HISTORY.md, which should list all major changes. But all changes should be non-breaking, so if you find something breaking, please report it.

Thanks. I am incrementally upgrading Julia starting from the LTS version to understand what happened!

I tried all the versions listed in Julia Downloads (Old releases). Everything works fine up to v1.4.2 (i.e., at the same floating-point precision as the LTS version), and it stops working with v1.5.4. I will edit this post later today if I get to the bottom of it.

You’re welcome; I’m glad that helps. To your questions:

  1. The 1e-06 error was specific to the given example, but it follows from a general theory for linear systems and, more broadly, for all numerical computation. The theory for Ax=b problems says that, for appropriately stable solution algorithms, the error in x obeys the bound

    || x - \hat{x} ||/||x|| < O(\epsilon_m \; \kappa(A))

    where x is the true solution of Ax=b, \hat{x} is the solution computed in finite-precision arithmetic, || \cdot || is the 2-norm, \epsilon_m is the so-called machine precision, equal to the distance from 1.0 to the next larger floating-point number, \kappa(A) is the condition number of the matrix, equal to the ratio between its largest and smallest singular values, and O(\cdot) means “order of,” i.e. a small positive scaling factor times the enclosed quantity.

    In my example, the machine precision of 64-bit floats is \epsilon_m = 2^{-52} \approx 10^{-16}, and the condition number of the given matrix is \kappa(A) \approx 10^{10}, giving an error bound of about 10^{-6}. (Note that I reported unnormalized errors || x - \hat{x} || rather than || x - \hat{x} ||/||x|| in my previous message, but the example had ||x|| \approx 1 anyway.)

    julia> eps(Float64)
    2.220446049250313e-16
    
    julia> 2^-52
    2.220446049250313e-16
    
    julia> cond(A)
    1.5257575516147259e10
    
    julia> eps(Float64)*cond(A) # bound on norm of error for Ax=b solve
    3.3878623275967487e-6
    

    Trefethen and Bau’s “Numerical Linear Algebra” textbook gives a really nice, succinct introduction to this theory. More generally, any kind of computational problem has a condition number, which measures how much it amplifies the floating-point errors inherent in finite-precision arithmetic.

  2. Norms are great for unit tests!

  3. I bet that if you compute the condition number of the block-diagonal matrix in your example, you’ll find it is about 10^8. That follows from \kappa(A) \approx \epsilon_m^{-1} \; || x - \hat{x} ||/||x|| \approx 10^{16} \cdot 10^{-8} = 10^{8}. You can check it directly, as in the sketch below.
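
    Here is the kind of quick check I mean: a sketch with made-up SPD blocks standing in for your covariance blocks (sizes and conditioning are hypothetical).

    using LinearAlgebra

    X = randn(4, 4); B1 = Symmetric(X * X' + I)          # well-conditioned SPD block
    Y = randn(4, 2); B2 = Symmetric(Y * Y' + 1e-8 * I)   # nearly singular SPD block

    P = cat(B1, B2; dims = (1, 2))   # assemble the block-diagonal matrix
    cond(P)                          # on the order of 1e8, set by the worst block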

4 Likes

Thanks again, I will definitely take a look at this textbook.

The issue I described above appears when moving from 1.4.2 to 1.5.0. I am comparing these two versions of Julia on git, and I suspect that the difference in floating-point output stems from changes in LinearAlgebra such as:

  1. the inclusion of spmv! (for which I cannot find official documentation);
  2. the new promotion rules for sums and differences of Diagonal and Symmetric matrices (see the snippet below).

I think they are non-breaking per se, but when used within a loop they might amplify floating-point differences.
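
For example, one can check the second point by comparing the result type of a mixed sum on each Julia version (a toy snippet, unrelated to my actual model):

using LinearAlgebra

D = Diagonal(ones(3))
S = Symmetric(ones(3, 3))
typeof(D + S)   # compare this output between 1.4.2 and 1.5.0
typeof(S - D)   # a difference here would point to the new promotion rules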

Do you happen to use random numbers in the tests? The way random numbers are generated changed with v1.5, so if you do use random numbers, that could be the reason.

1 Like

I do, but I am using StableRNGs. Shouldn’t it be safe with 1.5?
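
(For reference, the pattern I follow, with a made-up seed:)

using StableRNGs

rng = StableRNG(1234)   # seeded, version-stable generator
rand(rng, 3)            # should produce the same stream on every supported Julia version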

I’ve never actually used that package, but yes, I believe random numbers generated with it should be reproducible across Julia versions.

I don’t think that you’re getting a less precise answer with 1.5.0. I think instead that you get equally imprecise answers in 1.4.2 and 1.5.0 (just as, in my example, xsvd and xqr are equally imprecise). But if you take the 1.4.2 answer as the standard, then the 1.5.0 answer appears to be in error.

Neither answer is better than the other; they’re equally imprecise. The error simply shows up in different ways depending on how the algorithm is carried out, perhaps due to the changes in promotion rules. But the change does not mean a worsening, just a difference.

This is expected behavior from a theory-of-numerics perspective. Maybe there’s more going on in your example than I know about, but what you’re seeing is expected. The fix is to make your unit tests check that the computation aligns with theory (error bounded as discussed above), not to expect higher accuracy than is possible.

3 Likes