It’s annoying, but the amplification of floating-point errors is a fundamental issue in numerical computation. It’s better to have your tests check that the error is under some bound than to require exact replication of previous results.
For example, consider solving this Ax=b system by two different algorithms, SVD and QR. Both methods produce approximate x’s that satisfy the equation to roughly 1e-15, but the x’s they produce differ from each other, and from the “true” x, by about 1e-6. Neither computed x is better than the other. (Similarly, if there’s a 1e-8 difference between the answers that Julia 1.5 and 1.6 linear algebra produce for your problem, it’s unlikely that either answer is better than the other.)

The issue is that x cannot be computed to more than about six digits of precision in 64-bit floating-point arithmetic for an Ax=b problem with the 8x8 Hilbert matrix A: the condition number of A is on the order of 1e10, so rounding errors around 1e-16 get amplified to errors around 1e-6 in x. A unit test for this problem should simply check that norm(A*x - b) is below a small multiple of machine epsilon (say 1e-14 here) and that the computed x differs from the original x by less than 1e-5.
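For context, the usual rule of thumb is that the relative error of a computed solution can be as large as roughly eps() times the condition number of A. Here is a minimal sketch of that estimate (the variable κ and the call to LinearAlgebra.cond are my additions, not part of the transcript below):

using LinearAlgebra

A = [1/(i+j-1) for i in 1:8, j in 1:8]   # same 8x8 Hilbert matrix as in the transcript below
κ = cond(A)    # condition number, on the order of 1e10 for this matrix
κ * eps()      # a few times 1e-6: roughly the best relative accuracy we can expect in x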
julia> using LinearAlgebra
julia> A = [1/(i+j-1) for i in 1:8, j in 1:8];   # the 8x8 Hilbert matrix
julia> x = rand(8)
8-element Vector{Float64}:
0.1278532352020687
0.7676493945865812
0.00913463981518925
0.6222238155162669
0.10323435913906587
0.9791508421975381
0.6803781015368064
0.9531836668412241
julia> b = A*x;
julia> norm(A*x-b)   # exactly zero by construction: the same product A*x is recomputed
0.0
julia> Asvd = svd(A);
julia> Aqr = qr(A);
julia> xsvd = Asvd\b
8-element Vector{Float64}:
0.12785323521152347
0.7676493940367002
0.009134647388002199
0.6222237729310713
0.10323447719153722
0.979150671248731
0.6803782255013682
0.953183631315329
julia> xqr = Aqr\b
8-element Vector{Float64}:
0.12785323514108654
0.7676493978160822
0.009134598049168947
0.6222240397918233
0.10323375917314824
0.9791516866231814
0.6803775032989913
0.9531838349809921
julia> norm(A*xsvd-b)   # residuals: both solutions satisfy Ax=b to near machine precision
1.1511037319626838e-15
julia> norm(A*xqr-b)
4.1910000110727263e-16
julia> norm(x-xsvd)   # but both differ from the original x at the ~1e-6 level
2.4831498398415594e-7
julia> norm(x-xqr)
1.229319971970093e-6
julia> norm(xsvd-xqr)
1.4775841522815336e-6