Different results for Cholesky decomposition between 0.6.2 and 0.6.3

I use the following code to select a set of linearly independent columns from a matrix:

function give_independent(X)
    # pivoted Cholesky factorization of the Gram matrix X'X
    chol = cholfact!(X'X, :U, Val{true})
    # keep a column if its pivoted diagonal entry is strictly positive;
    # ipermute! maps the flags back to the original column order
    ipermute!(diag(chol.factors) .> 0, chol.piv)
end

However, I obtain different results in Julia 0.6.2 and 0.6.3:

v1 = [1.51522, 1.91087, 1.6913, 1.88696, 1.26522, 3.19783, 3.09565, 3.03043, 4.13913, 2.27174, 1.61522, 1.43913, 2.02826, 2.45652, 0.906522, 0.543478, -0.141304, 0.528261, 5.53261, 2.98043, 2.92609, -1.87174, 0.941304, -1.53696, -1.21087, -5.63696, -8.33913, -4.82826, -8.2, -0.734783, -3.18478, -3.88913, -3.9087, -0.0130435, -1.13478, -1.10217, 2.59565, 0.530435, -0.0608696, -2.02826, -1.78478, -2.46087, 3.12826, 1.15652, -0.493478, -0.956522, 1.6587, 0.728261, 0.0326087, 3.18043, -0.973913, 0.128261, 0.741304, 2.16304, 2.28913, -6.53696, -5.03913, -13.7283, -1.2, -7.23478, -0.0847826, -0.58913, -0.908696, 0.686957, 0.0652174, -1.50217, -2.60435, 0.130435, 0.23913, 4.07174, 4.61522, 3.83913, 4.02826, 2.95652, 3.40652, 1.94348, 1.4587, 1.42826, 1.43261, 1.98043, 0.526087, 0.928261, 0.441304, -0.936957, 1.78913, -0.136957, -2.93913, -2.72826, -6.1, 4.36522]
v2 = [-25.8261, -25.8261, -25.8261, -25.8261, -25.8261, -25.8261, -25.8261, -25.8261, -25.8261, -25.8261, -25.8261, -25.8261, -25.8261, -25.8261, -25.8261, -25.8261, -25.8261, -25.8261, -25.8261, -25.8261, -25.8261, -25.8261, -25.8261, -25.8261, -25.8261, -25.8261, -25.8261, -25.8261, -25.8261, -25.8261, -23.8261, -23.8261, -23.8261, -23.8261, -23.8261, -23.8261, -23.8261, -23.8261, -23.8261, -23.8261, -23.8261, -23.8261, -23.8261, -23.8261, -23.8261, -23.8261, -23.8261, -23.8261, -23.8261, -23.8261, -23.8261, -23.8261, -23.8261, -23.8261, -23.8261, -23.8261, -23.8261, -23.8261, -23.8261, -23.8261, -22.8261, -22.8261, -22.8261, -22.8261, -22.8261, -22.8261, -22.8261, -22.8261, -22.8261, -22.8261, -22.8261, -22.8261, -22.8261, -22.8261, -22.8261, -22.8261, -22.8261, -22.8261, -22.8261, -22.8261, -22.8261, -22.8261, -22.8261, -22.8261, -22.8261, -22.8261, -22.8261, -22.8261, -22.8261, -22.8261]
give_independent(hcat(v1, v1, v2))
# Julia 0.6.2 returns [false, true, true]
# Julia 0.6.3 returns [true, true, true]

Julia 0.6.3 does not return the result I expected, since the first and second columns are identical and therefore linearly dependent.
What changed between the two versions? What should I do to obtain the correct set of linearly independent vectors in 0.6.3?

Thanks for the report. What’s your versioninfo()?

Thanks. Here is the versioninfo() for both versions:

Julia Version 0.6.2
Commit d386e40c17 (2017-12-13 18:08 UTC)
Platform Info:
  OS: macOS (x86_64-apple-darwin14.5.0)
  CPU: Intel(R) Core(TM) i5-7600 CPU @ 3.50GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Prescott)
  LAPACK: libopenblas64_
  LIBM: libopenlibm
  LLVM: libLLVM-3.9.1 (ORCJIT, broadwell)

and

Julia Version 0.6.3
Commit d55cadc350 (2018-05-28 20:20 UTC)
Platform Info:
  OS: macOS (x86_64-apple-darwin14.5.0)
  CPU: Intel(R) Core(TM) i5-7600 CPU @ 3.50GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell)
  LAPACK: libopenblas64_
  LIBM: libopenlibm
  LLVM: libLLVM-3.9.1 (ORCJIT, broadwell)

Interesting. This seems to be a CPU-architecture-dependent issue; I cannot reproduce it on my old Westmere processor. Note that OpenBLAS identifies your CPU as Prescott under 0.6.2 but as Haswell under 0.6.3, so I wonder if it’s related to the different BLAS configurations. Are you using the distributed binaries or building from source?

Could you file an issue at https://github.com/JuliaLang/julia/issues/new so this doesn’t get lost?

OK, I filed an issue there. We can continue the discussion there, but to answer your question: I am using the distributed binaries in both cases.

TL;DR: Small changes in the BLAS library will affect the order of operations, so there is no guarantee of identical results between minor Julia versions here.

Although your input matrix is exactly rank deficient, rounding in the intermediate manipulations perturbs it, so we arrive at a Gram matrix that is only numerically rank deficient: whether its smallest pivot rounds to a positive value depends on the exact order of floating-point operations, which is precisely what changed between the two BLAS builds.

By forming A'A, you make the situation worse: the Gram matrix squares the condition number, so an almost-dependent vector could be obscured. Much more robust would be

F = qrfact(A, Val{true})   # column-pivoted QR of A itself, not of A'A
ipermute!(abs.(diag(F[:R])) .> eps(10 * size(A, 1) * F[:R][1, 1]), F[:p])

(The 10 is a fairly conservative tolerance factor.) Even this will fail in unusual cases, so to be safe you should check the effective rank with an SVD, as sketched below.
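
For example, a minimal sketch of an SVD-based rank check (Julia 0.6 syntax; the tolerance below mirrors the usual LAPACK-style default and is my choice, not something canonical):

    s = svdvals(A)                          # singular values, sorted largest first
    tol = maximum(size(A)) * eps(first(s))  # scale-aware tolerance
    effective_rank = count(x -> x > tol, s)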

Thanks for the great answer.
My issue is that I do not want to use QR. The way I typically use this function is to get the set of linearly independent vectors from a list of a few vectors, say fewer than 10, each with potentially a huge number of elements, say 100 million. In this case, the matrix X'X is very small, and I can compute it without even forming the matrix whose columns are these vectors (a sketch of how is below). Forming the matrix X and applying QR to it would require much more memory.
Is there a not-so-bad way to still use cholfact? What about something that squares your tolerance, to account for the squared condition number:

    ipermute!(diag(chol.factors) .> 100 * size(X, 1)^2 * eps(chol.factors[1]), chol.piv)
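
For context, here is roughly how the Gram matrix can be accumulated without materializing X (a sketch; vs stands for the hypothetical list of long vectors):

    # Accumulate the small k-by-k Gram matrix X'X directly from a list of
    # long vectors, without ever allocating the n-by-k matrix X.
    function gram_matrix(vs::Vector{Vector{Float64}})
        k = length(vs)
        G = zeros(k, k)
        for i in 1:k, j in i:k
            G[i, j] = dot(vs[i], vs[j])   # one pass over each pair of vectors
            G[j, i] = G[i, j]             # fill in by symmetry
        end
        G
    end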

You could protect against roundoff error by using a controlled sum algorithm (similar to sum_kbn) to compute elements of X'X, then converting to BigFloat before factoring. (If the matrix is small, it’s less trouble to just cheat on the numerical analysis.)
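
A minimal sketch of such a compensated dot product (Kahan-style, in the spirit of Base's sum_kbn; the name dot_kahan is just for illustration):

    # Kahan (compensated) summation applied to a dot product: the running
    # compensation c recovers low-order bits lost in each addition.
    function dot_kahan(x::AbstractVector{Float64}, y::AbstractVector{Float64})
        s = 0.0   # running sum
        c = 0.0   # running compensation
        @inbounds for i in eachindex(x, y)
            t = x[i] * y[i] - c
            u = s + t
            c = (u - s) - t
            s = u
        end
        return s
    end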

10 vectors of dimension 100 million is 1e9 numbers. At double precision, with 8 bytes per number, that's 8e9 bytes, or about 8 gigabytes. Not that big. You should be able to do a reduced QR on the full 100 million × 10 data set X on a machine with 32 GB of memory, or easily on a machine with 64 GB.
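
A sketch of what that could look like (qrfact! factors in place, so beyond stacking the vectors into X only small LAPACK workspace is allocated; vs is again the hypothetical list of vectors):

    X = hcat(vs...)            # one ~8 GB allocation holding all the columns
    F = qrfact!(X, Val{true})  # in-place column-pivoted QR, no second copy of X
    ipermute!(abs.(diag(F[:R])) .> eps(10 * size(X, 1) * F[:R][1, 1]), F[:p])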