What is the best way to factorize/decompose a covariance matrix?

@cgeoga Thank you. I really appreciate your help.

I doubt you’ll be particularly pleased with the results if you try to compute a full eigendecomposition of a 250000x250000 matrix: it’s an O(N^3) algorithm, and for a dense matrix of that size that’s ballpark a year of computation on a big machine. If the matrix is quite sparse, then a minimum-degree ordering should at least make the Cholesky factorization computable. However, if you allow pivoting, you risk breaking the minimum-degree ordering. If you don’t understand why, I’d again urge you to make a nice cup of tea and start reading up on these matters. Many of the simpler algorithms of linear algebra are quite approachable, and given the scale of the problems you’re tackling, I worry you’ll be at sea unless you invest in yourself by trying to understand the mathematics.
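To make the ordering point concrete, here is a minimal sketch in Python. It assumes scikit-sparse (`sksparse.cholmod`, a wrapper around CHOLMOD) is installed and that the covariance is stored as a SciPy CSC sparse matrix; the small random matrix below is only a stand-in for your actual Σ.

```python
import scipy.sparse as sp
from sksparse.cholmod import cholesky  # assumes scikit-sparse is installed

# Small random sparse SPD matrix as a stand-in for the real covariance.
n = 1000
A = sp.random(n, n, density=0.001, format="csc", random_state=0)
Sigma = (A @ A.T + sp.identity(n)).tocsc()

# CHOLMOD applies a fill-reducing ordering before factorizing (here AMD,
# an approximate minimum-degree ordering), so the factor L stays sparse.
factor = cholesky(Sigma, ordering_method="amd")
L = factor.L()     # sparse lower-triangular factor of P * Sigma * P'
perm = factor.P()  # the fill-reducing permutation CHOLMOD chose

print(L.nnz, "nonzeros in L vs", Sigma.nnz, "nonzeros in Sigma")
```

Pivoting for numerical reasons would override that permutation, which is exactly how it can undo the benefit of the minimum-degree ordering.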

Given the size of what you’re working with, I think the best strategy is to stick with the Cholesky factorization. Since allowing pivoting is not a trivial thing for large sparse matrices, I suspect your best choice is to change strategy and compute the Cholesky factorization of Σ + τI, where you make τ as small as possible while still having the factorization succeed. You can find that τ with a simple bisection, as sketched below.
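A minimal sketch of that bisection in Python. It uses NumPy’s dense Cholesky purely to illustrate the logic; for a matrix of your size you would swap in the sparse CHOLMOD factorization from the sketch above and catch its not-positive-definite error instead. The function name and bracketing defaults are my own choices, not anything standard.

```python
import numpy as np

def smallest_feasible_tau(Sigma, tau_hi=1.0, tol=1e-10, max_iter=100):
    """Bisect for (approximately) the smallest shift tau such that
    Sigma + tau*I admits a Cholesky factorization."""

    def chol_ok(tau):
        # Dense stand-in: for a large sparse Sigma, replace this with a
        # sparse Cholesky (e.g. CHOLMOD) and its positive-definiteness error.
        try:
            np.linalg.cholesky(Sigma + tau * np.eye(Sigma.shape[0]))
            return True
        except np.linalg.LinAlgError:
            return False

    if chol_ok(0.0):
        return 0.0          # no shift needed
    while not chol_ok(tau_hi):
        tau_hi *= 10.0      # grow the upper bracket until it succeeds
    lo, hi = 0.0, tau_hi    # lo always fails, hi always succeeds
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if chol_ok(mid):
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return hi               # smallest shift known to succeed

# Example: a symmetric matrix with one slightly negative eigenvalue.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
eigs = np.linspace(1.0, 10.0, 50)
eigs[0] = -1e-4                       # force indefiniteness
Sigma = Q @ np.diag(eigs) @ Q.T
print(smallest_feasible_tau(Sigma))   # roughly 1e-4
```

The same structure works unchanged in the sparse setting; only the success test inside `chol_ok` needs to change.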
