# What is the best way to factorize/decompose a covariance matrix?

@cgeoga Thank you. I really appreciate your help

I doubt youâll be particularly pleased with the results when trying to compute a full eigendecomposition of a 250000x250000 matrix (itâs an `O(N^3)` algorithm, for a dense matrix that would be ballpark one year of computation on a big machine). If the matrix is quite sparse, then minimum-degree ordering should at least make the cholesky factorization computable. However, if you allow pivoting then you risk breaking the minimum-degree ordering. If you donât understand why, again Iâd urge you to make a nice cup of tea and start reading up on these matters. Many of the simpler algorithms of linear algebra are quite straightforward, and given the scale of the problems youâre tackling I worry youâll be at sea unless you invest in yourself by trying to understand the mathematics.

Given the size of what youâre working with, I think the best strategy will be to stick with the cholesky factorization. Given that allowing pivoting is not a trivial thing when working with large sparse matrices, I suspect your best choice is to change strategy and compute the cholesky factorization of ` ÎŁ + ĎI`, where you make ` Ď` as small as possible and still have the cholesky decomposition succeed. You can do that with an iterative bisection algorithm.
