Pure-Julia Sparse Cholesky

This sounds great. I’m currently using LDLFactorizations.jl as the sparse solver for large-scale non-linear least squares optimization in my NLLSsolver.jl package. This sounds like it might be an improvement.

Is there a way to separate the symbolic phase of the decomposition from the numeric phase? LDLFactorizations provides the ldl_analyze method to do this. This is useful in optimization where the structure of the sparse matrix doesn’t change across iterations, but the values do, allowing the symbolic phase to be done only once at the start. The documentation for your package suggests this isn’t currently possible.
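For reference, the CHOLMOD wrappers in Julia's SparseArrays stdlib expose the same two-phase pattern: cholesky does the symbolic analysis plus the numeric factorization, and cholesky!(F, A) refactorizes numerically while reusing the symbolic analysis in F. A minimal sketch (the matrix here is just an arbitrary diagonally dominant example):

```julia
using LinearAlgebra, SparseArrays

# Build a small sparse SPD matrix (diagonally dominant by construction).
S = sprand(100, 100, 0.05)
A = sparse(Symmetric(S + S' + 100I))

# Symbolic + numeric factorization once, at the start.
F = cholesky(A)

# Later iterations: same sparsity pattern, new values.
A2 = A + 10I        # same structure, different entries
cholesky!(F, A2)    # numeric refactorization, reusing the symbolic analysis

x = F \ ones(100)
@assert norm(A2 * x .- 1) < 1e-8
```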


I need to update the docs. Make sure to update to the most recent version (1.9.2).

symb = CliqueTrees.symbolic(matrix)
fact = CliqueTrees.cholesky(matrix, symb)

Hi @user664303,

Perhaps this is not quite what you are looking for. symbolic performs the symbolic factorization, but it does not allocate memory for the numerical factorization, so each call to

CliqueTrees.cholesky(matrix, symb)

will re-allocate. Do you want the memory to be allocated one time and then re-used?

It would be great to adopt the concept from FastLapackInterface.jl, where a service function pre-allocates a workspace.
The workspace can then be passed to the functions (optionally, not mandatorily) to prevent allocations.
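The idiom being suggested looks roughly like this. A minimal sketch: the names `SolveWorkspace` and `solve!` are illustrative only, not an existing API in any of these packages.

```julia
using LinearAlgebra

# Hypothetical workspace idiom: allocate scratch buffers once,
# then thread them through every hot call.
struct SolveWorkspace{T}
    buf::Vector{T}    # scratch for the right-hand side
end

SolveWorkspace(n::Integer) = SolveWorkspace(Vector{Float64}(undef, n))

function solve!(ws::SolveWorkspace, F::Cholesky, b::AbstractVector)
    copyto!(ws.buf, b)    # work on the scratch copy
    ldiv!(F, ws.buf)      # in-place triangular solves: no allocation
    return ws.buf
end

A = [4.0 1.0; 1.0 3.0]
F = cholesky(A)
ws = SolveWorkspace(2)            # allocated once
x = solve!(ws, F, [1.0, 2.0])     # allocation-free per call
@assert norm(A * x - [1.0, 2.0]) < 1e-10
```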


That should be doable…

I’m not sure if ldl_analyze does this. I’ll check. But yes, having the option to preallocate memory would be nice.


@user664303 @RoyiAvital How do you feel about this? You will have to dev CliqueTrees.jl to try it. The variable cholwork is a workspace.

julia> using BenchmarkTools, CliqueTrees, MatrixMarket, SuiteSparseMatrixCollection

julia> ssmc = ssmc_db(); name = "nasasrb";

julia> matrix = mmread(joinpath(fetch_ssmc(ssmc[ssmc.name .== name, :]; format="MM")[1], "$(name).mtx"));

julia> symbfact = CliqueTrees.symbolic(matrix)
SymbFact{Int64}:
    nnz: 10692891

julia> cholfact, cholwork = CliqueTrees.cholinit(matrix, symbfact);

julia> @btime CliqueTrees.cholesky!(cholfact, cholwork, matrix);
  163.623 ms (0 allocations: 0 bytes)

versus LDLFactorizations.jl:

julia> using LDLFactorizations

julia> ldlfact = LDLFactorizations.ldl_analyze(matrix);

julia> @btime LDLFactorizations.ldl_factorize!(matrix, ldlfact);
  1.367 s (0 allocations: 0 bytes)

Looks great to me. Thank you! I’m in the process of putting a benchmark together to see how this performs on my class of problems.

I think this should be:

CliqueTrees.cholesky!(cholfact, cholwork, matrix)

I’ve now tried it on my problems, but unfortunately my matrices are not guaranteed to be positive definite. That’s why I use LDLFactorizations.jl. Can your supernodal approach also be applied to an LDLᵀ factorization that works on non-positive-definite matrices?
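As a point of comparison, the CHOLMOD wrappers in SparseArrays already expose a sparse LDLᵀ with the same symbolic/numeric split via ldlt and ldlt!. It handles some indefinite matrices (e.g. symmetric quasi-definite KKT-style systems), though not arbitrary ones. A small sketch:

```julia
using LinearAlgebra, SparseArrays

# A symmetric quasi-definite (indefinite) KKT-style matrix:
# positive-definite block on top, negative diagonal below.
A = sparse([ 2.0  0.0  1.0;
             0.0  2.0  1.0;
             1.0  1.0 -1.0])

F = ldlt(A)     # symbolic + numeric LDLᵀ factorization
ldlt!(F, A)     # numeric refactorization, reusing the symbolic analysis

x = F \ [1.0, 1.0, 0.0]
@assert norm(A * x - [1.0, 1.0, 0.0]) < 1e-8
```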

Could you also add the option to supply pre-allocated buffers to the symbolic stage?
Ideally there would be an option to run the whole procedure without any allocations.

Hi! Thank you for trying it out. I have been thinking about the indefinite case as well.

In principle, there is no problem. In practice, I need a fast way to compute an unpivoted LDLᵀ factorization of a dense matrix. There does not seem to be a LAPACK routine for this?

I could also publish a pivoted LDLᵀ, which would be slower but more stable, and a little trickier to implement.

There seems to be a C++ implementation of unpivoted LDL here. Perhaps this could be translated into a pure Julia implementation.
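A pure-Julia version of the dense kernel is short to sketch. This is a plain left-looking, column-by-column unpivoted LDLᵀ, not a tuned (blocked/SIMD) implementation, and without pivoting it assumes all leading principal minors of A are nonzero:

```julia
using LinearAlgebra

# Unpivoted dense LDLᵀ, left-looking, overwriting the lower triangle
# of A with the strictly lower part of L (unit diagonal implicit).
# Returns (L, d) with A == L * Diagonal(d) * L'.
function ldlt_unpivoted!(A::Matrix{T}) where {T}
    n = size(A, 1)
    d = Vector{T}(undef, n)
    for j in 1:n
        s = A[j, j]
        for k in 1:j-1
            s -= A[j, k]^2 * d[k]    # A[j, k] already holds L[j, k]
        end
        d[j] = s
        for i in j+1:n
            t = A[i, j]
            for k in 1:j-1
                t -= A[i, k] * A[j, k] * d[k]
            end
            A[i, j] = t / d[j]
        end
    end
    return UnitLowerTriangular(A), d
end

A = [ 4.0  2.0 -2.0;
      2.0  5.0  1.0;
     -2.0  1.0 -3.0]    # symmetric, indefinite
L, d = ldlt_unpivoted!(copy(A))
@assert norm(L * Diagonal(d) * L' - A) < 1e-12
```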

Alternatively,
