Out of memory when constructing large sparse SDP in JuMP

Looking at @profview_allocs, it seems COSMO's chordal decomposition takes quite a lot of memory; you can get rid of that with "decompose" => false.
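A minimal sketch of how to pass that option (this uses the standard JuMP attribute mechanism; the model itself is omitted):

```julia
using JuMP, COSMO

# Disable COSMO's chordal decomposition to avoid the memory
# overhead of the decomposition step.
model = Model(optimizer_with_attributes(COSMO.Optimizer, "decompose" => false))
```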

A lot of memory could be saved at the JuMP level, I fixed it in Speed up vectorization of symmetric matrices by blegat · Pull Request #3349 · jump-dev/JuMP.jl · GitHub.

You can save some more memory with the following trick:

This first creates the vectorized version of a dense matrix, i.e. n(n+1)/2 affine expressions, calling zero(JuMP.AffExpr) for each zero entry of the sparse matrix.
This vector of AffExpr is then converted into a MOI.VectorAffineFunction, which uses a sparse data structure for the affine terms and is much more memory-efficient. You can gain a lot by creating the MOI.VectorAffineFunction directly, as follows:

```julia
# Sparse affine terms for the variables y, plus the vectorized
# constant matrix C as the constant part of the function.
func = MOI.VectorAffineFunction(
    [MOI.VectorAffineTerm(i, MOI.ScalarAffineTerm(1.0, index(y[i]))) for i in 1:n],
    JuMP.vectorize(C, SymmetricMatrixShape(n)),
)
set = MOI.PositiveSemidefiniteConeTriangle(n)
MOI.add_constraint(backend(model), func, set)
```
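The output index of each MOI.VectorAffineTerm refers to a position in MOI's triangular vectorization, which stores the upper triangle column by column: (1,1), (1,2), (2,2), (1,3), ... A small helper (illustrative, not part of MOI) maps a matrix entry (i, j) with i ≤ j to its position, which is useful when the terms you add are not just the first n entries:

```julia
# Index of entry (i, j), i <= j, of a symmetric matrix in MOI's
# PositiveSemidefiniteConeTriangle vectorization (upper triangle,
# stacked column by column).
triangle_index(i, j) = div(j * (j - 1), 2) + i

# Total length of the vectorized triangle of an n×n matrix.
triangle_length(n) = div(n * (n + 1), 2)
```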