Out of memory when constructing large sparse SDP in JuMP

I need to solve large but very sparse SDPs with COSMO.jl. The following code runs out of memory while constructing the linear matrix inequality (i.e., the PSD constraint):

using JuMP, COSMO
using LinearAlgebra
using SparseArrays

n = 8000
b = ones(n)
p = 0.01
C = sprandn(Float64, n, n, p)  # sparse random n-by-n matrix, ~1% of entries nonzero

model = JuMP.Model(optimizer_with_attributes(COSMO.Optimizer, 
                                            "merge_strategy" => COSMO.CliqueGraphMerge,
                                            "decompose" => true,
                                            "complete_dual" => true
                                            ))
@variable(model, y[1:n])

# Try two things: 1. Symmetric() on the constraint:
@constraint(model, Symmetric(C - spdiagm(y)) >= 0, PSDCone())

# 2. Defining the variable explicitly and then constraining it
# @variable(model, Z[1:n, 1:n], PSD)
# @constraint(model, Z == Symmetric(C - spdiagm(y)))

@objective(model, Max, b'y)
JuMP.optimize!(model)

I have tried constructing the constraint with the commented-out section as well, and both times the process is killed because it runs out of memory (I’m on a machine with 32 GB of RAM running Ubuntu 22.04).

Interestingly, when I decrease n to 4,000, the memory usage spikes during the construction of the PSD constraint but drops once COSMO is actually solving the problem.

Does anyone have any experience setting up problems of this size? Note that the random constraint matrix C is very sparse (1% non-zero entries). Is JuMP suitable for this type of problem?

Hmm. Even if it’s sparse, such a large SDP can cause issues. @blegat might have some ideas.


Looking at @profview_allocs, it seems COSMO’s chordal decomposition takes quite a lot of memory, which you can get rid of with "decompose" => false.
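
For example, a minimal sketch of the same solver setup with decomposition turned off (only the decompose option changes from the original post):

using JuMP, COSMO

# Same model setup as above, but with COSMO's chordal decomposition disabled
model = JuMP.Model(optimizer_with_attributes(COSMO.Optimizer,
                                             "decompose" => false))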

A lot of memory could also be saved at the JuMP level; I fixed that in Speed up vectorization of symmetric matrices by blegat · Pull Request #3349 · jump-dev/JuMP.jl · GitHub.

You can gain some more with the following trick.

The @constraint call first creates the vectorized version of the matrix as if it were dense, i.e., n(n+1)/2 affine expressions (about 32 million for n = 8000), calling zero(JuMP.AffExpr) for each zero entry of the sparse matrix.
This vector of AffExpr is then converted into a MOI.VectorAffineFunction, which uses a sparse data structure for the affine terms and is much more memory efficient. You can gain a lot by creating the MOI.VectorAffineFunction directly, as follows:

# Vectorized form of Symmetric(C - spdiagm(y)): the constants come from the
# column-wise upper triangle of C, and each y[i] enters with coefficient -1.0
# at the diagonal position (i, i), i.e., at triangle index i * (i + 1) / 2.
func = MOI.VectorAffineFunction(
    [MOI.VectorAffineTerm(div(i * (i + 1), 2), MOI.ScalarAffineTerm(-1.0, index(y[i])))
     for i in 1:n],
    JuMP.vectorize(C, JuMP.SymmetricMatrixShape(n)),
)
set = MOI.PositiveSemidefiniteConeTriangle(n)
MOI.add_constraint(backend(model), func, set)
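
The objective and solve can then go through JuMP as usual, since the constraint was added to the model’s own backend (a sketch reusing the objective from the original post):

@objective(model, Max, b'y)
JuMP.optimize!(model)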