# Improving performance for SDPs with "matrix trace constraints"

I’ve been working on an SDP model with (among other variables) a Hermitian variable `W` of dimensions n × n. Although it would be nice to have support for complex variables in JuMP, a simple workaround is to declare two matrix variables: `Wr` for the real part and `Wi` for the imaginary part.

```julia
n = 111  # Example with dimensions 111 × 111

@variable(model, Wr[1:n, 1:n])  # Wr: real part of matrix variable
@variable(model, Wi[1:n, 1:n])  # Wi: imaginary part of matrix variable
```

The rest of the model formulation goes more or less smoothly, with the exception of a collection of equality constraints that are a significant performance bottleneck (~10-20 s instead of a few ms). The left-hand side of each consists of a matrix multiplication of `W` with some complex matrix `M[k]` (also n × n), followed by the trace of the result: `tr(M[k]*W)`. Again, as a workaround for the lack of complex arithmetic in JuMP, we reformulate it using the real and imaginary parts of `M[k]`, `Mr[k] = real(M[k])` and `Mi[k] = imag(M[k])`, which gives `tr(Mr[k]*Wr - Mi[k]*Wi)` (taking into account that the diagonal of the matrix product contains only real values).
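As a sanity check on this reformulation: expanding `tr((Mr + i*Mi)*(Wr + i*Wi))`, the real part is `tr(Mr*Wr - Mi*Wi)`. A minimal numeric sketch with random stand-in data (not the actual model matrices):

```julia
using LinearAlgebra

n = 4
A = randn(ComplexF64, n, n)
W = (A + A') / 2              # Hermitian stand-in for the variable W
M = randn(ComplexF64, n, n)   # stand-in for one data matrix M[k]

Wr, Wi = real(W), imag(W)
Mr, Mi = real(M), imag(M)

# real part of the complex trace equals the real reformulation
lhs = real(tr(M * W))
rhs = tr(Mr * Wr - Mi * Wi)
@assert isapprox(lhs, rhs; atol = 1e-10)
```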

You may have noticed the `k` index in `Mr[k]` and `Mi[k]`. That’s because `Mr` and `Mi` are each a collection of `n` matrices, each of dimensions n × n.

```julia
# Again, example with dimensions n = 111

julia> Mr
111-element Array{Array{Float64,2},1}:
...

julia> Mr[1]
111×111 Array{Float64,2}:
...
```

We build one constraint for each of the `n` matrices:

```julia
# n constraints (111 constraints)
@constraint(model, [k = 1:n], tr(Mr[k]*Wr - Mi[k]*Wi) == RHS[k])
```

It seems that this formulation is not ideal for JuMP. As written above it takes about 10-20 seconds to build, versus roughly 0.5 s for the equivalent model in MATLAB + YALMIP (with no difference in solver time).

It may be important to add that each `Mr[k]` and `Mi[k]` matrix is very sparse, and possibly most of the matrix-multiplication time is wasted because only the diagonal elements of the result are relevant. Is there a formulation that takes advantage of this? I remember reading somewhere that, for now, JuMP doesn’t exploit sparsity to accelerate performance; should this be taken into account (for now) when formulating constraints?
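For what it’s worth, the diagonal-only observation can be written as `tr(A*B) = Σ A[i,j]*B[j,i]` over the nonzeros of `A`, which avoids forming the full product. A sketch in plain Julia (the function name `sparse_trace_product` is mine, not a JuMP API); the same sum over `findnz` could be used to build the constraint’s affine expression directly from the variables:

```julia
using LinearAlgebra, SparseArrays

# tr(A*B) without forming A*B: only the entries of B at the nonzeros of A matter
function sparse_trace_product(A::SparseMatrixCSC, B::AbstractMatrix)
    I, J, V = findnz(A)
    return sum(V[t] * B[J[t], I[t]] for t in eachindex(V))
end

n = 50
A = sprandn(n, n, 0.05)   # stand-in for a sparse Mr[k]
B = randn(n, n)           # stand-in for Wr
@assert isapprox(sparse_trace_product(A, B), tr(A * B); atol = 1e-8)
```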

PS: I’m sorry if my question is not clear or too convoluted to get to the point. Declaring the `Mr[k]` and `Mi[k]` matrices as sparse matrices seems to improve performance significantly (~10-fold). Also: JuMP does exploit sparsity!
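For reference, the conversion I mean is just the following (with random complex matrices standing in for the model’s actual `M[k]` data):

```julia
using SparseArrays

# random complex stand-ins for the model's M[k] data matrices
n = 10
M = [Matrix(sprandn(n, n, 0.1) + im * sprandn(n, n, 0.1)) for _ in 1:n]

# keep only the structural nonzeros of each real/imaginary part
Mr = [sparse(real(M[k])) for k in 1:n]
Mi = [sparse(imag(M[k])) for k in 1:n]

# sparse() drops the exact zeros of the dense matrix
@assert nnz(Mr[1]) == count(!iszero, real(M[1]))
```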

Further performance improvement recommendations are welcome! 