Advice on optimizing large SDP model generation with JuMP + ComplexOptInterface

I am implementing a fairly large SDP with complex-valued variables in JuMP, using ComplexOptInterface. For small test cases it works, but my real interest is in larger instances (specifically, when sn > 1 and n is large in the function below). For those cases, model generation takes quite a long time, sometimes longer than the solve itself. Is there anything in my model description that could be improved (i.e., that would make model generation faster or shrink the instance size), or is this simply too large a problem?

"""Maximum visibility (w.r.t. random noise) s.t. the DPS criterion is certified.
An objective value of 1 means feasibility (inconclusive), and < 1 means entanglement."""
function maximally_mixed_distance(state, local_dim, sn=1, n::Integer=3; ppt::Bool=true)
    # Constants
    dim = size(state, 1)
    noise = eye(dim) / dim^2
    aux_dim = local_dim * sn # Dimension with auxiliary spaces A'B'
    dims = repeat([aux_dim], n + 1) # AA' dim. + `n` copies of BB'.
    P = kron(eye(aux_dim), symmetric_projection(aux_dim, n)) # Bosonic subspace projector.
    Qdim = aux_dim * binomial(n + aux_dim - 1, aux_dim - 1)  # Dim. extension w/ bosonic symmetries.
    entangling = kron(eye(local_dim), sn .* ghz(sn), eye(local_dim)) # Entangling between A' and B'.

    problem = Model(SCS.Optimizer)
    COI = ComplexOptInterface # Adds complex number support to JuMP.
    COI.add_all_bridges(problem)

    # Optimization variables
    @variable(problem, 0 <= vis <= 1)
    Q = @variable(problem, [1:Qdim, 1:Qdim] in ComplexOptInterface.HermitianPSDCone())
    if ppt
        # Dummy vars. to enforce PPT (not possible directly in ComplexOptInterface?).
        fulldim = prod(dims)
        PSD = @variable(problem, [1:fulldim, 1:fulldim] in ComplexOptInterface.HermitianPSDCone())
    end

    # Constraints
    noisy_state = vis * state + (1 - vis) * noise
    @expression(problem, lifted, P * Q * P')
    @expression(problem, reduced, ptr(lifted, 3:n+1, dims) * entangling)
    @constraint(problem, noisy_state .== ptr(reduced, [2, 3], [local_dim, sn, sn, local_dim]))
    if ppt
        ssys = Int.(1:ceil(n / 2) + 1)
        @constraint(problem, PSD .== ptransp(lifted, ssys, dims))
    end

    # Solution
    @objective(problem, Max, vis)
    @show problem
    optimize!(problem)
    @show solution_summary(problem, verbose=true)
    problem, Q
end

Thanks for any suggestions!


I don’t know a ton about JuMP models, but one issue could be using Convex’s partial trace and partial transpose. Convex works in a “vectorized” fashion: its primitive object is the matrix, and it is most efficient when doing a small number of matrix operations. JuMP instead operates at the scalar level, so big matrix operations expand into tons of scalar operations, which can be slow.

So I would try an alternate partial trace, like perhaps QuantumInfo.jl/basics.jl at a046dba210202eb21644f8f7d63b246549412e8e · BBN-Q/QuantumInfo.jl · GitHub and see if you can make that work and if it helps.
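For intuition, a partial trace can be written purely with indexing and summation, which avoids building large Kronecker-product matrices. Here is a minimal, hypothetical sketch (not the QuantumInfo.jl implementation) that traces out the second factor of a bipartite space; because it only indexes and sums, it should also work elementwise on matrices of JuMP expressions.

```julia
using LinearAlgebra

# Hypothetical index-based partial trace over the second factor of
# C^dA ⊗ C^dB. Entry (i, j) of the output sums the diagonal of the
# (i, j) block of M, so no Kronecker products are ever materialized.
function ptrace2(M::AbstractMatrix, dA::Integer, dB::Integer)
    return [sum(M[(i - 1) * dB + k, (j - 1) * dB + k] for k in 1:dB)
            for i in 1:dA, j in 1:dA]
end
```

A quick sanity check: on a product input `kron(A, B)` this returns `A * tr(B)`.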

Another standard suggestion is to use Julia’s Profile standard library to try to see where the code spends the most time.
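A minimal sketch of that workflow, using a stand-in workload (in practice you would wrap the `maximally_mixed_distance(...)` call in `@profile` instead of `work()`):

```julia
using Profile

# Stand-in for the expensive model-generation call.
work() = sum(inv(rand(200, 200)) for _ in 1:20)

Profile.clear()            # drop samples from earlier runs
@profile work()            # collect samples while the call runs
# Flat listing sorted by sample count; the hottest lines appear last.
Profile.print(format = :flat, sortedby = :count)
```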

You can also just start commenting out lines and see if removing any speeds up the code significantly. Of course the calculation won’t be correct but you might be able to isolate what is slowest and then try to optimize that.


> For those cases, model generation takes quite a long time, sometimes more than solving time.

How long is long?

Following your hint, I profiled the code and learned that I had a very large matrix multiplication (sometimes taking more than 80% of the running time) that should have used a sparse matrix instead. I also tried different implementations of the partial trace and partial transpose. The code can now solve interesting cases (thanks!), but I will keep working on optimizing it.


Following @ericphanson’s answer, I have solved the largest bottleneck (I had overlooked a matrix that should have been declared as sparse). This fixed the extremely long model-generation time, so it was my fault. Precise times and profiling of the new version are below.
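To illustrate the kind of fix described above (with hypothetical sizes): a mostly-zero matrix such as a projector multiplies far faster when stored as a `SparseMatrixCSC` than as a dense `Matrix`, while giving identical results.

```julia
using LinearAlgebra, SparseArrays

d = 2_000
P_dense  = Matrix(1.0I, d, d)   # dense storage of a very sparse matrix
P_sparse = sparse(1.0I, d, d)   # same matrix, but only d stored entries
x = rand(d)
# Identical results; the sparse product only touches stored entries.
@assert P_dense * x ≈ P_sparse * x
```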

As a follow-up question, I am still unsure whether something can be improved, especially in the way I declare PSD constraints using ComplexOptInterface. Currently this is done by creating a new dummy PSD variable and adding equality constraints (see l.26 in this file). Do you know of a better way to do this?

  • Profile for maximally_mixed_distance(ghz(3), 3, 2, 2, ppt=true) is here (as a Vega JSON source).
  • Total running time: 25 seconds
  • Solver time: 23 seconds

Open an issue in ComplexOptInterface.

@blegat is about to start some work on it, and will be interested in the example.


Yes, please open an issue with the details on ComplexOptInterface; I’ll take a look.
