Advice on optimizing large SDP model generation with JuMP + ComplexOptInterface

I don’t know a ton about JuMP models, but one issue could be using Convex’s partial trace and partial transpose. Convex works in a “vectorized” fashion: its primitive object is a matrix, and it is most efficient when performing a small number of matrix operations. JuMP, by contrast, operates at the scalar level, so big matrix operations expand into huge numbers of scalar operations and can be slow.

So I would try an alternate partial trace implementation, for example the one in QuantumInfo.jl/basics.jl at a046dba210202eb21644f8f7d63b246549412e8e · BBN-Q/QuantumInfo.jl · GitHub, and see if you can make it work and whether it helps.
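A scalar-level partial trace (and partial transpose) is also simple enough to write by hand. Below is a minimal sketch; the function names `ptrace2`/`ptranspose2` and the "second subsystem, `kron` ordering" convention are my own choices, not from QuantumInfo.jl. Because everything is elementwise indexing, the same code should work on numeric matrices and on matrices of JuMP affine expressions:

```julia
using LinearAlgebra

# Partial trace over the *second* subsystem of a (d1*d2)×(d1*d2) matrix,
# assuming the usual ordering M = kron(A, B). Written elementwise so it
# works on numeric matrices as well as matrices of JuMP expressions.
function ptrace2(M::AbstractMatrix, d1::Integer, d2::Integer)
    @assert size(M) == (d1 * d2, d1 * d2)
    return [sum(M[(i - 1) * d2 + k, (j - 1) * d2 + k] for k in 1:d2)
            for i in 1:d1, j in 1:d1]
end

# Partial transpose on the second subsystem, same convention.
function ptranspose2(M::AbstractMatrix, d1::Integer, d2::Integer)
    @assert size(M) == (d1 * d2, d1 * d2)
    N = similar(M)
    for i in 1:d1, j in 1:d1, k in 1:d2, l in 1:d2
        N[(i - 1) * d2 + k, (j - 1) * d2 + l] =
            M[(i - 1) * d2 + l, (j - 1) * d2 + k]
    end
    return N
end
```

On a product state `kron(A, B)` these should give `tr(B) * A` and `kron(A, transpose(B))` respectively, which is a quick sanity check before using them on optimization variables.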

Another standard suggestion is to use Julia’s Profile standard library to see where the code spends the most time.
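The basic workflow looks like this, assuming your model construction lives in some function (the `build_model` here is a stand-in, not from your code):

```julia
using Profile

# Replace this stand-in workload with the function that builds your JuMP model.
build_model() = sum(sqrt(i) for i in 1:10^6)

build_model()          # warm up: run once so compilation time isn't profiled
Profile.clear()        # discard any samples collected so far
@profile build_model() # sample the call stack while the model is built
Profile.print()        # tree report showing where the samples landed
```

Lines with high sample counts in the report are where the time goes; that tells you which matrix operation or constraint loop to attack first.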

You can also just start commenting out lines and see whether removing any of them speeds up the code significantly. The calculation won’t be correct, of course, but you may be able to isolate the slowest part and then focus on optimizing it.
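A lighter-weight variant of the same idea is to time each stage separately instead of deleting it; `@elapsed` makes this easy. The stage functions below are hypothetical placeholders for pieces of your model-building code:

```julia
# Time each stage of model construction separately to find the bottleneck.
# These stage functions are placeholders for your own code sections.
stage_variables()   = sleep(0.01)
stage_constraints() = sleep(0.02)

for (name, f) in [("variables", stage_variables),
                  ("constraints", stage_constraints)]
    t = @elapsed f()
    println(rpad(name, 14), round(t; digits = 3), " s")
end
```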
