Hi, I need to solve the following convex optimization problem:

min_{X} tr(X)   s.t.   A(X) = b  and  X positive semidefinite,

where A computes n rank-1 projections: A(X) = [⟨aᵢaᵢᵀ, X⟩]_{i=1}^n (in the code below, the aᵢ are the columns of a matrix Ω).
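To make the operator concrete, here is a small sketch with made-up data (`Ω`, `x`, and the dimensions are illustrative) showing two equivalent ways to evaluate A(X), one matching the quadratic forms written in the constraint below and one using a single matrix product:

```julia
using LinearAlgebra

# Hypothetical data: n measurement vectors aᵢ as columns of Ω,
# and a rank-1 PSD matrix X for illustration.
d, n = 5, 20
Ω = randn(d, n)
x = randn(d)
X = x * x'            # PSD, rank 1

# A(X) = [⟨aᵢaᵢᵀ, X⟩]ᵢ = [aᵢᵀ X aᵢ]ᵢ, two equivalent evaluations:
A_naive(X, Ω) = [Ω[:, i]' * X * Ω[:, i] for i in 1:size(Ω, 2)]
A_fast(X, Ω)  = vec(sum(Ω .* (X * Ω); dims = 1))   # one matrix product

b = A_naive(X, Ω)
```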

However I never really used optimization packages in julia and I’m a bit lost. I formulated the problem in JuMP as follows:

```
using JuMP, SCS, LinearAlgebra

d = 5
# Ω is a d×n matrix whose columns are the measurement vectors aᵢ,
# and b is the corresponding vector of observations (given data).
m = Model(with_optimizer(SCS.Optimizer))
@variable(m, X[1:d, 1:d], PSD)
@constraint(m, [Ω[:, i]' * X * Ω[:, i] for i in 1:size(Ω, 2)] .== b)
@objective(m, Min, tr(X))
optimize!(m)
solution = value.(X)
```

I quickly tried different solvers supporting SDP optimization:

- SeDuMi: requires a license
- Mosek: requires a license
- CSDP: segfault
- ProxSDP: looks like it is working, but quite slow for d ≳ 10
- SCS: faster for intermediate values of d, but does not scale very well either

So my questions would be:

- Is there a preference between the equality constraint written above and the relaxed form `‖A(X) - b‖ ≤ ε`, i.e. something like `@constraint(m, vcat(ε, [Ω[:, i]' * X * Ω[:, i] for i in 1:size(Ω, 2)] .- b) in SecondOrderCone())`? Will it be solved differently by the solver? Defining the equality constraint seems to be faster, but I don't know what's happening in the background.
- I could compute the operator `A` more efficiently, e.g. using the Cholesky decomposition of X and a single matrix product, but I couldn't call `cholesky` on `X`, which has a specific type. Is there a way to express the constraint with a user-defined function (working on arbitrary matrices)?
- If that is possible, is there a difference from the optimizer's perspective between doing so and the code above? I mean, does JuMP infer information about the nature of the constraint by analyzing the code (i.e. recognize that it is affine and exploit that), or is the function just treated as a black box? I understand that for the objective, say, using "simple" functions has the advantage that first-order methods can rely on auto-differentiation (although registering a user-defined function/gradient might be more efficient), but I don't know whether similar considerations apply to constraints.
- If I define an "efficient" function to compute the constraint, is there a way (with JuMP or any other optimization package) to solve the problem more efficiently?
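For reference, on a plain numeric PSD matrix the Cholesky-based evaluation I have in mind looks like this (a sketch with made-up data; it works on a `Matrix{Float64}` but not on a matrix of JuMP decision variables):

```julia
using LinearAlgebra

d, n = 5, 20
Ω = randn(d, n)
G = randn(d, d)
X = G * G' + I        # numeric positive definite matrix

# With X = L Lᵀ, each aᵢᵀ X aᵢ = ‖Lᵀ aᵢ‖², so A(X) costs one
# triangular matrix product plus columnwise squared norms.
L = cholesky(Symmetric(X)).L
A_chol = vec(sum(abs2, L' * Ω; dims = 1))
```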

Sorry if the questions are not very clear; I think I don't know enough about how optimization frameworks work, and how they connect to the different solvers, to understand what is really going on when running the code.

Note: I guess that for large-scale instances it would be relevant to optimize directly over a Cholesky-like factorization of X, but for now I just wanted to try something simple.
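In case it helps frame the question, here is a minimal sketch of that idea, a Burer–Monteiro-style factorization X = V Vᵀ (PSD by construction) with a quadratic penalty on the constraint, optimized by plain gradient descent. All the data, the penalty weight λ, the step size η, and the factor rank r are arbitrary illustrative choices, not a tuned implementation:

```julia
using LinearAlgebra, Random

Random.seed!(0)

# Made-up problem data: rank-1 ground truth observed through n rank-1 projections.
d, n, r = 5, 40, 2
Ω = randn(d, n)
x = randn(d)
b = [(Ω[:, i]' * x)^2 for i in 1:n]

# A(X) = [aᵢᵀ X aᵢ]ᵢ evaluated with one matrix product.
A(X) = vec(sum(Ω .* (X * Ω); dims = 1))

# Penalized objective on the factor V:
#   f(V) = tr(V Vᵀ) + λ ‖A(V Vᵀ) - b‖² = ‖V‖_F² + λ ‖A(V Vᵀ) - b‖²
f(V, λ) = sum(abs2, V) + λ * sum(abs2, A(V * V') .- b)

# Plain gradient descent on V; with residual res = A(V Vᵀ) - b,
#   ∇f(V) = 2V + 4λ Ω Diagonal(res) Ωᵀ V
function descend(V, λ, η, iters)
    for _ in 1:iters
        res = A(V * V') .- b
        V = V - η * (2V + 4λ * (Ω * (Diagonal(res) * (Ω' * V))))
    end
    return V
end

λ, η = 1.0, 5e-5
V0 = 0.1 * randn(d, r)
V = descend(V0, λ, η, 5000)
```

This trades the convex SDP for a much smaller nonconvex problem, which is why I only mention it as a large-scale option rather than a first attempt.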