Hello. I’m trying to improve the performance of solving an optimization problem. I have a Hermitian matrix and want to minimize the terms I add to its diagonal while keeping the matrix positive semidefinite.

Given a matrix S, the problem is to minimize sum(d) subject to S + Diagonal(d) ⪰ 0 (positive semidefinite), where d is a vector variable. This is solved using Convex.jl and COSMO.jl:

```
using Convex, COSMO
using LinearAlgebra  # for diagm, Diagonal, diag

n = 4
x = rand(ComplexF64, n)
noise = diagm(rand(n))            # positive diagonal perturbation
S = x*x' + noise                  # rank-1 PSD part plus diagonal noise
d = Convex.Variable(n)
p = minimize(sum(d), S + Diagonal(d) in :SDP)
solve!(p, () -> COSMO.Optimizer())
Sx = S .+ diagm(d.value[:])       # solution: S with the optimal diagonal shift
diag(Sx) + diag(noise) ≈ diag(S)  # approx true: the shift removes the noise
```
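Incidentally, since x*x' is PSD, the optimal d should essentially subtract the noise from the diagonal, leaving the shifted matrix just barely PSD. A quick sanity check of my own (assuming a small feasibility tolerance `tol`, which should be matched to the solver settings) is to look at the smallest eigenvalue; here with a toy Hermitian stand-in for `Sx`:

```julia
using LinearAlgebra

tol = 1e-6                        # assumed feasibility tolerance
Sx = [2.0 1.0im; -1.0im 2.0]      # toy stand-in for the shifted matrix above
eigmin(Hermitian(Sx)) ≥ -tol      # true when the PSD constraint holds
```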

First question: is this an efficient formulation, or should it be reformulated?

Second question: my real problem is larger (n = 80) and needs to be solved thousands of times, since my matrix S is in fact 3D. Setting up a loop:

```
N, _, M = size(S)  # (80, 80, 1000)
d = Convex.Variable(N)
A = similar(S)
for i = 1:M
    p = minimize(sum(d), S[:,:,i] + Diagonal(d) in :SDP)
    solve!(p, () -> COSMO.Optimizer())
    A[:,:,i] = Hermitian(S[:,:,i] .+ diagm(d.value[:]))
end
```

How can I avoid setting up the problem at each iteration? Warm-starting seems like the perfect solution, but I’m not sure how to do that when my constraints change at each iteration.
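One idea I have considered (just an untested sketch, assuming Convex.jl’s `fix!` mechanism works inside an SDP constraint) is to build the problem once, treating the data matrix as a variable fixed to a constant, and only re-fix it for each slice; a small synthetic S stands in for my real (80, 80, 1000) array:

```julia
using Convex, COSMO, LinearAlgebra

# Small synthetic data standing in for the real 3D array S
N, M = 4, 3
S = Array{ComplexF64}(undef, N, N, M)
for i = 1:M
    x = rand(ComplexF64, N)
    S[:,:,i] = x*x' + diagm(rand(N))
end

Sparam = ComplexVariable(N, N)    # placeholder for the data matrix
d = Convex.Variable(N)
p = minimize(sum(d), Sparam + Diagonal(d) in :SDP)  # built once

A = similar(S)
for i = 1:M
    fix!(Sparam, S[:,:,i])        # swap in the i-th slice without redefining p
    solve!(p, () -> COSMO.Optimizer())
    A[:,:,i] = Hermitian(S[:,:,i] .+ diagm(d.value[:]))
end
```

My understanding is that Convex.jl still re-lowers the problem to conic form on every `solve!`, so this would mainly save the problem-definition overhead; for true warm-starting it may be necessary to drop down to COSMO’s lower-level interface.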

Thanks!