# How to deal with the lack of an explicit Expression type (present in CVX) in Convex.jl?

#1

Hello, I am a new user of Convex.jl, and I am trying to port some MATLAB code I wrote for CVX.

Basically, one of the variables of my problem is a matrix, but my constraints involve not the variable matrix itself but two other matrices built from its elements. In CVX I would declare these two matrices as expressions, but I do not think I can do that in Convex.jl.
The tricky part is that I have only been able to express these two matrices via a for loop, i.e. element-wise. It is likely that this can be done in a different (vectorized) way, but in CVX it works anyway!

In CVX I did something like this:

```matlab
cvx_begin sdp
    variable x;
    variable xsi(n,n) complex;
    expression M0(m,m);
    expression M1(m,m);

    for i=1:m
        for j=1:m
            a  = ranvec(i,:) - ranvec(j,:) + (n+1)/2;
            di = (ranvec(i,:)-1)*(2*l)/(n-1) - l;
            dj = (ranvec(j,:)-1)*(2*l)/(n-1) - l;
            M0(i,j) = xsi(a(1),a(2));
            M1(i,j) = M0(i,j)*exp(1j*0.5*(-di(2)*dj(1) + di(1)*dj(2)));
        end
    end
    M0 + x*m*eye(m) >= 0;
    M1 >= 0;
cvx_end
```
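For reference, the index bookkeeping inside the double loop ports almost directly to Julia. The sketch below is hypothetical: it assumes, as in the MATLAB code, that `ranvec` is an m×2 integer matrix of grid indices, that `n` is odd (so `(n+1)/2` is an integer), and that `l` is the half-width of the grid; the helper name `grid_indices` is invented for illustration.

```julia
# Hypothetical helper mirroring the index arithmetic of the CVX loop.
# Assumes ranvec is an m×2 matrix of integer grid indices and n is odd.
function grid_indices(ranvec::Matrix{Int}, i::Int, j::Int, n::Int, l::Real)
    a  = ranvec[i, :] .- ranvec[j, :] .+ (n + 1) ÷ 2  # indices into xsi
    di = (ranvec[i, :] .- 1) .* (2l / (n - 1)) .- l   # displacement for row i
    dj = (ranvec[j, :] .- 1) .* (2l / (n - 1)) .- l   # displacement for row j
    return a, di, dj
end
```

The phase factor then follows as `exp(im * 0.5 * (-di[2]*dj[1] + di[1]*dj[2]))`, matching the MATLAB expression.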


Since I can declare M0 and M1 as expressions, when I write `M1 >= 0` everything is fine. I do not understand how to do the same in Convex.jl…

I tried doing this

```julia
x = Variable()
xsi = ComplexVariable(n, n)
M0 = Array(AbstractExpr, m, m)
M1 = Array(AbstractExpr, m, m)
```


then fill the matrices M0 and M1 in the same way, but when I try to add the constraints, e.g. `constraints = [M1 in :SDP]`, I get some errors:

```
MethodError: no method matching start(::Symbol)
Closest candidates are:
  start(::SimpleVector) at essentials.jl:170
  start(::Base.MethodList) at reflection.jl:258
  start(::IntSet) at intset.jl:184
  ...

 in mapreduce_sc_impl(::Base.Predicate{Base.##212#213{Array{Convex.AbstractExpr,2}}}, ::Base.#|, ::Symbol) at ./reduce.jl:212
 in in(::Array{Convex.AbstractExpr,2}, ::Symbol) at ./reduce.jl:496
```
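The `MethodError` arises because `Array(AbstractExpr, m, m)` is a plain Julia array of expressions, not a single Convex.jl expression, so `in :SDP` has no method for it. One way around the missing expression type — a hedged sketch, assuming a recent Convex.jl where `⪰` (typed `\succeq`) expresses semidefinite constraints, and with placeholder sizes and entries standing in for the real index arithmetic — is to stack the scalar expressions into a single `AbstractExpr` with `hcat`/`vcat`:

```julia
using Convex, LinearAlgebra

n, m = 5, 3            # hypothetical sizes
x    = Variable()
xsi  = ComplexVariable(n, n)

# Build each row of M0 by hcat-ing scalar expressions, then vcat the rows,
# so that M0 is one AbstractExpr rather than an Array of expressions.
# (The entries shown are placeholders for the index arithmetic of the
#  original loop.)
rows = [hcat([xsi[i, j] for j in 1:m]...) for i in 1:m]
M0   = vcat(rows...)

# With a single AbstractExpr, a matrix inequality can be stated directly.
constraints = [M0 + x * m * Matrix(1.0I, m, m) ⪰ 0]
```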


Does any of this make sense? Sorry, but I am quite new to this fascinating world of convex optimization.

Francesco

#2

• Why did the variable xsi become a ComplexVariable in your Convex.jl version?
• In Convex.jl you define a semidefinite matrix variable with `Semidefinite`.

Maybe if you formulate your optimization problem in mathematical language, we can try to help write it down in code.

#4

@juliohm thanks a lot for finding the time to answer my question.

You are right to notice this inconsistency, but actually the MATLAB code should read `variable xsi(n,n) complex`, since xsi is a matrix with complex entries. I am editing the initial post to reflect this.

I know this, but the point is that xsi is not a semidefinite matrix; it is a more or less arbitrary complex matrix (actually, if I choose the coordinates of the problem wisely, it can be Hermitian).
On the other hand, the matrices M0 and M1 are always Hermitian, but they are not the variables themselves; they are just combinations of the elements of xsi. Possibly I could rewrite the problem using M0 as my variable, but I am not sure.

Anyway, the point is that I would like to know whether what I did in CVX is also possible in Convex.jl, regardless of the details: i.e., how do I create a new matrix from elements of my variable matrix and enforce matrix inequalities on this newly created matrix?

In any case if you are interested, the mathematical problem in its full form is the following (slightly modified from this quantum physics paper):

P.S. I apologize for my bad knowledge of the Greek alphabet, xsi in my code actually corresponds to the \chi letter in the mathematical description!

#5

@falbarelli, thank you for your detailed reply. The math is much clearer than the code. I understand that the function \chi is some sort of covariance and that you know the value of this covariance at some specific lags v_i - v_j.

Can you confirm that your only optimization variable is x? How is it connected to the rest of the problem? Everything looks like data to me; are the matrices fixed given the function \chi and the collection T?

#6

If your function \chi is parametrized by x, then you have to specify its structure in order to make sure this constitutes a convex optimization problem. Of course, I may be interpreting the entire problem incorrectly; please let me know if I am off.

#7

@juliohm thanks again for taking time to read my post!

Sorry, but I just noticed that I forgot to mention another point, and maybe I have not been clear enough on some others:
• The function \chi is discretized and thus becomes a matrix on a 2D grid; when I generate the collection T, I make sure that the differences between elements of T actually lie on the grid.
• My optimization variables are thus both x and the matrix that represents the discretized version of \chi (i.e. xsi in my code).
• My data are the vector collection T and the values c_j (i.e. the values I know for the function \chi).

In case you are interested, some more background:

The function \chi is actually the Fourier transform of a real function W (the Wigner function, in physical terms).
The idea behind all this is Bochner's theorem: if the real function W is positive, then the matrices M0 and M1 must both be positive definite. If W also takes negative values, M0 is not positive definite, while M1 remains so.

The idea is to find the minimal multiple of the identity that must be added to M0 in order to make it positive definite. Intuitively, this quantifies how “negative” the function W is. The function \chi is known only at certain particular points.

#8

Thank you for putting the problem into context, I understand it better now.

I believe a working solution would start by defining the matrix M0plus = M0 + x*m*I as one of your `Semidefinite()` optimization variables (along with M1). By doing so, you help Convex.jl understand the semidefinite constraints. You would then proceed to write the other constraints entry by entry on these two matrices in a loop, using the data and the scalar optimization variable x. The job of the loop is merely to tie the matrices M0plus and M1 to the scalar x.

I don’t know if it applies to your problem, but you may also find the `conv` operation supported in Convex.jl useful, since you’re dealing with FFTs…
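A hypothetical sketch of the formulation suggested above, assuming current Convex.jl syntax: declare the shifted matrix as a `Semidefinite` variable and tie its entries to the data and to `x` in a loop. Here `c` is random placeholder data standing in for the known entries; for the complex-valued M1, `HermitianSemidefinite` would be used instead of `Semidefinite`.

```julia
using Convex, LinearAlgebra

m = 3                        # hypothetical size
c = Symmetric(randn(m, m))   # placeholder for the known data entries
x = Variable()
M0plus = Semidefinite(m)     # stands for M0 + x*m*I, PSD by construction

# Tie each entry of M0plus to the data; the x*m shift appears on the diagonal.
constraints = Constraint[]
for i in 1:m, j in 1:m
    rhs = (i == j) ? c[i, j] + x * m : c[i, j]
    push!(constraints, M0plus[i, j] == rhs)
end
```

Because `M0plus` is declared `Semidefinite`, the inequality M0 + x*m*I ⪰ 0 is enforced implicitly; only the entrywise equalities remain to be written.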

#9