Your equations appear to be invariant under the scaling D, G \to \alpha D, \alpha G for any \alpha > 0.
So, if you are looking for a solution where D is PD and not just PSD, you can exclude the degenerate (merely PSD) solutions by adding the constraint D - I \succeq 0, for example, instead of D \succeq 0. Or, equivalently, change variables from D to D = X + I where X is PSD.
@stevengj Thanks. I tried your approach, but when I examine the eigenvalues of G, I find that one of them is negative (so G is not even PSD). Am I misunderstanding something? Here is the modified code:
using JuMP
using SCS
using LinearAlgebra

model = Model(SCS.Optimizer)
set_optimizer_attribute(model, "max_iters", 1000000)
set_optimizer_attribute(model, "eps_abs", 1e-16)

s = 2
r = 2
A = [1/2 0; 4/3 1/2]
U = [1 -1/2; 1 -5/6]
B = [19/16 19/16; 1/4 3/4]
V = [1 -3/4; 0 0]

@variable(model, G[1:r, 1:r])
@variable(model, D[1:s, 1:s])
M = [D*A + A'*D - B'*G*B  D*U - B'*G*V;
     U'*D - V'*G*B        G - V'*G*V]
@expression(model, GG, G - I(r))
@expression(model, DD, D - I(s))
@constraint(model, Symmetric(GG) >= 0, PSDCone())
@constraint(model, Symmetric(DD) >= 0, PSDCone())
@constraint(model, Symmetric(M) >= 0, PSDCone())
optimize!(model)

G_opt = value.(G)
D_opt = value.(D)
M_opt = [D_opt*A + A'*D_opt - B'*G_opt*B  D_opt*U - B'*G_opt*V;
         U'*D_opt - V'*G_opt*B            G_opt - V'*G_opt*V]
println(G_opt)
println(minimum(eigvals(G_opt)))
println(D_opt)
println(minimum(eigvals(D_opt)))
println(M_opt)
println(minimum(eigvals(Symmetric(M_opt))))
Did it claim to find a feasible solution? If so, something is wrong: your constraint G - I \succeq 0 is violated whenever G has any eigenvalue < 1, so the optimizer must be failing.
(Note that it should be sufficient to constrain D - I \succeq 0 and G \succeq 0 to exclude the trivial solution. But if a solution exists where both D and G are PD, not just PSD, then you can use your formulation.)
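In JuMP, that weaker pair of constraints might be sketched as follows (untested; it assumes the same model, s, and r as in the code above, and declares the variables as symmetric so the PSD constraints are well-posed):

```julia
# Sketch: exclude the trivial solution D = G = 0 with the weaker pair
# D - I ⪰ 0 and G ⪰ 0, instead of also requiring G - I ⪰ 0.
@variable(model, D[1:s, 1:s], Symmetric)
@variable(model, G[1:r, 1:r], Symmetric)
@constraint(model, D - I(s) in PSDCone())  # D ⪰ I rules out D = 0
@constraint(model, G in PSDCone())         # G need only be PSD
```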
(I don’t really use JuMP myself, so maybe I’m misreading your syntax.)
Thanks a lot. I have included the output I got, which shows that the problem was solved to high precision. Perhaps someone else can point out anything I did incorrectly.
I haven’t tried your problem, but I noticed one obvious thing:
Using Symmetric is a hint that tells JuMP the matrix GG is symmetric. If the matrix is not actually symmetric, you can get incorrect results, because JuMP will not add constraints to enforce that the matrix is symmetric.
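One way to fix this, sketched here on the assumption that you do want G and D themselves to be symmetric, is to declare them as symmetric matrix variables, so JuMP adds the off-diagonal equality constraints for you:

```julia
@variable(model, G[1:r, 1:r], Symmetric)
@variable(model, D[1:s, 1:s], Symmetric)
# G - I(r) is now symmetric by construction, so the PSD
# constraint does not rely on an unverified Symmetric() hint:
@constraint(model, G - I(r) in PSDCone())
```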
From a conceptual point of view though, I think you need to reconsider what you’re trying to do. The solver will find you a solution that satisfies the constraints as you have written them. If you want a different solution, it means that you are missing some constraints. That might, for example, mean adding a constraint like @constraint(model, G[1, 1] == 1) to fix one of the variables to a known value, around which the rest of the solution can be built.
Thanks. But I think we do need Symmetric() because I want G to be symmetric, right? I tried it without, and the solver did not converge anyway.
I think you are right about your second point. The problem is that I don’t know what the solution looks like, so adding a constraint like that may make the problem infeasible.