JuMP: Squared Frobenius norm objective

I think what you’ve done is reasonable.
Below is how I would write your example, plus a SecondOrderCone version as you suggested:

using JuMP

import Ipopt
import LinearAlgebra
import SCS

n = 5
K = 100

## Generate data:
X = rand(n, K)
A = rand(n, n)
Q = LinearAlgebra.Symmetric(rand(n, n))

## Nonlinear version:
solver = Ipopt
model = Model(solver.Optimizer)
@variable(model, P[1:n, 1:n], Symmetric)
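# Minimize the squared Frobenius norm, written entrywise so the objective
# is a smooth quadratic that Ipopt can handle directly: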
@objective(model, Min, sum((P*A*X - Q*X).^2));
optimize!(model)
solution_summary(model)

z_nl = objective_value(model)
P_nl = value.(P)

## SoC version:
solver = SCS
model = Model(solver.Optimizer)
@variable(model, P[1:n, 1:n], Symmetric)
@variable(model, t)
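# Epigraph trick: [t; x] in SecondOrderCone() enforces t >= ||x||_2,
# so minimizing t minimizes the Frobenius norm of P*A*X - Q*X: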
@constraint(model, [t; vec(P*A*X - Q*X)] in SecondOrderCone());
@objective(model, Min, t)
optimize!(model)
solution_summary(model)

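# The SoC model minimizes the norm itself; square it to compare with the
# squared-norm objective of the nonlinear version: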
z_soc = objective_value(model)^2
P_soc = value.(P)

## Compare
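# Objective values and solutions should agree up to solver tolerances: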
isapprox(z_soc, z_nl, rtol=1e-3)

LinearAlgebra.norm(P_soc - P_nl) <= 1e-3

While you’ve probably just given a simplified problem, this type of problem has an “exact” solution via the Moore–Penrose pseudoinverse:

$$P = Q X (A X)^{+}$$
(see Ben-Israel, A., & Greville, T. N. E. (2003). *Generalized Inverses: Theory and Applications*. Springer.)

## "Exact" version
P_pi = Q*X*LinearAlgebra.pinv(A*X)
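# Note: `norm(M, 2)` on a matrix is the 2-norm of its entries, i.e., the Frobenius norm: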
z_pi = LinearAlgebra.norm(P_pi*A*X - Q*X, 2)

# Compare to:
LinearAlgebra.norm(P_soc*A*X - Q*X, 2)
LinearAlgebra.norm(P_nl *A*X - Q*X, 2)

Hopefully you observe how much smaller the residual is in the exact pseudoinverse version. (Note that `P_pi` is not constrained to be symmetric, unlike the JuMP solutions above, which is part of why it can achieve a smaller residual.)
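If you want to verify that the remaining gap comes from the `Symmetric` constraint, here is a minimal sketch (reusing the data and imports above) that solves the same SoC model with an unconstrained `P`; its residual should match the pseudoinverse residual up to solver tolerance:

## Unconstrained version (no Symmetric constraint):
model = Model(SCS.Optimizer)
@variable(model, P[1:n, 1:n])
@variable(model, t)
@constraint(model, [t; vec(P*A*X - Q*X)] in SecondOrderCone())
@objective(model, Min, t)
optimize!(model)
LinearAlgebra.norm(value.(P)*A*X - Q*X, 2)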
