Hi, I am attempting to rewrite some MATLAB code in Julia/JuMP, since the MATLAB version is slow due to Yalmip's parsing speed.
However, the problem that JuMP sends to the solver ends up much larger (and slower to solve) than Yalmip's output.
I don't think I am using any "inefficient" functions to build the problem, but the difference is huge and I'd really appreciate help from someone more experienced.
The difference in size can look like this for the same problem with the same optimal value:
Matlab:
Optimizer - Constraints : 106
Optimizer - Cones : 1
Optimizer - Scalar variables : 57
Julia:
Constraints : 1887
Cones : 1
Scalar variables : 108
The code for both is below. Both versions can be scaled by changing the variables n and m
(the results above are for n = 10, m = 50).
The Julia code calls MATLAB at the beginning in order to generate the same input data;
this is just to show that we are indeed solving the same problem.
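In case it helps, this is the problem both scripts build (just a summary of the code below, with $V(x) = x^\top H x$ and $h_i(x,u) = u_i\,(u_i - (W_1 x)_i)$):

$$
\begin{aligned}
\min_{H,\,r}\quad & \|H\|_F^2 \\
\text{s.t.}\quad & V(x) - V(Ax + Bu) - x^\top x - \textstyle\sum_i r_i(x,u)\, h_i(x,u) \ \text{is SOS},\\
& H \succeq 0,
\end{aligned}
$$

where the $r_i$ are polynomial multipliers for the equality constraints $h_i = 0$.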
Matlab code:
%% Data preparation
clear all
close all
rng(3)
% State space dimension
n = 10;
% Number of neurons
m = 50;
% System
A = randn(n,n);
A = 0.7 * A / norm(A,2);
B = round(rand(n,2)); B = B / norm(B,2);
% Neural network
W1 = randn(m,n); W1 = W1 / norm(W1,2);
W2 = randn(2,m); W2 = W2 / norm(W2,2);
B = B*W2; % (no need to use B and W2 separately)
% Coordinate transformation x <- Qx
Q = randn(n,n);
A = inv(Q)*A*Q;
B = inv(Q)*B;
W1 = W1*Q;
%% Verification
% Degree of V
d = 2;
% Define Yalmip variables
H = sdpvar(n,n); % Matrix parametrizing V
x = sdpvar(n,1); % State
u = sdpvar(m,1); % Control
% Lyapunov candidate V
V = x'*H*x;
% Equality constraints u*(u-W1*x) == 0
h = u.*(u-W1*x);
% Polynomial multipliers for equality constraints
R = []; CR = [];
for i = 1:numel(h)
[r,cr] = polynomial([x;u], d - degree(h(i)));
CR = [CR ; cr];
R = [R ; r];
end
Vp = x'*A'*H*A*x + 2*u'*B'*H*A*x + u'*B'*H*B*u; % Equivalent to Vp = replace(V,x,A*x + B*u);
% Constraints
con = [ sos(V - Vp - x'*x - R'*h), H >= 0 ];
% Solver options
options = sdpsettings('solver','mosek');
% Solve
sol = solvesos(con,norm(H(:),2)^2,options,[H(:);CR])
fprintf("objective = %6.6f\n",norm(double(H(:)),2)^2)
Julia code:
using MosekTools
using JuMP
using DynamicPolynomials
using LinearAlgebra
using SumOfSquares
using MATLAB
# generate same data as MATLAB
mat"
clear all, close all
rng(3)
% State space dimension
n = 10;
% Number of neurons
m = 50;
% System
A = randn(n,n);
A = 0.7 * A / norm(A,2);
B = round(rand(n,2)); B = B / norm(B,2);
% Neural network
W1 = randn(m,n); W1 = W1 / norm(W1,2);
W2 = randn(2,m); W2 = W2 / norm(W2,2);
B = B*W2; % (no need to use B and W2 separately)
% Coordinate transformation x <- Qx
Q = randn(n,n);
A = inv(Q)*A*Q;
B = inv(Q)*B;
W1 = W1*Q;
$Q = Q;
$A = A;
$B = B;
$W1 = W1;
$n = n;
$m = m;
"
n = Int(n) # MATLAB passes the scalars back as Float64
m = Int(m)
## Verification
d = 2; # degree of V
model = SOSModel(Mosek.Optimizer)
# Variables
@variable(model,H[1:n,1:n],PSD); # Matrix parametrizing V
@polyvar(x[1:n]) # State
@polyvar(u[1:m]) # Control
# Lyapunov candidate V
V = x'*H*x;
# constraints u.*(u-W1*x) == 0
h = u.*(u-W1*x);
R = Vector{Polynomial{true, VariableRef}}(undef,length(h))
for i = 1:length(h)
global R,h
ds = d - maxdegree(h[i])
R[i] = @variable(model,[1:1], Poly(monomials(x, 0:ds )))[1];
end
Vp = subs(V,x => A*x + B*u)
# Constraints
@constraint(model, V - Vp - x'*x - R'*h >= 0 ); # (V - Vp - x'*x) on set (h == 0)
@objective(model, Min, sum(H.^2) ); # squared Frobenius norm of H (V's Gram matrix)
JuMP.optimize!(model)
@show objective_value(model)
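A quick way to break down where the JuMP constraints come from (using JuMP's standard introspection functions, if I'm not mistaken) is something like:

# Count the JuMP constraints grouped by (function, set) type after the solve
for (F, S) in list_of_constraint_types(model)
    println(F, " in ", S, ": ", num_constraints(model, F, S))
end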
Info about my installation/package versions:
Julia 1.6.0
JuMP v0.21.8
SumOfSquares v0.4.6
DynamicPolynomials v0.3.18
MosekTools v0.9.4
MATLAB v0.8.2
Thanks in advance.