# Solving a Convex Quadratic Program Using Gurobi & Mosek

I would like to solve the following Quadratic Program (QP):

$$
\begin{aligned} \arg \min_{x} \quad & \frac{1}{2} x^{T} P x + x^{T} q \\ \text{subject to} \quad & l \leq A x \leq u \end{aligned}
$$

The matrix P is positive definite. It is a sparse matrix of size 40,000 × 40,000.
I tried solving it using Convex.jl, but it failed with an Out of Memory error at the problem formulation phase (see Massive RAM for moderate problem · Issue #254 · jump-dev/Convex.jl · GitHub). I have 32 GB of RAM, so the memory may not be handled efficiently.

I would like to solve it with Gurobi.jl / Mosek.jl using the direct interface.
Does anyone have a sample code for this?

What about formulating it using JuMP.jl? Will it also have a large overhead?

JuMP should work fine for this.

The documentation has a range of examples

I tried following the documentation and produced this:

```julia
using JuMP;
using Gurobi;

jumpModel = Model(Gurobi.Optimizer);
@variable(jumpModel, vT[1:numElements]);
@objective(jumpModel, Min, 0.5 * (vT' * mP * vT) + (vT' * vQ));
@constraint(jumpModel, vL .<= mA * vT);
@constraint(jumpModel, mA * vT .<= vU);
optimize!(jumpModel);
```


But I get the error: `Expression contains an invalid NaN constant. This could be produced by Inf - Inf`.
I have Inf in some of the elements of vU (namely, no constraint on that row). It works with Convex.jl for small matrices.

This is why I approached the forum.
Could it be that JuMP doesn't expect Inf / -Inf in the constraint vectors (vL and vU above)?

The solution I found for using JuMP.jl is discarding those vacuous inequalities.
So the function I created, for reference, is:

```julia
using JuMP;
using Gurobi;

function SolveQpJump(mP, vQ, mA, vL, vU; hOptFun = nothing)

    if (isnothing(hOptFun))
        hOptFun = Gurobi.Optimizer;
    end

    numElements = size(mP, 1);

    jumpModel = Model(hOptFun);
    @variable(jumpModel, vT[1:numElements]);
    @objective(jumpModel, Min, 0.5 * (vT' * mP * vT) + (vT' * vQ));
    vV = vL .≠ -Inf;  # Keep only the finite lower bounds
    @constraint(jumpModel, vL[vV] .<= mA[vV, :] * vT);
    vV .= vU .≠ Inf;  # Keep only the finite upper bounds
    @constraint(jumpModel, mA[vV, :] * vT .<= vU[vV]);
    optimize!(jumpModel);

    return value.(vT);

end
```


I still wonder what’s the proper way to define Inf in JuMP.jl.

When you give JuMP a constraint `A*x .>= vU`, my understanding is that it will try to reformulate it in a canonical form `f(x) ∈ Set`. If this involves moving the constant term into the `f(x)` term, and said constant is infinite… you're out of luck.
For reference, I think this issue is related to the error you're seeing.

I would recommend you create an additional variable `s = A*x` and write your constraints as `A*x = s`, `l ≤ s ≤ u` (which, BTW, is more solver-friendly, as it avoids duplicating the constraint matrix).

You can set lower/upper bound with set_lower_bound and set_upper_bound, and these shouldn’t error if passed infinite values.

```julia
using JuMP;
using Gurobi;

numConstraints = size(mA, 1);

jumpModel = Model(Gurobi.Optimizer);
@variable(jumpModel, vT[1:numElements]);
@variable(jumpModel, s[1:numConstraints]);  # <--- new
@objective(jumpModel, Min, 0.5 * (vT' * mP * vT) + (vT' * vQ));

# The following lines have been updated
@constraint(jumpModel, mA * vT .== s);
set_lower_bound.(s, vL);
set_upper_bound.(s, vU);

optimize!(jumpModel)
```


I would even go:

```julia
jumpModel = Model(Gurobi.Optimizer)
@variable(jumpModel, vL[i] <= s[i=1:numConstraints] <= vU[i])
```


Does it support Inf in this formulation?

By the way, what’s the reason Convex.jl can’t formulate the problem with the same efficiency as JuMP?

For Gurobi it does. Some solvers will have issues with infinite bounds, which is the reason for the issue linked above.

JuMP and Convex.jl are very different, so it's not surprising they have different performance characteristics.

Convex.jl converts everything to conic form using the Disciplined Convex Programming (DCP) rules.
When you give it a quadratic objective, it will check for convexity then reformulate it as a second-order cone constraint. This implies computing a factorization of the objective matrix, which can be costly.
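For context, the epigraph reformulation sketched above looks roughly like this (my own sketch of the standard construction, with $L$ denoting a factor such that $P = L L^{T}$):

$$
\begin{aligned}
\min_{x, t} \quad & t + x^{T} q \\
\text{subject to} \quad & \tfrac{1}{2} \| L^{T} x \|^{2} \leq t \\
& l \leq A x \leq u
\end{aligned}
$$

Since $\| L^{T} x \|^{2} = x^{T} P x$, the constraint $\tfrac{1}{2} \| L^{T} x \|^{2} \leq t$ is a rotated second-order cone constraint on $(t, 1, L^{T} x)$, which is where the factorization of $P$ comes in.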
cc @ericphanson: I strongly suspect part of the reason is this line, which will end up computing a dense cholesky factorization.

When you model the problem with JuMP, no such transformation occurs: the quadratic objective is passed as-is to Gurobi, which is then responsible for convexity checks and internal reformulations (if any).


Ah, could be. Convex has had memory problems long before that line but that does sound like it could be an issue. Perhaps we can provide a PositiveSemidefinite wrapper users can use to promise their matrix is PSD and skip the check.


I am aware of DCP and the idea behind it (long-time CVX user).
I thought that JuMP does some transformations as well in order to align all solvers.

Namely, will there be any benefit telling JuMP the problem is a Convex Quadratic Program? Can it pass that to the optimizer?

> I thought that JuMP does some transformations as well in order to align all solvers.

Only if the constraint is not natively supported by the solver. In this case, Gurobi supports arbitrary quadratic objectives (including non-convex), so there is no need to analyze or reformulate it.

> Namely, will there be any benefit telling JuMP the problem is a Convex Quadratic Program?

In this case, no. In other cases (e.g., conic solvers like SCS), you may be better off formulating this as minimizing a single epigraph variable with a second-order cone constraint.
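For illustration, here is a rough JuMP sketch of that epigraph formulation (my own untested sketch, not from the thread; it assumes `mP = mL * mL'` via a Cholesky factorization, uses SCS.jl, and fills in small illustrative data):

```julia
using JuMP;
using LinearAlgebra;
import SCS;

# Illustrative data (hypothetical); mP = mL * mL' is the Cholesky factorization of mP
numElements    = 3;
numConstraints = 2;
mL = cholesky(Matrix(2.0I, numElements, numElements)).L;
vQ = ones(numElements);
mA = [1.0 0.0 0.0; 0.0 1.0 1.0];
vL = fill(-1.0, numConstraints);
vU = fill(1.0, numConstraints);

jumpModel = Model(SCS.Optimizer);
@variable(jumpModel, vT[1:numElements]);
@variable(jumpModel, t);  # Epigraph variable for the quadratic term
@variable(jumpModel, vL[ii] <= s[ii = 1:numConstraints] <= vU[ii]);
@constraint(jumpModel, mA * vT .== s);
# 0.5 * vT' * mP * vT <= t  ⟺  (t, 1, mL' * vT) in the rotated second-order cone
@constraint(jumpModel, [t; 1; mL' * vT] in RotatedSecondOrderCone());
@objective(jumpModel, Min, t + vT' * vQ);
optimize!(jumpModel);
```

The objective `t + vT' * vQ` replaces the quadratic term with the epigraph variable, so the model handed to the conic solver contains only linear and second-order cone constraints.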


Just to update the thread, @mtanneau kindly updated Convex to use a sparse factorization when possible (Use sparse symmetric factorization when possible by mtanneau · Pull Request #457 · jump-dev/Convex.jl · GitHub) and I added an assume_psd=false argument to allow users to opt-out of the check altogether. Both are in Convex v0.14.15 which should be out shortly (New version: Convex v0.14.15 by JuliaRegistrator · Pull Request #44875 · JuliaRegistries/General · GitHub).
