I would like to solve the following Quadratic Program (QP):
\begin{aligned}
\arg \min_{x} \quad & \frac{1}{2} {x}^{T} P x + {x}^{T} q \\
\text{subject to} \quad & l \leq A x \leq u
\end{aligned}
The matrix P is positive definite and sparse, of size 40,000 × 40,000.
I tried solving it using Convex.jl, but it failed with an Out of Memory error (see https://github.com/jump-dev/Convex.jl/issues/254) at the problem-formulation phase (I have 32 GB of RAM, so memory may not be handled efficiently).
I would like to solve it with Gurobi.jl / Mosek.jl using the direct interface.
Does anyone have a sample code for this?
What about formulating it using JuMP.jl? Will it also have large overhead?
But I get the error: Expression contains an invalid NaN constant. This could be produced by Inf - Inf.
I have Inf in some of the elements of vU (namely, no constraint there). It works with Convex.jl for small matrices.
This is why I approached the forum.
Could it be that JuMP does not accept Inf / -Inf in the constraint vectors (vL and vU above)?
When you give JuMP a constraint A*x .<= vU, my understanding is that it will try to reformulate it into a canonical form f(x) \in Set. If this involves moving the constant term into the f(x) term, and said constant is infinite… you're out of luck.
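A minimal sketch of where such a NaN can arise (pure Julia, no solver involved; the vectors here are made-up illustration data):

```julia
# IEEE arithmetic: subtracting one infinity from another is undefined and yields NaN.
vU  = [1.0, Inf]          # an "unbounded above" row is encoded with Inf
row = [2.0, 3.0]          # one row of the constraint matrix A
# Moving the constant vU to the function side of f(x) ∈ Set means the affine
# expression picks up a -Inf constant; any later combination with a +Inf term
# then produces the dreaded "Inf - Inf":
c = -vU[2] + Inf
println(isnan(c))         # true
```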
For reference, I think this issue is related to the error you’re seeing.
I would recommend introducing an additional variable s = A x, and writing your constraints as A x = s, l \leq s \leq u (which, BTW, is also more solver-friendly, since it avoids duplicating the constraint matrix).
You can set lower/upper bound with set_lower_bound and set_upper_bound, and these shouldn’t error if passed infinite values.
using JuMP;
using Gurobi;

numElements    = size(mA, 2);  # number of decision variables
numConstraints = size(mA, 1);

jumpModel = Model(Gurobi.Optimizer);
@variable(jumpModel, vT[1:numElements]);
@variable(jumpModel, s[1:numConstraints]); # <--- new
@objective(jumpModel, Min, 0.5 * (vT' * mP * vT) + (vT' * vQ));
# The following lines have been updated
@constraint(jumpModel, mA * vT .== s);
set_lower_bound.(s, vL);
set_upper_bound.(s, vU);
# Ready to optimize!
optimize!(jumpModel)
Convex.jl converts everything to conic form using the Disciplined Convex Programming (DCP) rules.
When you give it a quadratic objective, it will check for convexity then reformulate it as a second-order cone constraint. This implies computing a factorization of the objective matrix, which can be costly.
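Concretely, a sketch of the standard reformulation, assuming a factorization P = L L^T (e.g., Cholesky): the quadratic term is replaced by an epigraph variable t constrained to a rotated second-order cone,
\begin{aligned}
\min_{x} \; \frac{1}{2} x^{T} P x + x^{T} q
\quad \Longleftrightarrow \quad
& \min_{x, t} \; t + x^{T} q \\
\text{subject to} \quad & (t, 1, L^{T} x) \in \mathcal{Q}_{r}, \\
& \mathcal{Q}_{r} = \{ (a, b, z) : 2 a b \geq \| z \|^{2},\; a, b \geq 0 \},
\end{aligned}
since 2 t \cdot 1 \geq \| L^{T} x \|^{2} = x^{T} P x. Computing L for a 40,000 × 40,000 matrix is where the cost can blow up, especially if sparsity is not exploited.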
cc @ericphanson: I strongly suspect part of the reason is this line, which ends up computing a dense Cholesky factorization.
When you model the problem with JuMP, no such transformation occurs: the quadratic objective is passed as-is to Gurobi, which will be responsible for convexity checks and internal reformulations (if any).
Ah, could be. Convex has had memory problems long before that line, but that does sound like it could be an issue. Perhaps we could provide a PositiveSemidefinite wrapper that users can apply to promise their matrix is PSD and skip the check.
I am aware of DCP and the idea behind it (long-time CVX user).
I thought that JuMP does some transformations as well in order to align all solvers.
By the way, would letting the user add some knowledge about the problem be helpful?
Namely, will there be any benefit in telling JuMP that the problem is a convex Quadratic Program? Can it pass that information to the optimizer?
I thought that JuMP does some transformations as well in order to align all solvers.
Only if the constraint is not natively supported by the solver. In this case, Gurobi supports arbitrary quadratic objectives (including non-convex), so there is no need to analyze or reformulate it.
Namely, will there be any benefit in telling JuMP that the problem is a convex Quadratic Program?
In this case, no. In other cases (e.g., conic solvers like SCS), you may be better off formulating this as minimizing a single epigraph variable subject to a second-order cone constraint.
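A sketch of that epigraph formulation in JuMP, with SCS as an example conic solver; the small mP and vQ here are illustration data (stand-ins for the large sparse problem), and mP is assumed positive definite so the Cholesky factor exists:

```julia
using JuMP
using LinearAlgebra
using SCS

# Tiny illustrative data.
mP = [2.0 0.0; 0.0 2.0]   # positive definite
vQ = [-2.0, -2.0]
L  = cholesky(mP).U       # mP == L' * L

model = Model(SCS.Optimizer)
set_silent(model)
@variable(model, x[1:2])
@variable(model, t >= 0)  # epigraph variable: t >= 0.5 * x' * mP * x
# Rotated second-order cone: 2 * t * 1 >= ||L * x||^2 = x' * mP * x
@constraint(model, [t; 1.0; L * x] in RotatedSecondOrderCone())
@objective(model, Min, t + vQ' * x)
optimize!(model)
# For this data the minimizer is x ≈ [1, 1] with objective ≈ -2.
```

The solver never sees a quadratic function, only an affine objective and a cone constraint, which is exactly the form conic solvers such as SCS consume natively.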