Solving a quadratic program inside a JuMP model solve

I’m trying to solve a two-step optimization problem (to fit a complex econometric model) that requires solving a [fairly straightforward] quadratic program (as a function of the parameters of the model) at every iteration.

Is there any issue with initializing a new JuMP model to solve the quadratic program at each step? Would a different approach (perhaps a direct quad-prog solver or something) be preferable?

A simple example of what I want to do (for intuition) might be something like this:

# Quad Prog Function
using JuMP, Ipopt

function solveQuadProg(H, f, ceq, beq)

    m = Model(solver=IpoptSolver(print_level=0, tol=1e-12))
    
    @variables m begin
        b[i=1:length(f)] >= 0
    end
    
    @objective(m, Min, sum(H[i]*b[i]^2 + f[i]*b[i] for i=1:length(f) ))
    
    @constraint(m, sum( b[i] * ceq[i] for i=1:length(ceq) ) == beq)
    
    status = JuMP.solve(m)
    
    b_min = getvalue(b)
    
    return b_min

end


### Main Model

M_main = Model(solver=IpoptSolver())

@variables M_main begin
    f_var[i=1:N]
    beq_var
end

## This would require solveQuadProg to be registered properly, etc., and wouldn't work as is, but it gets the idea across
@expression(M_main, b_quadprog[i=1:N], solveQuadProg(H_data, f_var, ceq_data, beq_var)[i])

@objective(M_main, Min, sum((someData[i] - b_quadprog[i])^2 for i=1:N) / N)

status = JuMP.solve(M_main)
 

Is there any issue with initializing a new JuMP model to solve the quadratic program at each step?

If the model generation time is a significant bottleneck in your code, then possibly yes. Otherwise, no. I’d suggest doing the easy thing (rebuilding the model from scratch each time) first, and then trying something fancier if it’s too slow.
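(For concreteness, the easy approach is just calling a helper like your `solveQuadProg` in a loop and rebuilding the JuMP model on every call; a minimal sketch, where the data arrays, starting point, iteration limit, and update rule are all made-up placeholders:)

# Rebuild-and-solve at every outer iteration; simple, but allocates a fresh model each time.
f_current = copy(f_init)                 # hypothetical starting point
for k = 1:max_iter                       # hypothetical iteration limit
    b_k = solveQuadProg(H_data, f_current, ceq_data, beq_data)
    f_current = update_parameters(f_current, b_k)   # hypothetical outer update
end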

See also https://github.com/JuliaOpt/JuMP.jl/issues/1241.

I’m working on a package currently called SimpleQP, built on top of MathOptInterface.jl (the new backend for future versions of JuMP), which allows you to set up this kind of ‘parameterized QP’ and efficiently solve instances of it (with zero allocation). See https://github.com/JuliaOpt/JuMP.jl/issues/1241#issuecomment-385806395 for a demo.

Note that the package is all of 11 days old, has zero documentation, has only been tested with the OSQP QP solver, and is still likely to change significantly.


You could consider embedding the inner QP’s KKT optimality conditions directly as extra constraints and variables in the outer problem. This is a pretty standard approach for bilevel problems.
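For concreteness, here’s a rough (untested) sketch of what that embedding could look like for the inner QP above, with a multiplier λ on the equality constraint and multipliers μ on the b >= 0 bounds; all names are illustrative:

# KKT system of:  min sum(H[i]*b[i]^2 + f[i]*b[i])  s.t.  ceq'b == beq,  b >= 0
@variables M_main begin
    b[1:N] >= 0    # inner primal variables
    λ              # multiplier on the equality constraint
    μ[1:N] >= 0    # multipliers on b >= 0
end

# Stationarity: gradient of the inner Lagrangian w.r.t. b is zero
@constraint(M_main, [i=1:N], 2*H_data[i]*b[i] + f_var[i] + λ*ceq_data[i] - μ[i] == 0)

# Primal feasibility
@constraint(M_main, sum(b[i]*ceq_data[i] for i=1:N) == beq_var)

# Complementary slackness (bilinear, and the source of most of the difficulty)
@NLconstraint(M_main, [i=1:N], μ[i]*b[i] == 0)

Everything except the complementarity rows is linear in the decision variables.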

Yeah, I thought of that and tried implementing it in a previous version of my model. It had a hard time finding feasible solutions, which discouraged me, but I’ve made some simplifications to the model since then that might help. [update - it is having trouble finding feasible solutions… :X]

If this doesn’t work, do you think it makes any sense to use a QP solver in the form of a function, the solution to which is passed into the main level as an expression? I know JuMP isn’t really designed for this kind of thing, but I’d prefer to work in it rather than going back to MATLAB, and I could provide a manual gradient to the QP…
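Concretely, I’m imagining JuMP’s `JuMP.register` mechanism with a hand-coded gradient. Registered functions have to return a scalar, so it would have to be the inner optimal value (or one component of the solution) rather than the whole vector; a sketch with made-up data names:

# Optimal value of the inner QP as a scalar function of the outer variables f.
function qp_value(f...)
    b = solveQuadProg(H_data, collect(f), ceq_data, beq_data)
    return sum(H_data[i]*b[i]^2 + f[i]*b[i] for i = 1:length(b))
end

# Hand-coded gradient, filled in place. By the envelope theorem,
# the derivative of the optimal value w.r.t. f[i] is just b[i] at the optimum.
function ∇qp_value(g, f...)
    b = solveQuadProg(H_data, collect(f), ceq_data, beq_data)
    g[1:length(b)] = b
end

JuMP.register(M_main, :qp_value, N, qp_value, ∇qp_value)
# Splatting support in the NL macros varies by JuMP version; spell out
# the arguments explicitly if f_var... is rejected.
@NLobjective(M_main, Min, qp_value(f_var...))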

Thanks for the thought!

Wondering if anyone has any thoughts on this being a good/bad approach? It seems like using the KKT optimality conditions as additional constraints makes it very difficult for the solver to find a feasible point…

In my experience JuMP probably won’t be the limiting factor here: it’s more likely to be the startup time of your solver. I recommend taking a look at the wrapper code for your solver and figuring out if there is a way to keep a persistent one that you just reset every time (even this may not work well).

I’ve circled back to trying to get the KKT optimality conditions approach to work. I think the issue I’m hitting is this: my quadratic program requires that the solution be non-negative (i.e. b[t] >= 0 for all t). When the boundary of 0 is not hit, the KKT model works perfectly. However, when the boundary is hit, the solver craps out and tells me there is no feasible solution.
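(For context on why the active bound matters: complementarity constraints like μ[t]*b[t] == 0 with both variables nonnegative admit no strictly feasible point, so the standard constraint qualifications that interior-point methods rely on fail exactly where the bound is active. A standard workaround from the MPCC literature that I may try is relaxing complementarity slightly; a sketch, with ε made up:)

# MPCC-style relaxation: replace μ[t]*b[t] == 0 with μ[t]*b[t] <= ε
ε = 1e-6   # hypothetical tolerance; tighten over a sequence of solves
@NLconstraint(M_main, [t=1:N], μ[t]*b[t] <= ε)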

@miles.lubin, @joehuchette, and @ExpandingMan - do you guys know what might be going on?
My notebook is here and the data behind it is uploaded to github as well

Your links are broken for me.

I only have the vaguest idea of what you are trying to do, but my first instinct would be to ask whether you can be sure that your Q matrix (i.e. the matrix of coefficients of the quadratic term in your objective function) is guaranteed to be positive definite. If it isn’t, your problem will not be convex. Most solvers should complain specifically that they got a non-positive-definite matrix rather than reporting the problem as infeasible in that case, so that’s not necessarily a great candidate for what’s going on. Can you fix your links?
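(As a quick numerical sanity check, assuming you have the matrix as a dense array `Q`:)

# On Julia >= 0.7 these live in LinearAlgebra; on 0.6 they are in Base.
isposdef(Symmetric(Q))   # true iff Q (symmetrized) is positive definite
eigmin(Symmetric(Q))     # smallest eigenvalue, to see how much margin there is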

Sorry, I just realized that github repo was private. Does this work?

I can prove that my Q is positive definite, actually. The math is here

Ah, ok, good. I’ve looked at this briefly; I’m afraid I can’t offer anything very helpful without going through all of it thoroughly, but if it were me, I’d have far more confidence in the JuMP solution being correct than in my having manually computed and entered the KKT conditions without any errors.

Can you try a different solver and see what error you get?

I’m sure you have been wrestling with this for a while now, but since it isn’t that much code, it might be worth entering the KKT code from scratch just to be extra sure that it’s right. One of the big advantages of Julia and JuMP is that you could have done, e.g.

@objective(m, Min,
           -γ*q_b⋅(b .- α*c) - (γ/2)*sum(σ_sq .* (b .- α*c).^2))

which might be helpful for making things a little bit clearer as you try to spot any possible errors (granted I have no idea how good the Jupyter notebook unicode input plugin is, if it exists at all).

Also, why are you allowing b and omega to go very slightly negative in the KKT formulation? It might not matter, but I wouldn’t do that.

Thanks for this! I’ve tried a bunch of solvers (Gurobi among them) and the results are very similar. I’m very confident that the JuMP model is correct, and I’ve double-checked the KKT conditions. What’s weird is that I keep getting infeasibility errors… I’m honestly not sure why that’s happening. Any insight would be greatly appreciated.

Re: the slightly negative variable constraints - this was my attempt to make it a bit easier for the solver to find something feasible, but it didn’t work :frowning:

the results are very similar

Does Gurobi prove infeasibility or does it crash with numeric error?

Provided that there is a feasible solution, you are likely hitting numerical error. This comment is relevant:

It could be numeric error or tolerances, as @odow suggested. Alternatively, you can solve the interior optimization problem during the function evaluation for the econometric problem and compute the analytic gradient of the econometric objective function by hand (the argmax of your interior function should be easy to differentiate because of the envelope theorem). If other parts of your function are better handled by automatic differentiation, you can do the differentiation manually for your argmax and overload that function when the inputs are duals. See, for example, this post.
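A rough sketch of that last idea, overloading on dual numbers so that ForwardDiff picks up a hand-derived envelope-theorem derivative instead of trying to differentiate through the solver (all function names here are hypothetical):

using ForwardDiff

# Plain evaluation: solve the inner QP at θ and return the optimal value.
inner_value(θ::Real) = qp_optimal_value(θ)          # hypothetical inner solve

# Dual-number overload: reuse the value and attach the analytic derivative.
function inner_value(θ::ForwardDiff.Dual{T}) where {T}
    θv = ForwardDiff.value(θ)
    v  = qp_optimal_value(θv)
    dv = envelope_derivative(θv)                    # hypothetical, hand-derived
    return ForwardDiff.Dual{T}(v, dv * ForwardDiff.partials(θ))
end

Then any ForwardDiff-based gradient of the outer objective will use the analytic derivative for the inner solve automatically.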