How do I extract the coefficients and RHSs of all constraints in a JuMP linear program? (Needed for KKT)

I am building a bilevel problem and I need to construct the KKT conditions of a linear program written in JuMP. I am wondering whether it is possible to extract all the coefficients and right-hand sides (RHS) of the constraints built in JuMP. What I need are the A, C, b, and d from:

Ax = b
Cx <= d

where x is the decision vector.


Have a look at BilevelOptimization.jl, it might already have code to do what you want.


Thanks @leethargo, I had not seen that package. It will be helpful in formulating my bilevel problem in JuMP. However, it appears to require as input exactly what I am trying to extract from a JuMP model, i.e. what BilevelOptimization.jl refers to as the lower-level constraint matrix and the lower-level slack variable.

See the Constraints page of the JuMP documentation.

JuMP doesn’t have an easy way to get matrices, because it does not represent the problem in an Ax = b-type standard form.
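That said, for purely affine constraints you can recover the rows one at a time. Here is a minimal sketch, assuming every constraint is scalar affine (no quadratic or nonlinear rows), using all_constraints, normalized_coefficient, and normalized_rhs:

using JuMP

model = Model()
@variable(model, x[1:2])
@constraint(model, eq_con, x[1] + 2x[2] == 3)
@constraint(model, le_con, 4x[1] - x[2] <= 5)

vars = all_variables(model)
# equality rows (Ax = b)
for con in all_constraints(model, AffExpr, MOI.EqualTo{Float64})
    row = [normalized_coefficient(con, v) for v in vars]  # one row of A
    rhs = normalized_rhs(con)                              # matching entry of b
    @show row rhs
end
# inequality rows (Cx <= d) work the same way with MOI.LessThan{Float64}
for con in all_constraints(model, AffExpr, MOI.LessThan{Float64})
    row = [normalized_coefficient(con, v) for v in vars]
    rhs = normalized_rhs(con)
    @show row rhs
end

It is tedious for large models, but it avoids any assumption about a standard form.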


After we merge this pull request and release it, you can do it with NLPModelsJuMP.jl and NLPModels.jl:

using JuMP, NLPModelsJuMP, NLPModels
model = Model()
@variable(model, x[1:3])
@variable(model, y[1:3])
@objective(model, Min, sum(x[i] * i - y[i] * 2^i for i = 1:3))
@constraint(model, [i=1:2], x[i+1] == x[i] + y[i])
@constraint(model, [i=1:3], x[i] + y[i] <= 1)

nlp = MathOptNLPModel(model) # NLPModelsJuMP enters here
x = zeros(nlp.meta.nvar)
grad(nlp, x) # = c = [1, 2, 3, -2, -4, -8]
jac(nlp, x)  # = A, a 5x6 sparse matrix with 12 stored entries
cons(nlp, x) # = g = zeros(5)
nlp.meta.lcon, nlp.meta.ucon # l and u in l <= Ax + g <= u = ([0, 0, -Inf, -Inf, -Inf], [0, 0, 1, 1, 1])
# constraint indices for each situation:
nlp.meta.jfix, nlp.meta.jlow, nlp.meta.jupp, nlp.meta.jrng # = [1, 2], [], [3, 4, 5], []

Currently the JuMP model has to be nonlinear for it to work, but after the PR is merged (probably today), it should work with the example above.
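To get the pieces the original question asked for (A, b and C, d for the KKT conditions), continuing from the nlp above, a rough sketch could split the rows using the index sets; this assumes every constraint is linear and that there are no >= or range rows (jlow and jrng empty), as in the example:

x0 = zeros(nlp.meta.nvar)
J = jac(nlp, x0)   # constraint Jacobian (constant here, since all rows are linear)
g = cons(nlp, x0)  # constant term, so each row reads lcon <= J*x + g <= ucon

eqs  = nlp.meta.jfix  # rows with lcon == ucon
ineq = nlp.meta.jupp  # rows with only a finite upper bound

A = J[eqs, :];  b = nlp.meta.lcon[eqs] .- g[eqs]    # Ax = b
C = J[ineq, :]; d = nlp.meta.ucon[ineq] .- g[ineq]  # Cx <= d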

cf. @dpo @amontoison


@abelsiqueira that looks like it will accomplish what I am looking for! I will watch NLPModelsJuMP.jl for the release.

NLPModelsJuMP 0.6.3 is released; here’s a terminal recording (asciicast 326018 on asciinema) showing the example above.


This is awesome! For others, I’ll add a link to the NLPModels README, which helped me work with NLPModelsJuMP.

@abelsiqueira I have a couple of questions, one of which I will create a new thread for. The first one is rather simple, though: cons(nlp, x) in your example returns a vector of zeros, but the documentation says
cons(model, x): evaluate c(x), the vector of general constraints at x
I don’t understand what cons is returning; can you please explain it in terms of your example?

Here is my more in-depth, related question: How to map JuMP VariableRef to NLPModelMeta indices?

NLPModels assumes a model with constraints of the form lcon <= c(x) <= ucon. It could be that c(x) = Ax, and lcon = ucon = b, to signify Ax = b, but it could also be that c(x) = Ax - b, and lcon = ucon = 0 for the same problem. cons(nlp, x) = c(x).
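A quick way to see this with the example above (just a sketch, relying only on the constraints being affine) is to evaluate cons at a nonzero point and compare it with the Jacobian-vector product:

xtest = ones(nlp.meta.nvar)
x0 = zeros(nlp.meta.nvar)
A = jac(nlp, x0)                              # constant Jacobian for linear constraints
cons(nlp, xtest) ≈ A * xtest + cons(nlp, x0)  # true under either convention above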
Maybe for NLPModelsJuMP, there is some internal assumption on which form is used. @amontoison, do you know?

It’s still not released, but you can take a look at https://github.com/joaquimg/BilevelJuMP.jl
There are many examples in the test folder for building bilevel problems in JuMP.

Thanks @joaquimg, this looks really useful (I looked through some of the examples). Is this comment a TODO reminder, or have you implemented the KKT conditions for a linear lower-level problem?

KKT conditions are implemented.
The thing is, there are a few ways to do it; that TODO is just a list of the ones I could remember, and most of them are already implemented.
If you want a MIP formulation you should use SOS1Mode; if you want an NLP formulation you should use ProductMode, as in the tests.
Once you add the package (via cloning) you should be able to easily copy and paste the examples and run the models.
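A rough sketch of what such a model looks like, based on the example in the BilevelJuMP README; the pre-release interface may differ slightly, so treat the constructor and mode names below as assumptions:

using BilevelJuMP, Ipopt

# ProductMode reformulates the lower-level complementarity as products (an NLP
# formulation); SOS1Mode() with a MIP solver would give the MIP formulation instead.
model = BilevelModel(Ipopt.Optimizer, mode = BilevelJuMP.ProductMode(1e-9))

@variable(Upper(model), y >= 0)
@variable(Lower(model), x >= 0)

@objective(Upper(model), Min, 3x + y)
@constraint(Upper(model), x <= 5)

@objective(Lower(model), Min, -x)
@constraint(Lower(model), x + y <= 8)

optimize!(model)
value(x), value(y)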
Feel free to open issues.


In NLPModelsJuMP, linear constraints (lcon <= Ax + b <= ucon) are transformed internally to lcon - b <= Ax <= ucon - b. We only store the Jacobian A and evaluate the constraints with a sparse matrix-vector product A * x.

With this internal modification, if you only have inequality constraints in your JuMP model, nlp = MathOptNLPModel(model) gives the standard form of your LP and SlackModel(nlp) gives the canonical form.
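Continuing with the nlp from the example above, a small sketch of the effect of SlackModel (assuming SlackModel is available from NLPModels as in the version used here; in newer releases it lives in NLPModelsModifiers):

snlp = SlackModel(nlp)            # one slack variable per inequality row
nlp.meta.nvar, nlp.meta.ncon      # (6, 5)
snlp.meta.nvar, snlp.meta.ncon    # (9, 5): 3 slacks added
snlp.meta.lcon == snlp.meta.ucon  # true: every row is now an equality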
