After we merge this pull request and release it, you can do it with NLPModelsJuMP.jl and NLPModels.jl:
```julia
using JuMP, NLPModelsJuMP, NLPModels

model = Model()
@variable(model, x[1:3])
@variable(model, y[1:3])
@objective(model, Min, sum(x[i] * i - y[i] * 2^i for i = 1:3))
@constraint(model, [i = 1:2], x[i+1] == x[i] + y[i])
@constraint(model, [i = 1:3], x[i] + y[i] <= 1)

nlp = MathOptNLPModel(model)  # NLPModelsJuMP enters here

x0 = zeros(nlp.meta.nvar)  # evaluation point (named x0 to avoid shadowing the JuMP variable x)
grad(nlp, x0)  # objective gradient = c = [1, 2, 3, -2, -4, -8]
jac(nlp, x0)   # constraint Jacobian = A, a 5×6 sparse matrix with 12 nonzeros
cons(nlp, x0)  # c(x0) = A * x0 + g; at x0 = 0 this is the constant term g = zeros(5)
nlp.meta.lcon, nlp.meta.ucon  # bounds in lcon ≤ Ax + g ≤ ucon: ([0, 0, -Inf, -Inf, -Inf], [0, 0, 1, 1, 1])
# constraint indices by kind: equalities, lower-bounded, upper-bounded, ranged
nlp.meta.jfix, nlp.meta.jlow, nlp.meta.jupp, nlp.meta.jrng  # = ([1, 2], [], [3, 4, 5], [])
```
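To see where those 12 Jacobian entries sit, the coordinate-form accessors of the NLPModels API can be used (a short sketch; the entry ordering is whatever NLPModelsJuMP produces):

```julia
rows, cols = jac_structure(nlp)  # 12 (row, column) index pairs of A
vals = jac_coord(nlp, x0)        # the 12 corresponding values of A at x0
```

From there, `nlp` can be handed to any solver built on NLPModels, e.g. `ipopt(nlp)` from NLPModelsIpopt.jl.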
Currently the JuMP model has to be nonlinear for this to work, but after the PR is merged (probably today), the linear example above should work as well.
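In the meantime, one possible workaround (an untested sketch, based only on the limitation just described) is to register the objective through JuMP's nonlinear interface so the model counts as nonlinear:

```julia
# Same model as above, but the objective goes through the nonlinear
# interface so the current release treats the model as nonlinear
# (untested workaround sketch).
@NLobjective(model, Min, sum(x[i] * i - y[i] * 2^i for i = 1:3))
nlp = MathOptNLPModel(model)
```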
cf. @dpo @amontoison