Debugging Container of Constraints in JuMP

I’m trying to debug a container of constraints built with the vectorized linear-algebra syntax, by going in through the MOI interface to check the constraint value at the start_value of the variables. I’ve successfully debugged single constraints this way, but add_constraint doesn’t seem to work on a container, I don’t know how to iterate over the constraints in a container, and if I try to “name” the constraints using the m1c[i=1:N] syntax I seem to get N copies of my N constraints.

A very stripped-down example:

using JuMP
import Ipopt

const N = 6

D1 = [
     -7.5       10.1414       -4.03619       2.24468       -1.34991       0.5
     -1.78636    2.22045e-16   2.52343      -1.15283        0.653548     -0.237781
     0.484951  -1.72126       4.44089e-16   1.75296       -0.786357      0.269701
     -0.269701   0.786357     -1.75296      -1.11022e-16    1.72126      -0.484951
     0.237781  -0.653548      1.15283      -2.52343       -6.66134e-16   1.78636
     -0.5        1.34991      -2.24468       4.03619      -10.1414        7.5
    ]

display(D1)

model = Model(Ipopt.Optimizer)

@variables(model, begin
               m1[1:N]
           end)

for i in 1:N
  set_start_value(m1[i], 1)
end

@constraint(model, m1c, D1 * m1 == ones(N))

display(m1c)

nlp = MOI.Nonlinear.Model()
object = constraint_object(m1c)
MOI.Nonlinear.add_constraint(nlp, object.func, object.set)
vars = all_variables(model)
backend = MOI.Nonlinear.SparseReverseMode()
evaluator = MOI.Nonlinear.Evaluator(nlp, backend, index.(vars))
MOI.initialize(evaluator, Symbol[])
output = zeros(N)
MOI.eval_constraint(evaluator, output, start_value.(vars))
display(output)

Given those starting values, and if I understand things correctly, I expect to get all zeros for the left-hand side of the expression: D1 should be a 6x6 Gauss-Lobatto pseudospectral differentiation matrix, so applying it to a constant vector should give zero, and the constraint should be violated.
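As a quick sanity check of that expectation, independent of JuMP and MOI, each row of D1 applied to a constant vector should come out to roughly zero (up to the rounding of the digits printed above):

# Sanity check: a differentiation matrix times a constant vector should be
# (numerically) zero.
display(D1 * ones(N))
display(maximum(abs.(D1 * ones(N))))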

What I get is just an error:

ERROR: LoadError: MethodError: no method matching add_constraint(::MathOptInterface.Nonlinear.Model, ::Vector{AffExpr}, ::MathOptInterface.Zeros)
The function `add_constraint` exists, but no method is defined for this combination of argument types.

Closest candidates are:
  add_constraint(::MathOptInterface.Nonlinear.Model, ::Any, ::Union{MathOptInterface.EqualTo{Float64}, MathOptInterface.GreaterThan{Float64}, MathOptInterface.Interval{Float64}, MathOptInterface.LessThan{Float64}})
   @ MathOptInterface ~/.julia/packages/MathOptInterface/kRheF/src/Nonlinear/model.jl:122

Stacktrace:
 [1] top-level scope
   @ ~/julia/jump/test.jl:33
in expression starting at /Users/lamont/julia/jump/test.jl:33

I’d like to know how to evaluate those constraints against the starting values. Also, if anyone has any obvious tips about the road I’m going down here, that’d be welcome. That specific constraint seems simple but is giving me major headaches, which is why I’m trying to debug it; if anyone spots an obvious n00b mistake, please let me know.

The MOI.Nonlinear.Model does not support adding vector constraints.
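If you did want to stay with MOI.Nonlinear, one option (a sketch in the spirit of your scalar experiments, not something I have tested here) is to add each row of the vector constraint as its own scalar constraint, using one of the scalar sets from the “closest candidates” in your error:

# Untested sketch: constraint_object(m1c).func is a Vector{AffExpr} and the set
# is MOI.Zeros, so each row can be added individually in MOI.EqualTo(0.0).
object = constraint_object(m1c)
for f in object.func
    MOI.Nonlinear.add_constraint(nlp, f, MOI.EqualTo(0.0))
end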

But you don’t need to use MOI.Nonlinear to evaluate a constraint. Do value(start_value, m1c):

using JuMP
D1 = [
    -7.5       10.1414       -4.03619       2.24468       -1.34991       0.5
    -1.78636    2.22045e-16   2.52343      -1.15283        0.653548     -0.237781
    0.484951  -1.72126       4.44089e-16   1.75296       -0.786357      0.269701
    -0.269701   0.786357     -1.75296      -1.11022e-16    1.72126      -0.484951
    0.237781  -0.653548      1.15283      -2.52343       -6.66134e-16   1.78636
    -0.5        1.34991      -2.24468       4.03619      -10.1414        7.5
]
N = size(D1, 1)
model = Model()
@variable(model, m1[1:N], start = 1)
@constraint(model, m1c, D1 * m1 == ones(N))
value(start_value, m1c)
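If you want to see the rows individually, the constraint function is just a Vector{AffExpr}, so you can evaluate each row at the start values (a small sketch of the same idea). Note that JuMP normalizes D1 * m1 == ones(N) to D1 * m1 - ones(N) in MOI.Zeros(N) (you can see the Zeros set in your error message), so what comes back is the residual of each row at the starting point, constants included:

# Per-row evaluation of the constraint function at the start values.
object = constraint_object(m1c)
value.(start_value, object.func)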

Oh wow, I was making that a lot harder than I needed to…


Hey, kind of unrelated, but I saw you used the @variable(..., start = ) syntax to clean up my set_start_value calls… I adopted that, along with figuring out how to use indexing notation to set vector values. The one remaining bit of real ‘ugliness’ in my code is this kind of thing:

  fix(r1[1,1], r1is[1], force=true)
  fix(r1[1,2], r1is[2], force=true)
  fix(r1[1,3], r1is[3], force=true)
  fix(v1[1,1], v1is[1], force=true)
  fix(v1[1,2], v1is[2], force=true)
  fix(v1[1,3], v1is[3], force=true)

Is there a better way to say v1[1,:] == v1is there?

Is it also more standard to use m1[1:N] and v1[3:N] matrices when doing pseudospectral collocation? I think I tried that first but ran into some kind of trouble broadcasting along the correct matrix dimension, so I flipped it around. (Come to think of it, I was probably defining vectors wrong back then and getting matrices instead; I’ve since fixed that after getting into a fight with the cross product function.)

You can use broadcasting:

fix.(r1[1,:], r1is; force = true)
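
and likewise for the velocity row:

fix.(v1[1,:], v1is; force = true)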

“Is it also more standard …”

This isn’t my area of expertise, so I can’t comment. I will say that, in general, just because it’s a common approach in other modelling packages does not mean that it is a good choice for JuMP. You should always think about how you could better write something to use Julia and JuMP’s strengths.

Yeah, the catch-22 is not knowing Julia very well; I tend to wind up guessing and writing Matlab until I realize that isn’t the right way…


This forum is a great place to post small reproducible snippets (not necessarily the full complicated code) of what you have and ask if there’s a better way :smile: