I am interested in getting the values of my constraints evaluated at a particular spot in the design space. Here is an example to explain what I am trying to do:

```
using JuMP
m = Model()
@variable(m, x, start = 0.0)
@variable(m, y, start = 0.0)
@NLobjective(m, Min, (1-x)^2 + 100(y-x^2)^2)
N = @NLconstraint(m, x^2 + y == 10)
solve(m)
```

I know that I can query the dual value associated with the constraint using:

```
getdual(N)
```

But, I would like to do something like

```
julia> getvalue(N)
ERROR: MethodError: no method matching getvalue(::JuMP.ConstraintRef{JuMP.Model,JuMP.GenericRangeConstraint{JuMP.NonlinearExprData}})
Closest candidates are:
getvalue(::JuMP.NonlinearExpression) at /home/febbo/.julia/v0.5/JuMP/src/nlp.jl:1323
getvalue(::JuMP.NonlinearParameter) at /home/febbo/.julia/v0.5/JuMP/src/nlp.jl:41
getvalue(::JuMP.GenericQuadExpr{Float64,JuMP.Variable}) at /home/febbo/.julia/v0.5/JuMP/src/quadexpr.jl:92
...
```

So, I want to look at the values of the constraints in my actual problem, not the dual problem.

Is there a way to do this without turning my constraints into @NLexpressions?

You can use `eval_g` via the derivative evaluation interface. However, JuMP currently normalizes most constraints to have zero on the right-hand side, so the “value” of the constraint will not correspond to the left-hand side that you wrote down.
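As a plain-Julia sketch of what that normalization means (this illustrates the arithmetic only, not JuMP internals): a constraint such as `x^2 + y == 10` is stored with the constant moved over, so the reported value at a point is the residual `x^2 + y - 10` rather than `x^2 + y`:

```
# Sketch (not JuMP internals): the effect of zero-RHS normalization on
# the reported value of the constraint  x^2 + y == 10.
x, y = 3.0, 1.0

lhs_as_written = x^2 + y        # the expression you wrote down
residual       = x^2 + y - 10   # what the normalized form evaluates to

println(lhs_as_written)  # 10.0
println(residual)        # 0.0
```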

@miles.lubin I am still trying to figure this out.

Let's say that I have this model:

```
m = Model()
@variable(m, x)
@variable(m, y)
con1 = @constraint(m, x <= 2)
con2 = @constraint(m, 2 <= y <= 4)
@NLobjective(m, Min, sin(x) + sin(y))
```

Then I would like to evaluate the constraints at:

```
values = zeros(2)
values[linearindex(x)] = 1.0
values[linearindex(y)] = 5.0
```

Then I do some setup:

```
d = JuMP.NLPEvaluator(m)
MathProgBase.initialize(d, [:Grad])
g = zeros(2);
MathProgBase.eval_g(d, g, values)
```

I query:

```
julia> g[linearindex(x)]
1.0
julia> g[linearindex(y)]
5.0
```

I know that you said that JuMP normalizes the constraints so that the right-hand side is zero, so for

```
julia> g[linearindex(x)]
```

I might expect an output like:

```
1 - 2 = -1
```

and for

```
julia> g[linearindex(y)]
```

I might expect an output like:

```
[5-4;
2-5]
```

That is, for the range constraint, the residuals with respect to the upper and lower bounds.

Can you please help me understand this?

Thanks!

The `g` vector is indexed linearly by constraints, not by variables. Linear and nonlinear constraints are handled differently with respect to their right-hand sides. (Furthermore, although it's not relevant to this particular example, the ordering between linear and nonlinear constraints is described in the documentation.)
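To make the indexing concrete, here is a plain-Julia sketch using the model from the post above at `x = 1.0`, `y = 5.0` (the `g`, `lb`, and `ub` values are what `eval_g` and `constraintbounds` would be expected to report; here `g[1]` happens to equal `x` and `g[2]` to equal `y` only because each constraint involves a single variable):

```
# Plain-Julia sketch: g has one entry per constraint, in the order the
# constraints were added to the model, not one entry per variable.
x, y = 1.0, 5.0

# con1:  x <= 2       -> its reported value is x
# con2:  2 <= y <= 4  -> its reported value is y
g  = [x, y]            # g[1] corresponds to con1, g[2] to con2

lb = [-Inf, 2.0]       # lower bounds on the two constraints
ub = [ 2.0, 4.0]       # upper bounds on the two constraints

feasible = all(lb .<= g .<= ub)
println(feasible)      # false: g[2] = 5.0 exceeds its upper bound 4.0
```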

@miles.lubin thank you for your help. For completeness, I will post the solution I obtained.

This functionality can be used to check the feasibility of an `x` vector w.r.t. the actual constraints (@miles.lubin, if there is a better way to do this, please comment).

```
using JuMP, MathProgBase

m = Model()
@variable(m, x)
@variable(m, y)
con1 = @constraint(m, 2*x + 1 <= 4)
con2 = @NLconstraint(m, y^2 <= 4)
@NLobjective(m, Min, sin(x) + sin(y))

# Point at which to evaluate the constraints
values = zeros(2)
values[linearindex(x)] = 1.0
values[linearindex(y)] = 5.0

d = JuMP.NLPEvaluator(m)
MathProgBase.initialize(d, [:Grad])
g = zeros(MathProgBase.numconstr(d.m))
MathProgBase.eval_g(d, g, values)

# Feasible iff every constraint value lies within its bounds
b = JuMP.constraintbounds(m)
pass = all(b[1] .<= g) && all(g .<= b[2])
```

The JuMP usage seems fine to me. Typically you'd check constraint satisfaction within some numerical tolerance.
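A tolerance-based version of the final check might look like the sketch below; the `g`, `lb`, and `ub` values are hypothetical placeholders, not output from the model above:

```
# Sketch: feasibility check with a numerical tolerance instead of a
# strict bound comparison. All values here are hypothetical.
tol = 1e-8
g  = [2.0, 21.0]          # hypothetical eval_g output
lb = [-Inf, -Inf]         # hypothetical lower bounds
ub = [3.0, 0.0]           # hypothetical upper bounds

pass = all(g .>= lb .- tol) && all(g .<= ub .+ tol)
println(pass)             # false: 21.0 exceeds 0.0 by more than tol
```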