JuMP interface access to automatic differentiation (ReverseDiffSparse)

I’m trying to use ReverseDiffSparse to compute the gradient and Hessian of a scalar-valued function f. I only need to evaluate the gradient and Hessian (i.e., I don’t need to set up a whole model). I saw from the JuMP documentation that I can access automatic differentiation through JuMP via MathProgBase, as in the following example.

using JuMP, MathProgBase

m = Model()
@variable(m, x)
@variable(m, y)

@NLobjective(m, Min, sin(x) + sin(y))
values = zeros(2)
values[linearindex(x)] = 2.0
values[linearindex(y)] = 3.0

d = JuMP.NLPEvaluator(m)              # evaluator exposing the model's AD machinery
MathProgBase.initialize(d, [:Grad])   # request gradient evaluations
objval = MathProgBase.eval_f(d, values) # == sin(2.0) + sin(3.0)

∇f = zeros(2)
MathProgBase.eval_grad_f(d, ∇f, values)
# ∇f[linearindex(x)] == cos(2.0)
# ∇f[linearindex(y)] == cos(3.0)

Three questions:

  1. Is this using ReverseDiffSparse?

  2. Is there a better way of returning the gradient and Hessian than setting up a model and using MathProgBase.eval_grad_f?

  3. (not necessary if (2) is solved) If I define my function f(x, y) as returning sin(x) + cos(y), how can I use it in place of the expression in @NLobjective(m, Min, sin(x) + sin(y))? I tried @NLobjective(m, Min, f(x,y)), but I get the following error:

Unrecognized function "f" used in nonlinear expression.

Stacktrace:
 [1] error(::String) at ./error.jl:21
 [2] include_string(::String, ::String) at ./loading.jl:515

Thanks!

1. Yes, JuMP always uses ReverseDiffSparse.

2. No, this is the best and only documented way to do so.

3. We call this a user-defined function; it needs to be registered with the model first. Note, however, that Hessians are not available when any multivariate user-defined function is present.
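For example, a minimal sketch (assuming the JuMP 0.6-era JuMP.register API) of registering f so it can be called inside @NLobjective:

using JuMP, MathProgBase

f(x, y) = sin(x) + cos(y)

m = Model()
@variable(m, x)
@variable(m, y)

# Register the two-argument function under the symbol :f; autodiff=true supplies derivatives.
JuMP.register(m, :f, 2, f, autodiff=true)
@NLobjective(m, Min, f(x, y))

d = JuMP.NLPEvaluator(m)
MathProgBase.initialize(d, [:Grad])   # :Hess is unavailable with multivariate user-defined functions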

Thanks! As a follow-up, I’ve tried to evaluate the Hessian in the original example, but without success. My additional code is:

MathProgBase.initialize(d, [:Grad,:Hess])
H = zeros(2);  # based on MathProgBase.hesslag_structure(d) = ([1,2],[1,2])
MathProgBase.eval_hesslag(d, H, values, 0, [0])

but I get the following error (for various attempts at matching the types of H, x, σ, μ to those specified in the candidate methods):

MethodError: no method matching eval_hesslag(::JuMP.NLPEvaluator, ::Array{Float64,1}, ::Array{Float64,1}, ::Int64, ::Int64)
Closest candidates are:
  eval_hesslag(::MathProgBase.SolverInterface.LPQPEvaluator, ::Any, ::Any, ::Any, ::Any) at /Users/MYUSER/.julia/v0.6/MathProgBase/src/SolverInterface/nonlinear_to_lpqp.jl:272
  eval_hesslag(::JuMP.NLPEvaluator, ::Array{Float64,1}, ::Array{Float64,1}, ::Float64, ::Array{Float64,1}) at /Users/MYUSER/.julia/v0.6/JuMP/src/nlp.jl:748

Stacktrace:
 [1] include_string(::String, ::String) at ./loading.jl:515

Is there a simple fix? Finally, if I’m able to evaluate the Hessian, is there a way of extracting the intermediate gradient too?

Yes, call the method with arguments that match its (strict) type signature: σ must be a Float64 (it scales the objective’s contribution, so pass 1.0 to get the objective Hessian) and μ must be a Vector{Float64} of constraint multipliers (empty here, since the model has no constraints):

MathProgBase.eval_hesslag(d, H, values, 1.0, Float64[])
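For reference, here is a minimal sketch (continuing the snippet above; the names rows, cols, and Hfull are hypothetical) that expands the sparse entries in H into a full 2×2 matrix using hesslag_structure:

rows, cols = MathProgBase.hesslag_structure(d)   # ([1, 2], [1, 2]) for this model
Hfull = zeros(2, 2)
for k in 1:length(rows)
    Hfull[rows[k], cols[k]] += H[k]              # only one triangle is returned; duplicate entries sum
    if rows[k] != cols[k]
        Hfull[cols[k], rows[k]] += H[k]          # mirror the off-diagonal entries
    end
end
# For sin(x) + sin(y) at (2.0, 3.0) this gives diag(-sin(2.0), -sin(3.0)).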

Use eval_grad_f. If you take a peek at the implementation, you’ll see that some computation is avoided if you’ve already evaluated the Hessian at the same point.
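For instance (continuing from above):

∇f = zeros(2)
MathProgBase.eval_grad_f(d, ∇f, values)   # same point as the Hessian call, so some work is reused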

Quick follow-up: I’ve been able to query JuMP for gradients / Hessians in unconstrained problems, but is there a standard way of doing this for constrained problems?

I tried the naive approach of adding a constraint to an NL model (for example, adding @constraint(m, x1 + x2 == 10.0) after my objective @NLobjective(m, Min, x1^2 + x2^2 + x1*x2^2)), but querying for the Hessian with MathProgBase.eval_hesslag(d, v, values, 1., Float64[1., 1.]) still gave me the Hessian of the objective (and I’d prefer not to augment the objective with a penalty).

eval_hesslag is the standard way to do this. You can drop the contribution from the objective by setting the σ argument to zero. You won’t see much going on if you only have linear constraints, however, because they never appear in the Hessian.
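As an illustration, here is a minimal sketch (a hypothetical model, same MathProgBase calls as above) in which a nonlinear constraint does contribute:

using JuMP, MathProgBase

m = Model()
@variable(m, x1)
@variable(m, x2)
@NLobjective(m, Min, x1^2 + x2^2 + x1*x2^2)
@NLconstraint(m, x1*x2 == 10.0)             # nonlinear, so it appears in the Lagrangian Hessian

d = JuMP.NLPEvaluator(m)
MathProgBase.initialize(d, [:Grad, :Hess])

values = [2.0, 3.0]
rows, cols = MathProgBase.hesslag_structure(d)
H = zeros(length(rows))

# sigma = 0.0 drops the objective's contribution and mu = [1.0] weights the single constraint,
# so H holds only the constraint's Hessian entries (here the cross term from x1*x2).
MathProgBase.eval_hesslag(d, H, values, 0.0, [1.0])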
