Ipopt in Julia without JuMP and automatic Hessian calculation?

Hi all,
I know that it is possible to use Ipopt in Julia without using JuMP. For this the user has to define eval_f (objective function), eval_g (nonlinear constraints), eval_grad_f (gradient of the objective function) and eval_jac_g (Jacobian of the nonlinear constraints).
Well, defining eval_f and eval_g is not a problem. I also use “ForwardDiff.gradient” to evaluate eval_grad_f, which has worked fine. Is there a way to define eval_jac_g (of course without doing it analytically) with syntax and arguments that fit Ipopt? Let us assume we have two nonlinear constraints.
For example:
function eval_jac_g(x, mode, rows, cols, values)
    if mode == :Structure
        # Constraint structure: row/column indices of the nonzero entries
        rows[xx] = 1; cols[xx] = 1
        # and so on…
    else
        # Constraint values at x
        values[xx] = x[2]*x[3]*x[4] # just an example
        # and so on…
    end
end

Is there a way to use “ForwardDiff.jacobian” (and, for eval_h, “ForwardDiff.hessian”) to fill in rows, cols and values?
Thanks.

Just did this yesterday: NLPModelsIpopt.jl
You should be able to use NLPModels.jl to define a problem that uses ForwardDiff and pass it to Ipopt.jl through NLPModelsIpopt.jl:

nlp = ADNLPModel(x -> (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2, [-1.2; 1.0])
x, f, c, λ, zL, zU, status = ipopt(nlp)

If you still prefer to use Ipopt.jl directly, the code there should help you.
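
If you do go the direct route, one way to fill eval_jac_g automatically is to treat the Jacobian as dense and let ForwardDiff.jacobian compute every entry. A rough sketch (cons is a hypothetical function returning the vector of constraint values, with m constraints and n variables; since every entry is declared nonzero, the nele_jac argument of createProblem would be m * n):

using ForwardDiff

# Hypothetical constraint function returning the vector of constraint values
cons(x) = [x[1]*x[2]*x[3]*x[4]; x[1]^2 + x[2]^2 + x[3]^2 + x[4]^2]

function eval_jac_g(x, mode, rows, cols, values)
    n = length(x)
    m = 2                                # two constraints in this example
    if mode == :Structure
        # declare all m*n entries of the Jacobian as structurally nonzero (dense)
        k = 1
        for i in 1:m, j in 1:n
            rows[k] = i; cols[k] = j
            k += 1
        end
    else
        J = ForwardDiff.jacobian(cons, x)   # m-by-n Jacobian at x
        k = 1
        for i in 1:m, j in 1:n
            values[k] = J[i, j]
            k += 1
        end
    end
end

This is wasteful for genuinely sparse problems, but for small ones it is the simplest way to avoid writing the Jacobian by hand. ForwardDiff.hessian can be used analogously inside eval_h (applied to the objective and the constraints, weighted by obj_factor and lambda), or Ipopt can be told to approximate the Hessian instead.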


Hi abelsiqueira,
Your package “NLPModelsIpopt.jl” is really useful. I think it does what MATLAB’s “fmincon” does, i.e. it provides a framework where the user only needs to provide the objective function and the constraints; the rest is done automatically behind the scenes.
I tested your module, and I have a few comments.

  1. I tested against using Ipopt.jl directly, as in the example at https://github.com/JuliaOpt/Ipopt.jl/blob/master/example/hs071.jl
    When I run this example, the execution time comes out to be around 0.243 seconds.
  2. Then I ran the same example, but this time using your NLPModelsIpopt module. I only supply the objective function and the constraints (two constraints in this case). The setup is the same, i.e. the initial value, bounds, PC etc. are the same. The execution time comes out to be around 0.668 seconds, which is roughly 3 times slower.

I am just wondering whether this slowness is entirely due to the automatic differentiation calls, or whether there is something I have not got right in my code. Below is the code I use.

#push!(LOAD_PATH, "C:/Users/Roshan/Documents/julia-codes/Juno-files")
using NLPModelsIpopt, NLPModels, Ipopt

function myobjfun(x)
    return x[1] * x[4] * (x[1] + x[2] + x[3]) + x[3];
end

#unconstrained problem
#nlp = ADNLPModel(x -> myobjfun(x), [-1.2; 1.0]);
#@time x = ipopt(nlp);

function myconsfun(x)
    g1 = x[1] * x[2] * x[3] * x[4];
    g2 = x[1]^2 + x[2]^2 + x[3]^2 + x[4]^2;
    return [g1, g2];
end

x_L = [1.0, 1.0, 1.0, 1.0];
x_U = [5.0, 5.0, 5.0, 5.0];

g_L = [25.0, 40.0];
g_U = [2.0e19, 40.0];

x0 = [1.0, 5.0, 5.0, 1.0];
#constrained problem
nlp = ADNLPModel(x -> myobjfun(x), x0, lvar = x_L, uvar = x_U,
                 c = x -> myconsfun(x), lcon = g_L, ucon = g_U);
@time x = ipopt(nlp);
println(x)

Thanks again.

Thanks, Roshan, I’m glad you liked it. That’s a new addition to JuliaSmoothOptimizers, so there may be a few bugs. In fact, we just fixed a bug that may be part of the slowness; could you update and try again?

Also, some slowdown is expected when using automatic differentiation instead of providing the derivatives directly. Currently we use ForwardDiff to compute the derivatives, but perhaps some other differentiation tool would be more adequate.
ADNLPModel is one of a few AbstractNLPModels (see the docs), and creating a new model type shouldn’t be too hard. Implementing different models with other differentiation tools is on the list, let me know if you want to help tackle it.

One last thing: you don’t need to write x -> foo(x); you can just provide foo:

nlp = ADNLPModel(myobjfun, x0, lvar=x_L, uvar=x_U, c=myconsfun, lcon=g_L, ucon=g_U)

Hi,
I updated it, but it did not really decrease the execution time (for the same nonlinear problem as stated above).
I wanted to play around with your package to see what might help. I set an “addOption” so that the Hessian is approximated by L-BFGS, and avoided the use of “eval_h” while creating the problem:
problem = createProblem(n, nlp.meta.lvar, nlp.meta.uvar,
                        m, nlp.meta.lcon, nlp.meta.ucon,
                        nlp.meta.nnzj, nlp.meta.nnzh,
                        eval_f, eval_g, eval_grad_f,
                        eval_jac_g);  # eval_h omitted
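
For reference, the option itself is set on the problem object before solving; roughly like this (a sketch assuming Ipopt.jl's addOption/solveProblem interface that goes with createProblem above):

addOption(problem, "hessian_approximation", "limited-memory")  # L-BFGS instead of an exact Hessian
problem.x = copy(nlp.meta.x0)                                  # starting point from the model
status = solveProblem(problem)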

I then ran the same problem. Now the execution time comes out to be around 0.474 seconds (faster than before).
But I guess this is very much problem-dependent. I have been using your package to implement a nonlinear model predictive controller for a MIMO system. There I get a much faster execution time without the Hessian approximation, unlike for the example above. I can solve the nonlinear MPC in less than 0.1 seconds, which pretty much fulfills my need :slight_smile:
In any case, your package is very useful for solving problems such as those in optimal control, since for such problem types calculating the gradient, Jacobian and Hessian analytically is next to impossible (well, it depends on the problem structure, but in most cases it is very difficult).

To speed up gradient computation, you may have some luck with one of the reverse-mode AD packages, e.g. https://github.com/JuliaDiff/ReverseDiff.jl, or one of the alternatives mentioned in its README.
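
For example, a drop-in eval_grad_f built on ReverseDiff might look like this (just a sketch of the call; myobjfun is the objective from the code above):

using ReverseDiff

# Fill Ipopt's gradient vector in place using reverse-mode AD
function eval_grad_f(x, grad_f)
    grad_f .= ReverseDiff.gradient(myobjfun, x)
end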