 # Unusual JuMP and NLopt problem

This is a rather unusual problem for JuMP, as I am using it to find the roots of a function. I have tried the Optim and Roots packages, but I find them frequently failing, most likely due to the way I use them! IntervalRootFinding always works but takes a long time, maybe because the search is global.

In JuMP, Ipopt and NLopt (with the SLSQP algorithm) both work for this problem. The runtime with the Ipopt solver is 10.164 ms and with NLopt it is 3.248 ms, but I believe there is scope for reducing it further.

I have some questions, sorry for the long list; I have not been able to find the answers myself.

1. I have unsuccessfully attempted to define the problem in NLopt directly, using the example given here, as I think the JuMP overhead may be material. I am finding the documentation hard to follow. Would someone be able to suggest “corrections” to my code?
2. I need to solve the problem repeatedly, changing one of the inputs `c` in the MWE, and I want to avoid the overhead of rebuilding the problem each time. The JuMP documentation suggests that the `@NLparameter` macro can be used when the parameter is a single value, whereas `c` in the MWE is a vector of values. Is there a way of using this macro in the MWE so that it works for a vector of parameters? I will be running it on thousands of different values of `c`, so every millisecond counts!
3. Would using MOI directly, or JuMP's direct mode, impact the runtime materially? If so, is there any documentation on how to register nonlinear functions/constraints in MOI as we do in JuMP?
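For context on question 2, this is the indexed form of `@NLparameter` that I believe the manual describes and that I have been experimenting with (an untested sketch; `n` stands in for `length(c)` from the MWE):

```julia
using JuMP, Ipopt

model = Model(Ipopt.Optimizer)
n = 1201
@variable(model, -100.0 <= x <= 100.0)
# Indexed @NLparameter: one parameter per element of c
@NLparameter(model, c[i = 1:n] == 10000.0)
```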
```julia
using JuMP, NLopt, Ipopt

function myfunc_Ipopt(c, t)
    f(x) = sum(c[i] / (1 + x)^t[i] for i in 1:length(c))
    # First and second derivatives
    df(x) = sum(-t[i] * c[i] / (1 + x)^(t[i] + 1) for i in 1:length(c))
    ddf(x) = sum((t[i]^2 + t[i]) * c[i] / (1 + x)^(t[i] + 2) for i in 1:length(c))
    model = Model(Ipopt.Optimizer)
    set_silent(model)
    @variable(model, -100.0 <= x <= 100.0)
    JuMP.register(model, :f, 1, f, df, ddf)
    @NLobjective(model, Min, f(x))
    @NLconstraint(model, f(x) == 0.0)
    JuMP.optimize!(model)
    rate = value(x)
    return rate
end

function myfunc_nlopt(c, t)
    f(x) = sum(c[i] / (1 + x)^t[i] for i in 1:length(c))
    df(x) = sum(-t[i] * c[i] / (1 + x)^(t[i] + 1) for i in 1:length(c))
    ddf(x) = sum((t[i]^2 + t[i]) * c[i] / (1 + x)^(t[i] + 2) for i in 1:length(c))
    model = Model(NLopt.Optimizer)
    set_optimizer_attribute(model, "algorithm", :LD_SLSQP)
    @variable(model, -100.0 <= x <= 100.0)
    JuMP.register(model, :f, 1, f, df, ddf)
    @NLobjective(model, Min, f(x))
    @NLconstraint(model, f(x) == 0.0)
    JuMP.optimize!(model)
    rate = value(x)
    return rate
end
```

Running the model:

```julia
using BenchmarkTools

c = vcat(-1.2e6, ones(1200) * 10000)   # the only data that changes between runs
t = collect(0:length(c) - 1) / 12
@btime myfunc_Ipopt(c, t)    # 10.164 ms
@btime myfunc_nlopt(c, t)    # 3.248 ms
```

My attempt at defining the model in NLopt directly:

```julia
function nlopt_direct(x::Vector, grad::Vector, c, t)
    if length(grad) > 0
        # NLopt expects the gradient to be written into `grad` in place
        grad[1] = sum(-t[i] * c[i] / (1 + x[1])^(t[i] + 1) for i in 1:length(c))
    end
    return sum(c[i] / (1 + x[1])^t[i] for i in 1:length(c))
end

# The constraint has the same (x, grad) signature as the objective,
# and here it is the same function f(x)
function myconstraint(x::Vector, grad::Vector, c, t)
    return nlopt_direct(x, grad, c, t)
end

opt = Opt(:LD_SLSQP, 1)                   # one decision variable
opt.lower_bounds = [-100.0]
opt.upper_bounds = [100.0]
opt.xtol_abs = 1e-9
opt.min_objective = (x, grad) -> nlopt_direct(x, grad, c, t)
equality_constraint!(opt, (x, grad) -> myconstraint(x, grad, c, t), 1e-12)
(minf, minx, ret) = NLopt.optimize(opt, [0.05])   # initial guess for x
# not sure what I am doing!
```

1. Parameters will only work if you can write the expression algebraically. They won’t be helpful in a user-defined function.

2. No to all. Just use `Model(Ipopt.Optimizer)`.

As for other advice, your formulation is a little unusual because your objective will always be `0`. You might be better with `@NLobjective(model, Min, f(x)^2)` and without the `== 0` constraint.

Thank you. Running the example given in the NLopt.jl documentation made me curious, as I noticed that it runs considerably faster when NLopt is used directly compared to when using JuMP.

I know it is unusual, but it works, as I want to find the value of x for which f(x) is zero. I tried `f(x)^2` without the constraint before posting the question; it gives the same answer (to 13 decimal places), but the runtime is always longer (14.830 ms in the MWE).

Do you reckon parameters would still not be a good fit in this case if I reformulate it with `@NLexpression`? The nonlinear expression(s) would be effectively equivalent to the user-defined functions (for the objective and derivatives), so I am not sure whether the impact of using parameters would be the same as rebuilding the model.

A bit off topic, but I noticed that the JuMP documentation mentions that for Ipopt one can improve performance by installing advanced sparse linear algebra packages, and it refers one to the installation guide. I couldn't see anything on sparse linear algebra packages in the guide.

> I reformulate it with `@NLexpression`?

If you can write out the objective algebraically, I encourage you to do so.
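For this problem the expression is algebraic, so a rough (untested) sketch of that approach, combining the indexed form of `@NLparameter` with `set_value` so the model is built once and only re-solved for each new `c`, might look like:

```julia
using JuMP, Ipopt

# Sketch only: build the model once with vector-valued parameters,
# then update them in place and re-optimize for each new c.
function build_model(t)
    n = length(t)
    model = Model(Ipopt.Optimizer)
    set_silent(model)
    @variable(model, -100.0 <= x <= 100.0)
    @NLparameter(model, c[i = 1:n] == 0.0)   # vector of parameters
    @NLconstraint(model, sum(c[i] / (1 + x)^t[i] for i in 1:n) == 0.0)
    return model, c
end

c0 = vcat(-1.2e6, ones(1200) * 10000)
t = collect(0:length(c0) - 1) / 12
model, c = build_model(t)
set_value.(c, c0)      # update parameter values in place
optimize!(model)
rate = value(model[:x])
```

This avoids the per-call model construction entirely; only the solve is repeated for each new `c`.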

You could also use the Ipopt.jl C API directly. Here’s an example: https://github.com/jump-dev/Ipopt.jl/blob/master/example/hs071.jl

> I couldn't see anything on sparse linear algebra packages in the guide.

I’m improving the documentation on this front: https://github.com/jump-dev/Ipopt.jl/pull/236


Thanks