This is a rather unusual problem for JuMP, as I am using it to find roots of a function. I have tried the Optim and Roots packages, but I find them frequently failing, most likely due to the way I use them! IntervalRootFinding always works but takes a long time, maybe because the search is global.

In JuMP, both Ipopt and NLopt (SLSQP algorithm) work for this problem. The runtime with the Ipopt solver is 10.164 ms and with NLopt it is 3.248 ms, but I believe there is scope for reducing it further.

I have some questions, sorry for the long list, but I have not been able to find the answers myself:

- I have unsuccessfully attempted to define the problem in NLopt directly, using the example given here, as I think the JuMP overhead may be material. I am finding the documentation hard to follow. Would someone be able to suggest "corrections" to my code?
- I need to solve the problem repeatedly, changing only the input `c` in the MWE, and I want to avoid the overhead of rebuilding the problem. The JuMP documentation suggests that the `@NLparameter` macro can be used when the parameter is a single value, whereas `c` in the MWE is a vector of values. Is there a way of using this macro in the MWE so that it works for a vector of parameters? I will be running it on thousands of different values of `c`, so every millisecond counts!
- Would using MOI directly, or JuMP's direct mode, impact the run-time materially? If so, is there any documentation on how to register nonlinear functions/constraints in MOI as we do for JuMP?
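For what it's worth, my reading of the JuMP docs is that `@NLparameter` does accept indexed (vector-valued) parameters via the container syntax, and direct mode is just a different model constructor. A sketch combining both, posed as a feasibility problem (no objective) so no function registration is needed — the container syntax and the `set_value.` broadcast are my assumptions from the docs, untested on this MWE:

```julia
using JuMP, Ipopt

c = vcat(-1.2e6, ones(1200)*10000)
t = collect(0:length(c)-1)/12
n = length(c)

model = direct_model(Ipopt.Optimizer())   # direct mode: no caching layer
set_silent(model)
@variable(model, -100.0 <= x <= 100.0)
@NLparameter(model, p[i = 1:n] == c[i])   # vector-valued parameter
# Feasibility problem: no objective, the constraint pins down the root
@NLconstraint(model, sum(p[i]/(1 + x)^t[i] for i in 1:n) == 0.0)
optimize!(model)

# Re-solve with new cash flows without rebuilding the model:
set_value.(p, vcat(-1.1e6, ones(1200)*10000))
optimize!(model)
rate = value(x)
```

If this works, only `set_value.` and `optimize!` would be paid per run, which should address the rebuild overhead.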

```
using JuMP, NLopt, Ipopt

function myfunc_Ipopt(c, t)
    f(x) = sum(c[i]/(1+x)^t[i] for i in 1:length(c))
    # First and second derivatives, supplied so JuMP does not need AD
    df(x) = sum(-t[i]*c[i]/(1+x)^(t[i]+1) for i in 1:length(c))
    ddf(x) = sum((t[i]^2 + t[i])*c[i]/(1+x)^(t[i]+2) for i in 1:length(c))
    model = Model(Ipopt.Optimizer)
    set_optimizer_attribute(model, MOI.Silent(), true)
    @variable(model, -100.0 <= x <= 100.0)
    JuMP.register(model, :f, 1, f, df, ddf)
    @NLobjective(model, Min, f(x))
    @NLconstraint(model, f(x) == 0.0)
    JuMP.optimize!(model)
    return value(x)
end

function myfunc_nlopt(c, t)
    f(x) = sum(c[i]/(1+x)^t[i] for i in 1:length(c))
    df(x) = sum(-t[i]*c[i]/(1+x)^(t[i]+1) for i in 1:length(c))
    ddf(x) = sum((t[i]^2 + t[i])*c[i]/(1+x)^(t[i]+2) for i in 1:length(c))
    model = Model(NLopt.Optimizer)
    set_optimizer_attribute(model, "algorithm", :LD_SLSQP)
    @variable(model, -100.0 <= x <= 100.0)
    JuMP.register(model, :f, 1, f, df, ddf)
    @NLobjective(model, Min, f(x))
    @NLconstraint(model, f(x) == 0.0)
    JuMP.optimize!(model)
    return value(x)
end
```

Running the model:

```
using BenchmarkTools
c = vcat(-1.2e6, ones(1200)*10000)  # the only data that changes between runs
t = collect(0:length(c)-1)/12
@btime myfunc_Ipopt($c, $t)  # 10.164 ms
@btime myfunc_nlopt($c, $t)  # 3.248 ms
```

My attempt at defining the model in NLopt directly:

```
function nlopt_objective(x::Vector, grad::Vector, c, t)
    # NLopt's grad has one entry per decision variable (here just x[1]);
    # the second derivative is not passed through grad.
    if length(grad) > 0
        grad[1] = sum(-t[i]*c[i]/(1 + x[1])^(t[i]+1) for i in 1:length(c))
    end
    return sum(c[i]/(1 + x[1])^t[i] for i in 1:length(c))
end

function myconstraint(x::Vector, grad::Vector, c, t)
    if length(grad) > 0
        grad[1] = sum(-t[i]*c[i]/(1 + x[1])^(t[i]+1) for i in 1:length(c))
    end
    return sum(c[i]/(1 + x[1])^t[i] for i in 1:length(c))
end

opt = Opt(:LD_SLSQP, 1)            # 1 = number of decision variables
opt.lower_bounds = [-100.0]
opt.upper_bounds = [100.0]
opt.xtol_abs = 1e-9
opt.min_objective = (x, grad) -> nlopt_objective(x, grad, c, t)
equality_constraint!(opt, (x, grad) -> myconstraint(x, grad, c, t), 1e-9)
(minf, minx, ret) = NLopt.optimize(opt, [0.1])   # a starting point, not an interval
```
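Since the analytic derivative is already available, the same root can also be found with a hand-rolled Newton iteration in plain Julia, with no model-building overhead at all. A minimal sketch — the starting point, tolerance, and iteration cap below are my assumptions, and there is no safeguarding against divergence:

```julia
# Newton's method on f(x) = sum(c[i]/(1+x)^t[i]); a sketch, not a robust solver.
# x0, tol and maxiter are assumed values, not taken from the MWE above.
function irr_newton(c, t; x0 = 0.1, tol = 1e-12, maxiter = 100)
    x = x0
    for _ in 1:maxiter
        f  = sum(c[i]/(1 + x)^t[i] for i in 1:length(c))              # function value
        df = sum(-t[i]*c[i]/(1 + x)^(t[i] + 1) for i in 1:length(c))  # derivative
        step = f/df
        x -= step
        abs(step) < tol && return x   # converged
    end
    return x
end

c = vcat(-1.2e6, ones(1200)*10000)
t = collect(0:length(c)-1)/12
rate = irr_newton(c, t)
```

Whether this is robust enough depends on how well-behaved `f` is for your data; with a good starting point each solve is just a handful of vectorized sums.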