ODE Parameter Estimation Using NLopt.jl

Good day.

I'm new to Julia and, drawing on different sources, implemented a parameter-estimation procedure for a predator-prey ODE system (two candidate models):

using ProgressBars, Distributions, DelimitedFiles, DifferentialEquations, DiffEqParamEstim, Optimization, NLopt

data = transpose(readdlm("LV2_30_Julia.csv", ','))

function myoptimization(p, opt)
    g_minf = Inf
    g_minx = Float64[]
    loops = 10

    # Multi-start phase: run the optimizer from several random initial
    # points and keep the best result.
    for j in 1:loops
        (minf, minx, ret) = NLopt.optimize(opt, rand(Uniform(0, 1), length(p)))
        if minf < g_minf
            g_minf = minf
            g_minx = minx
        end
    end

    # Refinement phase: restart near the current best point until the
    # improvement drops below tolerance or 50 extra loops have run.
    next_opt = true
    add_loops = 0
    while next_opt && add_loops < 50
        init_p = [rand(Normal(g_minx[i], sqrt(g_minx[i]) / 3)) for i in 1:length(g_minx)]
        init_p[init_p .<= 0] .= 0.000001  # keep initial guesses strictly positive

        (minf, minx, ret) = NLopt.optimize(opt, init_p)
        if minf < g_minf
            if g_minf - minf < 1e-10
                next_opt = false
                println("Exit on tolerance")
            end
            g_minf = minf
            g_minx = minx
        end
        add_loops += 1
    end
    return g_minf, g_minx
end

u0 = [1.0;2.0]
tspan = (0.0,30.0)
t = collect(range(0,stop=30,length=31))

function f1(du, u, p, t)
    du[1] = p[1] * u[1] - p[2] * u[1] * u[2]
    du[2] = -p[3] * u[2] + p[4] * u[1] * u[2]
end

p1 = [0.5,0.5,0.5,0.5]
lb1 = [0,0,0,0]
ub1 = [1000,1000,1000,1000]
prob1 = ODEProblem(f1,u0,tspan,p1)
obj1 = build_loss_objective(prob1, AutoVern9(Rodas5()),L2Loss(t,data),Optimization.AutoFiniteDiff())
opt1 = Opt(:LN_BOBYQA, 4)
min_objective!(opt1, obj1)
maxeval!(opt1, 100000)
lower_bounds!(opt1,lb1)
upper_bounds!(opt1,ub1)

function f2(du, u, p, t)
    du[1] = p[1] * u[1] * (1 - u[1] / p[5]) - p[2] * u[2] * u[1]
    du[2] = -p[3] * u[2] + p[4] * u[1] * u[2]
end

p2 = [0.5,0.5,0.5,0.5,0.5]
lb2 = [0,0,0,0,0]
ub2 = [1000,1000,1000,1000,1000]
prob2 = ODEProblem(f2,u0,tspan,p2)
obj2 = build_loss_objective(prob2, AutoVern9(Rodas5()),L2Loss(t,data),Optimization.AutoFiniteDiff())
opt2 = Opt(:LN_BOBYQA, 5)
min_objective!(opt2, obj2)
maxeval!(opt2, 100000)
lower_bounds!(opt2,lb2)
upper_bounds!(opt2,ub2)
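
For reference, the two candidate models encoded by f1 and f2 above are the classic Lotka-Volterra system and a variant with logistic prey growth (where p5 plays the role of a prey carrying capacity):

$$\begin{aligned}
\text{Model 1:}\quad \dot u_1 &= p_1 u_1 - p_2 u_1 u_2, & \dot u_2 &= -p_3 u_2 + p_4 u_1 u_2,\\
\text{Model 2:}\quad \dot u_1 &= p_1 u_1 (1 - u_1/p_5) - p_2 u_1 u_2, & \dot u_2 &= -p_3 u_2 + p_4 u_1 u_2.
\end{aligned}$$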

cur_exp = 10

all_best_w_m1 = Array{Float64}(undef, cur_exp,4)
all_best_w_m2 = Array{Float64}(undef, cur_exp,5)

all_best_res_m1 = Array{Float64}(undef, cur_exp,1)
all_best_res_m2 = Array{Float64}(undef, cur_exp,1)

for i in ProgressBar(1:cur_exp)
    g_minf, g_minx = myoptimization(p1, opt1)
    all_best_w_m1[i, :] = g_minx
    all_best_res_m1[i] = g_minf

    g_minf, g_minx = myoptimization(p2, opt2)
    all_best_w_m2[i, :] = g_minx
    all_best_res_m2[i] = g_minf
end

The questions are the following:

  1. Using LN_BOBYQA, the code above returns reasonable estimates, but it is too slow compared to a similar implementation in Python with SciPy run on the same data: 15 min versus 4 min. Is there a way to optimize something here to make it run much faster? (Times are taken from the ProgressBar output.)

  2. With LD_SLSQP as the optimization algorithm (which is what the Python implementation uses), this code finishes almost immediately but no optimization is done (it seems to return the initial random parameters), and Julia raises no error. Is there something I should add to make LD_SLSQP run?

For question 1: without knowing more about how you benchmarked it (e.g. whether that time includes compilation), it is hard to say why the Julia run is slower.
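
A quick way to separate compilation from run time is to warm the function up once and time a second call, e.g.:

myoptimization(p1, opt1)           # first call includes JIT compilation
@time myoptimization(p1, opt1)     # second call measures steady-state run time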

For question 2, I think the issue is that since you are using DiffEqParamEstim to create the objective, you get an OptimizationFunction from the Optimization.jl package, so you should use the OptimizationNLopt.jl wrapper instead of NLopt.jl directly, or else you'd need to pass the gradient to NLopt.jl, which you are not doing right now.

Take a look at the DiffEqParamEstim.jl tutorials (Parameter Estimation for Differential Equations · DiffEqParamEstim.jl) for how to use it with Optimization.jl.
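
For concreteness, here is a minimal sketch of that approach (assuming the OptimizationNLopt package is installed; obj1, p1, lb1, ub1 are from your post, with the bounds converted to Float64):

using Optimization, OptimizationNLopt

# The objective from build_loss_objective is an OptimizationFunction, so it
# can be wrapped in an OptimizationProblem; Optimization.jl then supplies the
# gradient (via the differentiation choice baked into obj1) to NLopt.
optprob = Optimization.OptimizationProblem(obj1, p1, lb = Float64.(lb1), ub = Float64.(ub1))
res = solve(optprob, NLopt.LD_SLSQP())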

My estimation code is based exactly on this page:
https://docs.sciml.ai/DiffEqParamEstim/stable/tutorials/global_optimization/
Specifically, take the last part of the code on that page:

opt = Opt(:LN_BOBYQA, 1)
min_objective!(opt, obj)
(minf,minx,ret) = NLopt.optimize(opt,[1.3])

change LN_BOBYQA to LD_SLSQP:
opt = Opt(:LD_SLSQP, 1)
min_objective!(opt, obj)
(minf,minx,ret) = NLopt.optimize(opt,[1.3])

and you will find that optimization does not work.
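
(As an aside, the ret symbol returned by NLopt.optimize reports the termination reason, which helps diagnose such silent failures:

(minf, minx, ret) = NLopt.optimize(opt, [1.3])
println(ret)   # e.g. :SUCCESS, :XTOL_REACHED, or :FORCED_STOP if the objective errored
)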
From this page (NLopt.jl · Optimization.jl) we know that LD_SLSQP is a gradient-based optimizer, one of those "which utilize the gradient information based on derivatives defined or automatic differentiation". On that page, there is an example of using LD_LBFGS with Optimization.AutoForwardDiff(). The same is done in the first link (with LN_BOBYQA):
obj = build_loss_objective(prob,Tsit5(),L2Loss(t,data),Optimization.AutoForwardDiff())

I also included a differentiation backend in my code (though Optimization.AutoFiniteDiff() rather than AutoForwardDiff()):
build_loss_objective(prob1, AutoVern9(Rodas5()), L2Loss(t,data), Optimization.AutoFiniteDiff())

> Specifically: take the last part of the code on that page, change LN_BOBYQA to LD_SLSQP, and you will find that optimization does not work.

Yes, that's because LN_BOBYQA is a derivative-free optimizer, so just setting min_objective! is sufficient; for LD_SLSQP to be able to use gradients, you'd need to tell it the gradient function. You don't need to do this if you use the OptimizationNLopt or OptimizationMOI package, so look at the example right above the last one there.
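
For reference, a minimal sketch of what passing the gradient to NLopt.jl directly involves: gradient-based NLopt algorithms call the objective as f(x, grad) and expect grad to be filled in place. The loss function and the FiniteDiff backend below are placeholders, not code from this thread:

using NLopt, FiniteDiff

loss(x) = sum(abs2, x .- 1.0)   # placeholder scalar loss

# NLopt's convention for gradient-based algorithms: fill `grad` in place
# whenever a non-empty gradient vector is passed in.
function nlopt_obj(x::Vector, grad::Vector)
    if length(grad) > 0
        FiniteDiff.finite_difference_gradient!(grad, loss, x)
    end
    return loss(x)
end

opt = Opt(:LD_SLSQP, 2)
min_objective!(opt, nlopt_obj)
(minf, minx, ret) = NLopt.optimize(opt, [0.5, 0.5])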

> On this page, there is an example of using LD_LBFGS with Optimization.AutoForwardDiff(), but I also included a differentiation backend (Optimization.AutoFiniteDiff()) in my code:
> build_loss_objective(prob1, AutoVern9(Rodas5()), L2Loss(t,data), Optimization.AutoFiniteDiff())

Passing Optimization.AutoFiniteDiff() is not sufficient by itself; you also need to use the OptimizationMOI package that is used there, because that is what constructs the gradient function.

Am I correct that you're talking about this part:
optprob = Optimization.OptimizationProblem(obj, [0.2], lb = [-1.0], ub = [5.0])
res = solve(optprob, OptimizationMOI.MOI.OptimizerWithAttributes(NLopt.Optimizer, "algorithm" => :GN_ISRES, "xtol_rel" => 1e-3, "maxeval" => 10000))

Yes, the solve call there

Trying to run this:
optprob = Optimization.OptimizationProblem(obj2, p2, lb = lb2, ub = ub2)
res = solve(optprob, OptimizationMOI.MOI.OptimizerWithAttributes(NLopt.Optimizer, "algorithm" => :GN_ISRES, "xtol_rel" => 1e-3, "maxeval" => 10000))
gives an error:
"ERROR: MathOptInterface.UnsupportedConstraint{MathOptInterface.VariableIndex, MathOptInterface.GreaterThan{Int64}}: MathOptInterface.VariableIndex-in-MathOptInterface.GreaterThan{Int64} constraint is not supported by the model."
But I don't introduce any constraints.

P.S. obj2, p2, lb2, ub2 are defined in the initial post.
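
One possible reading of that error (an assumption on my part, not confirmed in the thread): the {Int64} in the message points at the bounds. lb2 and ub2 are integer vectors, so MathOptInterface builds GreaterThan{Int64}/LessThan{Int64} bound constraints, and the NLopt MOI wrapper only supports Float64 bounds. Converting the bounds to floats might be enough:

# Hypothetical fix (untested): Float64 bounds yield GreaterThan{Float64}
# constraints, which NLopt.Optimizer supports.
optprob = Optimization.OptimizationProblem(obj2, p2, lb = Float64.(lb2), ub = Float64.(ub2))
res = solve(optprob, OptimizationMOI.MOI.OptimizerWithAttributes(NLopt.Optimizer, "algorithm" => :GN_ISRES, "xtol_rel" => 1e-3, "maxeval" => 10000))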

UPDATE: I simplified my code, making it just the same as the example here (Global Optimization via NLopt · DiffEqParamEstim.jl), and got a result (a poor one, but a result). Will test SLSQP.