# Optimizing a simple two-parameter ODE

I am having trouble understanding the results of code that optimizes a two parameter ODE.

``````julia
using DifferentialEquations
using DiffEqParamEstim
using Optim

data = [3140, 3232, 3488, 3415, 3192, 3265, 2961, 3548, 4269, 4681, 4415, 4883, 4150, 4394, 5856, 7126, 9130, 9367, 10631, 10737, 11688, 14993, 20665, 23957, 23621, 21338, 22034, 33781, 32013, 39840, 45961, 45359, 46126, 38077, 37410, 39431]

# growth model: dC/dt = r*C^p
function f(C, params, t)
    r, p = params
    r * C^p
end

c0 = 3140.0
tspan = (0.0, float(length(data) - 1))
t = collect(0.0:length(data)-1)          # the time points the data live on
prob = ODEProblem(f, c0, tspan, [0.5, 0.8]) # sets up the problem
sol = solve(prob, Tsit5(), reltol=1e-8, abstol=1e-8, saveat=1) # a nice solution

# let's try to optimize the parameters r and p
cost_function = build_loss_objective(prob, Tsit5(), L2Loss(t, data), maxiters=10000, verbose=false)
result = Optim.optimize(cost_function, [0.5, 0.8])
result.minimizer
``````

which gives

``````
result.minimizer

2-element Vector{Float64}:
 0.703089235189494
 0.16348045411130982
``````

but if I use these parameters for `r` and `p`, the solution fits the data very poorly (sorry, I am not plotting the results here, but I can certainly attach images if that would help):

``````julia
prob = ODEProblem(f, c0, tspan, result.minimizer)
sol = solve(prob, Tsit5(), reltol=1e-8, abstol=1e-8, saveat=1)
``````
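To see how far off those fitted parameters are without plotting, here is a quick sanity check (a pure-Julia forward-Euler sketch, so the numbers are only approximate; `euler_growth` is a helper name I made up for illustration):

``````julia
# Rough forward-Euler integration of dC/dt = r*C^p, just to sanity-check
# the fitted parameters without the solver stack (illustrative helper).
function euler_growth(c0, r, p, tmax; dt=0.01)
    C = float(c0)
    for _ in 1:round(Int, tmax / dt)
        C += dt * r * C^p   # explicit Euler step
    end
    C
end

euler_growth(3140, 0.703089, 0.163480, 35)
``````

With `r ≈ 0.70` and `p ≈ 0.16`, `C^p ≈ 3.7`, so the model adds only a couple of cases per day and ends near 3,230 after 35 days, nowhere near the observed 39,431, which is consistent with the "really bad results".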

Moreover, the optimized values are sensitive to the initial guess. Is there a better algorithm that might be less sensitive to the initial values?
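One common workaround for this sensitivity is a multistart: run the local optimizer from many starting guesses and keep the best result. A sketch (with a made-up helper name `multistart` and a toy cost function standing in for `cost_function`):

``````julia
# Multistart sketch: evaluate each starting guess and keep the best.
# In the real problem, replace `cost(p0)` with a local solve, e.g. the
# minimum found by Optim.optimize(cost_function, p0).
function multistart(cost, starts)
    best_p, best_c = first(starts), Inf
    for p0 in starts
        c = cost(p0)             # stand-in for a local optimizer run
        if c < best_c
            best_p, best_c = p0, c
        end
    end
    best_p, best_c
end

toy_cost(p) = (p[1] - 0.7)^2 + (p[2] - 0.3)^2   # toy bowl, minimum at (0.7, 0.3)
starts = [[r0, p0] for r0 in 0.1:0.2:0.9 for p0 in 0.1:0.2:0.9]
multistart(toy_cost, starts)
``````

Global optimizers (e.g. BlackBoxOptim.jl) are another option when the loss surface has many local minima.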

Also, I tried using `LeastSquaresOptim` but I can’t get it to run.

``````julia
using LeastSquaresOptim
cost_function = build_lsoptim_objective(prob, tspan, data, Tsit5())
lsp = LeastSquaresProblem(x = [0.5, 0.5], f! = cost_function, output_length = 2)
res = optimize!(lsp, LeastSquaresOptim.Dogleg())
``````

which throws

``````
BoundsError: attempt to access 2-element Vector{Float64} at index [-33:2]
``````

(A minor side issue: the namespace gets polluted, so perhaps I am calling a different overloaded version of `optimize` than I intend.)

I was able to figure out the `LeastSquaresOptim` problem. When constructing the cost function with `build_lsoptim_objective(prob, t, data, alg)`, the `t` argument refers to the time points passed as `saveat` in the ODE `solve` call. It does not refer to the original `tspan` of the ODE, although that's what it looks like.

Also, the objective function that is returned from `build_lsoptim_objective` has the signature

``````julia
function (out, p) ... end
``````

where `out` is a preallocated vector used to store the results of the ODE problem. Therefore, the `output_length` parameter in the `LeastSquaresProblem` constructor is actually the length of your data (or whatever length your ODE solution has), not the number of parameters.
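A toy version of that in-place signature (pure Julia, with a made-up exponential stand-in model, just to illustrate the shapes) shows why `output_length` must match the data length:

``````julia
data = [3140.0, 3232.0, 3488.0, 3415.0]   # first few points from above
t = 0:length(data)-1

# In-place residual with the f!(out, p) signature: one residual entry per
# data point (C0*exp(r*t) is an illustrative stand-in model).
function resid!(out, p)
    r = p[1]
    for (i, ti) in enumerate(t)
        out[i] = data[1] * exp(r * ti) - data[i]   # model minus data
    end
    out
end

out = zeros(length(data))   # output_length == length(data) == 4, not 2
resid!(out, [0.0])
``````

Passing `output_length = 2` makes the solver try to write a data-length residual into a 2-element vector, which matches the `BoundsError` above (the range `-33:2` has length 36, the length of the data).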
