Nonlinear least squares parameter estimation with one set of data points

I have been trying to estimate parameters with one set of data. First, I followed the least squares parameter estimation examples to write the code below, where I generated three toy data sets for x, y, and z. Everything works fine.

using OrdinaryDiffEq, RecursiveArrayTools, DiffEqParamEstim, LeastSquaresOptim

function f(du,u,p,t)
    du[1] = -p[1]*u[1]*u[2]             # dx
    du[2] = p[1]*u[1]*u[2] - p[2]*u[2]  # dy
    du[3] = p[2]*u[2]                   # dz
end

u0 = [738.0;1.0;0]
tspan = (0.0,14.0)
p = [0.00237,0.465]
prob = ODEProblem(f,u0,tspan,p)
sol = solve(prob,Tsit5())

t = collect(range(0, stop = 10, length = 200))
randomized = VectorOfArray([(sol(t[i]) + 0.0randn(3)) for i in 1:length(t)])  # noise amplitude is 0.0, so this is the exact solution sampled at t
data = convert(Array, randomized)  # 3 toy sets of data for x, y and z

cost_function = build_lsoptim_objective(prob,t,data,Tsit5())

x = [0.03,0.8]
res = optimize!(LeastSquaresProblem(x = x, f! = cost_function,
                                    output_length = length(t)*length(prob.u0)),
                LeastSquaresOptim.Dogleg(LeastSquaresOptim.LSMR()))
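
For reference, the recovered parameters can then be read off the result (a small sketch, assuming LeastSquaresOptim's result object exposes a minimizer field):

@show res.minimizer  # should be close to the true p = [0.00237, 0.465]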

In the second code, I have only one data set, for y, and need help on how to modify the cost_function to estimate the parameters. Thank you.

function f(du,u,p,t)
    du[1] = -p[1]*u[1]*u[2]             # dx
    du[2] = p[1]*u[1]*u[2] - p[2]*u[2]  # dy
    du[3] = p[2]*u[2]                   # dz
end

u0 = [738.0;1.0;0]
tspan = (0.0,14.0)
p = [0.00237,0.465]
prob = ODEProblem(f,u0,tspan,p)
sol = solve(prob,Tsit5())

t = collect(3:14)
y_data = [25,75,227,296,258,236,192,126,71,28,11,7] # 1 set of data for y

cost_function = build_lsoptim_objective(prob,t,y_data,Tsit5()) #???

x = [0.03,0.8]
res = optimize!(LeastSquaresProblem(x = x, f! = cost_function,
                                    output_length = length(t)*length(prob.u0)),
                LeastSquaresOptim.Dogleg(LeastSquaresOptim.LSMR()))

Use cost_function = build_lsoptim_objective(prob,t,y_data,Tsit5(),save_idxs = 2) so it's only saving and comparing against y.

Thanks Chris for the quick reply. I tried this and got a dimension mismatch ("array could not be broadcast to match destination"). Maybe it's about prob?

Post the full error message. My guess is that it expects the data to still be a matrix, so save_idxs = 2:2 might be the easy fix, or maybe you have to use y_data', but the answer would be in the error message.
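
For example, a quick way to compare the shapes involved (a rough diagnostic sketch; saveat = t is assumed here so the solution is sampled at the data times):

sol_mat = Array(solve(prob, Tsit5(), saveat = t, save_idxs = 2:2))
@show size(sol_mat)  # (1, length(t)) -- a 1xN matrix
@show size(y_data)   # (length(t),)  -- a plain vector
@show size(y_data')  # (1, length(t)) -- transposing matches the matrix shape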

Thanks once again. I tried the easy fixes but the error still persists. Here is the error report.

DimensionMismatch: array could not be broadcast to match destination

Stacktrace:
[1] check_broadcast_shape
@ .\broadcast.jl:540 [inlined]
[2] check_broadcast_axes
@ .\broadcast.jl:543 [inlined]
[3] check_broadcast_axes
@ .\broadcast.jl:546 [inlined]
[4] instantiate
@ .\broadcast.jl:284 [inlined]
[5] materialize!
@ .\broadcast.jl:871 [inlined]
[6] materialize!
@ .\broadcast.jl:868 [inlined]
[7] (::DiffEqParamEstim.var"#49#50"{…})(out::Vector{Float64}, p::Vector{Float64})
@ DiffEqParamEstim C:\Users\Kayanja\.julia\packages\DiffEqParamEstim\tWnyt\src\build_lsoptim_objective.jl:14
[8] optimize!(anls::LeastSquaresProblemAllocated{…}; x_tol::Float64, f_tol::Float64, g_tol::Float64, iterations::Int64, Δ::Float64, store_trace::Bool, show_trace::Bool, show_every::Int64, lower::Vector{Float64}, upper::Vector{Float64})
@ LeastSquaresOptim C:\Users\Kayanja\.julia\packages\LeastSquaresOptim\B1WKJ\src\optimizer\dogleg.jl:71
[9] optimize!
@ C:\Users\Kayanja\.julia\packages\LeastSquaresOptim\B1WKJ\src\optimizer\dogleg.jl:46 [inlined]
[10] #optimize!#11
@ C:\Users\Kayanja\.julia\packages\LeastSquaresOptim\B1WKJ\src\types.jl:149 [inlined]
[11] optimize!(nls::LeastSquaresProblem{…}, optimizer::Dogleg{LeastSquaresOptim.LSMR{Nothing, Nothing}})
@ LeastSquaresOptim C:\Users\Kayanja\.julia\packages\LeastSquaresOptim\B1WKJ\src\types.jl:148
[12] top-level scope
@ In[76]:30
[13] eval
@ .\boot.jl:368 [inlined]
[14] include_string(mapexpr::typeof(REPL.softscope), mod::Module, code::String, filename::String)
@ Base .\loading.jl:1281

Thank you Chris. I changed the cost_function to

cost_function = build_loss_objective(prob,Tsit5(),L2Loss(t,y_data),
                                     maxiters=10000,verbose=false,save_idxs=[2])

and it worked.
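
Note that build_loss_objective returns a scalar loss rather than a residual vector, so it pairs naturally with a scalar optimizer. A minimal sketch with Optim.jl (the BFGS choice here is illustrative, not from the thread):

using Optim

result = Optim.optimize(cost_function, [0.03, 0.8], BFGS())  # cost_function(p) is a scalar
@show Optim.minimizer(result)  # estimated [p1, p2]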

Hello,
I tried your problem, but I still get an error. How did you fix it?
The error is as follows:
BoundsError: attempt to access 2-element Vector{Float64} at index [3]

Stacktrace:
[1] setindex!
@ ./array.jl:903 [inlined]
[2] finite_difference!(f::DiffEqParamEstim.var"#37#42"{…}, x::Vector{Float64}, g::Vector{Float64}, dtype::Symbol)
@ Calculus ~/.julia/packages/Calculus/mbqhh/src/finite_difference.jl:130
[3] (::DiffEqParamEstim.var"#40#46"{…})(x::Vector{Float64}, out::Vector{Float64})
@ DiffEqParamEstim ~/.julia/packages/DiffEqParamEstim/BbF2D/src/build_loss_objective.jl:87
[4] (::DiffEqParamEstim.var"#41#47"{…})(p::Vector{Float64}, grad::Vector{Float64})
@ DiffEqParamEstim ~/.julia/packages/DiffEqParamEstim/BbF2D/src/build_loss_objective.jl:92
[5] DiffEqObjective
@ ~/.julia/packages/DiffEqParamEstim/BbF2D/src/build_loss_objective.jl:26 [inlined]
[6] optimize!(anls::LeastSquaresProblemAllocated{…}; x_tol::Float64, f_tol::Float64, g_tol::Float64, iterations::Int64, Δ::Float64, store_trace::Bool, show_trace::Bool, show_every::Int64, lower::Vector{Float64}, upper::Vector{Float64})
@ LeastSquaresOptim ~/.julia/packages/LeastSquaresOptim/B1WKJ/src/optimizer/dogleg.jl:71
[7] optimize!
@ ~/.julia/packages/LeastSquaresOptim/B1WKJ/src/optimizer/dogleg.jl:51 [inlined]
[8] #optimize!#11
@ ~/.julia/packages/LeastSquaresOptim/B1WKJ/src/types.jl:149 [inlined]
[9] optimize!(nls::LeastSquaresProblem{…}, optimizer::Dogleg{LeastSquaresOptim.LSMR{Nothing, Nothing}})
@ LeastSquaresOptim ~/.julia/packages/LeastSquaresOptim/B1WKJ/src/types.jl:149
[10] top-level scope
@ In[294]:19
[11] eval
@ ./boot.jl:373 [inlined]
[12] include_string(mapexpr::typeof(REPL.softscope), mod::Module, code::String, filename::String)
@ Base ./loading.jl:1196

Gents, please be mindful of the forum and post your long code and error messages more readably by using the available Discourse tools.

Did you change to cost_function = build_loss_objective(prob,Tsit5(),L2Loss(t,y_data), maxiters=10000,verbose=false,save_idxs=[2])?

Do exactly what Chris suggested. In my case, the optimization was easily achieved using BlackBoxOptim. I used to find these optimization problems difficult in Matlab but found them super easy in Julia. Thanks to Chris for his great ordinary differential equation tutorials.

Yes, I did:
function f(du,u,p,t)
    du[1] = -p[1]*u[1]*u[2]             # dx
    du[2] = p[1]*u[1]*u[2] - p[2]*u[2]  # dy
    du[3] = p[2]*u[2]                   # dz
end

u0 = [738.0;1.0;0]
tspan = (0.0,14.0)
p = [0.00237,0.465]
prob = ODEProblem(f,u0,tspan,p)
sol = solve(prob,Tsit5())

t = collect(3:14)
y_data = [25,75,227,296,258,236,192,126,71,28,11,7] # 1 set of data for y

cost_function = build_loss_objective(prob,Tsit5(),L2Loss(t,y_data), maxiters=10000,verbose=false,save_idxs = [2])

x = [0.03,0.8]
res = optimize!(LeastSquaresProblem(x = x, f! = cost_function,
                                    output_length = length(t)*length(prob.u0)),
                LeastSquaresOptim.Dogleg(LeastSquaresOptim.LSMR()))

I had the same problem in Matlab even with simple parameter estimation, which is why I moved to Julia. But I am very new to Julia and have limited time to do my parameter estimation, so I decided to work through simple examples first and then move on to my main project. I would appreciate suggestions for suitable methods: my main project is very complex, with more than a hundred states and parameters. Based on your experience, which packages and methods do you suggest?

If you have a vector of data, then you want to make solve return a vector. If you run solve with those options, you'll see the output format.

Do you mean I should change the options for the solve command?

I should add a tutorial on the expected data format. It basically just assumes the data matches Array(sol). Array(solve(prob,alg)) on a multidimensional system is a matrix. Array(solve(prob,alg,save_idxs = [1,2,3])) is a 3xN matrix. Array(solve(prob,alg,save_idxs = [2])) is a 1xN matrix. Array(solve(prob,alg,save_idxs = 2)) is a vector of length N.
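
To see those shapes concretely for this model (a quick sketch; saveat = t is added so the output lines up with the data times):

A = Array(solve(prob, Tsit5(), saveat = t))                  # 3 x N matrix, all states
M = Array(solve(prob, Tsit5(), saveat = t, save_idxs = [2])) # 1 x N matrix, y only
v = Array(solve(prob, Tsit5(), saveat = t, save_idxs = 2))   # length-N vector, y only
@show size(A), size(M), size(v)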

If you have a very complex parameter estimation problem, directly using SciMLSensitivity.jl is probably best given its flexibility.

https://sensitivity.sciml.ai/dev/

Thank you Chris for your suggestion.
Do you have a simple example of this method?
For example, using it on a simple SIR epidemic model.

Optimization of Ordinary Differential Equations · SciMLSensitivity.jl shows how to optimize the parameters of an ODE with respect to arbitrary loss functions. I should probably change that tutorial into a parameter estimation one, but I have videos to make today.
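
In the meantime, here is a minimal sketch of that direct approach for the model in this thread, assuming the Optimization.jl interface with ForwardDiff-based gradients (the package and solver choices are illustrative):

using OrdinaryDiffEq, SciMLSensitivity, Optimization, OptimizationOptimJL

function loss(p, _)
    sol = solve(remake(prob, p = p), Tsit5(), saveat = t, save_idxs = 2,
                maxiters = 10000, verbose = false)
    return sum(abs2, Array(sol) .- y_data)  # L2 loss against the y data only
end

optf = OptimizationFunction(loss, Optimization.AutoForwardDiff())
optprob = OptimizationProblem(optf, [0.03, 0.8])
res = solve(optprob, BFGS())
@show res.u  # estimated parameters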

After setting up the cost_function, you can now optimize using BlackBoxOptim.
For example (note there are two parameters here, so the search range needs two entries):
result = bboptimize(cost_function; SearchRange = [(0.0, 1.0), (0.0, 1.0)])
Read more in the BlackBoxOptim.jl documentation.
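
To pull out the estimate afterwards (a sketch, assuming BlackBoxOptim.jl's standard accessors):

using BlackBoxOptim

@show best_candidate(result)  # estimated [p1, p2]
@show best_fitness(result)    # final loss value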