# Get best parameters for ODE modeling in Julia

Hello,
I have been modeling some data on bacterial growth. I first ran a regression model to obtain the growth rates of the two species in the consortium. I then need to run some ODEs using logistic growth; these are biologically matched to the context. While the regression works, the ODE does not. I was expecting the regression to give me the best parameters for the ODE, as is evident in this figure:

However, the regression and the ODE model do not use exactly the same formulae. Is it possible to modify the regression formula to match the ODE model? Or is there a trick to find the best parameters for an ODE model?
This is the code I have written.

``````
using LsqFit                  # curve_fit
using DifferentialEquations   # ODEProblem, solve, AutoVern7, Rodas5

vin_x = [0.0, 6.465599999999999, 12.544800000000002, 24.0, 29.107200000000002, 45.772800000000004, 53.392799999999994, 68.0616]
vin_y = [1.434628836798e8, 1.318375623746e8, 1.108524843855e8, 6.91176196241e7, 4.19137382458e7, 1.48676444645e7, 1.00659726053e7, 7.8734512643e6]
col_x = [0.0, 5.5296, 11.827200000000001, 24.0, 28.627200000000002, 32.916, 40.4928, 48.0, 52.4952, 67.0968]
col_y = [8.02511788606e7, 9.98481296252e7, 2.328332483329e8, 6.148181496526e8, 9.453171952624e8, 1.2331292887706e9, 1.4394296796998e9, 1.460066157955e9, 1.3570403824485e9, 1.4269447174002e9]

model = true_logistic(x, p) = p[1] ./ (1 .+ exp.(-p[2] .* (x .- p[3])))  # logistic equation for REGRESSION

# regression
# Sp. A
x = col_x
y = col_y
par = [col_y[1], 0.0, 1.0]
fit = curve_fit(model, x, y, par)
P_col = fit.param
Vtl_col = true_logistic(x, P_col)
mu = P_col[2]                        # growth rate Sp. A

# Sp. B
x = vin_x
y = vin_y
par = [vin_y[1], 0.0, 1.0]
fit = curve_fit(model, x, y, par)
P_vin = fit.param
Vtl_vin = true_logistic(x, P_vin)
nu = P_vin[2]                        # growth rate Sp. B

# ODEs
function F_logistic!(du, u, p, t)    # LOGISTIC model
    μ, ν, λ, δ, β, η, κ, ω = p
    #=
    p = μ, ν (max growth rates A & B), λ (outflow), δ (adsorption rate),
        β (burst size), η (lysis rate), κ (carrying capacity), ω (outflow)
    du[1] = species A sensible
    du[2] = species A infected
    du[3] = species B
    du[4] = phages
    =#
    N = u[1] + u[2] + u[3]  # total bacteria
    φ = 1 - N/κ             # logistic term
    du[1] = (μ*u[1]*φ) - (δ*u[1]*u[4])            - (ω*u[1])
    du[2] = (μ*u[2]*φ) + (δ*u[1]*u[4]) - (η*u[2]) - (ω*u[2])
    du[3] = (ν*u[3]*φ)                            - (ω*u[3])
    du[4] = (η*β*u[2]) - (δ*u[2]*u[4]) - (λ*u[4]) - (ω*u[4])
end

omega  = 0.2
kappa  = 1.5*10^9
delta  = 5e-10         # note: 5*10^-10 throws a DomainError (negative integer power)
tau    = 23
eta    = 1/(tau/60)
lambda = 0.068
beta   = 150.0
v0     = 0.0
u0     = [col_y[1], 0.0, vin_y[1], v0]      # S, I, R, V
parms  = [mu, nu, lambda, delta, beta, eta, kappa, omega]
tspan  = (0.0, maximum(df.X)*24)            # df = original data frame with times in days (not shown here)
prob   = ODEProblem(F_logistic!, u0, tspan, parms)
soln   = solve(prob, AutoVern7(Rodas5()))
``````

Essentially, I tried to change `true_logistic` into `F_logistic!` by adding the logistic term to the former, but then even the regression no longer works.
How can I get the best parameters for `F_logistic!`?
Thank you


Thank you. You are right, I can dispense with the regression altogether and get the best parameters for the ODE directly. But on the page you suggest, the example does not even use `DiffEqFlux.jl`. On the associated page, if I understand properly, one passes a series of values for each parameter, and DiffEqFlux runs the model for each of them:

``````
# snippet from the DiffEqFlux docs; `prob` is the ODEProblem defined earlier in that example
using Flux, DiffEqFlux
p = [2.2, 1.0, 2.0, 0.4]   # initial parameter vector
params = Flux.params(p)

function predict_rd()      # our 1-layer "neural network"
    solve(prob, Tsit5(), p = p, saveat = 0.1)[1, :]   # override with new parameters
end

loss_rd() = sum(abs2, x - 1 for x in predict_rd())    # loss function
data = Iterators.repeated((), 100)
opt  = ADAM(0.1)           # optimiser (defined earlier in the full docs example)
cb   = function ()         # callback function to observe training
    display(loss_rd())
    # using `remake` to re-create our `prob` with current parameters `p`
    display(plot(solve(remake(prob, p = p), Tsit5(), saveat = 0.1), ylim = (0, 6)))
end
cb()
Flux.train!(loss_rd, params, data, opt, cb = cb)
``````

But this example is for neural networks. For an `ODEProblem` I could not see any equivalent of `Flux.params(p)` or `Flux.train!`.

I have seen there is also DifferentialEquations.jl, which should be more specific to `ODEProblem`…

The neural network case is literally just fitting the parameters of an ODEProblem to match data.

https://diffeqflux.sciml.ai/dev/examples/neural_ode_sciml/
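For instance, here is a minimal sketch of that workflow using DiffEqParamEstim and Optim (the function `logistic!`, the problem `prob1`, and the initial guesses in `p0` are mine, not from your code): it fits a one-species logistic ODE directly to your `col` data, and the same pattern extends to the full four-state model.

``````
using DifferentialEquations, DiffEqParamEstim, Optim

# one-species logistic growth: p[1] = growth rate, p[2] = carrying capacity
logistic!(du, u, p, t) = (du[1] = p[1] * u[1] * (1 - u[1] / p[2]))

u0    = [col_y[1]]
tspan = (0.0, maximum(col_x))
p0    = [0.3, 1.5e9]                 # rough initial guesses
prob1 = ODEProblem(logistic!, u0, tspan, p0)

# L2Loss takes the observation times and a matrix whose i-th column is the
# observation at col_x[i]; with a single state this is a 1×10 matrix
data = reshape(col_y, 1, :)
cost = build_loss_objective(prob1, Tsit5(), L2Loss(col_x, data);
                            maxiters = 10_000, verbose = false)
res  = optimize(cost, p0, BFGS())    # rescaling the parameters may help BFGS here
p_fit = Optim.minimizer(res)         # fitted growth rate and carrying capacity
``````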


OK, based on the post I mentioned, which looks simple enough, I understand I need to create a cost function by passing:

1. the problem generated with `ODEProblem`
2. the solver (in my case `AutoVern7(Rodas5())`)
3. an `L2Loss` containing the x series (time) and a matrix of the y data

But I immediately ran into a problem, because `L2Loss` is not recognized; I thought it was part of Optim:

``````
using Optim, LossFunctions

julia> [col_y, vin_y[1:8]]
2-element Vector{Vector{Float64}}:
[8.02511788606e7, 9.98481296252e7, 2.328332483329e8, 6.148181496526e8, 9.453171952624e8, 1.2331292887706e9, 1.4394296796998e9, 1.460066157955e9, 1.3570403824485e9, 1.4269447174002e9]
[1.434628836798e8, 1.318375623746e8, 1.108524843855e8, 6.91176196241e7, 4.19137382458e7, 1.48676444645e7, 1.00659726053e7, 7.8734512643e6]

julia> cost_function = build_loss_objective(prob, AutoVern7(Rodas5()),
L2Loss(tspan, [col_y, vin_y[1:8]]))
ERROR: UndefVarError: L2Loss not defined
Stacktrace:
[1] top-level scope
@ none:1
julia> cost_function = build_loss_objective(prob, Tsit5(),
L2Loss(tspan, [col_y, vin_y[1:8]]))
ERROR: UndefVarError: L2Loss not defined
Stacktrace:
[1] top-level scope
@ none:1
julia> cost_function = build_loss_objective(prob, AutoVern7(Rodas5()),
L2DistLoss(tspan, [col_y, vin_y[1:8]]), maxiters=10000, verbose=false)
ERROR: MethodError: no method matching L2DistLoss(::Tuple{Float64, Float64}, ::Vector{Vector{Float64}})
Stacktrace:
[1] top-level scope
@ none:1
``````

What is the correct syntax?
Should I pass a matrix, or is a vector of vectors enough?
If a matrix is needed, how do I build it from the vectors? I tried:

``````
julia> reshape([col_y, vin_y[1:8]], 8, 2)
ERROR: DimensionMismatch("new dimensions (8, 2) must be consistent with array size 2")
Stacktrace:
[1] (::Base.var"#throw_dmrsa#234")(dims::Tuple{Int64, Int64}, len::Int64)
@ Base ./reshapedarray.jl:41
[2] reshape
@ ./reshapedarray.jl:45 [inlined]
[3] reshape(::Vector{Vector{Float64}}, ::Int64, ::Int64)
@ Base ./reshapedarray.jl:116
[4] top-level scope
@ none:1

julia> reshape([col_y, vin_y[1:8]], 2, 8)
ERROR: DimensionMismatch("new dimensions (2, 8) must be consistent with array size 2")
Stacktrace:
[1] (::Base.var"#throw_dmrsa#234")(dims::Tuple{Int64, Int64}, len::Int64)
@ Base ./reshapedarray.jl:41
[2] reshape
@ ./reshapedarray.jl:45 [inlined]
[3] reshape(::Vector{Vector{Float64}}, ::Int64, ::Int64)
@ Base ./reshapedarray.jl:116
[4] top-level scope
@ none:1
``````

This is all pretty basic Julia syntax now; it might be worth starting a separate thread on these things, which aren't really about ODE modelling at all.

Iām not sure why youāre trying to use `col_y` as is, while doing `vin_y[1:8]`? Your post suggests that `col_y` has length 10, while `vin_y` has lenght 8. Therefore `vin_y[1:8]` doesnāt do anything, while `col_y` is still two elements longer than `vin_y` and therefore canāt be brought together with it in the same matrix?

You might be looking for:

``````
julia> [col_y[1:8] vin_y]
8×2 Matrix{Float64}:
 8.02512e7  1.43463e8
 9.98481e7  1.31838e8
 2.32833e8  1.10852e8
 6.14818e8  6.91176e7
 9.45317e8  4.19137e7
 1.23313e9  1.48676e7
 1.43943e9  1.0066e7
 1.46007e9  7.87345e6
``````

I would encourage you to read through the Julia documentation to learn the basic syntax. For this issue, the most relevant section is probably the one on multi-dimensional arrays, which has a subsection on concatenation here:

https://docs.julialang.org/en/v1/manual/arrays/#man-array-concatenation
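(The space-separated bracket syntax above is just horizontal concatenation, so `hcat` builds the same matrix:)

``````
julia> hcat(col_y[1:8], vin_y) == [col_y[1:8] vin_y]
true
``````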

Sorry, I typoed the two vectors…
But apart from the definition of the matrix, the problem is that `L2Loss` is not defined in Optim or LossFunctions (and `L2DistLoss` from LossFunctions does not accept those arguments).

It looks like itās defined in `DiffEqParamEstim`, which is the package for which youāve linked to the docs?


I understood the package was `Optim`, but thank you for pointing it out. It was indeed `DiffEqParamEstim`. But now it is the subsequent step that does not work:

``````
julia> cost_function = build_loss_objective(prob, AutoVern7(Rodas5()),
           L2Loss(tspan, [[col_y[1:8]] [vin_y]]), maxiters=10000, verbose=false)
julia> result = optimize(cost_function, parms, BFGS())
ERROR: MethodError: no method matching -(::Vector{Float64}, ::Float64)
For element-wise subtraction, use broadcasting with dot syntax: array .- scalar
Closest candidates are:
-(::MultivariatePolynomials.RationalPoly, ::Any)
-(::Array, ::SparseArrays.AbstractSparseMatrixCSC)
-(::Array{T, N}, ::FillArrays.Zeros{V, N, Axes} where Axes) where {T, V, N}
...
julia> parms
8-element Vector{Float64}:
0.32
0.07
0.068
5.000000000000002e-10
150.0
2.608695652173913
1.5e9
0.2
``````

I have no knowledge of `DiffEqParamEstim`, but the error is clearly telling you that a subtraction of a scalar from a vector is being performed, which probably means you passed a vector when a scalar should have been passed. The likely culprit is here:

``````
[[col_y[1:8]] [vin_y]]
``````

which looks similar to what I posted above (`[col_y[1:8] vin_y]`), but isn't actually the same. As a hint:

``````
julia> typeof([[col_y[1:8]] [vin_y]])
Matrix{Vector{Float64}} (alias for Array{Array{Float64, 1}, 2})

julia> typeof([col_y[1:8] vin_y])
Matrix{Float64} (alias for Array{Float64, 2})
``````
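That is presumably also where the `MethodError` comes from: each element of that inner matrix is itself a whole vector, so somewhere inside the loss a plain `Float64` ends up being subtracted from a `Vector{Float64}`, which is exactly the operation Julia refuses:

``````
julia> [1.0, 2.0] - 1.0
ERROR: MethodError: no method matching -(::Vector{Float64}, ::Float64)
``````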

I again encourage you to work through the documentation so that you gain an understanding of how arrays work; debugging these basic syntax issues line by line on Discourse isn't a very productive way of writing code.