How can you constrain parameters to be positive in SciML?

How do you constrain the parameter estimates in this setup to be strictly positive?
Here is my code:

using Flux, DiffEqFlux, DifferentialEquations, Plots
# SIR model: u = (S, I, R), p[1] = infection rate, p[2] = recovery rate
function f(du,u,p,t)
    S,I,R = u
    du[1] = -p[1]*u[1]*u[2]
    du[2] = p[1]*u[1]*u[2] - p[2]*u[2]
    du[3] = p[2]*u[2]
end

# initialise and have a range between 0 and 15 for time t

tbegin = 0.0
tend = 15
tstep = 1
trange = tbegin:tstep:tend
u0 = Float32[738.0, 1.0, 0.0]
tspan = (tbegin, tend)

t_f = collect(3:14)
I_data=Float32[25.0,75.0,227.0,296.0,258.0,236.0,192.0,126.0,71.0,28.0,11.0,7.0];

p = [0.002,0.3] # set initial values of parameters
prob = ODEProblem(f, u0, tspan, p)

sol = solve(prob, Tsit5(), reltol = 1e-12, saveat = t_f, save_idxs = 2)
plot(sol)

function loss(p)
    # pass the current parameter guess to the solver so the optimizer can actually change it
    sol = solve(prob, Tsit5(), reltol = 1e-12, p = p, saveat = t_f, save_idxs = 2)
    loss = sum(abs2, I_data .- sol)
    return loss, sol
end

callback = function (p, l, pred)
    display(l)
    # Tell sciml_train to not halt the optimization. If return true, then
    # optimization stops.
    return false
end

using Optimization
adtype = Optimization.AutoZygote()
optf = Optimization.OptimizationFunction((x, p) -> loss(x), adtype)
optprob = Optimization.OptimizationProblem(optf, p)
result_ode = Optimization.solve(optprob,
                                ADAM(0.1),
                                callback = callback,
                                maxiters = 300)

I guess the easiest way is to define the parameters on an unconstrained space and map them to the constrained space within f, i.e.,

constrain_positive(uparams) = exp.(uparams)

function f(du,u,up,t)
    S,I,R = u
    p = constrain_positive(up)
    du[1] = -p[1]*u[1]*u[2]
    du[2] = p[1]*u[1]*u[2] - p[2]*u[2]
    du[3] = p[2]*u[2]
end

If you need more complicated transformations or their Jacobians, Bijectors.jl has some predefined already.
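For completeness, here is a rough, untested sketch of how the whole loop would then look; the names up0 and p_est are just illustrative, the idea being that the optimizer works on the unconstrained log scale while f maps back to positive values:

# Untested sketch: optimize on the log scale, read off positive estimates at the end.
up0 = log.([0.002, 0.3])                  # log of the positive initial guess (illustrative name)
optf = Optimization.OptimizationFunction((x, p) -> loss(x), adtype)
optprob = Optimization.OptimizationProblem(optf, up0)
result_ode = Optimization.solve(optprob, ADAM(0.1), callback = callback, maxiters = 300)
p_est = constrain_positive(result_ode.u)  # estimated parameters, strictly positive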

Thank you. Tried the easy way but got an error (UndefVarError: up not defined).

Ok, unfortunately I could not run your code and did not test it. Where exactly do you get the error? (Note that I have changed the name of the parameter argument in the signature of f(du,u,up,t) to up as well.)

using Flux, DiffEqFlux, DifferentialEquations, Plots
function f(du,u,p,t)
    S,I,R = u
    du[1] = -p[1]*u[1]*u[2]
    du[2] = p[1]*u[1]*u[2] - p[2]*u[2]
    du[3] = p[2]*u[2]
end

# initialise and have a range between 1 and 15 for time t

u0 =Float32[738.0,1.0,0.0]
tspan = (1,15)

t_f = collect(3:14)
I_data=Float32[25.0,75.0,227.0,296.0,258.0,236.0,192.0,126.0,71.0,28.0,11.0,7.0]; # 1 set of data used

p = [0.02,0.5] # set initial values of the parameters
prob = ODEProblem(f, u0, tspan, p)
# Solve the ODE once with the initial parameter values:
sol = solve(prob, Tsit5(), reltol = 1e-12, p = p, saveat = t_f)
plot(sol)

function loss(p)
    sol = solve(prob, Tsit5(), reltol = 1e-12, p = p, saveat = t_f)
    loss = sum(abs2, I_data .- sol[2,:])
    return loss, sol
end

callback = function (p, l, pred)
    display(l)
    # Tell sciml_train to not halt the optimization. If return true, then
    # optimization stops.
    return false
end

using Optimization
adtype = Optimization.AutoForwardDiff()
optf = Optimization.OptimizationFunction((x, p) -> loss(x), adtype)
optprob = Optimization.OptimizationProblem(optf, p)
result_ode = Optimization.solve(optprob,
                                ADAM(0.1),
                                callback = callback,
                                maxiters = 30)
Sorry about the code. This is the running code, similar to the one that produces the negative parameters.

Your code is probably fine, but I have trouble installing all the required libraries to get it to run. Please try again to replace your definition of f with the complete snippet that I had posted. Sorry that I cannot help in more detail at the moment.

Thank you, but when it's implemented, the parameters are never estimated; you get back the initial parameters. I wonder whether there is a strategy of limiting them by setting boundaries.

Instead of doing it like this, just use the lb in the OptimizationProblem definition to set a lower bound, i.e. lb = [0.0,0.0]. That should be much simpler, and the optimizer can choose how it wants to constrain to that box constraint.
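A minimal, untested sketch of what that could look like with the snippet above; the upper bounds and the choice of Fminbox(BFGS()) from OptimizationOptimJL are only illustrative assumptions, since a plain ADAM run would not enforce the box constraints:

using OptimizationOptimJL   # assumed available; provides Optim's box-constrained methods
optf = Optimization.OptimizationFunction((x, p) -> loss(x), Optimization.AutoForwardDiff())
optprob = Optimization.OptimizationProblem(optf, p, lb = [0.0, 0.0], ub = [1.0, 1.0])  # illustrative bounds
result_ode = Optimization.solve(optprob, Fminbox(BFGS()), maxiters = 100)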
