ODE parameter estimation using neural nets

I’m trying to add inequality constraints to my ODE parameter optimization problem. The example I followed was fluxdiffeq. I need p[1] + p[2] <= 1 and p[1] and p[2] both also have to be positive. Is there any way I can achieve this with DiffEqFlux?

It would be done in the Optimization.jl setup.
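For reference, here is a minimal sketch of how inequality constraints look in Optimization.jl, on a toy quadratic rather than your ODE (the objective and numbers here are illustrative, not from your problem): `cons` writes the constraint values into `res`, `lcons`/`ucons` bound those values, and `lb`/`ub` bound the decision variables themselves.

```julia
using Optimization, OptimizationOptimJL

# Toy objective: minimize distance to (0.3, 0.9)
quadcost(x, p) = (x[1] - 0.3)^2 + (x[2] - 0.9)^2

# One nonlinear constraint: x[1] + x[2]
cons(res, x, p) = (res .= [x[1] + x[2]])

optf = OptimizationFunction(quadcost, Optimization.AutoForwardDiff(); cons = cons)
prob = OptimizationProblem(optf, [0.2, 0.2];
    lb = [0.0, 0.0], ub = [1.0, 1.0],   # box constraints: 0 <= x <= 1
    lcons = [-Inf], ucons = [1.0])      # x[1] + x[2] <= 1
sol = solve(prob, IPNewton())
```

Since the unconstrained minimum (0.3, 0.9) violates x[1] + x[2] <= 1, the solver should land on the boundary at roughly (0.2, 0.8).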


Hi! Thanks for your reply. I’m quite new to Julia, so I’m not sure if the same approach in the link you sent could be applied to ODEs?
Here’s my code:

I1(t) = (60 >= t) ? 550 : 0
I2(t) = (40 >= t >= 15) ? 810 : 0
timesteps = collect(1:1:100)

function crm_bhp!(du, u, p, t)
    τ, J, F1, F2 = p
    du[1] = (I1(t)*F1 + I2(t)*F2 - 2τ*J - u[1]) / τ
end

u0 = [0.0]
tspan = (minimum(timesteps), maximum(timesteps))
crm_params = [5.0, 1.0, 0.1, 0.9]
prob = ODEProblem(crm_bhp!, u0, tspan, crm_params, saveat=1)
sol = solve(prob)

I want to optimize crm_params, and the constraints and bounds are:

cons(res, u, p) = (res .= [p[3]+p[4], p[1], p[2], p[3], p[4]])
crm_params_ = [6.0, 3.0, 0.2, 0.8]
optprob = OptimizationFunction(crm_bhp!, Optimization.AutoForwardDiff(), cons=cons)
prob2 = OptimizationProblem(optprob, u0, crm_params_, 
    lcons = [0.0, 0.0, 0.0, 0.0, 0.0], ucons = [1.0, 10.0, 10.0, 1.0, 1.0])
sol2 = solve(prob2, IPNewton())

I’m getting the warning Warning: Initial guess is not an interior point.

You want to treat the box constraints differently from the nonlinear constraints. This should be a single constraint system with only the nonlinear constraint in cons, and the bounds on p set via lb and ub instead. See if that solves it (the optimizer may be throwing this warning because the parameter bounds are not set, in which case we should throw a better error there).

I’m not sure if I fully understood… this is the new code I tried…

function obj(p)
    prob_ = ODEProblem(crm_bhp!, u0, tspan, p, saveat=1)
    sol_ = solve(prob_, Tsit5())
    return sol_.u .- sol.u
end

optprob = OptimizationFunction(obj, Optimization.AutoForwardDiff())
initial = [6.0, 3.0, 0.2, 0.8]
lower = [0.0, 0.0, 0.0, 0.0]
upper = [10.0, 10.0, 1.0, 1.0]
optimize(obj, lower, upper, initial, Fminbox(GradientDescent()))

The error I’m getting is MethodError: Cannot convert an object of type Vector{Vector{Float64}} to an object of type Float64. I’m also not sure how to impose the inequality constraint p[3] + p[4] <= 1 here… Could you please elaborate?

Objective functions need to return a scalar cost value, but sol_.u .- sol.u is a vector of residuals, one per time point. Did you instead mean sum(abs2, Array(sol_) .- Array(sol))?
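A minimal sketch of that fix, assuming the prob and reference sol from the earlier code are in scope (remake is the usual way to re-solve the same problem with new parameters):

```julia
using OrdinaryDiffEq  # for Tsit5 and remake

function obj(p)
    prob_ = remake(prob, p = p)                   # same problem, new parameters
    sol_ = solve(prob_, Tsit5(), saveat = 1)
    return sum(abs2, Array(sol_) .- Array(sol))   # scalar sum of squared errors
end
```

This is the form that can then be handed to OptimizationFunction, since it maps the parameter vector to a single Float64.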