NeuralODE: optimal control action in a constrained optimization problem

Hi, I’m new to Julia.
I want to ask whether it’s possible to optimize a differential equation using a NeuralODE.

I tried to write the code below, but the problem is the physical constraints:
I don’t know how to add these constraint equations to my problem in order to optimize the loss function.

I know NLopt.jl is a library designed for this kind of problem, but my aim is to use a NeuralODE.

Here is my code for reference:


using DifferentialEquations
using Plots
using Flux, DiffEqFlux
using Flux: throttle
using Optim
using CUDA
using OrdinaryDiffEq,Statistics, DiffEqSensitivity

tspan = (0.0, 30.0)
datasize = 100
t = tspan[1]:0.1:tspan[2]

#Parameters
r0 = 1.0
θ0 = 0.0
u0 = 0.0
v0 = 1/sqrt(r0)
m0 = 1.0
Tr0 = 0.01
Tt0 = 0.0
y0 = [r0,θ0,u0,v0,m0,Tr0,Tt0]
c = 1

#Problem differential Equation
function dynamic2D(dy,y,p,t)
    r,θ,u,v,m = y
    
    action = p(t)
    Tr = action[1]  #Control input
    Tt = action[2] #Control input
    
    dy[1] = u
    dy[2] = v/r
    dy[3] = v^2/r - 1/r^2 + Tr/m #TODO: check this term
    dy[4] = -u*v/r + Tt/m #Tt
    dy[5] = -sqrt(Tt^2 + Tr^2)/c
    
end

#Neural_Network parameters
n_n = 5                               #number of neurons
act = swish                           #activation function
dim_input = 5                         #input dimension
dim_output = 2                        #output dimension

NN = FastChain(FastDense(dim_input,n_n,act),FastDense(n_n,dim_output)) |> gpu
par = initial_params(NN) |> gpu;

#Agent 
function agent(dy,y,p,t)
    
    action = NN([y[1],y[2],y[3],y[4],y[5]],p)
    Tr = action[1]
    Tt = action[2]
    dy[6] = Tr
    dy[7] = Tt
    dynamic2D(dy,y[1:5],t -> action,t)
    
end

prob = ODEProblem(agent,y0,tspan,nothing)

#Prediction function
function pred(p)
    _prob = remake(prob,p = p)
    prediction = solve(_prob, Rodas5(), saveat = t, abstol = 1e-8, reltol = 1e-6) |> gpu #[t .<= 20.0]
    Array(prediction)'
end

#Loss function 
function Loss()
    x = pred(par)
    loss = abs(((x[1,end]-1.524)^2 - x[5,end])) #+ (states[4,end]-1/sqrt(1.524))^2  TODO: check .-
    return loss
end

#Hyperparameters
n_epochs = 500
learning_rate = 0.1 
data = Iterators.repeated((),n_epochs)
opt = ADAM(learning_rate)
LOSS = []

cb = function() #parameters, loss, prediction
    x = pred(par)
    l = Loss()
    fig = plot(t,x,layout = (7,1),size = (900,900), color = "black",label = [" r " " θ " " u " " v " " m " "thrust Tr" "thrust Tt"])
    push!(LOSS,l)
    display(fig)

end

Flux.train!(Loss,Flux.params(par),data,opt,cb = throttle(cb,10))

DiffEqFlux uses GalacticOptim.jl under the hood, and its optimizers allow for defining constraints, so go for it. Here’s an example using it directly:

https://diffeqflux.sciml.ai/dev/examples/neural_ode_galacticoptim/

and here’s an example with a constrained optimization:

https://galacticoptim.sciml.ai/dev/tutorials/rosenbrock/
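
For reference, the constrained part of that Rosenbrock example looks roughly like this (a sketch against GalacticOptim’s API at the time; the lcons/ucons bound values are illustrative, and IPNewton comes from Optim.jl):

using GalacticOptim, Optim

rosenbrock(x, p) = (p[1] - x[1])^2 + p[2]*(x[2] - x[1]^2)^2
x0 = zeros(2)
p = [1.0, 100.0]

#One nonlinear constraint c(x) = x[1]^2 + x[2]^2, kept between lcons and ucons
cons = (x, p) -> [x[1]^2 + x[2]^2]

optf = GalacticOptim.OptimizationFunction(rosenbrock, GalacticOptim.AutoForwardDiff(); cons = cons)
cprob = GalacticOptim.OptimizationProblem(optf, x0, p; lcons = [-Inf], ucons = [0.8])
sol = GalacticOptim.solve(cprob, IPNewton())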


Thanks @ChrisRackauckas for the advice!
I can’t find much documentation about my problem in particular.
At first I tried to optimize this problem in an unconstrained way, but it doesn’t work.

I can’t understand what the problem is.
I followed this example: https://diffeqflux.sciml.ai/dev/examples/neural_ode_galacticoptim/

Here is my code:

using DifferentialEquations
using Plots
using Flux
using DiffEqFlux
using OrdinaryDiffEq,GalacticOptim

#Data
tspan = (0.0, 30.0)
datasize = 100
t = tspan[1]:0.1:tspan[2]


r0 = 1.0
θ0 = 0.0
u0 = 0.0
v0 = 1/sqrt(r0)
m0 = 1.0
Tr0 = 0.01
Tt0 = 0.0
y0 = [r0,θ0,u0,v0,m0,Tr0,Tt0]
c = 1

#System dynamics definition
function dynamic2D(dy,y,p,t)
    r,θ,u,v,m = y
    
    action = p(t)
    Tr = action[1]
    Tt = action[2]
    
    dy[1] = u
    dy[2] = v/r
    dy[3] = v^2/r - 1/r^2 + Tr/m #TODO: check this term
    dy[4] = -u*v/r + Tt/m #Tt
    dy[5] = -sqrt(Tt^2 + Tr^2)/c
    
end

#Neural network definition
n_n = 5                               #number of neurons
act = swish                           #activation function
dim_input = 5                         #input dimension
dim_output = 2                        #output dimension

NN = FastChain(FastDense(dim_input,n_n,tanh),FastDense(n_n,dim_output))
par = initial_params(NN) 

#Agent control
function agent(dy,y,p,t)
    
    action = NN([y[1],y[2],y[3],y[4],y[5]],p)
    Tr = action[1]
    Tt = action[2]
    dy[6] = Tr
    dy[7] = Tt
    dynamic2D(dy,y[1:5],t -> action,t)
    
end

prob = ODEProblem(agent,y0,tspan,nothing)

#Cost function definition

function pred(p)
    _prob = remake(prob,p = p)
    prediction = solve(_prob, Rodas5(), saveat = t, abstol = 1e-8, reltol = 1e-6) #[t .<= 20.0]
    Array(prediction)
end

function Loss(p)
    x = pred(p)
    loss = ((x[1,end]-1.524)^2 - x[5,end]) #+ (states[4,end]-1/sqrt(1.524))^2  TODO: check .-
    return loss,x
end

n_epochs = 500
learning_rate = 0.1 
data = Iterators.repeated((),n_epochs)
opt = ADAM(learning_rate) 
LOSS = []

cb = function(p,l,pred) #parameters, loss, prediction
    fig = plot(t,pred,layout = (7,1),size = (900,900), color = "black",label = [" r " " θ " " u " " v " " m " "thrust Tr" "thrust Tt"])
    push!(LOSS,l)
    display(fig)
    return false

end

adtype = GalacticOptim.AutoZygote()

f = GalacticOptim.OptimizationFunction((x,p)-> Loss(x), adtype)
ff = GalacticOptim.instantiate_function(f, par, adtype, nothing)
prob = GalacticOptim.OptimizationProblem(ff, par)

sol = GalacticOptim.solve(prob,ADAM(0.05),cb = cb,maxiters = 300)

Here is the error:

MethodError: no method matching iterate(::ErrorException)
Closest candidates are:
iterate(::Union{LinRange, StepRangeLen}) at range.jl:664
iterate(::Union{LinRange, StepRangeLen}, ::Int64) at range.jl:664
iterate(::T) where T<:Union{Base.KeySet{var"#s79", var"#s78"} where {var"#s79", var"#s78"<:Dict}, Base.ValueIterator{var"#s77"} where var"#s77"<:Dict} at dict.jl:693

Stacktrace:
[1] indexed_iterate(I::ErrorException, i::Int64)
@ Base ./tuple.jl:89
[2] #s3010#1217
@ ~/.julia/packages/Zygote/0da6K/src/compiler/interface2.jl:28 [inlined]
[3] var"#s3010#1217"(::Any, ctx::Any, f::Any, args::Any)
@ Zygote ./none:0
[4] (::Core.GeneratedFunctionStub)(::Any, ::Vararg{Any, N} where N)
@ Core ./boot.jl:571
[5] _pullback
@ ~/.julia/packages/GalacticOptim/paH6q/src/solve.jl:66 [inlined]
[6] _pullback(::Zygote.Context, ::SciMLBase.var"#__solve##kw", ::NamedTuple{(:saveat, :abstol, :reltol), Tuple{StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}}, Float64, Float64}}, ::typeof(SciMLBase.__solve), ::OptimizationProblem{false, OptimizationFunction{false, GalacticOptim.AutoZygote, OptimizationFunction{true, GalacticOptim.AutoZygote, var"#5#6", Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, GalacticOptim.var"#156#166"{GalacticOptim.var"#155#165"{OptimizationFunction{true, GalacticOptim.AutoZygote, var"#5#6", Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, Nothing}}, GalacticOptim.var"#159#169"{GalacticOptim.var"#155#165"{OptimizationFunction{true, GalacticOptim.AutoZygote, var"#5#6", Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, Nothing}}, GalacticOptim.var"#164#174", Nothing, Nothing, Nothing}, Vector{Float32}, Vector{Float32}, Nothing, Nothing, Nothing, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, ::Rodas5{0, true, DefaultLinSolve, DataType}, ::Base.Iterators.Cycle{Tuple{GalacticOptim.NullData}})
@ Zygote ~/.julia/packages/Zygote/0da6K/src/compiler/interface2.jl:0
[7] _pullback
@ ~/.julia/packages/GalacticOptim/paH6q/src/solve.jl:66 [inlined]
[8] _apply
@ ./boot.jl:804 [inlined]
[9] adjoint
@ ~/.julia/packages/Zygote/0da6K/src/lib/lib.jl:200 [inlined]
[10] _pullback
@ ~/.julia/packages/ZygoteRules/OjfTt/src/adjoint.jl:57 [inlined]
[11] _pullback
@ ~/.julia/packages/SciMLBase/DXiE6/src/solve.jl:3 [inlined]
[12] _pullback(::Zygote.Context, ::SciMLBase.var"##solve#468", ::Base.Iterators.Pairs{Symbol, Any, Tuple{Symbol, Symbol, Symbol}, NamedTuple{(:saveat, :abstol, :reltol), Tuple{StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}}, Float64, Float64}}}, ::typeof(solve), ::OptimizationProblem{false, OptimizationFunction{false, GalacticOptim.AutoZygote, OptimizationFunction{true, GalacticOptim.AutoZygote, var"#5#6", Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, GalacticOptim.var"#156#166"{GalacticOptim.var"#155#165"{OptimizationFunction{true, GalacticOptim.AutoZygote, var"#5#6", Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, Nothing}}, GalacticOptim.var"#159#169"{GalacticOptim.var"#155#165"{OptimizationFunction{true, GalacticOptim.AutoZygote, var"#5#6", Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, Nothing}}, GalacticOptim.var"#164#174", Nothing, Nothing, Nothing}, Vector{Float32}, Vector{Float32}, Nothing, Nothing, Nothing, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, ::Rodas5{0, true, DefaultLinSolve, DataType})
@ Zygote ~/.julia/packages/Zygote/0da6K/src/compiler/interface2.jl:0
[13] _apply(::Function, ::Vararg{Any, N} where N)
@ Core ./boot.jl:804
[14] adjoint
@ ~/.julia/packages/Zygote/0da6K/src/lib/lib.jl:200 [inlined]
[15] _pullback
@ ~/.julia/packages/ZygoteRules/OjfTt/src/adjoint.jl:57 [inlined]
[16] _pullback
@ ~/.julia/packages/SciMLBase/DXiE6/src/solve.jl:3 [inlined]
[17] _pullback(::Zygote.Context, ::CommonSolve.var"#solve##kw", ::NamedTuple{(:saveat, :abstol, :reltol), Tuple{StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}}, Float64, Float64}}, ::typeof(solve), ::OptimizationProblem{false, OptimizationFunction{false, GalacticOptim.AutoZygote, OptimizationFunction{true, GalacticOptim.AutoZygote, var"#5#6", Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, GalacticOptim.var"#156#166"{GalacticOptim.var"#155#165"{OptimizationFunction{true, GalacticOptim.AutoZygote, var"#5#6", Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, Nothing}}, GalacticOptim.var"#159#169"{GalacticOptim.var"#155#165"{OptimizationFunction{true, GalacticOptim.AutoZygote, var"#5#6", Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, Nothing}}, GalacticOptim.var"#164#174", Nothing, Nothing, Nothing}, Vector{Float32}, Vector{Float32}, Nothing, Nothing, Nothing, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, ::Rodas5{0, true, DefaultLinSolve, DataType})
@ Zygote ~/.julia/packages/Zygote/0da6K/src/compiler/interface2.jl:0
[18] _pullback
@ ./In[5]:5 [inlined]
[19] _pullback(ctx::Zygote.Context, f::typeof(pred), args::Vector{Float32})
@ Zygote ~/.julia/packages/Zygote/0da6K/src/compiler/interface2.jl:0
[20] _pullback
@ ./In[5]:10 [inlined]
[21] _pullback(ctx::Zygote.Context, f::typeof(Loss), args::Vector{Float32})
@ Zygote ~/.julia/packages/Zygote/0da6K/src/compiler/interface2.jl:0
[22] _pullback
@ ./In[10]:3 [inlined]
[23] _pullback(::Zygote.Context, ::var"#5#6", ::Vector{Float32}, ::SciMLBase.NullParameters)
@ Zygote ~/.julia/packages/Zygote/0da6K/src/compiler/interface2.jl:0
[24] _apply
@ ./boot.jl:804 [inlined]
[25] adjoint
@ ~/.julia/packages/Zygote/0da6K/src/lib/lib.jl:200 [inlined]
[26] _pullback
@ ~/.julia/packages/ZygoteRules/OjfTt/src/adjoint.jl:57 [inlined]
[27] _pullback
@ ~/.julia/packages/SciMLBase/DXiE6/src/problems/basic_problems.jl:107 [inlined]
[28] _pullback(::Zygote.Context, ::OptimizationFunction{true, GalacticOptim.AutoZygote, var"#5#6", Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, ::Vector{Float32}, ::SciMLBase.NullParameters)
@ Zygote ~/.julia/packages/Zygote/0da6K/src/compiler/interface2.jl:0
[29] _apply(::Function, ::Vararg{Any, N} where N)
@ Core ./boot.jl:804
[30] adjoint
@ ~/.julia/packages/Zygote/0da6K/src/lib/lib.jl:200 [inlined]
[31] _pullback
@ ~/.julia/packages/ZygoteRules/OjfTt/src/adjoint.jl:57 [inlined]
[32] _pullback
@ ~/.julia/packages/SciMLBase/DXiE6/src/problems/basic_problems.jl:107 [inlined]
[33] _pullback(::Zygote.Context, ::OptimizationFunction{false, GalacticOptim.AutoZygote, OptimizationFunction{true, GalacticOptim.AutoZygote, var"#5#6", Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, GalacticOptim.var"#156#166"{GalacticOptim.var"#155#165"{OptimizationFunction{true, GalacticOptim.AutoZygote, var"#5#6", Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, Nothing}}, GalacticOptim.var"#159#169"{GalacticOptim.var"#155#165"{OptimizationFunction{true, GalacticOptim.AutoZygote, var"#5#6", Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, Nothing}}, GalacticOptim.var"#164#174", Nothing, Nothing, Nothing}, ::Vector{Float32}, ::SciMLBase.NullParameters)
@ Zygote ~/.julia/packages/Zygote/0da6K/src/compiler/interface2.jl:0
[34] _apply
@ ./boot.jl:804 [inlined]
[35] adjoint
@ ~/.julia/packages/Zygote/0da6K/src/lib/lib.jl:200 [inlined]
[36] _pullback
@ ~/.julia/packages/ZygoteRules/OjfTt/src/adjoint.jl:57 [inlined]
[37] _pullback
@ ~/.julia/packages/GalacticOptim/paH6q/src/solve.jl:91 [inlined]
[38] _pullback(::Zygote.Context, ::GalacticOptim.var"#16#21"{OptimizationProblem{false, OptimizationFunction{false, GalacticOptim.AutoZygote, OptimizationFunction{true, GalacticOptim.AutoZygote, var"#5#6", Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, GalacticOptim.var"#156#166"{GalacticOptim.var"#155#165"{OptimizationFunction{true, GalacticOptim.AutoZygote, var"#5#6", Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, Nothing}}, GalacticOptim.var"#159#169"{GalacticOptim.var"#155#165"{OptimizationFunction{true, GalacticOptim.AutoZygote, var"#5#6", Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, Nothing}}, GalacticOptim.var"#164#174", Nothing, Nothing, Nothing}, Vector{Float32}, SciMLBase.NullParameters, Nothing, Nothing, Nothing, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, Vector{Float32}, GalacticOptim.NullData})
@ Zygote ~/.julia/packages/Zygote/0da6K/src/compiler/interface2.jl:0
[39] pullback(f::Function, ps::Params)
@ Zygote ~/.julia/packages/Zygote/0da6K/src/compiler/interface.jl:250
[40] gradient(f::Function, args::Params)
@ Zygote ~/.julia/packages/Zygote/0da6K/src/compiler/interface.jl:58
[41] __solve(prob::OptimizationProblem{false, OptimizationFunction{false, GalacticOptim.AutoZygote, OptimizationFunction{true, GalacticOptim.AutoZygote, var"#5#6", Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, GalacticOptim.var"#156#166"{GalacticOptim.var"#155#165"{OptimizationFunction{true, GalacticOptim.AutoZygote, var"#5#6", Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, Nothing}}, GalacticOptim.var"#159#169"{GalacticOptim.var"#155#165"{OptimizationFunction{true, GalacticOptim.AutoZygote, var"#5#6", Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, Nothing}}, GalacticOptim.var"#164#174", Nothing, Nothing, Nothing}, Vector{Float32}, SciMLBase.NullParameters, Nothing, Nothing, Nothing, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, opt::ADAM, data::Base.Iterators.Cycle{Tuple{GalacticOptim.NullData}}; maxiters::Int64, cb::Function, progress::Bool, save_best::Bool, kwargs::Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
@ GalacticOptim ~/.julia/packages/GalacticOptim/paH6q/src/solve.jl:90
[42] #solve#468
@ ~/.julia/packages/SciMLBase/DXiE6/src/solve.jl:3 [inlined]
[43] top-level scope
@ In[10]:7
[44] eval
@ ./boot.jl:360 [inlined]
[45] include_string(mapexpr::typeof(REPL.softscope), mod::Module, code::String, filename::String)
@ Base ./loading.jl:1094

WARNING: both Iterators and Flux export "flatten"; uses of it in module GalacticOptim must be qualified
WARNING: both Flux and Tracker export "gradient"; uses of it in module GalacticOptim must be qualified

@dhairyagandhi96 I wasn’t able to figure this one out. Could you take a look?


Sure, sorry about the delay

I’ve read something about constrained optimization in https://diffeqflux.sciml.ai/stable/sciml_train/:

Box Constrained Optimization

function sciml_train(loss, θ, opt = DEFAULT_OPT, adtype = DEFAULT_AD,
                     data = DEFAULT_DATA, args...;
                     lower_bounds, upper_bounds,
                     cb = (args...) -> (false), maxiters = get_maxiters(data),
                     kwargs...)

I can’t quite understand how to use it.
Can this solve my problem?

Thank you in advance.

That will do box-constrained optimization. Is that what you’re looking for?
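
For example, a rough (untested) sketch based on the signature quoted above, where the bound values are placeholders and Loss/par are the objects from your code:

using Optim  #for BFGS

#Box constraints: one lower/upper bound per entry of the flat parameter vector
result = DiffEqFlux.sciml_train(Loss, par, BFGS();
                                lower_bounds = fill(-5.0f0, length(par)),
                                upper_bounds = fill(5.0f0, length(par)),
                                maxiters = 300)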

My aim is to find the optimal control action u(t) that minimizes fuel consumption through the loss function.
I want to define a set of ODEs describing the system’s dynamics, where one of these equations (the control action u(t)) is represented by the neural ODE. In this way I’d like to use the loss function to minimize the input action u(t).

I don’t know if this is the right way to proceed; if you have any advice, I’ll be happy to hear it.

What you described doesn’t need constrained optimization, and there’s a tutorial for it:

https://diffeqflux.sciml.ai/dev/examples/optimal_control/

Yes, I already saw that tutorial.
I have some problems with the stability of the problem.
I think there are some issues with the formulation of the problem.
Some values, like the mass, become negative, which is physically impossible.
I’d like to set some physical constraints in order to avoid negative mass.

Then use nonlinear constraints through NLopt?
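
In case it helps, this is roughly what a nonlinear inequality constraint looks like in NLopt.jl on its own (a toy sketch, independent of the neural ODE; for a state constraint like m(t) >= 0, the constraint function would itself have to solve the ODE and return, e.g., the negated minimum mass):

using NLopt

#Objective: NLopt passes (x, grad); grad must be filled in-place when nonempty
function myobjective(x::Vector, grad::Vector)
    if length(grad) > 0
        grad[1] = 2x[1]
        grad[2] = 2x[2]
    end
    return x[1]^2 + x[2]^2
end

#Inequality constraint in the form c(x) <= 0; here it encodes x[1] + x[2] >= 1
function myconstraint(x::Vector, grad::Vector)
    if length(grad) > 0
        grad[1] = -1.0
        grad[2] = -1.0
    end
    return 1.0 - x[1] - x[2]
end

opt = Opt(:LD_SLSQP, 2)  #gradient-based algorithm that supports nonlinear constraints
min_objective!(opt, myobjective)
inequality_constraint!(opt, myconstraint, 1e-8)
xtol_rel!(opt, 1e-6)
(minf, minx, ret) = optimize(opt, [0.0, 0.0])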

The optimal control problem of minimizing fuel consumption is a classical optimal control problem. Normally, there are hard constraints on the control input u(t), which then leads to some “bang-bang control”.

  • Is this what you want?

The classical solution of the above problem involves using Pontryagin’s Maximum Principle. It is often solved via “simultaneous collocation and optimization” [à la Lorenz Biegler and co-workers; see also John T. Betts’s book (a Boeing perspective)], which involves discretizing the ODEs via collocation/multiple shooting, etc. This leads to a time evolution of the control input, u(t).
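
In the notation of the code above, that classical fuel-minimization problem reads as follows (a sketch: T_{\max} is an illustrative bound on the thrust components, and m(t) \ge 0 is the state constraint asked about earlier in the thread):

\begin{aligned}
\min_{T_r(\cdot),\,T_t(\cdot)} \quad & \int_0^{t_f} \frac{\sqrt{T_r^2 + T_t^2}}{c}\,dt \quad \text{(equivalently, maximize } m(t_f)\text{)} \\
\text{s.t.} \quad & \dot{r} = u, \qquad \dot{\theta} = v/r, \\
& \dot{u} = v^2/r - 1/r^2 + T_r/m, \qquad \dot{v} = -uv/r + T_t/m, \\
& \dot{m} = -\sqrt{T_r^2 + T_t^2}/c, \\
& m(t) \ge 0, \qquad |T_r| \le T_{\max}, \quad |T_t| \le T_{\max}.
\end{aligned}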

An alternative approach is to pose the problem as dynamic programming. If you don’t have constraints, the resulting Hamilton-Jacobi-Bellman equation is a PDE in the optimal cost value, and the dependent variables are the states (i.e., for infinite horizon; for finite horizon, you also get partial derivatives in time). Solving this equation, and using the relationship between cost and control input, you get a feedback law u(x). Michel Fliess from University of Grenoble did some work in this direction in the late 1980s; I think he created vector PDEs for the control input, with states as independent variable. This way, you get u(x) directly.
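
For reference, the (undiscounted, infinite-horizon) HJB equation alluded to here, with value function V(x), dynamics \dot{x} = f(x,u), and running cost \ell(x,u), is

0 = \min_u \left[ \ell(x,u) + \nabla V(x)^\top f(x,u) \right], \qquad u^*(x) = \arg\min_u \left[ \ell(x,u) + \nabla V(x)^\top f(x,u) \right],

and the minimizing argument u^*(x) is exactly the feedback law mentioned in the text; a finite horizon adds a \partial V/\partial t term.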

The dynamic programming approach is more tricky than described above when you have constraints in inputs and states.

With infinite horizon in the optimal control, resulting in a feedback law u(x) as with the dynamic programming approach, this is essentially equal to what has become known as explicit MPC (i.e., with infinite horizon).

With implicit MPC (= repetitive optimal control with a receding/sliding or shrinking horizon), one normally parameterizes u(t) and solves static optimization problem for each time step. The solution depends on the current state vector as well as future inputs (disturbances, reference values). Feedback is achieved by continuously updating the current state vector.
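
Schematically, that receding-horizon loop looks like the following sketch, where solve_ocp and plant_step are hypothetical stand-ins for the per-step horizon optimization and the real system:

#hypothetical stand-ins: solve_ocp(x, horizon) returns the optimized input
#trajectory over the horizon; plant_step(x, u) advances the real system
x = x0
for k in 1:n_steps
    u_traj = solve_ocp(x, horizon)  #static optimization over the current horizon
    u_now = u_traj[1]               #apply only the first input of the plan
    x = plant_step(x, u_now)        #feedback: re-measure/update the current state
end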

Implicit MPC is typically time consuming, although Biegler et al. managed to solve simplified MPC (optimal control) problems with 200 inputs within some 10 ms or so (ca. 2002).

It is also possible to solve the implicit MPC problem off-line for “all possible” combinations of initial state and future inputs, and then use, e.g., a neural network to train the function u(x,w,r) where x is the current state, w is the disturbance, and r is the reference value. Such a mapping, u(x,w,r), is known as explicit MPC (similar to above-mentioned dynamic programming). Such a mapping is extremely costly to develop, and is normally only done for very low order models. The advantage is that, once available, the optimal control input can be computed in the order of nanoseconds, which makes it feasible with real time control for very fast systems.

  • It is very interesting to relate the use of Neural ODEs in optimal control problems to the above-mentioned classical ideas.
  • Is posing u(t) as a neural ODE essentially some sort of parameterization of the control input, i.e., an alternative to suggesting u(t) = \sum_{i=1}^{n} u_i \mu_{(i-1)\Delta t}(t) with \mu_\tau being the Heaviside function shifted to \tau? (See the sketch below.)
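
To make that question concrete, the two parameterizations can be sketched as closures over a parameter vector p (illustrative only; NN is the FastChain from the posts above, and Δt is the step width):

Δt = 0.1

#Piecewise-constant input: p holds the n step values u_i
u_piecewise(p, t) = p[clamp(floor(Int, t/Δt) + 1, 1, length(p))]

#Neural parameterization: p holds the network weights; the input here is the state y
u_neural(p, y) = NN(y, p)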

At some stage, it is also of interest to consider simultaneous optimal control/MPC and systems identification. A key idea within “dual control” [Fel’dbaum, 1960s; see also Y. Bar-Shalom] is that too good control leads to poor information content while training the model, hence a poor model, which again leads to poor control. On the other hand, too much excitation leads to good model training, but ruins the control. So “dual control” attempts to balance the excitation of the system while finding good control input. This is related to Reinforcement Learning. I think E. Ydstie at Carnegie Mellon looked at this ca. 2015.


Is it possible to bring together NLopt and a Neural ODE?
I’d like to do something like what you suggested in https://diffeqflux.sciml.ai/dev/examples/optimal_control/, but with some constraints.
I’m not very skilled in Julia programming; is there a tutorial that explains how to bring these libraries together?

(Shameless plug :smile:) I worked on this for a poster at JuliaCon 2021; it would be great if anyone interested joined the gathertown session to talk about the details! I posed the problem from the control perspective and using that terminology, so it might be helpful if you come from that side of things.
[Figure: result of the constrained bioreactor optimization]
