Here is what I meant by “passing” the input to the model… I use a simple model of a body following Newton’s law (with friction) as an example:
```julia
# Packages
using ModelingToolkit
using ModelingToolkit: t_nounits as t, D_nounits as Dt
using DataInterpolations
using DifferentialEquations: solve
using Plots

# Some constants
LW1 = 2.5
LC1 = :red
LC2 = :blue
LC3 = :green
LS1 = :solid
LS2 = :dashdot

# Registering the control input function
@register_symbolic u(t)

# Model instantiator
@mtkmodel Body begin
    # Model parameters
    @parameters begin
        k = 0.01, [description = "Linear friction coefficient, 1/s"]
    end
    # Model variables, with initial values needed
    @variables begin
        x(t) = 1, [description = "Position, m"]
        v(t) = 0, [description = "Velocity, m/s"]
        # u(t), scaled force, defined in the global name space
    end
    # Providing model equations
    @equations begin
        Dt(x) ~ v
        Dt(v) ~ u(t) - k*v
    end
end

# Instantiating a model named "body"
@mtkbuild body = Body()
```
With the last statement, Julia should respond with a typeset display of the model equations.
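Up to ModelingToolkit’s exact formatting, the displayed system should be equivalent to

$$\frac{\mathrm{d}x(t)}{\mathrm{d}t} = v(t), \qquad \frac{\mathrm{d}v(t)}{\mathrm{d}t} = u(t) - k\,v(t)$$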
We continue by defining the time points for the control changes (N control values) and setting the control values to zero initially:
```julia
t_final = 10
N = 3
# Linear spread in time points
T = collect(range(0, t_final, length=N+1))
# Initial input
U = zeros(N+1)
U[end] = U[end-1] # because of zero order hold
```
We can now create a numeric problem from the above symbolic model:
```julia
tspan = (T[1], T[end])
#
prob = ODEProblem(body, [], tspan)
```
Next, we create an interpolation function `u_c` based on `U` and `T`, and set the control input `u` equal to the interpolation function.
```julia
u_c = ConstantInterpolation(U, T)
u(t) = u_c(t)
```
Finally, let us solve the numeric problem + plot the states and the input function:
```julia
sol = solve(prob; tstops = T)
#
fg_state = plot(sol, lw=LW1, lc=[LC1 LC2], ls=LS1)
fg_input = plot(T, U; st=:steppost, lw=LW1, lc=LC3)
```
So far, so good.
In an optimization sequence, we need to change the control inputs. This essentially implies changing the elements of the vector `U`. Just to illustrate how to do it:
```julia
U = 0.1*randn(N+1)
U[end] = U[end-1]
u_c = ConstantInterpolation(U, T)
```
Because we have changed `u_c`, the registered function `u` automatically picks up the new interpolation, and so does the numeric problem. It is therefore not necessary to recreate the numeric problem; we simply solve it again:
```julia
sol = solve(prob; tstops = T)
#
plot!(fg_state, sol, lw=LW1, lc=[LC1 LC2], ls=LS2)
```
which adds the new trajectories (dash-dot lines) to the state plot, while

```julia
plot!(fg_input, T, U; st=:steppost, lw=LW1, lc=LC3, ls=LS2)
```

adds the new input sequence to the input plot.
Some final comments
- Here, `U[1:N]` are the control variables, i.e., the unknowns in the optimization problem.
- For the optimization problem, you need to construct a cost function. The cost function takes `U[1:N]` as input.
- Within the cost function, you take `U[1:N]` and create the interpolation function `u_c` (the name used in my code above; choose another name if you want to).
- Then you solve the model over the interval `tspan`.
- Then you construct your choice of cost, typically based on the solution of the model and the value of the control input (the “scaled force”). As an example, the cost could be the sum of squared deviations between the position and some reference value plus the sum of squared force used.
- With input `U[1:N]`, the cost function needs to return the cost.
- You then choose some optimization code, select an initial guess of `U` (e.g., a zero vector), and pass the initial guess and the cost function to the optimization solver (a minimal sketch is given after this list).
- With the above procedure, you can easily include constraints on the inputs `u`. It is not so straightforward to impose constraints on the states (e.g., the position).
- In my description above, it will probably not work to use an optimizer that is based on the gradient of the cost function. But it should work fine to use an optimization solver that does not use the gradient, e.g., BlackBoxOptim.
- If you want to use a gradient-based solver, you may need to add more information via the `@register_symbolic` macro. I don’t know how to do that.
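To make those points concrete, here is one possible way to wire the pieces together, assuming all of the code above has already been run. The reference position `x_ref`, the weight `w_u`, and the optimizer settings are placeholders of my own choosing, and I use BlackBoxOptim’s `bboptimize`/`best_candidate` interface for the derivative-free search; treat this as a sketch rather than a finished implementation.

```julia
using BlackBoxOptim

x_ref = 2.0   # hypothetical position reference
w_u   = 0.1   # hypothetical weight on the control effort

# Cost as a function of the N free control values (the role of U[1:N])
function cost(Uopt)
    global U, u_c                      # rebind the globals that u(t) looks up
    U   = vcat(Uopt, Uopt[end])        # repeat last value because of the zero order hold
    u_c = ConstantInterpolation(U, T)
    sol = solve(prob; tstops = T)
    # Sum of squared position deviations (sampled at the solver steps)
    # plus a penalty on the squared control values
    return sum(abs2, sol[body.x] .- x_ref) + w_u*sum(abs2, Uopt)
end

# Derivative-free search; SearchRange doubles as a box constraint on the inputs
res   = bboptimize(cost; SearchRange = (-1.0, 1.0), NumDimensions = N,
                   MaxFuncEvals = 2000)
U_opt = best_candidate(res)
```

Here the `SearchRange` is what gives the input constraints mentioned above; the reference, weights, and evaluation budget you would of course adapt to your own problem.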