Hello, I was wondering what the best way would be to introduce a constant tunable parameter into a model used in a dynamic optimisation scenario. I have a couple of working examples building on top of the basic free final time problem presented in the documentation.
- First approach: define an extra state `k(t)` with zero derivative, along with a periodic constraint `k(0) ~ k(tf)`. No initial condition is provided, so the solver has freedom to play with it. This introduces many decision variables that are linked together through the boundary condition + derivative constraint; they are ultimately trivial to solve, albeit not efficient.
Working Example below
```julia
import InfiniteOpt
using Ipopt
using ModelingToolkit
const MTK = ModelingToolkit

t = MTK.t_nounits
D = MTK.D_nounits

MTK.@variables begin
    x(..) = 1.0
    v(..) = 0.0
    u(..), [bounds = (-1.0, 1.0), input = true]
    k(..) = 0.5, [bounds = (0.0, 10.0)]
end
MTK.@parameters begin
    tf
end

constr = [v(tf) ~ 0, x(tf) ~ 0, k(tf) ~ k(0)]
cost = [tf] # Minimize time

# <= new addition from the tutorial in https://docs.sciml.ai/ModelingToolkit/dev/tutorials/dynamic_optimization/#Free-final-time-problems
input_map(k) = 1 * exp(-(k - 2)^2)
# + 2 * exp(-(k - 6)^2)

MTK.@named block = MTK.System(
    [D(x(t)) ~ v(t), D(v(t)) ~ input_map(k(t)) * u(t), D(k(t)) ~ 0], t;
    costs=cost, constraints=constr)
block = MTK.mtkcompile(block; inputs=[u(t)])

u0map = [x(t) => 1.0, v(t) => 0.0]
tspan = (0.0, tf)
parammap = [u(t) => 0.0, tf => 1.0]
jprob = InfiniteOptDynamicOptProblem(block, [u0map; parammap], tspan; steps=100)
```
Problem transcription:
```
julia> jprob.wrapped_model.P
Union{}[]
julia> jprob.wrapped_model.U
3-element Vector{InfiniteOpt.GeneralVariableRef}:
U[1](t)
U[2](t)
U[3](t)
```
- Second approach: introduce a scalar freedom by tagging the parameter as `tunable`. The construction of the model to achieve this is quite unintuitive, as you need to tag the final time parameter `tf` and even the optimal control inputs as `tunable = false` so that the function `process_DynamicOptProblem` in `optimal_control_interface.jl` does not error out. Ultimately I managed to create a model that contains a single `P` freedom and one fewer collocated state variable in `U`.
Working Example below
```julia
MTK.@variables begin
    x(..) = 1.0
    v(..) = 0.0
    u(..), [bounds = (-1.0, 1.0), input = true, tunable = false]
    # k(..) = 0.5, [bounds = (0.0, 10.0)]
end
MTK.@parameters begin
    tf, [tunable = false]
    k = 0.5, [bounds = (0.0, 10.0), tunable = true]
end

constr = [v(tf) ~ 0, x(tf) ~ 0]
cost = [tf] # Minimize time
input_map(k) = 0.01 * exp(-(k - 2)^2 / (2 * 30^2))

MTK.@named block = MTK.System(
    [D(x(t)) ~ v(t), D(v(t)) ~ input_map(k) * u(t)], t;
    costs=cost, constraints=constr)
block = MTK.mtkcompile(block; inputs=[u(t)])

u0map = [x(t) => 1.0, v(t) => 0.0]
tspan = (0.0, tf)
parammap = [u(t) => 0.0, tf => 1.0]
guesses = []
jprob = InfiniteOptDynamicOptProblem(block, [u0map; parammap], tspan;
    steps=100, guesses=guesses, tune_parameters=true)
```
Problem transcription:
```
julia> jprob.wrapped_model.P
1-element Vector{InfiniteOpt.GeneralVariableRef}:
P[1]
julia> jprob.wrapped_model.U
2-element Vector{InfiniteOpt.GeneralVariableRef}:
U[1](t)
U[2](t)
```
Observations
While approach 1 works and solves correctly, finding the peak defined in `input_map(k)` (in this specific case the optimal `k = 2`), approach 2 seems to get Ipopt stuck until it hits the maximum number of iterations:
```
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
3000  3.2776964e+03 1.79e-05 7.03e-02  -5.7 1.48e+01  -20.0 1.00e+00 3.12e-02h  6
```
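The only variation of approach 2 I can think of is seeding the tunable parameter with an explicit starting value through the `guesses` keyword that the constructor already accepts. This is only a sketch I have not verified; I do not know whether `guesses` applies to tunable parameters, and the value `2.0` is simply the optimum found by approach 1:

```julia
# Unverified sketch: assumes `guesses` also seeds tunable parameters.
# `k => 2.0` reuses the optimum found by approach 1 purely as a sanity check.
guesses = [k => 2.0]
jprob_guess = InfiniteOptDynamicOptProblem(block, [u0map; parammap], tspan;
    steps=100, guesses=guesses, tune_parameters=true)
```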
The solve command I used was identical for both approaches:
```julia
isol = solve(jprob, InfiniteOptCollocation(Ipopt.Optimizer); verbose = true)
```
Do you know why that would be the case? The problem should be simpler all around, since it creates fewer decision variables and what I am asking to be optimized is fairly trivial. It seems that something goes wrong in the transcription to the NLProblem, but I am not sure exactly what.
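For reference, one way to inspect the transcription directly would be to print the underlying InfiniteOpt model (a minimal sketch; I am assuming `wrapped_model` exposes it through a `model` field, which I have not confirmed, since only `P` and `U` are shown above):

```julia
# Unverified sketch: assumes `jprob.wrapped_model.model` is the underlying
# InfiniteOpt.InfiniteModel (only the `P` and `U` fields are shown above).
im = jprob.wrapped_model.model
println(InfiniteOpt.objective_function(im))  # objective as transcribed
print(im)                                    # full transcribed formulation
```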
These have been the approaches I found using the available documentation, but please do correct me if there is a more straightforward way to add tunable constants in dynamic optimisation problems.