Optimizing a function in an optimal control problem

I’m trying to solve the following optimal control problem:

minimize -\int_0^T v(t) dt, subject to
\frac{dv}{dt} = -\frac{v}{\tau} + f, v(0)=0
\frac{de}{dt} = \sigma - fv, e(0)=e_0, e(t) \geq 0
0\leq f(t) \leq f_{max} is the control variable I’m trying to solve for. \tau, \sigma, e_0, T and f_{max} are all fixed constants.

I begin with the simplest case by choosing parameters (e.g. small T and large e_0) so that the pure state constraint e(t) \geq 0 is never active. In this case I know the optimal solution is simply f(t)=f_{max}.

I first try discretizing f(t) into a vector and embedding it inside the struct that defines the parameters passed to the ODE function:

using OrdinaryDiffEq, DataInterpolations, Optimization, OptimizationMOI, Ipopt, Parameters, Setfield, ForwardDiff

@with_kw struct RunParams{T1, T2}
    τ::Float64 = 1.0
    σ::Float64 = 49.0
    E0::Float64 = 1000.0
    fmax::Float64 = 10.0

    t_interp::T2 = range(0, 20.0, length=201)
    f_interp::T1 = fmax*ones(length(t_interp))
end

τ = 1.0
σ = 49.0
E0 = 1000.0
fmax = 10.0
T = 20.0
t_interp = range(0, T, length=201)

params1 = RunParams(τ=τ, σ=σ, E0=E0, fmax=fmax, t_interp=t_interp)

function f_ode!(dy_dt, y, p, t)
    τ, σ, f_interp, t_interp = p.τ, p.σ, p.f_interp, p.t_interp
    f = LinearInterpolation(f_interp, t_interp)(t)   # control value at time t
    v, E = y

    dy_dt[1] = -v/τ + f
    dy_dt[2] = σ - f*v
end

ode_prob = ODEProblem(f_ode!, [0.0, E0], (0.0, T), params1)

function cost_fun(f, params)

    params = @set params.f_interp = f

    newprob = remake(ode_prob, p=params)
    sol = solve(newprob)

    v = map(s -> s[1], sol.u)   # v(t) at the solver's time points
    D = sum(0.5*(v[1:end-1] + v[2:end]) .* (sol.t[2:end] - sol.t[1:end-1]))   # trapezoidal rule for ∫ v dt
    return -D
end

cons(res, u, p) = (res .= u)   # identity constraints so Ipopt can impose 0 ≤ f_k ≤ fmax

f_guess = fmax*ones(length(t_interp))   # initial guess for the control (not shown in the original post)

optprob = OptimizationFunction(cost_fun, Optimization.AutoForwardDiff(), cons=cons)
prob = OptimizationProblem(optprob, f_guess, params1, lcons = zeros(length(f_guess)), ucons = fmax*ones(length(f_guess)))
@time sol = solve(prob, Ipopt.Optimizer(), max_iter=10)

The above code seems to run, but I suspect there are many improvements I can make to make it more efficient.

My first question is about the best way to pass the discretized f(t) to the ODE function. Right now I use a struct to hold f_interp, replace its value in every evaluation of the cost function, and use interpolation to recover f(t). Is there a better way to set this up in Julia?

The second question: if I want to extend the problem to handle cases where the pure state inequality constraint e(t) \geq 0 may also be active, how do I set that up and pass it to cons?


Linear interpolation of f(t) is really going to hurt the performance of the ODE solver because it makes the integrand non-smooth. I would parameterize f(t) in a smooth way if possible.

For example, you could parameterize f(t) = \left[\tanh(p(2t/T - 1)) + 1\right] f_\max / 2 where p(x) is a polynomial in a Chebyshev basis on x\in [-1,1] (which allows you to go to very high degree without problems), with unconstrained coefficients. The tanh function ensures that f(t) \in (0, f_\max).
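
For concreteness, here is a minimal sketch of that parameterization (my own illustration, not code from this thread); c is a vector of Chebyshev coefficients and T_k(x) = \cos(k \arccos x) on [-1, 1]:

# Illustrative sketch of the tanh/Chebyshev parameterization (names are my own)
function f_smooth(c, t, T, fmax)
    x = clamp(2t/T - 1, -1.0, 1.0)                                 # map t ∈ [0, T] to x ∈ [-1, 1]
    p = sum(c[k] * cos((k - 1) * acos(x)) for k in eachindex(c))   # Chebyshev series p(x)
    return (tanh(p) + 1) * fmax / 2                                # f(t) ∈ (0, fmax) automatically
end

The optimization variables would then be the coefficients c (typically far fewer than 201), and the box constraints on f are no longer needed.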

See also ODE non autonomous interpolation needed - #5 by ChrisRackauckas

Forward-mode AD is also going to hurt you here because its cost scales with the number of parameters, so computing your derivatives here will be ≈ 200x the cost of solving the ODE once. You really want to use reverse-mode (either via AD or via manual adjoint equations — see our course notes, chapter 9 — or with the help of SciMLSensitivity.jl) in this kind of problem if possible.
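
For reference, a rough (untested) sketch of the SciMLSensitivity route, keeping the rest of the cost function the same: load SciMLSensitivity and pick an adjoint sensitivity algorithm for the inner solve, then use a reverse-mode backend when building the OptimizationFunction.

# Hedged sketch: adjoint sensitivities for the ODE solve inside the loss, so the
# gradient cost does not scale with the number of control parameters.
using SciMLSensitivity

sol = solve(newprob, Tsit5();
            sensealg = InterpolatingAdjoint(autojacvec = ReverseDiffVJP(true)))

combined with something like Optimization.AutoZygote() at the Optimization.jl level.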


Check out InfiniteOpt.jl: Home · InfiniteOpt.jl.

using InfiniteOpt
import Ipopt
import Plots
τ = 1.0
σ = 49.0
E0 = 100.0   # Note I made this smaller
fmax = 10.0
T = 20.0
model = InfiniteModel(Ipopt.Optimizer)
@infinite_parameter(model, t in [0, T], num_supports = 1_000)
@variables(model, begin
    e >= 0, Infinite(t)
    v, Infinite(t)
    0 <= f <= fmax, Infinite(t)
end)
@objective(model, Max, integral(v, t))
@constraints(model, begin
    deriv(v, t) == -v / τ + f
    v(0) == 0
    deriv(e, t) == σ - f * v
    e(0) == E0
end)
optimize!(model)
@assert termination_status(model) == LOCALLY_SOLVED
ts, e_opt, v_opt, f_opt = supports(t), value(e), value(v), value(f)
Plots.plot(
    Plots.plot(ts, e_opt, ylabel = "e(t)"),
    Plots.plot(ts, v_opt, ylabel = "v(t)"),
    Plots.plot(ts, f_opt, ylabel = "f(t)", xlabel = "t"),
    layout = (3, 1),
    legend = false,
)

cc @pulsipher

  1. Does using a cubic spline instead of linear interpolation solve the problem? It’s much smoother, though it still does a lot of interpolation, which is what I was originally worried about in terms of performance.
    Also, later when I generalize the problem to where f_{max} is no longer a constant but depends on t (or maybe even on some integral of v(t)), I will still need to define a function representing the constraints. I’m just not 100% sure the way I write the constraints is correct. cons(res, u, p) = (res .= u) is from the documentation tutorial, and I’m not sure how to extend it to include time in the inequality constraints.

  2. I tried using Optimization.AutoReverseDiff() as the auto-diff option in OptimizationFunction, and it gives a StackOverflowError which I have no idea about (see below for the complete error message). I have solved some PDE-constrained optimal control problems before using the adjoint method, where I derived the adjoint equation myself and had the cost function output the gradient (PS: your adjoint-method lecture notes were very helpful). For the problem considered here, which looks a lot simpler than my previous problem (but with the addition of inequality constraints), I presume that if I set everything up correctly, I can just declare the use of reverse-mode AD and have Optimization.jl do all the work for me?

StackOverflowError:

Stacktrace:
     [1] anyeltypedual(::Type{T}, ::Type{Val{counter}}) where {T<:Union{Set, AbstractArray}, counter}
       @ DiffEqBase C:\Users\dchan\.julia\packages\DiffEqBase\R2Vjs\src\forwarddiff.jl:258
--- the last 1 lines are repeated 1 more time ---
     [3] (::Base.MappingRF{typeof(DiffEqBase.anyeltypedual), Base.BottomRF{typeof(DiffEqBase.promote_dual)}})(acc::Type, x::Type)
       @ Base .\reduce.jl:100
     [4] _foldl_impl(op::Base.MappingRF{typeof(DiffEqBase.anyeltypedual), Base.BottomRF{typeof(DiffEqBase.promote_dual)}}, init::Type, itr::Core.SimpleVector)
       @ Base .\reduce.jl:62
     [5] foldl_impl
       @ .\reduce.jl:48 [inlined]
     [6] mapfoldl_impl
       @ .\reduce.jl:44 [inlined]
     [7] mapfoldl
       @ .\reduce.jl:175 [inlined]
     [8] mapreduce
       @ .\reduce.jl:307 [inlined]
     [9] __anyeltypedual(::Type{ReverseDiff.TrackedReal{Float64, Float64, ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}}})
       @ DiffEqBase C:\Users\dchan\.julia\packages\DiffEqBase\R2Vjs\src\forwarddiff.jl:243
    [10] anyeltypedual(::Type{ReverseDiff.TrackedReal{Float64, Float64, ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}}}, ::Type{Val{0}})
       @ DiffEqBase C:\Users\dchan\.julia\packages\DiffEqBase\R2Vjs\src\forwarddiff.jl:249
--- the last 1 lines are repeated 1 more time ---
--- the last 11 lines are repeated 5006 more times ---
 [55078] anyeltypedual(::Type{ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}}, ::Type{Val{1}})
       @ DiffEqBase C:\Users\dchan\.julia\packages\DiffEqBase\R2Vjs\src\forwarddiff.jl:258
 [55079] (::DiffEqBase.var"#80#81"{Int64})(x::Type)
       @ DiffEqBase C:\Users\dchan\.julia\packages\DiffEqBase\R2Vjs\src\forwarddiff.jl:102
 [55080] MappingRF
       @ .\reduce.jl:100 [inlined]
 [55081] _foldl_impl(op::Base.MappingRF{DiffEqBase.var"#80#81"{Int64}, Base.BottomRF{typeof(DiffEqBase.promote_dual)}}, init::Type, itr::Core.SimpleVector)
       @ Base .\reduce.jl:58
 [55082] foldl_impl
       @ .\reduce.jl:48 [inlined]
 [55083] mapfoldl_impl
       @ .\reduce.jl:44 [inlined]
 [55084] mapfoldl
       @ .\reduce.jl:175 [inlined]
 [55085] mapreduce
       @ .\reduce.jl:307 [inlined]
 [55086] diffeqmapreduce(f::DiffEqBase.var"#80#81"{Int64}, op::typeof(DiffEqBase.promote_dual), x::Core.SimpleVector)
       @ DiffEqBase C:\Users\dchan\.julia\packages\DiffEqBase\R2Vjs\src\forwarddiff.jl:55
 [55087] #s90#79
       @ C:\Users\dchan\.julia\packages\DiffEqBase\R2Vjs\src\forwarddiff.jl:102 [inlined]
 [55088] var"#s90#79"(counter::Any, ::Any, x::Any, ::Any)
       @ DiffEqBase .\none:0
 [55089] (::Core.GeneratedFunctionStub)(::UInt64, ::LineNumberNode, ::Any, ::Vararg{Any})
       @ Core .\boot.jl:602
 [55090] anyeltypedual
       @ C:\Users\dchan\.julia\packages\DiffEqBase\R2Vjs\src\forwarddiff.jl:95 [inlined]
 [55091] promote_u0
       @ C:\Users\dchan\.julia\packages\DiffEqBase\R2Vjs\src\forwarddiff.jl:387 [inlined]
 [55092] get_concrete_problem(prob::ODEProblem{Vector{Float64}, Tuple{Float64, Float64}, true, RunParams{ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, ODEFunction{true, SciMLBase.AutoSpecialize, typeof(f_ode!), LinearAlgebra.UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing, Nothing, Nothing}, @Kwargs{}, SciMLBase.StandardODEProblem}, isadapt::Bool; kwargs::@Kwargs{u0::Vector{Float64}, p::RunParams{ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}})
       @ DiffEqBase C:\Users\dchan\.julia\packages\DiffEqBase\R2Vjs\src\solve.jl:1213
 [55093] get_concrete_problem
       @ C:\Users\dchan\.julia\packages\DiffEqBase\R2Vjs\src\solve.jl:1209 [inlined]
 [55094] solve_up(::ODEProblem{Vector{Float64}, Tuple{Float64, Float64}, true, RunParams{ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, ODEFunction{true, SciMLBase.AutoSpecialize, typeof(f_ode!), LinearAlgebra.UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing, Nothing, Nothing}, @Kwargs{}, SciMLBase.StandardODEProblem}, ::Nothing, ::Vector{Float64}, ::RunParams{ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}; kwargs::@Kwargs{})
       @ DiffEqBase C:\Users\dchan\.julia\packages\DiffEqBase\R2Vjs\src\solve.jl:1105
 [55095] solve_up
       @ C:\Users\dchan\.julia\packages\DiffEqBase\R2Vjs\src\solve.jl:1101 [inlined]
 [55096] solve(::ODEProblem{Vector{Float64}, Tuple{Float64, Float64}, true, RunParams{ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, ODEFunction{true, SciMLBase.AutoSpecialize, typeof(f_ode!), LinearAlgebra.UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing, Nothing, Nothing}, @Kwargs{}, SciMLBase.StandardODEProblem}; sensealg::Nothing, u0::Nothing, p::Nothing, wrap::Val{true}, kwargs::@Kwargs{})
       @ DiffEqBase C:\Users\dchan\.julia\packages\DiffEqBase\R2Vjs\src\solve.jl:1038
 [55097] solve(::ODEProblem{Vector{Float64}, Tuple{Float64, Float64}, true, RunParams{ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, ODEFunction{true, SciMLBase.AutoSpecialize, typeof(f_ode!), LinearAlgebra.UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing, Nothing, Nothing}, @Kwargs{}, SciMLBase.StandardODEProblem})
       @ DiffEqBase C:\Users\dchan\.julia\packages\DiffEqBase\R2Vjs\src\solve.jl:1028
 [55098] cost_fun(f::ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, params::RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}})
       @ Main .\In[17]:17
 [55099] FixTail
       @ C:\Users\dchan\.julia\packages\DifferentiationInterface\UWUBZ\src\utils\context.jl:7 [inlined]
 [55100] ReverseDiff.GradientTape(f::DifferentiationInterface.FixTail{typeof(cost_fun), Tuple{RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}}}, input::Vector{Float64}, cfg::ReverseDiff.GradientConfig{ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}})
       @ ReverseDiff C:\Users\dchan\.julia\packages\ReverseDiff\p1MzG\src\api\tape.jl:199
 [55101] gradient!(result::Vector{Float64}, f::DifferentiationInterface.FixTail{typeof(cost_fun), Tuple{RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}}}, input::Vector{Float64}, cfg::ReverseDiff.GradientConfig{ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}})
       @ ReverseDiff C:\Users\dchan\.julia\packages\ReverseDiff\p1MzG\src\api\gradients.jl:41
 [55102] gradient!(f::Function, grad::Vector{Float64}, prep::DifferentiationInterfaceReverseDiffExt.ReverseDiffGradientPrep{ReverseDiff.GradientConfig{ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}}, Nothing}, ::AutoReverseDiff{false}, x::Vector{Float64}, contexts::DifferentiationInterface.Constant{RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}})
       @ DifferentiationInterfaceReverseDiffExt C:\Users\dchan\.julia\packages\DifferentiationInterface\UWUBZ\ext\DifferentiationInterfaceReverseDiffExt\onearg.jl:175
 [55103] (::OptimizationBase.var"#grad#16"{RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), Nothing, Nothing, Nothing, Nothing, Nothing, typeof(cons), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, AutoReverseDiff{false}})(res::Vector{Float64}, θ::Vector{Float64})
       @ OptimizationBase C:\Users\dchan\.julia\packages\OptimizationBase\gvXsf\src\OptimizationDIExt.jl:28
 [55104] (::OptimizationOptimJL.var"#26#33"{OptimizationCache{OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), OptimizationBase.var"#grad#16"{RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), Nothing, Nothing, Nothing, Nothing, Nothing, typeof(cons), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, AutoReverseDiff{false}}, Nothing, OptimizationBase.var"#hess#20"{RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), Nothing, Nothing, Nothing, Nothing, Nothing, typeof(cons), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, DifferentiationInterface.SecondOrder{AutoReverseDiff{false}, AutoReverseDiff{false}}}, Nothing, OptimizationBase.var"#hv!#24"{RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, DifferentiationInterface.ReverseOverReverseHVPPrep{DifferentiationInterface.NoPullbackPrep}, OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), Nothing, Nothing, Nothing, Nothing, Nothing, typeof(cons), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, DifferentiationInterface.SecondOrder{AutoReverseDiff{false}, AutoReverseDiff{false}}}, OptimizationBase.var"#9#26"{OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), Nothing, Nothing, Nothing, Nothing, Nothing, typeof(cons), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}}, OptimizationBase.var"#cons_j!#29"{AutoReverseDiff{false}, DifferentiationInterfaceReverseDiffExt.ReverseDiffOneArgJacobianPrep{ReverseDiff.JacobianConfig{ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, Nothing}, Nothing}}, Nothing, Nothing, OptimizationBase.var"#cons_h!#36"{Int64, Vector{DifferentiationInterface.HVPGradientHessianPrep{DifferentiationInterface.BatchSizeSettings{1, false, true}, Vector{Tuple{Vector{Float64}}}, Vector{Tuple{Vector{Float64}}}, DifferentiationInterface.ReverseOverReverseHVPPrep{DifferentiationInterface.NoPullbackPrep}, DifferentiationInterfaceReverseDiffExt.ReverseDiffGradientPrep{ReverseDiff.GradientConfig{ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}}, Nothing}}}, DifferentiationInterface.SecondOrder{AutoReverseDiff{false}, AutoReverseDiff{false}}}, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, OptimizationBase.ReInitCache{Vector{Float64}, RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}}, Nothing, Nothing, 
Vector{Float64}, Vector{Float64}, Nothing, IPNewton{typeof(Optim.backtrack_constrained_grad), Symbol}, Bool, OptimizationOptimJL.var"#4#6", Nothing}, OptimizationOptimJL.var"#25#32"{OptimizationCache{OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), OptimizationBase.var"#grad#16"{RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), Nothing, Nothing, Nothing, Nothing, Nothing, typeof(cons), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, AutoReverseDiff{false}}, Nothing, OptimizationBase.var"#hess#20"{RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), Nothing, Nothing, Nothing, Nothing, Nothing, typeof(cons), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, DifferentiationInterface.SecondOrder{AutoReverseDiff{false}, AutoReverseDiff{false}}}, Nothing, OptimizationBase.var"#hv!#24"{RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, DifferentiationInterface.ReverseOverReverseHVPPrep{DifferentiationInterface.NoPullbackPrep}, OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), Nothing, Nothing, Nothing, Nothing, Nothing, typeof(cons), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, DifferentiationInterface.SecondOrder{AutoReverseDiff{false}, AutoReverseDiff{false}}}, OptimizationBase.var"#9#26"{OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), Nothing, Nothing, Nothing, Nothing, Nothing, typeof(cons), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}}, OptimizationBase.var"#cons_j!#29"{AutoReverseDiff{false}, DifferentiationInterfaceReverseDiffExt.ReverseDiffOneArgJacobianPrep{ReverseDiff.JacobianConfig{ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, Nothing}, Nothing}}, Nothing, Nothing, OptimizationBase.var"#cons_h!#36"{Int64, Vector{DifferentiationInterface.HVPGradientHessianPrep{DifferentiationInterface.BatchSizeSettings{1, false, true}, Vector{Tuple{Vector{Float64}}}, Vector{Tuple{Vector{Float64}}}, DifferentiationInterface.ReverseOverReverseHVPPrep{DifferentiationInterface.NoPullbackPrep}, DifferentiationInterfaceReverseDiffExt.ReverseDiffGradientPrep{ReverseDiff.GradientConfig{ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}}, Nothing}}}, DifferentiationInterface.SecondOrder{AutoReverseDiff{false}, AutoReverseDiff{false}}}, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, OptimizationBase.ReInitCache{Vector{Float64}, 
RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}}, Nothing, Nothing, Vector{Float64}, Vector{Float64}, Nothing, IPNewton{typeof(Optim.backtrack_constrained_grad), Symbol}, Bool, OptimizationOptimJL.var"#4#6", Nothing}}})(G::Vector{Float64}, θ::Vector{Float64})
       @ OptimizationOptimJL C:\Users\dchan\.julia\packages\OptimizationOptimJL\e3bUa\src\OptimizationOptimJL.jl:371
 [55105] value_gradient!!(obj::TwiceDifferentiable{Float64, Vector{Float64}, Matrix{Float64}, Vector{Float64}}, x::Vector{Float64})
       @ NLSolversBase C:\Users\dchan\.julia\packages\NLSolversBase\kavn7\src\interface.jl:82
 [55106] value_gradient!(obj::TwiceDifferentiable{Float64, Vector{Float64}, Matrix{Float64}, Vector{Float64}}, x::Vector{Float64})
       @ NLSolversBase C:\Users\dchan\.julia\packages\NLSolversBase\kavn7\src\interface.jl:69
 [55107] initial_state(method::IPNewton{typeof(Optim.backtrack_constrained_grad), Symbol}, options::Optim.Options{Float64, OptimizationOptimJL.var"#_cb#31"{OptimizationCache{OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), OptimizationBase.var"#grad#16"{RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), Nothing, Nothing, Nothing, Nothing, Nothing, typeof(cons), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, AutoReverseDiff{false}}, Nothing, OptimizationBase.var"#hess#20"{RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), Nothing, Nothing, Nothing, Nothing, Nothing, typeof(cons), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, DifferentiationInterface.SecondOrder{AutoReverseDiff{false}, AutoReverseDiff{false}}}, Nothing, OptimizationBase.var"#hv!#24"{RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, DifferentiationInterface.ReverseOverReverseHVPPrep{DifferentiationInterface.NoPullbackPrep}, OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), Nothing, Nothing, Nothing, Nothing, Nothing, typeof(cons), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, DifferentiationInterface.SecondOrder{AutoReverseDiff{false}, AutoReverseDiff{false}}}, OptimizationBase.var"#9#26"{OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), Nothing, Nothing, Nothing, Nothing, Nothing, typeof(cons), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}}, OptimizationBase.var"#cons_j!#29"{AutoReverseDiff{false}, DifferentiationInterfaceReverseDiffExt.ReverseDiffOneArgJacobianPrep{ReverseDiff.JacobianConfig{ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, Nothing}, Nothing}}, Nothing, Nothing, OptimizationBase.var"#cons_h!#36"{Int64, Vector{DifferentiationInterface.HVPGradientHessianPrep{DifferentiationInterface.BatchSizeSettings{1, false, true}, Vector{Tuple{Vector{Float64}}}, Vector{Tuple{Vector{Float64}}}, DifferentiationInterface.ReverseOverReverseHVPPrep{DifferentiationInterface.NoPullbackPrep}, DifferentiationInterfaceReverseDiffExt.ReverseDiffGradientPrep{ReverseDiff.GradientConfig{ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}}, Nothing}}}, DifferentiationInterface.SecondOrder{AutoReverseDiff{false}, AutoReverseDiff{false}}}, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, OptimizationBase.ReInitCache{Vector{Float64}, RunParams{Vector{Float64}, 
StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}}, Nothing, Nothing, Vector{Float64}, Vector{Float64}, Nothing, IPNewton{typeof(Optim.backtrack_constrained_grad), Symbol}, Bool, OptimizationOptimJL.var"#4#6", Nothing}}}, d::TwiceDifferentiable{Float64, Vector{Float64}, Matrix{Float64}, Vector{Float64}}, constraints::TwiceDifferentiableConstraints{OptimizationBase.var"#9#26"{OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), Nothing, Nothing, Nothing, Nothing, Nothing, typeof(cons), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}}, OptimizationBase.var"#cons_j!#29"{AutoReverseDiff{false}, DifferentiationInterfaceReverseDiffExt.ReverseDiffOneArgJacobianPrep{ReverseDiff.JacobianConfig{ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, Nothing}, Nothing}}, OptimizationOptimJL.var"#29#36"{OptimizationCache{OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), OptimizationBase.var"#grad#16"{RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), Nothing, Nothing, Nothing, Nothing, Nothing, typeof(cons), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, AutoReverseDiff{false}}, Nothing, OptimizationBase.var"#hess#20"{RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), Nothing, Nothing, Nothing, Nothing, Nothing, typeof(cons), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, DifferentiationInterface.SecondOrder{AutoReverseDiff{false}, AutoReverseDiff{false}}}, Nothing, OptimizationBase.var"#hv!#24"{RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, DifferentiationInterface.ReverseOverReverseHVPPrep{DifferentiationInterface.NoPullbackPrep}, OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), Nothing, Nothing, Nothing, Nothing, Nothing, typeof(cons), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, DifferentiationInterface.SecondOrder{AutoReverseDiff{false}, AutoReverseDiff{false}}}, OptimizationBase.var"#9#26"{OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), Nothing, Nothing, Nothing, Nothing, Nothing, typeof(cons), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}}, OptimizationBase.var"#cons_j!#29"{AutoReverseDiff{false}, 
DifferentiationInterfaceReverseDiffExt.ReverseDiffOneArgJacobianPrep{ReverseDiff.JacobianConfig{ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, Nothing}, Nothing}}, Nothing, Nothing, OptimizationBase.var"#cons_h!#36"{Int64, Vector{DifferentiationInterface.HVPGradientHessianPrep{DifferentiationInterface.BatchSizeSettings{1, false, true}, Vector{Tuple{Vector{Float64}}}, Vector{Tuple{Vector{Float64}}}, DifferentiationInterface.ReverseOverReverseHVPPrep{DifferentiationInterface.NoPullbackPrep}, DifferentiationInterfaceReverseDiffExt.ReverseDiffGradientPrep{ReverseDiff.GradientConfig{ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}}, Nothing}}}, DifferentiationInterface.SecondOrder{AutoReverseDiff{false}, AutoReverseDiff{false}}}, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, OptimizationBase.ReInitCache{Vector{Float64}, RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}}, Nothing, Nothing, Vector{Float64}, Vector{Float64}, Nothing, IPNewton{typeof(Optim.backtrack_constrained_grad), Symbol}, Bool, OptimizationOptimJL.var"#4#6", Nothing}}, Float64}, initial_x::Vector{Float64})
       @ Optim C:\Users\dchan\.julia\packages\Optim\HvjCd\src\multivariate\solvers\constrained\ipnewton\ipnewton.jl:125
 [55108] optimize(d::TwiceDifferentiable{Float64, Vector{Float64}, Matrix{Float64}, Vector{Float64}}, constraints::TwiceDifferentiableConstraints{OptimizationBase.var"#9#26"{OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), Nothing, Nothing, Nothing, Nothing, Nothing, typeof(cons), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}}, OptimizationBase.var"#cons_j!#29"{AutoReverseDiff{false}, DifferentiationInterfaceReverseDiffExt.ReverseDiffOneArgJacobianPrep{ReverseDiff.JacobianConfig{ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, Nothing}, Nothing}}, OptimizationOptimJL.var"#29#36"{OptimizationCache{OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), OptimizationBase.var"#grad#16"{RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), Nothing, Nothing, Nothing, Nothing, Nothing, typeof(cons), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, AutoReverseDiff{false}}, Nothing, OptimizationBase.var"#hess#20"{RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), Nothing, Nothing, Nothing, Nothing, Nothing, typeof(cons), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, DifferentiationInterface.SecondOrder{AutoReverseDiff{false}, AutoReverseDiff{false}}}, Nothing, OptimizationBase.var"#hv!#24"{RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, DifferentiationInterface.ReverseOverReverseHVPPrep{DifferentiationInterface.NoPullbackPrep}, OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), Nothing, Nothing, Nothing, Nothing, Nothing, typeof(cons), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, DifferentiationInterface.SecondOrder{AutoReverseDiff{false}, AutoReverseDiff{false}}}, OptimizationBase.var"#9#26"{OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), Nothing, Nothing, Nothing, Nothing, Nothing, typeof(cons), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}}, OptimizationBase.var"#cons_j!#29"{AutoReverseDiff{false}, DifferentiationInterfaceReverseDiffExt.ReverseDiffOneArgJacobianPrep{ReverseDiff.JacobianConfig{ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, Nothing}, Nothing}}, Nothing, Nothing, OptimizationBase.var"#cons_h!#36"{Int64, 
Vector{DifferentiationInterface.HVPGradientHessianPrep{DifferentiationInterface.BatchSizeSettings{1, false, true}, Vector{Tuple{Vector{Float64}}}, Vector{Tuple{Vector{Float64}}}, DifferentiationInterface.ReverseOverReverseHVPPrep{DifferentiationInterface.NoPullbackPrep}, DifferentiationInterfaceReverseDiffExt.ReverseDiffGradientPrep{ReverseDiff.GradientConfig{ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}}, Nothing}}}, DifferentiationInterface.SecondOrder{AutoReverseDiff{false}, AutoReverseDiff{false}}}, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, OptimizationBase.ReInitCache{Vector{Float64}, RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}}, Nothing, Nothing, Vector{Float64}, Vector{Float64}, Nothing, IPNewton{typeof(Optim.backtrack_constrained_grad), Symbol}, Bool, OptimizationOptimJL.var"#4#6", Nothing}}, Float64}, initial_x::Vector{Float64}, method::IPNewton{typeof(Optim.backtrack_constrained_grad), Symbol}, options::Optim.Options{Float64, OptimizationOptimJL.var"#_cb#31"{OptimizationCache{OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), OptimizationBase.var"#grad#16"{RunParams{Vector{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, OptimizationFunction{true, AutoReverseDiff{false}, typeof(cost_fun), Nothing, Nothing, Nothing, Nothing, Nothing, typeof(cons), Nothing,    ...

(remainder of the stack trace omitted; exceeds word limit)

Only if you use a sufficiently low-order ODE method that doesn’t mind the discontinuous third derivatives, as discussed in the linked thread. There is also the issue that your f(t) \in [0, f_\max] constraint is harder to impose, since a cubic spline may go above/below the control points.

At the very least you will have to use SciMLSensitivity to get reverse-mode AD to work on a standard ODE solver, I suspect … this is the sort of thing where AD systems need some “help”. At worst you might have to write your own adjoint differentiation (which is not too hard).

I basically assume that reverse-mode AD will break on any nontrivial problem in scientific computing (unless you carefully limit yourself to a set of building blocks where AD is known to work, like standard NN components or other things which have manual chainrules written).

The problem with your inequality constraint e(t) \ge 0 is that it is really an infinite set of inequality constraints, which you won’t be able to hand off as-is to a typical off-the-shelf optimization algorithm. (It’s analogous to a continuous minimax problem.) Of course, you could simply discretize/approximate this into a finite number of t points (“collocation”). It looks like InfiniteOpt.jl does this for you (e.g. discretizing in a collocation sense or in a Galerkin sense).
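
As a rough illustration of the finite-number-of-t-points idea in the Optimization.jl setup from the first post (my sketch, untested): solve the ODE on a fixed save grid and return e at the grid points as constraint residuals, with lcons = 0 and ucons = Inf for those entries, while the box bounds on f are passed separately.

# Illustrative sketch (not from the original post): enforce e(t_k) ≥ 0 at the grid points.
function e_cons(res, f, params)
    params = @set params.f_interp = f
    sol = solve(remake(ode_prob, p = params); saveat = params.t_interp)
    res .= getindex.(sol.u, 2)        # e(t) sampled at the grid points
    return nothing
end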


Cool solution, @odow.
Can you try with a negative σ?
My mental interpretation is maximizing distance at time T with fuel e_0, so σ would be the minimal fuel needed for engine operation, and the second term in de/dt is velocity-proportional drag.

With σ = -5.0 I get a solution (plots not reproduced here).

Ipopt cannot find a feasible solution with a more negative value for σ.


I see. The logic being: to get the furthest, burn as much fuel as fast as you can. Makes sense.
Why doesn’t v decline exponentially via the 1/\tau term?

v is approximately 0 at all time steps (note the 1e-5 scale).


It might help to describe what you’re trying to do. If I interpret e as some kind of fuel or energy available (based on one of your replies), the constraint doesn’t make sense. When fuel runs out, there should be new dynamics that cause f(t)=0, but the current formulation more or less wishes that to happen, and IMO fails to do it properly. When you run out of fuel, e(t)=0 and f(t)=0 should both be automatic. Are you sure this shouldn’t be a two-stage optimization problem, where the second stage has the out-of-fuel dynamics? Please define the variables, since we have to guess what the problem is.

I also want to put in a word for InfiniteOpt.jl, an amazing package that makes it a snap to solve such a problem, as @odow demonstrates. You don’t have to think about how to parameterize the problem and interpolate f(t). Getting mucked up in implementation just gets in the way of figuring out how to state the problem, which is usually the hardest part. Even if you intend to use the ODE approach, you could prototype in InfiniteOpt and switch back to ODE after the problem makes more sense.

It is a toy model that describes running, and e(t) is interpreted as some anaerobic energy. If e(t)=0, then you still get your ‘aerobic’ energy via the \sigma term, so you don’t stop immediately. There are rigorous mathematical proofs regarding the structure of the optimal strategy for the simplest case stated here, but I’m curious what happens when we add more terms or conditions to the model, which I think will require resorting entirely to numerical work.

What about control-toolbox · GitHub?

I believe @odow’s demo is correct, even with negative \sigma. It’s using the initial store of energy to supply an impulsive start and then a nearly zero force. The force is in equilibrium with the drag term; it’s best for it to be nearly constant because fluctuations would be more costly due to drag. Near the end, the energy runs out and the remaining momentum does die out exponentially; that’s the little blip at the very end.

I wouldn’t say “burn as much fuel as fast as you can”; it’s trying to maximize the distance. It’s burning as much or as little fuel as needed to get the farthest, and in fact it had to leave a bit in the tank to survive longer.

You can work toward a continuous spline as discussed by @stevengj, but I think your linear interpolation is not that bad. It might help if you told the ODE solver the interpolation points as tstops, so it doesn’t need to churn to discover the knot points. (Also, BTW, the ODE looks solvable analytically between interpolation knots, though perhaps that’s not what you want.) It is also easier to enforce inequality constraints with linear interpolation than with splines.
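
A one-line sketch of the tstops suggestion (assuming the setup from the first post; the solver choice is mine):

# Illustrative: step exactly onto the knots of the piecewise-linear control.
sol = solve(newprob, Tsit5(); tstops = params.t_interp)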

Some details: I don’t think your v and D are efficient. You can probably avoid the anonymous function altogether, and your D allocates. It would be faster just to let the ODE solver integrate a third state for distance. But I doubt you should be squeezing every bit of performance out, because InfiniteOpt.jl and other optimal control packages would probably do better. So I’m assuming you’re more interested in learning, in which case what you have looks fine even if it’s not reverse-AD friendly.
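
A sketch of the third-state idea (illustrative; names are my own):

# Carry distance D as a third state so the objective is just -D(T), avoiding the
# post-hoc trapezoidal sum and its allocations.
function f_ode3!(dy_dt, y, p, t)
    f = LinearInterpolation(p.f_interp, p.t_interp)(t)
    v, E, D = y
    dy_dt[1] = -v / p.τ + f
    dy_dt[2] = p.σ - f * v
    dy_dt[3] = v                     # dD/dt = v, so D(T) = ∫₀ᵀ v dt
end
# with u0 = [0.0, E0, 0.0], cost_fun can simply return -sol.u[end][3]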

Another way to go is to use collocation. Instead of integrating the ODE, define v, e, and f (and d) all as unknown linear interpolators, and constrain the state derivatives (according to, e.g., a forward-difference rule) to match the right-hand sides. Now you no longer need an ODE solver, since integration is defined implicitly within the constraints. I think JuMP.jl has a nice optimal control example of this. Although this is kind of what InfiniteOpt.jl already does for you, using way better interpolation.
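
For reference, a bare-bones forward-Euler transcription of this problem in JuMP could look like the following (my sketch, not from the thread; it reuses τ, σ, E0, fmax, T from the first post and is not tuned):

# Illustrative direct-collocation (forward-Euler) sketch in JuMP.
using JuMP, Ipopt
N = 200; Δt = T / N
m = Model(Ipopt.Optimizer)
@variable(m, v[0:N])
@variable(m, e[0:N] >= 0)
@variable(m, 0 <= f[0:N] <= fmax)
@constraint(m, v[0] == 0)
@constraint(m, e[0] == E0)
@constraint(m, [k = 0:N-1], v[k+1] == v[k] + Δt * (-v[k] / τ + f[k]))
@constraint(m, [k = 0:N-1], e[k+1] == e[k] + Δt * (σ - f[k] * v[k]))
@objective(m, Max, Δt * sum(v[k] for k in 0:N-1))   # rectangle rule for ∫ v dt
optimize!(m)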


I think JuMP.jl has a nice optimal control example of this

See Example: nonlinear optimal control of a rocket · JuMP
