Variable time-stepping and discontinuous gradients

I read here that in the context of “trajectory optimisation” or “optimal control”:

One very important idea in numerical integration of differential equations is the use of variable-step integration … We typically avoid using variable steps inside a constraint (it can lead to discontinuous gradients)

What is your experience with optimising functions (with e.g. Ipopt) that entail variable-step integration? Is the resulting discontinuity in the gradients crippling to the performance of the optimisation solver?

Thanks!

Related comment, but not sure how much it applies to variable-step differential equation solvers.

If you use continuous adjoint approaches, variable-step integration does not lead to discontinuous gradients.

That’s only when solving the differential equations to high precision, right?

No, because the derivative is itself the solution of a differential equation built from smooth functions (of the state, p, and df/dp), so if f is smooth then so is that derivative.
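
To make that concrete (a standard forward-sensitivity sketch, with $u$, $p$, and $s$ as generic placeholder symbols rather than anything from the post above): if $\dot{u} = f(u, p, t)$ and you define the sensitivity $s = \partial u / \partial p$, then $s$ satisfies its own ODE,

$$
\dot{s} = \frac{\partial f}{\partial u}\, s + \frac{\partial f}{\partial p}, \qquad s(t_0) = \frac{\partial u_0}{\partial p},
$$

whose right-hand side is smooth whenever $f$ is, so the computed derivative inherits that smoothness rather than the discontinuities of the step-size selection logic.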

What is being referred to is that an adaptive ODE solver's solution is essentially noise below the tolerance. So if you solve at a tolerance of 1e-8, the solution can jump around by about 1e-8 even for tiny changes in the parameters: the solver does not guarantee any accuracy below that level, and a change of a parameter can cause a step rejection which then gives a different solving process. However, that is only a problem if you're using finite-difference gradients; with continuous sensitivities, the sensitivity equation is itself continuous and its solution can be computed with error control.
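
Here is a minimal sketch of that effect in Python/SciPy (the toy logistic ODE, tolerances, and function names are illustrative choices, not anything from this thread): central finite differences through an adaptive solve are compared against the forward-sensitivity value computed by augmenting the state with $s = \partial u / \partial p$.

```python
from scipy.integrate import solve_ivp

def f(t, u, p):
    # Logistic growth, du/dt = p*u*(1 - u): smooth in u and p.
    return p * u * (1.0 - u)

def solve_state(p, rtol=1e-8, atol=1e-10):
    # Plain adaptive solve; accuracy below rtol/atol is not guaranteed.
    sol = solve_ivp(f, (0.0, 5.0), [0.1], args=(p,), rtol=rtol, atol=atol)
    return sol.y[0, -1]

def solve_sensitivity(p, rtol=1e-8, atol=1e-10):
    # Forward sensitivity: integrate s = du/dp alongside u, with
    # ds/dt = (df/du)*s + df/dp, under the same error control.
    def aug(t, y):
        u, s = y
        return [p * u * (1.0 - u),
                p * (1.0 - 2.0 * u) * s + u * (1.0 - u)]
    sol = solve_ivp(aug, (0.0, 5.0), [0.1, 0.0], rtol=rtol, atol=atol)
    return sol.y[1, -1]

p0 = 1.0
for h in (1e-3, 1e-5, 1e-8):
    fd = (solve_state(p0 + h) - solve_state(p0 - h)) / (2.0 * h)
    print(f"central difference, h={h:.0e}: {fd:.10f}")
print(f"forward sensitivity:         {solve_sensitivity(p0):.10f}")
```

Once the perturbation h is on the order of the solver tolerance, the difference quotient is dominated by the sub-tolerance noise (step rejections, changed step sequences), whereas the forward-sensitivity value is itself computed under error control and varies smoothly with the parameter.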
