[ANN] ModelPredictiveControl.jl

ModelPredictiveControl v1.5.0

An update to announce the migration to DifferentiationInterface.jl. Many thanks to @gdalle for all the help! :grinning_face_with_smiling_eyes:

In addition to a simpler and more maintainable codebase, it allows switching the differentiation backend for the gradients and Jacobians inside NonLinMPC, MovingHorizonEstimator, linearize and ExtendedKalmanFilter. Sparse Jacobians are also supported with AutoSparse. Dense ForwardDiff.jl computations are used everywhere by default, except for the MultipleShooting transcription, which uses sparse computations. Note that for small problems like the inverted pendulum with H_p=20 and H_c=2, dense Jacobians may be slightly faster than sparse ones, even with a MultipleShooting transcription. At least, that's what I benchmarked for this case study.
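For illustration, here is a minimal sketch of swapping the Jacobian backend with the new jacobian keyword argument (the toy model and the AutoSparse options are placeholders, not taken from the benchmarks above):

using ModelPredictiveControl, ADTypes
using SparseConnectivityTracer, SparseMatrixColorings

# toy single-state continuous model, for illustration only:
f!(ẋ, x, u, _, _) = (ẋ .= -x .+ u; nothing)
h!(y, x, _, _)    = (y .= x; nothing)
model = NonLinModel(f!, h!, 0.1, 1, 1, 1)

# dense ForwardDiff is the default; a sparse backend can be requested explicitly:
jac_backend = AutoSparse(
    AutoForwardDiff();
    sparsity_detector  = TracerSparsityDetector(),
    coloring_algorithm = GreedyColoringAlgorithm(),
)
mpc = NonLinMPC(model; Hp=20, Hc=2, transcription=MultipleShooting(), jacobian=jac_backend)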

Note that the implementation relies on the Cache feature of DI.jl to reduce allocations, and some backends do not support it for now.
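As a rough sketch of what that DI.jl feature looks like from the user side (toy function, and my reading of the Cache context API):

using DifferentiationInterface
import ForwardDiff

# the function reuses a preallocated buffer `c`, passed as a Cache context,
# so the intermediate result is not reallocated on every call:
f(x, c) = (c .= 2 .* x; sum(abs2, c))
g = gradient(f, AutoForwardDiff(), ones(3), Cache(zeros(3)))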

The change log since my last post is:

  • added: migration to DifferentiationInterface.jl
  • added: new gradient and jacobian keyword arguments for NonLinMPC
  • added: new gradient and jacobian keyword arguments for MovingHorizonEstimator
  • added: new jacobian keyword argument for NonLinModel (for linearization)
  • added: new jacobian keyword argument for ExtendedKalmanFilter
  • added: ExtendedKalmanFilter is now allocation-free at runtime
  • changed: deprecate preparestate!(::SimModel,_,_), replaced by preparestate!(::SimModel)
  • debug: nonlinear inequality constraints with MultipleShooting now work as expected (custom + output + terminal constraints)
  • debug: x_noise argument in sim! now works as expected
  • doc: now using DocumenterInterLinks.jl to ease the maintenance
  • test: many new tests with the AutoFiniteDiff backend
  • test: new test to cover nonlinear inequality constraint with MultipleShooting corner cases

I will release the update soon.


ModelPredictiveControl v1.7.0

A quick update on the new stuff in the package since my last post.

First, the newest release introduces the ManualEstimator to turn off the built-in state estimation and provide your own estimate. A first use case is implementing a linear MPC (with an approximate plant model, for speed) combined with a nonlinear state estimator (with a high-fidelity plant model, for accuracy). A second use case is using the observers exclusive to LowLevelParticleFilters.jl to estimate the state of the plant model and its disturbances.
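A minimal sketch of the first use case (the plant matrices, keyword arguments and call sequence here are illustrative placeholders; see the ManualEstimator docs for the exact loop):

using ModelPredictiveControl

# approximate linear plant model for the fast linear MPC (illustrative values):
model = LinModel([0.5;;], [0.2;;], [1.0;;], zeros(1, 0), zeros(1, 0), 0.1)
estim = ManualEstimator(model; nint_ym=0)   # turn off built-in estimation
mpc   = LinMPC(estim)

# in the control loop, supply the estimate x̂ from your own external observer
# (e.g. a high-fidelity nonlinear one) before computing the manipulated input:
x̂ = [0.0]
setstate!(mpc, x̂)
u = moveinput!(mpc, [1.0])                  # ry = [1.0] output setpoint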

Also, a significant performance boost for NonLinMPC and MovingHorizonEstimator was introduced in v1.6.0 through the more efficient value_and_gradient! and value_and_jacobian! functions of DI.jl. This is equivalent to using DiffResults.jl, but agnostic of the differentiation backend. I benchmarked about a 1.25x speed boost on the pendulum example of the manual.
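The underlying DI.jl pattern looks like this (a toy objective, not the package's internal code):

using DifferentiationInterface
import ForwardDiff

f(x) = sum(abs2, x)            # stand-in for an MPC objective
backend = AutoForwardDiff()
x = ones(4)
grad = similar(x)
prep = prepare_gradient(f, backend, x)

# one call evaluates f(x) and fills `grad` in place, instead of two passes:
y, _ = value_and_gradient!(f, grad, prep, backend, x)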

Lastly, the nint_u option of the MovingHorizonEstimator was not working correctly because of a bug that appears when the observation window is not filled yet (at the beginning). The bug was corrected in v1.6.2 (with new unit tests).

The next release will introduce custom move blocking, which is a way of specifying a long control horizon H_c without increasing the number of decision variables in the optimization problem.

The changelog since my last post is:

  • added: ManualEstimator to turn off built-in state estimation and provide your own estimate \mathbf{\hat{x}}_{k}(k) or \mathbf{\hat{x}}_{k-1}(k)
  • added: slightly improved NonLinMPC performance with specialized conversion and weight matrices
  • added: significant performance boost of NonLinMPC and MovingHorizonEstimator using value_and_gradient!/jacobian! of DifferentiationInterface.jl instead of individual calls
  • added: setstate! now allows manual modifications of the estimation error covariance \mathbf{\hat{P}} (if computed by the estimator)
  • changed: M_Hp, N_Hc and L_Hp keyword arguments now default to Diagonal instead of diagm matrices for all PredictiveController constructors
  • changed: moved lastu0 inside PredictiveController objects
  • removed: DiffCaches in RungeKutta solver
  • debug: force update of gradient/jacobian in MovingHorizonEstimator when window not filled
  • debug: remove .data in KalmanFilter matrix products
  • debug: do not call jacobian! if nd==0 in linearize
  • debug: no more noisy @warn about DiffCache chunk size
  • test: new tests for ManualEstimator
  • test: added allocations tests for types that are known to be allocation-free (SKIP THEM FOR NOW)
  • test: adapt tests for the new automatically balancing minreal function

ModelPredictiveControl v1.8.2

Here’s a quick update on the new stuff in the package.

First, v1.8.0 introduces custom move blocking patterns. This parameter changes the duration, in time steps, of each move block in the profile of the manipulated inputs \mathbf{u}, which is more general than the classical control horizon H_c. See the H_c argument in e.g. LinMPC for details. Note that an intrinsic side effect of custom move blocking is a reduced accuracy when warm-starting the optimizer.
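A minimal sketch of the difference, assuming a toy 1x1 plant and that the Hc argument accepts a vector of move block durations (see the changelog below):

using ModelPredictiveControl

model = LinModel([0.9;;], [0.1;;], [1.0;;], zeros(1, 0), zeros(1, 0), 0.1)

# classical control horizon: 4 free moves, all at the first 4 time steps:
mpc_classic = LinMPC(model; Hp=20, Hc=4)

# custom move blocking: still 4 free moves, but held for 1, 2, 4 and 13 steps
# respectively, so the moves are spread over the whole prediction horizon:
mpc_blocked = LinMPC(model; Hp=20, Hc=[1, 2, 4, 13])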

Also, the last two versions incorporate some debugging and performance improvements for the MovingHorizonEstimator. It is common for covariance matrices to be strictly diagonal, especially with trial-and-error tuning. The code now preserves Diagonal types to accelerate the computations in the objective function. Also, a problematic termination status at optimization will no longer crash: it emits an @error log and returns the open-loop estimation instead.

Last but not least, I worked hard to implement a quite comprehensive benchmark suite that runs automatically on CI with AirspeedVelocity.jl (excellent job @MilesCranmer! BTW, excited for the feature at Emojis in ratio column by MilesCranmer · Pull Request #91 · MilesCranmer/AirspeedVelocity.jl · GitHub, it will help for the overview). It will certainly help me track the performance of the package over time. And now there will be proof that future developments incorporate performance improvements, and no regressions, hopefully!

For now, I continuously monitor the performance of OSQP.jl, DAQP.jl, Ipopt.jl and MadNLP.jl. Note that MadNLP.jl is generally 2-3 times faster than Ipopt.jl, but it does not work well on tightly constrained problems (e.g. multiple shooting). It's probably related to its L-BFGS approximation. I would be curious to see the performance with exact Hessians, but that's not supported by ModelPredictiveControl.jl for now. For differentiation, only dense and sparse ForwardDiff.jl are monitored, but I may add other backends in the future.
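Swapping the solver is a one-liner through the optim keyword, which accepts a JuMP model (the toy dynamics below are placeholders):

using ModelPredictiveControl, JuMP
import Ipopt, MadNLP

f!(ẋ, x, u, _, _) = (ẋ .= -x .+ u; nothing)   # toy dynamics, for illustration
h!(y, x, _, _)    = (y .= x; nothing)
model = NonLinModel(f!, h!, 0.1, 1, 1, 1)

mpc_ipopt  = NonLinMPC(model; Hp=20, Hc=2, optim=JuMP.Model(Ipopt.Optimizer))
mpc_madnlp = NonLinMPC(model; Hp=20, Hc=2, optim=JuMP.Model(MadNLP.Optimizer))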

BTW @MilesCranmer, is there a way to preserve the order of definition of SUITE in the outputted table? Or maybe to specify the order manually? Some benchmarks are interlinked and I would like them to be near each other.

I will register the version soon.

Change log:

  • added: move blocking feature in LinMPC, ExplicitMPC and NonLinMPC (see Hc argument)
  • added: new KalmanCovariance parametric struct to handle covariance sparsity efficiently
  • added: dispatch on repeatdiag to preserve Diagonals
  • added: in-place and allocation-free inv! for MHE covariance matrices
  • added: new benchmark suite that runs automatically on each PR with AirspeedVelocity.jl
  • changed: store P̃Δu, P̃u and Tu conversion matrices as SparseMatrixCSC
  • debug: support Hermitian weights in PredictiveController
  • debug: use dummy b vector in MHE setmodel! to avoid ±Inf values
  • debug: MovingHorizonEstimator no longer crashes with an error termination flag
  • test: verify setmodel! with He>1 for MHE
  • test: new integration with ManualEstimator and MPCs
  • test: improve coverage with error termination status
  • doc: various improvements

It’s sorted by key, alphabetically: AirspeedVelocity.jl/src/Utils.jl at master · MilesCranmer/AirspeedVelocity.jl · GitHub

What I do is use a nested SUITE, which groups the benchmarks so that the results in the output are clustered in a sensible way. Like:

using BenchmarkTools

const SUITE = BenchmarkGroup()

for param in [1, 2, 3]
    # placeholder workload; substitute the actual benchmark expression here:
    SUITE["eval"]["type_1"]["param=$(param)"] = @benchmarkable sum(rand($param))
end

Note that a BenchmarkGroup will automatically instantiate the nested BenchmarkGroups when you index into them, so there's no need to create them manually.


I just stumbled upon this post today and spent the whole 17 minutes reading through the developments of the package. I just wanted to say that it was a pleasure tracking all your development, and I'm looking forward to what comes afterwards!
My only 2 cents: since you were already looking into supporting other integrators natively beyond RK4, orthogonal collocation using Gauss-Legendre quadrature is not much more complicated and might earn you some extra stability at no significant cost.
Another thing that might be of interest is including robust (N)MPC via scenario trees, which has been done here https://www.do-mpc.com


Thanks for the feedback, really appreciated! Yes, my next step is supporting collocation as a new transcription method. I will probably implement trapezoidal integration first for its simplicity, but Gauss-Legendre will come after.

For robust MPC, it is not planned in the short term, but maybe if I have time in the mid-to-long term.

ModelPredictiveControl v1.9.0

The new release introduces the TrapezoidalCollocation transcription to handle moderately stiff NonLinModels! :bottle_with_popping_cork::bottle_with_popping_cork::bottle_with_popping_cork:

This is presumably the simplest form of all the direct collocation methods. It internally uses the implicit trapezoidal rule, i.e. the nonlinear equality constraints of the NLP efficiently enforce \mathbf{x}_{k+1} = \mathbf{x}_k + \frac{T_s}{2}\big(\mathbf{f}(\mathbf{x}_k, \mathbf{u}_k) + \mathbf{f}(\mathbf{x}_{k+1}, \mathbf{u}_k)\big) at each step of the horizon. By the way, this is the transcription method in MATLAB's nlmpc when the dynamics are continuous. It currently assumes piecewise constant manipulated inputs \mathbf{u} (a.k.a. ZOH, like all the other integrators in the package), but I will add linear interpolation soon. The number of decision variables is identical to MultipleShooting, so the computational costs are similar. As a matter of fact, the performance is even a little bit better than MultipleShooting with RungeKutta(4):

main
Pendulum/NonLinMPC/Custom constraints/Ipopt/MultipleShooting 1.05 ± 0.0072 s
Pendulum/NonLinMPC/Custom constraints/Ipopt/SingleShooting 0.866 ± 0.017 s
Pendulum/NonLinMPC/Custom constraints/Ipopt/TrapezoidalCollocation 1.06 ± 0.024 s
Pendulum/NonLinMPC/Economic/Ipopt/MultipleShooting 0.515 ± 0.0067 s
Pendulum/NonLinMPC/Economic/Ipopt/SingleShooting 0.242 ± 0.019 s
Pendulum/NonLinMPC/Economic/Ipopt/TrapezoidalCollocation 0.485 ± 0.0044 s
Pendulum/NonLinMPC/Noneconomic/Ipopt/MultipleShooting 0.486 ± 0.017 s
Pendulum/NonLinMPC/Noneconomic/Ipopt/SingleShooting 0.245 ± 0.016 s
Pendulum/NonLinMPC/Noneconomic/Ipopt/TrapezoidalCollocation 0.458 ± 0.0061 s
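Selecting the new method is a one-liner through the transcription keyword (toy model below, for illustration):

using ModelPredictiveControl

f!(ẋ, x, u, _, _) = (ẋ .= -x .+ u; nothing)   # toy continuous dynamics
h!(y, x, _, _)    = (y .= x; nothing)
model = NonLinModel(f!, h!, 0.1, 1, 1, 1)

mpc = NonLinMPC(model; Hp=20, Hc=2, transcription=TrapezoidalCollocation())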

I also did some tests with the new experimental Ipopt._VectorNonlinearOracle. This is very promising, since I benchmarked a 2.5-fold speedup on NonLinMPC with MultipleShooting and custom constraints. It will also allow exact Hessians of the Lagrangian function, which in turn can improve the performance if the second-order differentiation is not too heavy (likely the case with AutoSparse). I will wait for an official release from JuMP.jl, but I'm super excited to incorporate these changes on the main branch!

This release helped me a lot in understanding collocation methods. I should be able to add orthogonal collocation with Gauss-Legendre in the next releases.

The changelog since my last post:

  • added: TrapezoidalCollocation method for continuous nonlinear systems
  • added: dependabot.yml file to help with the CI dependencies
  • changed: new fields with the continuous state-space functions model.f! and model.h! in NonLinModel (instead of model.solver_f! and model.solver_h!)
  • test: new test with TrapezoidalCollocation and Ipopt
  • bench: new case studies with TrapezoidalCollocation on the pendulum