[ANN] ModelPredictiveControl.jl

ModelPredictiveControl.jl

I’m currently writing a model predictive control package for Julia. Here’s a quick list of the main features:

  • Both linear and nonlinear plant models are supported. The implementation exploits multiple dispatch.
  • The optimization is based on JuMP.jl, making it easy to quickly compare the performance of many optimizers.
  • The nonlinear predictive controller, as well as the extended Kalman filter and the moving horizon estimator (both will be added soon), use automatic differentiation for the Jacobians.
  • Both internal model structure (classical approach) and closed-loop state estimator (modern approach) are supported for the feedback.
  • The nonlinear predictive controller based on a linear model (e.g. economic optimization of a linear plant model) computes the predictions with matrices instead of a for loop.
  • The nonlinear predictive controller automatically designs an adequate state estimator for the feedback, unlike nlmpc in MATLAB.
  • etc.

Feedback is certainly welcome!

Francis Gagnon

Did you look at InfiniteOpt.jl (GitHub - infiniteopt/InfiniteOpt.jl: an intuitive modeling interface for infinite-dimensional optimization problems) and CasADi (https://web.casadi.org/)?

Both can be used to implement NMPC control systems…

And the link to your package is not working. This one should work: GitHub - franckgaga/ModelPredictiveControl.jl: A model predictive control package for Julia.

Yes, thanks! The link is now fixed.

I know about these two packages. They seem really powerful. My goal is to create a package that offers simpler implementations of both MPC and NMPC: you feed it a plant model and you should be ready to go for quick simulations or control of a real plant (using additional DAC/ADC packages).

Francis

@odow I think I read somewhere that vector decision variables are planned in JuMP.jl. Is that true, and is there any timeline for it? I’m using the splatting syntax right now and, intuitively, there is probably some overhead that hurts the performance of the NMPC.

Thanks!
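For context, here is a minimal sketch of the splatting workaround I mean, using the legacy JuMP nonlinear API; the objective J and the horizon length N are hypothetical placeholders, not the package’s actual code:

```julia
using JuMP, Ipopt

N = 10                                     # hypothetical number of decision variables
J(ΔU...) = sum(ΔU[i]^2 for i in 1:N)       # user-defined objective over scalar arguments

model = Model(Ipopt.Optimizer)
@variable(model, ΔU[1:N])
register(model, :J, N, J; autodiff=true)   # legacy API: one scalar argument per variable
@NLobjective(model, Min, J(ΔU...))         # the decision vector must be splatted into scalars
optimize!(model)
```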

Link to a code example? The answer is: somewhat, maybe, it depends.

Here’s what I did: I memoized my objective and constraint functions using PreallocationTools.jl. I’m also using closures for the parameters, so I applied the tip from Performance of captured variables.
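For readers who haven’t used these tricks, here is a rough sketch of the idea under my own simplified assumptions (the buffer, sizes and cost function are hypothetical, not the package’s actual code): DiffCache from PreallocationTools.jl gives a preallocated buffer that also works with ForwardDiff dual numbers, and the let block keeps the captured parameters type-stable.

```julia
using PreallocationTools, ForwardDiff

N = 10
Ŷ_cache = DiffCache(zeros(N))                 # preallocated buffer, dual-number friendly

# `let` block so the captured variables are not boxed (the "captured variables" tip):
J = let R̂y = ones(N), cache = Ŷ_cache
    function (ΔŨ...)
        Ŷ = get_tmp(cache, ΔŨ[1])             # returns a Float64 or Dual buffer as needed
        Ŷ .= ΔŨ                               # stand-in for the real output predictions
        return sum(abs2, Ŷ .- R̂y)             # quadratic setpoint tracking cost
    end
end

J(zeros(N)...)                                # plain floats
ForwardDiff.gradient(v -> J(v...), zeros(N))  # dual numbers reuse the same cache
```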

ModelPredictiveControl v0.8.0

The release will be registered soon. Many features have been added since the OP. Here is a short list:

  • Added: ExtendedKalmanFilter with automatic Jacobian computations using ForwardDiff.jl (see the sketch after this list)
  • Added: ExplicitMPC for linear predictive controllers without constraints and with small sample times
  • Added: automatic model augmentation based on observability for offset-free tracking
  • Added: support for unmeasured disturbances at both the model input and output
  • Debug: only update the linear coefficients of the LinMPC quadratic programming objective
  • Multiple improvements in the documentation
  • Multiple other bug fixes
  • Improved coverage with many new tests
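To illustrate the ForwardDiff.jl part, here is a minimal sketch of how a Jacobian of a state transition function can be computed automatically; the pendulum-like function f and the numbers are hypothetical, not the package’s internal code:

```julia
using ForwardDiff

# hypothetical discrete-time state transition x⁺ = f(x, u) of a pendulum-like system
f(x, u) = [x[1] + 0.1*x[2],
           x[2] + 0.1*(-9.81*sin(x[1]) - 0.1*x[2] + u[1])]

x, u = [0.1, 0.0], [0.0]
F̂ = ForwardDiff.jacobian(x -> f(x, u), x)   # ∂f/∂x, as needed in the EKF prediction step
```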

Feedback is welcome!

Francis Gagnon

Is there an MPC package that can integrate with Flux/Zygote for optimization?

To my knowledge, no. Mine will support those packages accordingly if the developers of JuMP.jl decide to support them, but I think that would need a lot of work on their part. (I assume you are referring to the optimization and AD tools, not the modeling part.)

This package has shaped up to become really impressive, congratulations on the nice work!

Thanks Fredrik, the compliment is truly appreciated!

I found a major bug in the estimation of unmeasured disturbances at the model input (the nint_u option). The implementation is a bit more complex than for the estimation of output disturbances.

I will release a fix soon; it should then work as expected. I also added a usage example of the nint_u option in the manual. I still need to add new tests that use the nint_u option.
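For context, here is a rough sketch of what using the nint_u option looks like; the plant model and keyword values are hypothetical, and the exact constructor signatures should be checked against the documentation:

```julia
using ModelPredictiveControl, ControlSystemsBase

Ts = 4.0
model = LinModel(tf(1.9, [18, 1]), Ts)            # hypothetical first-order plant
# one integrator on the manipulated input to estimate an input disturbance
# (nint_ym set to 0 so the augmented model stays observable):
estim = SteadyKalmanFilter(model, nint_u=[1], nint_ym=[0])
mpc   = LinMPC(estim, Hp=10, Hc=2)
```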

ModelPredictiveControl v0.9.1

The release will be registered soon. I’ve worked on new features and improvements since the last announcement. Here’s the list:

  • Added: changing the constraints at runtime is now supported (see the sketch after this list)
  • Added: time-varying constraints over the prediction/control horizon are now supported
  • Added: additional kwargs of PredictiveController constructors are passed to the StateEstimator constructor
  • Added: show integrator info when printing StateEstimator / PredictiveController objects
  • Added: initstate! now computes an adequate initial estimate for all StateEstimators based on LinModel, producing a bumpless transfer (incl. InternalModel)
  • Added: feedforward example in the documentation manual
  • Reduced allocations for NonLinMPC
  • Reduced allocations for KalmanFilter and ExtendedKalmanFilter
  • Debug: sim! calls on ExplicitMPC now work
  • Debug: the LinModel constructor now calls minreal at the end by default (for controllability/observability; can be bypassed)
  • Other minor bug fixes
  • Updated documentation based on the latest version of Documenter.jl
  • Improved coverage with new tests
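As an illustration of the runtime constraint changes, here is a rough sketch; the plant model and bound values are hypothetical, and the exact setconstraint! keywords should be checked in the documentation:

```julia
using ModelPredictiveControl, ControlSystemsBase

Ts = 2.0
model = LinModel(tf(2.0, [10, 1]), Ts)        # hypothetical first-order plant
mpc = LinMPC(model, Hp=10, Hc=2)

# constraints can now be modified at runtime, e.g. between two moveinput! calls:
setconstraint!(mpc, umin=[-5.0], umax=[5.0])
setconstraint!(mpc, ymax=[10.0])
```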

IMO, the features, the testing and the stability of the package are starting to be really solid.

Your feedback is welcome!

Francis Gagnon

ModelPredictiveControl v0.10.0

Added some important features in the last update:

  • Added: terminal state constraints to ensure nominal closed-loop stability (for both LinModel and NonLinModel)
  • Added: terminal state field :x̂end in the getinfo dictionary
  • Added: user-friendly constructor for SimResult objects (interface with Plots.jl)
  • Code cleaning for LinModel prediction matrices

I still need to add some tests for the terminal constraints applied to NonLinModel.

Cheers!

Francis

ModelPredictiveControl v0.11.0

The manual was significantly improved in the last commits. Here’s a summary of the new stuff since my last post:

  • Debug: terminal constraints for NonLinModel now work
  • Added: new terminal constraint tests
  • Added: error log when the termination status means “no solution available”
  • Added: more precise logs in the REPL (warning: the solution is kept anyway; error: the last solution is used)
  • Changed: warning with SLOW_PROGRESS status (solution is kept)
  • Doc: updated figures for the CSTR and the pendulum in the manual
  • Doc: added an economic MPC example on the inverted pendulum in the manual
  • Doc: cleaning for terminal constraints
  • Reduced allocations for NonLinMPC

I’ll probably need to update the package to the new JuMP syntax for nonlinear modeling; I’m still using the legacy one. I’m not sure when the timing will be right, since solver support seems to be partial right now. @odow, do you have an opinion on that matter? The default solver for NonLinMPC in the package is Ipopt.

Cheers,

Francis
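For reference, a minimal sketch of what the migration would look like on a toy problem, assuming the new nonlinear interface introduced in JuMP 1.15 (the objective J and horizon N are hypothetical placeholders): the legacy register/@NLobjective pair is replaced by @operator and the regular @objective macro.

```julia
using JuMP, Ipopt

N = 10
J(ΔU...) = sum(ΔU[i]^2 for i in 1:N)   # same user-defined objective as before

model = Model(Ipopt.Optimizer)
@variable(model, ΔU[1:N])
@operator(model, op_J, N, J)           # new interface: declare a nonlinear operator
@objective(model, Min, op_J(ΔU...))    # regular @objective, no @NL macros
optimize!(model)
```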

It’s ready for prime time. Ipopt, BARON and AmplNLWriter work. KNITRO and NLopt don’t.

Let me know if you encounter any issues.

ModelPredictiveControl v0.13.0

I watched @baggepinnen’s presentation at JuliaCon 2023 (nice presentation Fredrik, btw!) and it gave me the idea for a cool new feature: the linearize function. Gain scheduling should be quite easy with this (a sketch of the underlying idea is shown below). It is not possible to modify the plant model of a LinMPC instance right now, but I will probably add that feature soon; gain scheduling is still possible using multiple LinMPC objects.
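To give an idea of what linearization via ForwardDiff.jl amounts to (this is the general principle on a hypothetical pendulum-like model, not the package’s linearize API):

```julia
using ForwardDiff

# hypothetical continuous-time pendulum-like dynamics ẋ = f(x, u)
f(x, u) = [x[2], -9.81*sin(x[1]) - 0.1*x[2] + u[1]]

x0, u0 = [0.5, 0.0], [0.0]                     # operating point for the linearization
A = ForwardDiff.jacobian(x -> f(x, u0), x0)    # ∂f/∂x at (x0, u0)
B = ForwardDiff.jacobian(u -> f(x0, u), u0)    # ∂f/∂u at (x0, u0)
```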

Also, I did a major refactoring of the code to support generic number types across the package. Note that most solvers in JuMP only support Float64 numbers. Thus, PredictiveController objects still default to Float64-based optimizers, even if the plant model or state estimator uses other number types.

In short, the new features are:

  • Added: linearization of NonLinModel based on ForwardDiff.jl
  • Added: generic number types in all SimModel, StateEstimator and PredictiveController objects
  • Doc: example of linearize on the pendulum
  • Doc: example of solving an MPC with an unstable plant model using DAQP
  • Added: tests for the linearize function
  • Added: tests with Float32 numbers

I will register the new version soon.

ModelPredictiveControl v0.14.0

I’m happy to share that I have finished implementing the main features of the MovingHorizonEstimator. If you don’t know about the MHE, it’s a nice addition since it can handle constraints on the estimates. Right now it only supports bounds on the states, but I will add bounds on the process noise in the next releases. It is also an elegant feedback strategy for MPC, since there is no linearization (as in e.g. the EKF) and it is conceptually the analogue of predictive control for state estimation. Combined with the predictive controller, it produces a control strategy with 3 settings (see the sketch after this list):

  • He: the estimation horizon, the time steps in the past considered for computing the plant state estimates
  • Hp: the prediction horizon, the time steps in the future considered for computing the control actions
  • Hc: the control horizon, the time steps in the future in which input moves are allowed

The computational cost is drastically higher since it solves a nonlinear optimization problem under constraints. Similarly to Hc, reducing the estimation horizon He decreases the number of decision variables. It’s a tradeoff between accuracy and algorithmic complexity.
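Here is a rough sketch of how the three horizons fit together, as I would write it with the current API; the plant model, bound values and keyword names are illustrative and should be checked against the documentation:

```julia
using ModelPredictiveControl, ControlSystemsBase

Ts = 2.0
model = LinModel(tf(2.0, [10, 1]), Ts)           # hypothetical first-order plant
estim = MovingHorizonEstimator(model, He=10)     # He: past window used for state estimation
# bounds on the augmented state estimates (plant state + output integrator):
estim = setconstraint!(estim, x̂min=[0.0, -Inf])
mpc   = NonLinMPC(estim, Hp=20, Hc=2)            # Hp/Hc: future windows used by the controller
```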

I will register the release soon. Feedback is welcome!

Francis

Nice work. I assume you do not mean bounds on the process noise, though. The process noise is what it is. Perhaps bounds on the estimate of the process noise?

To me, it seems more logical to balance (i) bounds on the estimated states (e.g., to ensure positive mass, etc.) against (ii) bounds on the “innovation”/prediction error?

Yes, my bad, bounds on the estimate of the process noise \mathbf{\hat{w}}.

That’s a very good point. As a matter of fact, I’m not sure at all which bounds are the most useful, physically speaking. Clearly, bounds on the state estimates are crucial. Then:

bounds on “innovation”/prediction error

By innovation error, do you mean bounds on \mathbf{y-\hat{y}}? In the MHE, that boils down to simply constraining the estimate of the sensor noise \mathbf{\hat{v}} (see the \mathbf{\hat{V}} definition in the documentation). If that’s what you mean, do you have an application example for that? I’m genuinely curious.

In fact, the MHE literature almost exclusively talks about bounds on the state estimates. I have only seen this one paper that uses bounds on \mathbf{\hat{w}}, since the process noise in their case study was |\mathbf{w}(k)| and thus always positive. That is why I planned to support this type of bound.

I meant bounds on y_k - \hat{y}_k. This is easy to “interpret” in that one could say that the prediction error should not be too big/should be bounded.

To me, it is not intuitive that \hat{w}_k is a natural quantity to put a bound on – I’d think that it is more intuitive to put a bound on w_k - \hat{w}_k.

I first came across the MHE during a visit to UT Austin close to 30 years ago – the work there was from ca. 1990. If one considers a linear system with quadratic cost, it is fairly easy to show that, with an expanding horizon and no constraints, the resulting estimator can be re-formulated into a recursive form with the same equations as a Kalman filter… if the weight matrices are chosen as the inverses of the noise covariance matrices.
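For concreteness, the unconstrained expanding-horizon (full-information) problem I am referring to is usually written as (my notation, with P_0, Q, R the initial-state and noise covariances):

$$
\min_{\hat{x}_0,\,\hat{w}_0,\dots,\hat{w}_{k-1}} \;
(\hat{x}_0-\bar{x}_0)^\top P_0^{-1}(\hat{x}_0-\bar{x}_0)
+ \sum_{j=0}^{k-1}\hat{w}_j^\top Q^{-1}\hat{w}_j
+ \sum_{j=0}^{k}\hat{v}_j^\top R^{-1}\hat{v}_j
$$

subject to \hat{x}_{j+1} = A\hat{x}_j + Bu_j + \hat{w}_j and \hat{v}_j = y_j - C\hat{x}_j; its solution coincides with the Kalman filter estimate at time k.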

When adding constraints, this is normally solved as an optimization problem, where it is necessary to use a moving horizon to avoid “blow-up” in the number of unknowns.

NOTE however: I was in touch with a PhD student from Sweden (I’ll have to think for a while to remember the name) who had developed a Riccati formulation that can be applied to linear quadratic control with input constraints (no state constraints) – the same should be applicable to the MHE with constraints on the disturbance w_k and not on the states. The advantage of his formulation is a relatively dramatic improvement in efficiency. I never looked into this myself, though… had other things I had to prioritize.
