Model Predictive Control (MPC) in Julia

I am considering using Julia and JuMP for my Master's thesis on learning-based robust economic Model Predictive Control. I hope to get some feedback from the community on which packages to use, or on which functionalities are still missing, where I could possibly help by contributing to Julia.

In principle, implementing an MPC controller means repeatedly solving a constrained finite-horizon optimal control problem. Additionally, it is important to be able to calculate invariant sets, i.e. to repeatedly compute the Minkowski sum of two sets (e.g. the sum of a polyhedral and an ellipsoidal set).
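To make this concrete, the receding-horizon loop I have in mind looks roughly like the following sketch in JuMP (a toy double integrator with made-up matrices and Ipopt as the solver; not my actual problem):

```julia
using JuMP, Ipopt, LinearAlgebra

# toy double-integrator model (illustrative values only)
A = [1.0 0.1; 0.0 1.0]
B = reshape([0.005, 0.1], 2, 1)
N = 20                    # prediction horizon
x = [1.0, 0.0]            # current state

for t in 1:50             # closed-loop simulation
    m = Model(with_optimizer(Ipopt.Optimizer, print_level=0))
    @variable(m, z[1:2, 1:N+1])
    @variable(m, -1 <= v[1:1, 1:N] <= 1)          # input constraints
    @constraint(m, z[:, 1] .== x)                  # initial condition
    @constraint(m, [i=1:N], z[:, i+1] .== A*z[:, i] + B*v[:, i])
    @objective(m, Min, sum(z[:, i]'*z[:, i] + v[:, i]'*v[:, i] for i in 1:N))
    optimize!(m)
    u = value.(v[:, 1])   # apply only the first input, then re-solve
    global x = A*x + B*u
end
```

In a real controller the re-solve would of course use a measured state rather than the simulated one.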

At my university, we have a Matlab-heavy control lab, and therefore we also use Matlab for MPC, specifically the Yalmip toolbox (for optimization) and the Multi-Parametric Toolbox (https://www.mpt3.org/) for the set calculations. I'd like to replace these toolboxes and Matlab with a Julia framework.
From my experience, I should be able to define and solve the optimization problems with JuMP. But for the set manipulations, I haven't found any packages suitable for my needs.

So I was wondering if there is already some work done on MPC in general, and invariant set calculation in particular, or if there are plans to do so. And are there general caveats on using JuMP for MPC?

Sounds exciting! For polyhedron calculations you can use https://github.com/JuliaPolyhedra/Polyhedra.jl . What else is missing?

1 Like

Ok, I see, positive invariant set calculation. The Matlab code has a GPL license, so you could convert it directly to Julia if you know exactly which functions you need. Not so nice, though; an LGPL or MIT license is more common for Julia libraries.

Thanks. Yes, I saw the Polyhedra.jl package, but as far as I know, the functionality for calculating the Minkowski sum is missing. And most likely, I will also need to do the calculations with ellipsoidal sets.

You can certainly do MPC in JuMP (see https://github.com/rdeits/DynamicWalking2018.jl/blob/master/notebooks/6.%20Optimization%20with%20JuMP.ipynb for a very simple example), and if your baseline is Yalmip in MATLAB then the performance should be good.

An alternative to JuMP is https://github.com/tkoolen/Parametron.jl , which focuses on making it easy to update a problem in-place very efficiently. That can make your problem setup much faster than it would be with JuMP. Parametron.jl and JuMP both use the same solver interface tools, so it should be relatively easy to switch between them.

6 Likes

SetProg.jl is designed for computing invariant sets; it can compute ellipsoidal or polynomial controlled invariant sets that can be used for MPC. Polyhedral invariant sets are not implemented yet, but they are a few lines of code with Polyhedra.jl: https://github.com/JuliaPolyhedra/Polyhedra.jl/blob/master/examples/Computing%20controlled%20invariant%20sets.ipynb
Could you describe what form of invariant set (polyhedral, ellipsoidal?) you are looking for, and what the system is (do you want it to be controlled invariant or just invariant)?
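Regarding the Minkowski sum mentioned above: unless I'm mistaken, Polyhedra.jl does implement it for polyhedra via an overloaded `+`. A minimal sketch (the vertex data is made up for illustration):

```julia
using Polyhedra

# two polytopes given by their vertices (one point per row, V-representation)
P = polyhedron(vrep([0.0 0.0; 1.0 0.0; 0.0 1.0]))               # triangle
Q = polyhedron(vrep([-0.1 -0.1; 0.1 -0.1; -0.1 0.1; 0.1 0.1]))  # small box

S = P + Q             # Minkowski sum via the overloaded `+`
collect(points(S))    # vertices of the sum (possibly including redundant points)
```

The sum of a polyhedron with an ellipsoid is a different story, since the result is no longer a polyhedron.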

3 Likes

Is Parametron.jl expected to be more efficient for MPC than ParameterJuMP (https://github.com/JuliaStochOpt/ParameterJuMP.jl)?

Perhaps, but we haven't tested it. The real difference is that they're focused on different things: ParameterJuMP focuses on scalar parameters and lets you extract the duals corresponding to those parameters, while Parametron.jl takes a much broader view of what a "parameter" is. In Parametron.jl, a Parameter can be literally anything as long as you can provide a value for it and a way to update that value. For example, the entire state of your robot might be a Parameter in Parametron.jl. Parametron.jl also provides tools to construct affine expressions that can reference parameters, including the ability to update those expressions in-place when your parameters change. On the other hand, Parametron.jl doesn't provide any ability to query duals of parameters.

1 Like

Ah, thank you for describing the different intended uses of these two packages.

Thanks, I will have a closer look at SetProg.jl.

Since I am still working on the theoretical description of my system, which will be an economic MPC scheme with a learned model, I haven't figured out everything concerning the implementation yet.
However, I intend to work with a polyhedral invariant set if it is computationally feasible. In MPC terms, this would be the minimal robust positive invariant set (mRPI), or rather an invariant approximation of it, as shown e.g. in Invariant Approximation of mRPI.

See this example for how to compute a minimal robust positive invariant set with Polyhedra.jl.
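For reference, the finite Minkowski-sum recursion behind the usual outer mRPI approximation, F_s = W ⊕ AW ⊕ … ⊕ A^(s-1)W, can be sketched with Polyhedra.jl roughly like this (the matrices are illustrative, and I'm assuming `A * W` computes the linear image of a polyhedron, which Polyhedra.jl supports if I'm not mistaken):

```julia
using Polyhedra

A = [0.9 0.1; 0.0 0.8]   # stable dynamics (illustrative values)
W = polyhedron(vrep([-0.1 -0.1; 0.1 -0.1; -0.1 0.1; 0.1 0.1]))  # disturbance set

s = 5
F, Ak = W, A
for _ in 1:s-1
    global F = F + Ak * W   # add the next term of the Minkowski sum
    global Ak = Ak * A
end
# scaling F by 1/(1 - α) for a suitable α then yields the invariant
# outer approximation of the mRPI set
```

The cost of each `+` grows with the number of vertices, which is why redundancy removal matters for anything beyond small dimensions.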

2 Likes

Thanks for your example. I was trying to run it for a 3D system (i.e. A a 3×3 matrix) and the performance was really bad. However, I did not use CDDLib, since I could not make it build on Windows. For the 2D system, it worked without CDDLib, but for the 3D system, it just stalls.
Is there a way to make that computation faster? Or approach the problem differently?

The V-redundancy removal using LP is now implemented and used by default if the H-representation has not been computed. Therefore you should now get similar performance between CDDLib and other libraries with Polyhedra v0.5.2.

4 Likes

Just out of curiosity @uwechsler: since you are using the MPT toolbox, I suppose you are solving linear-quadratic MPC, i.e. positive definite quadratic cost + linear time-invariant system dynamics + polytopic state/input constraints?

Have you played with nonlinear MPC formulations in Julia, i.e. nonlinear system dynamics of the form \dot{x} = f(x,u) with some initial condition and x \in \mathbb{R}^n, u \in \mathbb{R}^m ?

So far, I have mostly been working on theoretical analysis such as performance guarantees and convergence proofs. But I have implemented some low-dimensional nonlinear examples from academia, such as Example 2 from Robust Economic MPC, which has a piecewise quadratic cost and a "mild" nonlinearity. To do this, I used LazySets, Ipopt and JuMP.

Since I am working on economic MPC, I can use arbitrary continuous objective functions but mostly focus on linear objectives. Over the next months, I plan to implement “my” MPC scheme for an industrial application. First, a linear cost with the linearized model and subsequently, a linear cost (or something crazier) with the full nonlinear model.

The biggest challenge so far in the implementation was computing the polytopic robust invariant sets in higher dimensions (on Windows). But with the great support from @blegat and the people behind LazySets, I could make it work. However, I hope to be able to implement an ellipsoidal set representation in the future.

I see that you do research in a similar area. Are you able to use Julia for your applications or do you mostly use Matlab?

We run a lecture on nonlinear MPC for which we use Matlab together with casadi. The great thing about casadi is that it lets you formulate vector-valued nonlinear equality constraints, i.e. h: \mathbb{R}^{n_x} \rightarrow \mathbb{R}^{n_h}. As far as I know (and I’d love to be corrected here), the current version of JuMP does not have native support for vector-valued user-defined nonlinear functions. Sure, for small nonlinear systems there are workarounds and so on, but still…

1 Like

That sounds very interesting. Is there an official website with a detailed course description?

I don’t think that there is such nice functionality for nonlinear vector-valued functions as there is in the linear-quadratic MPC case, like this:

```julia
using JuMP, Ipopt

N = 20
A, B = rand(3,3), rand(3,2)   # random system matrices for illustration
f(x,u) = A*x + B*u
l(x,u) = x'*x + u'*u
m = Model(with_optimizer(Ipopt.Optimizer))
@variable(m, z[1:3,1:N])
@variable(m, v[1:2,1:N])
@constraint(m, [i=1:N-1], z[:,i+1] .== f(z[:,i],v[:,i]))
@objective(m, Min, sum(l(z[:,i],v[:,i]) for i in 1:N))
```


The workaround for nonlinear systems that I have used so far, and the only method I know, looks something like this:

```julia
using JuMP, Ipopt

m = Model(with_optimizer(Ipopt.Optimizer))
@variable(m, z[1:3])
@variable(m, v[1:2])
f1(x1,x2,x3,u1,u2) = x1^3 + x2^3 + x3^3 + u1^2 + u2^4
JuMP.register(m, :f1, 5, f1, autodiff=true)
@NLconstraint(m, z[1] == f1(z[1],z[2],z[3],v[1],v[2]))
```


but I admit that it feels hacky and can become really annoying, up to practically infeasible, for problems with a large state space.

We have no current plans to add support for vector-valued expressions to JuMP. I’d recommend supporting the effort to get casadi running in Julia: https://github.com/casadi/casadi/issues/1105.
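That said, within current JuMP the scalar-function workaround can be made less painful with a memoization trick: evaluate the vector-valued dynamics once and hand out one scalar component function per output. A sketch (the `scalarize` helper is hypothetical, not a library function; the untyped cache is there so the closures also work when JuMP differentiates through them):

```julia
using JuMP, Ipopt

# Hypothetical helper: wrap a vector-valued g into n_out scalar functions
# that share a single evaluation of g per input point.
function scalarize(g, n_out)
    last_x = nothing
    last_y = Vector{Any}(undef, n_out)
    function component(i, x...)
        if x !== last_x          # recompute only when the input changes
            last_y .= g(x...)
            last_x = x
        end
        return last_y[i]
    end
    return [(x...) -> component(i, x...) for i in 1:n_out]
end

g(x1, x2, x3, u1, u2) = [x1^3 + u1^2, x2^3 + u2^4, x3^3]
g1, g2, g3 = scalarize(g, 3)

m = Model(with_optimizer(Ipopt.Optimizer))
@variable(m, z[1:3])
@variable(m, v[1:2])
JuMP.register(m, :g1, 5, g1, autodiff=true)
JuMP.register(m, :g2, 5, g2, autodiff=true)
JuMP.register(m, :g3, 5, g3, autodiff=true)
@NLconstraint(m, z[1] == g1(z[1], z[2], z[3], v[1], v[2]))
@NLconstraint(m, z[2] == g2(z[1], z[2], z[3], v[1], v[2]))
@NLconstraint(m, z[3] == g3(z[1], z[2], z[3], v[1], v[2]))
```

You still register one function per output, but the dynamics are only evaluated once per input point.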

Thanks for the pointer! I wasn’t aware of these efforts!