Specifically I registered but missed “Trimming, Linearization, and Model Predictive Control (MPC) Design with JuliaSim” on 11/2. Is there a recording?
Nice presentation.
Regarding the slide on linearization (3. Linearization): it is not clear to me why the time derivatives of x, z, and u suddenly popped up in the linearized expression for the algebraic equation 0 = g(x, z, u).
The text in the Pluto notebook is somewhat small (even after magnification), even when watched on my 55" TV…
Apart from that, nice to see DAE terms such as descriptor.
Thank you for your feedback!
The algebraic equation is differentiated w.r.t. time in an attempt to perform index reduction from index 1 to index 0; we subsequently solve for the algebraic variables explicitly to obtain a standard LTI system. In this process, the time derivatives of the inputs may appear (common for inverse models), but if they don’t, we succeed and obtain the desired LTI system.
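To make this step concrete, here is a small symbolic sketch (in Python/SymPy for illustration, not MTK) with a made-up scalar algebraic equation g(x, z, u) = z − x·u. Differentiating the constraint w.r.t. time and solving for ż shows exactly where the input derivative sneaks in:

```python
import sympy as sp

t = sp.symbols("t")
x, z, u = (sp.Function(n)(t) for n in ("x", "z", "u"))

# Hypothetical index-1 algebraic equation: 0 = g(x, z, u) = z - x*u
g = z - x * u

# Differentiate the constraint w.r.t. time: the index-reduction step.
dg = sp.diff(g, t)

# Solve explicitly for z' to obtain a differential equation for z.
zdot = sp.solve(sp.Eq(dg, 0), sp.diff(z, t))[0]
print(zdot)  # contains Derivative(u(t), t): the input derivative appears
```

If g had not depended on u directly, no u̇ term would show up and the reduction to a standard LTI system would go through without requiring the input derivative.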
Ah.
Two follow-up questions/comments:

Does MTK’s `structural_simplify` algorithm guarantee that the DAE is reduced to index 1 or 0?
If the index is known to be 1, is there any advantage in reducing the equation to index 0?
– you know that the index has been reduced to 0 if g_z is nonsingular
– then you have two differential equations, one for x and one for z, so you can use an ODE solver
– however, the state is still x, i.e., you can only choose x(0) freely – z(0) is still constrained to satisfy the algebraic equation 0 = g(x,z,u) at initial time
– because ODE solvers introduce errors, you should probably “prune” the solution (x(t),z(t)) found by the ODE solver with the algebraic constraint for each time step
– an alternative would be to instead specify x(0), then solve the algebraic equation to find z(0) [which you have to do whether you reduce the index to 0 or keep it at 1], and then for each time step with x(t) given – find z(t) by solving the (implicit) algebraic equation
– [a possible advantage of reducing the model to index 0 could be that one might get a better initial guess when solving the algebraic equation for z(t), I guess… but then you also have to handle the need for computing \dot{u}]
What do you think?
MTK always reduces to at most index 1, but may also reduce to index 0.
There are downsides to index reduction as well. In the simple case of the Cartesian pendulum, the algebraic equation
x^2 + y^2 = l^2
encodes exactly what you want and is index 3. The index-2 variant encodes that the velocity is tangential to the circle, and the index-1 variant that the acceleration points inward. With each index reduction, you get one additional integration that accumulates error between what you are actually solving and what you really want to enforce.
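This chain of constraints can be reproduced by differentiating the position constraint, as a quick symbolic check (Python/SymPy used here purely for illustration):

```python
import sympy as sp

t, l = sp.symbols("t l", positive=True)
x, y = sp.Function("x")(t), sp.Function("y")(t)

# Index-3 (position) constraint of the Cartesian pendulum.
g_pos = x**2 + y**2 - l**2

# Differentiate once: velocity tangential to the circle (index-2 form).
g_vel = sp.diff(g_pos, t) / 2   # x*x' + y*y'

# Differentiate again: the acceleration condition (index-1 form).
g_acc = sp.diff(g_pos, t, 2) / 2  # x*x'' + y*y'' + x'^2 + y'^2

print(sp.expand(g_vel))
print(sp.expand(g_acc))
```

A solver enforcing only g_acc = 0 integrates twice to recover the position, so errors in those two integrations let the trajectory drift off the circle even though the enforced residual stays small.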
I’m not sure what you mean by “prune”, but this is one of the reasons for solving the index-1 DAE directly instead of reducing the index to 0: to avoid excessive error accumulation through integration.
This is the approach taken to generate sigma points in the UKF for DAEs I mentioned in the talk.
I used “prune” in the meaning of “trimming”. So if one solves the index-0 ODE set when the elements of (x, z) are not independent, one could, e.g., assume that x is “correct” and iterate on z so that 0 = g(x, z, u) is satisfied at each time step. The advantage of this may be that z(t) from the ODE solver is probably closer to the value satisfying 0 = g(x, z, u) at time t than z(t − dt) is. On the other hand, with reasonably small time steps, z(t − dt) may also be sufficiently close to the true solution for convergence, and then one avoids introducing the time derivative of the input.
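The per-step correction described above can be sketched generically (Python/SciPy for illustration; the residual g and all values are made up, in practice g comes from the model):

```python
from scipy.optimize import fsolve

def g(z, x, u):
    # Hypothetical algebraic residual; stands in for the model's g(x, z, u).
    return z**3 + z - x * u

def prune_z(x, u, z_guess):
    """Given x (assumed 'correct') and input u, iterate on z so that
    0 = g(x, z, u) holds, starting from a nearby guess such as the ODE
    solver's z(t) or the previous step's z(t - dt)."""
    z_sol, = fsolve(g, z_guess, args=(x, u))
    return z_sol

# One "time step": x and u given, a nearby value used as initial guess.
z = prune_z(x=2.0, u=3.0, z_guess=1.0)
print(z, g(z, 2.0, 3.0))  # residual should be ~0 after the correction
```

Whether z(t) or z(t − dt) is used as the guess only affects how many Newton-type iterations the solve takes, not the corrected value it converges to (assuming both guesses lie in the same basin of attraction).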
This is indeed how you would obtain the algebraic variables if you had reduced the system to index 0. The problem, I believe, lies in
assume that x is “correct”
since we do expect numerical drift in such a scenario, causing the original, high-index constraint to be violated.
The problem of the time derivative of the input being required may be fundamental to the model: if the model represents a non-proper system, one has no choice but to either provide the input derivative or add additional filter poles and extract the derivative from the filter states. For linearization, where unspecified inputs are present, it may happen that MTK fails to find the correct causalisation, and the model may look non-causal even though it is not. In such cases, one may employ further numerical (as opposed to symbolic) simplification to obtain a causal model (this is done automatically), but if the model is truly non-causal, the input derivatives will be required.
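The filter-pole workaround can be illustrated numerically: a first-order filter ẋ_f = (u − x_f)/τ makes (u − x_f)/τ an approximation of u̇ for frequencies well below 1/τ. A plain Python/NumPy sketch (τ, the step size, and the test signal are all made up for illustration):

```python
import numpy as np

tau, dt = 1e-3, 1e-5
ts = np.arange(0.0, 1.0, dt)
u = np.sin(2 * np.pi * ts)  # example input signal

# First-order filter state: x_f' = (u - x_f) / tau  (forward Euler).
xf = np.zeros_like(u)
for k in range(len(ts) - 1):
    xf[k + 1] = xf[k] + dt * (u[k] - xf[k]) / tau

# Approximate input derivative extracted from the filter state.
udot_est = (u - xf) / tau
udot_true = 2 * np.pi * np.cos(2 * np.pi * ts)

# After the initial transient, the estimate tracks the true derivative.
half = len(ts) // 2
err = np.max(np.abs(udot_est[half:] - udot_true[half:]))
print(err)  # small relative to max |u'| = 2*pi
```

The price is the extra filter state and a phase lag of roughly τ, which is the usual trade-off when a non-proper model is made proper by adding poles.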
Maybe I’m misunderstanding what you are suggesting here and my comments are off point?