Computing the rank of a symbolic matrix in Julia

Hello. The context is that of discrete state space representations.

x_{t}=Ax_{t-1}+Bu_{t}
y_{t}=Cx_{t-1}+Du_{t}

I am making observational assumptions on the rows of x_{t} and I need to work symbolically. Specifically, I must construct controllability and observability matrices and verify the rank conditions for minimality, reducing the representation to a minimal one, all symbolically. In Matlab I could do this with the syms command and then simply use the basic linear algebra functions, more or less as follows.

syms a1111 a1112 a1121 a1122 b1111 b1121 b21
A=[a1111 a1112 0; a1121 a1122 0; zeros(1,3)];
B=[b1111; b1121; b21];
M=[1 0 0]; C=M*A; D=M*B;
Con=[B A*B A*A*B];
rc=rank(Con);

In Julia I have tried to use SymEngine and SymPy, but both fail to compute the rank. The code looks something like this.

using SymEngine, SymPy, ControlSystems, LinearAlgebra
@vars a1111 a1112 a1121 a1122 b1111 b1121 b21
A=[a1111 a1112 0; a1121 a1122 0; zeros(1,3)]
B=[b1111; b1121; b21]
M=[1 0 0]; C=M*A; D=M*B
Con=[B A*B A*A*B]
rc=rank(Con)

This is the error.

MethodError: no method matching copysign(::Basic, ::Basic)
Closest candidates are:
copysign(::Basic, !Matched::SymEngine.BasicType) at /Users/Name/.julia/packages/SymEngine/lLmHe/src/mathops.jl:89
copysign(!Matched::Sym, ::Number) at /Users/Name/.julia/packages/SymPy/fI5pg/src/generic.jl:26

How can I compute the rank of a symbolic matrix in Julia? I hope it is possible; I’m obviously no cognoscente of Julia and appreciate any help. Thanks a lot.

Won’t the rank depend on the value of the matrix entries? In what form would you expect the result of the call to the rank function?


Thanks for the feedback. The ranks do depend on the entries, but in Matlab the rank of the controllability matrix is 3 and that of the observability matrix is 2, because the symbolic elements are treated as generic, so linear independence is maximal. The system is thereby non-observable, since x_{t} has 3 rows. I am only hoping to reproduce the same symbolic rank computation through some package in Julia.

So… to simplify the notation somewhat… you have an A matrix:

A=\left(\begin{array}{ccc} a_{11} & a_{12} & 0\\ a_{21} & a_{22} & 0\\ 0 & 0 & 0 \end{array}\right)

and a full B vector; you indicate a C matrix equal to the first row of A (and a D element being b_1). Thus, you have:

{\cal C}\left(s\right)=\left(\begin{array}{cc} A-sI & B\end{array}\right)=\left(\begin{array}{cccc} a_{11}-s & a_{12} & 0 & b_{1}\\ a_{21} & a_{22}-s & 0 & b_{2}\\ 0 & 0 & -s & b_{3} \end{array}\right)

Similarly, you can define

{\cal O}\left(s\right)=\left(\begin{array}{c} A-sI\\ C \end{array}\right)=\left(\begin{array}{ccc} a_{11}-s & a_{12} & 0\\ a_{21} & a_{22}-s & 0\\ 0 & 0 & -s\\ a_{11} & a_{12} & 0 \end{array}\right)

So according to the Popov-Belevitch-Hautus (PBH) controllability test, the system is controllable if \forall s\in \mathbb{C}:\ \mathrm{rank}\ {\cal C}(s) = 3, which cannot be ruled out here.

Similarly, the system is observable if \forall s\in \mathbb{C}:\ \mathrm{rank}\ {\cal O}(s) = 3 — here, it is easily seen that \mathrm{rank}\ {\cal O}(s=0) \le 2, so the system is not observable.

OK – you already know this. My point is that it is perhaps easier to use the PBH test for symbolic computation than the standard Kalman tests.

On the other hand… it may be difficult to get the symbolic tool to understand that this has to be fulfilled for all s… In reality, though, it suffices to test for s being an eigenvalue of A.
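For what it’s worth, here is a rough sketch of how that check might look in SymPy.jl, using the simplified names above (pbh_ctrb and pbh_obsv are just my own helper names, and the symbolic ranks at the non-zero eigenvalues may be slow or need simplification):

using SymPy, LinearAlgebra

# Simplified symbolic system from above
@vars a11 a12 a21 a22 b1 b2 b3
A = [a11 a12 0; a21 a22 0; 0 0 0]
B = [b1; b2; b3]
C = [a11 a12 0]                   # C equals the first row of A (C = M*A with M = [1 0 0])

# PBH pencils
pbh_ctrb(s) = [(A - s*I) B]       # [A - sI  B]
pbh_obsv(s) = [(A - s*I); C]      # [A - sI; C]

# It suffices to check the pencils at the eigenvalues of A
for s in eigvals(A)
    @show s pbh_ctrb(s).rank() pbh_obsv(s).rank()
end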


Thank you. Your tip to work analytically with the PBH tests instead of the controllability and observability matrices, especially for the other assumptions I need to consider, is expedient: the pencil-and-paper computations should go more smoothly and effectively if I cannot get Julia to help me, although I would still appreciate that help (as Matlab provided it). PS. The notation is cumbersome because the state equation is that of Sims’ \texttt{gensys}, set up so that the non-expectational variables x_{1t} are split symmetrically into measurable/observable and non-measurable parts: [x_{1t} \; x_{2t}]^{\top}=[(A_{11} \; 0) \; (0 \; 0)]^{\top}x_{t-1}+[B_{11} \; B_{21}]^{\top}u_{t} with x_{1t}=[x_{1mt} \; x_{1nt}]^{\top}. Other scenarios involve the trivial case of full measurability of x_{1t} and asymmetric ones; this is all ultimately relevant for the condition for a VAR representation of the states in the outputs.


UPDATE. SymEngine and SymPy don’t seem to generate symbolic matrix inverses either, nor eigenvalues. This is problematic because the VAR representation of the x_{t} in y_{t} must satisfy the eigenvalue condition |\lambda_{F_{m}}|<1 for F_{m}=A_{m}-B_{m}D^{-1}C_{m}. Admitting 4 state entries (in order to hypothesise more measurable/observable non-expectational variables than non-measurable ones, namely x_{t}=[x_{1mt} \; x_{1n_{1}t} \; x_{1n_{2}t} \; x_{2t}]^{\top}) renders analytical calculations essentially impossible. Returning to Matlab at present isn’t really an option: does Julia truly lack a sufficient symbolic package for linear algebra?

UPDATE. SymEngine and SymPy don’t seem to generate symbolic matrix inverses either, nor eigenvalues.

This isn’t quite true:

julia> using SymPy

julia> M = Sym["a$i$j" for i in 1:2, j in 1:2]
2×2 Array{Sym,2}:
 a₁₁  a₁₂
 a₂₁  a₂₂

julia> inv(M)
2×2 Array{Sym,2}:
  a22/(a11*a22 - a12*a21)  -a12/(a11*a22 - a12*a21)
 -a21/(a11*a22 - a12*a21)   a11/(a11*a22 - a12*a21)

julia> using LinearAlgebra

julia> eigvals(M) # also M.eigenvals()
2-element Array{Sym,1}:
 a11/2 + a22/2 - sqrt(a11^2 - 2*a11*a22 + 4*a12*a21 + a22^2)/2
 a11/2 + a22/2 + sqrt(a11^2 - 2*a11*a22 + 4*a12*a21 + a22^2)/2

julia> eigvecs(M) # also M.eigenvects()
2×2 Array{Sym,2}:
 -2*a12/(a11 - a22 + sqrt(a11^2 - 2*a11*a22 + 4*a12*a21 + a22^2))  …  -2*a12/(a11 - a22 - sqrt(a11^2 - 2*a11*a22 + 4*a12*a21 + a22^2))
                                                                1                                                                    1

julia> M.rank()
2
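Presumably the same machinery applies directly to the matrices from the original post. A quick sketch (loading only SymPy, to avoid the clash with SymEngine’s @vars; if SymPy’s rank treats the symbols as generic, the ranks should match the Matlab values of 3 and 2 reported above):

using SymPy, LinearAlgebra

@vars a1111 a1112 a1121 a1122 b1111 b1121 b21
A = [a1111 a1112 0; a1121 a1122 0; 0 0 0]
B = [b1111; b1121; b21]
M = [1 0 0]
C = M*A                        # 1×3 symbolic row
D = M*B                        # not needed for the rank checks

Con = [B A*B A*A*B]            # Kalman controllability matrix
Obs = [C; C*A; C*A*A]          # Kalman observability matrix

Con.rank(), Obs.rank()         # expected 3 and 2 for generic entries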

Thanks, by all means: that’s why I had written “don’t seem”. How do I define symbolic variables from the outset in SymPy as in Matlab? I used the following in SymEngine.

syms a1111 a1112 a1121 a1122 b1111 b1121 b21 (Matlab)
@vars a1111 a1112 a1121 a1122 b1111 b1121 b21 (SymEngine)

You can use @vars, as in SymEngine; Sym(), as in the example; symbols, as in symbols("a"); or sympify, as in sympify("a"), to define symbolic variables.
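For example (a quick sketch; any of these should give Sym objects you can put into matrices):

using SymPy

@vars a1111 a1112 b1111        # macro form, like SymEngine's @vars
x = Sym("x")                   # constructor form
y, z = symbols("y z")          # SymPy-style symbols
w = sympify("w")               # parse a string into a symbolic expression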

OK, thanks (I also asked because the error appeared to say that @vars conflicted with SymPy). Is there a way to define subscripts of subscripts, that is, a_{11_{11}} and the like? I shall try the commands you suggested for the tasks at hand anyway and report as soon as possible.

Sure, Sym will add them for you in interpolation, as in the example. But I’d suggest you just use unicode subscripts (e.g. \_1[tab]) in the name if you want references to each.

By the way, BLI: can the transformation matrix \mathcal{T} be constructed from the PBH \mathcal{O}(s=0) matrix (as in \mathcal{T}=[\mathcal{O}(s=0)_{r_{\mathcal{O}(s=0)}} \; v_{n_{x}-r_{\mathcal{O}(s=0)}}]^{\top}), or must it pass through the standard observability matrix \mathcal{O}=[C \; CA \; \cdots \; CA^{n_{x}-1}]^{\top}, so that \mathcal{T}=[\mathcal{O}_{r_{\mathcal{O}}} \; v_{n_{x}-r_{\mathcal{O}}}]^{\top}, x_{co\bar{o}t}=\mathcal{T}^{-1}x_{t}, and x_{cot} equals the first r_{\mathcal{O}}=r_{\mathcal{O}(s=0)} entries of x_{co\bar{o}t}? Thank you.

I appreciate your help, although I didn’t fully understand.

I don’t recall – it’s some years since I taught linear systems theory; I could have checked my books, but my employer is upgrading our office space, so I’ve had to dump 16 boxes of my technical books in the basement for 6 months… :frowning:

I recommend checking Wilson Rugh’s book on Linear Systems Theory: it has a pretty good proof of the PBH test (and of other controllability tests). I also like Chi-Tsong Chen’s book on Linear Systems Theory and Design. Both are readable with a decent background in linear algebra (e.g., Gilbert Strang’s books). Chen’s book essentially bases most of the development on QR factorization and similar ideas.


Sorry, I missed the subscripts of subscripts. That I don’t know how to do.

No problem. I’m grateful.

OK, makes sense. The references are well received.

I ran the following commands for the scenario I’m envisaging (i.e. n_{x_{n}}>n_{x_{m}}, such that x_{t}=[x_{1mt} \; x_{1_{1}nt} \; x_{1_{2}nt} \; x_{2t}]^{\top}). May I politely ask you or the others to check whether they succeed in delivering the symbolic eigenvalues at the end? On my machine the computation appears to take an impossibly long time. Thanks in advance.

using SymPy, LinearAlgebra
A = [Sym["a$i$j" for i in 1:3, j in 1:3] zeros(3, 1); zeros(1, 4)]; B = [Sym["b$i" for i in 1:3]; Sym("b2")];
M = [1 0 0 0]; C = M*A; D = (M*B)[1];             # D as a scalar, so that inv(D) works below
Obs = [C; C*A; C*A^2; C*A^3]; ro = Obs.rank();
T = [Obs[1:3, 1:4]; 0 0 0 1];
Ad = inv(T)*A*T; Bd = inv(T)*B; Cd = C*T;
Am = Ad[1:3, 1:3]; Bm = Bd[1:3]; Cm = Cd[1:1, 1:3];
Fm = Am - Bm*inv(D)*Cm; eigvals(Fm)

Do you remember if the reduction to minimality may happen by merely selecting the first r_{m} states of \bar{x}_{t}=\mathcal{T}^{-1}x_{t}, where r_{m}=\min\{r_{\mathcal{C}}, \; r_{\mathcal{O}}\} and \mathcal{T}=[\mathcal{O}_{r_{\mathcal{O}}} \; v_{n_{x}-r_{\mathcal{O}}}]^{\top} or \mathcal{T}=[\mathcal{C}_{r_{\mathcal{C}}} \; v_{n_{x}-r_{\mathcal{C}}}] accordingly? That is, does it suffice to transform the states through the lower-rank matrix between the controllability and observability ones, or must the higher-rank matrix be checked again in the transformed system before declaring it minimal (since it is possible that n_{\bar{x}}>r_{\bar{\mathcal{C}}\vee\bar{\mathcal{O}}})? Much obliged.

Not sure. It’s been too long… Note: I have “abused” the use of symbols \cal C and \cal O – in realization theory, these are used for the Kalman matrices.

In a paper from 2011 ( https://ieeexplore.ieee.org/document/5778169 ), some researchers explore the possibility of combining the PBH test with a transformation of the pencil to the Smith Normal Form. I’m not sure if the Smith Normal Form works for symbolic pencils, though – it would be great if it does. I’ve only used the Smith Normal Form to explore transmission zeros and poles (the related Smith-McMillan form is trivial if you have an algorithm for the Smith Normal Form). I did it in Maple eons ago; it is possible that this only works for integer elements in symbolic tools.

If Julia had a function for the (rectangular) Smith Normal Form, that would be fantastic (I guess SymPy has one…). If so, you could probably generate random integers, put them in the same locations as your symbolic values to preserve the structure, and by repeating this I’d guess you could, with some probability, say something about controllability and observability.

You may perhaps also use random numbers instead of your symbols and do numeric controllability and observability tests (Gramian, Kalman criterion, PBH, etc.). If you run the tests for a number of random draws (preserving the structure), I’d guess you can say something about controllability and observability.
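For what it’s worth, a rough sketch of that last idea (my own helper names; it draws random numbers into the zero pattern of the 3-state system from the original post and checks the Kalman ranks numerically):

using LinearAlgebra

# One random numeric instance that keeps the zero structure of A, B, C
function random_instance()
    a = randn(2, 2)                    # random a11, a12, a21, a22
    A = [a zeros(2, 1); zeros(1, 3)]
    B = randn(3)                       # random b1, b2, b3
    C = [1 0 0] * A                    # C = M*A, i.e. the first row of A
    return A, B, C
end

# Numeric Kalman ranks for one draw
function kalman_ranks()
    A, B, C = random_instance()
    Con = [B A*B A^2*B]
    Obs = [C; C*A; C*A^2]
    return rank(Con), rank(Obs)
end

# Repeat: the maximum over draws is, with probability 1, the generic rank
ranks = [kalman_ranks() for _ in 1:100]
maximum(first.(ranks)), maximum(last.(ranks))    # expect 3 and 2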