I am about to register PeriodicSystems.jl, a new package dedicated to handling linear control systems with periodic time-varying coefficient matrices. For stability analysis, the computation of the characteristic exponents of the state matrix A(t) is a possible approach. I have already implemented a function psceig, which computes the characteristic exponents as the logarithms, scaled by the period, of the eigenvalues of the monodromy matrix (i.e., the state-transition matrix over one period). The following example illustrates the associated computational difficulties and the way to overcome them.
Consider the following periodic matrix of period T = 2pi
A(t) = [ 0 1; -10*cos(t) -24-10*sin(t)]
which has characteristic exponents equal to 0 and -24 (you have to believe me).
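(A partial sanity check: by Liouville's formula, the product of the characteristic multipliers equals the exponential of the integral of tr A(t) = -24 - 10*sin(t) over one period, i.e., exp(-24*2π), so the sum of the characteristic exponents must be -24, which is consistent with the pair {0, -24}.)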
However, when computing the monodromy matrix and its eigenvalues in Float64 arithmetic, one of the resulting characteristic exponents does not have a single exact digit, even with a tight relative tolerance imposed on the solution of the underlying differential equations:
julia> using PeriodicSystems
julia> A = PeriodicFunctionMatrix(t -> [0 1; -10*cos(t) -24-10*sin(t)],2pi);
julia> @time ev = psceig(A; reltol = 1.e-10)
0.004462 seconds (21.75 k allocations: 924.010 KiB, 79.31% compilation time)
2-element Vector{Float64}:
-Inf
-1.4052750767605842e-13
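For reference, the essence of this computation (only a rough sketch of the approach, not the actual psceig code) can be reproduced without the package using OrdinaryDiffEq.jl, by integrating the matrix differential equation over one period and taking the scaled logarithms of the eigenvalues of the resulting monodromy matrix:

using OrdinaryDiffEq, LinearAlgebra
A(t) = [0 1; -10*cos(t) -24-10*sin(t)]
T = 2pi
# integrate Φ'(t) = A(t)*Φ(t), Φ(0) = I, over one period to get the monodromy matrix
prob = ODEProblem((Φ, p, t) -> A(t)*Φ, Matrix{Float64}(I, 2, 2), (0.0, T))
Φ_T = solve(prob, Tsit5(); reltol = 1.e-10, abstol = 1.e-10).u[end]
# characteristic exponents = logarithms of the eigenvalues, divided by the period;
# the eigenvalue associated with the exponent -24 is swamped by rounding and
# integration errors, so its logarithm is meaningless
ce = log.(complex.(eigvals(Φ_T))) ./ T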
The reason is that the characteristic multiplier corresponding to the exponent -24 is exp(-24*2π) ≈ 3e-66, many orders of magnitude below the achievable accuracy of the computed monodromy matrix, so this eigenvalue (and hence its logarithm) is completely swamped by rounding and integration errors. However, full accuracy can be achieved by determining the monodromy matrix as a product of, say, 500 matrices and computing the eigenvalues using the periodic Schur decomposition:
julia> @time ev = psceig(A,500; reltol = 1.e-10)
0.029436 seconds (329.04 k allocations: 21.171 MiB)
2-element Vector{Float64}:
1.8376538159566882e-15
-23.99999999997412
Note that the evaluation of the 500 factors can be done in parallel, in which case a substantial speedup of computations can be achieved.
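To illustrate the idea (again only a sketch of the underlying scheme, not the actual implementation), the 500 factors are the state-transition matrices over 500 subintervals of the period; they can be computed independently, e.g., with threads, and the eigenvalues of their formal product are then obtained from the periodic Schur decomposition of the collected factors, without ever forming the product explicitly:

using OrdinaryDiffEq, LinearAlgebra
A(t) = [0 1; -10*cos(t) -24-10*sin(t)]
T = 2pi; K = 500
ts = range(0, T; length = K+1)
factors = Vector{Matrix{Float64}}(undef, K)
# the K transition matrices over the subintervals [ts[k], ts[k+1]] are independent
# of each other, so their evaluation is embarrassingly parallel
Threads.@threads for k in 1:K
    prob = ODEProblem((Φ, p, t) -> A(t)*Φ, Matrix{Float64}(I, 2, 2), (ts[k], ts[k+1]))
    factors[k] = solve(prob, Tsit5(); reltol = 1.e-10, abstol = 1.e-10).u[end]
end
# the monodromy matrix is factors[K]*...*factors[1]; its eigenvalues are best
# computed directly from the factors via the periodic Schur decomposition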
I recently included the ApproxFun.jl package as an alternative way to define periodic matrices and performed the same computation using operator-based techniques. I must say, I was pleasantly surprised to obtain almost the same accuracy as before:
using ApproxFun, LinearAlgebra
s = Fourier(0..2π);
a = Fun(t -> [0 1; -10*cos(t) -24-10*sin(t)],s);
d = domain(a);
D = Derivative(d);
ND = [D 0I; 0I D];
A = a-ND;
N = 50; n = (2*N+1)*2;
@time ev = eigvals(Matrix(A[1:n,1:n]));
0.044203 seconds (88.73 k allocations: 4.403 MiB)
ev[sortperm(imag(ev),by=abs)][1:4]
4-element Vector{ComplexF64}:
-24.000000000000135 + 0.0im
-23.999999585663453 + 0.0im
-6.042828006006208e-14 + 0.0im
0.0 + 0.0im
I guess that the operator A is a Fredholm operator (i.e., the sum of the diagonal operator ND and a Toeplitz operator, but I am not sure). I only displayed the eigenvalues in the central strip (i.e., in this case, those with zero imaginary parts), using a truncated representation with 2*N+1 blocks, which is typical for Toeplitz operators (I am not sure this is the best choice).
My questions are:
- What is the explanation for obtaining the characteristic exponents with double multiplicities?
- Is there a nice compact way to build the diagonal operator ND for an A(t) of arbitrary order?