# Two questions related to using operators in ApproxFun.jl

I am about to register PeriodicSystems.jl, a new package dedicated to handling linear control systems with periodic time-varying coefficient matrices. For stability analysis, one possible approach is the computation of the characteristic exponents of the state matrix A(t). I have already implemented a function `pseig`, which computes the characteristic exponents as the logarithms of the eigenvalues of the monodromy matrix (i.e., the state-transition matrix over one period). The following example illustrates the associated computational difficulties and the way to overcome them.

Consider the following periodic matrix of period T = 2pi
`A(t) = [ 0 1; -10*cos(t) -24-10*sin(t)]`
which has characteristic exponents equal to 0 and -24 (you have to believe me; at least Liouville's formula, with the trace integral over one period equal to -48π, confirms that their sum is -24).

However, by computing the monodromy matrix and its eigenvalues, one of the resulting characteristic exponents has not a single exact digit in Float64-based computations, even with a tight relative tolerance set for solving the underlying differential equations:

``````julia
julia> using PeriodicSystems

julia> A = PeriodicFunctionMatrix(t -> [0 1; -10*cos(t) -24-10*sin(t)],2pi);

julia> @time ev = psceig(A; reltol = 1.e-10)
0.004462 seconds (21.75 k allocations: 924.010 KiB, 79.31% compilation time)
2-element Vector{Float64}:
-Inf
-1.4052750767605842e-13
``````
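
The severity of the loss is easy to see from the conditioning of the logarithm alone: the monodromy eigenvalue corresponding to the exponent -24 is exp(-24·2π) ≈ 5.4e-66, many orders of magnitude below the Float64 rounding level of the dominant eigenvalue, so any absolute perturbation of order 1e-16 in the monodromy matrix wipes it out completely. A back-of-the-envelope sketch (not the package code, just the arithmetic):

``````julia
T = 2π
exact = [0.0, -24.0]          # the exact characteristic exponents
μ = exp.(exact .* T)          # monodromy eigenvalues: [1.0, ≈5.4e-66]

# an absolute perturbation at the rounding level of the dominant
# eigenvalue completely masks the tiny one
μ_pert = μ .+ 1e-16
recovered = log.(μ_pert) ./ T # second entry ≈ -5.86: no correct digits left
``````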

However, full accuracy can be achieved by determining the monodromy matrix as a product of, say, 500 matrices and computing the eigenvalues using the periodic Schur decomposition:

``````julia
julia> @time ev = psceig(A,500; reltol = 1.e-10)
0.029436 seconds (329.04 k allocations: 21.171 MiB)
2-element Vector{Float64}:
1.8376538159566882e-15
-23.99999999997412
``````

Note that the evaluation of the 500 factors can be done in parallel, in which case a substantial speedup of the computation can be achieved.
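
A rough illustration of the factored approach (this is not the `pseig` implementation; the fixed-step RK4 integrator, the step counts, and the threading scheme below are my own choices for the sketch): the K transition matrices over the subintervals are mutually independent, so they can be computed concurrently, and the monodromy matrix is their ordered product.

``````julia
using LinearAlgebra

# the example system
A(t) = [0.0 1.0; -10cos(t) (-24 - 10sin(t))]

# state-transition matrix over [t0, t0 + h] via fixed-step RK4 on Φ' = A(t)Φ
function transition(t0, h, steps)
    Φ = Matrix{Float64}(I, 2, 2)
    dt = h / steps
    t = t0
    for _ in 1:steps
        k1 = A(t) * Φ
        k2 = A(t + dt/2) * (Φ + dt/2 * k1)
        k3 = A(t + dt/2) * (Φ + dt/2 * k2)
        k4 = A(t + dt) * (Φ + dt * k3)
        Φ += dt/6 * (k1 + 2k2 + 2k3 + k4)
        t += dt
    end
    Φ
end

T = 2π; K = 500; h = T / K
factors = Vector{Matrix{Float64}}(undef, K)
Threads.@threads for j in 1:K          # the factors are independent
    factors[j] = transition((j - 1) * h, h, 40)
end
Ψ = reduce(*, reverse(factors))        # monodromy matrix Φ_K ⋯ Φ_1
``````

Note that forming the explicit product still buries the eigenvalue exp(-48π) far below the rounding error of the dominant one; the point of the periodic Schur decomposition is precisely to obtain the eigenvalues from the factors without ever forming Ψ.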

I recently included the ApproxFun.jl package as an alternative way to define periodic matrices and performed the same computation using operator-based computations. I must say, I was pleasantly surprised to obtain almost the same accuracy as before:

``````julia
using ApproxFun, LinearAlgebra
s = Fourier(0..2π);
a = Fun(t -> [0 1; -10*cos(t) -24-10*sin(t)],s);
d = domain(a);
D = Derivative(d);
ND = [D 0I; 0I D];
A = a-ND;
N = 50; n = (2*N+1)*2;
@time ev = eigvals(Matrix(A[1:n,1:n]));
0.044203 seconds (88.73 k allocations: 4.403 MiB)
ev[sortperm(imag(ev),by=abs)][1:4]
4-element Vector{ComplexF64}:
-24.000000000000135 + 0.0im
-23.999999585663453 + 0.0im
-6.042828006006208e-14 + 0.0im
0.0 + 0.0im
``````

I guess that the operator A is a Fredholm operator (i.e., the sum of the diagonal operator ND and a Toeplitz operator, but I am not sure). I only displayed the eigenvalues in the central strip (i.e., in this case, those with zero imaginary parts), using a truncated representation with 2*N+1 blocks, which is typical for Toeplitz operators (not sure if this is the best choice).

My questions are:

1. What is the explanation for obtaining the characteristic exponents with double multiplicities?
2. Is there a nice compact way to build the block-diagonal operator ND for an arbitrary order of A(t)?
1. Probably you could look at the eigenvectors to figure out why they are repeated.

2. No I don’t think so.


FWIW, I tried running the problem in Chebfun, which is ApproxFun’s favorite Matlab uncle. The eigenvalues it gives me are

``````
-0.000000000000067 + 0.000000000000000i
-0.000000000000067 - 1.000000000000009i
-0.000000000000067 + 1.000000000000009i
-0.000000000000064 - 2.000000000000016i
-0.000000000000064 + 2.000000000000016i
-0.000000000000067 - 3.000000000000025i
-0.000000000000067 + 3.000000000000025i
...
-0.000000000000005 -23.000000000000210i
-0.000000000000005 +23.000000000000210i
-0.000000000000007 -24.000000000000234i
-0.000000000000007 +24.000000000000234i
24.000000047940063 + 0.000000000000000i
23.999999974147293 - 1.000000041935243i
23.999999974147293 + 1.000000041935243i
23.999999978621233 - 1.999999919103405i
23.999999978621233 + 1.999999919103405i
24.000000054662237 - 3.000000104379710i
24.000000054662237 + 3.000000104379710i
23.999999950623149 - 3.999999901300158i
23.999999950623149 + 3.999999901300158i
24.000000012168623 - 5.000000066251817i
24.000000012168623 + 5.000000066251817i
``````

Is that any less surprising?


To clarify, I actually used the negative of the operator in the original code snippet.

The computed values are exactly what I would expect to obtain, and their accuracy is indeed completely satisfactory. I suppose there are no double eigenvalues at -24 and 0 (it is not possible to check this from your result).
Apparently you used a different value of N (probably less than 50).

For comparison, these are the actually computed eigenvalues (sorted according to the magnitude of their imaginary parts):

``````
202-element Vector{ComplexF64}:
-24.000000000000135 + 0.0im
-23.999999585663453 + 0.0im
-6.042828006006208e-14 + 0.0im
0.0 + 0.0im
-24.000000393549527 - 0.9999999987185656im
-24.000000393549527 + 0.9999999987185656im
1.4765966227514582e-14 - 1.000000000000013im
1.4765966227514582e-14 + 1.000000000000013im
1.0936130473426786e-14 - 2.0000000000000027im
1.0936130473426786e-14 + 2.0000000000000027im
-23.99999966468944 - 2.0000000019980897im
-23.99999966468944 + 2.0000000019980897im
⋮
-15.976780830674567 + 46.5840563132711im
-32.02321916932537 - 46.58405631327116im
-32.02321916932537 + 46.58405631327116im
-1.7763568394002505e-14 - 46.99999999999988im
-1.7763568394002505e-14 + 46.99999999999988im
1.2434497884199578e-14 - 48.00000000000027im
1.2434497884199578e-14 + 48.00000000000027im
-2.842170943040401e-14 - 49.00000000000017im
-2.842170943040401e-14 + 49.00000000000017im
-1.1102230246251565e-14 - 50.000000000000064im
-1.1102230246251565e-14 + 50.000000000000064im
``````

Chebfun chooses the discretization size automatically. The eigenfunctions are apparently discretized satisfactorily at 85 points, but that’s with a Chebyshev rather than a Fourier method; a Fourier method might use a factor of 2/π fewer points, i.e. N = 54 or so.

The eigenfunction at -24 is low-frequency and therefore well resolved.

I am now able to compute the expected two real characteristic exponents without multiplicities. I employed another approach, by explicitly generating the truncated Toeplitz − E(j0) operator described in
J. Zhou, T. Hagiwara, and M. Araki, “Spectral characteristics and eigenvalues computation of the harmonic state operators in continuous-time periodic systems,” Systems and Control Letters, vol. 53, pp. 141–155, March 2004.
and computing its eigenvalues. For N = 50, the resulting 8 eigenvalues with the smallest imaginary parts are:

``````
8-element Vector{ComplexF64}:
9.51898692030938e-15 - 1.5747448610226719e-16im
-24.00000020745413 + 3.0152922283089535e-8im
-23.99999976835794 + 0.9999999341699454im
-4.068736769884833e-15 + 0.9999999999999764im
3.4333666581810447e-15 - 1.000000000000011im
-23.999999877051295 - 1.0000000414655406im
-23.99999999433544 - 1.9999999296315698im
1.2104134588879378e-14 + 1.999999999999935im
``````

which do not exhibit any multiplicities for the first two eigenvalues in the central strip.
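
For reference, this construction can be reproduced in a few lines without any package. For this example the Fourier expansion of A(t) has only three nonzero coefficient matrices, A(t) = A₀ + A₁e^{it} + A₋₁e^{-it}, so the truncated harmonic matrix is block tridiagonal (the block layout and the sign convention for the E(j0) term below are my own reading of the Zhou–Hagiwara–Araki setup, so treat the details as a sketch):

``````julia
using LinearAlgebra

# Fourier coefficients of A(t) = [0 1; -10cos(t) -24-10sin(t)]
A0  = [0.0 1.0; 0.0 -24.0]
A1  = [0.0 0.0; -5.0 5.0im]     # coefficient of exp(im*t)
Am1 = [0.0 0.0; -5.0 -5.0im]    # coefficient of exp(-im*t)

N = 50; n = 2; ω = 1.0          # truncation order, state dimension, frequency
M = zeros(ComplexF64, n*(2N + 1), n*(2N + 1))
for k in -N:N, l in -N:N        # block (k,l) holds A_{k-l}, minus i*k*ω*I on the diagonal
    blk = k == l      ? A0 - im*k*ω*I :
          k - l == 1  ? A1  :
          k - l == -1 ? Am1 : nothing
    blk === nothing && continue
    M[(k+N)*n+1:(k+N)*n+n, (l+N)*n+1:(l+N)*n+n] = blk
end
ev = eigvals(M)
# the eigenvalues with smallest |imag| approximate the characteristic exponents
ev[sortperm(imag.(ev), by = abs)][1:2]
``````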

So, for my first question I still would like to understand what happens when using ApproxFun. Could it be that some boundary condition (e.g., addressing periodicity) should be added to the definition of the operator?

I think I got it right now using

``````julia
ev, V = ApproxFun.eigs(A,600,tolerance=1E-10);
ev[sortperm(imag(ev),by=abs)][1:4]
4-element Vector{ComplexF64}:
-24.000000021801753 + 0.0im
-6.440239494284886e-14 + 0.0im
-24.000000080278454 + 0.9999997750716979im
-24.00000008027845 - 0.9999997750716979im
``````

The multiplicities are as expected, so the characteristic exponents are just the two eigenvalues in the central strip (with zero imaginary parts).

I wonder if it is possible to explicitly extract the finite matrix which underlies the eigenvalue computation in `eigs`. This would be useful for building lifted (finite-dimensional and constant) representations of periodic systems.

Regarding my second question, I could imagine building a code-generation step for an arbitrary dimension `n` of `A(t)`, evaluated, for example for `n = 2`, as

`eval(Meta.parse("ND = [D 0I; 0I D]"))`

but I wonder if the construction of `ND` would be efficient for large values of `n`. What would be a reasonable limit for `n`?
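
One eval-free alternative (a sketch; I have only checked it with plain matrices, where `hvcat` accepts `UniformScaling` blocks, so whether ApproxFun's operator concatenation takes the same path would need to be verified) is to assemble the blocks programmatically:

``````julia
using LinearAlgebra

# n×n block-diagonal arrangement with D in the diagonal blocks and 0I
# elsewhere, assembled row-major via hvcat instead of parsing a string
blockdiag(D, n) = hvcat(ntuple(_ -> n, n),
                        (i == j ? D : 0I for j in 1:n, i in 1:n)...)
``````

For the example above, `blockdiag(D, 2)` would play the role of `[D 0I; 0I D]`; how this scales to large `n` I have not measured.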

I think it actually solves a generalised eigenvalue problem, though for `Fourier` it might reduce to a standard eigenvalue problem. Maybe the key is `pruneeigs`, which checks the eigenvectors for convergence?


Yes, indeed, `pruneeigs` is used to dismiss two of the multiple eigenvalues of `A1` (`C1 = I`). I also found the explanations in #645, so I think this aspect is now clear to me. Thanks.

My last question: is it possible to obtain the same results using Chebyshev functions with some kind of periodic boundary conditions?