ApproxFun: Eigenvalue Problem for Linear Differential Algebraic Equations

I would like to ask for help and guidance on whether the following problem can be solved using ApproxFun, as I have not encountered any example doing something similar.

Consider, for instance, a linear system of differential algebraic equations with the following form

\begin{aligned}
u' + a\,v &= \lambda\,d\,v, \\
c\,u + a\,w &= \lambda\,d\,w, \\
a\,z &= \lambda\,(u + d\,z), \\
v' + b\,v + c\,w + a\,u &= \lambda\,(z + d\,u),
\end{aligned}

subject to the boundary conditions v(0.25) = v(1) = 0. The goal is to obtain the eigenvalues \lambda and corresponding eigenvectors, given that a, b, c are known. Since this system does not have a derivative and a boundary condition in each equation, I am not sure it fits the template of problems that ApproxFun can handle, but I am very interested to know whether this library can be used to analyze such problems.

Can you clarify: what are you solving for? Are u,v,w,z,a,b,c,λ unknown or given? Are they functions or constants?

ApproxFun.jl can probably do this, if it is well-posed. ClassicalOrthogonalPolynomials.jl is a bit more low-level, though, so you might be better off using it. A similar example can be found here:

Yes, I edited my original post. Basically, the goal is to obtain the eigenvalues \lambda and associated eigenvectors (u, v, w, z) given that a, b, c are known (they can be functions, but they are known beforehand).

In matrix form, this becomes a non-Hermitian generalized eigenvalue problem. However, it differs from the examples I have seen because derivatives do not appear in all variables.
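For concreteness, with x = (u, v, w, z)^T, D the differentiation matrix, and I the identity, the system above can be sketched as the generalized eigenproblem

\begin{aligned}
A\,x &= \lambda\,M\,x, \\
A &= \begin{pmatrix} D & aI & 0 & 0 \\ cI & 0 & aI & 0 \\ 0 & 0 & 0 & aI \\ aI & D + bI & cI & 0 \end{pmatrix}, \qquad
M = \begin{pmatrix} 0 & dI & 0 & 0 \\ 0 & 0 & 0 & dI \\ I & 0 & 0 & dI \\ dI & 0 & 0 & I \end{pmatrix}.
\end{aligned}

Note that M is singular (its third block column, corresponding to w, is zero), which reflects the algebraic constraints and makes the problem non-Hermitian and degenerate.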

This problem arises from the computation of acoustic modes in a duct under certain physical assumptions. My first attempt was to use a Chebyshev discretization based on discretized function values rather than series coefficients. The difficulty is that, depending on how the boundary conditions are imposed in this framework, spurious checkerboard modes can appear, and these do not disappear as the discretization is refined.

My understanding is that ApproxFun works in a coefficient-based representation, enforcing boundary conditions by eliminating equations associated with higher wavenumbers and adding the boundary equations in the first rows. I suspect this approach may behave better in this setting, and I would like to test it.
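The bordering pattern described above can be illustrated in isolation with a random stand-in operator (this is just the bookkeeping, not the actual acoustic problem):

```julia
using LinearAlgebra

n  = 8
L  = randn(n, n)    # stand-in for an n×n coefficient-space operator
b₀ = randn(1, n)    # stand-in for the boundary functional at the left endpoint
b₁ = randn(1, n)    # stand-in for the boundary functional at the right endpoint

Lτ = L[1:n-2, :]    # drop the two equations for the highest wavenumbers
A  = [b₀; b₁; Lτ]   # border with the boundary rows to recover a square system

@assert size(A) == (n, n)
```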

You should be able to modify the ClassicalOrthogonalPolynomials.jl example I gave.

Thank you very much for the example. I have been working on it and I now have a working implementation:

using ClassicalOrthogonalPolynomials, LinearAlgebra

n = 20      # truncation size
σ = 0.25    # left endpoint of the interval, where v(σ) = 0 is imposed

T = chebyshevt(0.25 .. 1)
r = axes(T, 1)

D = ComplexF64.(T \ diff(T))[1:n, 1:n]            # differentiation matrix in coefficient space
Dτ = D[1:n-2, 1:n]                                # drop the two highest-mode equations

Bop = ComplexF64.(T \ ((1 ./ r) .* T))[1:n, 1:n]  # multiplication operator for 1/r
Bopτ = Bop[1:n-2, 1:n]

Iₙ = Matrix{ComplexF64}(I, n, n)
Iτ = [Matrix{ComplexF64}(I, n - 2, n - 2) zeros(ComplexF64, n - 2, 2)]

Z = zeros(ComplexF64, n, n)
Zτ = zeros(ComplexF64, n - 2, n)
z0 = zeros(ComplexF64, 1, n)

eσ = ComplexF64.(T[σ, 1:n])   # evaluation functional at r = σ
e₁ = ComplexF64.(T[1, 1:n])   # evaluation functional at r = 1

a = -im * 10.0
c = im * 2 * Bop
cτ = im * 2 * Bopτ
d = 0.3

# Boundary conditions by removing equations from last block
A = [
    eσ' z0 z0 z0;
    e₁' z0 z0 z0;
    a*Iₙ Z Z D;
    Z a*Iₙ Z c;
    Z Z a*Iₙ Z;
    Dτ+Bopτ cτ Zτ a*Iτ
]

M = [
    z0 z0 z0 z0;
    z0 z0 z0 z0;
    d*Iₙ Z Z Z;
    Z d*Iₙ Z Z;
    Z Z d*Iₙ Iₙ;
    Zτ Zτ Iτ d*Iτ
]


# Boundary conditions by removing equations from first block
# A = [
#     eσ' z0 z0 z0;
#     e₁' z0 z0 z0;
#     a*Iτ Zτ Zτ Dτ;
#     Z a*Iₙ Z c;
#     Z Z a*Iₙ Z;
#     D+Bop c Z a*Iₙ
# ]

# M = [
#     z0 z0 z0 z0;
#     z0 z0 z0 z0;
#     d*Iτ Zτ Zτ Zτ;
#     Z d*Iₙ Z Z;
#     Z Z d*Iₙ Iₙ;
#     Z Z Iₙ d*Iₙ
# ]

λ, x̂ = eigen(A, M)   # generalized eigenproblem A x̂ = λ M x̂
k̃ₓ = im .* λ

Do you have any intuition for how the boundary conditions should be handled in a case like this?

I am asking because, if I impose the two BCs after removing equations from the last block, some spurious modes with purely real eigenvalues disappear. However, if I do the analogous thing after removing equations from the first block, those spurious modes appear again.

This is probably something specific to my problem, and I will think more carefully about it. Still, I was wondering whether there is any general rationale for how to insert boundary conditions in these coefficient-based discretizations, where some equations have to be dropped in order to obtain a square system.

I’m not sure there’s an easy answer. Note that truncation can be viewed as one choice of “τ” polynomial in the τ-method, so it might be possible to use a better-designed τ, see some recent work (in the PDE setting):
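For reference, in the classical Lanczos τ-method one does not solve L u = f exactly; instead (sketched here for a problem with two boundary conditions) one solves

\begin{aligned}
L\,u_n = f + \tau_1\,T_{n-1} + \tau_2\,T_n,
\end{aligned}

i.e. the residual is allowed to contain the two highest Chebyshev modes, and the two equations freed this way are replaced by the boundary rows. Dropping the last two rows of each operator block, as in the code above, corresponds to one particular choice of the τ terms.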

Fig 9 seems to consider spectral problems.

If you want something more robust you may wish to look into residual methods a la Matthew Colbrook. I’m not aware of a Julia package to do it (but surely AI can cook something up?)

In ApproxFun.jl I played around with infinite-dimensional inverse iteration. Because your problem may be degenerate in terms of boundary conditions, I suspect that’s not going to work super well. In theory you can do it yourself in ClassicalOrthogonalPolynomials.jl, but you will want to make sure you get almost-banded truncations by using the ultraspherical spectral method approach (that is, use U = ChebyshevU(); D = U\diff(T) to get a banded derivative; the identity then becomes the conversion U\T, etc.).
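A minimal sketch of that idea on this interval, assuming chebyshevu is available alongside the chebyshevt call used above (the point is that U \ diff(T) and U \ T are banded, so truncations of the bordered system are almost-banded rather than dense):

```julia
using ClassicalOrthogonalPolynomials

T = chebyshevt(0.25 .. 1)
U = chebyshevu(0.25 .. 1)

D = U \ diff(T)   # banded: differentiation from Chebyshev-T to Chebyshev-U coefficients
R = U \ T         # banded: conversion, playing the role of the identity

n  = 20
Dₙ = D[1:n, 1:n]  # banded finite truncations
Rₙ = R[1:n, 1:n]
```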

This part hasn’t been super battle-tested so it’s likely to cause issues. I might try to reimplement this idea on an “easy” spectral problem at some point.

PS for context: ClassicalOrthogonalPolynomials.jl is meant to eventually sit under ApproxFun.jl, but I’ve been too busy with other things to put everything together. And I’ve kinda come to the conclusion that it’s too hard to make a “DSL” like ApproxFun that’s usable for more challenging problems (like yours), where a lower-level do-it-yourself design is more versatile, albeit less user-friendly.


Thank you for these awesome packages!

I had not paid much attention to ClassicalOrthogonalPolynomials.jl because I didn’t find much documentation for it, but I suppose that is because the package is still in development, right?

Yes it’s still under development. It’s designed in a very general way, which makes it very powerful but also extremely buggy. I mostly just use it with my own mathematical research with students (eg on multivariate orthogonal polynomials and PDEs).

At some point I would like to recreate all the ApproxFun examples to battle test it. But… not having any users has a lot of advantages in terms of my own time :sweat_smile: