How to improve the Thomas algorithm for block tridiagonal matrices

It is, unfortunately. It's unstable even for a non-block symmetric tridiagonal, and the block case is going to be worse with the explicit inverse you pointed out. You really need some form of pivoting to guarantee stability. I gave an example for the factorization variant used in ldlt in a post last year.

I just showed the factorization backward error there, but what seems more troubling to me is that the symmetric tridiagonal solver in Julia uses an unstable algorithm. Adding the relative residual for the solve, the example is:

using LinearAlgebra
# tiny (1,1) entry: a bad leading pivot for the unpivoted LDLt
A = SymTridiagonal([
  3e-10 1 0
  1     2 1 
  0     1 2
])

F = ldlt(A)
display(A - F.L * F.D * F.L')

b = [1.0, 2.0, 3.0]
x = A \ b  # SymTridiagonal \ dispatches to the unpivoted ldlt
@show norm(A * x - b) / opnorm(A) / norm(x)

which gives the output

3×3 Matrix{Float64}:
 0.0  0.0         0.0
 0.0  4.76837e-7  0.0
 0.0  0.0         0.0
(norm(A * x - b) / opnorm(A)) / norm(x) = 8.692567662779919e-8
8.692567662779919e-8

LAPACK avoids this by using LU with partial pivoting for its tridiagonal solver. However, in addition to the permutation, you pick up an extra superdiagonal in U, so it isn't a fully in-place algorithm like the elegant one implemented in ldlt!. (A sketch of getting the pivoted solve from Julia follows below.)

The alternative is Bunch's method, mentioned in my other post, which uses 2×2 diagonal blocks. You don't need any permutations, but you do have to accept 2×2 blocks in D. Bunch's algorithm has the further advantage of preserving symmetry, so by finding the eigenvalues of D you can reliably compute the inertia of a matrix. However, it also requires extra storage, and you can't represent the triangular factor as a UnitLowerTriangular view of a SymTridiagonal as is done now. The structure of the factors doesn't align as nicely with the existing types in LinearAlgebra, which is annoying even if you are fine with giving up a fully in-place factorization.
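If you just want the stable solve in Julia today, here is a minimal sketch, assuming you don't need to keep the symmetric tag: converting to Tridiagonal drops the symmetry, so backslash goes through LU with partial pivoting instead of the unpivoted LDLt. A and b are the ones from the example above.

# Tridiagonal(A) drops the symmetry tag, so \ factors with LU and
# partial pivoting (LAPACK gttrf/gttrs style) rather than ldlt
x = Tridiagonal(A) \ b
@show norm(A * x - b) / opnorm(A) / norm(x)

The relative residual should now come out around machine epsilon, at the cost of the permutation and the extra superdiagonal in the factors.

For the inertia computation, a rough stand-in for Bunch's tridiagonal method is the dense bunchkaufman in LinearAlgebra: it produces the same kind of block diagonal D with 1×1 and 2×2 blocks and preserves symmetry, although it doesn't exploit the tridiagonal structure. Continuing with A from above:

# Sylvester's law of inertia: the unit triangular factor is
# nonsingular, so A and the block diagonal D have the same inertia
F = bunchkaufman(Symmetric(Matrix(A)))
λ = eigvals(Symmetric(Matrix(F.D)))
@show count(>(0), λ) count(<(0), λ) count(iszero, λ)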
