Solving a symmetric block tridiagonal system

I have a linear system in symmetric block tridiagonal form, where each diagonal block is symmetric and invertible but the blocks may vary in size. As far as I can see there is no other structural advantage to exploit. Moreover, the blocks are relatively large and the total number of blocks is small. What would be the best solving strategy in this situation? I’m considering these options:

1. A block LDL decomposition. The pro is that it exploits the structure; the con is that the diagonal blocks need to be explicitly inverted and updated.
2. Treat the whole system as a symmetric, somewhat sparse linear system. The pro is straightforwardness. The con is that there is not much sparsity, maybe a few-fold saving, but definitely not `O(n)`.
3. `BandedMatrices.jl` or `BlockBandedMatrices.jl`: according to some discussions, these packages don’t necessarily have an advantage over the sparse-matrix approach. Moreover, I don’t see any mention of solving these systems in their documentation; they seem to focus mostly on saving memory and on matrix-vector multiplication.

So what would be the most suitable tools for this situation, ones that can make use of all the available structure? Any suggestions are appreciated.
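For what it’s worth, option 1 doesn’t strictly require forming explicit inverses: a block-Thomas / block LDLᵀ recurrence only needs a factorization of each (Schur-complement-updated) diagonal block. Here is a minimal sketch using only the standard library; the function name `block_tridiag_solve` and the `D`/`L` argument convention are my own, not from any package:

```julia
using LinearAlgebra

# Solve A x = b where A is symmetric block tridiagonal with diagonal blocks
# D[1..m] and sub-diagonal blocks L[1..m-1] (the super-diagonal blocks are
# the adjoints L[k]'). Block sizes may vary. Each diagonal block is
# factorized (Bunch-Kaufman via `factorize(Symmetric(...))`), never
# explicitly inverted.
function block_tridiag_solve(D::Vector{<:Matrix}, L::Vector{<:Matrix},
                             b::Vector{<:Vector})
    m = length(D)
    S = [copy(Dk) for Dk in D]          # Schur complements, S[1] = D[1]
    y = [copy(bk) for bk in b]          # forward-eliminated right-hand side
    F = Vector{Any}(undef, m)           # factorizations of the S[k]
    F[1] = factorize(Symmetric(S[1]))
    for k in 2:m
        W = F[k-1] \ Matrix(L[k-1]')    # W = S[k-1] \ L[k-1]'
        S[k] .-= L[k-1] * W             # S[k] = D[k] - L[k-1] S[k-1]⁻¹ L[k-1]'
        y[k] .-= L[k-1] * (F[k-1] \ y[k-1])
        F[k] = factorize(Symmetric(S[k]))
    end
    x = similar(y)                      # back-substitution
    x[m] = F[m] \ y[m]
    for k in m-1:-1:1
        x[k] = F[k] \ (y[k] - L[k]' * x[k+1])
    end
    return x
end
```

The cost is one dense factorization per diagonal block plus a few block multiplies, so it scales with the number of blocks times the cube of the block size, with no fill-in outside the block tridiagonal pattern.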


I would definitely accept a block LDL factorisation PR into BlockBandedMatrices.jl.

Note that block banded matrices perform better than sparse for ill-conditioned problems or if the block sizes are large. Otherwise sparse might be better.
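For comparison, the plain sparse route (option 2) is only a few lines, and the sparse `ldlt` exploits the symmetry without forming any explicit inverses, though it knows nothing about the block structure. A sketch (the helper name `assemble_sparse` is just for illustration):

```julia
using SparseArrays, LinearAlgebra

# Assemble a symmetric block tridiagonal matrix from diagonal blocks D and
# sub-diagonal blocks L; the super-diagonal blocks are the adjoints L[k]'.
function assemble_sparse(D::Vector{<:Matrix}, L::Vector{<:Matrix})
    offs = cumsum([0; size.(D, 1)])   # starting offset of each block row
    A = spzeros(offs[end], offs[end])
    for k in eachindex(D)
        r = offs[k]+1:offs[k+1]
        A[r, r] = D[k]
        if k < length(D)
            s = offs[k+1]+1:offs[k+2]
            A[s, r] = L[k]            # sub-diagonal block
            A[r, s] = L[k]'           # super-diagonal block (adjoint)
        end
    end
    return A
end

# Solve with the sparse LDL' from CHOLMOD, which uses the symmetry:
# x = ldlt(Symmetric(assemble_sparse(D, L))) \ b
```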

However, shouldn’t a generic solver with pivoted Gaussian elimination be more efficient, since it does not invert the diagonal blocks? If I simply call `ldiv!` on the `BlockBandedMatrix`, what happens internally? Does it make full use of the structural sparsity?
A related question: since the system is symmetric, the upper and lower subdiagonal blocks of the `BlockBandedMatrix` are adjoints of each other. If I assign a block to the upper subdiagonal and its adjoint to the lower subdiagonal, is `BlockBandedMatrix` smart enough to not copy any data?
If so, and Gaussian elimination were used internally, would there be new allocations of some kind? After all, I suppose the block matrix does not exist concretely as a plain array.

`BlockBandedMatrix` actually stores the column blocks in a BLAS/LAPACK column-major format, so its inversion uses a standard LAPACK dense QR. (Gaussian elimination with pivoting is not used because it may cause fill-in.)

To support the symmetric case one would want to use a `Symmetric(::BlockBandedMatrix)`. It’s not so clear how to use the storage format with LAPACK because the column blocks are rectangular (so not symmetric).

Note that a basic block Cholesky supporting block-bandedness is already implemented: Create blockcholesky.jl by felixw98 · Pull Request #126 · JuliaArrays/BlockArrays.jl · GitHub, but it will need some work.
