Is there an iterative solver that can operate on block matrices (non-concatenated)?

Dear users,

I haven’t been able to find an implementation of an iterative solver (invertible matrix, no further specialization) general enough to work on matrices of matrices, i.e. A = [B C D…; E F G…; H I J…; …], b = [K; L; M; …], x = [N; O; P; …], where A is block sparse and b, x are block dense. I would like to avoid concatenation in my use case.

Shouldn’t this be generally possible using GMRES? Perhaps there’s an implementation out there that I’ve missed, or a way to use IterativeSolvers.jl/KrylovMethods.jl more appropriately than I have tried?

Thanks, Robert

I’ve never tried it, but I have been planning to do so with each block being a StaticArray. At least IterativeSolvers.jl places no restrictions on the element type of the matrix, so it should in principle work.
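Something along these lines, perhaps (untested; the 2×2 block type and the block-tridiagonal structure are just placeholders, and whether gmres’s generic internals accept such element types out of the box is exactly the question):

```julia
using StaticArrays, SparseArrays, IterativeSolvers

const Blk = SMatrix{2,2,Float64,4}                 # hypothetical fixed 2×2 block type

n = 10
diagblocks = [rand(Blk) + 4 * one(Blk) for _ in 1:n]   # shifted diagonal blocks to keep the system well conditioned
offblocks  = [rand(Blk) for _ in 1:n-1]

A = spdiagm(0 => diagblocks, 1 => offblocks, -1 => offblocks)  # block tridiagonal test matrix
b = [rand(SVector{2,Float64}) for _ in 1:n]                    # block vector right-hand side

x = gmres(A, b)   # relies on the generic *, dot and norm fallbacks working blockwise
```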

If you wrap at least one of the individual blocks in a linear operator, LinearOperators.jl will let you build such a block operator using the standard bracket notation. You can then compute products with the result as if it were an ordinary matrix. Krylov.jl uses linear operators extensively, but it doesn’t have GMRES.
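For example, something like this (a sketch, untested against a specific LinearOperators.jl version; the block sizes are arbitrary):

```julia
using LinearOperators

B = rand(4, 4); C = rand(4, 3)
D = rand(3, 4); F = rand(3, 3)

# Wrapping one block is enough for the bracket notation to produce a block
# operator rather than a concatenated matrix.
A = [LinearOperator(B) C; D F]

v = rand(7)
y = A * v          # behaves like an ordinary 7×7 matrix-vector product
```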

Hi Dominique,
so we’re talking about a matrix of LinearOperators? A matrix of matrices can also be multiplied by a vector of matrices; however, the solvers appear to fail since they don’t know how to initialize the elements to zero (or one). Do you perhaps have a small example using LinearOperators?

You need to define zero(::T) and one(::T), where T is the type of your blocks.
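A minimal illustration of that suggestion, assuming a single fixed block size (a hypothetical 3×3 block type; this is exactly the assumption that fails below when block sizes vary):

```julia
using LinearAlgebra

# Hypothetical fixed-size block type, purely to illustrate the suggestion.
struct Blk
    data::Matrix{Float64}
end

Base.zero(::Type{Blk}) = Blk(zeros(3, 3))               # additive identity block
Base.zero(::Blk)       = zero(Blk)
Base.one(::Type{Blk})  = Blk(Matrix{Float64}(I, 3, 3))  # multiplicative identity block
Base.one(::Blk)        = one(Blk)
```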

A “matrix of linear operators” will avoid concatenating your matrices. I didn’t realize you have multiple right-hand sides. Neither Krylov.jl nor KrylovMethods.jl accommodates that. At first glance, IterativeSolvers.jl might, but I’m not sure it’s a good idea in general for iterative methods. In Krylov.jl, the rhs should also be contiguous because we use BLAS calls. BLAS.dot is quite a bit faster than dot.

@dpo: Actually, that was supposed to symbolize a vector of matrices, i.e. a block-partitioned vector, where each block has size(b,2) >= 1. I looked in the literature on the solution of such systems and didn’t find all that much except DUNE-ISTL. For now, I have abandoned the partitioning of the RHS and solution vector, i.e. I now have a sparse matrix of matrices A, as well as contiguous x and b. I overloaded *(::SparseMatrixCSC{Matrix{T},Int}, ::Vector{T}), size(::SparseMatrixCSC{Matrix{T},Int}, ::Int) and eltype(::SparseMatrixCSC{Matrix{T},Int}) in line with the IterativeSolvers interface requirements. That did the trick.
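Roughly, the overloads could look like the following (a sketch, not the exact code used; it assumes one fixed n×n block size so that the flat indices line up, and it pirates methods on SparseMatrixCSC{Matrix{T},Int}):

```julia
using SparseArrays

const BlockSparse{T} = SparseMatrixCSC{Matrix{T},Int}

blockdim(A::BlockSparse) = size(first(nonzeros(A)), 1)   # assumed uniform block size

Base.eltype(::BlockSparse{T}) where {T} = T

# Flattened dimensions: number of block rows (A.m) / block columns (A.n)
# times the block size.
Base.size(A::BlockSparse, d::Int) = (d == 1 ? A.m : A.n) * blockdim(A)

function Base.:*(A::BlockSparse{T}, x::Vector{T}) where {T}
    n = blockdim(A)
    y = zeros(T, size(A, 1))
    rows, vals = rowvals(A), nonzeros(A)
    for j in 1:A.n                                   # block column j
        xj = view(x, (j - 1) * n + 1 : j * n)
        for k in nzrange(A, j)                       # stored blocks in column j
            i = rows[k]
            y[(i - 1) * n + 1 : i * n] .+= vals[k] * xj
        end
    end
    return y
end
```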

@cortner: redefining zero and one doesn’t work conceptually, as the size of the blocks is not constant.