Solving a linear system with a sum of Kronecker products

Suppose I have

((A \otimes B) + (C \otimes D))x = v

where A and C have the same size, and likewise B and D. Suppose also that I particularly want to avoid instantiating the Kronecker products, since they are large.

What would be the recommended way to solve for x? My default would be using IterativeSolvers.jl, using Kronecker.jl to represent the \otimes products. But I am not sure what to use for the sum, other than LazyArrays.BroadcastArray.

3 Likes

Assuming A, B, C, and D are square (which seems reasonable, since otherwise you could have an over- or under-determined system), the usual thing to do would be to use Kronecker product properties to rewrite this as the matrix equation

BXA^T+ DXC^T = V

where x=\mbox{vec}(X) and v = \mbox{vec}(V). Here \mbox{vec} stacks the columns of a matrix into a vector. This is a generalized Sylvester equation. There are then solvers that compute generalized Schur decompositions of (B,D) and (A,C) to allow for the efficient direct column-wise solution of X using a back-substitution procedure. If every matrix is n\times n, it’s an O(n^3) algorithm to compute the n^2 elements of X. I believe that MatrixEquations.jl has this implemented with the function gsylv.
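If it helps, a minimal sketch of that route (my reading of the MatrixEquations.jl API is that gsylv(A, B, C, D, E) solves A*X*B + C*X*D = E, so the argument order below is my interpretation):

using MatrixEquations

# ((A ⊗ B) + (C ⊗ D)) x = v  is  B*X*Aᵀ + D*X*Cᵀ = V  with x = vec(X), v = vec(V)
V = reshape(v, size(B, 1), size(A, 1))
X = gsylv(B, transpose(A), D, transpose(C), V)
x = vec(X)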

Even if your matrices are sparse there is limited benefit in trying to use an iterative method to exploit sparsity, since X will not in general be sparse. The direct method should be the way to go.

9 Likes

To complete the answer: if you still want to use an iterative solver, you don’t need to “represent” the sum in any way other than in functional form.
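For example, with Kronecker.jl the products stay lazy and the “sum” only ever exists as an action on a vector (a sketch, not benchmarked):

using Kronecker  # lazy Kronecker products

K1 = kronecker(A, B)          # never materialized; * uses the vec/reshape trick internally
K2 = kronecker(C, D)
apply(x) = K1 * x + K2 * x    # this action is all an iterative solver needs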

1 Like

Thanks.

Some background: I am solving a Fokker-Planck PDE using collocation. Suppose that T_0, T_1 are the collocation matrices for values and derivatives of a basis for time, and S_0, S_1, S_2 are similarly for a state, and c_0, etc are various coefficients.

Then generally my left hand side matrix is something like

T_1 ⊗ S_0 + T_0 ⊗ (\mathrm{diag}(c_1) S_0) + T_0 ⊗ (\mathrm{diag}(c_0) S_1) + \dots

which are for the drift only, T_0 \otimes S_2 etc terms follow if I have diffusion.

Then, since T_0 is common to every term except the first, the left-hand side collapses to T_1 ⊗ S_0 + T_0 ⊗ D with

D = \mathrm{diag}(c_1) S_0 + \mathrm{diag}(c_0) S_1 + \dots

which makes my life very, very simple and my computation fast.

Even if the matrix is not sparse, the fact that I don’t have to form the Kronecker products explicitly is a huge time saver.
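For what it’s worth, a minimal sketch of that operator action (hypothetical names mirroring the notation above; the Kronecker products are never formed):

using LinearAlgebra

# Sketch: the drift terms sharing the T_0 factor are collected into D, and the action of
# T_1 ⊗ S_0 + T_0 ⊗ D uses the identity (A ⊗ B) * vec(X) == vec(B * X * transpose(A)).
function drift_lhs_mul(θ, T0, T1, S0, S1, c0, c1)
    D = Diagonal(c1) * S0 + Diagonal(c0) * S1
    Θ = reshape(θ, size(S0, 2), size(T0, 2))
    return vec(S0 * Θ * transpose(T1)) + vec(D * Θ * transpose(T0))
end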

1 Like

I did a double take seeing the sum of more than two Kronecker products before I saw your point about T_0 being common to all of them except the first. If you have 3 or more Kronecker products, you don’t have the nice direct method and, as far as I know, you have to use iterative methods. As you say, your problem is a lot nicer with the way you are able to form D.

1 Like

But I am not sure what to use for the sum, other than LazyArrays.BroadcastArray .

I am not sure I understand your problem. Can you evaluate (A\otimes B) x and (C\otimes D) x ?

Also, looking at the above comment regarding the FP equation: the convergence could be slow, and preconditioning by the Laplacian would help, IMO.

1 Like

yes

Thanks. I am new to PDE solving and not familiar with the concept, can you please recommend a text that discusses preconditioning?

Basically, solving

$$\Delta u + u = v$$

with gmres has poor convergence. However, solving

$$u + \Delta^{-1}u = \Delta^{-1}v$$

is fast. You can pass the preconditioner Pl=lu(Delta) to gmres and it will do the above conversion for you. See Restarted GMRES · IterativeSolvers.jl
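A rough sketch of what that call looks like (toy, illustrative names; Delta stands in for a discretized Laplacian):

using IterativeSolvers, LinearAlgebra

n = 100
h = 1 / (n + 1)
Delta = Matrix(SymTridiagonal(fill(-2.0, n), ones(n - 1))) / h^2   # dense 1-D Laplacian
Op = Delta + I                       # discretization of Δu + u
v = randn(n)
u = gmres(Op, v; Pl = lu(Delta))     # left preconditioning: gmres effectively iterates on lu(Delta) \ Op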

2 Likes

Then you can pass the function, written in a matrix-free (lazy) way,

function LinOp(x)
    # avoid materializing the Kronecker products: (A ⊗ B) * vec(X) == vec(B * X * transpose(A))
    X = reshape(x, size(B, 2), size(A, 2))
    return vec(B * X * transpose(A)) + vec(D * X * transpose(C))
end

to gmres
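One caveat, as far as I know: IterativeSolvers’ gmres wants something it can call size and mul! on, so in practice the function gets wrapped first, e.g. with LinearMaps.jl (if I recall the constructor correctly, LinearMap(f, n) builds an n × n operator from the function):

using LinearMaps, IterativeSolvers

n = size(A, 2) * size(B, 2)        # length of x
Lop = LinearMap(LinOp, n)          # wrap the function as an n × n linear operator
x = gmres(Lop, v)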

2 Likes

Thanks. I found that my problem is reasonably well-conditioned because the spectral bases are (rationally transformed) Chebyshev polynomials, so I did not look for a preconditioner.

Is the solution from gmres supposed to be close to A \ b, or am I using it in the wrong way? Because for my problem, it is wildly off, and so is the residual:

julia> using IterativeSolvers

julia> θ1 = A \ b;

julia> θ2 = gmres(A, b);

julia> maximum(abs, A * θ1 - b)
5.551115123125783e-16

julia> maximum(abs, A * θ2 - b)
0.5582449534355522

If this is interesting to the devs, I can collect A and b (as I did above) and submit an MWE as an issue.

you can check for convergence by doing

gmres(A, b; log = true, verbose = true)
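With log = true, the solution comes back together with a ConvergenceHistory; if I remember the API correctly, it can be inspected like this:

x, history = gmres(A, b; log = true, verbose = true)
history.isconverged     # did the residual reach the tolerance?
history[:resnorm]       # residual norm at each iteration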

Thanks. I found that my problem is reasonably well-conditioned because the spectral bases are (rationally transformed) Chebyshev polynomials, so I did not look for a preconditioner.

That seems to contradict the outcome of your numerical experiment.

1 Like

I’m probably going out on a limb since I have no experience with Fokker-Planck equations, but it might be worth noting that convergence with GMRES defies simple characterization. It can perform badly even with a well conditioned matrix.
Preconditioning might still be necessary and, unless there’s some bug here, it looks like it probably is necessary for your problem.

1 Like

[A_X \otimes B_Y + C_X \otimes D_Y]x=v
=> (A_X^{-1} \otimes D_Y^{-1})[A_X \otimes B_Y + C_X \otimes D_Y]x=(A_X^{-1} \otimes D_Y^{-1})v
=> [1_X \otimes D_Y^{-1}B_Y + A_X^{-1} C_X \otimes 1_Y]x=(A_X^{-1} \otimes D_Y^{-1})v
Let w=(A_X^{-1} \otimes D_Y^{-1})v, E_Y=D_Y^{-1}B_Y and E_X=A_X^{-1} C_X
=> [E_X \otimes 1_Y + 1_X \otimes E_Y]x=w
which can be efficiently solved with a direct method via the “tensor trick”:

Let E_X=M_X \Pi_X M_X^{-1} and E_Y=M_Y \Pi_Y M_Y^{-1} be the diagonalizations of E_X and E_Y.

=>[E_X \otimes 1_Y + 1_X \otimes E_Y]x=w
P_{XY} = M_X\otimes 1_Y + 1_X\otimes M_Y is diagonal and can be simply inverted
[P^{-1}_{XY}]_{\alpha\beta,\alpha'\beta'} = \frac{\delta_{\alpha\alpha'}\,\delta_{\beta\beta'}}{l_{x\alpha}+l_{y\beta}} where l_{x\alpha} and l_{y\beta} are the \Pi_X and \Pi_Y diagonal elements.
[E_X \otimes 1_Y + 1_X \otimes E_Y]x=[M_X \otimes M_Y]P_{XY}[M^{-1}_X \otimes M_Y^{-1}]x=w
=> x=[M_X \otimes M_Y]P^{-1}_{XY}[M^{-1}_X \otimes M_Y^{-1}]w, a direct computation involving only small matrices (the size of A_X).
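To make the recipe concrete, here is a minimal sketch of the trick (illustrative names, not taken from LidJul.jl; it assumes E_X and E_Y are diagonalizable):

using LinearAlgebra

function tensor_trick_solve(E_X, E_Y, w)
    lx, MX = eigen(E_X)                        # E_X = MX * Diagonal(lx) * inv(MX)
    ly, MY = eigen(E_Y)                        # E_Y = MY * Diagonal(ly) * inv(MY)
    W  = reshape(w, size(E_Y, 1), size(E_X, 1))
    Wt = MY \ W / transpose(MX)                # (MX⁻¹ ⊗ MY⁻¹) w, reshaped as a matrix
    Xt = Wt ./ (transpose(lx) .+ ly)           # apply P⁻¹: divide by l_xα + l_yβ
    return vec(MY * Xt * transpose(MX))        # (MX ⊗ MY) * vec(Xt)
end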

I did implement the tensor trick with Julia in src/poisson2D_TT.jl here GitHub - triscale-innov/LidJul.jl: GMG, Poisson solver and Lid cavity

3 Likes

Sorry, I still only have a vague clue about preconditioning despite reading up about it in the context of some methods (particularly CG).

Is there a more robust method that is “matrix-free”, in the sense that evaluating Ax is sufficient?

Can you please explain or provide a link (does a package do this? Kronecker.jl seems to provide a type like this but no \ method as far as I can tell).

I did edit my previous post. You can have a look at https://www.sciencedirect.com/science/article/abs/pii/S0021999199963386

3 Likes

Maybe someone who is more of an iterative methods or PDE expert than I am will answer. I mostly know generalities in this area. All the Krylov methods work with evaluating Ax (and possibly A^T x) and then some operations on vectors and smaller matrices. None of the ones for nonsymmetric systems have particularly simple convergence behavior and they generally depend on finding a decent preconditioner M for which it is easy to apply M^{-1}A (or AM^{-1} for right preconditioning) and M^{-1} approximates A^{-1} in some way so that M^{-1}A has an eigenvalue distribution that improves the convergence behavior. (Although eigenvalue distribution doesn’t tell you everything for nonnormal matrices). The choice of algorithm and preconditioner can be highly dependent on the problem, which is where my own background is not sufficient. Someone here may know the right thing to do for Fokker-Planck equations, but unfortunately I am not that person.

Not having to worry about the complexity of choosing iterative methods and preconditioners for a specific problem is one argument for going with direct methods if they solve your problem with reasonable cost. Depending on the sparsity of your matrices and how good a preconditioner you can find, you might do better than the direct method, but the potential gain is much less once you have already exploited the Kronecker product structure. It might not be worth the effort, depending on your problem and your needs. Personally, I would guess that it is not likely to be worth it, but I am probably biased toward methods I’m more comfortable with and using direct methods whenever it is possible.

This is equivalent to solving the Sylvester equation

X E_X^T + E_Y X = W

instead of the generalized Sylvester equation I suggested earlier. You can use sylvc from MatrixEquations.jl to solve that after forming E_X = A_X^{-1}C_X and E_Y = D_Y^{-1} B_Y. It would be faster than what I suggested before, but there is some potential for instability in forming W, E_X, and E_Y if A_X or D_Y is ill conditioned. The algorithm for the generalized Sylvester equation exists specifically to avoid that stability problem. If you know that A_X and D_Y are well conditioned, or can live with the results of moderate ill conditioning, then this would be an efficiency improvement on what I suggested. It’s a constant factor, but probably not a small constant factor. I wouldn’t be surprised if you saw a factor of 5 or 10 speedup over the generalized Sylvester equation approach.
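A rough sketch of that route, assuming sylvc(A, B, C) solves A*X + X*B = C (my reading of the MatrixEquations.jl docs), with the notation of the previous post:

using MatrixEquations, LinearAlgebra

E_X = A_X \ C_X                          # A_X⁻¹ * C_X
E_Y = D_Y \ B_Y                          # D_Y⁻¹ * B_Y
V   = reshape(v, size(D_Y, 1), size(A_X, 1))
W   = D_Y \ V / transpose(A_X)           # (A_X⁻¹ ⊗ D_Y⁻¹) v, reshaped as a matrix
X   = sylvc(E_Y, transpose(E_X), W)      # solves E_Y*X + X*E_Xᵀ = W
x   = vec(X)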

3 Likes

We can’t build many preconditioners when we don’t have access to the coefficients of A.

I suggest trying gmres and bicgstab from Krylov.jl with the option history=true:

using Krylov

x, stats = gmres(A, b, history=true)
x, stats = bicgstab(A, b, history=true)

By default, GMRES is not restarted, which is what we want when we don’t have a good preconditioner.

1 Like

The package MatrixEquations.jl only supports Julia versions >= 1.8; are there other packages that also deal with generalized Sylvester equations?

Not that I know of. I do wonder about the reason for that lower bound. The specific code for the generalized Sylvester equation seems to have almost no dependencies. It looks like a handful of functions — gsylv, gsylvs!, gsylv2!, luslv!, and sfstruct — that don’t use anything outside of Base and LinearAlgebra and that seem likely to work on older versions of Julia (unless I missed something). Even if the package as a whole really does need >= 1.8 for some reason, it sure looks like, if you just need to solve a generalized Sylvester equation, you could pull out those functions and use them independently of the rest of the package. It would be easy enough to check the residual to see if you got an acceptable solution. Although that’s certainly a very hacky work-around, and I feel a bit bad suggesting it. It’s even less reasonable if you want to depend on this in your own package instead of just getting the solution to an equation for your own purposes.

After writing the above, I did decide to try pulling those functions out into a separate file. The only snag in Julia 1.10.0 was that BlasFloat is not exported by LinearAlgebra.
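(Presumably the fix for that is just an explicit import, since the type is defined in LinearAlgebra but not exported:)

using LinearAlgebra: BlasFloat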

There is also LinearAlgebra.sylvester if you have some well conditioned matrices so that you can convert it to an ordinary Sylvester equation. But stability will suffer if the matrices are not well conditioned.

1 Like

You had a typo here, P_{XY}=M_X⊗1_Y+1_X⊗M_Y should be P_{XY}=Π_X⊗1_Y+1_X⊗Π_Y

1 Like