where A and C have the same size, as do B and D. Suppose also that I particularly want to avoid instantiating the Kronecker products, since they are large.

What would be the recommended way to solve for x? My default would be IterativeSolvers.jl, with Kronecker.jl representing the \otimes products. But I am not sure what to use for the sum, other than LazyArrays.BroadcastArray.

Assuming A, B, C, and D are square (which seems reasonable, since otherwise you could have an over- or under-determined system), the usual thing to do is to use Kronecker product properties to rewrite this as the matrix equation

B X A^T + D X C^T = V

where x=\mbox{vec}(X) and v = \mbox{vec}(V). Here \mbox{vec} stacks the columns of a matrix into a vector. This is a generalized Sylvester equation. There are then solvers that compute generalized Schur decompositions of (B,D) and (A,C) to allow for the efficient direct column-wise solution of X using a back-substitution procedure. If every matrix is n\times n, it's an O(n^3) algorithm to compute the n^2 elements of X. I believe that MatrixEquations.jl has this implemented with the function gsylv.
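If I'm reading the MatrixEquations.jl docs right, gsylv(A, B, C, D, E) solves A*X*B + C*X*D = E, so the rewritten equation maps over directly. A small sketch (the random matrices are just stand-ins; the Kronecker form is materialized only to check the residual):

```julia
using LinearAlgebra, MatrixEquations

n = 4
A, B, C, D = (randn(n, n) for _ in 1:4)
v = randn(n^2)
V = reshape(v, n, n)   # v = vec(V)

# gsylv(A, B, C, D, E) solves A*X*B + C*X*D = E, so
# B*X*A^T + D*X*C^T = V maps to:
X = gsylv(B, Matrix(transpose(A)), D, Matrix(transpose(C)), V)

# Verify against the original Kronecker system; the residual
# should be near machine precision.
maximum(abs, (kron(A, B) + kron(C, D)) * vec(X) - v)
```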

Even if your matrices are sparse there is limited benefit in trying to use an iterative method to exploit sparsity, since X will not in general be sparse. The direct method should be the way to go.

To complete the answer: if you still want to use an iterative solver, you don't need to "represent" the sum in any way other than a functional form.
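A minimal sketch of that functional form, using the identity (A \otimes B)\mbox{vec}(X) = \mbox{vec}(B X A^T) so that the Kronecker products are never instantiated. Wrapping it in LinearMaps.jl is one way (among others) to feed it to IterativeSolvers; the +4I shift is just to keep this toy system well conditioned:

```julia
using LinearAlgebra, LinearMaps, IterativeSolvers

n = 32
A, B, C, D = (randn(n, n) + 4I for _ in 1:4)

# (A ⊗ B) * vec(X) == vec(B * X * transpose(A)), so applying the sum
# never requires forming the n^2 × n^2 matrices.
function apply!(y, x)
    X = reshape(x, n, n)
    y .= vec(B * X * transpose(A) .+ D * X * transpose(C))
    return y
end

L = LinearMap(apply!, n^2; ismutating = true)
v = randn(n^2)
x = gmres(L, v)
```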

Some background: I am solving a Fokker-Planck PDE using collocation. Suppose that T_0, T_1 are the collocation matrices for values and derivatives of a basis for time, and S_0, S_1, S_2 are similarly for a state, and c_0, etc are various coefficients.

Then, generally, my left-hand-side matrix is something like

I did a double take seeing the sum of more than two Kronecker products before I saw your point about T_0 being common to all of them except the first. If you have 3 or more Kronecker products, you don't have the nice direct method and, as far as I know, you have to use iterative methods. As you say, your problem is a lot nicer with the way you are able to form D.

Thanks. I found that my problem is reasonably well-conditioned because the spectral bases are (rationally transformed) Chebyshev polynomials, so I did not look for a preconditioner.

Is the solution from gmres supposed to be close to A \ b, or am I using it in the wrong way? Because for my problem, it is wildly off, and so is the residual:

julia> using IterativeSolvers
julia> θ1 = A \ b;
julia> θ2 = gmres(A, b);
julia> maximum(abs, A * θ1 - b)
5.551115123125783e-16
julia> maximum(abs, A * θ2 - b)
0.5582449534355522

If this is interesting to the devs, I can collect A and b as a matrix (as I did above) and submit an MWE as an issue.
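One thing worth ruling out before filing an issue: as I understand the IterativeSolvers.jl docs, gmres restarts every 20 iterations by default and stops at a relative tolerance, so on a hard problem it can return long before convergence without warning. A sketch of how to check (the random A and b are stand-ins for your actual system):

```julia
using LinearAlgebra, IterativeSolvers

# Stand-in system; substitute your actual A and b.
A = randn(100, 100) + 10I
b = randn(100)

# Disable restarting, tighten the tolerance, and keep the convergence log:
x, history = gmres(A, b; restart = size(A, 2), reltol = 1e-12,
                   maxiter = 200, log = true)
history.isconverged   # if false, a large residual is just early termination
```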

Thanks. I found that my problem is reasonably well-conditioned because the spectral bases are (rationally transformed) Chebyshev polynomials, so I did not look for a preconditioner.

That seems to contradict the outcome of your numerical experiment.

I'm probably going out on a limb since I have no experience with Fokker-Planck equations, but it might be worth noting that convergence with GMRES defies simple characterization. It can perform badly even with a well-conditioned matrix.
Preconditioning might still be necessary and, unless there's some bug here, it looks like it probably is necessary for your problem.

[A_X \otimes B_Y + C_X \otimes D_Y]x = v
\Rightarrow (A_X^{-1} \otimes D_Y^{-1})[A_X \otimes B_Y + C_X \otimes D_Y]x = (A_X^{-1} \otimes D_Y^{-1})v
\Rightarrow [1_X \otimes D_Y^{-1}B_Y + A_X^{-1}C_X \otimes 1_Y]x = (A_X^{-1} \otimes D_Y^{-1})v.
Let w = (A_X^{-1} \otimes D_Y^{-1})v, E_Y = D_Y^{-1}B_Y, and E_X = A_X^{-1}C_X, so that
[E_X \otimes 1_Y + 1_X \otimes E_Y]x = w,
which can be efficiently solved with a direct method via the "tensor trick":

Let E_X = M_X \Pi_X M_X^{-1} and E_Y = M_Y \Pi_Y M_Y^{-1} be the diagonalizations of E_X and E_Y.

Then [E_X \otimes 1_Y + 1_X \otimes E_Y] = [M_X \otimes M_Y] P_{XY} [M_X^{-1} \otimes M_Y^{-1}], where P_{XY} = \Pi_X \otimes 1_Y + 1_X \otimes \Pi_Y is diagonal and can be simply inverted: [P^{-1}_{XY}]_{\alpha\beta,\alpha'\beta'} = \frac{\delta_{\alpha,\alpha'}\delta_{\beta,\beta'}}{l_{x\alpha}+l_{y\beta}}, where l_{x\alpha} and l_{y\beta} are the diagonal elements of \Pi_X and \Pi_Y. So [E_X \otimes 1_Y + 1_X \otimes E_Y]x = [M_X \otimes M_Y]P_{XY}[M_X^{-1} \otimes M_Y^{-1}]x = w \Rightarrow x = [M_X \otimes M_Y]P^{-1}_{XY}[M_X^{-1} \otimes M_Y^{-1}]w, a direct computation involving only small matrices (the size of A_X).
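A sketch of the whole tensor trick in code, under the assumption that E_X and E_Y are diagonalizable. The random shifted matrices are illustrative stand-ins for A_X, B_Y, C_X, D_Y, and everything is kept in matrix (rather than vectorized) form, so only n \times n objects appear:

```julia
using LinearAlgebra

n = 4
# Illustrative, well-conditioned stand-ins for A_X, B_Y, C_X, D_Y:
AX, BY, CX, DY = (randn(n, n) + 4I for _ in 1:4)
v = randn(n^2)

EX = AX \ CX                                 # E_X = A_X^{-1} C_X
EY = DY \ BY                                 # E_Y = D_Y^{-1} B_Y
W  = DY \ reshape(v, n, n) / transpose(AX)   # matrix form of w = (A_X^{-1} ⊗ D_Y^{-1}) v

# Diagonalize E_X = M_X Π_X M_X^{-1} and E_Y = M_Y Π_Y M_Y^{-1}:
lx, MX = eigen(EX)
ly, MY = eigen(EY)

# Move to the eigenbasis, divide by the diagonal entries l_xα + l_yβ
# (the inverse of P_XY), and transform back:
Wt = MY \ W / transpose(MX)
Xt = Wt ./ (ly .+ transpose(lx))
X  = real.(MY * Xt * transpose(MX))

# Residual of the reduced equation E_Y X + X E_X^T = W (should be tiny):
maximum(abs, EY * X + X * transpose(EX) - W)
```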

Sorry, I still only have a vague clue about preconditioning despite reading up about it in the context of some methods (particularly CG).

Is there a more robust method that is "matrix-free", in the sense that evaluating Ax is sufficient?

Can you please explain or provide a link? (Does a package do this? Kronecker.jl seems to provide a type like this, but no \ method as far as I can tell.)

Maybe someone who is more of an iterative methods or PDE expert than I am will answer. I mostly know generalities in this area. All the Krylov methods work with evaluating Ax (and possibly A^T x) and then some operations on vectors and smaller matrices. None of the ones for nonsymmetric systems have particularly simple convergence behavior and they generally depend on finding a decent preconditioner M for which it is easy to apply M^{-1}A (or AM^{-1} for right preconditioning) and M^{-1} approximates A^{-1} in some way so that M^{-1}A has an eigenvalue distribution that improves the convergence behavior. (Although eigenvalue distribution doesn't tell you everything for nonnormal matrices.) The choice of algorithm and preconditioner can be highly dependent on the problem, which is where my own background is not sufficient. Someone here may know the right thing to do for Fokker-Planck equations, but unfortunately I am not that person.
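To make the left-preconditioning recipe concrete: IterativeSolvers.gmres accepts a Pl keyword, and an incomplete LU from IncompleteLU.jl is one common generic choice for sparse matrices. A sketch, with an arbitrary illustrative drop tolerance τ and a made-up diagonally dominant system:

```julia
using LinearAlgebra, SparseArrays, IterativeSolvers, IncompleteLU

# Illustrative sparse, diagonally dominant system:
n = 1000
A = sprandn(n, n, 0.01) + 10I
b = randn(n)

# Incomplete LU factorization used as the left preconditioner M,
# so gmres effectively iterates on M^{-1} A x = M^{-1} b:
P = ilu(A, τ = 0.1)
x = gmres(A, b; Pl = P, reltol = 1e-10)
maximum(abs, A * x - b)
```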

Not having to worry about the complexity of choosing iterative methods and preconditioners for a specific problem is one argument for going with direct methods if they solve your problem with reasonable cost. Depending on the sparsity of your matrices and how good a preconditioner you can find, you might do better than the direct method, but the potential gain is much less once you have already exploited the Kronecker product structure. It might not be worth the effort, depending on your problem and your needs. Personally, I would guess that it is not likely to be worth it, but I am probably biased toward methods Iâ€™m more comfortable with and using direct methods whenever it is possible.

This is equivalent to solving the Sylvester equation

X E_X^T + E_Y X = W

instead of the generalized Sylvester equation I suggested earlier. You can use sylvc from MatrixEquations.jl to solve that after forming E_X = A_X^{-1}C_X and E_Y = D_Y^{-1} B_Y. It would be faster than what I suggested before, but there is some potential for instability in forming W, E_X, and E_Y if A_X or D_Y is ill conditioned. The algorithm for the generalized Sylvester equation exists specifically to avoid that stability problem. If you know that A_X and D_Y are well conditioned, or can live with the results of moderate ill conditioning, then this would be an efficiency improvement on what I suggested. It's a constant factor, but probably not a small constant factor. I wouldn't be surprised if you saw a factor of 5 or 10 speedup over the generalized Sylvester equation approach.
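If I have the MatrixEquations.jl convention right, sylvc(A, B, C) solves A*X + X*B = C, so the reduced equation maps directly. A sketch with illustrative well-conditioned stand-ins:

```julia
using LinearAlgebra, MatrixEquations

n = 4
AX, BY, CX, DY = (randn(n, n) + 4I for _ in 1:4)
V = randn(n, n)

EX = AX \ CX
EY = DY \ BY
W  = DY \ V / transpose(AX)

# sylvc(A, B, C) solves A*X + X*B = C, so
# E_Y*X + X*E_X^T = W maps to:
X = sylvc(EY, Matrix(transpose(EX)), W)

# Residual should be near machine precision:
maximum(abs, EY * X + X * transpose(EX) - W)
```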

Not that I know of. I do wonder about the reason for that lower bound. The specific code for the generalized Sylvester equation seems to have almost no dependencies. It looks like five functions, gsylv, gsylvs!, gsylv2!, luslv!, and sfstruct, that don't use anything outside of Base and LinearAlgebra and that seem likely to work for older versions of Julia. (Unless I missed something.) Even if the package as a whole really does need >=1.8 for some reason, it sure looks like if you just need to solve a generalized Sylvester equation, you could pull out those functions and use them independently of the rest of the package. It would be easy enough to check the residual to see if you got an acceptable solution. Although that's certainly a very hacky work-around. I feel a bit bad suggesting it. And it's even less reasonable if you want to depend on this in your own package instead of just getting the solution to an equation for your own purposes.

After writing the above, I did decide to try pulling those functions out into a separate file. The only snag in Julia 1.10.0 was that BlasFloat is not exported by LinearAlgebra.

There is also LinearAlgebra.sylvester if you have some well conditioned matrices so that you can convert it to an ordinary Sylvester equation. But stability will suffer if the matrices are not well conditioned.
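One gotcha worth noting if you go that route: LinearAlgebra.sylvester uses the convention A*X + X*B + C = 0, so the right-hand side needs a sign flip relative to the sylvc form above. A sketch (E_X, E_Y, and W are illustrative stand-ins):

```julia
using LinearAlgebra

n = 4
EX, EY = randn(n, n) + 4I, randn(n, n) + 4I
W = randn(n, n)

# LinearAlgebra.sylvester(A, B, C) solves A*X + X*B + C = 0 (note the sign),
# so E_Y*X + X*E_X^T = W requires negating the right-hand side:
X = sylvester(EY, Matrix(transpose(EX)), -W)

# Residual should be near machine precision:
maximum(abs, EY * X + X * transpose(EX) - W)
```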