Stationary distribution with sparse transition matrix

Dear All,

I would like to ask for help/advice with the following problem. I am solving a Huggett-type model (stationary distribution/transition) using the transition-matrix approach (one AR(1) and one IID shock). In the first version, I solved for the stationary distribution by brute force (raising the matrix to the power 3000). Unfortunately, this approach becomes very costly for finer discretizations of the shock processes, when the matrix grows to 7000×7000. The classical eigendecomposition using LinearAlgebra.eigen is also quite slow (it solves for all eigenvalues).

I would like to ask which method of solving for the stationary distribution (finding the eigenvector of the transpose of the transition matrix associated with the unit eigenvalue) would be most efficient, given the following sparsity pattern of the transition matrix.

Any advice/suggestion would be welcome!


Since you know the eigenvalue, solving for an eigenvector is just a linear solve. If you fix one of the eigenvector components to 1, say the first, then Mx = x reduces to the system (I - M[2:end, 2:end]) x[2:end] = M[2:end, 1], which is sparse, nonsingular, and smaller by one, and which you can solve with \


For example:

using SparseArrays, LinearAlgebra

# construct a random 10000×10000 sparse Markov matrix M,
# whose columns sum to 1 (like the transpose of your transition matrix)
A = sprand(10000,10000, 1e-2)
M = A ./ sum(A, dims=1)

# find an eigenvector x corresponding to eigenvalue λ = 1,
# normalized so that the first component is 1
x = [1; (I - M[2:end,2:end]) \ Vector(M[2:end,1])]

You can easily check that this indeed produces an eigenvector:

julia> norm(M * x - x) / norm(x)
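
If you then want to use the result as a stationary distribution, normalize it to sum to 1. A minimal sketch, repeating the construction above on a smaller example (the `+ I` term is an illustrative choice to avoid empty columns, and irreducibility is assumed so the eigenvector has a single sign):

```julia
using SparseArrays, LinearAlgebra

# build a small column-stochastic matrix and solve as above
A = sprand(100, 100, 0.1) + I          # + I guards against all-zero columns
M = A ./ sum(A, dims=1)
x = [1; (I - M[2:end, 2:end]) \ Vector(M[2:end, 1])]

# rescale so the entries sum to 1, giving a probability distribution
π = x ./ sum(x)
```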

Isn’t the Arnoldi method more efficient in this case?

I get ~16 seconds for your \ code, and 2.3 seconds for Arnoldi:

julia> using ArnoldiMethod

julia> A = sprand(10000,10000, 1e-2);

julia> M = A ./ sum(A, dims=1);

julia> @time decomp, history = partialschur(M, nev=1);
  2.044695 seconds (154 allocations: 3.144 MiB)

julia> decomp.eigenvalues
1-element Array{Complex{Float64},1}:
 1.0000000000000002 + 0.0im

julia> norm(M * decomp.Q - decomp.Q)
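
To recover the stationary distribution from the Schur factorization, you can take the real part of the leading Schur vector and rescale it. A sketch on a smaller example (assumes the unit eigenvalue is simple, so the first Schur vector is the eigenvector up to scale; the `+ I` term is an illustrative choice to avoid empty columns):

```julia
using ArnoldiMethod, SparseArrays, LinearAlgebra

A = sprand(1000, 1000, 1e-2) + I
M = A ./ sum(A, dims=1)
decomp, history = partialschur(M, nev=1)

# the leading Schur vector spans the eigenspace of λ ≈ 1;
# take its real part and rescale it into a probability distribution
π = vec(real.(decomp.Q[:, 1]))
π ./= sum(π)
```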


  • The \ operation (sparse-direct solve) will likely be faster for a sparsity pattern that is regular rather than random, like the actual one shown in the original post; in general, how fast it is depends on the sparsity pattern.

  • The speed of Arnoldi depends on what the spectrum of the actual matrix looks like: if there are many eigenvalues close to 1, then Arnoldi could take a long time to converge.
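
For comparison, even the original brute-force approach becomes much cheaper if you iterate a sparse matrix–vector product instead of raising the matrix to a power: each step then costs only O(nnz(M)). A sketch of such a power iteration (the function name, tol, and maxiter are illustrative choices, not from this thread):

```julia
using SparseArrays, LinearAlgebra

# plain power iteration: repeatedly apply M to a probability vector
# until it stops changing; each step is one sparse matvec.
function stationary_power(M; tol=1e-12, maxiter=10_000)
    n = size(M, 1)
    x = fill(1.0 / n, n)          # start from the uniform distribution
    for _ in 1:maxiter
        y = M * x                 # columns of M sum to 1, so sum(y) stays 1
        norm(y - x, 1) < tol && return y
        x = y
    end
    return x
end
```

Whether this beats Arnoldi depends on the spectral gap: convergence is slow when the second-largest eigenvalue is close to 1.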


Thank you very much!

So, I don’t have to declare a particular sparsity pattern to use sparse-matrix routines efficiently? My matrix unfortunately doesn’t fit any pattern exactly…

No, you don’t have to declare anything, and the matrices don’t have to have an exactly regular pattern.

My point was simply that sparse-direct algorithms (for \) tend to be more efficient for non-random sparsity patterns, and in general their efficiency varies for matrices with different patterns of nonzeros even if the number of nonzeros is the same.


Thank you very much!

I get a 170x speed-up vis-à-vis my original “brute force” solution method.

