Eigenvalue problem of sparse matrix

With Julia, I created a sparse matrix with the spzeros() function, initialized it with a few statements, and tried to compute its eigenvalues. However, this works well only for small sparse matrices (n < 800); for a slightly larger matrix, I get an error.

The code:

ns = 400 # 800
H = spzeros(Complex128, ns, ns)
#... initialization 
E, x = eigs(H)

The error message after the last statement:

LoadError: Base.LinAlg.ARPACKException("unspecified ARPACK error: 1") while loading In[7], in expression starting on line 1

in aupd_wrapper(::Type{T}, ::Base.LinAlg.#matvecA!#69{SparseMatrixCSC{Complex{Float64},Int64}}, ::Base.LinAlg.##63#70, ::Base.LinAlg.##64#71, ::Int64, ::Bool, ::Bool, ::String, ::Int64, ::Int64, ::String, ::Float64, ::Int64, ::Int64, ::Array{Complex{Float64},1}) at .\linalg\arpack.jl:53
in #_eigs#62(::Int64, ::Int64, ::Symbol, ::Float64, ::Int64, ::Void, ::Array{Complex{Float64},1}, ::Bool, ::Base.LinAlg.#_eigs, ::SparseMatrixCSC{Complex{Float64},Int64}, ::UniformScaling{Int64}) at .\linalg\arnoldi.jl:268
in (::Base.LinAlg.#kw##_eigs)(::Array{Any,1}, ::Base.LinAlg.#_eigs, ::SparseMatrixCSC{Complex{Float64},Int64}, ::UniformScaling{Int64}) at .:0
in #eigs#55(::Array{Any,1}, ::Function, ::SparseMatrixCSC{Complex{Float64},Int64}, ::UniformScaling{Int64}) at .\linalg\arnoldi.jl:78
in (::Base.LinAlg.#kw##eigs)(::Array{Any,1}, ::Base.LinAlg.#eigs, ::SparseMatrixCSC{Complex{Float64},Int64}, ::UniformScaling{Int64}) at .:0
in #eigs#59(::Array{Any,1}, ::Function, ::SparseMatrixCSC{Complex,Int64}, ::UniformScaling{Int64}) at .\linalg\arnoldi.jl:85
in (::Base.LinAlg.#kw##eigs)(::Array{Any,1}, ::Base.LinAlg.#eigs, ::SparseMatrixCSC{Complex,Int64}, ::UniformScaling{Int64}) at .:0
in #eigs#54(::Array{Any,1}, ::Function, ::SparseMatrixCSC{Complex,Int64}) at .\linalg\arnoldi.jl:77
in (::Base.LinAlg.#kw##eigs)(::Array{Any,1}, ::Base.LinAlg.#eigs, ::SparseMatrixCSC{Complex,Int64}) at .:0

Normally error code 1 from ARPACK means that the iterations did not converge for all requested eigenvalues. (Unfortunately some error codes have more than one meaning, hence the “unspecified” in the message, but lack of convergence seems most likely here.) The simplest thing to try is more iterations (maxiter defaults to 300).
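For example (a minimal sketch; note that in current Julia `eigs` lives in the Arpack.jl package and `Complex128` is now spelled `ComplexF64`, while in 0.6 both are in Base):

```julia
using SparseArrays, LinearAlgebra
using Arpack  # provides eigs() in Julia >= 0.7; in 0.6 it is in Base

# A sparse test matrix with a nontrivial spectrum (diagonal 1..n),
# so eigs has something to converge to.
n = 1000
H = spdiagm(0 => ComplexF64.(1:n))

# Raise the iteration cap from the default of 300.
E, x = eigs(H; maxiter=1000)
```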

If the matrices of interest are very badly conditioned (or singular), you might try using one of the shifted versions of eigs() to get the eigenvalues you want.
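Concretely, passing `sigma` puts eigs into shift-and-invert mode: it factorizes H - sigma*I and returns the eigenvalues closest to the shift, which typically converge much faster than interior eigenvalues requested via `which`. A sketch, with a hypothetical shift of 0:

```julia
using SparseArrays, LinearAlgebra
using Arpack

n = 1000
H = spdiagm(0 => ComplexF64.(1:n))

# Find the 4 eigenvalues nearest sigma = 0 (here: 1, 2, 3, 4).
E, x = eigs(H; nev=4, sigma=0.0)
```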

Seems to compute just fine for me.

Can you post the result of versioninfo() and your system’s basic specs?

On my local machine:

Julia Version 0.6.0-dev.0
Platform Info:
OS: Linux (x86_64-pc-linux-gnu)
CPU: Intel(R) Core™ i5-5257U CPU @ 2.70GHz
WORD_SIZE: 64
BLAS: libblas
LAPACK: liblapack
LIBM: libm
LLVM: libLLVM-3.9.0 (ORCJIT, broadwell)

julia> ns = 800
800

julia> H = spzeros(Complex128, ns, ns)
800×800 sparse matrix with 0 Complex{Float64} nonzero entries

julia> E, x = eigs(H)

(Complex{Float64}[0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im],
Complex{Float64}[0.00555082+0.0353065im 0.00155263+0.031264im … -0.0406006+0.0219545im -0.0194948-0.0148421im; 0.0355774-0.00460492im 0.033045-0.0237392im … -0.00682072+0.0198229im 0.0359258+0.0197586im; … ; -0.0337911-0.0112814im -0.0122986+0.0173873im … -0.0240262+0.00768639im 0.0385091+0.0164941im; -0.0162859-0.00448363im 0.0281328+0.0210631im … -0.0376166-0.0209127im -0.0201465-0.0208221im],

6,1,20,Complex{Float64}[0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im  …  0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im])

for dim 1000x1000
julia> H = spzeros(Complex128, ns, ns)
1000×1000 sparse matrix with 0 Complex{Float64} nonzero entries

julia> E, x = eigs(H)

(Complex{Float64}[0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im],
Complex{Float64}[0.0039298+0.0272238im 0.00686543+0.0062161im … -0.0274624+0.0237615im -0.0313381-0.0168453im; 0.0267353+0.0351744im -0.0221038+0.00372655im … -0.0177064+0.000598678im 0.000851107+0.0173708im; … ; -0.0203378+0.00233114im -0.0107058-0.0128493im … 0.0319319-0.0377832im -0.0211287-0.0279336im; -0.0163723-0.022819im 0.0240894-0.0207569im … -0.0381605-0.00545265im -0.00600598-0.00221383im],

6,1,20,Complex{Float64}[0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im  …  0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im,0.0+0.0im])

I found what I had done wrong. I had added some options to the eigs() call:

eigs(H; nev=200, which=:SR, tol=1e-4, ritzvec=false, maxiter=500)

where

size(H)=(2^20, 2^20), 
typeof(H[1,1])=Complex128, 
nnz(H)=9431034.

The matrix is complex symmetric but non-Hermitian; does that make any sense?

I used the sub2ind() function to get better performance, and it works well.
However, the eigs() function is very slow! I tried different values of nev and got these run times:

nev       2    5    10
time (s)  20   54   340

The problem I am trying to solve needs nev=200 at least; I can't imagine how long that will take.
In addition, my workmate's Matlab runs fast with its eigs() function: his code spent only tens of seconds in eigs(), even with nev=200. The matrix he solved is real symmetric (hence Hermitian); is that the difference?

eigs (via ARPACK) uses specialized schemes for the real symmetric case, which is generally better behaved, but has no special code for complex symmetric (or Hermitian). Matlab also makes more use of multi-threading than Julia does presently; that could be significant for matrices as large as yours. Finally, the amount of work needed varies dramatically, depending on the particular arrangement of the eigenvalues; if some are close together, convergence is slower.
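To make the distinction concrete: a complex symmetric matrix equals its transpose but not its conjugate transpose, so `ishermitian` is false and ARPACK must fall back to its general (non-symmetric) complex driver. A minimal sketch:

```julia
using LinearAlgebra

# Equal off-diagonal entries make A symmetric, but they are not
# complex conjugates of each other, so A is not Hermitian.
A = [1.0+2.0im  3.0+4.0im;
     3.0+4.0im  5.0+6.0im]

issymmetric(A)  # true:  A == transpose(A)
ishermitian(A)  # false: A != A'
```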


which=:SR (smallest real part) is also a significantly harder problem than which=:LM (the default, which searches for the largest-magnitude eigenvalues). Anything other than which=:LM is quite difficult for iterative Arnoldi solvers and remains an active area of algorithmic research.
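A sketch of the two requests side by side, on a small diagonal test matrix (the :SR call is given a generous maxiter since it converges far more slowly):

```julia
using SparseArrays
using Arpack

n = 100
H = spdiagm(0 => ComplexF64.(1:n))

# Default: largest-magnitude eigenvalues, the easy case for Arnoldi.
E_lm, _ = eigs(H; nev=6, which=:LM)

# Smallest real part: the hard case; expect many more restarts.
E_sr, _ = eigs(H; nev=6, which=:SR, maxiter=2000)
```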


Thanks for your reply. I have read more details in a textbook on numerical analysis. Maybe I shouldn’t have expected too much of Julia.


Most general linear algebra algorithms are implemented by C/Fortran packages like ARPACK, which Python/R/MATLAB etc. call as well, so since they are all running pretty much the same code, the performance will be pretty much the same.

But while you can expect it to be easy to develop a performant library from a research-paper algorithm that handles your specific problem really well, you can never rely on general algorithms to be magic. Check around in numerical analysis texts: if your problem comes from some kind of PDE, it may have special structure that someone has built a fast algorithm for (this is a large area of numerical linear algebra research). You can then write their loop in Julia and get pretty much 1x the speed of a (probably non-existent) C implementation, provided you write it type-stable. That's what you can expect from Julia.
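As a sketch of what "writing the loop yourself" looks like, here is a type-stable sparse matrix-vector product, the kernel that dominates the cost of eigs (a hypothetical helper name, not a library function):

```julia
using SparseArrays

# y = A*x for a SparseMatrixCSC, as an explicit loop over stored entries.
# Every variable has a concrete type, so the compiler emits tight code.
function spmatvec(A::SparseMatrixCSC{Tv,Ti}, x::Vector{Tv}) where {Tv,Ti}
    y = zeros(Tv, size(A, 1))
    rows = rowvals(A)     # row index of each stored entry
    vals = nonzeros(A)    # value of each stored entry
    for j in 1:size(A, 2)
        xj = x[j]
        for k in nzrange(A, j)   # stored entries of column j
            y[rows[k]] += vals[k] * xj
        end
    end
    return y
end

A = sprand(50, 50, 0.1)
x = rand(50)
```

Calling `spmatvec(A, x)` should match the built-in `A * x` up to rounding.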
