Internal manipulation of complex hermitian matrix / explain the use of "RealHermSymComplexHerm" in symmetric.jl


I think Julia handles matrices with complex elements correctly.

My task is to modify the spectrum of a Hermitian matrix H and return just the matrix with the modified spectrum. That is, I have a function f(real_vec) -> real_vec that modifies the spectrum s(H) of a Hermitian matrix H = U diag(s(H)) U', and I need the result f(H) = U diag(f(s(H))) U'. I think it is possible to optimize by not computing eigfact(H) explicitly.
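For reference, the baseline computation being avoided can be sketched with a full eigendecomposition. This is only a sketch under stated assumptions, not the poster's actual code; `eigmodif` is a made-up name, and `eigfact` from Julia 0.6 is spelled `eigen` on Julia ≥ 0.7:

```julia
using LinearAlgebra  # Hermitian, Diagonal, eigen

# Apply f to the spectrum of a Hermitian matrix:
# f(H) = U * diag(f(s(H))) * U'.
# This is the straightforward eigendecomposition-based version
# that the question hopes to optimize away.
function eigmodif(f, H::Hermitian)
    F = eigen(H)  # was eigfact(H) on Julia 0.6
    Hermitian(F.vectors * Diagonal(f.(F.values)) * F.vectors')
end
```

For example, `eigmodif(x -> x^2, H)` agrees with `H * H` up to round-off.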

Therefore I tried to write my own eigmodif based on Julia's implementation of eigfact. That was difficult because I got lost at line 4816 of lapack.jl, where syevr() is wrapped.

I need to understand where and how Julia converts a COMPLEX HERMITIAN matrix to a REAL SYMMETRIC one. Theoretically it is possible, since there is a 2n-by-2n matrix J that squares to minus the identity; any n-by-n COMPLEX HERMITIAN matrix H can then be turned into real(H)·I + imag(H)·J, or in block form

[ real(H)  -imag(H) ]
[ imag(H)   real(H) ]
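This block construction is easy to verify numerically. A minimal sketch (`realembed` is a hypothetical helper name, not anything in Base):

```julia
using LinearAlgebra

# Embed a complex Hermitian H into the real 2n-by-2n matrix
# [ real(H)  -imag(H) ]
# [ imag(H)   real(H) ]
# For Hermitian H, real(H) is symmetric and imag(H) is antisymmetric,
# so the result is real symmetric, and its spectrum is that of H with
# every eigenvalue appearing twice.
realembed(H::AbstractMatrix) = [real(H) -imag(H); imag(H) real(H)]
```

The doubling of each eigenvalue is the price of the embedding, which is one reason libraries call complex routines directly instead.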

But how does Julia do this?


Julia has a type Symmetric used to represent matrices A = Aᵀ, and a type Hermitian used to represent matrices A = Aᴴ (conjugate transpose). For real matrices these two concepts coincide, so RealHermSymComplexHerm is used for methods that apply to both types when the element type is real, or to Hermitian matrices when the element type is complex.
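Concretely, all three wrapper cases are matrices equal to their own conjugate transpose, which is why one union type can cover them. A small illustration (the commented-out alias is reproduced from memory of symmetric.jl; the exact type parameters may differ between Julia versions):

```julia
using LinearAlgebra

# The alias in symmetric.jl looks approximately like:
#     const RealHermSymComplexHerm{T<:Real,S} =
#         Union{Hermitian{T,S}, Symmetric{T,S}, Hermitian{Complex{T},S}}
# i.e. real Symmetric, real Hermitian, and complex Hermitian matrices
# can all share one method implementation.
A = rand(3, 3)
B = rand(ComplexF64, 3, 3)
S  = Symmetric(A + transpose(A))  # real symmetric
Hr = Hermitian(A + A')            # real Hermitian (same thing for real eltype)
Hc = Hermitian(B + B')            # complex Hermitian
```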

Usually eigfact will be a good way to compute matrix functions like this in the case of Hermitian matrices. (All of the complications come for non-Hermitian matrices, where more complicated methods are needed because they may be close to defective.)

Julia does no such conversion, as far as I know. If you have a complex Hermitian matrix, it calls complex LAPACK routines like cheev.


You might want to look at how matrix functions like exp or sin are computed in the Julia Base library. They use eigfact for Hermitian or real-symmetric matrices.


Thanks Steve! The answers are quite helpful. I am still confused because experiments show that LAPACK.cheev is not called even if the matrix is of type Matrix{Complex{Float64}}.

Here is the experiment code:

H = rand(Complex128,100,100)
H = H + H'
@profile eigfact(Hermitian(H))

Results show that line 4992 of lapack.jl is called by eigfact() at line 293 of symmetric.jl.

But typeof(Hermitian(H)) is
I don’t see cheev…

My Julia version is 0.6.1


I found some papers on the problem I want to solve. Basically they use Chebyshev polynomials to approximate a rescaled f(x) and use the recurrence formula for ChebyshevT[n, x] to evaluate a series of matrix polynomials, then combine them with the appropriate coefficients.
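That recurrence, T₀(X) = I, T₁(X) = X, Tₙ₊₁(X) = 2X·Tₙ(X) − Tₙ₋₁(X), can be sketched as follows. This is a hypothetical helper, assuming X has already been rescaled so its spectrum lies in [-1, 1] and the Chebyshev coefficients `c` of f are given (e.g. from a Chebyshev fit):

```julia
using LinearAlgebra

# Evaluate sum_{k} c[k] * T_{k-1}(X) via the Chebyshev recurrence
# T_{n+1}(X) = 2X*T_n(X) - T_{n-1}(X), starting from T_0 = I, T_1 = X.
# Assumes the spectrum of X has been rescaled into [-1, 1].
function cheb_matpoly(c::AbstractVector, X::AbstractMatrix)
    n = size(X, 1)
    T0 = Matrix{eltype(X)}(I, n, n)
    length(c) == 1 && return c[1] * T0
    T1 = X
    S = c[1] * T0 + c[2] * T1
    for k in 3:length(c)
        T0, T1 = T1, 2X * T1 - T0
        S += c[k] * T1
    end
    S
end
```

Only matrix-matrix (or, if applied to a vector, matrix-vector) products with X are needed per term, which is the attraction for large matrices.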


The LAPACK solver is zheevr, and it is getting called at lapack.jl:4992. The LAPACK solver does both the reduction to (real symmetric) tridiagonal form and the actual eigendecomposition of the symmetric tridiagonal matrix. Maybe you are looking for just the reduction to real symmetric tridiagonal, but unfortunately that is one of the relatively few LAPACK subroutines that we don't provide.


Hi Andreas,
thanks for the reply. A follow-up question: if we were able to call the tridiagonal reduction routine in LAPACK, would it be easy to get the similarity-transform matrix used in the reduction?

If so, then there could be an approximate matrix function routine for very large (not necessarily sparse!) matrices, using the Chebyshev approximation formula and operating on the reduced tridiagonal matrix. Chebyshev converges very fast for relatively smooth functions, which means the tridiagonal matrix won't get too wide during the iterative calculation of ChebyshevT[n, X].

If not, then I don’t know what to do with my problem other than a complete eigfact computation :frowning:

Best regards,
Yunlong Lian


Hi Lian

I'm not sure you'll benefit much from that approach, since the unitary similarity transform is the more costly part of the decomposition. If you want to try it out, you can use the function I have in my package. Be aware that its main application is BigFloats, so it is not blocked and therefore not as fast as LAPACK. However, it is not too inefficient.