Oops. I had sort of missed that you needed a particular first column of Q. It had looked more like you were just surprised by the inconsistency. I’m a numerical analyst, not a physicist, so I’m probably living dangerously by trying to pull anything out of the paper you linked to, but it seemed to be using Lanczos for computing quadrature rules. Normally you truncate the iteration early and work with a tridiagonal matrix that is much smaller than the original matrix. (Truncation of this type is mentioned on page 7 of the paper.) In that case, Lanczos is the standard tool. And Lanczos does address any concern about the first column of Q: you get to choose it.
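For concreteness, here is a bare-bones Lanczos sketch (the function name is mine, and it has no reorthogonalization, which you would want in practice): you hand it the starting vector, and that becomes the first column of the basis.

```julia
# Bare-bones Lanczos, no reorthogonalization. A is hermitian, q1 is the
# starting vector you choose, k is the number of steps. Returns an n×k
# basis Q with Q[:, 1] = q1/norm(q1) and a k×k SymTridiagonal ≈ Q'*A*Q.
using LinearAlgebra

function lanczos(A, q1, k)
    n = length(q1)
    S = float(eltype(A))
    Q = zeros(S, n, k)
    α = zeros(real(S), k)                  # diagonal of the tridiagonal
    β = zeros(real(S), k - 1)              # subdiagonal
    Q[:, 1] = q1 / norm(q1)                # the first column is yours to pick
    for j in 1:k
        w = A * Q[:, j]
        j > 1 && (w -= β[j-1] * Q[:, j-1])
        α[j] = real(dot(Q[:, j], w))       # real since A is hermitian
        w -= α[j] * Q[:, j]
        j == k && break
        β[j] = norm(w)                     # a tiny β[j] signals breakdown
        Q[:, j+1] = w / β[j]
    end
    return Q, SymTridiagonal(α, β)
end
```

Running k = n steps gives a full basis and a full tridiagonal in exact arithmetic, but in floating point the columns of Q drift away from orthogonality unless you reorthogonalize, which is exactly the stability issue with using plain Lanczos for the full decomposition.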
However, I have the impression you need the full basis for reasons that go beyond computing a quadrature rule? In that case, a full tridiagonal reduction does give you the full basis and the full tridiagonal matrix, with better stability as well. But the first column is tricky. In most applications where you want the full decomposition, you don’t really care about the first column of Q. Having it be a multiple of e_1 does happen naturally in most implementations, but it’s not usually something anyone tries to guarantee. Hence the behavior you observed.
As @stevengj mentioned, in doing this with `hessenberg` you would be depending on undocumented behavior, which is a bad idea if you want to use your code over the long term or even distribute it. LAPACK does document the structure of the decomposition clearly enough that you can infer (even though they don’t state it explicitly) that the first column is e_1 if you use the general (non-hermitian) routine, or the hermitian routine with the matrix stored in the lower triangular part of A. The catch is that those details don’t seem to be similarly documented on the Julia side. Other than the fact that it is mathematically a Hessenberg factorization, what is guaranteed seems to be just that `hessenberg` returns a `Hessenberg` object that contains a `HessenbergQ` object. The internal structure of the `HessenbergQ` object is not specified. So, while LAPACK is safe to use this way, Julia could potentially stop using LAPACK for this and return something different for `HessenbergQ`.
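Concretely, the distinction is between these two checks; the first is documented, the second just happens to hold with the current LAPACK-backed implementation:

```julia
using LinearAlgebra

A = randn(6, 6)
F = hessenberg(A)          # F isa Hessenberg; F.Q isa HessenbergQ
Q, H = Matrix(F.Q), F.H
Q' * A * Q ≈ H             # documented: it is a Hessenberg factorization
Q[:, 1] ≈ [1; zeros(5)]    # undocumented: true today via LAPACK, not promised
```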
Controller-Hessenberg form is a joint decomposition of two matrices B and A where A is square and B has the same number of rows as A. In the most basic case, B would be a single column b. The decomposition takes the form
$$
\left[ \begin{array}{cc} Q^* b & Q^* A Q \end{array} \right] =
\left[ \begin{array}{cc} \alpha e_1 & H \end{array} \right]
$$
where Q is unitary and H is Hessenberg. From $Q^* b = \alpha e_1$ we get $Q e_1 = b/\alpha$. So up to scaling, you can make the first column of Q whatever you want. If $\|b\|_2 = 1$, then $|\alpha|=1$ and a rescaling of the columns of Q can guarantee $\alpha=1$. This gets used in linear control system design. It’s basically a Householder-based Hessenberg reduction where you get to choose the direction of the first column of Q. If this were already available in Julia, it might be an easy way to guarantee that first column. But maybe that would be overkill if LAPACK already guarantees it. And I don’t see any obvious libraries that provide this in Julia anyway.
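For illustration, here is roughly what that looks like as an unblocked from-scratch sketch for real matrices (all names are mine, and there are no safeguards, e.g. against zero columns):

```julia
using LinearAlgebra

# Householder vector v with (I - 2vv'/(v'v))*x equal to a multiple of e1.
function householder(x)
    v = float(copy(x))
    v[1] += copysign(norm(x), x[1])        # sign choice avoids cancellation
    return v
end

# Apply P = I - 2vv'/(v'v) to M from the left / right, in place.
reflect_left!(M, v)  = (M .-= v * ((2 / dot(v, v)) * (v' * M)); M)
reflect_right!(M, v) = (M .-= ((2 / dot(v, v)) * (M * v)) * v'; M)

# Controller-Hessenberg: Q'*A*Q ≈ H (Hessenberg) and Q'*b ≈ α*e1,
# so Q*e1 = b/α, i.e. you choose the direction of the first column of Q.
function controller_hessenberg(A, b)
    n = size(A, 1)
    H = float(Matrix(A))
    Q = Matrix{eltype(H)}(I, n, n)
    v = householder(b)                     # step 0: send b to a multiple of e1
    reflect_left!(H, v); reflect_right!(H, v); reflect_right!(Q, v)
    for k in 1:n-2                         # standard Hessenberg reduction;
        v = zeros(n)                       # these reflectors leave e1 alone
        v[k+1:n] = householder(H[k+1:n, k])
        reflect_left!(H, v); reflect_right!(H, v); reflect_right!(Q, v)
    end
    return Q, H, dot(Q[:, 1], b)           # α = (Q'*b)[1]
end

A, b = randn(5, 5), randn(5)
Q, H, α = controller_hessenberg(A, b)
Q' * b ≈ α * [1; zeros(4)]                 # first column of Q ∝ b
norm(tril(H, -2))                          # ≈ 0: H is Hessenberg
```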
If you really need a full decomposition with a hard long-term guarantee that the first column of Q will be e_1, it’s kind of tricky. If it’s throw-away code, I would personally just do the easy thing and go with the undocumented behavior. If you need something a little more stable, you might call LAPACK through the interface documented at LAPACK-functions. The docs only describe the non-hermitian functions: `gehrd!` for computing the Hessenberg reduction and `orghr!` for forming Q from the Householder vectors. You could use those at the cost of a small slowdown in the initial decomposition (the general routine doesn’t exploit hermitian structure); forming Q should be the same speed. The hermitian functions are available but not listed in the docs. And the docs do warn that the LAPACK API is not stable, so maybe that is of no real benefit: you’d be trading an undocumented property for something that is documented but comes with a warning that it might change.

You could try writing your own LAPACK interface, but that depends on low-level FFI details, so maybe that’s not stable either; I haven’t used the Julia FFI before, so I don’t know. Also, if you haven’t called LAPACK directly before, it can be error-prone and is not very enjoyable; you really need to know the underlying algorithms to make sense of what’s going on. Finally, if you have done this sort of thing before, it’s not hard to throw together a rudimentary version of the algorithm from scratch in Julia (the controller-Hessenberg sketch above is essentially that). But you are going to take a very big hit in speed, just from losing the blocking and the corresponding BLAS operations that LAPACK uses, even if you do everything right in terms of writing otherwise efficient Julia code.
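For reference, the documented-wrapper route looks something like this (subject to the API-stability warning above); the e_1 property here is the one you can infer from LAPACK’s own documentation of dgehrd/dorghr:

```julia
using LinearAlgebra
using LinearAlgebra: LAPACK

A = randn(6, 6)
F = copy(A)
F, tau = LAPACK.gehrd!(1, 6, F)   # Hessenberg factor + Householder vectors in F
H = triu(F, -1)                   # extract H before F is overwritten
Q = LAPACK.orghr!(1, 6, F, tau)   # accumulate Q in place (overwrites F)
Q' * A * Q ≈ H                    # the factorization
Q[:, 1] ≈ [1; zeros(5)]           # first column e1, per the LAPACK docs
```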
It does seem like this falls between the cracks of what’s available in Julia in a stable, documented way. If you really need it, it might come down to picking the lesser of these evils.