Getting the corresponding eigenvectors in arpack

It might help to see the actual paper you are looking at, if only to establish consistent notation and to make sure I don’t have misconceptions about what you are doing. I didn’t see a link or citation in your previous posts, and general descriptions without notation and context are open to misunderstanding and take a lot more work to respond to. I could probably be more specific if I took a look at what you are reading.

With that said, what you wrote looks like a partial-fraction representation of the reduced model’s transfer function. You shouldn’t expect that to be the same for the reduced model and the original system. It goes back to the fact that you are really not using this to approximate eigenvalues and eigenvectors. You are computing a Padé approximant, which should match moments, not eigenvalues. The moments are the Taylor coefficients of the transfer function when you expand in \sigma, and the reduced model’s moments should match those of the large model up to the degree appropriate for Padé approximation. If you are matching moments, your code is working; if you are not, it isn’t. The quality of the approximation to your original transfer function can vary even when the code is working, but you can worry about that after you have a Padé approximant.
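If it helps, here is a rough sketch of what that moment check looks like. This is Python/NumPy rather than Julia, with a plain no-look-ahead two-sided Lanczos of my own and a made-up random system, so treat it as an illustration of the math rather than the paper’s exact algorithm: the first 2q moments l^T A^k r of the full model should agree with the reduced moments (l^T r)·(T_q^k)[1,1].

```python
import numpy as np

def nonsym_lanczos(A, r, l, q):
    """Plain two-sided (nonsymmetric) Lanczos, no look-ahead.
    Returns the q x q tridiagonal T with W^T A V = T and W^T V = I.
    A toy sketch: it will break down if any gamma or beta hits zero."""
    n = len(r)
    V = np.zeros((n, q))
    W = np.zeros((n, q))
    alpha = np.zeros(q)
    beta = np.zeros(max(q - 1, 0))   # subdiagonal of T
    gamma = np.zeros(max(q - 1, 0))  # superdiagonal of T
    V[:, 0] = r / np.linalg.norm(r)
    W[:, 0] = l / (l @ V[:, 0])      # scale so that w1^T v1 = 1
    for j in range(q):
        Av = A @ V[:, j]
        alpha[j] = W[:, j] @ Av
        if j == q - 1:
            break
        # three-term recurrences for the right and left sequences
        vt = Av - alpha[j] * V[:, j] - (gamma[j - 1] * V[:, j - 1] if j > 0 else 0.0)
        wt = A.T @ W[:, j] - alpha[j] * W[:, j] - (beta[j - 1] * W[:, j - 1] if j > 0 else 0.0)
        beta[j] = np.linalg.norm(vt)
        V[:, j + 1] = vt / beta[j]
        gamma[j] = wt @ V[:, j + 1]  # keeps w_{j+1}^T v_{j+1} = 1
        W[:, j + 1] = wt / gamma[j]
    return np.diag(alpha) + np.diag(beta, -1) + np.diag(gamma, 1)

rng = np.random.default_rng(0)
n, q = 8, 3
A = rng.standard_normal((n, n)) / n  # arbitrary small test matrix
r = rng.standard_normal(n)           # right starting vector
l = rng.standard_normal(n)           # left starting vector

T = nonsym_lanczos(A, r, l, q)
scale = l @ r  # the zeroth moment; reduced moments are scale * (T^k)[0,0]
for k in range(2 * q):  # 2q moments should match
    m_full = l @ np.linalg.matrix_power(A, k) @ r
    m_red = scale * np.linalg.matrix_power(T, k)[0, 0]
    print(k, m_full, m_red)
```

If the two columns printed there disagree beyond roundoff for k < 2q, something in the Lanczos is wrong; disagreement for k ≥ 2q is expected and harmless.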

This is the paper I’m working from

http://www.cisl.columbia.edu/courses/spring-2002/ee6930/papers/00384428.pdf

and I don’t expect that.

this is true for the AWE method, which, by the way, I have implemented and which works perfectly well.
However, this method calculates the partial-fraction expansion terms from the eigenvalues and eigenvectors of the reduced matrix T_q, which is calculated via Lanczos. It will be easiest to take a quick look at the paper, in particular eq. (30) and (31).
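Concretely, here is what I mean by getting the partial-fraction terms from T_q, as a hedged Python/NumPy sketch in my own notation (not necessarily the paper’s exact eq. (30) and (31), and with a made-up T_q and scaling): diagonalize T_q, read the poles off its eigenvalues and the residues off its eigenvectors, and check against direct evaluation of the reduced transfer function.

```python
import numpy as np

# A made-up q x q tridiagonal standing in for the Lanczos reduced matrix T_q.
# The numbers are arbitrary; only the algebra is the point.
T = np.array([[1.0, 0.5, 0.0],
              [0.3, 2.0, 0.7],
              [0.0, 0.4, 3.0]])
q = T.shape[0]
sigma = 0.1   # arbitrary evaluation point
scale = 2.5   # stands in for the l^T r scaling of the reduced transfer function

# Direct evaluation: H(sigma) = scale * e1^T (I - sigma T)^{-1} e1
e1 = np.zeros(q)
e1[0] = 1.0
H_direct = scale * np.linalg.solve(np.eye(q) - sigma * T, e1)[0]

# Partial-fraction form via the eigen-decomposition T = S diag(mu) S^{-1}:
# H(sigma) = scale * sum_j S[0,j] * (S^{-1})[j,0] / (1 - sigma * mu_j)
mu, S = np.linalg.eig(T)
residues = S[0, :] * np.linalg.inv(S)[:, 0]
H_pf = scale * np.sum(residues / (1 - sigma * mu))

print(H_direct, np.real(H_pf))
```

The two evaluations should agree to roundoff; the partial-fraction form is what you get once you have the eigenvalues and eigenvectors of T_q in hand.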

HA! Bug in my code.

It turns out that there is an “l”, lowercase L, in the paper, and it looks EXACTLY like a “1”, the number 1.

So I was using a vector of ones in place of the vector l, which is simply a vector with a 1 in the correct location. I had noticed this potential gotcha early on, but somehow convinced myself the Lanczos algorithm really intended a vector of ones.

I was wrong, it was the vector “l”.
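For anyone else who hits the same trap, the difference in code terms is trivial but it cost me real time (Python sketch; the index is an arbitrary example):

```python
import numpy as np

n = 5
x = np.array([10.0, 20.0, 30.0, 40.0, 50.0])  # some state vector

# The paper's l: a unit vector with a 1 at the output of interest
# (index 2 here is just an arbitrary example, not from the paper).
l = np.zeros(n)
l[2] = 1.0

ones = np.ones(n)  # what I was mistakenly using

print(l @ x)     # selects the single entry x[2]
print(ones @ x)  # sums every entry -- a completely different quantity
```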

For the problems I’m currently using for testing, the simplified algorithm from the paper works perfectly. My problems will soon become more complicated and then it won’t work perfectly, or probably at all :laughing:

However, I am still interested in answering my original question, which is: given that I don’t want to use my own Lanczos, how do I use KrylovKit or Arpack to get the information I need? It doesn’t seem to be possible, since I need the T matrix, which is generated from values calculated “internally” during the Lanczos process.

I’ll do some more reading and see if I can figure it out.

I think it’s likely a bit worse than T being hidden internally. I think that with those packages you are looking at the wrong algorithms. The common use cases for Krylov methods are computing eigenvalues and solving linear systems. You are using nonsymmetric Lanczos to compute a Padé approximation for model reduction, which is, relatively speaking, a niche application.

To expand on that a little, eigenvalue computations mostly don’t use nonsymmetric Lanczos. Restarted Arnoldi is standard and I think that’s all that is in ARPACK, hence the “AR” in the name. Most Lanczos packages for computing eigenvalues are going to be for the symmetric case and will also likely use restarts. You need to avoid restarts for your problem. Restarts change the starting vectors to try to get convergence of eigenvalues. You have to keep fixed starting vectors since they are determined by the state space model for your unreduced system. I also didn’t see nonsymmetric Lanczos in KrylovKit.

Nonsymmetric Lanczos is used by iterative linear system solvers as a back-end computation driving methods like QMR and BiCG. However, those methods often avoid look-ahead. I think if they encounter near-breakdown, they tend to give up and do a restart, taking a hit on convergence but making everything much simpler. So many of those codes won’t work for you either.

Unless you find something specifically focused on state space model reduction, I think you are going to have a hard time finding an implementation of look-ahead nonsymmetric Lanczos that isn’t tied to things that are less relevant to your use. The code @platawiec linked to does look like something you could use. It includes a look-ahead nonsymmetric Lanczos bundled with some variations that are useful for QMR. Specifically, it computes two extra sequences of A-biorthogonal vectors (matrices with look-ahead), p_j and q_j, in addition to the biorthogonal v_j and w_j. This helps with QMR and allows the use of coupled two-term recurrences instead of the three-term recurrences you have used for v_j and w_j. (At least I think that’s probably what’s going on; it’s a standard approach in QMR, but I didn’t dig too deep into the code.) You could probably use that code, ignore the extra sequences, and pull what you need out of it. It’s not exactly a prepackaged solution for your problem, but it undoubtedly beats doing it all yourself.
