Well that’s embarrassing; you’re right of course. I hadn’t read the paper carefully since I was familiar with Petschow’s earlier work, which didn’t use the newer reduction scheme.

I’d like to pick up the question of why you need *all* the eigenvalues.

Another poster already remarked:

Indeed, the reason why everyone is screaming here so much about that point is that “give me all the eigenvalues” is an ill-posed question in the context of PDE-style operators (and I think you said that this is your setting?).

The spectral theorem gives you a projection-valued measure. If you don’t care about the eigen-basis, then you just count dimensions, i.e. you have a counting measure (“in this subset of the complex plane, there are 43 eigenspace dimensions”).

You cannot list all the eigenvalues on your finite computer, but you can approximate the spectral counting measure with something finite.
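To make this concrete with a toy example (entirely hypothetical — a standard 3-point finite-difference Laplacian on an interval, whose discrete eigenvalues are known in closed form), one way to approximate the spectral counting measure with something finite is a histogram of the computed eigenvalues:

```python
import numpy as np

def laplacian_eigs(n):
    """Eigenvalues of the 3-point finite-difference Laplacian on (0, pi)
    with Dirichlet BCs and n interior points (a stand-in for 'some PDE
    operator'); the continuum eigenvalues are k^2, k = 1, 2, 3, ..."""
    h = np.pi / (n + 1)
    k = np.arange(1, n + 1)
    return (2 - 2 * np.cos(k * h)) / h**2

def counting_measure(eigs, edges):
    """Finite approximation of the spectral counting measure:
    'in this bin, there are this many eigenspace dimensions'."""
    counts, _ = np.histogram(eigs, bins=edges)
    return counts
```

The point of the exercise: run it at two different resolutions and the bin counts on the lower part of the spectrum agree, even though the eigenvalue *lists* have different lengths — the counting measure is the thing that converges.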

Indeed, all well-posed questions *must* work well with such an approximation, by definition of *well-posed*.

Conversely, everything that you can possibly learn about a PDE from some finite-element approximation must be a well-posed question.

A different way of looking at this is: suppose you have your code that takes a list of N eigenvalues, and does whatever you care about with it. Now suppose that you won the cluster-time lottery, could run on a 2x finer grid, and now have a list of 8N eigenvalues in hand. How would you preprocess these 8N eigenvalues to feed them into your code that can only handle N eigenvalues?
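One hypothetical answer to that question (a sketch only — `coarsen_spectrum` and its quantile-binning strategy are an invented illustration, not anything proposed in the thread) is to compress the long list into weighted representatives:

```python
import numpy as np

def coarsen_spectrum(eigs, n_out):
    """Hypothetical preprocessing step: compress a long list of
    eigenvalues into n_out (center, multiplicity) pairs by averaging
    within quantile bins, so downstream code that expects n_out
    eigenvalues still sees roughly the same distribution."""
    eigs = np.sort(np.asarray(eigs, dtype=float))
    chunks = np.array_split(eigs, n_out)
    centers = np.array([c.mean() for c in chunks])
    weights = np.array([len(c) for c in chunks])
    return centers, weights
```

If no such preprocessing step makes sense for your application, that is itself useful information: it suggests the question really does depend on the discretization, not on the PDE.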

So we suggest that you tell us a little more about what you need *all* eigenvalues for. Then we can think about what notion of approximation fits for it, and then we can think about whether there are good algorithms.

The important things are:

- Of course every eigenvalue is allowed to be a little bit off, there is a tolerance!
- Of course the permitted tolerance depends on the point! Similar to how it’s baked into the very concept of *floating*-point numbers.
- Of course there is an infinite part of the spectrum (where your infinite dimensions live and your eigenvalues accumulate): for unbounded operators at infinity, for compact operators at zero.
- For well-posedness you need to choose tolerances (e.g. a distance function on the complex plane) such that, for any epsilon>0, there exists a finitely supported approximation of your spectrum up to epsilon (the finite approximation will of course contain one or more points where it says “here the counting measure is infinite!”).
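To make the last bullet concrete with a made-up compact operator whose eigenvalues are 1/n^2 (so they accumulate at zero), here is what a finitely supported approximation up to epsilon could look like:

```python
def finite_support_approx(eps):
    """Sketch for an invented compact operator with eigenvalues 1/n^2:
    list every eigenvalue larger than eps individually, and collapse
    the infinitely many below eps into the accumulation point 0, where
    the counting measure is reported as infinite."""
    n = 1
    isolated = []
    while 1.0 / n**2 > eps:
        isolated.append(1.0 / n**2)
        n += 1
    # every eigenvalue with 1/n^2 <= eps lies within eps of the
    # accumulation point 0, so one atom of infinite mass suffices
    return isolated, (0.0, float("inf"))
```

For any eps > 0 this returns a finite list plus one "infinite mass" atom — which is exactly the finitely supported approximation the bullet asks for.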

Thanks for bringing this up; I had been meaning to circle back to this from the discussion of computational methods.

Another way of looking at it is that, for a PDE, your computed eigenvalues are *not even a subset* of the “true” PDE eigenvalues. In any finite calculation, a large fraction of the eigenvalues will probably not be converged (with respect to mesh resolution etc.).

(A separate issue is that PDE models themselves are usually approximations of the true physics. Either the PDE or its parameters — material coefficients etc. — are often only accurate for a finite portion of the spectrum. And in an open/infinite system with a continuous spectrum you sometimes need further approximations to get a finite domain, e.g. you may only be able to compute the LDOS, which can be done via the Green’s function / resolvent operator rather than eigenvalues.)

Of course, you can devise synthetic finite-dimensional models, like finite ball-and-spring phonon systems, and ask formally well-posed questions about all the eigenvalues, at which point the challenge becomes purely mathematical/computational.

I don’t want to speculate too much about @rafaelbailo’s problem (especially as he has mostly stayed out of the conversation), just emphasize a few things that one might want to think about in such contexts. (If there is some other exceptional circumstance here, I would be fascinated to hear about it.)

Well, the only answers in the thread are variations on “you shouldn’t do that, but let me show you how good I am at doing something unrelated”, which, to me, is the perfect definition of “rambling”. Don’t you think so?

As for my language, well, so far I’ve insulted nobody, and it seems to me that I was never off topic, so I don’t see the problem with your FAQ.

And well, if you want more context: we have a technique called *Projective Integration* which integrates PDEs explicitly, but, to be parametrized, it needs knowledge of the full spectrum of the problem. Not part of the spectrum.

Me being a mathematician, I tend to do the usual “the question you’re asking may not be the question you should ask”. I’m sure that you know a lot of people with the same professional deformity of character.

I get that this can be aggravating to some people. Instead of getting angry, you should just give us more context on what you need, and why you believe you need it. The entire discussion could have been less acrimonious if you had started off with e.g. "hey guys, I know this question might seem weird but we need the full spectrum of a so-and-so operator coming from a so-and-so PDE. This is because … [e.g. hypothetical: … because we don’t want to study the PDE, we instead want to study the behavior of this specific discretization that many people use, especially to study how the distortions that are introduced at wavelengths close to the grid size impact higher-level results]."

And you should be open-minded to the possibility that there are shortcuts that you did not think about, and that these might be hiding in the gaps where your question is (intentionally?) vague and missing context.

Can you elaborate on what “knowledge of the full spectrum” means for your purposes? What is “a spectrum” for you, and what is the notion of accuracy / approximation that you need? Can you formulate this in a way that embeds all the different N, preferably such that N->\infty permits finite-support approximations?

To give a hypothetical style of answer that would open the door to clever tricks:

Ok, I have an accuracy \epsilon that comes from the discretization error (no computation needs to be better than the discretization error we incurred anyway!). We formulated the problem such that the spectrum is uniformly bounded away from \infty. We don’t care about dimensions, we just want to know where the spectrum is. That means: we want to identify all points with distance > epsilon from an eigenvalue of the matrix, and we want to identify all points with distance < epsilon/2 from an eigenvalue; for points with epsilon/2 < dist < epsilon we don’t care (i.e. these may be misidentified, and this is where numerical inaccuracies and your clever algorithms should come in!). Here is a plot for 3 small values of N, so that you can see some qualitative behavior.

This is related to @stevengj 's remark … One specific trick I’ve used in the past is that the numbers of positive, negative, and zero eigenvalues are preserved by any *congruence* transform of a symmetric matrix (this is Sylvester’s law of inertia). I.e. if A is symmetric and L is any non-singular matrix, then L * A * L^T will have the same number of positive, negative, and zero eigenvalues as A.

How this helps: if you have any method like MA57 that computes an L D L^T factorization of A, then the numbers of positive and negative eigenvalues can easily be read off from D. (Which, for MA57, can have 2x2 blocks…)
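A minimal sketch of the trick, using SciPy’s `scipy.linalg.ldl` as a stand-in for MA57 (dense rather than sparse, but the inertia logic is the same, including the 1x1/2x2 blocks in D):

```python
import numpy as np
from scipy.linalg import ldl

def inertia(A):
    """Numbers of (negative, zero, positive) eigenvalues of symmetric A,
    read off the 1x1/2x2 diagonal blocks of an L D L^T factorization
    (Sylvester's law of inertia)."""
    _, d, _ = ldl(A)
    n = d.shape[0]
    neg = zer = pos = 0
    i = 0
    while i < n:
        if i + 1 < n and d[i + 1, i] != 0:     # 2x2 block
            a, b, c = d[i, i], d[i + 1, i], d[i + 1, i + 1]
            det = a * c - b * b
            if det < 0:                         # one of each sign
                neg += 1; pos += 1
            elif det > 0:                       # both share sign of trace
                pos += 2 if a + c > 0 else 0
                neg += 2 if a + c < 0 else 0
            else:                               # singular block
                zer += 1
                pos += a + c > 0
                neg += a + c < 0
            i += 2
        else:                                   # 1x1 block
            v = d[i, i]
            neg += v < 0
            zer += v == 0
            pos += v > 0
            i += 1
    return neg, zer, pos

def count_in_interval(A, lo, hi):
    """Number of eigenvalues of A in (lo, hi), without computing any of
    them: difference of the negative counts of the shifted matrices."""
    I = np.eye(A.shape[0])
    return inertia(A - hi * I)[0] - inertia(A - lo * I)[0]
```

Repeating `count_in_interval` over a grid of shifts gives exactly the kind of eigenvalue histogram the link below computes.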

Here is an example of using this to approximate an eigenvalue histogram: graph_eigs/eighistogram.m at master · dgleich/graph_eigs · GitHub

The downside is that I don’t think MA57 is implemented in Julia anywhere unless you have the HSL.

If you want an explicit routine that does similar things with a polynomial method, look at this paper with Austin Benson and David Bindel: https://www.cs.cornell.edu/~bindel/papers/2019-kdd.pdf