I am wondering if there exists a way to rigorously compute the spectral norm (or at least an upper bound on it) of a matrix (so the usual 2-norm for matrices) using the IntervalArithmetic.jl package (JuliaIntervals/IntervalArithmetic.jl). The matrix I am considering is a matrix of intervals.
IntervalMatrices.jl allows the computation of the L^1 and L^\infty norms (as Steward pointed out) but not the spectral one.
The LinearAlgebra package computes things like opnorm(A) up to floating-point error. You only need interval arithmetic if you need bounds on the roundoff error of the computation.
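For reference, the plain floating-point version is just

```julia
using LinearAlgebra

A = randn(1000, 1000)
σ₁ = opnorm(A, 2)   # largest singular value, accurate up to floating-point error
```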
To get bounds on the roundoff errors, you need interval-arithmetic algorithms to bound the singular values of A (or the eigenvalues of A^* A). Such algorithms exist; see e.g. Hladik (2009), Hladik (2013), Nerantzis & Adjiman (2015), Hartman et al. (2019), and probably others. The Hladik (2013) algorithm is implemented in Julia, and moreover there is a function bound_perron_frobenius_eigenvalue to bound just the maximal eigenvalue of an SPD matrix like A'A, which is precisely what you need for the operator norm. I’m not sure how tight a bound it returns, however; the usual caveats about interval arithmetic potentially returning very loose bounds apply.
It seems that the IntervalMatrices.jl package already provides an opnorm function to bound the matrix norm of an interval matrix, but apparently it currently only supports the induced (operator) L^1 or L^\infty norms.
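As an aside, since those two norms are available rigorously, the classical inequality \|A\|_2 \le \sqrt{\|A\|_1 \|A\|_\infty} already yields a rigorous (though possibly quite loose) upper bound. A minimal sketch, assuming IntervalMatrices.jl's opnorm returns a floating-point upper bound (if it returns an interval in your version, drop the interval(...) wrappers):

```julia
using IntervalMatrices, IntervalArithmetic, LinearAlgebra

# ‖A‖₂ ≤ √(‖A‖₁ ‖A‖∞); do the product and square root in interval
# arithmetic so the final roundings are outward as well.
bound2(AI) = sup(sqrt(interval(opnorm(AI, 1)) * interval(opnorm(AI, Inf))))
```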
Thank you for the reply; indeed, I need a rigorous bound on the roundoff error.
I have two questions:
- the algorithms you cited make it possible to verify an eigenvalue of A^* A, but do they verify that it is actually the largest one?
- for the Perron-Frobenius eigenvalue, doesn’t that work only for elementwise non-negative matrices?
eigenbox apparently encloses all the eigenvalues of a matrix, including the biggest one.
Yes, sorry, this won’t work for general SPD A^*A.
Do you need a lower bound on the operator norm, an upper bound, or both? A lower bound is much easier to compute.
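For what it's worth, here is why the lower bound is easy: for any fixed vector v, \|A\|_2 \ge \|Av\|_2/\|v\|_2, and the right-hand side can be evaluated rigorously. A minimal sketch, assuming IntervalArithmetic.jl (the helper name is mine):

```julia
using IntervalArithmetic, LinearAlgebra

# Certified lower bound on ‖A‖₂, valid for every point matrix A in the
# interval matrix AI. A good v (e.g. from a few floating-point power
# iterations) makes the bound nearly sharp.
function opnorm2_lower(AI::AbstractMatrix{<:Interval}, v::Vector{Float64})
    vi = interval.(v)
    w = AI * vi                          # rigorous interval matrix-vector product
    nrm(x) = sqrt(sum(t -> t^2, x))      # 2-norm evaluated in interval arithmetic
    return inf(nrm(w) / nrm(vi))         # lower end bounds ‖Av‖/‖v‖ for all A
end
```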
The problem is that I am treating huge matrices (10,000 by 10,000), so computing all the eigenvalues would be too expensive. That’s why I am trying to find a method that focuses only on the maximum eigenvalue (or spectral radius), to avoid unnecessary computations.
I need an upper bound for the operator norm. But since my matrix is invertible, I can also compute the minimal eigenvalue of the inverse; a lower bound on that eigenvalue then gives an upper bound for the original problem. I don’t know if this helps.
Do you really need super rigorous bounds on the opnorm of a 10k by 10k matrix? opnorm is already super accurate. Presumably if your system is of size 10k you have much bigger problems than that (e.g. discretization issues).
The thing is that I need the bound to be rigorous. It does not necessarily need to be super sharp (even if that would be better), but it must be rigorous.
One thing you can do to verify a bound is to make sure A'A - \lambda I is negative semidefinite. Don’t know if that helps, but at least it reduces the problem to checking whether an interval symmetric matrix is positive semidefinite, which might be a bit faster? But honestly, life is much easier when you embrace floating-point error.
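That said, if you do want the verified check, a cheap way to do it is Gershgorin's theorem evaluated in interval arithmetic: if every Gershgorin disc of A'A lies below \lambda, then A'A - \lambda I is negative semidefinite for every matrix in the enclosure, so \|A\|_2 \le \sqrt{\lambda}. A minimal sketch (the helper is mine, not a package function); loose, but rigorous:

```julia
using IntervalArithmetic, LinearAlgebra

# Returns true if λ is certified to satisfy λ_max(A'A) ≤ λ, hence ‖A‖₂ ≤ √λ,
# for every point matrix A in the interval matrix AI.
function certifies_upper_bound(AI::AbstractMatrix{<:Interval}, λ::Float64)
    B = AI' * AI                         # rigorous interval matrix product
    n = size(B, 1)
    for i in 1:n
        # upper edge of Gershgorin disc i, accumulated in interval
        # arithmetic so the additions are outward rounded too
        edge = B[i, i] + sum((abs(B[i, j]) for j in 1:n if j != i);
                             init = interval(0.0))
        sup(edge) <= λ || return false
    end
    return true
end
```

The interval product A'A is the expensive part, but it is a plain matrix multiplication, which interval arithmetic handles much more gracefully than a factorization.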
You could try a hybrid approach: just compute the SVD in the normal way and then write out a perturbation bound for the largest singular value. The perturbation theory will involve quantities (e.g. residuals) that require further computation involving A, the singular value, and its singular vectors. You might then worry about rounding errors in that computation undermining the rigor of your bounds. However, you could apply interval arithmetic to computing the residuals and the other quantities in the perturbation bound, which would be quite a bit nicer than applying it to the SVD itself. I haven’t worked seriously with interval arithmetic before, but I did have the impression that the intervals could grow exponentially in computing matrix decompositions. Since the largest singular value is perfectly well conditioned, the perturbation theory reflects that; the residuals etc. won’t be too bad in interval arithmetic, so you would probably get a much better bound than by applying interval arithmetic directly to the SVD. I don’t mind sketching it out if you want to give it a try.
I hope that makes sense. I think it should work well, but I haven’t tried anything like that before and I have more experience with perturbation bounds than interval arithmetic.
Edit: This doesn’t work if you are starting with an interval matrix. But I had the impression from your wording that you might be starting with a matrix of numbers instead of intervals.
In fact I am starting with an interval matrix; my description was not clear…
Hmmm… I guess I was an optimist and headed for the easier problem. A follow-up question would be: is this an interval matrix that represents small intervals added to a larger matrix (i.e. a fixed matrix plus small intervals representing a perturbation)? Or are the intervals really larger?
Exactly, the intervals are pretty small; in practice they are of the order of the machine precision. I am building my matrix in floating point and then turning the entries into intervals before computing the norm, so it is as good as it can get.
I think the details are important here. So, just to make sure, here is my understanding: You have an interval matrix {\bf A} = [\underline{A}, \overline{A}] and you want an interval bound on \|A\|_2 that holds for all A\in {\bf A}? Further, \overline{A} - \underline{A} is of the order of the machine precision. So any computations you do in computing the final interval for \|A\|_2 are going to introduce errors that are comparable to the width of the interval you are trying to compute. So those must be taken into account as well? I’m using notation that I think is fairly standard and is summarized in the 2009 paper by Hladik that @stevengj cited. And the problem is large enough that you want to look at the largest singular value alone and don’t consider computing a full SVD to be a reasonable step in the process?
If you ignore the additional numerical errors and just want a decent upper bound on \|A\|_2, I think sqrt(eigenbox(Symmetric(AI'*AI))) for an interval matrix AI will give that. But they are explicit in saying that “The enclosure is not rigorous, meaning that the real eigenvalue problems solved internally utilize normal floating point computations.” It wasn’t that slow on my very old system. It seemed a little memory hungry, so I couldn’t try 10,000\times 10,000, but it took about 3 minutes for a 7,000\times 7,000.
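Spelled out, what I ran was essentially (with eigenbox from IntervalLinearAlgebra.jl and AI an interval matrix):

```julia
using IntervalLinearAlgebra, IntervalMatrices, LinearAlgebra

# eigenbox returns an interval enclosing all eigenvalues of the symmetric
# interval matrix; per the docs the internal eigensolves are plain floating
# point, so the enclosure is not fully rigorous.
λbox = eigenbox(Symmetric(AI' * AI))
λmax_up = sup(λbox)                        # upper bound on λ_max(A'A)
σ_upper = sup(sqrt(interval(λmax_up)))     # outward-rounded √, bounds ‖A‖₂
```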
If you can deal with a couple of SVDs, I think the perturbation theory approach could handle both the intervals and the numerical errors. Maybe you could apply the same thing to singular values computed using a power method, but that is an additional complication in something that is already a bit tricky.
Your description is exactly what my problem is about.
So any computations you do in computing the final interval for \|A\|_2 are going to introduce errors that are comparable to the width of the interval you are trying to compute. So those must be taken into account as well?
Yes indeed, the standard functions (sum, multiplication, …) are already implemented to be rigorous in the IntervalMatrices and IntervalLinearAlgebra packages, but at some point the width of the intervals will come into play.
And the problem is large enough that you want to look at the largest singular value alone and don’t consider computing a full SVD to be a reasonable step in the process?
Yes, computing a full rigorous SVD would be too much in terms of computation (also, is there a package that enables the computation of a rigorous SVD? The IntervalLinearAlgebra package does not seem to do so). Computing an SVD on a block of the matrix and trying to bound the rest could be an option, but that is still a complicated problem.
If you can deal with a couple of SVDs, I think the perturbation theory approach could handle both the intervals and the numerical errors.
What do you mean by dealing with a couple of SVDs?
If you can afford to compute a couple of ordinary singular value decompositions of (non-interval) matrices A_c = (\overline{A} + \underline{A})/2 and A_\Delta = (\overline{A} - \underline{A})/2, I think you can apply perturbation theory in a way that gives bounds that handle both the numerical errors and the original intervals. A couple of 10,000\times 10,000 SVDs shouldn’t be a deal breaker on modern computers, unless you are doing this lots of times. (Which maybe you are, since you are so worried about performance…)
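To make that concrete, here is a minimal sketch of the kind of bound I have in mind (a Rump-style verified bound built from a floating-point SVD; my own sketch under stated assumptions, not an existing package routine). For a point matrix M with computed thin SVD U\Sigma V^T, we have \|M\|_2 \le \tilde\sigma_1\sqrt{1+\|U^TU - I\|_F}\,\sqrt{1+\|V^TV - I\|_F} + \|M - U\Sigma V^T\|_F, and every residual norm on the right can be evaluated in interval arithmetic. Combined with \|A\|_2 \le \|A_c\|_2 + \|A_\Delta\|_2, which holds for every A in the interval matrix since |A - A_c| \le A_\Delta entrywise, this handles both the intervals and the rounding errors:

```julia
using IntervalArithmetic, LinearAlgebra

# Rigorous upper bound on ‖M‖₂ from an ordinary floating-point SVD of M.
function verified_opnorm2_upper(M::Matrix{Float64})
    F = svd(M)                                  # plain LAPACK SVD
    U, S, V = interval.(F.U), interval.(F.S), interval.(F.V)
    k = length(F.S)
    Id = interval.(Matrix{Float64}(I, k, k))
    frob(X) = sqrt(sum(t -> t^2, X))            # Frobenius norm, ≥ the 2-norm
    rU = frob(U' * U - Id)                      # so ‖U‖₂ ≤ √(1 + rU)
    rV = frob(V' * V - Id)
    rM = frob(interval.(M) - U * Diagonal(S) * V')
    return sup(S[1] * sqrt(interval(1) + rU) * sqrt(interval(1) + rV) + rM)
end

# For the interval matrix with midpoint Ac and radius AΔ (both Float64;
# round AΔ's entries upward when you form it):
# bound = verified_opnorm2_upper(Ac) + verified_opnorm2_upper(AΔ)
```

The interval residual products are the expensive part, but they are ordinary matrix multiplications, which is exactly the situation where interval arithmetic behaves well.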
You could apply similar perturbation theory to just the maximum singular values of A_c and A_\Delta computed using some other method (e.g. the power method or Lanczos). But then you have to worry about whether you converged to the correct singular values. It’s hard to compete with a full matrix decomposition if you want to be absolutely sure. If the second singular value is close to the first and has a slightly larger interval, maybe you really wanted the second singular value anyway. I think you are almost inevitably going to sacrifice some certainty if you aren’t looking at the smaller singular values. If the second singular value is known to be significantly smaller than the first, you might get something that is a strict bound with high probability, barring some terrible choice of starting vector for your iteration.