Speeding up repeated solves of `Ax=b` when `A` is partially updated

There may be a better way, if the changes in Amat from one iteration to the next are “small” (in a sense made more precise below). Back in the stone age of computing (the 1990s), solving linear systems of equations was still pretty expensive even for matrices of order as small as a few hundred to several thousand. My fellow cavemen and I needed to solve many similar systems as we swept a parameter, frequency, over its analysis bandwidth. We adapted a well-known scheme called iterative refinement for this purpose. Here is the basic idea of our modified iterative refinement (MIR) algorithm.

Suppose we wish to solve the matrix equation Ax = b and have already computed the inverse of a matrix A_0 that is "nearly equal" to A (in practice, we solve using the LU decomposition of A_0 rather than explicitly forming its inverse). We compute a series of approximations to x as follows:

\begin{array}{ll} x^{(1)} = A_0^{-1} b, & r^{(1)} = b - A x^{(1)} \\ x^{(2)} = x^{(1)} + A_0^{-1} r^{(1)}, & r^{(2)} = b - A x^{(2)} \\ \hphantom{x^{(1)}} \vdots & \hphantom{x^{(1)}} \vdots \\ x^{(i)} = x^{(i-1)} + A_0^{-1} r^{(i-1)}, & r^{(i)} = b - A x^{(i)} \\ \hphantom{x^{(1)}} \vdots & \hphantom{x^{(1)}} \vdots \end{array}

It is not hard to show that the residuals satisfy r^{(i)} = \left(I - A A_0^{-1}\right) r^{(i-1)} (substitute the update formula for x^{(i)} into the definition of r^{(i)}), so that r^{(i)} = \left(I - A A_0^{-1}\right)^{i-1} r^{(1)} and r^{(i)} \rightarrow 0 if the spectral radius of I - A A_0^{-1} is less than 1. Inverting a matrix of order m (or obtaining its LU decomposition) is an O(m^3) operation, while solving the factored system and forming a matrix-vector product are only O(m^2). Therefore, this algorithm can save execution time if convergence is rapid enough (fewer than approximately m iterations are required).
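
As a quick sanity check on that condition, here is a small NumPy sketch that estimates the spectral radius of I - AA_0^{-1} for a randomly perturbed test matrix. The size m, the base matrix, and the 1% perturbation are made-up illustration values, and for large m you would of course not form A_0^{-1} explicitly:

```python
# Quick check of the convergence condition rho(I - A @ inv(A0)) < 1.
# Illustrative only: m, the base matrix, and the perturbation level are
# made-up test values; for large m you would not form inv(A0) explicitly.
import numpy as np

rng = np.random.default_rng(0)
m = 200
A0 = rng.standard_normal((m, m)) + m * np.eye(m)   # well-conditioned "base" matrix
A = A0 + 0.01 * rng.standard_normal((m, m))        # slightly updated matrix

iteration_matrix = np.eye(m) - A @ np.linalg.inv(A0)
rho = np.max(np.abs(np.linalg.eigvals(iteration_matrix)))
print(f"spectral radius of I - A*inv(A0): {rho:.3e}")   # < 1 => refinement converges
```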

Here is pseudocode for the algorithm from our paper on the subject.
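
For concreteness, the following is a minimal Python sketch of the loop above, not the paper's pseudocode verbatim. It assumes SciPy's `lu_factor`/`lu_solve` for the A_0 factorization, and the function name `mir_solve` along with the `tol` and `max_iter` parameters are illustrative choices:

```python
# Minimal sketch of the modified iterative refinement (MIR) loop described
# above.  Assumptions: SciPy is available; the name mir_solve and the
# tol / max_iter parameters are illustrative, not taken from the paper.
import numpy as np
from scipy.linalg import lu_factor, lu_solve


def mir_solve(A, b, A0_lu, tol=1e-12, max_iter=50):
    """Solve A x = b reusing the LU factorization of a nearby matrix A_0.

    A0_lu is the (lu, piv) pair returned by scipy.linalg.lu_factor(A0).
    Returns (x, converged).
    """
    x = lu_solve(A0_lu, b)                    # x^(1) = A_0^{-1} b
    for _ in range(max_iter):
        r = b - A @ x                         # r^(i) = b - A x^(i)           (O(m^2))
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, True
        x = x + lu_solve(A0_lu, r)            # x^(i+1) = x^(i) + A_0^{-1} r^(i)  (O(m^2))
    return x, False                           # did not converge: refactorize A instead


# Usage sketch for a parameter sweep: factor A_0 once (O(m^3)), then reuse
# the factorization for each slightly updated A in the sweep.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    m = 300
    A0 = rng.standard_normal((m, m)) + m * np.eye(m)
    A0_lu = lu_factor(A0)
    for step in range(5):
        A = A0 + 0.01 * rng.standard_normal((m, m))   # "small" update to A
        b = rng.standard_normal(m)
        x, ok = mir_solve(A, b, A0_lu)
        print(step, ok, np.linalg.norm(A @ x - b))
```

The point of this structure is that A_0 is factored once at O(m^3) cost and then reused across the sweep, with each refinement iteration costing only O(m^2); if the loop fails to converge for some A, the natural fallback is to refactor that A and use it as the new A_0.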

For our application, this technique worked very well, resulting in substantial savings in execution time. Of course, YMMV. Hope it helps.
