Numerical Stability

Hi, recently I’ve been testing the numerical stability of my algorithms.

Let me use QR decomposition as an example. I tried to test the orthogonality of the computed Q (a backward-error measure) with this formula:

norm(Q' * Q - I, 1) / (size(Q, 2) * eps(Float64))

That is to say, the error should be bounded by O(epsilon) after this scaling; in other words, I should expect the quantity above to be a small number, usually around 1 or 2.

I wrote a function to test the orthogonality of QR in Julia:

using LinearAlgebra

function test_qr_orthog()
    for i in 5:20:500
        println("-----------------------------------------")
        println("Input matrix dimension: m = n = ", i)
        A = randn(i, i)
        F = qr(A)
        Q = Matrix(F.Q)
        # Measure how far Q'Q is from the identity, scaled by n * eps.
        println(norm(Q' * Q - I, 1) / (size(Q, 2) * eps(Float64)))
    end
end

And here's the output:

julia> test_qr_orthog()
-----------------------------------------
Input matrix dimension: m = n = 5
2.378354766860583
-----------------------------------------
Input matrix dimension: m = n = 25
14.616908871064213
-----------------------------------------
Input matrix dimension: m = n = 45
18.830272731636825
-----------------------------------------
Input matrix dimension: m = n = 65
21.685212462565303
-----------------------------------------
Input matrix dimension: m = n = 85
25.646955513534657
-----------------------------------------
Input matrix dimension: m = n = 105
30.28030979265741
-----------------------------------------
Input matrix dimension: m = n = 125
32.902360263719956
-----------------------------------------
Input matrix dimension: m = n = 145
36.94465907194912
-----------------------------------------
Input matrix dimension: m = n = 165
38.70942796805594
-----------------------------------------
Input matrix dimension: m = n = 185
43.42462520573639
-----------------------------------------
Input matrix dimension: m = n = 205
46.44515240577629
-----------------------------------------
Input matrix dimension: m = n = 225
50.417073794452115
-----------------------------------------
Input matrix dimension: m = n = 245
54.04077009397869
-----------------------------------------
Input matrix dimension: m = n = 265
57.431942950554614
-----------------------------------------
Input matrix dimension: m = n = 285
60.97257505718031
-----------------------------------------
Input matrix dimension: m = n = 305
61.854275262551226
-----------------------------------------
Input matrix dimension: m = n = 325
66.24078595088078
-----------------------------------------
Input matrix dimension: m = n = 345
67.26613284401272
-----------------------------------------
Input matrix dimension: m = n = 365
69.36284938969024
-----------------------------------------
Input matrix dimension: m = n = 385
71.66907146255691
-----------------------------------------
Input matrix dimension: m = n = 405
73.65597325077763
-----------------------------------------
Input matrix dimension: m = n = 425
77.11433365990133
-----------------------------------------
Input matrix dimension: m = n = 445
79.14065724266379
-----------------------------------------
Input matrix dimension: m = n = 465
82.85539946607364
-----------------------------------------
Input matrix dimension: m = n = 485
84.07269537080195

As you can see, even when the input matrix is small, the measured error is still pretty big, and it keeps growing with the dimension. So far, I don't have any clue what's going on.

Any help is appreciated.


The `norm` function computes a vector norm: for a matrix argument, `norm(A, 1)` is the entrywise 1-norm (the sum of the absolute values of all entries, as if the matrix were flattened into a vector), not the operator 1-norm. If you scale by the full length (`length(Q)` instead of `size(Q, 2)`) or change to `opnorm`, you should get more satisfactory results.
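A minimal sketch of that suggestion, comparing the two scalings side by side for a single random matrix (the variable names here are just for illustration):

```julia
using LinearAlgebra

n = 200
Q = Matrix(qr(randn(n, n)).Q)
E = Q' * Q - I   # deviation from orthogonality

# norm(E, 1) is the entrywise 1-norm, summing n^2 entries,
# so dividing by only n * eps leaves a factor that grows with n.
vec_scaled = norm(E, 1) / (size(Q, 2) * eps(Float64))

# opnorm(E, 1) is the operator 1-norm (maximum absolute column sum),
# which is the quantity the O(eps) bound actually refers to.
op_scaled = opnorm(E, 1) / (size(Q, 2) * eps(Float64))

println("vector-norm scaling:   ", vec_scaled)
println("operator-norm scaling: ", op_scaled)
```

With the operator norm (or the corrected scaling), the quantity stays at a small O(1) constant instead of growing with the matrix dimension.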
