Can the following list of transformations turn a stable, well-conditioned algorithm into an unstable one?

Can the following set of transformations:

-(a - b) → b - a

x - x → 0.0

-(a + b) → (-a) - b

-(a + b) → (-b) - a

x / C → x * (1.0 / C)

(a * c) + (b * c) → (a + b) * c

(a + b) * c → (a * c) + (b * c)

a * (b / c) → (a * b) / c

a * (b * c) → (a * b) * c

a + (b + c) → (a + b) + c

x + C1 == C2 → x == C2 - C1

x + C1 != C2 → x != C2 - C1

change a well-conditioned and stable algorithm/code into an unstable algorithm? One can apply these transformations locally, on the order of ~10 times (i.e., certainly not more than 1000 times).
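
Note that even a single application of one of these rules can change the computed result. A minimal check in Julia (the values 0.1, 0.2, 0.3 are arbitrary; the exact printed digits assume IEEE double precision):

```julia
# One application of (a + b) * c → (a * c) + (b * c) changes the last bit
# of the result, even though the two forms are equal over the reals.
a, b, c = 0.1, 0.2, 0.3

(a + b) * c                 # 0.09000000000000001
a*c + b*c                   # 0.09
(a + b) * c == a*c + b*c    # false
```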

As a simple example: if these rules could be applied many more times than ~10, one could change n*dx into dx+dx+dx+…, which for a very large integer “n” becomes unstable due to numerical round-off error.
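
Here is a quick sketch of that n*dx example in Julia (the values of dx and n are arbitrary; the point is the drift of the summed result, not the exact digits):

```julia
# Summing dx a total of n times performs n-1 rounded additions,
# whereas n*dx performs a single rounding, so the sum drifts away.
function repeated_add(dx, n)
    s = 0.0
    for _ in 1:n
        s += dx
    end
    return s
end

dx, n = 0.1, 1_000_000
repeated_add(dx, n)                 # ≈ 100000.00000133288
n * dx                              # 100000.0 (one correctly rounded operation)
abs(repeated_add(dx, n) - n * dx)   # the accumulated drift, ≈ 1.3e-6
```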

It seems to me, however, that these transformations, if applied only ~10 times, cannot make a stable and well-conditioned code unstable. If somebody has a counter example, I would be interested. In this thread, please let's only discuss this particular problem/challenge and nothing else, so that we can reach a simple conclusion of either “yes” or “no”.

Here is another possible counter example if the rules included a rule 0 → x-x (which they don’t, but I am posting it here as a detailed analysis, as inspiration for finding an actual counter example).

The exact mathematical function we want to compute is f(x) = x, where “x” is in infinite precision. This function is well conditioned.

The algorithm 1 is alg1(x) = x, where “x” is computed in double precision.

The algorithm 2 is alg2(x) = (x+0x1.8p52) - 0x1.8p52 = round(x), again with “x” computed in double precision.

The algorithm 1 is stable, because alg1(x) = f(x+e) where |e| is on the order of 1e-16·|x| for all “x”.

The algorithm 2 is unstable, because alg2(x) = f(x+e), where e=0.4 for x=0.6, so “e” is not small.

So a code “x” that is stable and well-conditioned (algorithm 1) can be turned into (x+0x1.8p52) - 0x1.8p52 using the rule 0 → x-x (plus reassociation to get (x+C)-C from x+(C-C)), and the new code is well conditioned but unstable (algorithm 2).
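
A quick check of this example in Julia (alg1 and alg2 are just my names for the two algorithms above; 0x1.8p52 is the hex-float literal for 1.5·2^52 = 6755399441055744):

```julia
alg1(x) = x
alg2(x) = (x + 0x1.8p52) - 0x1.8p52   # the "magic constant" rounding trick

alg1(0.6)   # 0.6: here f(x+e) = x+e with e on the order of 1e-16
alg2(0.6)   # 1.0: f(x+e) = x+e = 1.0 forces e = 0.4, which is not small
```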

Now, there is actually no rule 0 → x-x above, so this would only be a counter example if I had included such a rule, which I didn’t (I took the rules from this page: FloatingPointMath - GCC Wiki). So the rules can only turn alg2 into alg1, i.e., an unstable algorithm into a stable one. Well, that’s fine, but in this question I am interested in turning a stable algorithm into an unstable one.

So I guess a counter example is just a modification of the above:

The exact mathematical function we want to compute is f(x) = round(x), where “x” is in infinite precision.

The algorithm 1 is alg1(x) = x, where “x” is computed in double precision.

The algorithm 2 is alg2(x) = (x+0x1.8p52) - 0x1.8p52 = round(x), again with “x” computed in double precision.

The algorithm 1 is unstable, because for x=0.6 we get alg1(0.6) = 0.6 while f(0.6) = 1.0, an error of 0.4, which is not small.

The algorithm 2 is stable, because alg2(x) = f(x+e) where |e| is on the order of 1e-16·|x| for all “x” (in fact alg2(x) computes round(x) exactly for |x| up to about 2^51).

So the rule x - x → 0 (together with reassociation) turned a stable algorithm, alg2, into an unstable one, alg1.
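
The same check in Julia for this second framing (Julia’s round, like the 0x1.8p52 trick, rounds ties to even, so the two agree exactly; the test range [0, 100) is arbitrary):

```julia
alg1(x) = x
alg2(x) = (x + 0x1.8p52) - 0x1.8p52

xs = 100 .* rand(10_000)                       # random points in [0, 100)
maximum(abs(alg1(x) - round(x)) for x in xs)   # ≈ 0.5: the error is not small
maximum(abs(alg2(x) - round(x)) for x in xs)   # 0.0: alg2 computes round(x) exactly
```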

But is round(x) well conditioned? I ask because the (stable) alg2(x) = (x+0x1.8p52) - 0x1.8p52 does not look well conditioned: we are subtracting two large numbers close to each other, and x1-x2 is ill-conditioned for x1 ≈ x2, per page 11 in:

web.mit.edu/ehliu/Public/Yelp/conditioning_and_precision.pdf

(Btw, these notes by Eric Liu are very nice.)

I think my confusion is this: (x+a)-a is mathematically equivalent to “x”, so both have the same condition number. At the same time, (x+a)-a where x << a is of the form x1-x2, which according to E. Liu’s notes is ill-conditioned for x1 ≈ x2.

So how can (x+a)-a be both well conditioned and ill-conditioned for x << a? If somebody could shed some light on this, I would really appreciate it.

P.S. I wonder if the answer is that (x+a)-a is ill-conditioned as a function of two variables “x” and “a”, but well-conditioned as a function of a single variable “x”.

P.P.S. I think that’s it. The relative condition number for f(x, a) = x+a-a = x, whose gradient is (1, 0), is:

k = ||(1, 0)|| * ||(x, a)|| / |f(x, a)| = sqrt(x^2 + a^2) / |x|

which for x << a is roughly a/|x|, i.e., large, so the function is ill-conditioned for small “x”; for x >> a it is close to 1, so there the function is well conditioned.

So I think the conclusion is that the function f(x) = x+a-a = x is well conditioned if taken as a function of “x” only, or if taken as a function of both “x” and “a” as long as x >> a. If, however, x << a, then the function becomes ill-conditioned for small “x”.
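
A numeric sanity check of this in Julia (my interpretation of the formula: the rounding of x + a acts like an input perturbation of size about eps()·||(x, a)||, so the relative forward error should be at most roughly k·eps()):

```julia
f(x, a) = (x + a) - a
k(x, a) = sqrt(x^2 + a^2) / abs(x)   # the condition number from the P.P.S.

# x << a: ill-conditioned, and the computed result is indeed badly off.
x, a = 0.6, 0x1.8p52
k(x, a)                      # ≈ 1.1e16
k(x, a) * eps()              # ≈ 2.5
abs(f(x, a) - x) / abs(x)    # ≈ 0.67, within the bound above

# x >> a: well conditioned, and the computed result is (here) exact.
x, a = 0x1.8p52, 0.6
k(x, a)                      # ≈ 1.0
abs(f(x, a) - x) / abs(x)    # 0.0
```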

You may find this useful: https://github.com/stevengj/18S096-iap17/blob/master/lecture6/Numerical-Analysis.ipynb


Thanks, that’s a nice notebook.

I haven’t updated this thread; my current understanding is that the problem is about neither conditioning nor stability (I can assume the problem is well-conditioned and the algorithm is stable). Rather, it seems to be about the numerical round-off errors of a stable algorithm for a well-conditioned problem.

This is not directly related to Julia, and I don’t want to waste people’s time, so I consider this resolved for the purpose of this discussion forum, and I will pursue these questions on another platform.