# In numerical computation, how could increasing the number of bits of precision decrease accuracy?

In this PolynomialRoots.jl GitHub issue, the accuracy of the result is relatively OK when the MPFR/`BigFloat` precision is around (just!) twenty bits, but suddenly goes sour when the number of bits is increased, and stays bad as the precision is raised further.

How could this possibly be the case?

Usually it means that youâre not computing what you think you are, and some part of the computation is limited by the precision of your original data.

But weâre not just talking about accuracy failing to improve as precision is increased. The accuracy actually deteriorates dramatically!
For example, in the first example in my first comment on the issue, at precision of 25 bits, the maximum error is `8.5e-7`, but thereafter the error is huge (for 26 bits of precision itâs `3.5e16`, note that the exponent is positive).

Looking at your code, it seems that you are:

1. generating random `ComplexF64` polynomial coefficients `c`
2. computing roots `r` via `c -> BigFloat -> roots -> ComplexF64`
3. computing new polynomial coefficients `c2` by multiplying the monomials `(x - r[i])` from the roots `r`
4. comparing `c - c2`

To quote my comment above, youâre not computing what you think you are.

`c` and `c2` should not match, so I don't understand the point of this "test". (As soon as you round the roots `r` to `ComplexF64`, you dramatically change the coefficients of the corresponding polynomial, and then you introduce additional errors from roundoff when you multiply the monomials together.)

(Realize also that the roots of a polynomial are extremely sensitive to the coefficients; the sensitivity increases exponentially with the degree. So it's not surprising to me that you are seeing a huge mismatch here.)
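This sensitivity is famously illustrated by Wilkinson's polynomial, the monic polynomial with roots 1 through 20. Here is a minimal Base-Julia sketch (the helper `coefs_from_roots` is mine, not from any package): perturbing the `x^19` coefficient, which is exactly -210, by Wilkinson's classic `2^-23` moves the polynomial's value at the root `x = 16` from exactly zero to exactly `-2^53`, about `-9.0e15`.

```julia
# A minimal sketch (the helper name `coefs_from_roots` is mine): build the
# monic polynomial with roots 1..20 (Wilkinson's polynomial) exactly, then
# perturb the x^19 coefficient by 2^-23 and evaluate at a former root.
function coefs_from_roots(rs)
    c = [one(first(rs))]                      # coefficients in ascending order
    for r in rs
        c = [zero(r); c] .- [c .* r; zero(r)] # multiply by the monomial (x - r)
    end
    return c
end

c = coefs_from_roots(big(1):big(20))          # exact BigInt coefficients
c[20]                                         # coefficient of x^19: -210

cp = Rational{BigInt}.(c)
cp[20] -= 1 // big(2)^23                      # relative perturbation ~ 5.7e-10

evalpoly(big(16)//1, Rational{BigInt}.(c))    # exactly 0: x = 16 is a root
evalpoly(big(16)//1, cp)                      # exactly -2^53, about -9.0e15
```

Since `p'(16)` is only about `3.1e13`, the first-order estimate of the root shift, `|p(16)| / |p'(16)|`, is already in the hundreds; this tiny perturbation famously turns several of the larger roots into complex pairs.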


A more appropriate test would seem to be:

1. generate random `ComplexF64` polynomial coefficients `c`
2. convert to `BigFloat` (an exact conversion) and find roots `r = roots(big.(c))`
3. construct new `BigFloat` polynomial coefficients `c2` by multiplying the monomials
4. compare to `c`

This should match if you have sufficiently high precision. However, I find that it gives wildly wrong results with higher and higher probability as the degree increases:

```julia
using PolynomialRoots, DynamicPolynomials

function poly(roots)
    @polyvar x
    d = length(roots)
    f = prod(x - xi for xi in roots)
    [coefficient(f, x^i) for i in 0:d]
end

d = 5
c = [randn(ComplexF64); randn(ComplexF64, d - 1) * 1e-4; 1]
setprecision(8192)
r = roots(big.(c), polish=true)
c2 = ComplexF64.(poly(r))
c2 - c
```

About 90% of the time this (correctly) gives

```
6-element Vector{ComplexF64}:
 0.0 + 0.0im
 0.0 + 0.0im
 0.0 + 0.0im
 0.0 + 0.0im
 0.0 + 0.0im
 0.0 + 0.0im
```

but if we increase the degree to `d = 6` it is wildly wrong about 85% of the time (about 15% of the time it correctly gives zero), giving answers for `c - c2` like

```
7-element Vector{ComplexF64}:
 3.9439141771536586e6 - 7.66389580196865e6im
 2.2082351722049564e6 - 2.857913579123853e6im
 469737.1655469684 - 420674.6692220386im
 50125.96985319289 - 30578.538953102627im
 2872.684011890461 - 1098.2968892416864im
 84.4854525989318 - 15.599756742987397im
 0.0 + 0.0im
```

and if you check with `evalpoly` you'll find that these aren't roots at all; indeed, `p(r)` seems to have a constant(!!) offset:

```julia
julia> ComplexF64.(evalpoly.(r, Ref(c)))
6-element Vector{ComplexF64}:
 -187275.93508116304 + 125277.6562628678im
 -187275.93508116304 + 125277.6562628678im
 -187275.93508116304 + 125277.6562628678im
 -187275.93508116304 + 125277.6562628678im
 -187275.93508116304 + 125277.6562628678im
 -187275.93508116304 + 125277.6562628678im
```

In fact, if you look at the "roots" they are all the same, which is clearly wrong:

```julia
julia> ComplexF64.(r)
6-element Vector{ComplexF64}:
 -0.4426969318472816 - 6.30885464214318im
 -0.4426969318472816 - 6.30885464214318im
 -0.4426969318472816 - 6.30885464214318im
 -0.4426969318472816 - 6.30885464214318im
 -0.4426969318472816 - 6.30885464214318im
 -0.4426969318472816 - 6.30885464214318im
```

Conclusion: PolynomialRoots appears to be returning garbage roots some of the time. (This also falls under the category of "you aren't computing what you think you are".)


If I understand correctly, this is the main mistake in my experiment? It seems easy to fix, though: to prevent the rounding, I just remove the cast to `ComplexF64`, and to prevent additional errors when multiplying the monomials together, I just temporarily increase the `BigFloat` precision?

When I repeat the experiment with those corrections, the results still exhibit the same peculiar pattern.

Using this code:

```julia
using Printf, DynamicPolynomials, PolynomialRoots

function poly_coefs_from_roots(roots::AbstractVector{<:Complex{T}}) where {T <: AbstractFloat}
    local old_prec = precision(T)
    setprecision(T, 64 * old_prec)

    @polyvar x
    local f = prod(x - xi for xi in roots)
    local ret = [coefficient(f, x^i) for i in 0:length(roots)]

    setprecision(T, old_prec)

    ret
end

random_poly_coefs(d) =
    Complex{BigFloat}[randn(ComplexF64); randn(ComplexF64, d - 1) * 1e-4; 1]

max_coef_error(coefs) =
    maximum(abs.(coefs - poly_coefs_from_roots(roots(coefs, polish = true))))

approx_sprintf(n::AbstractFloat) =
    @sprintf("%.2e", n)

function experiment(d, mantissa_lengths)
    local coefs = random_poly_coefs(d)

    for nbits in mantissa_lengths
        setprecision(nbits)
        local max_err = max_coef_error(coefs)
        println(nbits, "  ", approx_sprintf(max_err))
    end

    nothing
end
```
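One caveat about `poly_coefs_from_roots` above: if anything between the two `setprecision` calls throws, the global precision is left at 64 times the old value. Base's do-block method of `setprecision` avoids this by restoring the previous precision on exit. A minimal sketch of that temporary-precision idiom:

```julia
# setprecision(f, BigFloat, prec) runs f at the given precision and restores
# the previous global precision afterwards, even if f throws.
x = setprecision(BigFloat, 4096) do
    sqrt(big(2))         # computed with a 4096-bit significand
end

precision(x)             # 4096: the result keeps the precision it was created with
precision(BigFloat)      # the ambient precision is unchanged (256 by default)
```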

These are some example results:

```julia
julia> experiment(20, 19:31)
19  8.39e-06
20  3.49e-06
21  1.89e-06
22  1.08e-06
23  4.16e-07
24  2.70e-07
25  1.45e-07
26  5.80e+07
27  6.55e+16
28  2.05e+18
29  6.16e+18
30  6.24e+18
31  6.80e+18

julia> experiment(20, 19:31)
19  4.01e-06
20  2.17e-06
21  1.10e-06
22  4.27e-07
23  1.86e-07
24  3.86e+07
25  4.82e-08
26  9.67e+10
27  1.33e+11
28  1.80e+11
29  1.42e+11
30  9.14e+10
31  9.59e+10

julia> experiment(20, 19:31)
19  5.51e-06
20  2.16e-06
21  1.27e-06
22  7.19e-07
23  3.68e-07
24  2.19e-07
25  1.24e-07
26  2.17e+13
27  6.89e+15
28  1.67e+16
29  7.63e+16
30  7.93e+16
31  1.37e+17

julia> experiment(20, 19:31)
19  4.10e-06
20  2.76e-06
21  1.11e-06
22  5.48e-07
23  2.46e-07
24  5.76e+07
25  1.45e+12
26  2.35e+13
27  4.89e+13
28  2.00e+14
29  4.19e+14
30  8.40e+14
31  1.28e+15
```

Thereâs a new variation on the established pattern in the second example, but the main pattern is the same as before.

This seems to be the most significant part:


Yeah, but my point with this whole thing is that the accuracy is, for some reason, actually good when one does `setprecision(20)` during `roots`.

Iâve looked at the code in PolynomialRoots and some functions are very long and convoluted. The sort of which I would hesitate to use without formal proof (at least in space probes).

Luckily, at the cost of run time, this function can be made robust by calling `evalpoly` internally on the found roots and filtering out glaring problems, or even throwing an exception.
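For instance, one could imagine a wrapper along these lines (all names here, such as `checked_roots` and `rtol`, are hypothetical, not PolynomialRoots API): re-evaluate the polynomial at each returned root and reject results whose residual is implausibly large for the coefficient scale.

```julia
# Hypothetical sketch: `solver` is any root-finding function mapping a
# coefficient vector (ascending order) to a vector of roots; `rtol` is a
# crude residual tolerance. This is a sanity filter, not a rigorous bound.
function checked_roots(solver, c::AbstractVector; rtol = 1e-8)
    r = solver(c)
    scale = maximum(abs, c)
    for ri in r
        resid = abs(evalpoly(ri, c))
        # allow for the growth of |p| away from the unit disk
        bound = rtol * scale * max(one(abs(ri)), abs(ri))^(length(c) - 1)
        resid <= bound || error("suspect root $ri: |p(r)| = $resid")
    end
    return r
end
```

With PolynomialRoots this might be called as `checked_roots(c -> roots(c, polish = true), big.(c))`.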

This is the usual trade-off between allowing a quick path and doubling down on safety; the best solution, IMHO, is to have both paths and default to the safe path for "new" users.

Perhaps, if a Julia flag is set, these functions could log an INFO- or WARN-level message to remind users of the quicker versions.

Probably just luck? `setprecision(20)` doesn't work for my `d = 6` example above, but in any case my example shows that there is some randomness in whether the problem occurs.

Clearly, thereâs a bug here. Itâs usually a distraction, in my experience, to try to perform too much mathematical analysis of a bug. Better to just fix the bug.

I submitted an issue with a simple test case, in which the incorrect results are obtained for `ComplexF64` as well (`BigFloat` is just a distraction here): sometimes returns bogus duplicate roots · Issue #25 · giordano/PolynomialRoots.jl · GitHub


Good that you havenât looked at the original Fortran code because thatâs 50% longer


But I trust the FORTRAN code… it usually comes from the time of Real Programmers.


Ah yes, because Juliaâs intense testing of old numerical libraries written in Fortran has never turned up any bugs


In the bogus cases found by Steven above, it may be an error in the Julia implementation, because the Fortran library gives the correct result (yes, the code is a mess, but the starting point wasn't much better, and a mistake in the translation is almost unavoidable).

But @nsajko started this discussion because he was looking into another issue where the ability to use arbitrary precision may be beneficial, except the solver goes haywire at some point. It'd be great if that were solved by finding the culprit in the other issue.
