Why is there a discrepancy in the value of sqrt(2) from Wolfram Alpha and Julia?

On the Wolfram Alpha website, I get this approximation for sqrt(2):

1.41421356237309504880168872420969807856967187537694...
With the Julia REPL, I get

julia> convert(BigFloat, sqrt(2))
1.4142135623730951454746218587388284504413604736328125

There is a difference after the fifteenth decimal place.

Is this due to differences in how sqrt(2) is computed, or due to display precision, or something else entirely?

I would have expected to be able to check one reputable source against another for a never-ending decimal expansion.


julia> sqrt(big(2))

When you convert matters: sqrt(big(2)) computes the square root at full BigFloat precision, whereas convert(BigFloat, sqrt(2)) computes a 64-bit square root first and only then widens it, which adds no new correct digits.
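The distinction can be checked directly; a minimal sketch:

```julia
# Root first, then widen: the widening adds no new information.
widened = convert(BigFloat, sqrt(2))
# Widen first, then root: computed at the full BigFloat precision.
precise = sqrt(big(2))

widened == sqrt(2)   # true:  conversion preserved the Float64 value exactly
widened == precise   # false: they part ways after about 16 significant digits
```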


Wow! I am reassured. Thanks.

Specifically, 1.4142135623730951 is the shortest decimal representation that rounds to the closest possible value to \sqrt{2} using standard 64-bit floating point.

Converting that 64-bit floating point number to a BigFloat doesn’t actually change its numerical value, but it does change how many digits are printed in the decimal representation you see. It’s the same as if you asked for a bunch of decimal digits from @printf:

julia> using Printf

julia> @printf("%.52f", sqrt(2))

64-bit floating point only has about 15-17 decimal digits of precision, which is why the shortest decimal that rounds back to this value is as long as it is. You can see that it is the closest possible value by comparing it and its nearest representable neighbors against a higher-precision reference:

julia> nextfloat.(sqrt(2), -1:1)
3-element Vector{Float64}:
 1.4142135623730949
 1.4142135623730951
 1.4142135623730954
julia> Float64.(nextfloat.(sqrt(2), -1:1) .- sqrt(big(2)))
3-element Vector{Float64}:

You can expect the last printed digit, or even the last few, not to match the true decimal expansion, and this is not Julia-specific and not a bug per se: Float64 carries at most 15-17 correct significant decimal digits (more precisely, exactly 53 significant bits after rounding), Float32 fewer, so beyond that point any digit, binary or decimal, can look off. For some numbers arbitrarily many trailing digits differ from the true expansion.
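The 15-17 digit figure can also be seen from the round-trip guarantee: the shortest printed decimal always parses back to the identical Float64, while dropping a digit lands on a different neighboring value. A small sketch:

```julia
x = sqrt(2)
s = string(x)            # shortest round-tripping decimal: 17 significant digits
parse(Float64, s) === x  # true: the printed form identifies x exactly
parse(Float64, "1.414213562373095") === x  # false: one digit fewer hits a neighbor
```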

I myself once learned 380 digits of π, when I was a crazy high-school kid. My never-attained ambition was to reach the spot, 762 digits out in the decimal expansion, where it goes “999999”, so that I could recite it out loud, come to those six 9’s, and then impishly say, “and so on!”

This sequence of six nines is sometimes called the “Feynman point”, after physicist Richard Feynman.

From the correct value of π at Wikipedia:
… 4999999837

julia> setprecision(640*ceil(Int, log(10)/log(2))); string(big(pi))[762:772]  # some miscalculation in finding the binary cut-off point for the decimal digits, but the digits shown are correct
"34999999837"

You can see that if you cut the expansion just after the run of 9s, dropping the trailing 8, and round the last kept digit up, the carry propagates through every 9 and you get … 35000000 …
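The same carry can be triggered with an ordinary rounding call, using the quoted tail digits as a toy value:

```julia
# Rounding just inside the run of 9s: the carry propagates through all of them.
x = 0.4999999837
round(x, digits=4)   # 0.5: every 9 turned over
```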

Nothing fundamentally different applies in binary. You could get arbitrarily long runs of 1s in the binary expansion of π or sqrt(2) or other irrationals.* Or, in decimal, arbitrarily long runs of 9s, as shown.

I’m trying to hit that right spot, but BigFloat is binary floating point, so the decimal cut-off is hard to land exactly:

julia> setprecision(639*ceil(Int, log(10)/log(2))); string(big(pi))[762:772]
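A rule of thumb for sizing that precision: n decimal digits need about n*log2(10) ≈ 3.32*n bits, plus a few guard bits. A sketch (the 800-digit target is my own choice, enough to cover position 762):

```julia
# n correct decimal digits need roughly n*log2(10) bits of binary precision.
wanted = 800                                               # decimal digits wanted
setprecision(BigFloat, ceil(Int, wanted * log2(10)) + 16)  # plus a few guard bits
s = string(big(pi))
occursin("999999", s)   # true: the six-nines run at digit 762 is inside the window
```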

* π is conjectured, but not known, to be a normal number.

For any normal number you will get arbitrarily long runs of any digit, and even all known literature ever written encoded somewhere in the digits of π, but only if π is actually proven normal. I don’t know whether normality is known for the square root of 2; for all I know it could even be proven NOT to be normal. Still, you could get very long runs.

We do have Dec64 and Dec128, but no arbitrary-precision decimal floating point that I know of, and it wouldn’t help here anyway…


So don’t go memorizing more than 64 decimal digits of Pi