# Why is there a discrepancy in the value of sqrt(2) from Wolfram Alpha and Julia?

On the Wolfram Alpha website, I get this approximation for sqrt(2)

1.4142135623730950488016887242096980785696718753769480731766797379...

With the Julia REPL, I get

sqrt(2)
1.4142135623730951


and

convert(BigFloat, sqrt(2))
1.4142135623730951454746218587388284504413604736328125


There is a difference after the fifteenth decimal place.

Is this due to differences in how sqrt(2) is computed, or due to display precision, or something else entirely?

I would have expected to be able to check one reputable source against another for a never-ending decimal expansion.

Thanks.

julia> sqrt(big(2))
1.414213562373095048801688724209698078569671875376948073176679737990732478462102


When you convert matters.
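
For an independent third opinion outside both Julia and Wolfram Alpha, you can reproduce the same digits with Python's `decimal` module, which computes a correctly rounded square root at any requested precision (a cross-check sketch, not what either system uses internally):

```python
from decimal import Decimal, getcontext

getcontext().prec = 80       # ask for 80 significant decimal digits
root2 = Decimal(2).sqrt()    # correctly rounded at that precision
print(root2)
# starts 1.4142135623730950488016887242096980785696718753769480731766797379...
# matching the Wolfram Alpha digits
```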

16 Likes

Wow! I am reassured. Thanks.

Specifically, 1.4142135623730951 is the shortest decimal representation that rounds to the closest possible value to \sqrt{2} using standard 64-bit floating point.

Converting that 64-bit floating point number to a BigFloat doesn't actually change its numerical value, but it does change how many digits are printed in the decimal representation you see. It's the same as if you asked for a bunch of decimal digits from @printf:

julia> using Printf

julia> @printf("%.52f", sqrt(2))
1.4142135623730951454746218587388284504413604736328125
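
The same two views of one number can be reproduced outside Julia, for example with Python's `math` and `decimal` modules (a sketch: `repr` prints the shortest round-tripping decimal, while `Decimal` exposes the exact binary value):

```python
import math
from decimal import Decimal

x = math.sqrt(2)   # IEEE sqrt is correctly rounded, so this is the same binary64 value
print(repr(x))     # shortest decimal that round-trips: 1.4142135623730951
print(Decimal(x))  # the exact value of that binary64 number:
                   # 1.4142135623730951454746218587388284504413604736328125
```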


64-bit floating point has only about 15–17 significant decimal digits of precision, which is why the shortest decimal that rounds to this value is the length it is. You can see that it is the closest possible value by comparing it (and its nearest representable neighbors) to that higher-precision reference:

julia> nextfloat.(sqrt(2), -1:1)
3-element Vector{Float64}:
1.414213562373095
1.4142135623730951
1.4142135623730954

julia> Float64.(nextfloat.(sqrt(2), -1:1) .- sqrt(big(2)))
3-element Vector{Float64}:
-1.2537167179050217e-16
9.667293313452913e-17
3.187175380595604e-16
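
The same neighbor check can be done outside Julia; a Python sketch (using `math.nextafter`, available in Python 3.9+, in place of `nextfloat`):

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 60
ref = Decimal(2).sqrt()    # high-precision reference value

x = math.sqrt(2)
neighbors = [math.nextafter(x, -math.inf), x, math.nextafter(x, math.inf)]
errors = [abs(Decimal(v) - ref) for v in neighbors]
print([float(e) for e in errors])   # the middle error (sqrt(2) itself) is smallest
```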

12 Likes

You can expect the last printed digit (or even the last few) not to match the true value, and that's not Julia-specific, nor a bug per se: a Float64 carries exactly 53 significant bits after rounding, which corresponds to at most 15–17 correct significant decimal digits (fewer for Float32), and any digit beyond that, in binary or decimal, can look off. For some numbers, arbitrarily many trailing digits can be wrong.
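
The 15–17 digit rule is easy to check directly; a small Python sketch (0.1 + 0.2 is just a convenient example value):

```python
x = 0.1 + 0.2              # exact binary64 value is 0.3000000000000000444089...
print(repr(x))             # 0.30000000000000004 (17 significant digits)

# 17 significant digits always round-trip a binary64 value:
assert float(format(x, '.17g')) == x

# but 15 digits need not: '0.3' parses back to a different binary64
print(format(x, '.15g'))   # 0.3
assert float(format(x, '.15g')) != x
```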

I myself once learned 380 digits of π, when I was a crazy high-school kid. My never-attained ambition was to reach the spot, 762 digits out in the decimal expansion, where it goes "999999", so that I could recite it out loud, come to those six 9's, and then impishly say, "and so on!"

This sequence of six nines is sometimes called the "Feynman point", after physicist Richard Feynman.

From the correct value of π at Wikipedia:
… 4999999837

julia> setprecision(640*ceil(Int, log(10)/log(2))); string(big(pi))[762:772]  # the binary cut-off for 640 decimal digits is only roughly estimated here, but the digits shown are correct
"34999999837"


You can see that if the precision ended right at the 98 (which is correct), cutting off the trailing 8 and rounding the preceding digit up would give … 35000000 … instead.

Nothing fundamentally different applies in binary: you could get arbitrarily long runs of repeating 1s in π or sqrt(2) or other irrationals.* Or, in decimal, arbitrarily long runs of 9s, as shown.

I'm trying to hit the right spot, but BigFloat is a binary arbitrary-precision floating-point type:

julia> setprecision(639*ceil(Int, log(10)/log(2))); string(big(pi))[762:772]
"34999999833"
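
If you want the decimal digits directly, without worrying about where to cut the binary precision, you can compute π with integer-only arithmetic. A Python sketch using Machin's formula (chosen here for illustration; it is not what BigFloat/MPFR uses) reproduces the six 9s at decimal places 762–767:

```python
def arctan_inv(x, one):
    """arctan(1/x) scaled by `one`, via the alternating Taylor series."""
    power = one // x          # one / x**(2k+1), starting at k = 0
    total = power
    x2 = x * x
    divisor = 1
    sign = 1
    while power:
        power //= x2
        divisor += 2
        sign = -sign
        total += sign * (power // divisor)
    return total

def pi_digits(n):
    """pi to n decimal places, as the digit string '3141...' (no decimal point)."""
    one = 10 ** (n + 15)      # 15 guard digits absorb truncation error
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
    pi = 4 * (4 * arctan_inv(5, one) - arctan_inv(239, one))
    return str(pi)[: n + 1]   # '3' followed by n decimal digits

s = pi_digits(800)
print(s[760:771])   # decimal digits 760-770: 34999999837, the Feynman point
```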


* π is conjectured, but not known, to be a normal number.

For any normal number you will get arbitrarily long digit sequences, and even all known literature ever written is encoded somewhere in π, but only if π is proved normal. I don't know whether normality is known for the square root of 2; for all I know it's proven to NOT be normal. Still, you can get very long runs regardless.

We do have Dec64 and Dec128, but no arbitrary-precision decimal floating point that I know of, and it wouldn't help here anyway…
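
As an aside, Python's `decimal` module is an arbitrary-precision decimal floating point, and it illustrates why decimal would not help: sqrt(2) is irrational in any base, so any finite precision still cuts off and rounds the expansion (a sketch):

```python
from decimal import Decimal, getcontext

getcontext().prec = 40
r = Decimal(2).sqrt()   # correctly rounded to 40 decimal digits
print(r)

getcontext().prec = 80  # square it with enough precision to be exact
print(r * r - 2)        # tiny, but not zero: no finite decimal squares to 2
```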

1 Like

So don't go memorizing more than 64 decimal digits of Pi

7 Likes