`x==y` is true but `π^x == π^y` is false

A simple question for you CS experts:

function foo()
    x::Float32 = 0.5
    y::Float64 = 0.5
    return x==y, π^x == π^y
end

julia> foo()
(true, false)

Why is x==y true given the different numeric types? And if x==y is true, why is π^x == π^y false? The example shows the importance of numerical details, and I want to better understand the issue.

Thanks in advance.

π is of the special Irrational type in Julia, so it gets converted to a different floating-point type in each of π^x and π^y. Since Float32(π) and Float64(π) are different values, the exponentiations give different results.

You may check Float64(π)^x vs Float64(π)^y; these must give the same result.
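To make this concrete, here is a quick REPL sketch of both points (the printed values are what the standard conversions produce):

```julia
# π is Irrational, so it adopts the precision of the other operand.
Float32(π)                # 3.1415927f0
Float64(π)                # 3.141592653589793
Float32(π) == Float64(π)  # false: the two roundings of π differ

# With the base fixed to Float64, the exponent 0.5 is exact in both
# Float32 and Float64, so both powers agree:
Float64(π)^Float32(0.5) == Float64(π)^Float64(0.5)  # true
```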


Also, because 0.5 can be represented exactly in binary, x and y are identical.

If you instead do:

x = Float64(1/3)
y = Float32(1/3)

you will find that x == y evaluates to false, since 1/3 cannot be represented exactly in binary.
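A minimal sketch contrasting the two cases:

```julia
# 0.5 = 2^-1 is a power of two, so both precisions store it exactly:
Float32(0.5) == Float64(0.5)   # true

# 1/3 has no finite binary expansion, so each precision rounds it
# differently, and the comparison (done after promoting the Float32
# value to Float64) fails:
Float32(1/3) == Float64(1/3)   # false
Float64(Float32(1/3))          # 0.3333333432674408, not 0.3333333333333333
```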


Thanks to @Vasily_Pisarev and @BLI for the quick responses and informative answers!

I wish to follow up with another example which confused me again.

function bar()
    x::Float32 = 1/3
    y::Float64 = 1/3
    a::Float32 = 1/2
    b::Float64 = 1/2
    return  x^a == x^b, y^a == y^b
end

julia> bar()
(false, true)

In this example I use 1/3 as the base instead of π, thinking that it would not get the special treatment mentioned by @Vasily_Pisarev. Per @BLI 's post I understand that a and b are the same. Then why is x^a == x^b false?
And why does y^a == y^b become true when the base becomes Float64?

Waiting to be enlightened again.

They are not the same. Just look at the output:

julia> (Float32(1/2)^Float32(0.5),Float32(1/2)^Float64(0.5))
(0.70710677f0, 0.7071067811865476)

In the second, the Float32 base is promoted to Float64, so the result is the same as 0.5^0.5


Generally speaking, when you deal with floating-point numbers you more likely want an approximate comparison, for example with isapprox, than an exact comparison with ==, precisely because most real numbers have no exact floating-point representation, and operations on these numbers can accumulate rounding errors.
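For instance, a small illustration of why == is fragile and isapprox is usually what you want:

```julia
0.1 + 0.2 == 0.3          # false: the two sides carry different rounding errors
isapprox(0.1 + 0.2, 0.3)  # true
0.1 + 0.2 ≈ 0.3           # true; ≈ is the infix form of isapprox
```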


Arguments in x^y are promoted to a common type. In the case of x::Float32 and y::Float64, that's Float64. So Float32(1/3)^Float32(0.5) is computed in single precision, while Float32(1/3)^Float64(0.5) is converted to Float64(Float32(1/3))^Float64(0.5) and computed in double precision.
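A sketch of that promotion at work, reproducing the bar() result step by step:

```julia
promote_type(Float32, Float64)  # Float64

x = Float32(1/3)
x^Float32(0.5)                    # computed entirely in single precision
x^Float64(0.5)                    # computed as Float64(x)^0.5, in double precision
x^Float32(0.5) == x^Float64(0.5)  # false: two precisions, two roundings

# A Float64 base forces Float64 either way, so the results match:
y = Float64(1/3)
y^Float32(0.5) == y^Float64(0.5)  # true
```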


Thanks again to everyone for the quick and informative response. Highly appreciated!