I don’t really think there’s much choice here if you want a coherent design. Suppose we made
1//5 == 1/5
Presumably we’d also want that to be true for different types, so we’d also have:
1//5 == 1f0/5f0
1//5 == big(1)/big(5)
Transitivity of equality then implies that
1f0/5f0 == 1/5 == big(1)/big(5)
But those are clearly different values as they have different decimal expansions:
julia> using Printf

julia> @printf("%0.70f\n", 1f0/5f0)
0.2000000029802322387695312500000000000000000000000000000000000000000000
julia> @printf("%0.70f\n", 1/5)
0.2000000000000000111022302462515654042363166809082031250000000000000000
julia> @printf("%0.70f\n", big(1)/big(5))
0.2000000000000000000000000000000000000000000000000000000000000000000000
Making these equal while they have different decimal values seems pretty questionable.
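For reference, Julia's existing == already distinguishes all three values; a quick check (plain script form of the comparisons above):

```julia
# Julia's actual equality: each of these is a distinct binary value,
# so none of the cross-type comparisons hold.
println(1f0/5f0 == 1/5)            # false: Float32 vs Float64
println(1/5 == big(1)/big(5))      # false: Float64 vs BigFloat
println(1f0/5f0 == big(1)/big(5))  # false: Float32 vs BigFloat
```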
Another way to put this is that if equality is going to be an equivalence relation (hard to argue with that), then we need to pick equivalence classes for all numerical values. The classes Julia uses are the same ones that mathematics itself uses: two numerical values are only equal if they represent the same number. Since the rational value 1//5 represents the fraction 1/5, it can only be equal to other representations of that value, and since 1/5 cannot be represented in binary floating-point, no floating-point value is equal to 1//5.
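One way to see that 1/5 cannot be represented in binary floating-point is to convert the Float64 literal 0.2 to an exact rational, which recovers the value it actually stores:

```julia
# The Float64 literal 0.2 is not the fraction 1/5; converting it to an
# exact rational shows the value it really represents.
r = Rational{BigInt}(0.2)
println(r)            # 3602879701896397//18014398509481984
println(1//5 == 0.2)  # false: the two mathematical values differ
```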
There are other equivalence classes one could pick. For example, you could make Float64 special and compare everything based on whether it maps to the same Float64 or not. In particular, that would give you 1//5 == 1/5, since Float64(1//5) is 1/5. For lower-precision binary types like Float32 this wouldn't change the notion of equality, since they are exactly embedded in Float64. But for higher-precision types like BigFloat, this would mean that very many values would fall into the same equivalence class. For example, we currently have:
julia> x = big(1)/big(10)
0.1000000000000000000000000000000000000000000000000000000000000000000000000000002
julia> y = x + 0.5*eps(1/10)
0.1000000000000000069388939039072283776476979255676269531250000000000000000000002
julia> x != y
true
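The reason a Float64-based equality would conflate these two values is that both round to the same Float64; a quick check, reconstructing x and y from above:

```julia
x = big(1)/big(10)
y = x + 0.5 * eps(1/10)

println(x == y)                    # false: exact comparison distinguishes them
println(Float64(x) == Float64(y))  # true: both round to the same Float64
```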
But if we defined equality in terms of Float64, then we'd have x == y, since Float64(x) == Float64(y). Maybe that's not so bad: it's a bit weird for two numbers with so many differing significant digits to be equal, but maybe for binary float types that people think of as "approximate" it could be tolerable. But it becomes far less palatable if you consider "exact" types like rationals. Consider:
julia> a = 1//5
1//5
julia> b = 3602879701896397//18014398509481984
3602879701896397//18014398509481984
Obviously these are different and non-equal, right? But if we define equality in terms of Float64 equivalence classes, these values have to be considered equal since they map to the same Float64:
julia> Float64(a) == Float64(b)
true
That’s pretty unacceptable for a rational number type. What if we patch up this badness by exempting rational number types from this definition of equality? We’d still be in trouble then, since we’d have a == 0.2 and b == 0.2 (those comparisons are done via conversion to Float64), yet we’d have a != b. In other words, transitivity of equality would fail.
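The transitivity failure can be made concrete with a small sketch. Here f64_eq is a hypothetical comparison (not anything Julia actually defines) that equates values through their nearest Float64, while rationals keep exact equality among themselves:

```julia
# Hypothetical: compare numeric values via their nearest Float64.
# (This is NOT Julia's actual ==; it's the flawed scheme under discussion.)
f64_eq(x, y) = Float64(x) == Float64(y)

a = 1//5
b = 3602879701896397//18014398509481984

println(f64_eq(a, 0.2))  # true
println(f64_eq(b, 0.2))  # true
println(a == b)          # false: exact rational equality distinguishes them
```

So a would equal 0.2, and 0.2 would equal b, yet a and b would be unequal: transitivity breaks.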
The bottom line is this: if you want a coherent notion of equality, you need to pick consistent equivalence classes of numbers and apply that equivalence relation to all different representations of numbers. If you try to mix and match different equivalence relations for different types, you end up with failures of transitivity. While there are many possible equivalence classes you could pick, the only choice that matches mathematics itself is for each numerical value to be in its own equivalence class. Which is exactly what we do.