So the last one is the only one that is “correct”. Taking them in order: `10*1.1` promotes both operands to `Float64`, and `10.0 * 1.1` happens to round to exactly `11.0`. The rest behave the way they do because arithmetic between floats and rationals promotes both values to floats, whereas the comparison `1.1 ≤ 11//10` actually does the math exactly.
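For concreteness, here is a small sketch of the two behaviors (the exact expressions from the original question may differ — these are illustrative):

```julia
# Mixed float/rational *arithmetic* promotes the Rational to Float64,
# so rounding can make results coincide:
@show 10 * 1.1 == 11          # true: 10*1.1 rounds to exactly 11.0
@show 1.1 - 11//10 == 0.0     # true: 11//10 is first converted to Float64

# Mixed float/rational *comparisons* are exact, with no rounding:
@show 1.1 == 11//10           # false: Float64(1.1) is slightly greater than 11/10
@show 1.1 <= 11//10           # false, for the same reason
@show 1.1 > 11//10            # true
```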

Try `@edit 1.1 <= 11//10` (or `@less` for an in-terminal display).

You’ll see that comparisons between `Rational` and float values are done by calling `Base.decompose`, which represents each number exactly as `num * 2^pow / den`. This is a generalization of both `Rational` (adding the `pow` field) and IEEE floating-point values (adding the `den` field).
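You can poke at this decomposition yourself (note that `Base.decompose` is internal and unexported, so this is for exploration only):

```julia
# For a Float64, decompose returns (num, pow, den) with den == ±1 (the sign),
# so the value is exactly num * 2^pow:
num, pow, den = Base.decompose(1.1)
@show (num, pow, den)            # (4953959590107546, -52, 1)

# For a Rational, pow is 0 and den is the denominator:
@show Base.decompose(11//10)     # (11, 0, 10)

# Both forms represent the number exactly as num * 2^pow / den:
@assert big(num) // big(den) * (big(2)//1)^pow == Rational{BigInt}(1.1)
```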

To your point: yes, this means that we essentially compute `Rational{Int}(1.1)` (although in practice a `2^pow` multiplier is still included to handle very large or very small numbers). The comparison against `11//10` is then carried out (meticulously, and exactly), and we find that, in fact, `1.1e0 > 11//10`.
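You can reproduce that exact comparison by hand, converting the float to an exact big rational first (a sketch, using the standard `Rational{BigInt}` constructor, which is exact for any finite float):

```julia
# The exact rational value of the Float64 literal 1.1:
exact = Rational{BigInt}(1.1)
@show exact                      # 2476979795053773//2251799813685248

# Comparing the exact rational reproduces the float/rational comparison:
@assert exact > 11//10
@assert 1.1 > 11//10
```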

You can also see this at larger (but still finite) precision by inspecting `big(1.1) == 1.100000000000000088817841970012523233890533447265625`.

All floating-point numbers (aside from `NaN` and `±Inf`) are rational numbers: `Float64` is a particular subset of the rationals. Julia’s float–rational comparisons exploit this to perform an exact comparison.

(The main confusion is that the rational values are not what you might expect, because they often don’t correspond to a compact decimal representation. As @mikmoore noted above, `1.1::Float64` is exactly the rational number 1.100000000000000088817841970012523233890533447265625.)

I think that making this easily accessible would go a long way toward teaching users what to expect from floating-point arithmetic. (Having it be the default whenever a single `Float64` is printed in the REPL would be excessive, though.)

I mean, there is `show(big(1.1))`, which will display it and is readily available.
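For example (this relies on the fact that converting a `Float64` to `BigFloat` is exact, so no additional rounding occurs):

```julia
# big(1.1) converts the Float64 to BigFloat exactly, so show displays
# the true underlying binary value in decimal form:
show(big(1.1))
# 1.100000000000000088817841970012523233890533447265625
```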

I’m also a bit curious what the printing is actually based on. Is it just picking the shortest representation that is closer to the actual value than to the neighbouring float values, or what is the rule for printing?

This is where I end up when calling `show(1.0)`, though I’m a bit too lazy to try to figure out what is going on there right now…

The docstring for `Base.Ryu.writeshortest` is:

Convert a float value x into its “shortest” decimal string, which can be parsed back to
the same value. This function allows achieving the %g printf format. Note the 2nd method
allows passing in a byte buffer and position directly; callers must ensure the buffer has
sufficient room to hold the entire decimal string.

which seems to match my expectation: the shortest representation that is closer to the desired float than to any other float.
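The round-trip property is easy to check directly (a small sketch; `Base.Ryu.writeshortest` is internal, while `string` uses the same machinery for `Float64`):

```julia
# string() uses shortest-round-trip (Ryu) printing for Float64:
s = string(1.1)
@show s                             # "1.1"
@assert parse(Float64, s) === 1.1   # parsing the printed form recovers the same bits

# Calling the internal routine directly gives the same result:
@show Base.Ryu.writeshortest(1.1)   # "1.1"
```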

Trying to call this myself with more precision, I only get additional zeros, which I guess makes sense given the docstring description. I had initially expected it to “be exact” to the provided decimal precision and only after that pick the shortest representation.

For `Float64`/`Float32`/`Float16`, it prints the shortest decimal value that rounds back to the same value (when rounded to the nearest binary floating-point number).
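Concretely, many different decimal strings round to the same `Float64`, and printing picks the shortest member of that equivalence class (the longer literals below are illustrative; any string within half an ulp of the float works):

```julia
# All of these decimal strings round to the same Float64 bit pattern:
@assert parse(Float64, "1.1") === 1.1
@assert parse(Float64, "1.1000000000000001") === 1.1
@assert parse(Float64,
    "1.100000000000000088817841970012523233890533447265625") === 1.1

# The printed form is the shortest string that still round-trips:
@assert string(1.1) == "1.1"
```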

In particular, it uses a state-of-the-art algorithm called Ryū. (Previously it used another algorithm, called Grisu, that did the same thing.) As described in the Ryū paper, the criteria for display are: the output must parse back to exactly the same floating-point value (information preservation), it must be the shortest such decimal string, and among candidates of that length it must be the one closest to the true value (correct rounding).

and many others… (Note, however, that `BigFloat` printing uses a separate algorithm implemented in the GNU MPFR library, and IIRC it is not always the shortest decimal representation.)