Julia is doing two things here. First, it recognizes that the floating point value
0.02 — that is,
0.0200000000000000004163336342344337026588618755340576171875 — is the closest representable value to
1//50, and that the start and stop can also be expressed over a common denominator with the step. Second, it finds the closest twice-precision (effectively 128-bit) floating point representation for this exact fraction.
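Both steps can be seen at the REPL. This is a sketch, not the exact algorithm Base uses — `rationalize` is just a convenient way to illustrate the snap, and the precise type parameters of the range vary across Julia versions:

```julia
# Step 1: 0.02 "snaps" to the simple fraction it most closely represents.
rationalize(0.02)        # 1//50

# Step 2: the resulting range doesn't store plain Float64 endpoints;
# it's a StepRangeLen backed by twice-precision components.
typeof(0.1:0.1:0.9)      # StepRangeLen{Float64, Base.TwicePrecision{Float64}, ...}
```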
It’s not just high precision; it’s also a rationalization of the number you gave it. I should note that we are snapping, but we’re only changing the exact values by less than
eps(x)/2. In other words, if you converted this value back to a normal Float64, you’d get the same number you started with. This is why I talked about writing the floating point literals yourself. If you write them out, you’re writing some decimal fraction — something with a power of 10 in the denominator — that can be rationalized in this manner. If you instead do
0.02*87, then you’re also multiplying that trailing
...00004163336... in the full expansion of
0.02 by 87, and that adds up to the point where you’re more than a half-step away from the rational number you really wanted. Matlab, on the other hand, seems to have some “closeness” heuristics that are much more forgiving.
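The half-ulp claim is easy to check directly. A small sketch — the exact error magnitude is the trailing `...00004163336...` digits quoted above:

```julia
# The representation error of Float64(0.02) relative to 1//50
# is well under half an ulp, so the rational "snap" round-trips
# back to the exact same Float64.
err = abs(big(0.02) - 1 // 50)   # ≈ 4.16e-19
err < eps(0.02) / 2              # true
Float64(1 // 50) == 0.02         # true
```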
A really awesome side-effect of the high-precision arithmetic backing this is that the intermediate values are also much more frequently what you intended. The classic example is the third value of
0.1:0.1:0.9 — it’s exactly 0.3 in Julia, but 0.30000000000000004 in most other languages.
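Comparing the range against naive accumulation shows the difference:

```julia
# Julia computes each element from the rationalized start and step,
# so the third element is exactly the Float64 nearest to 3/10:
(0.1:0.1:0.9)[3]     # 0.3

# Repeatedly adding Float64(0.1) accumulates the representation error:
0.1 + 0.1 + 0.1      # 0.30000000000000004
```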