Julia is doing two things here. First, it realizes that the floating point value `0.02` (that is, `0.0200000000000000004163336342344337026588618755340576171875`) is the closest representable value to `1//50`, and that the start and stop can also be expressed in terms of a common denominator with the step. Second, it finds the closest twice-precision (128-bit) floating point representation for this exact fraction:

```
julia> big(0.02)
0.0200000000000000004163336342344337026588618755340576171875
julia> big((1.12:0.02:1.2).step)
0.019999999999999999999999999999999630221450677650716213252235023693954840684483542645466513931751251220703125
```
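You can check both halves of this at the REPL. This sketch assumes the range literal produces a `StepRangeLen` with a `step` field, as in the output above:

```julia
r = 1.12:0.02:1.2

# The extended-precision step rounds back to exactly the literal you typed,
# so nothing visible at Float64 precision was changed:
@assert Float64(big(r.step)) == 0.02

# And 0.02 really is (to within eps) the Float64 closest to the rational 1//50:
@assert rationalize(0.02) == 1//50
```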

It's not just high precision; it's also a rationalization of the number you gave it. I should note that we *are* snapping, but we're only changing the exact values by less than `eps(x)/2`. In other words, if you converted this value back to a normal `Float64`, you'd get the same number you started with. This is why I talked about writing the floating point literals yourself. If you write them out, you're writing some decimal fraction (something divisible by a power of 10) that can be rationalized in this manner. If you instead do `0.02*87`, then you're also multiplying that trailing `...00004163336...` in the full expansion of `0.02` by 87, and that adds up to the point that you're more than a half-step away from the rational number you really wanted. Matlab, on the other hand, seems to have some "closeness" heuristics that are much more forgiving.

A really awesome side-effect of the high precision arithmetic backing this is that the intermediate values are also much more frequently what you intended. The classic example is the third value in `0.1:0.1:0.9`: it's `0.3` in Julia, but `0.30000000000000004` in most other languages.
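You can verify this at the REPL: indexing the range gives the value you intended, while naively accumulating the `Float64` step drifts:

```julia
# The range's third element is exactly the Float64 nearest 3/10:
@assert (0.1:0.1:0.9)[3] == 0.3

# Repeated Float64 addition of the step does not get you there:
@assert 0.1 + 0.1 + 0.1 == 0.30000000000000004
@assert 0.1 + 0.1 + 0.1 != 0.3
```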