Yes, Matlab may be lying to you and truncating its display. It's also returning a floating point number; it's just formatted the way Julia formats integers. I no longer have Matlab installed, but try
r = 1.12*100:0.02*100:1.2*100
fprintf('%.17f\n',r(1))
I do know that Matlab also does some “snapping” to integral endpoints within its colon syntax, so that may (also) be happening here.
Floating point arithmetic doesn't behave how you might expect, and that's true across all languages. Indeed, just check out the result of 1.12*100 in both Matlab and Julia. Again, Matlab will likely obscure the "real" answer via its printing.
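Here's what a recent Julia session shows, for instance, with the Printf stdlib playing the role of Matlab's fprintf:

julia> 1.12 * 100
112.00000000000001

julia> using Printf

julia> @printf("%.17f\n", 1.12 * 100)
112.00000000000001421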
I tried the snippet you wrote, and yeah, Matlab is just hiding the ugly in its print statements. But what I'm more concerned about is properly computing the number of elements in the range (the "snapping", as you say). 112.00000000000001421 is pretty much 112, but a range with 4 elements where it's supposed to have 5 is a much bigger problem.
That would work if I expected all the elements of the range to be integers, but I don't. For example, if I do this in Matlab, I get what I expect:
>> 1.12*87:0.02*87:1.2*87
ans =
97.44 99.18 100.92 102.66 104.4
But in Julia, I'm still losing an element because of the small +ϵ tacked on.
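For instance (from a recent Julia session; the exact printed digits can vary, but the lost element is the point):

julia> r = 1.12*87 : 0.02*87 : 1.2*87;

julia> length(r)   # 5 elements expected
4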
Ah, I missed that. That's the other part of my answer: the slop for snapping. We don't do slop, but it's still something we try very hard to get right. It's a challenge because of floating point issues like this. In general, if you write the values directly as floating point literals (instead of as a multiplication), we'll almost always get them right.
As such, a nice workaround is to do the multiplication outside the range, as shown below. It also hits your intended values much more accurately.
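For example, broadcasting the scaling over the literal range preserves its carefully computed length, and the elements land on the closest Float64s to the decimals you wanted (compare the Matlab output above):

julia> r = 87 .* (1.12:0.02:1.2);

julia> length(r)
5

julia> collect(r)
5-element Vector{Float64}:
  97.44
  99.18
 100.92
 102.66
 104.4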
It infers more precision than 1.12 and 1.2 specify since, taken at face value, those endpoints and a length of 5 are incompatible: in Float64 arithmetic, 1.12 + 4*0.02 != 1.2. If you call dump on the range object, you can see the higher-precision values that are inferred upon construction:
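On a recent Julia it looks roughly like this (the exact hi/lo splits vary a little across versions, so treat the small lo values as illustrative; the structure is the point):

julia> dump(1.12:0.02:1.2)
StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}
  ref: Base.TwicePrecision{Float64}
    hi: Float64 1.12
    lo: Float64 -1.0658141036401503e-16
  step: Base.TwicePrecision{Float64}
    hi: Float64 0.02
    lo: Float64 -4.163336342344337e-19
  len: Int64 5
  offset: Int64 1

Each endpoint and the step are stored as a hi/lo pair of Float64s, which together carry roughly twice the usual 53 bits of precision.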
Julia is doing two things. First, it realizes that the floating point value 0.02 (that is, 0.0200000000000000004163336342344337026588618755340576171875) is the closest representable value to 1//50, and that the start and stop can also be expressed in terms of a common denominator with the step. Second, it finds the closest twice-precision (128-bit) floating point representation for that exact fraction: that's the hi/lo pair you see in the dump above.
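You can reproduce the first step yourself with rationalize:

julia> rationalize(0.02)
1//50

julia> Float64(1//50) == 0.02   # 0.02 is the closest Float64 to 1//50
true

julia> rationalize(1.12), rationalize(1.2)
(28//25, 6//5)

Over the common denominator 50, the range is exactly 56//50 : 1//50 : 60//50, which pins the length at 5.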
It's not just high precision; it's also a rationalization of the number you gave it. I should note that we are snapping, but we're only changing the exact values by less than eps(x)/2. In other words, if you converted this value back to a normal Float64, you'd get the same number you started with.

This is why I talked about writing the floating point literals yourself. If you write them out, you're writing some decimal fraction (a number whose denominator is a power of 10) that can be rationalized in this manner. If you instead do 0.02*87, then you're also multiplying that trailing ...00004163336... in the full expansion of 0.02 by 87, and that error adds up to the point that you're more than a half-step away from the rational number you really wanted. Matlab, on the other hand, seems to have some "closeness" heuristics that are much more forgiving.
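A quick check on one of the endpoints shows the drift crossing that threshold:

julia> 1.12 * 87
97.44000000000001

julia> 1.12 * 87 == 97.44   # no longer the closest Float64 to 97.44
false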
A really awesome side effect of the high-precision arithmetic backing this is that the intermediate values are also much more frequently what you intended. The classic example is the third value of 0.1:0.1:0.9: it's 0.3 in Julia, but 0.30000000000000004 in most other languages.
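For example:

julia> (0.1:0.1:0.9)[3]
0.3

julia> 0.1 + 0.1 + 0.1   # the naive accumulation most languages effectively do
0.30000000000000004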
Actually, it's 0.29999999999999998889776975… in Julia; it just prints as 0.3 because that is the shortest literal that parses to the same Float64. The fraction 3//10 is not representable in binary floating point.
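You can see the full value by converting to BigFloat, which widens the Float64 without rounding:

julia> big((0.1:0.1:0.9)[3])
0.299999999999999988897769753748434595763683319091796875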