# What's up with UnitRange?

Consider in Matlab:

``````
>> 1.12*100:0.02*100:1.2*100

ans =

112          114          116          118          120
``````

In julia I get

``````
julia> 1.12*100:0.02*100:1.2*100
112.00000000000001:2.0:118.00000000000001
``````

which when I collect is definitely not what I expected

``````
julia> collect(1.12*100:0.02*100:1.2*100)
4-element Array{Float64,1}:
 112.00000000000001
 114.00000000000001
 116.00000000000001
 118.00000000000001
``````

In Matlab everything is a double-precision float, but it rounds the displayed answer when possible. In Julia you explicitly asked for a Float64, and so you got one.

Makes sense, but I still find it very unintuitive. Is there anything I can do to match the functionality of matlab in this case?

Is this not just printing though? Matlab just truncates it to avoid that question.

1 Like

Yes, Matlab may be lying to you and truncating its display. It's also returning a floating point number; it's just formatted the way Julia formats integers. I no longer have a Matlab install, but try

``````
r = 1.12*100:0.02*100:1.2*100
fprintf('%.17f\n',r(1))
``````

I do know that Matlab also does some "snapping" to integral endpoints within its colon syntax, so that may (also) be happening here.

Floating point arithmetic doesn't behave how you might expect, in any language. Indeed, just check out the result of `1.12*100` in both Matlab and Julia. Again, Matlab will likely obscure the "real" answer via its printing.
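If you don't have either language handy, the same IEEE 754 double arithmetic can be reproduced in Python, which (like Julia) prints the shortest decimal that round-trips to the same value. A quick check:

``````python
from decimal import Decimal

# Matlab, Julia, and Python all store these values as IEEE 754 doubles,
# so the arithmetic is identical; only the printing differs.
start = 1.12 * 100   # not exactly 112
step = 0.02 * 100    # happens to round to exactly 2.0
stop = 1.2 * 100     # happens to round to exactly 120.0

print(start)  # 112.00000000000001
print(step)   # 2.0
print(stop)   # 120.0

# The full decimal expansion of the computed start value:
print(Decimal(start))
``````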

2 Likes

`collect(round(Int, 1.12*100):round(Int, 0.02*100):round(Int, 1.2*100))` would work.

I tried the snippet you wrote, and yeah, Matlab is just hiding the ugly in the print statements. But what I'm more concerned about is properly computing the number of elements in the range (the "snapping" as you say). `112.00000000000001421` is pretty much `112`, but a range with 4 elements where it's supposed to have 5 is a much bigger problem.
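The lost element can be reproduced with a naive length formula. This Python sketch (`naive_len` is my name, not anything from Julia's implementation, which is far more careful) shows why one ulp of error on the start endpoint drops the last element:

``````python
import math

def naive_len(start, stop, step):
    """Naive range length: how many whole steps fit before passing `stop`."""
    return math.floor((stop - start) / step) + 1

# With exact endpoints, 5 elements fit:
print(naive_len(112.0, 120.0, 2.0))            # 5

# But the computed start sits one ulp above 112, so the fifth element
# would land just past 120.0 and gets dropped:
print(naive_len(1.12 * 100, 1.2 * 100, 2.0))   # 4
``````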

1 Like

That would work if I expected all the elements in the range to be integers, but I don't. For example, if I do this in Matlab, I get what I expect:

``````
>> 1.12*87:0.02*87:1.2*87

ans =

97.44        99.18       100.92       102.66        104.4
``````

But in Julia, I'm still losing an element because of the small +ϵ tacked on

``````
julia> collect(1.12*87:0.02*87:1.2*87)
4-element Array{Float64,1}:
  97.44000000000001
  99.18
 100.92000000000002
 102.66000000000001
``````

Ah, I missed that. That's the other part of my answer: the slop for snapping. We don't do slop, but it's still something we try very hard to get right. It's a challenge because of floating point issues like this. In general, if you write the values directly as floating point literals (instead of a multiplication) we'll almost always get them right.

As such, a nice workaround is to do the multiplication outside the range. It also hits your intended values much more accurately.

``````
julia> (1.12:0.02:1.2) .* 87
97.44:1.74:104.4

julia> collect(ans)
5-element Array{Float64,1}:
  97.44
  99.18
 100.92
 102.66
 104.4
``````
4 Likes

You can use `range` with keyword `length`:

``````
julia> collect(range(1.12*100, 1.20*100, length=5))
5-element Array{Float64,1}:
 112.00000000000001
 114.00000000000001
 116.0
 118.0
 120.0
``````

Here the step size is implicit, which might be inconvenient in some cases, but this always guarantees the end point to be included.

5 Likes

Thanks, I appreciate the lack of slop with Julia, but it can be frustrating at times. This is a simple fix that'll work for me.

How does Julia calculate `(1.12:0.02:1.2) .* 87` to get the crisp values?

It infers more precision than `1.12` and `1.2` specify, because, taken at face value, those end-points and a length of 5 are incompatible: `1.12 + 4*0.02 != 1.2`. If you call `dump` on the range object, you can see the higher precision values that are inferred upon construction:

``````
julia> dump((1.12:0.02:1.2))
StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}
  ref: Base.TwicePrecision{Float64}
    hi: Float64 1.12
    lo: Float64 -1.0658141036401502e-16
  step: Base.TwicePrecision{Float64}
    hi: Float64 0.01999999999999999
    lo: Float64 9.992007221626408e-18
  len: Int64 5
  offset: Int64 1
``````

Indeed, the true value 1.12 is about 1.0658141036401502e-16 smaller than the floating-point value `1.12`.
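That `lo` correction term can be verified with exact rational arithmetic. A quick check using Python's `fractions` module (28/25 is the true decimal value 1.12; `Fraction(1.12)` is the exact value of the nearest double):

``````python
from fractions import Fraction

# How far does the double 1.12 overshoot the true decimal value 1.12?
err = float(Fraction(1.12) - Fraction(28, 25))
print(err)  # about 1.0658e-16, matching the magnitude of `lo` in the dump
``````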

1 Like

The end-point is always included but it may not be what you wanted. In this case, `1.12*100 == 112.00000000000001 != 112` whereas:

``````
julia> collect((1.12:0.02:1.2)*100)
5-element Array{Float64,1}:
 112.0
 114.0
 116.0
 118.0
 120.0
``````

The values are all exact integer values.

So the range object triggers high-precision multiplication. Very neat!

Julia is doing two things: it first realizes that the floating point value `0.02` (that is, `0.0200000000000000004163336342344337026588618755340576171875`) is the closest representable value to `1//50`, and that the start and stop can also be expressed in terms of a common denominator with the step. Then, secondly, it finds the closest twice-precision (128-bit) floating point representation for this exact fraction:

``````
julia> big(0.02)
0.0200000000000000004163336342344337026588618755340576171875

julia> big((1.12:0.02:1.2).step)
0.019999999999999999999999999999999630221450677650716213252235023693954840684483542645466513931751251220703125
``````

It's not just high precision; it's also a rationalization of the number you gave it. I should note that we are snapping, but we're only changing the exact values by less than `eps(x)/2`. In other words, if you converted this value back to a normal Float64, you'd get the same number you started with.

This is why I talked about writing the floating point literals yourself. If you write them out, you're writing some decimal fraction (something divisible by a power of 10) that can be rationalized in this manner. If you instead do `0.02*87`, then you're also multiplying that trailing `...00004163336...` in the full expansion of `0.02` by 87, and that adds up to the point that you're more than a half-step away from the rational number you really wanted. Matlab, on the other hand, seems to have some "closeness" heuristics that are much more forgiving.
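The rationalization step can be imitated with Python's `fractions` module; its continued-fraction-based `limit_denominator` stands in (loosely) for the search Julia's range construction performs. A sketch of both halves of the argument:

``````python
from fractions import Fraction

# The double 0.02 is not 1/50, but 1/50 is the simplest fraction that
# rounds back to the same double -- that's the rationalization.
exact = Fraction(0.02)                    # a huge dyadic fraction
simple = exact.limit_denominator(1000)
print(simple)                             # 1/50
print(float(simple) == 0.02)              # True: round-trips to the same double

# Multiplying first ruins this: 1.12*87 has drifted a full ulp past 97.44,
# too far to snap back without changing the Float64 value.
print(1.12 * 87 == 97.44)                 # False
``````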

A really awesome side-effect of the high precision arithmetic backing this is that the intermediate values are also much more frequently what you intended. The classic example is the third value in `0.1:0.1:0.9`: it's 0.3 in Julia, but 0.30000000000000004 in most other languages.
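For comparison, here is that classic behavior in a language without this machinery, using naive repeated addition of the double `0.1` in Python:

``````python
# Each double 0.1 carries a tiny positive representation error, so
# adding three of them overshoots the double nearest to 3/10.
x = 0.1 + 0.1 + 0.1
print(x)           # 0.30000000000000004
print(x == 0.3)    # False
``````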

6 Likes

Actually, it's `0.29999999999999998889776975…` in Julia; it just prints as `0.3` because that is the shortest literal that parses to the same value. The fraction `3/10` is not representable in binary floating point.
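The same shortest-round-trip printing exists in Python 3, and the `decimal` module can reveal the digits hiding behind it:

``````python
from decimal import Decimal

# repr shows the shortest literal that parses back to the same double...
print(repr(0.3))   # '0.3'

# ...but the double itself is strictly less than 3/10:
print(Decimal(0.3))                    # 0.2999999999999999888977697537...
print(Decimal(0.3) < Decimal('0.3'))   # True
``````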

2 Likes

Right, it's `0.3` but not 0.3, if you see what I'm saying.

1 Like