I was meddling with ranges and found the following strange behaviour: suppose I want to create a range starting at 0, with a step of 1 and length 10. If I want to work with Float32s, I'd type something like
r1 = 0.0f0:1.0f0:9.0f0
I then tried to use this within some KernelAbstractions GPU kernels and I kept getting errors.
Upon further investigation, I saw that
typeof(r1) = StepRangeLen{Float32, Float64, Float64, Int64}
so I got Float64s in the ref and step fields of StepRangeLen, instead of the expected Float32.
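A minimal way to see the promotion is to inspect the fields directly (ref and step are the internal field names of Base's StepRangeLen, matching the type parameters printed above):

```julia
r1 = 0.0f0:1.0f0:9.0f0

eltype(r1)       # Float32 — the elements themselves stay Float32
typeof(r1.ref)   # Float64 — but the internal reference point was promoted
typeof(r1.step)  # Float64 — and so was the internal step
```

So the eltype is what you asked for, while the higher-precision intermediates are hidden inside the struct.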
If I explicitly define
r2 = StepRangeLen(0f0, 1f0, 10)
I get the expected
typeof(r2) = StepRangeLen{Float32, Float32, Float32, Int64}
and the errors I got from the GPU kernel go away.
My question is whether this silent type promotion is intended and, if so, what the reason behind it is.
What’s happening is more explicit with Float64:
julia> typeof(0:0.1:1)
StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}
It’s a design choice, and certainly a debatable one at that. It’s done to avoid roundoff issues. Compare
julia> collect(0:0.1:1)
11-element Vector{Float64}:
0.0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1.0
julia> collect(StepRangeLen(0,0.1,11))
11-element Vector{Float64}:
0.0
0.1
0.2
0.30000000000000004
0.4
0.5
0.6000000000000001
0.7000000000000001
0.8
0.9
1.0
This issue appears cosmetic, but it can actually lead to significant ambiguity. Knowing that 0.1*3 == 0.30000000000000004 > 0.3, should last(0.0:0.1:0.3) be 0.2, 0.3, or 0.30000000000000004? The TwicePrecision step is the only way to give the user what they (usually) want while maintaining consistency.
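To illustrate, the TwicePrecision bookkeeping is what lets the range both stop at the requested endpoint and report it exactly:

```julia
r = 0.0:0.1:0.3

length(r)  # 4 — the range really does include the intended endpoint
last(r)    # 0.3 exactly, not 0.30000000000000004
```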
But I will sympathize that I, too, find it tedious to have to use the StepRangeLen constructor when I’m okay with a lower-precision step. Especially because there is a performance cost.
You might be looking for LinRange(0f0, 9f0, 10), which is the low-tech cousin of StepRangeLen, with none of this higher-precision-intermediate cleverness.