What’s happening is more explicit with Float64:
julia> typeof(0:0.1:1)
StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}
It’s a design choice, and certainly a debatable one at that. It’s done to avoid roundoff issues. Compare
julia> collect(0:0.1:1)
11-element Vector{Float64}:
0.0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1.0
julia> collect(StepRangeLen(0,0.1,11))
11-element Vector{Float64}:
0.0
0.1
0.2
0.30000000000000004
0.4
0.5
0.6000000000000001
0.7000000000000001
0.8
0.9
1.0
This issue appears cosmetic, but it can actually lead to genuine ambiguity. Knowing that 0.1*3 == 0.30000000000000004 > 0.3, should last(0.0:0.1:0.3) be 0.2, 0.3, or 0.30000000000000004? The TwicePrecision step is the only way to give the user what they (usually) want while maintaining consistency.
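For the record, the TwicePrecision machinery resolves that question in favor of the endpoint the user wrote:
julia> r = 0.0:0.1:0.3
0.0:0.1:0.3
julia> length(r), last(r)
(4, 0.3)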
But I do sympathize: I, too, find it tedious to have to use the StepRangeLen constructor when I'm okay with a lower-precision step, especially since there is a performance cost.
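For what it's worth, a one-line wrapper spares most of that typing when the naive Float64 step is acceptable (the name lowrange is mine, not anything in Base):
julia> lowrange(start, step, len) = StepRangeLen(float(start), float(step), len)  # hypothetical convenience wrapper
lowrange (generic function with 1 method)
julia> lowrange(0, 0.1, 11)[4]
0.30000000000000004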