Strange behaviour when subtracting a scalar from a StepRangeLen

No. This has nothing to do with @simd or vectorization. Broadcasting basic arithmetic over ranges isn't really broadcasting at all; it's specialized to recompute a new range at O(1) cost. Exactly how to recompute it is the hard part, especially in how it interacts with all the other smarts that ranges have.
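
For example, a quick check (using the range from the original question):

julia> r = 1e-3:1e-3:1.0;

julia> r .- 0.001 isa AbstractRange  # broadcasting returns a new range, not a Vector
true

julia> length(r .- 0.001)            # same number of elements, no O(n) work done
1000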

In this case, constructing the step range 1e-3:1e-3:1.0 identifies that the endpoints and step are within floating point precision of the exact value $\frac{1}{1000}$, and then it compensates for that. But that compensation means it's representing the first point as a higher-precision representation of $\frac{1}{1000}$. That's why you're getting the answer you do there:

julia> Float64(big(1)//1000 - 0.001)
-2.0816681711721686e-20
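
And subtracting 1e-3 from the range surfaces exactly that residual as the first element (this reproduces the surprising result from the original question):

julia> r = 1e-3:1e-3:1.0;

julia> r[1]            # indexing rounds back down to Float64
0.001

julia> (r .- 1e-3)[1]  # but the subtraction happens at the higher precision first
-2.0816681711721686e-20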

If, however, the values of the step range do not rationalize nicely to an exact fraction like $\frac{1}{1000}$, this will not happen:

julia> x = 1e-3:π/123:1.
0.001:0.025541403687721894:0.9971147438211538

julia> x .- 1e-3
0.0:0.025541403687721894:0.9961147438211538

julia> (x .- 1e-3)[1]
0.0

It can be helpful to think of ranges as higher-precision objects that choose their exact values based upon all the values you give (within floating point rounding of the values you passed, that is). Just like working with mixed precision in other contexts, this affects the order in which you should do operations. It's also worth noting that doing floating point math (yes, even something as innocuous as multiplying by $10^{-3}$) before constructing a range can lead to surprising results, because the result of that arithmetic may not quite hit the value you expected, and then it's no longer within rounding distance of a nice fraction of the other endpoints.
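
As a toy demonstration of that higher-precision behavior (my own example, not from the original question): the range corrects each point to the nearest tenth and hits 0.3 exactly, while naively accumulating the same Float64 step drifts away from it:

julia> collect(0.1:0.1:0.3)  # the range's points land on exact tenths
3-element Vector{Float64}:
 0.1
 0.2
 0.3

julia> 0.1 + 0.1 + 0.1       # plain Float64 accumulation does not
0.30000000000000004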
