Unexpected Behaviour of the Rounding Function

I am trying to round some Float64 values and don’t understand the behavior of the round() function for specific values.

If I take:

round(0.585, digits=2, RoundNearestTiesUp)

I get 0.59, as expected, since I want values to be rounded up when ending in a 5.

However, for:

round(0.575, digits=2, RoundNearestTiesUp)

I get 0.57, which I don’t understand. The same happens for 0.565, for some reason.

My question is: why does this happen, particularly when I get the expected behavior for 0.505, 0.515, …, 0.595, except for 0.565 and 0.575?

I assume this has to do with the RoundingMode, but does anyone have any suggestions for how I can consistently get the rounding of Float64s that I’m looking for, perhaps a workaround?

PS:

I’m using Julia version 1.7.1

Floating point numbers aren’t decimal.

julia> big(0.575)
0.5749999999999999555910790149937383830547332763671875

I think that’s because the decimal value 0.575 is not exactly representable in binary floating-point arithmetic. It’s actually the value:

julia> big(0.575)
0.5749999999999999555910790149937383830547332763671875

which is < 0.575:

julia> 0.575 < 575//1000
true

so it correctly rounds down to 0.57. See PSA: floating-point arithmetic

The surprising thing to me is that 0.585 rounds up, since we also have:

julia> big(0.585)
0.58499999999999996447286321199499070644378662109375

julia> 0.585 < 585//1000
true

I think what’s happening here is that 0.58 and 0.59 are both slightly less than their decimal values as well (but I’m not 100% sure here).
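
One quick way to sanity-check that guess is to compare each value against its exact rational; all three do seem to sit just below their decimal values:

julia> (0.57 < 57//100, 0.58 < 58//100, 0.59 < 59//100)
(true, true, true)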

Note that if you want to work with decimal values exactly, you can use decimal floating point:

julia> using DecFP

julia> round(d"0.575", digits=2, RoundNearestTiesUp)
0.58

julia> round(d"0.585", digits=2, RoundNearestTiesUp)
0.59

It’s rare to actually need this in a real application, but it does make human interpretation of decimal inputs and outputs easier.
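
If the decimal values come in as strings (say, read from a file), they can also be parsed straight into Dec64, which should give the same value as the string macro:

julia> parse(Dec64, "0.575") == d"0.575"
true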


Float64(0.585) is exactly halfway between Float64(0.58) and Float64(0.59). However, the same is true for Float64(0.575), which is exactly halfway between Float64(0.57) and Float64(0.58):

julia> big(0.59) - big(0.585)
0.00500000000000000444089209850062616169452667236328125

julia> big(0.585) - big(0.58)
0.00500000000000000444089209850062616169452667236328125

julia> big(0.58) - big(0.575)
0.00500000000000000444089209850062616169452667236328125

julia> big(0.575) - big(0.57)
0.00500000000000000444089209850062616169452667236328125

So, by this logic, shouldn’t 0.575 also round up to 0.58?

Probably there is an additional roundoff error in the round code…
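
That would fit if round with digits=2 works by scaling by 10^2, rounding to an integer value, and scaling back, which I’m assuming is roughly what it does internally. The scaling step alone already loses the tie for 0.575 but lands exactly on it for 0.585:

julia> 0.575 * 100 == 57.5
false

julia> 0.585 * 100 == 58.5
true

So by the time the actual rounding happens, 0.575 has already drifted below the halfway point, while 0.585 has not.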

Thank you for your recommendation; I’ll check out the DecFP.jl package.

Well, strangely, it might not be as rare in real applications.

In my case, I was trying to verify a min-max normalization on a fairly large dataset of around 42,000 entries. I used DataFrames’ transform function to apply the calculation and round the result for each value in the DataFrame.
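
For reference, a rough sketch of what I mean (the data and column names here are made up):

using DataFrames

df = DataFrame(value = [3.2, 7.85, 12.4])   # made-up example data
lo, hi = extrema(df.value)                  # column minimum and maximum

# min-max normalize each entry and round the result to 2 digits, ties up
transform!(df, :value => ByRow(v -> round((v - lo) / (hi - lo), RoundNearestTiesUp; digits=2)) => :value_norm)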

And since I already had a normalized version of the data (I was just checking my work against it), I found some discrepancies when comparing my results to that pre-normalized data.

With that said, unexpected rounding was on average extremely infrequent. Nevertheless, I could see decimal floating point having a place in data manipulations that require a great deal of accuracy.