I found a bug in SafeREPL: the function round does not work!

               _
   _       _ _(_)_     |  Documentation: https://docs.julialang.org
  (_)     | (_) (_)    |
   _ _   _| |_  __ _   |  Type "?" for help, "]?" for Pkg help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 1.10.2 (2024-03-01)
 _/ |\__'_|_|_|\__'_|  |  Official https://julialang.org/ release
|__/                   |

julia> round(1.2,digits=4)
1.2

julia> using SafeREPL

julia> round(1.2,digits=4)
1.200000000000000000000000000000000000000000000000000000000000000000000000000007

julia> 

The function round does not work in SafeREPL.

If I understand SafeREPL correctly, it represents the number as a BigFloat, so it is probably rounding to the closest BigFloat representation of 1.2?

The difference is in the 77th decimal place, so I think you will be fine as long as it is not finances! :sweat_smile:

Kind regards


Have a look at this: PSA: floating-point arithmetic. It discusses rounding and printing for Float64 numbers, but BigFloats are the same, just with more digits.

As Ahmed_Salih said, what you’re seeing is that the closest BigFloat (at default precision) to the decimal number 1.2 is not exactly 1.2, but some base-2 rational number that differs from 1.2 by something like 1e-77.
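That gap is easy to check in a plain REPL, without SafeREPL loaded (a quick sketch; the exact size of the gap depends on BigFloat's default 256-bit precision):

```julia
# Parse 1.2 directly as a BigFloat, which is roughly what SafeREPL
# does to Float64 literals after it is loaded
x = big"1.2"

# Compare against the exact decimal value 6/5; the gap is nonzero but
# tiny, on the order of 1e-77 at the default 256-bit precision
gap = abs(x - 6//5)
println(gap)
```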


Upon loading, the default is to replace Float64 literals with BigFloat, and Int, Int64 and Int128 literals with BigInt

The former replacement, to BigFloat, isn't "safe" in the sense of fully accurate: 1.2 can't be exact in any binary floating-point format, because 0.1 isn't, unlike e.g. 0.5 or 0.125.

We do have DecFP.jl for exact 1.2 and 0.1. SafeREPL could have chosen that to replace floats, or used rationals (of e.g. BigInt), and it might not be too late to change it or add it as an option. I think the "safe" in SafeREPL refers to safety from overflow: with BigInt you're safe from that (though not from running out of memory if your number requires a very large amount of it, but then there's no alternative to failing somehow).
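For comparison, a rational-based approach keeps such literals exact. A minimal sketch using Julia's built-in Rational (not something SafeREPL currently does):

```julia
# Rationals store an exact numerator/denominator pair, so decimal
# literals and thirds are both represented exactly
@assert 12//10 == 6//5            # 1.2 as an exact rational
@assert 1//3 + 1//3 + 1//3 == 1   # no rounding error for thirds

# Rational{Int} can overflow on large numerators/denominators;
# Rational{BigInt} trades speed for safety from overflow
big_r = big(1)//big(3)
@assert big_r + big_r + big_r == 1
println(typeof(big_r))
```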

Rationals represent 1/3 exactly, but none of the other options above do; only some hypothetical ternary floating-point format would (and then 0.5 and 0.1 would not be exact, so you would need base 30), or any base with 3 as a factor (and you really want 2 and 5 as factors too, for base 10). The Soviet Setun was ternary-based:

https://inria.hal.science/hal-01568401/document

With a minimum set of commands (only 24 single-address commands), the “Setun”
provided an opportunity to do calculations with fixed-point and floating-point
numbers.
[…]
It also provided the addition operation with products that optimized polynomial calculations. It utilized three-valued (trit) operations for multiplying and three commands for conditional transition according to the sign of the result.
[…]
thanks to its simplicity, its natural architecture, and its rational constructed programming system, the Setun effectively used an interpretive system successfully. Some of its features included floating-point numbers with eight decimal digits (IP-2), floating-point numbers with six decimal digits (IP-3), complex numbers with eight decimal digits (IP-4), floating-point numbers with twelve decimal digits (IP-5), auto code “Poliz” with its operating system, and a library of standard programs that used floating-point numbers with six decimal digits.

The document seems to later confirm ternary-based floating point, i.e. 1/3 would be exact and then neither 0.5 nor 0.1 would be. But I may be misunderstanding and it may have e.g. "eight decimal digits" exact (though, as with regular floating point, the stated number of decimal digits doesn't mean all of them are accurate, only at best), in which case it would be decimal/10-based or 30-based. At least any of this would be possible not just on a ternary but also on a binary computer.

When your number format can't exactly represent your values, e.g. 1/3, then none of your calculations are exact and errors can add up.

There's really no way around it. You can have all of your numbers exact under +, -, *, and /, but when you get to powers a ^ b you have a problem unless b is an integer: e.g. for the square root, b = 0.5, you can get an irrational number, and you can't store such a number exactly, since it has an infinite number of digits in any base. Hypothetically you could have fully symbolic computation (supporting sets too, e.g. the two solutions of a square root if you're not just looking for the principal one); or at least make sqrt(2) exact by storing all numbers as their squares, or use a base-sqrt(2) number system, but then you'd just have a different problem, e.g. cube roots not being exact.
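The square-root problem is easy to demonstrate: the rounded sqrt(2), squared, no longer equals 2 (shown here in Float64, but the same holds at any finite precision in any base):

```julia
# sqrt(2) is irrational, so the stored value is only the nearest Float64
s = sqrt(2.0)

# squaring the rounded root misses 2 by one ulp
println(s * s)        # 2.0000000000000004
println(s * s == 2.0) # false
```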

Have you considered a package such as GitHub - JuliaMath/FixedPointDecimals.jl: Julia fixed-point decimals built from integers


It's a great option, but not safe (it can e.g. overflow, unless it also works on/is used on top of BigInt). Yes, it's decimal, but DecFP has the same good property. And neither is ok for e.g. 1/3. If you replace floating point (e.g. Float64), then you likely want another, better floating-point option (or rationals), e.g. for NaN.

This issue might be relevant to this discussion. Not sure though, I have never used SafeREPL.

This was all but answered in the first few comments, but maybe it'd help to illustrate things. 1.2 and 0.2 cannot be exactly represented in Float64 or any binary format because there is no finite sum of powers of 2 equal to them, but 1 = 2^0 can be. 1.2 is printed as 1.2 for convenience, but this is not done for BigFloat because the purpose there is higher precision.

julia> 1.2, 0.2
(1.2, 0.2)

julia> 1.2-1-0.2, 1.2-1 # if exact decimal, you'd expect (0.0, 0.2)
(-5.551115123125783e-17, 0.19999999999999996)

julia> big(1.2)
1.1999999999999999555910790149937383830547332763671875

julia> big(0.2)
0.200000000000000011102230246251565404236316680908203125

julia> length(repr(MIME("text/plain"), big(1.2))) - 1 # -1 for decimal point
53

Float64’s significand has 53 bits of precision: 52 stored bits plus an implied bit with a value of 1. The nice thing about powers of 2 is that they all have a finite decimal representation, so you can calculate the bounds on the number of decimal digits. 1.2 sums 2^0 to 2^-52, so that’s 1 integral digit plus at most 52 decimal digits; 1.2 goes the full 53 digits. 0.2 sums 2^-3 to 2^-55, so that’s at most 55 decimal digits; 0.2 uses 54 digits.
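A quick way to check these counts (an illustrative sketch; converting to Rational recovers the exact binary value behind the literal):

```julia
# Float64 carries 53 significant bits: 52 stored plus the implicit leading 1
@assert precision(Float64) == 53

# Every finite Float64 is exactly a ratio with a power-of-2 denominator,
# which is why its exact decimal expansion is finite
r = Rational{BigInt}(1.2)
@assert ispow2(denominator(r))
println(r)  # the exact value the literal 1.2 is stored as
```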

With the higher precision of BigFloat, the rounding actually does change its value to be closer to 1.2, but it’s still inexact because it’s still a binary format.

julia> round(big(1.2), digits=4)
1.200000000000000000000000000000000000000000000000000000000000000000000000000007

julia> big(Float64(ans)) # putting it through Float64 loses precision
1.1999999999999999555910790149937383830547332763671875

As mentioned already, FixedPointDecimals.jl does fixed-point decimals, DecFP.jl wraps Intel's IEEE 754-2008 decimal floating-point library, and Decimals.jl does arbitrary-precision decimal floating point, which as far as I know is not covered by IEEE. Note that the latter two packages were last updated in 2020, which could be a problem and indicates how much more useful the binary format is in practice.
