Upon loading, SafeREPL's default is to replace Float64 literals with BigFloat, and Int, Int64 and Int128 literals with BigInt.
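A quick sketch of what that means in practice (assuming the defaults just described):

```julia
julia> using SafeREPL

julia> typeof(1.2), typeof(1)   # literals now parse as the big types
(BigFloat, BigInt)

julia> 2^64                     # with Int64 literals this silently overflows to 0
18446744073709551616
```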
The former swap, to BigFloat, isn't "safe" in the sense of fully accurate: 1.2 can't be exact in any binary floating-point format, because 0.1 isn't, unlike e.g. 0.5 or 0.125 (which are powers of two).
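A minimal check, comparing against exact rationals (the float-vs-rational comparison is exact in Julia):

```julia
julia> big"1.2" == 12 // 10   # false: 1.2 has no finite binary expansion, at any precision
false

julia> big"0.5" == 1 // 2     # true: 0.5 = 2^-1 is exact in binary
true
```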
We do have DecFP.jl for an exact 1.2 and 0.1. SafeREPL could have chosen that to replace floats, or used rationals (of e.g. BigInts), and it might not be too late to change it, or add it as an option. I think the "safe" in SafeREPL refers to safety from overflow, i.e. with BigInt you're safe from that (though not from running out of memory if your number requires a very large amount of it, but there's no alternative then to failing somehow).
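For comparison, the decimal and rational alternatives look like this (a sketch; I'm assuming DecFP's d"…" string macro for Dec64 literals, as in its README):

```julia
julia> using DecFP

julia> d"0.1" + d"0.2" == d"0.3"   # true: all three are exact in base 10
true

julia> 0.1 + 0.2 == 0.3            # the same test fails with binary Float64
false

julia> big(1) // 3                 # Rational{BigInt}: exact and overflow-safe
1//3
```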
Rationals make 1/3 exact, but none of the other options above do; only some hypothetical ternary floating point would (and then 0.5 would not be exact, nor 0.1; for those you'd need base 30), or any base with 3 as a factor (and you really want 2 and 5 too, for base-10 fractions). The Soviet Setun was ternary-based:
https://inria.hal.science/hal-01568401/document
With a minimum set of commands (only 24 single-address commands), the “Setun” provided an opportunity to do calculations with fixed-point and floating-point numbers.
[…]
It also provided the addition operation with products that optimized polynomial calculations. It utilized three-valued (trit) operations for multiplying and three commands for conditional transition according to the sign of the result.
[…]
thanks to its simplicity, its natural architecture, and its rational constructed programming system, the Setun effectively used an interpretive system successfully. Some of its features included floating-point numbers with eight decimal digits (IP-2), floating-point numbers with six decimal digits (IP-3), complex numbers with eight decimal digits (IP-4), floating-point numbers with twelve decimal digits (IP-5), auto code “Poliz” with its operating system, and a library of standard programs that used floating-point numbers with six decimal digits.
The document seems to later confirm ternary-based floating point, i.e. 1/3 would be exact, and then neither 0.5 nor 0.1 would be. But I may misunderstand, and it has e.g. “eight decimal digits” accurate (likely, as with regular floating point, the stated number of decimal digits is a best case, not a guarantee that all of them are accurate), in which case it would be decimal/10-based or 30-based. At least any of this would be possible not just on a ternary but also on a binary computer.
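The rule behind all of these bases: a reduced fraction p/q has a terminating expansion in base b exactly when every prime factor of q divides b. A small check (hypothetical helper, just to illustrate):

```julia
# Does 1/q terminate in base b? True iff every prime factor of q divides b:
# strip the factors q shares with b until nothing (or something coprime) is left.
function terminates(q::Integer, b::Integer)
    while (g = gcd(q, b)) > 1
        while q % g == 0
            q ÷= g
        end
    end
    return q == 1
end

terminates(3, 3)    # true:  1/3 is exact in ternary
terminates(2, 3)    # false: 0.5 is not
terminates(10, 30)  # true:  0.1 is exact in base 30 = 2·3·5
```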
When your number format can't exactly represent your values, e.g. 1/3, then no calculation involving them is exact either, and the errors can add up.
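The classic demonstration (Float64 here; BigFloat shows the same behavior, just with smaller errors):

```julia
julia> foldl(+, fill(0.1, 10)) == 1.0   # false: each step carries 0.1's error
false

julia> foldl(+, fill(1//10, 10)) == 1   # true: rational arithmetic is exact
true
```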
There's really no way around it. You can have all of your numbers exact, and + - * /, but when you get to powers a ^ b you have a problem unless b is an integer: e.g. the square root, when b = 0.5, generally gives an irrational number, and you can't store such a number, since it has an infinite number of digits in any base. Hypothetically I can see having fully symbolic computation (with support for sets, e.g. the two solutions of a square root if you're not just looking for the principal root); or at least having sqrt(2) exact, by storing all numbers as their square, or by using a base-sqrt(2) number system, but then you'd just have a different problem, e.g. cube roots not being exact.
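To make the "exact sqrt(2)" idea concrete, here's a toy sketch of exact arithmetic in Q(√2), i.e. numbers a + b·√2 with rational coefficients (a hypothetical type, just to illustrate; cube roots would indeed still fall outside it):

```julia
# Toy exact arithmetic in Q(√2): a + b·√2 with exact rational a, b.
struct QSqrt2
    a::Rational{BigInt}
    b::Rational{BigInt}
end

Base.:+(x::QSqrt2, y::QSqrt2) = QSqrt2(x.a + y.a, x.b + y.b)
Base.:*(x::QSqrt2, y::QSqrt2) =
    QSqrt2(x.a*y.a + 2*x.b*y.b, x.a*y.b + x.b*y.a)   # uses (√2)² = 2
Base.:(==)(x::QSqrt2, y::QSqrt2) = x.a == y.a && x.b == y.b

root2 = QSqrt2(0, 1)            # exactly √2
root2 * root2 == QSqrt2(2, 0)   # true: (√2)² is exactly 2, no rounding
```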