Upon loading, the default is to replace Float64 literals with BigFloat, and Int, Int64, and Int128 literals with BigInt.

The former, to BigFloat, isn't "safe" in the sense of fully accurate: 1.2 can't be represented exactly in any binary floating-point format, just as 0.1 can't, unlike e.g. 0.5 or 0.125.
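A quick way to see this (a sketch in plain Julia; `6//5` is just the exact rational value of 1.2, and Julia's `==` between floats and rationals compares exactly):

```julia
# 1.2 = 6/5 has no finite base-2 expansion, so no binary format,
# Float64 or BigFloat, can store it exactly; 0.5 = 1/2 can be.
println(1.2 == 6//5)       # false: Float64 holds a nearby binary value
println(big"1.2" == 6//5)  # false: BigFloat is much closer, still inexact
println(0.5 == 1//2)       # true: denominator is a power of two
```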

We do have DecFP.jl for an accurate 1.2 and 0.1. SafeREPL could have chosen that to replace floats, or used rationals (of e.g. `BigInt`s), and it might not be too late to change it or add it as an option. I think the "safe" in SafeREPL refers to safe from overflows, i.e. with BigInt you're safe from that (though not from running out of memory if your number requires a very large amount of it, but then there's no alternative to failing somehow).
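For the overflow sense of "safe", a minimal illustration in plain Base Julia (no SafeREPL or DecFP needed):

```julia
# Int64 literals silently wrap on overflow; BigInt arithmetic doesn't.
println(2^64)       # 0: Int64 arithmetic wraps around modulo 2^64
println(big(2)^64)  # 18446744073709551616: exact

# Rationals also keep decimal inputs exact, where binary floats don't:
println(1//10 + 2//10 == 3//10)  # true
println(0.1 + 0.2 == 0.3)        # false with binary Float64
```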

Rationals have 1/3 exact, but none of the other options above do; only some hypothetical ternary floating-point would (and then 0.5 would not be exact, nor 0.1; for those too you'd need base 30), or any base with 3 as a factor (and you really want 2 and 5 as factors as well, as in base 10). The Soviet Setun was ternary based:

https://inria.hal.science/hal-01568401/document

With a minimum set of commands (only 24 single-address commands), the "Setun" provided an opportunity to do calculations with fixed-point and floating-point numbers.

[…]

It also provided the addition operation with products that optimized polynomial calculations. It utilized three-valued (trit) operations for multiplying and three commands for conditional transition according to the sign of the result.

[…]

thanks to its simplicity, its natural architecture, and its rational constructed programming system, the Setun effectively used an interpretive system successfully. Some of its features included floating-point numbers with eight decimal digits (IP-2), floating-point numbers with six decimal digits (IP-3), complex numbers with eight decimal digits (IP-4), floating-point numbers with twelve decimal digits (IP-5), auto code "Poliz" with its operating system, and a library of standard programs that used floating-point numbers with six decimal digits.

The document seems to later confirm ternary-based floating point, i.e. 1/3 would be exact and then 0.5 and 0.1 would not be. But I may misunderstand, and it has e.g. "eight decimal digits" accurate (likely, as with regular floating point, the number of decimal digits doesn't mean all of them are accurate, only at best), in which case it would be decimal/10-based or 30-based. At least any of this would be possible not just on a ternary but also on a binary computer.

When your number format can't represent your numbers exactly, e.g. 1/3, then none of your calculations are exact and errors can add up.
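The classic accumulation demo, in plain Julia (`foldl` forces strict left-to-right summation):

```julia
# Adding the inexact Float64 0.1 ten times drifts away from 1.0,
# while the exact rational 1//10 does not:
println(foldl(+, fill(0.1, 10)) == 1.0)    # false: the errors accumulate
println(foldl(+, fill(1//10, 10)) == 1)    # true: rational arithmetic is exact
println(1//3 == 1/3)                       # false: no binary float equals 1/3
```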

There's really no way around it. You can have all of your numbers exact, and *, +, -, and /, but when you get to powers a ^ b you have a problem unless b is an integer: e.g. the square root, when b = 0.5, gives you an irrational number, and you can't store such a number, since it has an infinite number of digits in any base. Hypothetically I can see you having fully symbolic computation (with support for sets… e.g. the two solutions to a square root if you're not just looking for the principal root); or at least having sqrt(2) exact, by storing every number as its square… or using a base-sqrt(2) number system, but then you would just have a different problem, e.g. cube roots not being exact.
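Even with an exact input, the rounding forced by an irrational result doesn't cancel on the way back (plain Julia):

```julia
# sqrt(2) is irrational, so sqrt(2.0) must round; squaring the
# rounded value does not recover 2.0 exactly:
x = sqrt(2.0)
println(x^2)        # 2.0000000000000004, not exactly 2.0
println(x^2 == 2.0) # false: the rounding error survives the round trip
```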