I know this has something to do with loss of precision due to the way floating-point numbers work, but I find it a bit weird that this happens in such a “simple” calculation when my old calculator can do it without loss of precision.
Can someone enlighten me here again? It’s been a while since I did this.
In short, 295.8 is not the number you think it is. It’s actually 295.80000000000001136868377216160297393798828125:
julia> using Printf; @printf "%.60f" 295.8
295.800000000000011368683772161602973937988281250000000000000000
295.8 is used to input (and output) that number because there’s no other floating-point number closer to the exact 295.8 than that one. So when you then do maths with that number, you can accumulate further rounding errors beyond what you’d otherwise expect.
To make this even more tangible, you can see that if you subtract 295 from the above true value, the answer must be 0.80000000000001136868377216160297393798828125, or the nearest floating point number that rounds to it:
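Trying it in the REPL (the subtraction here happens to be exact in binary, so what you see is just the shortest decimal display of that true value):

```julia
julia> 295.8 - 295
0.8000000000000114

julia> 295.8 - 295 == 0.8
false
```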
Old calculators don’t actually do this any differently; they just do some rounding by default, which papers over most such issues. Doing that would cause other issues in a programming language, however, which is why we don’t.
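A minimal sketch of that calculator-style behaviour in Julia, assuming an arbitrary 10-digit display cutoff:

```julia
julia> round(295.8 - 295, digits=10)  # rounding for display hides the error
0.8
```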
The new calculator in Android has really innovated in this space. Hans-J. Boehm (of garbage-collector fame) writes in “Towards an API for the Real Numbers”:
We no longer receive bug reports about inaccurate results, as we occasionally did for the 2014 floating-point-based calculator.
This is actually really interesting. One thing I was always wondering: would it not also make sense, for (not too large) numbers, to store the digits left of the decimal separator in one integer and the digits right of the decimal separator in another integer, with a carry flag? Essentially turning the calculation into two integer calculations?
You’re describing a variant of fixed-point representation. Fixed-point is what was used before we had floating-point. It’s still fairly widely used, but its applications are not as general-purpose as floating-point’s.
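For concreteness, here’s a minimal fixed-point sketch (the names and the scale of tenths are arbitrary choices for this example): every value is stored as an integer count of tenths, so addition and subtraction become exact integer operations.

```julia
const SCALE = 10  # one fractional decimal digit; pick the precision you need

to_fixed(x) = round(Int, x * SCALE)   # 295.8 -> 2958 tenths
from_fixed(n) = n / SCALE             # convert back only for display

a = to_fixed(295.8)       # 2958
b = to_fixed(295)         # 2950
from_fixed(a - b)         # 0.8 -- the subtraction itself was exact integer maths
```

Multiplication needs a rescale step (and overflow checks), which is where fixed-point starts demanding more care than this sketch shows.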
Keep in mind floating-point is sometimes also implemented using integer operations, when hardware support is missing.
The approach their software is based on only works for a small, hardcoded set of operations.
IMO the continued-fraction-based approaches are more interesting. Some are referenced in the above paper. They all stem from one of R.W. Gosper’s HAKMEM notes: HAKMEM -- CONTENTS -- DRAFT, NOT YET PROOFED. Couple that with a CAS for checking equality.
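As a small taste of that flavour of approach: Julia’s built-in `rationalize` recovers a nearby ratio from a float (as far as I understand, via a continued-fraction expansion), and exact rational arithmetic then behaves the way the original poster expected:

```julia
julia> rationalize(295.8)        # nearest simple ratio within eps(295.8)
1479//5

julia> rationalize(295.8) - 295  # exact rational subtraction
4//5
```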