Question about floating-point precision in summation

Yes, in fact I was imprecise: the problem did not stem from the numbers being small, but from the fact that I have to multiply and add up an enormous quantity of numbers that can differ from one another by many orders of magnitude. I suspect this was the cause of the discrepancies I observed in some results; those discrepancies have now clearly disappeared with BigFloats, which (although probably not the best available solution in terms of precision and efficiency) are for the moment more than enough to solve the problem effectively. For any future needs I will definitely come back to this topic, since extremely interesting discussions have arisen here.
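To illustrate the effect for other readers: this is a toy sketch (not the actual calculation), showing how Float64 accumulation silently drops terms that are many orders of magnitude smaller than the running total, while BigFloat retains them:

```julia
# Toy example: add 1000 small terms (1.0) to a huge one (1e18).
# At 1e18 the Float64 spacing (ulp) is 128, so each `+ 1.0` rounds
# straight back to 1e18 and the small terms vanish entirely.
function running_sum(T)
    total = T(1e18)
    for _ in 1:1000
        total += T(1.0)   # absorbed in Float64, retained in BigFloat
    end
    return total
end

running_sum(Float64) - 1e18                 # 0.0   — all 1000 terms lost
Float64(running_sum(BigFloat) - big(1e18))  # 1000.0 — BigFloat keeps them
```

The same recovery can be achieved without arbitrary precision via compensated summation (e.g. `sum` with Kahan-style correction, or the xsum algorithm discussed above), which is usually much faster than BigFloat.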

I am very happy that there are forums like these!


By the way, I was corresponding today with Radford Neal (author of the xsum algorithm), and he asked

Out of curiosity, what applications do Julia people tend to use xsum for?

If anyone has applications that they want to share where they wanted/needed extended-precision sums, I would love to pass them along.

e.g. @Frisus95, could you say what application your problem arises in?


It is a calculation that has to do with quantum gravity. The theory I’m working on is called “loop quantum gravity”, founded by Lee Smolin, Abhay Ashtekar and Carlo Rovelli (someone here has probably heard of it). In a nutshell, we are trying to unify quantum mechanics and general relativity.
