Precision error in N-body simulation


I’m currently writing an N-body simulation using the Barnes-Hut algorithm. To validate my implementation, I want to compare it against a brute-force calculation (the theta parameter in BH can be set so that it degenerates into a brute-force calculation).

However, for reasons I don’t fully understand, there is a discrepancy between the two results (at the machine-precision level) which ends up accumulating and diverging.

Apart from the fact that the BH algorithm is written in a recursive form in a separate module, I see no reason why I shouldn’t get the same result.

Does anyone have any insight into what is happening? And is there a workaround, if possible?


In particular, floating-point arithmetic is not associative, so even adding up the same numbers in a different order (e.g. recursively) will generally give a slightly different result.
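A minimal illustration of this (sketched in Python for brevity; the same holds in Julia or any IEEE-754 double arithmetic):

```python
# Floating-point addition is not associative: regrouping the same three
# numbers changes which rounding errors occur, and hence the result.
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c   # the big terms cancel exactly first, then c survives
right = a + (b + c)  # c is absorbed into -1e16 before the cancellation

print(left)   # 1.0
print(right)  # 0.0
```

A tree code like Barnes-Hut sums the very same force contributions in a different order than a brute-force double loop, so differences at this scale are expected.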


In addition to the links posted by @stevengj, the error is probably due to exactly that: changing the order of operations when dealing with floating-point numbers can change the result.

I see you were faster :joy:

Same as in == or in isapprox? And N-body is hard AFAIU.

Edit: the BH algorithm gives an approximate solution? And you are comparing it to a better approximation via brute force?

The problem may be ill-conditioned. I would try gradually increasing the precision used in each algorithm and see if the answers begin to converge.
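One way to sanity-check the order-of-summation hypothesis (sketched in Python; Julia’s BigFloat or Rational would play the same role) is to redo the reduction in exact arithmetic, where the summation order provably does not matter:

```python
import random
from fractions import Fraction

random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(10_000)]

fwd = sum(xs)                          # left-to-right double-precision sum
rev = sum(reversed(xs))                # same numbers, opposite order
exact = float(sum(map(Fraction, xs)))  # exact rational sum, rounded once

print(abs(fwd - rev))    # typically nonzero: order-dependent rounding
print(abs(fwd - exact))  # each float sum is still close to the exact one
```

If both of your algorithms converge to the same answer as the working precision increases, the discrepancy is just accumulated roundoff rather than a bug.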

Oh, I forgot about the impact of the order of operations. It’s effectively not the same.

@goerch yes, BH is approximate, but the level of approximation can be controlled, and you can make it exactly equivalent to a brute-force approach. Also, isapprox is true for roughly the first 20 iterations.

Thanks for the answer. I’ll probably round off the acceleration after each iteration. It will be slower, but it will be easier to see if everything works fine.

You could also parameterize isapprox:

help?> isapprox
search: isapprox

  isapprox(x, y; atol::Real=0, rtol::Real=atol>0 ? 0 : √eps, nans::Bool=false[, norm::Function])
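For reference, Python’s analogue of this is math.isclose, and playing with the relative tolerance shows the same idea: two results that agree under a loose tolerance can be distinguished by a tight one.

```python
import math

a = 1.0
b = 1.0 + 1e-10  # differs from a by ~1e-10, well above machine epsilon

print(math.isclose(a, b))                 # True: default rel_tol is 1e-9
print(math.isclose(a, b, rel_tol=1e-12))  # False: tolerance now too tight
```

Loosening rtol/atol in isapprox the same way lets you decide how much drift between the two algorithms you are willing to call "equal".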

Note that N-body dynamics are typically chaotic, so even tiny differences in roundoff errors will eventually lead to exponentially diverging results.

The usual rejoinder is that any quantity that is exponentially sensitive to initial conditions or other perturbations is not physically meaningful to begin with, and you want to instead look at statistical properties of the solution that are robust to tiny perturbations.
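This sensitivity is easy to reproduce with a toy chaotic system (a Python sketch using the logistic map as a stand-in for N-body dynamics): two trajectories starting one machine epsilon apart end up completely decorrelated.

```python
# The logistic map x -> 4x(1 - x) is chaotic: a perturbation at the scale
# of machine epsilon roughly doubles each iteration until it saturates.
x = 0.3
y = 0.3 + 2.0**-50  # tiny initial offset, ~1e-15

max_sep = 0.0
for step in range(200):
    x = 4.0 * x * (1.0 - x)
    y = 4.0 * y * (1.0 - y)
    max_sep = max(max_sep, abs(x - y))

print(max_sep)  # grows to order one despite the ~1e-15 initial offset
```

Exactly the same growth happens between a Barnes-Hut run and a brute-force run whose per-step results differ only in the last bit.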


Not sure about this: isn’t the solar system an N-body problem?

Edit: dangerously off topic…


For some arguments on the origin of the instabilities, see e.g.
