[ANN] AstroNbodySim.jl: Unitful and differentiable gravitational N-body simulation code in Julia

If some of these assumptions don’t hold, you get basically garbage numbers out

Thanks for the information.

https://discourse.julialang.org/t/re-ann-astronbodysim-jl-unitful-and-differentiable-gravitational-n-body-simulation-code-in-julia/74817/2?u=islent

1 Like

These days it’s PartitionedArrays that’s probably the best. But yes

if you’re writing a parallel code, then you’re going to be writing parallel array primitives, in which case you might as well put that parallel array anywhere. Good parallel array types will be more useful than the applications they are written for, so this might have an Ewald summation package associated with it as well, but then it could also be used with symplectic time steppers, implicit ones, etc. Locking such an object to one application would not be the greatest use of dev resources, especially given that such array types are lacking, while the rest of what’s in there is rather straightforward.

Though the thing that really should get fixed here is that the code is GPL.

1 Like

We can combine Measurements.jl with DifferentialEquations.jl to solve an ODE with error propagation, as in this tutorial. This is one of the advantages of the Julia language, as mentioned in The Unreasonable Effectiveness of Multiple Dispatch | Stefan Karpinski. Combining Measurements.jl with AstroNbodySim.jl is not much different, especially for small systems, since they are essentially just ODEs. So this feature may be useful for applications such as orbital dynamics and mission trajectory design.
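A minimal sketch of that pattern, in the style of the well-known simple-pendulum example from the Measurements.jl/DifferentialEquations.jl tutorial (the parameter values and uncertainties here are illustrative):

```julia
using Measurements, OrdinaryDiffEq

g = 9.79 ± 0.02    # gravitational acceleration, with measurement uncertainty
L = 1.00 ± 0.01    # pendulum length, with measurement uncertainty

# Small-angle pendulum θ'' = -(g/L) θ, written as a first-order system
function pendulum!(du, u, p, t)
    du[1] = u[2]
    du[2] = -(g / L) * u[1]
end

u0 = [0.1 ± 0.0, 0.0 ± 0.0]    # initial angle and angular velocity
prob = ODEProblem(pendulum!, u0, (0.0, 10.0))
sol = solve(prob, Tsit5())

# Every state in `sol` is a Measurement: the uncertainties in g and L
# were propagated through the integration by multiple dispatch alone.
```

No glue code is needed; the solver works on `Measurement` numbers because it is generic over the element type.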

2 Likes

Two comments:

  1. For AstroNbodySim.jl as a (“small-system”) N-body simulator, Miller’s instability arises generically.
    See p. 343 of the Binney & Tremaine 2008 book.
  2. So, as far as this Measurements.jl feature is concerned, we should think about applications that can be modeled with N<3 . Then, what is that? :smile:
1 Like

There are situations where error propagation and derivatives could be sensible even for large N (e.g. FlowPM). There is a link above to a discussion by Chris about how AD diverges on chaotic problems. To avoid chaos, we need to zoom out to distance scales much larger than a typical orbit and simulate time scales much shorter than the typical Lyapunov time. This can be the case for cosmological simulations in the weakly nonlinear regime.
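The point about AD horizons can be seen in a toy example; the logistic map here stands in for a chaotic orbit, and the two horizons are illustrative:

```julia
using ForwardDiff

# Chaotic logistic map x_{n+1} = r x_n (1 - x_n) at r = 4.
# The derivative of the final state w.r.t. the initial condition grows
# roughly like exp(λn), with λ the Lyapunov exponent, so AD gradients
# are only meaningful on horizons short compared to the Lyapunov time.
iterate_map(x0, n; r = 4.0) = foldl((x, _) -> r * x * (1 - x), 1:n; init = x0)

d_short = ForwardDiff.derivative(x -> iterate_map(x, 5),  0.3)
d_long  = ForwardDiff.derivative(x -> iterate_map(x, 50), 0.3)
# abs(d_long) is many orders of magnitude larger than abs(d_short)
```

The same mechanism is why gradients through a long chaotic N-body integration blow up, while short-horizon or coarse-grained (weakly nonlinear) setups stay well-behaved.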

I see that cosmological sims are on the roadmap! I look forward to trying those out. Have you seen PencilFFTs?

Yes, I have. However, PencilFFTs is based on MPI, which is incompatible with the parallelization scheme of AstroNbodySim. We plan to implement cosmological simulations with a tree method and an AMR Poisson solver.
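For context, the heart of a tree method is the cell-opening criterion; a hypothetical sketch (the `Cell` type, the θ value, and the monopole treatment are illustrative, not AstroNbodySim’s actual design):

```julia
# Barnes–Hut style opening criterion: a tree cell of side `size` at
# distance d from the target point is approximated by its monopole
# (center of mass) when size/d < θ; otherwise its children are opened.
struct Cell
    size::Float64             # side length of the cell
    com::NTuple{3,Float64}    # center of mass of the particles inside
    mass::Float64
end

function use_monopole(c::Cell, x::NTuple{3,Float64}; θ = 0.5)
    d = sqrt(sum((c.com .- x) .^ 2))
    return c.size / d < θ
end
```

With the usual θ ≈ 0.5–0.7, this criterion is what turns the O(N²) direct sum into an O(N log N) tree traversal.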

Do the simulations run in a non-NVIDIA, non-CUDA environment, e.g., an iMac?

If you are asking whether the GPU module can run on non-NVIDIA environments, the answer is NO. Our GPU implementation is based on CUDA.jl.
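A quick way to check whether a given machine can use the CUDA-based GPU module; `CUDA.functional()` is CUDA.jl’s standard probe for a usable NVIDIA driver and runtime:

```julia
using CUDA

if CUDA.functional()
    # A usable NVIDIA driver and runtime were found
    @info "CUDA is usable on device $(CUDA.name(CUDA.device()))"
else
    # e.g. an iMac or any non-NVIDIA machine lands here
    @warn "No usable CUDA GPU; use the CPU code paths instead"
end
```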

If you are asking whether the other functionalities of AstroNbodySim can work on non-GPU platforms, the answer is YES. However, the GPU direct summation and particle-mesh methods will not be supported there.
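For reference, the direct summation method mentioned above is the O(N²) pairwise sum; here is a minimal CPU sketch with Plummer softening (the function name and softening value are illustrative, not AstroNbodySim’s API):

```julia
# Softened pairwise gravitational accelerations,
# a_i = Σ_{j≠i} G m_j r_ij / (|r_ij|² + ε²)^(3/2),
# with a Plummer softening ε to avoid the singularity at r → 0.
function accelerations(pos::Vector{NTuple{3,Float64}}, mass::Vector{Float64};
                       G = 1.0, soft = 1e-3)
    N = length(pos)
    acc = fill((0.0, 0.0, 0.0), N)
    for i in 1:N
        ax = ay = az = 0.0
        for j in 1:N
            i == j && continue
            dx = pos[j][1] - pos[i][1]
            dy = pos[j][2] - pos[i][2]
            dz = pos[j][3] - pos[i][3]
            r2 = dx^2 + dy^2 + dz^2 + soft^2
            f = G * mass[j] / (r2 * sqrt(r2))
            ax += f * dx; ay += f * dy; az += f * dz
        end
        acc[i] = (ax, ay, az)
    end
    return acc
end
```

On a GPU, the inner loop is what each thread computes for its own particle `i`, which is why direct summation maps so naturally to CUDA.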

I use an Nvidia GT216 GLM (Quadro FX880M); it is not a GPU for playing heavy games like Assassin’s Creed. Can I run the GPU simulation on my hardware? Do I still need CUDA installed for that?

I’ll try it today and tell you the result.

P.S. I want to create a Julia package that can do physics simulations like yours. You are amazing!

I tried to test example 4 and added the dependencies, but got an error:

Can you capture a screenshot of the full output?

It does not seem like a problem stemming from AstroIC, because we set loose restrictions on dependencies.

Possible solutions are:

  • update Reexport (and other packages)
  • install packages depending on Reexport after installing AstroIC

PS: Please post usage problems on GitHub issues.