If some of these assumptions don’t hold, you get basically garbage numbers out
Thanks for the information.
These days it’s PartitionedArrays that’s probably the best. But yes, if you’re writing a parallel code, then you’re going to be writing parallel array primitives, in which case you might as well stick that parallel array anywhere. Good parallel array types will be more useful than the applications they are written for, so this might have an Ewald summation package associated with it as well, but then it could also be used with symplectic time steppers, implicit ones, etc. Locking such an object to one application would not be the greatest use of dev resources, especially given that such array types are lacking, while the rest of what’s in there is rather straightforward.
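A hedged sketch of that point in plain Julia (BlockVector and everything else here is hypothetical, not PartitionedArrays or any real package): once a type implements the AbstractArray interface, generic algorithms work with it unchanged, which is why a good parallel array type outlives any single application.

```julia
# Toy stand-in for a partitioned vector: `blocks` models per-process local
# storage. This is not a real parallel type; it only shows the interface idea.
struct BlockVector{T} <: AbstractVector{T}
    blocks::Vector{Vector{T}}
end

Base.size(v::BlockVector) = (sum(length, v.blocks),)

function Base.getindex(v::BlockVector, i::Int)
    j = i
    for b in v.blocks
        j <= length(b) && return b[j]   # index falls in this block
        j -= length(b)                  # otherwise skip past it
    end
    throw(BoundsError(v, i))
end

v = BlockVector([[1.0, 2.0], [3.0, 4.0]])
sum(v)   # 10.0 — generic reductions work with no extra code
```

The same object could then be handed to an Ewald summation routine, a symplectic integrator, or an implicit solver, with none of those packages knowing about the others.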
Though the thing that really should get fixed here is that the code is GPL.
We can combine Measurements.jl with DifferentialEquations.jl to solve an ODE with error propagation, as shown in this tutorial. This is one of the advantages of the Julia language, as mentioned in The Unreasonable Effectiveness of Multiple Dispatch | Stefan Karpinski. Combining Measurements.jl with AstroNbodySim.jl is not much different, especially for small systems, since they are essentially just ODEs. So this feature may be useful for applications such as orbital dynamics, mission trajectory design, etc.
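For instance, a minimal sketch (assuming Measurements.jl and OrdinaryDiffEq.jl are installed; the pendulum problem is illustrative, not AstroNbodySim.jl API):

```julia
using Measurements, OrdinaryDiffEq

g = 9.79 ± 0.02                  # gravitational acceleration, with uncertainty
L = 1.00 ± 0.01                  # pendulum length, with uncertainty

# Small-angle pendulum θ'' = -(g/L)θ, written as a first-order system
function pendulum!(du, u, p, t)
    du[1] = u[2]
    du[2] = -(g / L) * u[1]
end

u0 = [0.1 ± 0.005, 0.0 ± 0.0]    # initial angle and angular velocity
prob = ODEProblem(pendulum!, u0, (0.0, 10.0))
sol = solve(prob, Tsit5())       # each state now carries a propagated error
```

Because the solver is generic over the number type, no code in either package had to be written specifically for this combination to work.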
Two comments:

1. There are situations where the error propagation and derivatives could be sensible, even for large N (e.g. FlowPM). There’s a link above to a discussion by Chris about how AD diverges on chaotic problems. We need to avoid chaos, so we need to zoom out to distance scales much larger than a typical orbit and simulate time scales much shorter than the typical Lyapunov time. This can be the case for cosmological simulations in the weakly nonlinear regime; see the sketch at the end of this post.
2. I see that cosmological sims are on the roadmap! I look forward to trying those out. Have you seen PencilFFTs?
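A hedged illustration of point 1, using the logistic map as a stand-in for any chaotic system (assumes ForwardDiff.jl): derivatives through a chaotic iteration grow roughly like exp(λn), so gradients are only meaningful over horizons short compared to the Lyapunov time.

```julia
using ForwardDiff

# Iterate the chaotic logistic map x ← 4x(1 − x) for n steps
function logistic(x0, n)
    x = x0
    for _ in 1:n
        x = 4x * (1 - x)
    end
    return x
end

# d(xₙ)/d(x₀) blows up exponentially in n (Lyapunov exponent λ = log 2)
for n in (5, 20, 50)
    d = ForwardDiff.derivative(x0 -> logistic(x0, n), 0.3)
    println("n = $n: dxₙ/dx₀ ≈ $d")
end
```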
Yes, I have. However, PencilFFTs is based on MPI, which is incompatible with the parallelization scheme of AstroNbodySim. We plan to implement cosmological simulations using a tree method and an AMR Poisson solver.
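For readers unfamiliar with the tree method, a hedged sketch of the core Barnes-Hut idea (the Node type and its fields are hypothetical, not the AstroNbodySim API): a far-away node of size s at distance d with s/d < θ is treated as a single point mass at its center of mass, instead of recursing into its children.

```julia
# Hypothetical octree node: a leaf has no children
struct Node
    mass::Float64
    com::NTuple{3,Float64}     # center of mass
    size::Float64              # side length of the node's cube
    children::Vector{Node}
end

const G = 1.0
const θ = 0.5                  # opening angle

# Acceleration at point x due to the (sub)tree rooted at `node`
function accel(node::Node, x::NTuple{3,Float64})
    Δ = node.com .- x
    d = sqrt(sum(Δ .^ 2))
    d == 0 && return (0.0, 0.0, 0.0)
    if isempty(node.children) || node.size / d < θ
        return (G * node.mass / d^3) .* Δ    # monopole approximation
    end
    a = (0.0, 0.0, 0.0)
    for c in node.children
        a = a .+ accel(c, x)                 # open the node
    end
    return a
end
```

This is what turns the O(N²) direct sum into O(N log N) and makes large cosmological volumes tractable.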
Do the simulations run in a non-NVIDIA, non-CUDA environment, e.g., an iMac?
If you are asking whether the GPU module can run on non-NVIDIA environments, the answer is NO. Our GPU implementation is based on CUDA.jl.
If you are asking whether the other functionalities of AstroNbodySim can work on non-GPU platforms, the answer is YES. However, the GPU direct summation and particle-mesh methods will not be supported.
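To make the CUDA dependence concrete, here is a hedged, illustrative O(N²) direct-summation step written against CUDA.jl arrays (not the package’s actual kernel). The broadcasts below are compiled to CUDA kernels, which is why an NVIDIA GPU is required:

```julia
using CUDA

N  = 1024
ε² = 1f-4                      # softening, avoids the singular self-term
x  = CUDA.rand(Float32, N)     # 1D positions, for brevity
m  = CUDA.rand(Float32, N)     # masses

# Pairwise separations: dx[i, j] = x[j] - x[i]
dx = reshape(x, 1, N) .- reshape(x, N, 1)
r³ = (dx .^ 2 .+ ε²) .^ 1.5f0

# a[i] = Σⱼ m[j] (x[j] - x[i]) / r³  (with G = 1), entirely on the GPU
a = vec(sum(reshape(m, 1, N) .* dx ./ r³, dims = 2))
```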
I use an NVIDIA GT216 GLM (Quadro FX880M); it is not a GPU for playing heavy games like Assassin’s Creed. Can I run the GPU simulation on this hardware? Do I still need CUDA installed for that?
I’ll try it today and tell you the result.
P.S. I want to create a Julia package that can do physics simulations like yours. You are amazing!
Can you post a screenshot of the full output?
It does not seem like a problem stemming from AstroIC, because we set loose restrictions on its dependencies.
Possible solutions are:
P.S. Please post usage problems on GitHub issues.