[ANN] SPHExample - A teaching package for small SPH dam-break simulations

Hi everyone!

First time doing this sort of announcement, so I am a bit excited and nervous :slight_smile:

I’d like to announce a new package that I’ve been working on: an SPH (Smoothed Particle Hydrodynamics) dam-break simulation tutorial in Julia. The purpose of this code is to serve as an introduction to the SPH method, and I’ve designed it to be relatively simple and easy to understand.

Using this package as a teaching example, one can learn how to produce the simulation results shown in the repository.

The code includes a variety of features, such as weakly compressible SPH, dynamic boundary conditions, density diffusion, and a Wendland quintic kernel. It can simulate a 2D or 3D dam break, and it makes use of external packages such as CellListMap and WriteVTK to simplify the calculations and output results in the .vtp format.
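As a small taste of the contents, here is a minimal sketch of the Wendland quintic (C2) kernel, written from the standard SPH literature rather than copied from the package’s source; h is the smoothing length, r the inter-particle distance, and the support radius is 2h:

function wendland_quintic(r, h; dims = 2)
    q = r / h
    q > 2 && return 0.0                                  # compact support: W = 0 beyond 2h
    αD = dims == 2 ? 7 / (4π * h^2) : 21 / (16π * h^3)   # standard 2D / 3D normalisation
    return αD * (1 - q / 2)^4 * (2q + 1)
end

wendland_quintic(0.0, 0.02)   # kernel value at zero separation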

I’ve structured the code into “input” and “src” directories. The “input” directory includes some pre-generated particle layouts, and the “src” directory contains all the code files. The package also includes some auxiliary functions and time stepping controls.

If you find this code repository helpful, please feel free to star the project on GitHub. And if you end up using this code for your own learning or teaching, please let me know – it’s always a motivation boost to hear about how the code is being used!

You can find the code repository at the following link: GitHub - AhmedSalih3d/SPHExample: Simple SPH dam-break simulation in Julia

In a way, this is my way of giving back to the Julia community. Some of you might have seen my name pop up in a lot of different questions, and finally I was able to produce something in my own field that I felt was worth sharing with others and the Julia community at large.

As always, please let me know if you have any questions or feedback.

Thank you!


Really great job! :slight_smile: This (GitHub - InteractiveComputerGraphics/SPlisHSPlasH: SPlisHSPlasH is an open-source library for the physically-based simulation of fluids.) can give insight into what else can be done.


Thank you very much!

That is a really impressive repo :slight_smile:

Part of the motivation for producing this example is that I personally find a lot of these great and already quite big SPH libraries very complicated, due to the number of features and options they have. That is a great experience for a user, but for someone wanting to learn or develop the method it can quickly become overwhelming. So my aim with this was to lower the barrier to entry as much as possible :slight_smile:

Kind regards

Hello everyone!

I am stoked to present an updated version of the SPHExample package. It has been updated and streamlined greatly, so that now, compared to the first version, it:

  • Uses a multi-threaded approach via LoopVectorization.jl (without deviating much from single-threaded code!)
  • Uses TimerOutputs to ‘benchmark’ the code and see where time is spent.
  • Uses StructArrays to split the input into 1D vectors of components or a Vector{StaticVector} without allocation, which greatly simplifies some of the mathematics (see the sketch after this list)
  • Includes a custom .vtp file save function for those interested in learning how to do that (“src/ProduceVTP.jl”)
  • Fixes bugs in some simulation equations
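To illustrate the StructArrays point, here is a minimal sketch; it is my own illustration, not the package’s exact types. The same storage is viewed either as one contiguous vector per component or as a vector of static vectors, so per-particle vector maths allocates nothing:

using StructArrays, StaticArrays

x, y = rand(10), rand(10)
pos  = StructArray{SVector{2,Float64}}((x, y))   # wraps the component vectors, no copy

pos[1]                        # one particle position, as an SVector{2,Float64}
StructArrays.components(pos)  # the underlying (x, y) component vectors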

The simulation itself is still much the same, but it is now about 2x faster for the 2D dam break! When running the package, a progress bar now shows up, and TimerOutputs returns timing values.

For me it is a great personal pleasure to see all zeros in allocations, other than in the packages used for finding neighbors, outputting files, and resetting/resizing vectors.

All in all, it has been really exciting to update my package and share it with you once again. I hope you will give this revised version a go. I enjoy issues/stars on GitHub, so feel free to leave any of those :blush:

Before concluding, I would once again like to thank the community as a whole; this package would not have been as easy to make without all these awesome packages. I will not be able to thank everyone individually, so first, to everyone on the forum: thank you for all the nice answers and discussions on how to improve code; it has really taught me a lot. I also want to highlight three people in the community.

I would like to thank @lmiq, who about a year ago gave the package a detailed look-over and really great feedback on how to improve it.

Thank you to @PharmCat for his great interest in the project and code, and for his help in improving the package further.

And thanks to @abraemer for spotting the final pesky issues causing obscure amounts of minor memory allocations, which I wasn’t able to resolve beforehand.

Link: GitHub - AhmedSalih3d/SPHExample: Simple SPH dam-break simulation in Julia

Kind regards


This is a great project. Very interesting.
I constructed an HPC cluster for a Danish company a few years ago; they do studies of flooding like this.
Also managed a system which did SPH simulations of fuel tank sloshing in F1.

PS: The 3D case takes over 1 day to run. Are you able to parallelise and run that on an HPC cluster? Have you looked at using GPUs?


Hello John!

Thank you for the kind words.

Those sound like some amazing tasks you’ve done, and I am sure they like the setup! :blush: The topics of flooding and fuel tank simulations sound very interesting to me.

Yes, the 3D case is embarrassingly slow right now. In my view SPH seems better geared for GPU, so the intention is to move towards that in the future, but if I had access to multiple CPU processors I would not mind trying to learn how to transfer the code to such a setup. In the past I have made some GPU simulations to get familiar with that, but nothing shared publicly yet. @PharmCat is also looking into it and has made a similar code based on SPHExample, but on GPU. My hope is that SPHExample can eventually have both a “src_cpu” and a “src_gpu” directory, so it is indeed on the radar. I suspect we can get anywhere between 10x and 100x speed-up on GPU.

There are still some gains to be had on single-threaded CPU; anyone wanting to push out some more performance and get us under the 2-minute mark on CPU is certainly most welcome! :blush:

Kind regards

How many particles and how many time-steps are there?

The easiest way to take advantage of a GPU is to use multiple-time-stepping. This means updating the neighbor lists only every ~20 steps or so, but computing them with a safety margin, and running the force calculations on the GPU.
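To make that concrete, here is a self-contained toy of the multiple-time-stepping pattern; it is my own sketch with placeholder numbers, not an existing API. The pair list is rebuilt only every 20 steps with an enlarged cutoff 2h + skin, so pairs drifting into range between rebuilds are still found, and the per-pair force loop is the part one would offload to the GPU:

using StaticArrays

let h = 0.02, skin = 0.006, Δt = 1e-4
    positions  = rand(SVector{2,Float64}, 200)
    velocities = [0.01 * randn(SVector{2,Float64}) for _ in 1:200]

    build_pairs(x, cutoff) =
        [(i, j) for i in eachindex(x) for j in i+1:length(x) if sum(abs2, x[i] - x[j]) < cutoff^2]

    pairs_list = build_pairs(positions, 2h + skin)
    for step in 1:100
        step % 20 == 0 && (pairs_list = build_pairs(positions, 2h + skin))
        for (i, j) in pairs_list                            # force loop: the GPU candidate
            sum(abs2, positions[i] - positions[j]) < (2h)^2 || continue
            # ... accumulate the SPH pair force for (i, j) here ...
        end
        positions .+= velocities .* Δt                      # cheap CPU-side integration step
    end
end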

On the CPU, what can be improved is not computing the neighbor lists at all, just computing the forces directly with CellListMap.map_pairwise!. That would bring the allocations down to a minimum. I would expect a small performance gain, with the possibility of supporting much larger systems, since the neighbor lists would no longer have to be stored in memory.
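A minimal sketch of that suggestion, using CellListMap’s documented map_pairwise! API; pair_force is a hypothetical placeholder for the real SPH pair force, and a non-periodic 2D system with cutoff 2h is assumed:

using CellListMap, StaticArrays

positions = rand(SVector{2,Float64}, 1000)
h   = 0.02
box = Box(limits(positions), 2h)   # non-periodic bounding box with cutoff 2h
cl  = CellList(positions, box)

pair_force(x, y, d2) = (x - y) / (d2 + eps())   # placeholder, not real SPH physics

forces = map_pairwise!(
    (x, y, i, j, d2, forces) -> begin
        f = pair_force(x, y, d2)
        forces[i] += f          # equal and opposite contributions
        forces[j] -= f
        forces                  # the closure must return the output
    end,
    zeros(SVector{2,Float64}, length(positions)),
    box, cl,
)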

Also it may be worth looking at what Trixi.jl provides for this kind of simulation.


Hello all!

I have not merged it into the main branch yet, but I thought it was too exciting not to share.

It has really been a great tool for me personally to understand the SPH method better :blush:

What I like is how my simulation, using similar procedures, stabilizes faster and does not have the “wedge” structure inside the pressure field. My particles move around quite a bit, though, but this does not change the shape of the initial fluid, so perhaps it is not an issue.


After quite some intensive work, I am happy to share a complete rewrite of the package. Its performance has been improved drastically, while keeping the initial aim of a teaching package in place. The latest developments are:

  • Letting go of LoopVectorization.jl
    I had to let go of this otherwise awesome package due to its potential deprecation. I don’t wish to version-lock my package to Julia 1.10.
  • Letting go of CellListMap.jl
    I had to let go of this great package due to a future goal: writing a proper GPU version. For that I need to have implemented my own neighbor-searching algorithm and everything that follows, which is what I’ve now done.
  • Including more physics
    It is now possible to use a turbulence scheme, based on the large eddy simulation approach, instead of the artificial viscosity (see the note after this list).
  • Examples have been cleaned up and streamlined
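For context, the artificial viscosity being replaced is, in most weakly compressible SPH codes, the classic Monaghan form; whether SPHExample used exactly this variant is best checked in the repository:

$$\Pi_{ij} = \begin{cases} \dfrac{-\alpha\,\bar{c}_{ij}\,\mu_{ij}}{\bar{\rho}_{ij}}, & \mathbf{v}_{ij}\cdot\mathbf{r}_{ij} < 0 \\ 0, & \text{otherwise} \end{cases} \qquad \mu_{ij} = \frac{h\,\mathbf{v}_{ij}\cdot\mathbf{r}_{ij}}{r_{ij}^{2} + \eta^{2}}$$

Here $\bar{c}_{ij}$ and $\bar{\rho}_{ij}$ are the pair-averaged speed of sound and density, and $\eta \approx 0.1h$ avoids division by zero. A large-eddy-simulation scheme instead models sub-particle-scale stresses from the resolved velocity field, rather than adding purely numerical dissipation.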

I am also happy to share that I followed @lmiq’s advice: the particle interaction list is now never populated, but used on the fly, which so far seems to have been a good decision (a sketch of the idea follows below). It honestly took me about three intensive tries to get respectable performance, but I am pretty stoked about it. Still possible to improve, of course, as always.
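For those curious, here is a schematic sketch (my own illustration, not the package’s actual code) of an on-the-fly neighbor search: particles are binned into grid cells of side 2h, and each particle interacts only with particles in its own and the adjacent cells, so no pair list is ever stored:

using StaticArrays

function for_each_pair!(f, positions, h)
    cell(p) = CartesianIndex(Tuple(floor.(Int, p ./ (2h))))
    grid = Dict{CartesianIndex{2},Vector{Int}}()
    for (i, p) in pairs(positions)                  # bin particle indices into cells
        push!(get!(grid, cell(p), Int[]), i)
    end
    for (i, p) in pairs(positions), offset in CartesianIndices((-1:1, -1:1))
        for j in get(grid, cell(p) + offset, Int[])
            j > i || continue                       # visit each pair exactly once
            r2 = sum(abs2, p - positions[j])
            r2 < (2h)^2 && f(i, j, r2)              # apply the pair kernel immediately
        end
    end
end

for_each_pair!((i, j, r2) -> nothing, rand(SVector{2,Float64}, 100), 0.05)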

Any suggestions, issues, support, or testing of the package is much appreciated and really drives the quality up :blush:

Kind regards


Nice! Did you manage to implement, or find directions on how to implement, a GPU-based neighbor list?

I have found some papers outlining approaches, but I always feel that the language in them is more difficult than it needs to be, so I struggled to implement them. Instead I started from scratch, doing step by step what I understood, and now I have one that works quite well on CPU. Since I started developing it with the aim of porting it to GPU, I believe that what I have made should be fairly easy to port.

Going to be fun when I try it, hopefully really gives a nice speed up for larger simulations.


Hello all!

Happy to announce another update, version 0.5.0. In this version:

I have started to perform dedicated benchmarks against DualSPHysics (a state-of-the-art SPH solver), measuring performance on the same hardware.

And attached here is a video showing the result at resolution = 0.02 on the same hardware (an Intel i5). Res = 0.01 has more particles than Res = 0.02. The top row uses DBC (Dynamic Boundary Condition) with no density diffusion, while the bottom row shows it with DDT (density diffusion term) enabled. The left column is SPHExample; the right column is DualSPHysics.

Notice how SPHExample gives a cleaner result in the density field (and thereby the pressure too, since it is a weakly compressible implementation), that the noise around boundaries is non-existent compared to DualSPHysics, and that it is faster at these resolutions.

So it is turning out to be an awesome package for developing SPH methods; I hope I can get some of the Julia community to give it a go!


Hello everyone!

I am happy to share version 0.6 of SPHExample. I’ve modified it so that it is now much easier to interact with, and it functions as a solver. Basically, assuming you have cloned the project, you can provide the following script:

using SPHExample

let
    Dimensions = 2
    FloatType  = Float64

    
    SimConstantsWedge = SimulationConstants{FloatType}(dx=0.02,c₀=42.48576250492629, δᵩ = 0.1, CFL=0.2)

    # Define the dictionary with specific types for keys and values to avoid any type ambiguity
    SimulationGeometry = Dict{Symbol, Dict{String, Union{String, Int, ParticleType, Nothing}}}()

    # Populate the dictionary
    SimulationGeometry[:FixedBoundary] = Dict(
        "CSVFile"     => "./input/still_wedge/StillWedge_Dp$(SimConstantsWedge.dx)_Bound.csv",
        "GroupMarker" => 1,
        "Type"        => Fixed,
        "Motion"      => nothing
    )

    SimulationGeometry[:Water] = Dict(
        "CSVFile"     => "./input/still_wedge/StillWedge_Dp$(SimConstantsWedge.dx)_Fluid.csv",
        "GroupMarker" => 2,
        "Type"        => Fluid,
        "Motion"      => nothing
    )


    SimMetaDataWedge  = SimulationMetaData{Dimensions,FloatType}(
        SimulationName="StillWedge", 
        SaveLocation="E:/SecondApproach/TESTING_CPU_StillWedge",
        SimulationTime=4,
        OutputEach=0.01,
        FlagDensityDiffusion=true,
        FlagOutputKernelValues=false,
        FlagLog=true,
        FlagShifting=false,
    )

    SimLogger = SimulationLogger(SimMetaDataWedge.SaveLocation)

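    # Note: @profview comes from the VS Code Julia extension (ProfileView.jl
    # provides a similar macro); remove it to run the simulation without profiling.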
    @profview RunSimulation(
        SimGeometry        = SimulationGeometry,
        SimMetaData        = SimMetaDataWedge,
        SimConstants       = SimConstantsWedge,
        SimLogger          = SimLogger
    )
end

This runs the simulation using the engine I’ve built. Due to the beauty of Julia, one can still always overload the functions and do whatever one wants, which is amazing :slight_smile:

Two main new developments are:

  • Particle shifting is included as an option (currently only for internal flows). Particle shifting is basically a way to move material points into voids, to avoid a degenerating simulation when using SPH, especially in violent 2D flows (see the note after this list).
  • Motion of objects has been implemented. I share with you the beautiful animation below.
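For reference, a common Fickian form of particle shifting (Lind et al., 2012) moves each particle down the gradient of the particle concentration; I include it only as background, so check the repository for the exact variant SPHExample implements:

$$\delta\mathbf{r}_i = -D\,\nabla C_i, \qquad \nabla C_i = \sum_j \frac{m_j}{\rho_j}\,\nabla W_{ij}$$

Here $D$ is a small diffusion coefficient, typically scaled with the smoothing length, the local velocity, and the time step. Since $\nabla C$ points from voids towards clusters, the shift $-D\,\nabla C_i$ nudges particles out of clusters and into the voids.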

Here we see over 100k particles simulated on a CPU with 8 threads, in less than 15 minutes for 2.5 physical seconds. My unofficial comparison with DualSPHysics shows ~4x faster simulation time, which is quite impressive.

I hope someone with a stronger CPU (16 or 32 threads) would not mind giving it a go and letting me know how well it scales.

Thanks for all the help, I am really proud of where this package is at right now :blush:

Kind regards


Hello everyone!

It has been a while since I’ve given an update. Today I don’t have the most exciting news, but a lot of changes have been made to make the package even easier to work with:

  • If one has ParaView installed (minimum version 5.12), it is now possible to automatically open and view results as soon as the simulation is finished
  • After a finished simulation, it is possible to automatically open and view the full log file
  • When running longer simulations, or when unexpected crashes happen, the visualisation files (vtkhdf) are left unfinished and unusable. I have added CloseHDFVTKManually, which, given a directory, fixes these previously unusable files so they can be used again (see the sketch after this list)
  • Reset and reduction steps of the algorithm have been factored into separate functions, for clearer code readability
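A minimal usage sketch of the repair function, assuming only what is described above (that it takes the directory containing the unfinished .vtkhdf files); the path is just the example save location from the earlier script:

using SPHExample

CloseHDFVTKManually("E:/SecondApproach/TESTING_CPU_StillWedge")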

The intent of these changes is to make the package easier to use for rapid development and for working more interactively with the data.

Any feedback / code suggestions here or on GitHub are highly appreciated. Thank you!

Kind regards, Ahmed Salih
