PIC (particle-in-cell), space charge tracking simulation?

There’s a lot of demand in plasma/accelerator science for simulating space charge with the so-called particle-in-cell (PIC) model (which is somewhat analogous to finite element analysis in structural dynamics, etc.).

Many of these codes, for example fbpic, use Python for its obvious appeal to scientist users, and for performance reasons rely on Numba. There is also a lot of legacy code written in Fortran (e.g. http://www.desy.de/~mpyflo/).

Is there any related project going on in the Julia community? If people were to write something in Julia, would there be a real advantage over NumPy?

I am not an expert on Python/NumPy/Numba… (I have developed parallel and HPC physics solvers in C++ for many years) so my answer is only a strong opinion.

  1. Pros of Julia for HPC simulations
  • Solves the two-language problem (the productivity of a dynamic language with the performance of C++ or Fortran code).
  • A very expressive, complete, and rich (too rich?) language that lets you express an amazing variety of algorithms accurately and efficiently.
  2. Cons
  • People from classical HPC (e.g. CFD) have not yet massively made the switch to Julia, so they are (IMHO) a bit late to react compared to other scientific domains (e.g. mathematical optimization, data science, …). As a consequence, the ecosystem may appear a bit sparse and you may have to develop more things yourself. For example, I am presently coding parallel geometric multigrid solvers (and it is a real pleasure to do it in Julia).
  • The multi-threading API is not yet complete.

With this in mind, I estimate that the benefits of starting new development in Julia (generic, modular, composable algorithms, functional programming, metaprogramming, performance tools…) largely exceed the drawbacks of the ecosystem’s relative youth: the productivity of Julia lets you focus on algorithms and deliver more performant and maintainable simulation tools.

P.S. It takes some time to learn Julia but I think it is a good investment :wink:


@LaurentPlagne That’s a very good reply. It would be really interesting to hear more about how the features of the Julia language help in your new work.

@jling I cannot help you directly. I have a background in physics, but not in accelerator physics. I would say though - if you are starting a new project from scratch use Julia. Don’t code any fundamental algorithms by yourself - if you ask on this forum you will probably discover that there is a Julia package which has that function.
However, if you need to use the Astra package from DESY, you can call the Fortran functions in that package quite easily. So you use Julia as a ‘wrapper’ around existing functions.
From your post I do not think you are using the Astra package, rather using it as an example.

I am using ASTRA at the moment. The problem with ASTRA, though, is that it’s designed to run like:

$ ASTRA xxx.in

and spits out some files recording the results of the simulation. There isn’t really any interface that can be used. And I think this is a common pattern: performant software is just meant to be run as a binary executable when it’s impossible to abstract out a common core (as OpenBLAS does, for example).
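For what it’s worth, driving such a file-based binary from a dynamic language usually ends up as a thin subprocess wrapper. A minimal Python sketch (the `ASTRA` executable name is taken from the command above; the helper names and the trick of diffing the directory to find output files are my own assumptions):

```python
import subprocess
from pathlib import Path

def astra_command(input_deck, astra_bin="ASTRA"):
    # Build the command line exactly as one would type it: `ASTRA xxx.in`.
    return [astra_bin, Path(input_deck).name]

def run_astra(input_deck, astra_bin="ASTRA"):
    """Run the solver on an input deck and return the files it produced."""
    deck = Path(input_deck)
    before = set(deck.parent.iterdir())              # snapshot before the run
    subprocess.run(astra_command(deck, astra_bin), cwd=deck.parent, check=True)
    return sorted(set(deck.parent.iterdir()) - before)  # new result files
```

This is the “wrapper” pattern mentioned earlier in the thread: the real work still happens in the binary, and the dynamic language only manages input decks and harvests the output files.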

First, I apologize for answering with my general feelings about Julia’s suitability for HPC physics simulation: probably not what the OP expected.

I have already mentioned (in the Pros paragraph) the main features that make Julia so convenient for my work: functions that are generic by default (duck typing), abstract types for implementation switches, a nice and light syntax for parametric types, first-class functions, introspection, macros… Each time I have to switch back to C++, I feel like putting on a heavy suit of dragon-scale armor.

I should also mention tools like @code_native and @btime that accelerate the tuning of computationally intensive functions. Obviously there are very good tools for profiling and assembly investigation in C++ (VTune, Valgrind…), but they are external/commercial tools. The Julia ecosystem is also becoming very competitive; for example, Makie allows for rather sophisticated and performant visualizations :wink:

A package system like those of modern languages (Julia, Rust, or Python) simply does not exist for C++ or Fortran.

Of course, there are still some missing features / performance issues, like:

  • allocating array views
  • OpenCL (Vulkan?) backends as well supported as Julia’s CUDA handling
  • a polished multi-threading API

but Julia’s evolution is so fast that I am pretty confident about the future.


I had the same question too…

You made great points, and to me they all sound relevant to PIC, or really to any simulation; but all these algorithms are quite complicated, so I don’t know how to judge the technical details or how doing it in Julia would feel.

My posting this was more like: googling ‘julia particle in cell’ turned up nothing, so let me put something up that people can discuss, or even better, that a team already writing a PIC program can comment on.


Hey everyone, since the publication of Anthony Peratt’s 2015 book I have been very curious about all this. There is a guy named Donald Scott who made a presentation about Anthony Peratt’s particle-in-cell simulations of spiral galaxies using plasma discharge. I have taken some excerpts and made them into a song:

if someone wants to do some spiral galactic PIC of a plasma discharge in Julia, do it while listening to this!


I’d be interested in participating in this project. IMO Julia will excel at speeding up the development of the peripheral components of a PIC code, since the core of these codes is a small fraction of the total LOC.

There are umpteen different PIC codes written in a variety of languages. What should make the Julia version stand apart is its interoperability with FE grids rather than just Cartesian boxes. Otherwise, why recreate so much effort?

It should also allow the user to supply their own Ohm’s law for hybrid equations, and to use the Darwin approach. Implicit timestepping would also set it apart from other free implementations.
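For context on the hybrid case: “hybrid” here usually means kinetic ions with a massless fluid electron closure, in which case the electric field comes from a generalized Ohm’s law rather than from a full field solve. The standard form (with the resistive term optional) is

$$
\mathbf{E} \;=\; -\,\mathbf{u}_i \times \mathbf{B} \;+\; \frac{\mathbf{J}\times\mathbf{B}}{e\,n_e} \;-\; \frac{\nabla p_e}{e\,n_e} \;+\; \eta\,\mathbf{J},
$$

so “apply their own Ohm’s law” amounts to letting the user redefine this right-hand side, e.g. with their preferred electron pressure closure.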


Check this out https://www.sciencedirect.com/science/article/pii/S0965997816302769


It’s so interesting that the same technique has so many different names! The particle shape function in plasmas needs to be of higher order than the one described in that paper, but that’s a simple extension.

It’s all quite a lot of work though, and I don’t have the bandwidth. Any other takers?

The problem is I don’t have experience writing performance-critical code like this. It would be nice if there were an article discussing a toy model / code (even in Python or whatever).
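In that spirit, here is a minimal sketch of the standard explicit electrostatic PIC loop in 1D: periodic box, linear (CIC) weighting, FFT Poisson solve, leapfrog push. Everything is in normalized units (ε₀ = 1), and all names are illustrative rather than taken from any existing code:

```python
import numpy as np

def deposit(x, q, ng, dx):
    """Scatter particle charge onto the grid with linear (CIC) weights."""
    rho = np.zeros(ng)
    j = (x // dx).astype(int) % ng          # left grid node of each particle
    w = x / dx - np.floor(x / dx)           # fractional distance to that node
    np.add.at(rho, j, q * (1.0 - w))
    np.add.at(rho, (j + 1) % ng, q * w)
    return rho / dx                         # charge density

def solve_field(rho, dx):
    """Solve -phi'' = rho with FFTs, then E = -dphi/dx (centred difference)."""
    ng = rho.size
    k = 2.0 * np.pi * np.fft.fftfreq(ng, d=dx)
    rho_hat = np.fft.fft(rho - rho.mean())  # uniform neutralizing background
    phi_hat = np.zeros(ng, dtype=complex)
    phi_hat[1:] = rho_hat[1:] / k[1:] ** 2
    phi = np.fft.ifft(phi_hat).real
    return -(np.roll(phi, -1) - np.roll(phi, 1)) / (2.0 * dx)

def gather(E, x, ng, dx):
    """Interpolate the grid field back to the particles with the same weights."""
    j = (x // dx).astype(int) % ng
    w = x / dx - np.floor(x / dx)
    return (1.0 - w) * E[j] + w * E[(j + 1) % ng]

def step(x, v, q, m, ng, dx, L, dt):
    """One time step: deposit -> field solve -> gather -> leapfrog push."""
    E = solve_field(deposit(x, q, ng, dx), dx)
    v = v + (q / m) * gather(E, x, ng, dx) * dt
    x = (x + v * dt) % L
    return x, v
```

Every scheme discussed in the papers linked in this thread refines one of these four stages (higher-order shape functions in `deposit`/`gather`, different field solvers, implicit pushes), so this skeleton is a reasonable starting point for experiments.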

This particle / grid approach is exactly what PIC is about.

I know, that’s why I posted the link. :slight_smile:

This paper has some nice pictures https://arxiv.org/pdf/1310.7866.pdf, and it’s got Hartmut Ruhl and Amitava Bhattacharjee on the author list. They know their stuff. It covers the basics of particle shapes and mapping quantities from the grid to the particles and vice versa.
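On the shape-function point raised above: the common step up from linear (CIC) weighting in plasma PIC is the quadratic spline (often called TSC), which spreads each particle over three grid nodes. A small illustrative sketch (the names are my own):

```python
import numpy as np

def tsc_weights(xp, dx):
    """Quadratic-spline (TSC) weights: each particle touches three nodes.
    Returns the node indices (to be taken modulo ng on a periodic grid)
    and the corresponding weights."""
    jc = int(np.floor(xp / dx + 0.5))       # nearest grid node
    d = xp / dx - jc                        # signed offset, in [-0.5, 0.5)
    w = np.array([0.5 * (0.5 - d) ** 2,     # node jc - 1
                  0.75 - d ** 2,            # node jc
                  0.5 * (0.5 + d) ** 2])    # node jc + 1
    return jc - 1 + np.arange(3), w
```

By construction the three weights sum to exactly 1 for any particle position, which is what guarantees that deposition conserves total charge.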

All the gory details are in the Particle Simulation Code manual - I can’t seem to google a copy of it.

On the Maxwell solver side there is https://github.com/wsshin/MaxwellFDM.jl, which appears to be under development. I know that the Julia FEM package is crying out for developers, and I really feel as though that should be the ultimate goal.


I’m not sure what you mean here. FDTD is not really finite elements, and PIC isn’t finite elements either.

I should be clearer. In PIC, moments of the distribution function gathered from the particles are projected onto a grid; any grid will do. FDTD is the easiest to implement, but there shouldn’t be any restriction against generalising to FEM, except some extra coding trickiness for dealing with shape functions and element shapes.

I believe that FDTD relies on the “nested” character of the Cartesian grid.
It gives it the conservation properties that make it a success.

I think you’re right, PIC should be able to work with any grid at all. I’m not sure, though, that it will necessarily lead to a more efficient code. Part of the appeal of PIC is that it can deal with “fronts” moving through space. In that case static graded grids don’t necessarily help very much. And if the graded grid needs to be redone many times over the simulation, one could just as well work with a Lagrangian method without the complication of advection.

Could you explain what you mean by nested and graded grids? Do you mean adaptive mesh refinement?

Regarding conservation: explicit Cartesian FDTD plasma PIC codes can be formulated to exactly conserve either momentum or energy, but not both. I’m most familiar with the momentum-conserving kind. With FEM formulations there are more ways of getting it wrong, but it would still work.
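To make the momentum-conserving property concrete: if the *same* shape function is used to scatter charge and to gather the field, and E comes from a centred difference of the potential, the kernel relating a particle’s charge to the force on itself is antisymmetric, so the self-force vanishes identically. A small self-contained Python check in 1D (normalized units, illustrative names):

```python
import numpy as np

ng, dx = 64, 1.0
xp, qp = 10.37 * dx, 1.0                # arbitrary position and charge

# Scatter the charge with linear (CIC) weights.
j = int(xp // dx)
w = xp / dx - j
rho = np.zeros(ng)
rho[j] += qp * (1.0 - w) / dx
rho[(j + 1) % ng] += qp * w / dx

# FFT Poisson solve (-phi'' = rho, neutralizing background), centred-diff E.
k = 2.0 * np.pi * np.fft.fftfreq(ng, d=dx)
rho_hat = np.fft.fft(rho - rho.mean())
phi_hat = np.zeros(ng, dtype=complex)
phi_hat[1:] = rho_hat[1:] / k[1:] ** 2
phi = np.fft.ifft(phi_hat).real
E = -(np.roll(phi, -1) - np.roll(phi, 1)) / (2.0 * dx)

# Gather the field with the *same* weights: the self-force cancels.
self_force = qp * ((1.0 - w) * E[j] + w * E[(j + 1) % ng])
print(abs(self_force))                  # round-off level (analytically zero)
```

Using mismatched scatter/gather weights, or a one-sided difference for E, breaks this cancellation and injects spurious momentum; that is exactly the trade-off the energy-conserving formulations make in the other direction.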

A PIC code that works with arbitrary grids would open up opportunities for gyrokinetic particles, for which Cartesian geometries fall short.

Are you also referring to the level-set method for front tracking? I’ve never worked with that, so I would be interested to hear your thoughts.

I am possibly making an unjustified assumption that this is a topic of general interest, as it is only tangentially related to Julia. Perhaps this should be discussed through private messages?

The FDTD method relies on staggered (or nested) grids for the dual quantities. That works well when the grid is Cartesian.

I have to state for the record that my familiarity with PIC codes is with the solid-mechanics equations of fast dynamic deformation. “Fronts” then refers to shock waves or shear bands. I’m not really familiar with plasma physics or with the application of these codes to plasma physics. Sorry.