[ANN] Trixi.jl v0.4: New features and performance improvements for hyperbolic PDEs

We are pleased to announce the release of v0.4 of Trixi.jl, our Julia package providing adaptive high-order numerical simulations of hyperbolic PDEs. Compared to the previous release, v0.3, we have introduced many new features. In particular, we added new mesh types enabling simulations on unstructured, curvilinear, non-conforming, and adaptive meshes in multiple space dimensions. These new mesh types support all of our solver features, including shock capturing techniques for high-order methods. Additionally, we have introduced new physical models, including the multicomponent compressible Euler and magnetohydrodynamics equations. Moreover, Trixi.jl composes well with forward-mode automatic differentiation and similar approaches. Most of these features were already shown in our Trixi.jl talk at JuliaCon. Furthermore, we now have interactive visualizations with Makie.jl - kudos to @sdanisch, @jules, and all other developers of Makie!

As a teaser, here is the simulation of the shallow water equations with a bottom topography on a circular domain with slip-wall boundaries using a discontinuous Galerkin method on a high-order curved unstructured quadrilateral mesh.

The next video shows an MHD rotor, a rapidly spinning dense cylinder embedded in a magnetized, homogeneous medium at rest. This simulation uses a discontinuous Galerkin method on a high-order, curvilinear mesh with adaptive refinement and shock capturing.

Many of these new features come with additional contributions to the Julia ecosystem. For example, we created P4est.jl as a thin wrapper around the C library p4est for adaptive meshes, and HOHQMesh.jl as a Julia wrapper of a high-order mesh generation library. You can find more information about these and further activities related to Trixi.jl on our summary website.

On top of these new features, we worked on the internals of Trixi.jl. In particular, we improved performance considerably and were able to outperform another open-source code implementing the same numerical algorithms in Fortran by up to 2x. The details are available in our preprint.

Since this is a breaking release, some user-facing changes are of course necessary. However, they should not affect most codes and are summarized in our NEWS.md.

Kudos to all contributors for exciting developments - and thanks to all users encouraging us!

Here is our current list of main features (changes vs. v0.3.0 highlighted):

  • 1D, 2D, and 3D simulations on line/quad/hex/simplex meshes
    • Cartesian and curvilinear meshes
    • Conforming and non-conforming meshes
    • Structured and unstructured meshes
    • Hierarchical quadtree/octree grid with adaptive mesh refinement
    • Forests of quadtrees/octrees with p4est via P4est.jl
  • High-order accuracy in space and time
  • Discontinuous Galerkin methods
    • Kinetic energy-preserving and entropy-stable methods based on flux differencing
    • Entropy-stable shock capturing
    • Positivity-preserving limiting
  • Compatible with the SciML ecosystem for ordinary differential equations
  • Native support for differentiable programming
  • Periodic and weakly-enforced boundary conditions
  • Multiple governing equations:
    • Compressible Euler equations
    • Magnetohydrodynamics (MHD) equations
    • Multi-component compressible Euler and MHD equations
    • Acoustic perturbation equations
    • Hyperbolic diffusion equations for elliptic problems
    • Lattice-Boltzmann equations (D2Q9 and D3Q27 schemes)
    • Shallow water equations
    • Several scalar conservation laws (e.g., linear advection, Burgers’ equation)
  • Multi-physics simulations
  • Shared-memory parallelization via multithreading
  • Visualization and postprocessing of the results
    • In-situ and a posteriori visualization with Plots.jl
    • Interactive visualization with Makie.jl
    • Postprocessing with ParaView/VisIt via Trixi2Vtk
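As a minimal, runnable taste of the features above, here is essentially the first linear advection example from the Trixi.jl documentation (parameter values are the documented defaults, not tuned for any particular problem):

```julia
using Trixi, OrdinaryDiffEq

# Linear scalar advection with velocity (0.2, -0.7) on the square [-1, 1]^2
advection_velocity = (0.2, -0.7)
equations = LinearScalarAdvectionEquation2D(advection_velocity)

# DGSEM solver with polynomial degree 3 and Lax-Friedrichs surface flux
solver = DGSEM(polydeg=3, surface_flux=flux_lax_friedrichs)

# Hierarchical Cartesian quadtree mesh with periodic boundaries
mesh = TreeMesh((-1.0, -1.0), (1.0, 1.0),
                initial_refinement_level=4, n_cells_max=30_000)

# Spatial semidiscretization; time integration is handed off to the SciML ecosystem
semi = SemidiscretizationHyperbolic(mesh, equations,
                                    initial_condition_convergence_test, solver)
ode = semidiscretize(semi, (0.0, 1.0))
sol = solve(ode, CarpenterKennedy2N54(williamson_condition=false),
            dt=0.05, save_everystep=false)
```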

Xref: Previous announcements of v0.3 and v0.2


Trixi.jl is amazing, @ranocha! Out of interest, are there plans to implement distributed memory parallelism?


Thanks a lot! Yes, we plan to implement distributed memory parallelism via MPI. We already have some working prototypes on one mesh type with adaptive refinement in 2D. We just need more developer time to unify internal interfaces and resolve technical issues such as supporting MPI with binary dependencies created by BinaryBuilder. These are non-trivial tasks, so it will take us some time. Nevertheless, I am sure that we will have MPI support in one of the next major releases of Trixi.jl.


Do the adjoints not work? Since you’re using the DiffEq time stepping, I would think the adjoints would work? If you’re mutating internally, the EnzymeVJP might be the trick here.

It might be interesting to try reviving MPIArrays.jl @barche ?


We are using mutation quite a lot internally (for efficiency). Trying Enzyme.jl is on our (ever-growing) TODO list, see AD: Extend differentiable programming · Issue #462 · trixi-framework/Trixi.jl · GitHub.
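For context, the suggested EnzymeVJP route would look roughly like this on a small stand-in problem (a generic mutating ODE, not Trixi's actual right-hand side; a sketch only, using the DiffEqSensitivity adjoint interface):

```julia
using OrdinaryDiffEq, DiffEqSensitivity, Zygote

# Mutating (in-place) RHS, analogous in style to an in-place rhs! kernel
function rhs!(du, u, p, t)
    du .= -p[1] .* u
end
u0 = [1.0, 2.0]
prob = ODEProblem(rhs!, u0, (0.0, 1.0), [0.5])

# EnzymeVJP computes vector-Jacobian products through the mutating rhs!
# during the adjoint (reverse) pass of the continuous sensitivity analysis
function loss(p)
    sol = solve(prob, Tsit5(), p=p, saveat=0.1,
                sensealg=InterpolatingAdjoint(autojacvec=EnzymeVJP()))
    sum(abs2, Array(sol))
end
grad, = Zygote.gradient(loss, [0.5])
```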

Yeah, that’s definitely something we are interested in. Right now, we don’t know whether it’s (easily) possible to use some general MPI array type that satisfies the requirements of different packages. As far as I know, there are different MPI array types out there, e.g. from CliMA, PencilArrays.jl, MPIArrays.jl, and probably many more.
For us, the requirements that come to my mind right now are

  • broadcasting, mapreduce, etc. working with explicit time integrators and error-based step size control from OrdinaryDiffEq.jl
  • accessing parent to implement parallel capabilities on top of serial code
  • the ability to control which data is available on which rank, plus load balancing
  • no communication unless we explicitly ask for it, allowing the usual MPI-style communication patterns (for efficiency)
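To make the parent requirement above concrete, here is a minimal, purely illustrative sketch (the type and field names are invented for this example; a real MPI array type would additionally carry a communicator and halo-exchange logic):

```julia
# Illustrative wrapper: rank-local data plus ownership metadata.
struct RankLocalVector{T} <: AbstractVector{T}
    data::Vector{T}   # rank-local portion of the global vector
    rank::Int         # which MPI rank owns this portion
end

# `parent` exposes the plain local array so existing serial kernels can run on it
Base.parent(v::RankLocalVector) = v.data
Base.size(v::RankLocalVector) = size(v.data)
Base.getindex(v::RankLocalVector, i::Int) = v.data[i]
Base.setindex!(v::RankLocalVector, x, i::Int) = (v.data[i] = x)

v = RankLocalVector([1.0, 2.0, 3.0], 0)
sum(v)              # mapreduce works via the AbstractVector interface
parent(v) .*= 2.0   # serial code operates directly on the local data
```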

Possibly, though I admit I have neglected this badly. It might be worth checking whether something better exists in the meantime.

I’ve been looking around the discussion boards for something similar for parabolic PDEs. Perhaps I’m missing something obvious, but is there any support for a similar adaptive resizing of the grid/mesh for a parabolic PDE, e.g., the advection-dispersion-reaction (ADR) equation? Or is there another type of preferred method for such problems? I am interested in applying adaptive resizing (or similar) techniques to an ADR model in which, for example, a pollutant plume travels “downstream” along the grid, so the required resolution of each cell may change with time as the plume moves, spreads out, and decays. It seems to me that something like Trixi’s methods, using callbacks on the ODE solver applied to the semi-discretization, could considerably accelerate the solution by adapting the cell sizes based on where the plume currently resides. Thanks in advance for any thoughts offered!

I agree with you that this sort of adaptivity will also be useful for parabolic problems, advection-diffusion equations, etc. However, these kinds of problems are not a research focus of our Trixi.jl developers, so we don’t have ready-to-use discretizations for them. Advection-dominated problems with diffusion (such as the compressible Navier-Stokes equations for computational fluid dynamics) are on our radar but not at the top of our priority list.
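For reference, this is roughly how adaptivity is hooked into the ODE solve for a hyperbolic problem in Trixi.jl today, following the AMR examples in the Trixi.jl documentation (the parameter values here are illustrative). A parabolic discretization would need its own right-hand side, but the callback mechanism would be the same:

```julia
using Trixi, OrdinaryDiffEq

# Advect a Gaussian pulse across the domain, analogous to a traveling plume
equations = LinearScalarAdvectionEquation2D((1.0, 1.0))
solver = DGSEM(polydeg=3, surface_flux=flux_lax_friedrichs)
mesh = TreeMesh((-5.0, -5.0), (5.0, 5.0),
                initial_refinement_level=4, n_cells_max=30_000)
semi = SemidiscretizationHyperbolic(mesh, equations,
                                    initial_condition_gauss, solver)
ode = semidiscretize(semi, (0.0, 1.0))

# Refine cells where the solution is large, coarsen where it is small
amr_controller = ControllerThreeLevel(semi, IndicatorMax(semi, variable=first),
                                      base_level=4,
                                      med_level=5, med_threshold=0.1,
                                      max_level=6, max_threshold=0.6)
amr_callback = AMRCallback(semi, amr_controller, interval=5)

# The callback re-meshes every 5 time steps during the ODE solve
sol = solve(ode, CarpenterKennedy2N54(williamson_condition=false),
            dt=0.05, save_everystep=false, callback=amr_callback)
```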

Thanks for that feedback. If it’s simply a matter of interest and labor, as opposed to a better available solution, I’d be interested in contributing in this area. It would certainly benefit my work. If @ChrisRackauckas isn’t already planning/working on something similar in SciML (?), I’d think Trixi.jl would be the right place to include such features.


Yes, it’s definitely mostly a matter of labor and time - and of an appropriate design if it is to go into Trixi.jl. We have a prototype in a PR, but we were not really satisfied with it back then…


To keep the scope from becoming infinitely large, I am trying to say that non-finite-difference discretizations are not in the scope of SciML. Trixi, Gridap, etc. are great tools, and so they should be used in conjunction with the SciML bits. That said, ModelingToolkit is providing a generalized symbolic PDE interface on which discretizers are then being built, and a few years from now we’ll probably look at that and start expanding into the areas where discretizers are missing.


Thanks, that’s a good rule of thumb to go by, and I appreciate the outlook on where you see things developing.

Thanks again - I’ll look through the PRs when I get the time myself and see if I can be of any help.
