We (@santiagobadia, @amartinhuertas, and @fverdugo) are pleased to announce GridapDistributed.jl, a distributed-memory extension of Gridap.jl. It provides a massively parallel, generic toolbox written in Julia for the large-scale numerical approximation of partial differential equations (PDEs). GridapDistributed.jl extends its sequential counterpart package Gridap.jl and shares its design principles and goals (see, e.g., this Julia Discourse post for more details).
Why GridapDistributed.jl?
It provides a very compact, user-friendly, and mathematically-supported syntax for writing finite element solvers for PDEs, with the added benefit of being able to tackle large-scale problems efficiently on state-of-the-art supercomputers.
Satellite packages
GridapDistributed.jl can be combined with its satellite packages to achieve high performance, scalability, and applicability to real-world problems:

- GridapP4est.jl, for scalable mesh generation using the p4est mesh engine.
- GridapGmsh.jl, for handling unstructured distributed meshes loaded from secondary storage (see the sketch after this list).
- GridapPETSc.jl, which provides access to the full suite of linear and nonlinear solvers in the PETSc package.
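For instance, an unstructured mesh generated with Gmsh and stored on disk can be read, distributed among the MPI tasks, and used in place of the built-in Cartesian mesh generator of the example below. The following is only a minimal sketch: the mesh file name is a placeholder, and the distributed GmshDiscreteModel constructor is assumed to accept the parts object as in the GridapGmsh.jl versions we have used.

```julia
using Gridap
using GridapGmsh
using GridapDistributed
using PartitionedArrays

function main(parts)
  # Read a Gmsh mesh from secondary storage and distribute it among the
  # MPI tasks ("mesh.msh" is a placeholder file name)
  model = GmshDiscreteModel(parts,"mesh.msh")
  # ... FE spaces, weak form, and solver as in the Poisson example below ...
end

prun(main, mpi, (2,2))
```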
Example code
The following snippet illustrates how a Poisson problem can be solved with GridapDistributed.jl in very few lines of code.
using Gridap
using GridapDistributed
using PartitionedArrays
using GridapPETSc

# Function to be executed on each subdomain/MPI task
function main(parts)
  # Conjugate Gradients iterative solver preconditioned
  # with algebraic multigrid (as provided by PETSc)
  options = "-ksp_type cg -pc_type gamg -ksp_monitor"
  GridapPETSc.with(args=split(options)) do
    # Unit square domain
    domain = (0,1,0,1)
    # Split the box into a 4x4 Cartesian-like quadrilateral mesh
    mesh_partition = (4,4)
    model = CartesianDiscreteModel(parts,domain,mesh_partition)
    # Manufactured solution u and corresponding source term f
    order = 2
    u((x,y)) = (x+y)^order
    f(x) = -Δ(u,x)
    # Test and trial FE spaces with Dirichlet BCs on the boundary
    reffe = ReferenceFE(lagrangian,Float64,order)
    V = TestFESpace(model,reffe,dirichlet_tags="boundary")
    U = TrialFESpace(u,V)
    # Integration mesh and quadrature measure
    Ω = Triangulation(model)
    dΩ = Measure(Ω,2*order)
    # Bilinear and linear forms of the Poisson weak formulation
    a(u,v) = ∫( ∇(v)⋅∇(u) )dΩ
    l(v) = ∫( v*f )dΩ
    # Assemble the distributed linear system and solve it with PETSc
    op = AffineFEOperator(a,l,U,V)
    solver = PETScLinearSolver()
    uh = solve(solver,op)
    # Write the solution and its gradient to VTK files for visualization
    writevtk(Ω,"results",cellfields=["uh"=>uh,"grad_uh"=>∇(uh)])
  end
end

# Lay out subdomains/MPI tasks into a 2x2 Cartesian subdomain/MPI task mesh
partition = (2,2)
# Trigger the main function on each subdomain/MPI task
prun(main, mpi, partition)
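Since the driver relies on the MPI backend of PartitionedArrays.jl, the script above is meant to be launched through an MPI launcher, e.g., something along the lines of mpiexec -n 4 julia demo.jl, where demo.jl is whatever file holds the code (the file name is just illustrative). For interactive development and debugging it is often handy to run the very same driver in a single Julia process. A minimal sketch of this, assuming the sequential backend exported by the PartitionedArrays.jl version we have used, is:

```julia
# Debug-friendly run: same driver, single Julia process, no MPI launcher.
# `sequential` is the serial/debug backend of PartitionedArrays.jl (name
# assumed from the versions we have used; it may differ in other releases).
prun(main, sequential, partition)
```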
The code example leverages GridapPETSc.jl to solve the distributed linear system resulting from the discretization. It uses the Cartesian mesh generator built into GridapDistributed.jl to mesh the box. However, one may very easily modify it to use GridapGmsh.jl or GridapP4est.jl for meshing more complex domains, as sketched below. See this tutorial for more details.
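As an illustration of the GridapP4est.jl route, a distributed mesh can be generated by uniformly refining a coarse model with the p4est engine. The snippet below is only a sketch: the constructor name and argument order are the ones we are aware of in GridapP4est.jl and may differ across versions.

```julia
using Gridap
using GridapP4est
using GridapDistributed
using PartitionedArrays

function main(parts)
  # Coarse model of the unit square to be uniformly refined by p4est
  coarse_model = CartesianDiscreteModel((0,1,0,1),(1,1))
  num_uniform_refinements = 4
  # Distributed forest-of-octrees mesh (constructor name assumed; check
  # the GridapP4est.jl documentation for the version you have installed)
  model = UniformlyRefinedForestOfOctreesDiscreteModel(
    parts,coarse_model,num_uniform_refinements)
  # ... FE spaces, weak form, and solver as in the Poisson example above ...
end

prun(main, mpi, (2,2))
```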
Performance and scalability
The figures below report the remarkable strong (left) and weak (right) scalability of the example program above when used to solve a 3D Poisson problem on a real-world supercomputer (Gadi at NCI, Australia).
Strong scalability | Weak scalability
How to start?
If you are further interested in the project, visit the Gridap.jl and GridapDistributed.jl repositories.
If you want to start learning how to solve PDEs with the Gridap package ecosystem, then visit our Tutorials repository.