Implementation of Periodic Boundary Conditions

Hi, I am writing my finite element solver for CFD simulations. I have run into what should be an easy problem, but I can't figure out how to do it.
In my code I want to account for periodic boundary conditions, and for that I have to manipulate my matrices and vectors. I use the technique shown below (only the matrix case is shown).
```julia
M[i, :] .+= M[j, :]   # add row j to row i
M[j, :] .= 0.0        # clear row j
M[j, i] = 1.0         # replace it with the constraint u_i - u_j = 0
M[j, j] = -1.0
```

where i and j are the nodes that are coupled. This code works and I get the desired results, but the issue is that there are a lot of matrices like M, and these matrices will be very large (on the order of millions of rows). Also, there are a lot of coupled nodes, so for each matrix the same process has to be done for many different combinations of i and j. This makes my code extremely slow.

Now, I have tried parallel computing (just multithreading; I am still very new to the parallel computing world). But if I simply add `@threads`, I think I run into data races. I thought of making a separate copy of M for each thread and combining the results later, but combining the results did not feel very efficient.

Is there some simple way to do this? Or perhaps a package that has already implemented something like this? I just want to make this process very fast.

Right now, when solving with around 40,000 degrees of freedom, an iteration used to take about 2 seconds, but now the periodic boundary condition part alone takes about 15 seconds.

Thank you for your help. I hope this was clear. This is my first time in the Julia community.

Not sure.

Would it help to start by considering a one-dimensional problem with a coarse mesh?

What kind of problem are you trying to solve?

I am happy to take a look.

See this post for a discussion of “ghost cells” commonly used to implement periodic boundary conditions.
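For a quick illustration of the idea (a minimal 1D sketch of my own, not taken from the linked post): pad the array with one wrapped-around "ghost" cell on each side, and the interior stencil then needs no special cases at the boundary.

```julia
# Ghost-cell sketch: second-difference stencil on a periodic 1D grid.
function periodic_laplacian(u::Vector{Float64}, h::Float64)
    n = length(u)
    ug = [u[end]; u; u[1]]   # one ghost cell on each side, wrapped around
    lap = similar(u)
    for i in 1:n
        # ug[i+1] corresponds to u[i]; both neighbors exist even at the ends.
        lap[i] = (ug[i] - 2ug[i+1] + ug[i+2]) / h^2
    end
    return lap
end
```

The copy into the padded array is O(n), and the stencil loop itself is branch-free, which also makes it easy to parallelize.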


Ferrite.jl supports periodic boundary conditions (see for example this tutorial on computational homogenization).

The implementation is very efficient, and all required modifications to the system matrix, for all periodic constraints, are performed in a single sweep of the sparse data structure. The "kernel" of the implementation is available here: Ferrite.jl/src/Dofs/ConstraintHandler.jl at 3b081bea81cb718cf33b004ea16f90fdbb8cc5f9 · Ferrite-FEM/Ferrite.jl · GitHub

With Ferrite, the application of boundary conditions is never a bottleneck in my experience, since for all practical problems the linear solve will dominate.


Gridap.jl supports periodic boundary conditions as well. See below — in FEM, it should be possible to do this implicitly just by specifying the correct adjacency relationships in your mesh data structure.

It surprises me that you have to make a special sweep. In Gridap, you just have to generate a mesh that lives on a torus (i.e. one side is connected to the other side — Gmsh can do this, for example) and IIRC the ordinary matrix assembly works as-is. Correction: some additional node-mapping machinery was required, see link below.

Hi,
Do you mean that Gridap does not do any additional modification to the matrices if we define the mesh correctly?

Hi,
We are trying to build our own framework for fluid-structure interaction, and hence I am starting with CFD.
If I solve on a very coarse mesh, it does work as desired, so the logic of the code looks fine. My issue is that the application of the boundary conditions is becoming a bottleneck.
I am looking for ways to make this operation faster (or perhaps a method through which I don't have to manipulate the matrices at all).

Can you share a MWE (minimal working example) for us to have a look at?

I second this. It is all a matter of connectivity. The local nodes on the periodic BC should refer to the same global node.

Sure, that’s a possibility, but it couples node numbers directly to dof numbers. In Ferrite these two enumerations are completely independent (although the node connectivity is of course used for bookkeeping while distributing dofs). I suspect that a mesh with such “duplicated” node numbers, with isoparametric elements, would just work in Ferrite too, though.

What happens in Gridap/Gmsh if you complicate things slightly and use a linear mesh with quadratic function approximation, for example? At least in Ferrite this is handled nicely, since periodicity is based on dofs and not nodes.

The periodic constraints (as well as Dirichlet constraints) in Ferrite are actually built on top of general affine constraint between dofs. This means that we are not limited to mirroring x-dofs to x-dofs and y-dofs to y-dofs. In this example the periodicity is rotated so we map u_x to -u_y and u_y to u_x, for example. This is another example where relying on duplicated node numbers wouldn’t be sufficient.
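To make the dof-level idea concrete, here is a minimal dense sketch of eliminating a single affine constraint u_c = a*u_m + b between a constrained dof c and a master dof m (plain Julia for illustration only; this is not Ferrite's actual API, which handles the general sparse case):

```julia
# Condense the affine constraint u_c = a*u_m + b out of the system A*u = f.
function condense!(A::Matrix{Float64}, f::Vector{Float64},
                   c::Int, m::Int, a::Float64, b::Float64)
    # Substitute u_c = a*u_m + b into every equation (column update):
    f .-= b .* A[:, c]
    A[:, m] .+= a .* A[:, c]
    A[:, c] .= 0.0
    # Add a*(equation c) to equation m, mirroring the column update:
    f[m] += a * f[c]
    A[m, :] .+= a .* A[c, :]
    A[c, :] .= 0.0
    # Replace equation c with the constraint itself: u_c - a*u_m = b.
    A[c, c] = 1.0
    A[c, m] = -a
    f[c] = b
    return A, f
end
```

With a = 1 and b = 0 this reduces to the plain periodic coupling from the original question; a rotated mapping like u_x -> -u_y is just a different coefficient a.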

High-order element basis functions just work. (Last I checked, curved periodic elements, e.g. curved triangles, were not supported, however.)

However, I oversimplified how easy it was; some special support had to be implemented for the node mapping that appears in periodic meshes: Add periodic bcs by fverdugo · Pull Request #35 · gridap/GridapGmsh.jl · GitHub

@fverdugo can better comment on the details, however.

A possible way to implement periodic BCs in FEM (e.g. in Gridap) is by modifying the generation of the cell-to-dof-ids map (not the cell-to-node-ids map!). In this way, the matrix automatically contains the periodic couplings without an additional sweep. This also works for high-order FEM, H(div), H(curl), etc.
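A minimal sketch of that idea (illustrative only, not Gridap's internals; the names are made up): redirect each periodic "slave" dof to its master when building the cell dof lists, and ordinary assembly then produces the coupled matrix directly.

```julia
# Rewrite the cell-to-dof map so that periodic slave dofs point to their
# master dofs. `slave_to_master` maps each slave dof id to its master.
function periodic_cell_dofs(cell_dofs::Vector{Vector{Int}},
                            slave_to_master::Dict{Int,Int})
    map(cell_dofs) do dofs
        [get(slave_to_master, d, d) for d in dofs]
    end
end

# Example: a 1D mesh of 3 cells where dof 4 is the periodic image of dof 1.
cell_dofs = [[1, 2], [2, 3], [3, 4]]
periodic_cell_dofs(cell_dofs, Dict(4 => 1))  # last cell becomes [3, 1]
```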

Another option is to apply the periodic couplings when assembling the local matrix into the global one, or to modify the global matrix after assembly, as mentioned already. If the extra sweep is done on the coordinate arrays (COO format) before matrix compression, it should be pretty efficient as well, and possibly free of race conditions (if you allow duplicated contributions in the COO vectors). This second family of approaches also makes it easier to implement Bloch-periodic conditions, and it is equivalent in spirit to what is done to apply general affine constraints between dofs.
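A sketch of the COO variant, under the assumption that each periodic pair couples a slave dof s to a master dof m (names and function are made up for illustration). The key point is that `sparse(I, J, V)` sums duplicate entries automatically, so the remapping needs no extra bookkeeping:

```julia
using SparseArrays

# Apply periodic couplings on COO triplets before compressing to CSC.
function assemble_periodic(I, J, V, n, pairs::Vector{Tuple{Int,Int}})
    s2m = Dict(pairs)                 # slave dof => master dof
    remap(k) = get(s2m, k, k)
    # Redirect slave rows/columns onto the master dof; duplicate entries
    # in the triplets are summed by `sparse`, so no manual accumulation.
    A = sparse(map(remap, I), map(remap, J), V, n, n)
    # Replace each (now empty) slave row with the constraint u_s - u_m = 0,
    # as in the row manipulation shown in the original question.
    for (s, m) in pairs
        A[s, s] = 1.0
        A[s, m] = -1.0
    end
    return A
end
```

Since the remapping is a pure elementwise `map` over the triplet arrays, it is trivially threadable, which addresses the data-race concern from the original post.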
