Hi, I am writing my own finite element solver for CFD simulations. I have run into what should be an easy problem, but I can't figure out how to do it.
In my code, I want to account for periodic boundary conditions. For that I have to manipulate my matrices and vectors, using the technique shown below (I am only showing it for matrices).
```julia
M[i, :] .+= M[j, :]   # add the row of the slave node j into the row of the master node i
M[j, :] .= 0.0        # clear the slave row
M[j, i] = 1.0         # replace it with the constraint equation u_i - u_j = 0
M[j, j] = -1.0
```
where i and j are the nodes that are coupled. This code works and I get the desired results, but the issue is that there are a lot of matrices like M, and these matrices will be very large (sizes in the millions). Also, there are a lot of coupled nodes, so for each matrix the same process has to be repeated for many different combinations of i and j. This makes my code extremely slow.
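To give a clearer picture, this is roughly how I apply it for all coupled pairs and all matrices (just a schematic; `matrices` and `coupled_pairs` are placeholder names):

```julia
# Schematic only: `matrices` holds the system matrices and `coupled_pairs`
# holds (i, j) tuples of periodically coupled nodes.
for M in matrices
    for (i, j) in coupled_pairs
        M[i, :] .+= M[j, :]   # add the slave row into the master row
        M[j, :] .= 0.0        # clear the slave row
        M[j, i] = 1.0         # and replace it with the constraint u_i - u_j = 0
        M[j, j] = -1.0
    end
end
```

It is exactly this double loop that dominates the runtime.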
Now I have tried parallel computing (just multithreading; I am still very new to the parallel computing world). But if I simply add @threads, I think I run into data races. I thought of making different copies of M for different threads and combining the results later, but the combining step did not feel very efficient.
Is there some simple way to do this? Or perhaps a package that has already implemented something like this? I just want to make this process very fast.
Right now, when I am solving with about 40,000 degrees of freedom, an iteration used to take about 2 seconds, but now this periodic boundary condition part alone takes about 15 seconds.
Thank you for your help. I hope I was clear; this is my first post in the Julia community.
With Ferrite, the application of boundary conditions is never a bottleneck in my experience, since for all practical problems the linear solve will dominate.
Gridap.jl supports periodic boundary conditions as well. See below — in FEM, it should be possible to do this implicitly just by specifying the correct adjacency relationships in your mesh data structure.
It surprises me that you have to make a special sweep. In Gridap, you just have to generate a mesh that lives on a torus (i.e. one side is connected to the other side — Gmsh can do this, for example) and IIRC the ordinary matrix assembly works as-is. Correction: some additional node-mapping machinery was required, see link below.
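For a simple box domain you may not even need Gmsh: if I remember the Gridap API correctly, a Cartesian mesh can be made periodic directly (untested sketch, please double-check against the docs):

```julia
using Gridap

# Sketch from memory: a 2D Cartesian mesh, periodic in the x-direction, so the
# left and right boundaries share dof ids and the assembled matrix should
# already contain the periodic couplings.
domain = (0.0, 1.0, 0.0, 1.0)
partition = (40, 40)
model = CartesianDiscreteModel(domain, partition; isperiodic = (true, false))

reffe = ReferenceFE(lagrangian, Float64, 1)
V = TestFESpace(model, reffe)   # periodicity is inherited from the model
```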
Hi,
We are trying to make our own framework for fluid-structure interaction, and hence I am starting with CFD.
If I solve on a very coarse mesh, it works as desired, so the logic of the code looks fine. My issue is that the application of the boundary conditions is becoming a bottleneck.
I am looking for ways to make this operation faster (or perhaps a method through which I don't have to manipulate the matrices at all).
Sure, that’s a possibility, but it couples node numbers directly to dof numbers. In Ferrite these two enumerations are completely independent (although the node connectivity is of course used for bookkeeping while distributing dofs). I suspect that using a mesh with such “duplicated” node numbers, with isoparametric elements, would just work in Ferrite too, though.
What happens in Gridap/Gmsh if you complicate things slightly and use, for example, a linear mesh with a quadratic function approximation? At least in Ferrite this is handled nicely, since periodicity is based on dofs and not nodes.
The periodic constraints (as well as Dirichlet constraints) in Ferrite are actually built on top of general affine constraints between dofs. This means that we are not limited to mirroring x-dofs to x-dofs and y-dofs to y-dofs: in this example the periodicity is rotated, so we map u_x to -u_y and u_y to u_x, for example. This is another case where relying on duplicated node numbers wouldn't be sufficient.
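For reference, the affine-constraint machinery looks roughly like this (a sketch only; `dh`, `K`, `f` and the dof numbers are placeholders, not working code from this thread):

```julia
using Ferrite

# Sketch: dh is a closed DofHandler, K and f the assembled system, and
# dof_i / dof_j are the master and slave dofs of one periodic pair.
ch = ConstraintHandler(dh)
# AffineConstraint expresses dof_j = 1.0 * dof_i + 0.0; a rotated periodicity
# would simply use other coefficients (e.g. -1.0 to map u_x to -u_y).
add!(ch, AffineConstraint(dof_j, [dof_i => 1.0], 0.0))
close!(ch)

apply!(K, f, ch)   # condense the constraints into K and f
```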
A possible way to implement periodic BCs in FEM (e.g. in Gridap) is by modifying the generation of the cell-to-dof-ids map (not the cell-to-node-ids map!). In this way, the matrix automatically contains the periodic couplings without an additional sweep. This also works for high-order FEM, H(div), H(curl), etc. (See the sketch below.)
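As a library-agnostic sketch of that idea (the names below are hypothetical, not Gridap internals): if the cell-to-dof connectivity is stored as a vector of dof-id vectors, periodicity amounts to renaming the slave dofs to their masters before assembly:

```julia
# Hypothetical sketch: rename slave dofs to their master dofs directly in the
# cell-to-dof connectivity, so the standard assembly loop already produces the
# periodic couplings.
function apply_periodic_dofmap!(cell_dofs::Vector{Vector{Int}}, slave_to_master::Dict{Int,Int})
    for dofs in cell_dofs
        for (k, dof) in pairs(dofs)
            dofs[k] = get(slave_to_master, dof, dof)  # keep dof if it is not a slave
        end
    end
    return cell_dofs
end

# Example: dof 5 on the "right" boundary is identified with dof 1 on the "left".
cell_dofs = [[1, 2, 4, 3], [2, 5, 6, 4]]
apply_periodic_dofmap!(cell_dofs, Dict(5 => 1))   # second cell now reads [2, 1, 6, 4]
```

In practice you would also compact the dof numbering afterwards, so the system does not carry empty rows and columns for the removed slave dofs.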
Another option is to apply the periodic couplings when assembling the local matrices into the global one, or to modify the global matrix after assembly, as mentioned already. If the extra sweep is done on the coordinate arrays (COO format) before matrix compression, it should be pretty efficient as well, and possibly free of race conditions (if you allow for duplicated contributions in the COO vectors). This second family of approaches also makes it easier to implement Bloch-periodic conditions, and it is equivalent in spirit to what is done to apply general affine constraints between dofs.
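A minimal sketch of the COO variant (illustrative only; it assumes the assembler just produces plain I/J/V triplet vectors): remap the slave indices to their masters in the triplets and let `sparse` sum the duplicates:

```julia
using SparseArrays

# Sketch: I, J, V are the COO triplets from the (possibly threaded) assembly,
# n is the number of dofs, and slave_to_master maps each constrained dof to
# the dof it is coupled to.
function assemble_periodic(I::Vector{Int}, J::Vector{Int}, V::Vector{Float64},
                           n::Int, slave_to_master::Dict{Int,Int})
    remap(i) = get(slave_to_master, i, i)
    Ip = remap.(I)
    Jp = remap.(J)
    # sparse() sums duplicate (i, j) entries, so the periodic couplings are
    # accumulated automatically; each thread can append to its own triplet
    # chunk and the chunks are simply concatenated before this call.
    return sparse(Ip, Jp, V, n, n)
end
```

The slave rows and columns then end up empty, so you would either compress the dof numbering or keep explicit constraint rows for those dofs, as in the snippet from the original post.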