I’ve been working on a small Julia FEM framework called LowLevelFEM.jl. One of the ideas behind the project is to assemble PDE systems directly from their weak form.
The core idea is simple:
If you know the weak form of a PDE, you can almost copy it from paper into executable code.
The framework provides a small set of variational operators (Grad, Div, SymGrad, etc.) that allow weak forms to be written in a way that closely resembles their mathematical formulation.
Example: steady Navier–Stokes
μ = 1.0
γ = 1e-1 # grad-div stabilization
δ = 1e-6 # pressure stabilization
A = ∫((ε(Pv) ⋅ ε(Pv)) * 2μ)
D = ∫(Div(Pv) ⋅ Div(Pv) * γ)
B = ∫(Div(Pv) ⋅ Pp)
C = ∫(Grad(Pp) ⋅ Grad(Pp) * δ)
K = SystemMatrix([A+D  B'
                  B   -C])
The snippet above is not pseudocode — it is the actual solver code.
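For reference, here is my reading of the weak form the blocks above encode (a sketch only, up to sign and boundary-term conventions): find (u, p) such that, for all test functions (v, q),

```latex
\begin{aligned}
2\mu \int_\Omega \varepsilon(u) : \varepsilon(v)\,\mathrm{d}\Omega
  + \gamma \int_\Omega (\nabla\cdot u)(\nabla\cdot v)\,\mathrm{d}\Omega
  + \int_\Omega (\nabla\cdot v)\,p\,\mathrm{d}\Omega
  &= \int_\Omega f \cdot v\,\mathrm{d}\Omega , \\
\int_\Omega (\nabla\cdot u)\,q\,\mathrm{d}\Omega
  - \delta \int_\Omega \nabla p \cdot \nabla q\,\mathrm{d}\Omega
  &= 0 ,
\end{aligned}
```

where the viscous and grad-div terms give A + D, the pressure coupling gives B and B', and the δ-weighted term is the pressure-stabilization block C.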
The same formulation can be used for different PDE systems such as
diffusion problems
linear elasticity
nonlinear elasticity
fluid flow
multiphysics coupling
Here is a small example result from the Navier–Stokes test case:
Sounds interesting, reminds me of FEniCSx, which I use heavily. However, there’s already Gridap.jl - I wonder how your package compares with it?
I have also wondered whether it would make sense to build something like this on top of Ferrite.jl by leveraging Symbolics.jl to generate assembly code from the weak form. I'm curious to hear your thoughts on this. Did you consider such an approach?
Good question — there are indeed similarities to frameworks like FEniCS or Gridap.
One of the motivations behind LowLevelFEM was to build something transparent and open rather than a “black box” solver. I wanted a codebase that I can use both for teaching and research, where the numerical formulation is clearly visible and easy to modify.
The goal is not to hide the numerical method, but to expose it in a clear and programmable way.
In that sense the framework behaves more like a toolbox for PDE experimentation than a traditional solver package.
The framework therefore keeps the variational operators explicit, so the weak form is written almost directly in code, e.g.
A = ∫(2μ * (ε(Pu) ⋅ ε(Pu)))
B = ∫(Div(Pv) ⋅ Pp)
The idea is that you can work at different levels: use it as a straightforward engineering solver, or go deeper and modify the operators and assembly if needed.
Regarding Ferrite + Symbolics: that’s an interesting direction. At the moment, the operators are implemented directly in Julia rather than generated symbolically, but I could definitely imagine experimenting with that approach in the future.
Since Ferrite gets mentioned here (and in the GitHub project) multiple times, let me briefly comment.
Before I start: I really appreciate the effort here, and the package already comes with lots of features. The name, however, leaves me mildly confused: it is called LowLevelFEM, yet the package provides high-level tools for assembling typical finite element problems. After a brief glance at the documentation and examples, I also could not reconstruct how the solver modules choose the interpolation, interpolation order, and quadrature rule based on the inputs, or how to control these details. Nor could I figure out how to add custom ansatz and test functions.
While I am a big fan of experimenting with different software designs, I am also really starting to wonder whether there is something seriously wrong with the existing frameworks, because more and more FEM frameworks have been popping up over the last years. Packages providing a high-level FEniCS-style interface in particular seem to emerge a lot; they all do the same thing in one way or another, and they all start by developing the finite element core from scratch. What exactly are the unmet expectations with the existing packages, and wouldn't it be more beneficial for everyone to join forces here?
I think this is true for many FEM frameworks in Julia, isn’t it? I think especially Gridap.jl and GalerkinToolkit.jl do a good job here. IMHO Ferrite is also not that far away from the form you write down on paper, just a bit more verbose to give you maximal control over what exactly is happening. Especially many modern research problems involve some form of additional local problems to solve, or non-local operations to be executed, which is notoriously hard to be integrated when using a high-level language.
This is work in progress on my end (+MTK), and we have had working internal prototypes for a while. Unfortunately they are not too useful yet. I never have enough time to fully commit to this, as I do not really see it as a priority compared with other features. It is also not as straightforward to implement as I had hoped, if we want to make full use of the advanced optimization features on the symbolic representation provided by MTK.
I'm inclined to agree with this sentiment. In general, only packages with a certain degree of "weight" or momentum in their development are interesting for most users. For example, FEniCSx has reached some maturity and has a reasonable user base, and that gives me some confidence that most of the features you'd expect are supported or will be added within some time frame. Enough users mean that most questions have already been asked and answered on Discourse. Some things in FEniCSx, such as the UFL library, seem pretty clumsy compared to what could be implemented in Julia, but that does not matter nearly as much as the level of maturity. In the Julia ecosystem, the packages that come closest to that degree of maturity seem to be Ferrite.jl and Gridap.jl, but I don't think they can be compared to FEniCSx in terms of user base, for example.
One thing in particular that keeps me using FEniCSx is the ability to write a nonlinear weak form and have the Jacobian automatically calculated by UFL. I haven't tried Gridap.jl because it doesn't seem to have that feature. Making use of a symbolic engine would probably allow implementing that quite easily. Perhaps MTK is just the thing for that; I haven't really looked into it much yet.
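To make the "automatic Jacobian" idea concrete, here is a toy sketch (this is not UFL, MTK, or any of the packages discussed; all names are made up): forward-mode automatic differentiation with dual numbers is already enough to get the exact Jacobian of a nonlinear residual from the residual code alone, which is the essence of what a symbolic/AD engine does at the weak-form level.

```python
# Toy forward-mode automatic differentiation with dual numbers.
# Illustrates how the Jacobian dR/du of a nonlinear residual R(u)
# can be computed automatically, with the user writing only R.

class Dual:
    """Number of the form val + der * eps, with eps**2 == 0."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def _wrap(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._wrap(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __sub__(self, other):
        other = self._wrap(other)
        return Dual(self.val - other.val, self.der - other.der)

    def __mul__(self, other):
        other = self._wrap(other)
        # Product rule carried by the dual part.
        return Dual(self.val * other.val,
                    self.val * other.der + self.der * other.val)
    __rmul__ = __mul__

def jacobian(residual, u):
    """Dense Jacobian dR/du of a vector residual at the point u."""
    n = len(u)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        # Seed the j-th input with derivative 1, all others with 0.
        seeded = [Dual(u[i], 1.0 if i == j else 0.0) for i in range(n)]
        R = residual(seeded)
        for i in range(n):
            J[i][j] = R[i].der
    return J

# Example nonlinear residual: R(u) = [u0^2 + u1 - 3, u0*u1 - 2]
def R(u):
    return [u[0] * u[0] + u[1] - 3.0, u[0] * u[1] - 2.0]

J = jacobian(R, [1.0, 2.0])
# Analytic Jacobian: [[2*u0, 1], [u1, u0]] = [[2, 1], [2, 1]] at (1, 2)
```

A real weak-form engine does the same kind of propagation through the integrand symbolically (and then generates assembly code), but the user-facing contract is identical: write the residual form, get the consistent tangent for free.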
Just to give a bit more background on why I started LowLevelFEM in the first place.
My main motivation was actually teaching. I wanted something where the full workflow from weak form to assembled system is completely visible to students. In many examples the whole solver fits on a single screen, which helps a lot when explaining what actually happens in a FEM code.
Another motivation was practical: over the years we had repeated licensing and platform issues with commercial tools (ANSYS, Abaqus) and I wanted something that runs reliably on Linux and can be freely used by students.
For teaching this is extremely useful, because students can see the whole process interactively instead of jumping between multiple tools (mesh generator, solver, ParaView, etc.).
Existing open tools like GetDP or other large frameworks are very powerful, but they can be difficult to approach when you only want to prototype a small idea or demonstrate a concept in a lecture. In teaching I often found that the amount of initialization and abstract objects (trial functions, test functions, measures, etc.) can easily distract beginners from the actual physical problem.
So the goal of LowLevelFEM was not really to compete with existing frameworks, but to have a transparent toolbox where
the weak form is visible in the code
operators remain explicit
and small research ideas can be implemented quickly.
In that sense it sits somewhere between a teaching code and a research prototyping framework.
I think it is quite obvious that people write software because they enjoy it, or because it scratches a particular itch for them. Nothing wrong with that. Proponents of some packages tend to think that it is a waste of time for other people to develop their own packages when there is already this fantastic software free for the taking. But recruiting users for this purpose makes a lot more sense than trying to recruit developers, who much prefer being masters of their own domain.
Thank you for the detailed breakdown @perebalazs! I still need some help trying to understand a few things better.
This is indeed an interesting decision. While I teach FEM with Ferrite myself, I never thought about it this way. We also provide helper functions for some things, to shorten the students' code and hide complexities not relevant to the lecture (like mesh generation and interactive visualization). We never shared our teaching material, though; not because we do not want to, but because it is typically tailored quite closely to an existing lecture structure.
I think the majority of open-source (Julia) finite element frameworks come from this scenario in one way or another. And I also think, very personally, that for teaching FEM in an academic context we should not teach students how to use software X to run problem Y, but rather the actual FEM as an engine. If these basics are solid, then learning how to drive FEM through a specific GUI comes down to reading tutorials and manuals.
This is what actually confuses me. If this is the problematic part, then, for any framework, what would be the disadvantage of just putting a small layer on top for the "hidden" parts during the initial teaching? Or am I misunderstanding you here?
I do not think that anyone here will really disagree with that statement. However,
I strongly disagree with this statement. But we are drifting off-topic here into a philosophical discussion.
Interestingly, the operator-based formulation was not the original goal of the package.
The first versions were much closer to a simple engineering workflow (loads, constraints, stiffness matrix, solve, visualization) because the initial motivation was teaching.
The operator layer only appeared later when I started experimenting with problems like the Reynolds equation, where writing the formulation more directly in terms of operators became very convenient.