I’m afraid the organization JuliaPDE is in limbo; there have been no contributions or activity.

That doesn’t mean that potential contributors have not been busy: individual packages or groups of packages have grown by leaps and bounds.

I think the biggest development on that front has been the working prototype described in https://github.com/JuliaDiffEq/DifferentialEquations.jl/issues/469

These are all different packages doing FEM with Julia, with authors coming from different backgrounds and holding different philosophies about how the code should be structured and how the API should look. I’d suggest reading through some examples from each of them and trying to contribute to the one you like the most.

Is there an “agreed upon” or somewhat “standardized” mesh format we can pull from something like JuliaGeometry? One of the issues with a higher level interface of this is that, for non-regular domains, in order to specify the domain you already need a mesh format, so at some point it might make sense for our high level interface to just choose one and let other FEM packages convert from it. I am not sure which one to choose though.

I second GeometryBasics. I’m updating all my isosurface extraction code to support it. The TetGen.jl wrapper uses it for convenience as well.

`GeometryBasics` is looking good, but the question is: can it grow some more? A fully general mesh framework needs to support, in addition to the basic cube-like and simplex-like shapes with the minimum number of vertices, analogous shapes with higher numbers of vertices (mid-edge, interior, and so on), as well as shapes with a variable number of vertices (Voronoi-type polyhedra and the like). Arbitrary-order NURBS/Bezier shapes also need to be supported. Finally, similar (but not identical) frameworks are needed for mesh-free methods.

Hi Chris,

How about using the gmsh mesh format? http://gmsh.info/doc/texinfo/gmsh.html#File-formats

At least here in Germany a lot of people use gmsh, and it even has a Julia API (I haven’t used it, though): http://gmsh.info/doc/texinfo/gmsh.html#Gmsh-API

Cheers,

J

Perhaps off topic… one of my students works with population balances, which in principle add attribute (internal) independent variables to the spatial (external) variables. For such problems, grids of higher dimension than 3D may be needed. Examples could be descriptions of particle size distribution, liquid content distribution, and porosity/gas content distribution. OK – perhaps other discretization methods are needed for such problems; I don’t know.

Matrix-free methods are what DiffEqOperators is doing, and for higher dimensions there are neural-network-based methods. For example, 100-dimensional PDEs are showcased here:

The classical methods like FEM don’t do as well in high dimensions (if you can even represent the problem in memory), so this is an area where the newer SciML methods tend to shine.
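To make the matrix-free idea concrete, here is a minimal Python sketch (my own illustration, not how DiffEqOperators is implemented): the 1D Laplacian is applied as a three-point stencil directly to the vector, so only O(N) memory is ever used instead of the O(N²) a dense operator would need.

```python
import numpy as np

def apply_laplacian(u, dx):
    """Apply the 1D Laplacian stencil (u[i-1] - 2 u[i] + u[i+1]) / dx^2
    to the interior points, without ever forming the (N-2) x N matrix."""
    return (u[:-2] - 2.0 * u[1:-1] + u[2:]) / dx**2

# Memory cost is O(N) for the vector, versus O(N^2) for a dense operator.
N = 8
dx = 1.0 / (N - 1)
u = np.sin(np.linspace(0.0, 1.0, N))
lap = apply_laplacian(u, dx)

# Sanity check against the stencil written out index by index.
ref = np.array([(u[i-1] - 2*u[i] + u[i+1]) / dx**2 for i in range(1, N - 1)])
```

Iterative solvers (conjugate gradients, GMRES, and so on) only ever need this matrix-vector product, which is what makes the matrix-free formulation practical.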

As I have mentioned in other threads here before, the Grassmann.jl differential geometric algebraic number system would be well suited to this, since it naturally supports homological algebra with simplices, faces, edges, points, lattices, and so on. It also generalizes to any number of dimensions and works with projective geometry. Additionally, it supports multivariable differential operators and differential forms.

That is to say, I have not created a mesh geometry framework with it yet… but I believe it would be beneficial to use this universal mathematical language as a foundational building block for such things… since it is a universal language that can be used for automatic differentiation and geometry simultaneously.

I should probably look into this (SciML) some time – at least to get an idea of the principle. With PDEs, one doesn’t have data (as in traditional machine learning). Is the idea similar to weighted residual methods, where instead of formulating a trial solution (e.g., using finite-support basis functions as in FEM), one uses a neural network as the “trial solution” and trains the NN to minimize some weighted residual of the PDE + BCs?

Btw., my impression is that special discretization methods are used with population balances. Maybe I can talk my student into testing SciML in Julia as an alternative to the current MATLAB implementation when back from leave.

Exactly. For high dimensions, it’s just too impractical to use any point-wise or element-wise representation of the space. You get around the curse of dimensionality by expressing the solution through a neural network and training that network. These methods are not necessarily as efficient as, say, FDM/FEM in low dimensions, but when you get to high dimensions the memory advantage really becomes a computational advantage.
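As a minimal sketch of the residual-training idea (a toy one-parameter “network” of my own invention, not the actual SciML machinery): take u'' = -π² sin(πx) on [0, 1] with u(0) = u(1) = 0, use the trial solution u(x; a) = a·sin(πx), and run gradient descent on the mean squared PDE residual at collocation points. The exact solution corresponds to a = 1.

```python
import numpy as np

# Toy "physics-informed" fit: solve u'' = -pi^2 sin(pi x) on [0, 1],
# u(0) = u(1) = 0, whose exact solution is u(x) = sin(pi x).
# The trial solution u(x; a) = a * sin(pi x) satisfies the BCs by
# construction, so training only minimizes the mean squared PDE residual.
xs = np.linspace(0.0, 1.0, 32)          # collocation points
f = -np.pi**2 * np.sin(np.pi * xs)      # forcing term

def residual(a):
    u_xx = -a * np.pi**2 * np.sin(np.pi * xs)   # second derivative of trial
    return u_xx - f

a, lr = 0.0, 1e-4
for _ in range(2000):
    r = residual(a)
    # d(residual)/da = -pi^2 sin(pi x); chain rule on loss = mean(r^2)
    grad = 2.0 * np.mean(r * (-np.pi**2 * np.sin(np.pi * xs)))
    a -= lr * grad
# a converges toward 1, recovering the exact solution
```

A real implementation replaces the single parameter with a neural network and the hand-written derivative with automatic differentiation, but the loss being minimized is the same kind of weighted PDE residual.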

Is the limitation to simplices crucial? Can it represent cube-like complexes? With an arbitrary number of vertices?

That’s a good question; I will investigate some more. There is a general cellular homology theory.

So the answer is yes: it seems that hypercubes can be represented with a simplicial set. This is closely tied to the CW-complex; a simplicial set is a generalization, taking unions of subspaces.

For example, if we have 4 vertices (making a square), we might use `v12 - v14 + v23 + v34` to represent the boundary of the square using 1-simplices. So I’d say you can represent them. You can also get that result by computing (on `master`) the boundary of the 2-cube square `v123 + v134` itself:

```
julia> using Grassmann, Leibniz; @basis tangent(ℝ^4,2,4); #master branch
julia> ∂(v1234) # boundary of 3-simplex
0 - 1∂₄v₁₂₃ + 1∂₃v₁₂₄ - 1∂₂v₁₃₄ + 1∂₁v₂₃₄
julia> ∂(v123+v134) # boundary of 2-cube
0 + 1∂₃v₁₂ - 1∂₂v₁₃ + 1∂₄v₁₃ - 1∂₃v₁₄ + 1∂₁v₂₃ + 1∂₁v₃₄
```

Note the extra edges `1∂₄v₁₃ - 1∂₂v₁₃`, which cancel out if you set `∂ᵢ = ∂ⱼ` for `i≠j`, so it works.

In case anyone is confused, the answer I gave above shows that n-cubes come from n-simplices.

This totally makes sense because, for example, a quadrilateral `v123 + v134` is made up of two triangles.

Hope that explains why simplices are fundamental and that cubes with an arbitrary number of vertices work.
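The cancellation of the shared diagonal can be checked with ordinary code, too. This is a plain Python sketch of the oriented-boundary bookkeeping (independent of Grassmann.jl, with my own edge labelling): the boundary of an oriented triangle (i, j, k) is the signed edge chain +(j,k) - (i,k) + (i,j), and summing the chains for the two triangles of the quad makes the diagonal (1, 3) drop out.

```python
def boundary(tri):
    """Signed boundary of an oriented 2-simplex (i, j, k):
    +(j, k) - (i, k) + (i, j), the alternating-sign face rule."""
    i, j, k = tri
    return {(j, k): +1, (i, k): -1, (i, j): +1}

def add_chains(a, b):
    """Formal sum of two signed edge chains; edges with coefficient 0 vanish."""
    out = dict(a)
    for edge, sign in b.items():
        out[edge] = out.get(edge, 0) + sign
    return {edge: sign for edge, sign in out.items() if sign != 0}

# Quadrilateral 1-2-3-4 as two oriented triangles sharing the diagonal (1, 3):
quad_boundary = add_chains(boundary((1, 2, 3)), boundary((1, 3, 4)))
# The diagonal appears once with sign -1 and once with +1, so it cancels,
# leaving only the four outer edges of the square.
```

This is the same cancellation mechanism the Grassmann.jl computation shows, just written out with dictionaries instead of tensor algebra.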

To help with understanding, tensor simplices can be converted into graphs by newly defined methods for `LightGraphs.SimpleDiGraph(::TensorAlgebra)` on the `master` branch of `Grassmann`:

```
using Grassmann, LightGraphs, GraphPlot, Compose  # Grassmann master branch

# Draw the directed graph of a tensor simplex to a PDF file.
graph(x, n="simplex.pdf", a=16cm, b=16cm) =
    draw(PDF(n, a, b),
         gplot(SimpleDiGraph(x),
               layout=circular_layout,
               nodelabel=collect(1:grade(vectorspace(x)))))

@basis tangent(V"8", 2, 1);  # 8 vertices, single differential operator (2nd order)
```

Here are some of the results:

- 3-simplex and 2-simplex: `graph(v1234+v678)`
- 2-cube and 2-simplex: `graph(v123+v134+v678)` (*note* that this time the internal edges cancel out, because only a single differential operator is used; with multiple operators you would retain the internal edges)
- 3-cube attempt: `graph(v1246+v2346+v1267+v2678+v2568-v2356)`

Note that the 3-cube attempt has 2 extra edges… it is a union of 6 simplices, but I only figured this one out by trial and error. Since the orientation of each simplex matters, I am not yet sure of a fully general method for finding arbitrary n-cube representations using oriented simplices, but it is possible.
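For what it’s worth, there is a standard construction for the decomposition itself: the Freudenthal (Kuhn) triangulation splits the unit n-cube into n! simplices, one per permutation of the axes, by walking from the origin to (1,…,1) one axis at a time. Here is a plain Python sketch for n = 3 (it does not settle the orientation signs needed for the Grassmann representation, only the unsigned decomposition):

```python
import math
from itertools import permutations

def kuhn_simplices(n):
    """Freudenthal/Kuhn triangulation of the unit n-cube: one simplex per
    permutation of the axes, walking from the origin one axis at a time."""
    simplices = []
    for perm in permutations(range(n)):
        vertex = [0] * n
        chain = [tuple(vertex)]
        for axis in perm:
            vertex = list(vertex)
            vertex[axis] = 1
            chain.append(tuple(vertex))
        simplices.append(chain)
    return simplices

def simplex_volume(verts):
    """Volume of a 3-simplex via |det| / 3! of the edge matrix (n = 3 only)."""
    a = verts[0]
    rows = [[v[i] - a[i] for i in range(3)] for v in verts[1:]]
    det = (rows[0][0]*(rows[1][1]*rows[2][2] - rows[1][2]*rows[2][1])
         - rows[0][1]*(rows[1][0]*rows[2][2] - rows[1][2]*rows[2][0])
         + rows[0][2]*(rows[1][0]*rows[2][1] - rows[1][1]*rows[2][0]))
    return abs(det) / math.factorial(3)

tets = kuhn_simplices(3)                       # 3! = 6 tetrahedra
total = sum(simplex_volume(t) for t in tets)   # volumes sum to the cube's 1
```

Each Kuhn simplex has volume 1/n!, so the six tetrahedra exactly fill the cube; assigning consistent orientations to them is the part that would still require care.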

What’s interesting about this to me is the relationship between tensor calculus and graph theory…

That is interesting. By the way, it is possible to tile a cube with 6 tetrahedra or with 5 tetrahedra.
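Both tilings are easy to sanity-check numerically. Here is a plain Python check of one 5-tetrahedron tiling (my own choice of corner ordering, not from the thread): four corner tetrahedra of volume 1/6 each, plus one central regular tetrahedron of volume 1/3, summing to the unit cube’s volume.

```python
def tet_volume(a, b, c, d):
    """Volume of a tetrahedron via the scalar triple product: |det| / 6."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = [d[i] - a[i] for i in range(3)]
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
         - u[1] * (v[0] * w[2] - v[2] * w[0])
         + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return abs(det) / 6.0

# 5-tet tiling of the unit cube: 4 corner tets + 1 central regular tet.
tets5 = [
    ((0,0,0), (1,0,0), (0,1,0), (0,0,1)),
    ((1,1,0), (1,0,0), (0,1,0), (1,1,1)),
    ((1,0,1), (1,0,0), (1,1,1), (0,0,1)),
    ((0,1,1), (1,1,1), (0,1,0), (0,0,1)),
    ((1,0,0), (0,1,0), (0,0,1), (1,1,1)),  # central regular tetrahedron
]
volumes = [tet_volume(*t) for t in tets5]
total5 = sum(volumes)  # fills the unit cube exactly
```

The 5-tet tiling is the minimum for a cube; the 6-tet version is the Kuhn triangulation, which generalizes to any dimension.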