DifferentialEquations.jl FEM example

Hmm, I would say showing how to do it right is in your court.

I think both of you are making sensible claims, but you should do a friendly comparison :wink:. For parabolic PDEs, which are very similar to ODEs, I would agree with Chris. But for more singular PDEs, like the hyperbolic ones, I would tend to agree with Petr…

For hyperbolic ones you tend to want an SSPRK method, which isn't too difficult to hand code, but a package will get you to the same spot quicker (productivity-wise). For this example, we just need to fix something in the nonlinear solver that we seem to have broken with linearity detection: the hand-coded version specialized on the fact that M/dt - J is constant and sparse, while the one he showed did not, because of a recent DAE handling update and a missing test.
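
To make that specialization concrete, here is a minimal hand-coded sketch (the sparse M, J, and b below are made up for illustration, not the benchmark code) of a backward Euler loop for M du/dt = J*u + b that factorizes the constant sparse matrix M/dt - J once and reuses that factorization every step:

using LinearAlgebra, SparseArrays

# toy sparse system M du/dt = J*u + b, with a 1D Laplacian-like J
n  = 100
dt = 0.01
M  = sparse(1.0I, n, n)
J  = spdiagm(-1 => ones(n - 1), 0 => fill(-2.0, n), 1 => ones(n - 1))
b  = zeros(n)

function backward_euler!(u, M, J, b, dt, nsteps)
    W = lu(M / dt - J)                   # factorize the constant sparse matrix once
    for _ in 1:nsteps
        # solve (M/dt - J) u_{n+1} = M u_n / dt + b
        u .= W \ (M * u ./ dt .+ b)
    end
    return u
end

u = backward_euler!(rand(n), M, J, b, dt, 100)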

But the main idea is that the operation count of the factorizations for PDEs is so high that whatever the implementation does around them is essentially irrelevant: it all comes down to the stepping algorithm and the total number of factorizations. Checking whether an option is true or false doesn't matter when there's a 2000x2000 sparse matrix being factorized: that factorization has the cost of checking around a few trillion options! It's fundamentally a case where you get to nearly 0% overhead, and any performance difference is just the failure of the package.
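
If you want to convince yourself of that, a rough sketch of the comparison (the size and sparsity are picked arbitrarily, and the first @time includes compilation) looks like:

using SparseArrays, LinearAlgebra

A = sprand(2000, 2000, 0.002) + 2000I   # random 2000x2000 sparse matrix, made diagonally dominant

@time lu(A)                  # one sparse factorization

opts = rand(Bool, 10^6)
@time count(opts)            # checking a million boolean "options" is comparatively free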

Right now we are failing because of an algorithmic issue: identifying linearity (and I'll get that fixed up) and using the sparsity when it's mixed with linearity (https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl/issues/914). But to then conclude that a package implementation can't match a "hand optimized code" is quite ludicrous when you think about the actual performance characteristics of the loop. There's not even an opinion or an argument there, unless you're talking about a package having literally a trillion or so options to check, making the overhead start to approach the cost of one factorization.

(Again, that doesn't mean everything is amenable to packaging, but things like PDEs and ML are especially easy to get covered with a package because of their reliance on O(n^3) operations. And it doesn't matter what language you interface with either: a PDE defined as an ODE in Python would be fine.)

Let me rephrase my statement that a hand-coded integrator will in general be superior to what a general-purpose package can offer. There is really no contradiction with what Chris is saying, as I will demonstrate.

All PDE classes and the algorithms used to discretize them have certain important characteristics. There may be linearity. There may be separable linearity and nonlinearity. There may be a separable stiff part. The PDE may be first order. The PDE may be second order, and conversion to first order form may be cheap or expensive. The right-hand side function may be inexpensive or extremely expensive. There may be a non-identity mass matrix or not. It may require an unsymmetric Jacobian. The operators may be sparse or dense. There may be structure to the Jacobians (for instance hierarchical matrices). The cost of the time step may be dominated by the solve of the linear algebraic equations required, or it may be dominated by the cost of the right-hand side function. There may be important features to preserve (such as energy, momentum, or symplecticity). And more…

A successful integrator will take these characteristics into account. It will be tailored to these characteristics. It doesn't terribly matter whether such an integrator is hand coded or obtained by tuning a packaged integrator through its options, but in my opinion it is much more likely that one will run into difficulties with a general-purpose package, as it may not be sufficiently flexible to accommodate all such PDE/discretization characteristics. Then the specialized integrator (which may itself be part of a package: some packages, such as PETSc, recognize what I've said above and provide integrators specialized to particular classes of PDE/discretization) will win.

One of the reasons there are so many integrators for ODEs (as mentioned above) is that it pays to specialize the integrator. So a hand-coded integrator doesn't mean a trivial one (Euler). It can be any one of the sophisticated algorithms above. But it is one that does only what needs to be done and does it well.

A lot of those cases are covered by DiffEq. For sure not everything, but for most applications not starting with DiffEq offerings would probably be premature optimization.

https://docs.juliadiffeq.org/latest/types/split_ode_types.html
https://docs.juliadiffeq.org/latest/solvers/split_ode_solve.html
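
For example (a minimal sketch with an invented linear operator and nonlinearity, not taken from the docs), a split ODE with a stiff linear part and a nonstiff nonlinear part can be handed to an IMEX method:

using OrdinaryDiffEq, SparseArrays

n = 64
A = spdiagm(-1 => ones(n - 1), 0 => fill(-2.0, n), 1 => ones(n - 1))  # stiff, Laplacian-like linear part

f1(u, p, t) = A * u            # stiff piece, treated implicitly
f2(u, p, t) = u .- u .^ 3      # nonstiff nonlinearity, treated explicitly

prob = SplitODEProblem(f1, f2, rand(n), (0.0, 1.0))
sol = solve(prob, KenCarp4())  # an implicit-explicit (IMEX) Runge-Kutta method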

https://docs.juliadiffeq.org/latest/types/dynamical_types.html
https://docs.juliadiffeq.org/latest/solvers/dynamical_solve.html
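
And second order problems can be kept in second order form rather than converted by hand (a toy harmonic oscillator, just to show the problem type):

using OrdinaryDiffEq

# u'' = -u, kept in second order form
f(du, u, p, t) = -u

prob = SecondOrderODEProblem(f, [0.0], [1.0], (0.0, 2pi))  # arguments: (f, du0, u0, tspan)
sol = solve(prob, DPRKN6())    # a Runge-Kutta-Nyström method specialized for second order ODEs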

Sparse and specially structured Jacobians have been supported since this summer: http://juliadiffeq.org/2019/06/24/coloring.html
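
The hook for that (a sketch; the nearest-neighbor f! and its tridiagonal sparsity pattern are placeholders) is passing a jac_prototype, and optionally a color vector, to the ODEFunction:

using OrdinaryDiffEq, SparseArrays, SparseDiffTools

n = 32
function f!(du, u, p, t)
    # nearest-neighbor coupling plus a cubic term, so the Jacobian is tridiagonal
    for i in 1:n
        du[i] = -2 * u[i] + (i > 1 ? u[i-1] : 0.0) + (i < n ? u[i+1] : 0.0) - u[i]^3
    end
end

jac_sparsity = spdiagm(-1 => ones(n - 1), 0 => ones(n), 1 => ones(n - 1))
colors = matrix_colors(jac_sparsity)       # coloring from SparseDiffTools

fun  = ODEFunction(f!; jac_prototype = jac_sparsity, colorvec = colors)
prob = ODEProblem(fun, rand(n), (0.0, 1.0))
sol  = solve(prob, TRBDF2())               # the stiff solver now builds a sparse, colored Jacobian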


Thank you, Mauro. These are perfect examples of what my point was: the general-purpose package gradually includes specializations (one could call them tailored, or hand-optimized) to address practically important classes of PDEs and discretization methods, in order to deal with those efficiently. These are precisely the algorithms that used to be hand-coded in the past. So perhaps the future is in "everything but the kitchen sink" time integrator packages? (One just hopes they don't collapse under their own weight.)


We will hopefully have a kitchen sink next summer.

But the way DifferentialEquations.jl is designed, there is no single package which has to handle all of it. DiffEqBase just defines an API of different problem types, ranging from first and second order ODEs to random ODEs and split delay equations. From there, any package can add solvers to it. Indeed, it would be way too much for a single package, which is why there are about 80 different packages. The workhorse of course is OrdinaryDiffEq.jl, which handles the majority of ODE methods (in the different ODE forms) and soon some native Julia DAE methods, but any package can plug into the interface and add new methods. Sundials.jl, LSODA.jl, DASKR.jl, etc. are all examples of other solver packages. For details, see the special issue paper.

https://www.sciencedirect.com/science/article/pii/S0965997818310251

With that in mind, the main thing to do is to recognize how specific PDE algorithms can be interpreted as a specialized ODE form, and make sure that our forms are diverse enough to handle all of the optimizations you'd do to the algorithm. I'll admit that OrdinaryDiffEq.jl is missing optimizations on linearity right now, since my focus has always been nonlinear (stochastic) partial differential equations, but the DiffEqOperator interface is our way of specifying linear components, and we do need to specialize on them better. Now that automatic finite difference discretizations are done, it's a pretty good time to get the linear optimizations finished.
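
As a sketch of what specifying a linear component looks like (assuming DiffEqArrayOperator from DiffEqOperators.jl and a made-up 2x2 matrix; not the eventual optimized path), you wrap the constant operator so that solvers which understand linearity can exploit it:

using OrdinaryDiffEq, DiffEqOperators

A = [-2.0 1.0; 1.0 -2.0]                       # a constant linear operator
L = DiffEqArrayOperator(A)

prob = ODEProblem(L, [1.0, 0.0], (0.0, 1.0))   # du/dt = A*u
sol  = solve(prob, LinearExponential())        # exploits the constant linearity of the problem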

Also, I plan on adding an "Advanced ODEs" tutorial very soon. My plan is to write one for my course and then just use the same thing in the docs. I haven't gotten there in my course yet, so right now I have to write a bunch of other things first.


Thanks for all the discussion, I appreciate it. I still have some trouble with my original goal. I tried to do a simple heat equation with FEniCS.jl + OrdinaryDiffEq. My problem is that I don't know how to translate between arrays and FEniCS functions. One direction is get_array, but how do I go the other way? Here is my code so far:

using FEniCS
using OrdinaryDiffEq

mesh = UnitIntervalMesh(100)
V = FunctionSpace(mesh, "P", 1)
ex_u = Expression("1 - pow(x[0],2)", degree=2)
u = interpolate(ex_u, V)
bc = DirichletBC(V, 0.0, "on_boundary")
dudt = TrialFunction(V)
v = TestFunction(V)
Dudt = FeFunction(V)
# weak form of the heat equation du/dt = Δu: find dudt with ∫ dudt*v dx = -∫ ∇u⋅∇v dx
F = dot(dudt, v)*dx + dot(grad(u), grad(v))*dx
a = lhs(F)
L = rhs(F)

p = (u=u, Dudt=Dudt, bc=bc, a=a, L=L)

function f!(dudt_vec, u_vec, p, t)
    u = HOW_TO_TURN_THIS_INTO_A_FUNCTION(u_vec)  # <- the missing piece: array -> FEniCS function
    assign(p.u, u)
    lvsolve(p.a, p.L, p.Dudt, p.bc)
    copy!(dudt_vec, get_array(p.Dudt))
end

u0_arr = get_array(u)
tspan = (0.0, 1.0)
prob = ODEProblem(f!, u0_arr, tspan)
OrdinaryDiffEq.solve(prob, ImplicitEuler())

From the FEniCS tutorial, I think it's saying you just assign using the uvec?

@ysimillides might be able to chime in here. It would be good to have this as an example for the docs.

If I try this I get an error: type Array has no field pyobject.

Try converting it to a numpy array somehow?

That did not work either, though I found a way and opened a PR.
