Setting up implicit solvers to beat the performance of explicit solvers

We are trying to benchmark implicit solvers for an ice flow diffusivity PDE. So far we’ve stuck to explicit solvers, but now we would like to cover implicit ones as well. Since many implicit solvers from DifferentialEquations.jl require AD, we’re currently running into the following issue:

First call to automatic differentiation for the Jacobian
failed. This means that the user `f` function is not compatible
with automatic differentiation. Methods to fix this include:

1. Turn off automatic differentiation (e.g. Rosenbrock23() becomes
   Rosenbrock23(autodiff=false)). More details can be found at
   https://docs.sciml.ai/DiffEqDocs/stable/features/performance_overloads/
2. Improving the compatibility of `f` with ForwardDiff.jl automatic 
   differentiation (using tools like PreallocationTools.jl). More details
   can be found at https://docs.sciml.ai/DiffEqDocs/stable/basics/faq/#Autodifferentiation-and-Dual-Numbers
3. Defining analytical Jacobians. More details can be
   found at https://docs.sciml.ai/DiffEqDocs/stable/types/ode_types/#SciMLBase.ODEFunction

Since our f function is really optimized to avoid almost all memory allocations, I guess the problem comes from the usual fact that AD doesn’t like mutation. We have tried deactivating AD, but it’s terribly slow.
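For context, here is a contrived sketch of the failure mode we think we’re hitting (not our actual f, just the usual preallocated-Float64-buffer pattern):

using ForwardDiff

# Contrived stand-in for an optimized in-place RHS: the preallocated Float64
# buffer cannot store the Dual numbers that ForwardDiff pushes through f.
const tmp = zeros(5)
function f!(du, u, p, t)
    tmp .= u .^ 2        # fails when eltype(u) is a ForwardDiff.Dual
    du .= .-tmp
    return nothing
end

u0 = ones(5)
f!(similar(u0), u0, nothing, 0.0)                                          # fine with Float64
ForwardDiff.jacobian((du, u) -> f!(du, u, nothing, 0.0), similar(u0), u0)  # MethodError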

So my question is: what should one do in this case? The documentation on this is quite sparse, and we haven’t found any clear examples of how to work around this issue. The code can be found here, and here’s the repository with all the benchmarks (with explicit solvers so far).

Thanks in advance!

The easy answer is to pass autodiff=false to the solver. It will then use finite differencing.

Yes, as I mentioned, we tried this and it’s horribly slow. We’re looking into implicit solvers to beat the performance of our best explicit solver.

It looks like the error message suggests improving compatibility with ForwardDiff.jl. The problem then is likely that you allocate Float64 arrays instead of generically typed arrays. How do your allocations look? Can you manually compute the Jacobian using ForwardDiff?
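One quick sanity check, assuming the standard in-place f(du, u, p, t) signature (the f! and u0 below are dummy stand-ins for your function and state):

using ForwardDiff

# Dummy stand-ins; swap in your own RHS and state vector.
f!(du, u, p, t) = (du .= -u; nothing)
u0 = rand(10)

# If this direct call works, the forward-mode AD inside the implicit solvers should too.
J = ForwardDiff.jacobian((du, u) -> f!(du, u, nothing, 0.0), similar(u0), u0)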

It does indeed look like you hard-code types everywhere; don’t do that :wink:


You can use the implicit solvers without AD. For example, pass FBDF(autodiff=false) as the solver.
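A minimal sketch of that, with a toy problem standing in for the real PDE:

using OrdinaryDiffEq

# Toy problem standing in for the ice flow PDE.
f!(du, u, p, t) = (du .= -u; nothing)
prob = ODEProblem(f!, ones(10), (0.0, 1.0))

# autodiff = false makes the solver build its Jacobian with finite differences,
# so the RHS never sees Dual numbers.
sol = solve(prob, FBDF(autodiff = false))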

Note that using finite differencing in the solver does not make a huge impact on performance unless it’s a Rodas method. It’s more like a 2x-4x performance difference, because it only affects the forward-mode AD used for the Jacobian and has nothing to do with the adjoints of the back solve, which is what really matters.

That said, for PDEs the issue is that Rosenbrock23 is a bad idea. As the docs mention, it’s not a method that scales well to larger systems. Did you try FBDF or KenCarp47? Those are more sensible algorithms for large equations. And then, when optimizing, you should look into the tutorial on handling large systems:

Setting up sparse Jacobians and ILU or multigrid preconditioning is such a huge boost that you should always do it for PDEs.
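For reference, the workflow from that tutorial looks roughly like this; the names below (rhs!, u0, tspan, the τ cutoff, the solver choice) are placeholders to tune for the actual SIA1D! problem, not a drop-in setup:

using OrdinaryDiffEq, LinearSolve, Symbolics, SparseArrays, IncompleteLU

# Placeholder 1-D diffusion RHS standing in for SIA1D!.
function rhs!(du, u, p, t)
    n = length(u)
    du[1] = u[2] - 2u[1]
    for i in 2:(n - 1)
        du[i] = u[i - 1] - 2u[i] + u[i + 1]
    end
    du[n] = u[n - 1] - 2u[n]
    return nothing
end

u0 = rand(100); tspan = (0.0, 1.0); p = nothing

# 1. Detect the Jacobian sparsity pattern symbolically and attach it.
jac_sparsity = Symbolics.jacobian_sparsity((du, u) -> rhs!(du, u, p, 0.0), similar(u0), u0)
f_sparse = ODEFunction(rhs!; jac_prototype = float.(jac_sparsity))
prob = ODEProblem(f_sparse, u0, tspan, p)

# 2. ILU preconditioner rebuilt from the W matrix whenever the solver updates it
#    (τ controls the drop tolerance and needs tuning per problem).
function incompletelu(W, du, u, p, t, newW, Plprev, Prprev, solverdata)
    if newW === nothing || newW
        Pl = ilu(convert(AbstractMatrix, W), τ = 50.0)
    else
        Pl = Plprev
    end
    Pl, nothing
end

# 3. Krylov linear solver plus preconditioner on a method that scales to large systems.
sol = solve(prob, KenCarp47(linsolve = KrylovJL_GMRES(), precs = incompletelu,
                            concrete_jac = true), save_everystep = false)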


I am working with @JordiBolibar on this ice flow diffusivity PDE issue. Actually, the latest version of the f function that we use can be found here and the latest benchmark is available here.

We tried using FBDF and KenCarp47, and it is still at least 10 times slower than when using explicit solvers.

Also, when trying to set up sparse Jacobians with
u0 = iceflow_model.H
du0 = zeros(Float64,iceflow_model.nx)
jac_sparsity = Symbolics.jacobian_sparsity((du, u) -> SIA1D!(du, u, iceflow_model, 0.0), du0,u0)
we get the following error:

 MethodError: no method matching SIA1D!(::Vector{Num}, ::Vector{Num}, ::SIA1Dmodel{Float64, Int64}, ::Float64)
Closest candidates are:
  SIA1D!(!Matched::Vector{Float64}, !Matched::Vector{Float64}, ::SIA1Dmodel{Float64, Int64}, ::Float64) at ~/oggm/oggm/core/SIA1D_utils.jl:14
Stacktrace:
 [1] (::var"#10#13"{SIA1Dmodel{Float64, Int64}})(du::Vector{Num}, u::Vector{Num})
   @ Main ~/oggm/oggm/core/SIA1D_utils.jl:174
 [2] jacobian_sparsity(::var"#10#13"{SIA1Dmodel{Float64, Int64}}, ::Vector{Float64}, ::Vector{Float64}; kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
   @ Symbolics ~/.julia/packages/Symbolics/BQlmn/src/diff.jl:584
 [3] jacobian_sparsity(::Function, ::Vector{Float64}, ::Vector{Float64})
   @ Symbolics ~/.julia/packages/Symbolics/BQlmn/src/diff.jl:579

Your caches force Float64, so that’s of course going to fail on AD and sparsity detection. That’s what PreallocationTools.jl is for:
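A minimal sketch of the pattern, with made-up names rather than the actual SIA1D! internals:

using PreallocationTools, ForwardDiff

# Made-up stand-in for the model struct: the preallocated buffer is wrapped in a
# DiffCache so it can hold Float64 or Dual numbers depending on the input.
struct Model{C}
    buf::C
end

function rhs!(du, u, m::Model, t)      # note: no ::Vector{Float64} annotations
    tmp = get_tmp(m.buf, u)            # buffer whose eltype matches eltype(u)
    @. tmp = u^2
    @. du = -tmp
    return nothing
end

m = Model(DiffCache(zeros(5)))
u0 = rand(5)
rhs!(similar(u0), u0, m, 0.0)                                          # plain Float64 path
ForwardDiff.jacobian((du, u) -> rhs!(du, u, m, 0.0), similar(u0), u0)  # Dual path the implicit solvers need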

To see if this is a direction you should go, did you try using GMRES without a preconditioner and see how that does?

Thanks for your message; by using PreallocationTools.jl we were able to use AD with implicit solvers such as KenCarp47 or FBDF.

However, they still perform at least 3 times slower than the explicit solver we were using previously (RDPK3Sp35), even when setting up sparse Jacobians and using GMRES (i.e. adding linsolve = KrylovJL_GMRES() as a solver argument, if that’s what you meant).


You need to tune KrylovJL_GMRES(): pass in verbose mode and see how fast it converges.
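If I’m reading the LinearSolve.jl wrapper right, keyword arguments to KrylovJL_GMRES() get forwarded to Krylov.jl’s gmres, so something along these lines should print the residual history; treat the exact keyword as an assumption to double-check:

using OrdinaryDiffEq, LinearSolve

# Assumption: kwargs to KrylovJL_GMRES are forwarded to Krylov.jl's gmres, whose
# `verbose` option prints per-iteration residual norms.
alg = FBDF(linsolve = KrylovJL_GMRES(verbose = 1))
# sol = solve(prob, alg)   # with your ODEProblem `prob`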

Are you sure the equation is stiff? What makes you think so?

If it’s on the edge, did you try ROCK2() or ROCK4()? There are some PDE cases with “not tiny but not large” real-valued eigenvalues (i.e. a Laplacian) where these are the most efficient solvers.
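For completeness, these are drop-in algorithm swaps (toy problem again as a stand-in):

using OrdinaryDiffEq

# ROCK2/ROCK4 are stabilized explicit (orthogonal Runge-Kutta-Chebyshev) methods:
# they handle moderate real-valued stiffness, e.g. from a Laplacian, without any Jacobian.
f!(du, u, p, t) = (du .= -u; nothing)
prob = ODEProblem(f!, ones(10), (0.0, 1.0))
sol = solve(prob, ROCK4())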

Which preconditioner? Did you use ilu and tune the cutoff?
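For reference, the cutoff is the τ keyword of IncompleteLU.ilu; a toy sketch of the knob:

using SparseArrays, IncompleteLU

# Toy matrix standing in for the solver's W. Smaller τ keeps more fill-in
# (stronger but costlier preconditioner); larger τ drops more entries (cheaper but weaker).
A = spdiagm(-1 => fill(-1.0, 99), 0 => fill(2.0, 100), 1 => fill(-1.0, 99))
Pl_strong = ilu(A, τ = 0.01)
Pl_cheap  = ilu(A, τ = 10.0)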