We are trying to benchmark implicit solvers for an ice flow diffusivity PDE. So far we’ve stuck to explicit ones, but now we would like to cover all of them. Since many implicit solvers from DifferentialEquations.jl require AD, we’re currently running into the following error:
```
First call to automatic differentiation for the Jacobian
failed. This means that the user `f` function is not compatible
with automatic differentiation. Methods to fix this include:
1. Turn off automatic differentiation (e.g. Rosenbrock23() becomes
   Rosenbrock23(autodiff=false)). More details can be found at
   https://docs.sciml.ai/DiffEqDocs/stable/features/performance_overloads/
2. Improving the compatibility of `f` with ForwardDiff.jl automatic
   differentiation (using tools like PreallocationTools.jl). More details
   can be found at https://docs.sciml.ai/DiffEqDocs/stable/basics/faq/#Autodifferentiation-and-Dual-Numbers
3. Defining analytical Jacobians. More details can be
   found at https://docs.sciml.ai/DiffEqDocs/stable/types/ode_types/#SciMLBase.ODEFunction
```
Since our f function is really optimized to avoid almost all memory allocations, I guess the problem comes from the usual fact that AD doesn’t like mutation. We have tried deactivating AD, but it’s terribly slow.
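For reference, deactivating AD just means something like this (a toy right-hand side standing in for our real one; the solver is the one named in the error message):

```julia
using DifferentialEquations

# Toy stand-in for the real ice flow rhs, just to show the call pattern.
f!(du, u, p, t) = (du .= -u; nothing)
prob = ODEProblem(f!, ones(10), (0.0, 1.0))

# Option 1 from the error message: finite differences instead of ForwardDiff.
sol = solve(prob, Rosenbrock23(autodiff = false))
```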
So my question is: what should one do in this case? The documentation is really scarce, and we haven’t found any clear examples on how to work around this issue. The code can be found here, and here’s the repository with all the benchmarks (with explicit solvers so far).
It looks like the error message suggests improving compatibility with ForwardDiff.jl. The problem then is likely that you allocate Float64 arrays instead of generically typed arrays. How do your allocations look? Can you manually compute the Jacobian using ForwardDiff?
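Something along these lines, wrapping your in-place rhs in a closure (the SIA1D!, iceflow_model, and u0 names are just placeholders for your own function, parameters, and state):

```julia
using ForwardDiff

# Wrap the in-place rhs so ForwardDiff sees a (du, u) -> nothing function;
# if this call throws, the rhs itself is what breaks Dual numbers.
rhs! = (du, u) -> SIA1D!(du, u, iceflow_model, 0.0)
du0 = similar(u0)
J = ForwardDiff.jacobian(rhs!, du0, u0)
```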
It does indeed look like you hard-code types everywhere; don’t do that.
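I mean things like this (illustrative only, not your actual code):

```julia
# Hard-coded element type: fails when the solver feeds in Dual numbers,
# because a Float64 buffer cannot store them.
function rhs_hardcoded!(du, u, p, t)
    flux = zeros(Float64, length(u))
    @. flux = u^2
    @. du = flux
    return nothing
end

# Generic element type: the buffer follows eltype(u), so it works for both
# Float64 and ForwardDiff.Dual inputs.
function rhs_generic!(du, u, p, t)
    flux = zeros(eltype(u), length(u))   # or simply `similar(u)`
    @. flux = u^2
    @. du = flux
    return nothing
end
```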
Note that using finite differences instead of AD inside the solver does not make a huge impact on performance unless it’s a Rodas method. It’s more like a 2x-4x performance difference, because it’s forward-mode AD of the Jacobian and has nothing to do with the adjoints of the back solve, which is what really matters.
That said, for PDEs the issue is that Rosenbrock23 is a bad idea. As the docs mention, it’s not a method that scales well to larger systems. Did you try FBDF or KenCarp47? Those are more sensible algorithms for large equations. And then, when optimizing further, you should look into the tutorial on handling large stiff systems:
Setting up sparse Jacobians and iLU or multigrid preconditioning is such a huge boost that you should always do it for PDEs.
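Roughly, that setup looks like the following sketch, in the spirit of the large stiff equations tutorial (SIA1D!, iceflow_model, u0, and tspan stand in for your own objects, and τ = 50.0 is only a starting value to tune):

```julia
using DifferentialEquations, Symbolics, LinearSolve, IncompleteLU

# 1. Detect the Jacobian sparsity pattern and pass it as a prototype.
du0 = similar(u0)
jac_sparsity = Symbolics.jacobian_sparsity(
    (du, u) -> SIA1D!(du, u, iceflow_model, 0.0), du0, u0)
f_sparse = ODEFunction(SIA1D!; jac_prototype = float.(jac_sparsity))
prob = ODEProblem(f_sparse, u0, tspan, iceflow_model)

# 2. Incomplete LU preconditioner, rebuilt only when W changes;
#    τ is the drop tolerance and should be tuned per problem.
function incompletelu(W, du, u, p, t, newW, Plprev, Prprev, solverdata)
    if newW === nothing || newW
        Pl = ilu(convert(AbstractMatrix, W), τ = 50.0)
    else
        Pl = Plprev
    end
    return Pl, nothing
end

# 3. Large-system-friendly implicit solver with GMRES and the preconditioner.
sol = solve(prob, KenCarp47(linsolve = KrylovJL_GMRES(),
                            precs = incompletelu, concrete_jac = true))
```

FBDF takes the same linsolve/precs keywords if the BDF family turns out to be faster on this problem.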
I am working with @JordiBolibar on this ice flow diffusivity PDE issue. Actually, the latest version of the f function that we use can be found here and the latest benchmark is available here.
We tried using FBDF and KenCarp47, and they are still at least 10 times slower than when using explicit solvers.
Also, when trying to set up sparse Jacobians with

```julia
u0 = iceflow_model.H
du0 = zeros(Float64, iceflow_model.nx)
jac_sparsity = Symbolics.jacobian_sparsity((du, u) -> SIA1D!(du, u, iceflow_model, 0.0), du0, u0)
```

we get the following error:
Thanks for your message; by using PreallocationTools.jl we were able to use AD with implicit solvers such as KenCarp47 or FBDF.
However, they still perform at least 3 times slower than the explicit solver we were using previously (RDPK3Sp35), even when setting up sparse Jacobians and using GMRES (i.e. adding linsolve = KrylovJL_GMRES() as a solver argument, if that’s what you meant).
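For the record, the PreallocationTools.jl pattern we followed looks roughly like this (illustrative right-hand side, not our real SIA1D! code):

```julia
using PreallocationTools, DifferentialEquations

nx = 100
# Each preallocated work array is wrapped in a DiffCache so it can hold
# either Float64 or ForwardDiff.Dual entries.
p = (cache = DiffCache(zeros(nx)),)

function rhs!(du, u, p, t)
    tmp = get_tmp(p.cache, u)   # buffer whose eltype matches u
    @. tmp = -u                 # placeholder for the diffusivity/flux computation
    @. du = tmp
    return nothing
end

prob = ODEProblem(rhs!, ones(nx), (0.0, 1.0), p)
sol = solve(prob, KenCarp47())  # ForwardDiff now works through the cache
```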
Are you sure the equation is stiff? What makes you think so?
If it’s on the edge, did you try ROCK2() or ROCK4()? There are some PDE cases with “not tiny but not large, real-valued eigenvalues” (i.e. a Laplacian) where these are the most efficient solvers.
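i.e. something like this, reusing the existing ODEProblem:

```julia
using DifferentialEquations

# Stabilized explicit (Runge-Kutta-Chebyshev) methods: no Jacobians or
# linear solves, but much larger stability regions along the negative real axis.
sol2 = solve(prob, ROCK2())
sol4 = solve(prob, ROCK4())
```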
What preconditioner? Did you use ilu and tune the cutoff?
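For instance, a quick sweep over the iLU drop tolerance (hypothetical τ values, reusing the precs pattern sketched earlier in the thread) usually shows whether the cutoff is the bottleneck:

```julia
using IncompleteLU, LinearSolve

# Rebuild the preconditioner on every W update for simplicity; compare timings.
for τ in (2.0, 50.0, 500.0)
    precs = (W, du, u, p, t, newW, Plprev, Prprev, solverdata) ->
        (ilu(convert(AbstractMatrix, W), τ = τ), nothing)
    @time solve(prob, KenCarp47(linsolve = KrylovJL_GMRES(),
                                precs = precs, concrete_jac = true))
end
```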