Linsolve on SparseMatrixCSC{BigFloat}

I want to solve a fairly large but sparse DAE system using DifferentialEquations.jl.

I can do it with decent efficiency using Float64s.
Now I would like to do the same thing with higher-precision numbers (Double64, BigFloat… you name it) to cover some edge cases.
The problem is that the Jacobian is a SparseMatrixCSC. At that point the linsolves no longer work, since lu is not specialized for SparseMatrixCSC{BigFloat}.
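A minimal illustration of the failure (the matrix here is a stand-in, not my actual Jacobian; the exact error text may differ between Julia versions):

```julia
using SparseArrays, LinearAlgebra

# A small sparse BigFloat matrix, diagonally dominant so it is well-conditioned.
A = sparse(big.(rand(4, 4)) + 4I)   # SparseMatrixCSC{BigFloat, Int64}

# lu(A) errors here: the sparse lu is backed by SuiteSparse/UMFPACK,
# which only supports Float64 and ComplexF64 element types, and there
# is no generic pure-Julia sparse LU fallback.
# lu(A)
```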

Not sure if BlockBandedMatrices.jl is an option with such a sparsity pattern:

julia> c.Jac
2524×2524 SparseArrays.SparseMatrixCSC{Float64, Int64} with 21124 stored entries:

A dense fallback is not an option either, since it would take forever at this size.
So I tried the iterative solvers from IterativeSolvers.jl, which are implemented in pure Julia and thus can handle the sparse Jacobians generically. But then I ran into the next problem: every solver I tried produced instability warnings, and the integration of the DAE stopped after one or two steps. My first thought was that this was due to the Rosenbrock integrators, which to my knowledge demand accurate Jacobian solves that the iterative solvers could not deliver, but other algorithms fail as well.
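For reference, a sketch of the kind of generic iterative solve I mean (the matrix and tolerance are placeholders; the `reltol` keyword name assumes a recent IterativeSolvers.jl release, older versions used `tol`):

```julia
using SparseArrays, IterativeSolvers

# Placeholder sparse BigFloat system (tridiagonal Poisson-like matrix).
n = 100
A = spdiagm(-1 => fill(big"-1.0", n - 1),
             0 => fill(big"2.0", n),
             1 => fill(big"-1.0", n - 1))
b = ones(BigFloat, n)

# gmres is pure Julia, so it works for any element type.
# At BigFloat precision the default tolerance is far too loose,
# so it must be tightened explicitly.
x = gmres(A, b; reltol = big"1e-30", maxiter = 500)
```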

So my question is: how does one solve sparse DAEs with higher-precision numbers? I have been fighting with this for a week now and would be thankful for any hint toward a solution.

At what tolerance was the iterative solve set? If you don’t lower it from the default, it may be too loose.

I think iterative solvers might be your only way to go until we have a generic sparse LU.

Solved / mitigated the problem now by using IncompleteLU.jl with a close-to-zero drop tolerance as a fallback for lu(::SparseMatrixCSC{BigFloat}). Still slow, but faster than a dense fallback.
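A rough sketch of the workaround (the matrix and tolerances are illustrative; the `τ` keyword is IncompleteLU.jl's drop tolerance, and driving it toward zero keeps almost all fill-in, so the incomplete factorization approaches a full LU):

```julia
using SparseArrays, IncompleteLU, IterativeSolvers

# Placeholder sparse BigFloat system.
n = 100
A = spdiagm(-1 => fill(big"-1.0", n - 1),
             0 => fill(big"2.0", n),
             1 => fill(big"-1.0", n - 1))
b = ones(BigFloat, n)

# Incomplete LU with a near-zero drop tolerance: nearly exact,
# and implemented in pure Julia, so it accepts BigFloat.
F = ilu(A, τ = big"1e-20")

# Used as a left preconditioner, it stabilizes the iterative solve.
x = gmres(A, b; Pl = F, reltol = big"1e-30")
```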

At what tolerance was the iterative solve set? If you don’t lower it from the default, it may be too loose.

It fails with a “not converged” (when verbosity is on) irrespective of the tolerance specified. I’ll try to extract the matrix in question from the integrator to give a full MWE, but I am quite busy right now.