I’m trying to clarify a bit of terminology that is popping up in the DifferentialEquations.jl documentation relating to the “problem size” and “large systems.”

I have a linear matrix differential equation \partial_t \vec{u}(t) = \hat{A}(t)\vec{u}(t), where dim(\hat{A}(t)) = Nd \times Nd and dim(\vec{u}(t)) = Nd \times d, and both are filled with ComplexF64 values. In the context of DifferentialEquations.jl, do I have Nd \times d ODEs or just one? I ask because I'm trying to determine a good class of algorithms to use when I need high accuracy (\varepsilon < 10^{-9}) for a system like the above with N \approx \mathcal{O}(10^2) and d \approx \mathcal{O}(10^2).

My impression is that I really have Nd \times d \approx \mathcal{O}(10^6) scalar ODEs, even though I am specifying the system in linear-algebraic form. Is this correct?
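For a concrete illustration of the counting question: the solver treats every entry of the state array as one scalar ODE, so the ODE count is `length(u0)`. A minimal sketch with made-up toy sizes (the real problem has N ≈ d ≈ 100):

```julia
using LinearAlgebra

# Toy sizes, purely illustrative
N, d = 3, 2
A  = rand(ComplexF64, N*d, N*d)   # stand-in for A(t)
u0 = rand(ComplexF64, N*d, d)     # matrix-valued state

# In-place RHS: du .= A*u without allocating
f!(du, u, p, t) = mul!(du, A, u)

# The integrator sees length(u0) = Nd*d scalar ODEs,
# regardless of the linear-algebraic way the RHS is written.
println(length(u0))  # → 12
```

So here there are 3·2·2 = 12 scalar ODEs; at N ≈ d ≈ 100 that is indeed \mathcal{O}(10^6).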

Also, it turns out that \hat{A}(t) is quite sparse, and I am currently using the DiffEqOperator method to define the system with a sparse array. How can I accelerate this implementation? Should I be using a more sophisticated array type? I'm currently letting DifferentialEquations.jl choose the algorithm.

I haven't benchmarked this area adequately, but my guess is that you will want to specialize on this structure. You can define your function as a DiffEqOperator with a sparse array and use a Magnus integrator, which is specifically for u' = A(t)u types of problems. You're going to want to specify it directly, since these methods are pretty new and haven't been added to the automatic algorithm handler yet.
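A minimal sketch of that setup, assuming the `DiffEqArrayOperator` constructor and the `MagnusGL6` method from OrdinaryDiffEq (the `krylov` keyword for sparse exponentials is also an assumption; check the current docs for the exact names). The toy sizes and the cos(t) time dependence are made up:

```julia
using OrdinaryDiffEq, SparseArrays

# Toy sizes; the real problem has N ≈ d ≈ 100
N, d = 4, 2
n = N*d

# Hypothetical sparse structure: a tridiagonal base matrix
A0 = spdiagm(-1 => ones(ComplexF64, n-1),
              0 => ones(ComplexF64, n),
              1 => ones(ComplexF64, n-1))

# update_func mutates A in place to hold A(t); here A(t) = cos(t)*A0
A = DiffEqArrayOperator(copy(A0);
                        update_func = (A, u, p, t) -> (A .= cos(t) .* A0))

u0 = rand(ComplexF64, n, d)
prob = ODEProblem(A, u0, (0.0, 1.0))

# Magnus methods are fixed-step: dt must be given explicitly.
# krylov = true (if supported) avoids forming dense matrix exponentials.
sol = solve(prob, MagnusGL6(krylov = true), dt = 1/100)
```

The Magnus method exponentiates a quadrature of A(t) over each step, which is where the linear structure pays off compared with a generic Runge-Kutta method.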

Thanks Chris. I have played around with the Magnus integrators, but I've observed that they are slower than the default solver at similar accuracy (or at least what I believe is similar accuracy: the Magnus integrators take an explicit time step, which I have estimated to give accuracy comparable to the default solver's). Perhaps I haven't scaled the problem up enough yet to see an advantage from the Magnus integrators. I'll probe it a bit more and try a few of the different Magnus methods.
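One way to make the accuracy matching concrete instead of estimated: solve a tight-tolerance reference first, then measure the Magnus error directly at each trial dt. A sketch on a hypothetical toy problem (`Vern9`, `MagnusGL6`, and `DiffEqArrayOperator` are assumed from OrdinaryDiffEq; everything else is made up):

```julia
using OrdinaryDiffEq, LinearAlgebra

# Hypothetical toy problem: u' = cos(t)*A0*u with a small dense A0
n, d = 8, 2
A0 = rand(ComplexF64, n, n)
u0 = rand(ComplexF64, n, d)
tspan = (0.0, 1.0)

# Reference solution at tolerances well below the 1e-9 target
f!(du, u, p, t) = mul!(du, A0, u, cos(t), false)   # du = cos(t)*A0*u
ref = solve(ODEProblem(f!, u0, tspan), Vern9(),
            abstol = 1e-14, reltol = 1e-14)

# Same system as an operator for the fixed-step Magnus method
A = DiffEqArrayOperator(copy(A0);
                        update_func = (A, u, p, t) -> (A .= cos(t) .* A0))
prob = ODEProblem(A, u0, tspan)

# Measured error vs. dt, rather than an estimated equivalence
for dt in (1/10, 1/20, 1/40)
    sol = solve(prob, MagnusGL6(), dt = dt)
    println("dt = $dt  rel. error = ",
            norm(sol[end] .- ref[end]) / norm(ref[end]))
end
```

Timing each `solve` at the dt that actually hits the target error gives a fair work-precision comparison between the Magnus methods and the default solver.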