I am writing a domain-specific simulator built on top of DifferentialEquations.jl. I am solving a mathematically simple but large and expensive system of ODEs (>10,000 equations), and I wish to minimise memory usage by setting save_everystep=false and periodically exporting some data to disk.
With no exporting to disk and using the SciML
integrator interface, a simulation can finish in 27 function evaluations (as shown by
integrator.stats); however, adding the periodic saving increases that to 603 function evaluations, whether I use DiffEqCallbacks or call step!(integrator, next_export_time, true) explicitly. I also set u_modified!(integrator, false) to avoid unnecessary saves, since no discontinuities are introduced.
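For concreteness, here is a stripped-down sketch of the periodic-export loop I described, using a toy 1-D problem in place of the large system (the real disk export is replaced by pushing into a vector; names like exported are placeholders, not my actual code):

```julia
using OrdinaryDiffEq  # solver core of DifferentialEquations.jl

# Toy 1-D stand-in for the large system.
f(u, p, t) = -u
prob = ODEProblem(f, 1.0, (0.0, 10.0))
integrator = init(prob, Tsit5(); save_everystep = false)

exported = Tuple{Float64, Float64}[]
for t_export in 1.0:1.0:10.0
    # Stop exactly at the export time (third argument = stop_at_tdt).
    step!(integrator, t_export - integrator.t, true)
    push!(exported, (integrator.t, integrator.u))  # stands in for writing to disk
    u_modified!(integrator, false)  # state was only read, not modified
end
```

This is the version whose function-evaluation count blows up relative to the unconstrained run.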
Would it make sense to call
step!(integrator) - i.e. take the maximal successful time step - and then use
change_t_via_interpolation! to “go back” to each data-exporting timepoint between
integrator.tprev and integrator.t? Do the ODE higher-order interpolants still apply when
save_everystep=false, but only within the most recent step?
I implemented this “maximal stepping, then back-interpolating” approach (though it’s quite a bit of code; I’m hoping this post is enough to spot any problems with the general approach, but if an MWE is needed please let me know). However, the calls to
change_t_via_interpolation! seem to destroy the integration accuracy in subsequent timesteps. In other words, after incrementally interpolating back to each export timepoint, calling
change_t_via_interpolation!(integrator, t) does not restore the integrator to the state it had at
t. Is that expected, and is there a smarter way to let the integrator take maximal steps (and minimise function evaluations) while interpolating intra-step to export some data? I know interpolation takes some time, but I expect it to be much cheaper than extra function evaluations.
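A rough sketch of the shape of my loop, again on a toy 1-D problem (the real export is replaced by pushing into a vector; export_with_rewind! is just an illustrative name):

```julia
using OrdinaryDiffEq  # solver core of DifferentialEquations.jl

function export_with_rewind!(integrator, export_times, t_end)
    exported = Tuple{Float64, Float64}[]
    next = 1
    while integrator.t < t_end
        step!(integrator)      # take the maximal successful adaptive step
        t_step = integrator.t  # remember where the step ended
        # "Go back" to every export timepoint this step crossed over.
        while next <= length(export_times) && export_times[next] <= t_step
            change_t_via_interpolation!(integrator, export_times[next])
            push!(exported, (integrator.t, integrator.u))  # stands in for disk export
            next += 1
        end
        # Return to the end of the step before continuing; this is
        # where the accuracy appears to degrade for me.
        if integrator.t != t_step
            change_t_via_interpolation!(integrator, t_step)
        end
        u_modified!(integrator, false)
    end
    return exported
end

f(u, p, t) = -u
prob = ODEProblem(f, 1.0, (0.0, 10.0))
integrator = init(prob, Tsit5(); save_everystep = false)
exported = export_with_rewind!(integrator, collect(1.0:1.0:10.0), 10.0)
```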