I sent this thread to a friend, who wrote:
Possibly relevant: this work ([1805.07199] Explicit Stabilised Gradient Descent for Faster Strongly Convex Optimisation) constructs a generalisation of gradient descent by discretising the gradient-flow ODE with a Runge-Kutta-Chebyshev method. The point is not that this lets you track the flow more accurately, but that this choice of discretisation (designed for solving stiff ODEs with an explicit integrator) lets you take a larger time-step and converge more quickly. Some similar ideas are explored in ([1805.00521] Direct Runge-Kutta Discretization Achieves Acceleration).
and I thought I’d pass on the message in case those ideas are of interest.
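To unpack the mechanism a little: plain gradient descent with step size h is the explicit Euler discretisation of the gradient flow dx/dt = -∇f(x), and Euler is only stable for h up to about 2/L when ∇f is L-Lipschitz. A Runge-Kutta-Chebyshev scheme replaces the single Euler update with s chained internal stages whose combined stability interval on the negative real axis grows like 2s², so one step can be roughly s² longer at the cost of s gradient evaluations. Here is a minimal sketch of a first-order RKC step applied to gradient flow; this is my own illustration, not the paper's code, and the function name, the damping parameter eps, and its default value are standard RKC conventions rather than anything taken from the paper:

```python
import numpy as np

def rkc1_step(grad, x, h, s, eps=0.05):
    """One step of a first-order Runge-Kutta-Chebyshev scheme for the
    gradient flow dx/dt = -grad(x).

    s internal stages give a real stability interval of size roughly
    2*s**2, so h can be about s**2 larger than explicit Euler allows,
    at the cost of s gradient evaluations per step. eps is a small
    damping parameter (illustrative default, a common RKC choice).
    """
    w0 = 1.0 + eps / s**2
    # Chebyshev polynomials T_j(w0) and derivatives T_j'(w0) by recurrence.
    T, dT = np.zeros(s + 1), np.zeros(s + 1)
    T[0], T[1] = 1.0, w0
    dT[0], dT[1] = 0.0, 1.0
    for j in range(2, s + 1):
        T[j] = 2.0 * w0 * T[j - 1] - T[j - 2]
        dT[j] = 2.0 * T[j - 1] + 2.0 * w0 * dT[j - 1] - dT[j - 2]
    w1 = T[s] / dT[s]       # chosen so the step is first-order consistent
    b = 1.0 / T             # b_j = 1 / T_j(w0)

    # First stage is a short Euler-like step; later stages follow the
    # three-term Chebyshev recurrence.
    y_prev2 = x
    y_prev1 = x + (b[1] * w1) * h * (-grad(x))
    for j in range(2, s + 1):
        mu = 2.0 * b[j] * w0 / b[j - 1]
        nu = -b[j] / b[j - 2]
        mu_t = 2.0 * b[j] * w1 / b[j - 1]
        y = mu * y_prev1 + nu * y_prev2 + mu_t * h * (-grad(y_prev1))
        y_prev2, y_prev1 = y_prev1, y
    return y_prev1
```

And a quick demonstration on an ill-conditioned quadratic (the matrix and all parameter values below are made up for illustration):

```python
# f(x) = 0.5 * x^T A x with Hessian eigenvalues in [1, 1e4].
rng = np.random.default_rng(0)
L_max = 1e4
A = np.diag(np.linspace(1.0, L_max, 50))
grad = lambda x: A @ x

x = rng.standard_normal(50)
s = 30
h = 1.9 * s**2 / L_max      # ~s^2 beyond Euler's stability limit of 2/L
for _ in range(20):
    x = rkc1_step(grad, x, h, s)
print(np.linalg.norm(x))    # norm decays despite h being far above 2/L
```

Loosely, one RKC step with s stages travels about s times further per gradient evaluation than s plain Euler steps would, which is where the speed-up in the paper comes from; the paper itself works out how to choose s and h to get an accelerated convergence rate for strongly convex f.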