Well, absolutely. You start with your ODE solver, which you set to follow the gradient flow. Then you change the descent direction to a more appropriate one: maybe you use the Hessian to follow a suitably regularized Newton flow, or maybe you incorporate some curvature information with a quasi-Newton algorithm. Then you change the stepsize control so that steps adapt to a more sensible criterion than ODE trajectory accuracy - for instance, the decrease of the objective function. Maybe you even ensure you don't take steps that are too small with a clever line search. Congratulations, you've now implemented a state-of-the-art optimizer.
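To make the point concrete, here's a minimal sketch of the end state of that progression (keeping plain gradient descent as the direction for brevity): explicit Euler on the gradient flow, where the "stepsize control" is an Armijo backtracking line search driven by objective decrease rather than ODE accuracy. The test function and all parameter values are illustrative.

```python
import numpy as np

def grad_descent_armijo(f, grad, x0, alpha0=1.0, c=1e-4, rho=0.5,
                        tol=1e-8, max_iter=1000):
    """Explicit Euler on the gradient flow dx/dt = -grad f(x),
    with the 'stepsize' chosen by sufficient decrease of f
    (Armijo backtracking) instead of trajectory accuracy."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        t = alpha0
        # Shrink the step until the Armijo sufficient-decrease condition holds.
        while f(x - t * g) > f(x) - c * t * g.dot(g):
            t *= rho
        x = x - t * g
    return x

# Illustrative example: a simple quadratic with minimizer at (-1, 0).
f = lambda x: 0.5 * x.dot(x) + x[0]
grad = lambda x: x + np.array([1.0, 0.0])
x_star = grad_descent_armijo(f, grad, np.array([3.0, -2.0]))
```

Swapping `g` for a (regularized) Newton or quasi-Newton direction in this loop is exactly the upgrade path described above.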
I'm not saying that what you suggest is intrinsically a bad idea; it's just that this has been studied to death over decades, and plugging an ODE solver into an optimization problem is not by itself a sufficiently new idea to beat standard schemes. However, you never know: sometimes a sufficiently different idea does yield something useful - for instance FIRE (http://users.jyu.fi/~pekkosk/resources/pdf/FIRE.pdf) is supposedly better than standard algorithms at certain types of optimization problems.
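For the curious, here is a rough sketch of FIRE's core update as I read it from the paper: damped molecular dynamics where the velocity is steered toward the force direction, the timestep grows while the system keeps moving "downhill" (power P = F·v > 0), and motion is frozen and the timestep cut whenever it overshoots. All parameter values below are the paper's suggested defaults or illustrative choices; this is not a production implementation.

```python
import numpy as np

def fire_minimize(grad, x0, dt=0.1, dt_max=1.0, n_min=5,
                  f_inc=1.1, f_dec=0.5, alpha_start=0.1, f_alpha=0.99,
                  tol=1e-6, max_iter=10000):
    """Sketch of FIRE (Fast Inertial Relaxation Engine)."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    alpha, n_pos = alpha_start, 0
    for _ in range(max_iter):
        F = -grad(x)                      # force = negative gradient
        if np.linalg.norm(F) < tol:
            break
        P = F.dot(v)                      # power: are we going downhill?
        if P > 0:
            n_pos += 1
            # Mix the velocity toward the force direction.
            v = (1 - alpha) * v + alpha * np.linalg.norm(v) * F / np.linalg.norm(F)
            if n_pos > n_min:             # accelerate after sustained descent
                dt = min(dt * f_inc, dt_max)
                alpha *= f_alpha
        else:                             # overshoot: freeze and slow down
            n_pos, v = 0, np.zeros_like(x)
            dt *= f_dec
            alpha = alpha_start
        # Semi-implicit Euler MD step.
        v = v + dt * F
        x = x + dt * v
    return x

# Illustrative example: quadratic bowl with minimizer at 1.
x_min = fire_minimize(lambda x: x - 1.0, np.array([0.0]))
```

Note how different this is in spirit from a generic adaptive ODE solver: the timestep adaptation is driven by the power F·v, not by local truncation error.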