Discussion on the future for performant programming in Julia

Nope. Correction, that’s for forward sensitivity analysis, which is equivalent to forward-mode AD (with a few differences in computation order). Forward-mode AD of the solver works without any code from SciMLSensitivity, which is why it’s performed even if that package isn’t loaded.
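To make that concrete, here is a minimal sketch of differentiating through a solve with ForwardDiff alone, no SciMLSensitivity loaded. It assumes OrdinaryDiffEq.jl and ForwardDiff.jl are installed; the specific problem and tolerances are illustrative:

```julia
using OrdinaryDiffEq, ForwardDiff

# u' = p*u, u(0) = 1, so u(1) = exp(p) and du(1)/dp = exp(p).
function final_state(p)
    f(u, p, t) = p[1] * u
    prob = ODEProblem(f, 1.0, (0.0, 1.0), p)
    solve(prob, Tsit5(); abstol = 1e-10, reltol = 1e-10).u[end]
end

# Dual numbers flow straight through the solver's arithmetic,
# so no sensitivity package is needed for this to work:
ForwardDiff.derivative(p -> final_state([p]), 1.0)  # ≈ exp(1)
```

This is exactly why forward-mode works "for free": the solver is generic Julia code, so the dual-number arithmetic carries the derivative alongside the primal solve.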

It’s very common in HPC to have strong memory bounds and have to know the exact memory requirement. This way we can enforce strong real-time guarantees on the amount of memory that is required by the methods. You can see this in action in the low-storage RK methods which actually test this property to ensure we have exactly the theoretical number of arrays allocated.
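To illustrate the low-storage idea in plain Julia, here is a toy Williamson-style 2N Runge-Kutta stepper whose registers are all preallocated once, so the stepping loop itself needs no new arrays. This is a sketch of the property being tested, not DiffEq's actual implementation:

```julia
# Williamson's classic low-storage RK3: the stage register S is reused
# across stages, so the stepper needs only u, S, and one RHS buffer,
# all allocated up front.
function lsrk3_step!(u, S, k, f!, t, dt)
    A = (0.0, -5/9, -153/128)   # Williamson (1980) coefficients
    B = (1/3, 15/16, 8/15)
    c = (0.0, 1/3, 3/4)
    fill!(S, 0.0)
    for i in 1:3
        f!(k, u, t + c[i]*dt)   # RHS written in place into k
        @. S = A[i]*S + dt*k    # update the reused stage register
        @. u = u + B[i]*S       # in-place solution update
    end
    return u
end

f!(du, u, t) = (@. du = -u; nothing)   # test problem u' = -u

u = [1.0]; S = similar(u); k = similar(u)  # the entire memory footprint
dt = 0.01
for n in 1:100
    lsrk3_step!(u, S, k, f!, (n - 1)*dt, dt)
end
u[1]  # ≈ exp(-1)
```

Because the memory footprint is exactly these three arrays, it can be bounded (and tested) ahead of time, which is the guarantee the real-time use cases depend on.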

Of course, DifferentialEquations.jl is used in a lot of real-time applications where guaranteeing that all memory is allocated up front, giving an essentially static stepper, is a requirement. So even with improvements, DiffEq has applications that PyTorch’s model cannot handle, which we want to continue supporting, and even support better (going all the way to static compilation and edge computing). Therefore this isn’t going to go away, as this code is closer to C++ or Rust in terms of the application’s goals.
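The allocate-up-front pattern is visible in the integrator interface: `init` builds all of the method caches once, and each `step!` then reuses them, which is what makes a hot real-time loop feasible. A sketch, assuming OrdinaryDiffEq.jl is installed (the problem and step size are illustrative):

```julia
using OrdinaryDiffEq

f!(du, u, p, t) = (du .= -u; nothing)
prob = ODEProblem(f!, [1.0], (0.0, 10.0))

# All method caches are allocated here, once.
integrator = init(prob, Tsit5(); save_everystep = false)

# The hot loop: advance by fixed 0.05 increments, reusing the
# preallocated caches on every step.
for _ in 1:100
    step!(integrator, 0.05, true)
end
integrator.u  # state at t = 5.0, ≈ [exp(-5)]
```

A real-time application would put its control or measurement code inside that loop, with the solver's memory behavior fixed from `init` onward.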

That said.

This is all being discussed and there are small PRs in motion. v1.9 and v1.10 had a large focus on TTFX and usability issues, with the end of the v1.10 cycle giving a major push to things like improved stacktraces (for example, improved stacktraces for automatic differentiation, [Frame registration for foreign JITs · Issue #49056 · JuliaLang/julia](https://github.com/JuliaLang/julia/issues/49056), and generally truncating stacktraces, https://twitter.com/ChrisRackauckas/status/1661014235466563591). With those kinds of usability and loading improvements, v1.10 is set to branch for release rather soon.

With those changes in hand, we have been having many discussions about improving the way that high performance and statically compilable code can be written. This includes steps like:

The MWEs came from a recent priorities meeting, so while it’s hard to put a timeline on things, I can at least say that improving the performance impact of memory management has now hit the top of the stack in terms of JuliaHub priorities, and I would expect a major focus in this area, along with static compilation, in the near future.

We will (hopefully) be writing a Julep in the very near future that details some motivations and some planned routes to solve some issues along these lines, so I’ll just cut here for now.
