I completely agree with this sentiment. There is a lot of cool stuff we can show in Julia, but I don’t think the basics are a reason to retool an existing project into Julia. IMO, this is how you get to someone new: you know what methods are at the forefront of their discipline, there is a package for that in Julia, and you explain how it and its surrounding ecosystem can be used to solve these problems in their field.
For ODEs we have things like high-order Rosenbrock methods, the ability to autodiff through the whole solve, and a whole set of exponential integrators on the way. This could improve ODE solving efficiency quite a bit, but is it worth retooling existing code? Probably not. Efficiency is only a reason to switch languages if you’re bound by your code’s speed and there’s something that can make a massive difference, or if you’re in the rare case where you’re running out of HPC resources because your project is huge.
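For concreteness, here is a minimal sketch of what using one of those stiff solvers looks like with DifferentialEquations.jl; the test problem and solver choice are just illustrative:

```julia
using DifferentialEquations

# Stiff Van der Pol oscillator, written in-place for performance
function vdp!(du, u, p, t)
    μ = p[1]
    du[1] = u[2]
    du[2] = μ * ((1 - u[1]^2) * u[2] - u[1])
end

prob = ODEProblem(vdp!, [2.0, 0.0], (0.0, 3000.0), [1e6])
# Rosenbrock23 is a stiff-aware Rosenbrock-W method; the Jacobian it
# needs is generated automatically (via ForwardDiff by default),
# no hand-coding required
sol = solve(prob, Rosenbrock23())
```

The point is that swapping in a different method is a one-argument change, not a rewrite.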
But for new projects, development time can be cut down quite a bit by using abstract types in the generic integrators to represent the mathematical model, without having to hand-code a stiff integrator to get it working. ODEs in general aren’t where the huge advantage is, though. Easy access to stiff DDE and SDE integrators is rare, and so is the ability to mix discrete stochastic modeling in with them. While these tools are rarely available elsewhere, their use, particularly in biological and pharmacological modeling, is on the rise, which makes them a particularly good reason to switch to Julia. In the end, people doing this kind of work will invariably be solving plenty of ODEs as well, probably even more than the specialized problem, and that is a major incentive to convert and then move all of one’s new work into Julia.
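As a sketch of what “easy access to stiff DDE integrators” means in practice, here is the DifferentialEquations.jl delay-equation interface; the equation, lag, and initial condition are made up for illustration:

```julia
using DifferentialEquations

# Delayed logistic-style equation: u'(t) = u(t - 1) * (1 - u(t))
h(p, t) = 0.1                        # history function for t <= 0
f(u, h, p, t) = h(p, t - 1.0) * (1 - u)
prob = DDEProblem(f, 0.1, h, (0.0, 20.0); constant_lags = [1.0])

# MethodOfSteps wraps any ODE integrator; swapping in a stiff one
# (e.g. Rosenbrock23()) turns this into a stiff DDE solver
sol = solve(prob, MethodOfSteps(Tsit5()))
```

The same pattern holds for SDEs via `SDEProblem`: the specialized problem types reuse the ODE machinery rather than requiring a separate hand-built solver.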
Other tie-ins to note are things like JuMP, where the first draw is the features (free Hessian autodiffing, the callback structures, etc.) but the lasting draw is the many “only in JuMP” optimizers coming out of strong departments like MIT, which then bolster the existing work in Julia. It may not be for this audience, but CUDANative/CuArrays is another “switching library”, since it allows pretty pervasive GPU use for those who don’t want to (or are unable to) drop to a lower level for it, while the lasting draw is the efficiency it gives on just simple broadcasted code. While I heavily enjoy libraries like NLsolve.jl, Optim.jl, and IterativeSolvers.jl, they don’t generally have the “newness” to draw a newcomer to the language, but they do give you very good reasons to stay.
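To show what “pervasive GPU use from simple broadcasted code” means, here is a minimal CuArrays sketch (the array size and expression are arbitrary; the package has since been folded into CUDA.jl):

```julia
using CuArrays

x = cu(rand(Float32, 10^6))   # copy data to the GPU
# The fused broadcast below runs as a single GPU kernel;
# no hand-written CUDA code is needed
y = 2f0 .* x .+ sin.(x)
```

The same line of broadcast code works unchanged on a regular `Array`, which is exactly why users who never want to touch kernel-level programming can still get GPU acceleration.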
Really, language features and small efficiency gains are just things package writers and methods developers care about. Most people are willing to use a package and don’t really care how it got optimized (likely using C++ or Fortran if it’s not Julia). What matters most is what kinds of problems they can solve. If the package choices allow for new research to be done, then they care. If not, they don’t, or they at least care a lot less.