Yes, that snippet has been noted before, I think in GitHub issues and on Discourse. It does not read well to developers because it's obviously missing details. It should really say that it's from real experiences, rather than being some blanket statement about "speed".

But there is truth in there that is being ignored by that comment. Sure, if someone wrote a very large piece of software in perfectly optimal Fortran, Julia probably can't match it, because Julia allows pointers to alias, among other things, which disables some compiler optimizations. However, without any pain, Julia will get you within 2x of Fortran. Using some macros to create little local environments that are aliasing-free and the like, Julia can probably get stupidly close to Fortran (like <1.2x) with minimal work (and probably 10x-100x less code…).

So Fortran is faster? Not so fast… there's a bias here: the person who wrote that answer has experience building software in Fortran. Most people cannot write "optimal Fortran". People point to BLAS and LAPACK as examples of Fortran, but BLAS and LAPACK are actually quite exceptional. Most Fortran code is not that optimal, and most Fortran code is a nightmare to use, understand, and contribute to. The suboptimal algorithms that come out of having a larger code base more than make up the difference, especially once you factor in that most mathematicians and scientists trying to write these things are not software engineers. "Software bloat" and maintenance issues quickly come up in this area.

But there is a secondary effect. In my field, numerical differential equations, the algorithm trumps the optimizations. Sure, you want your implementation to be fast, but having the right algorithm makes a huge difference. For example, the newest methods for SDEs that I recently published get about a 100x speedup over the "simple standard methods" on easy problems, and I give a real-world example (a problem someone in my lab was trying to solve) where the adaptivity algorithm gives about a 1e6 speedup. That's huge! And no 1.2x matters at that point.

Even in the oldest part of the field (ODEs), new research like Verner's latest methods with lazy addition of extra interpolation stages can, in my experience, improve things by 10x-100x. However, since all of this takes so much more code in Fortran, the codebase is harder to maintain and update, which makes it less agile and less able to add every new algorithm as it becomes available (the proof is very visible: no one has added these Verner methods to a Fortran code even though they've been available for years, while DifferentialEquations.jl was a year-long side project (turned somewhat "main project", of course)). And that's a detriment to performance, because again, the gains from specialized and improved mathematical algorithms are orders of magnitude larger than any Julia vs Fortran difference.

So sure, someone from a highly trained group of experienced Fortran/C++ programmers may look at this and say "but I can make it faster", but I look at this and go "sure, but almost no one else can, almost no one else can help maintain it, and when researchers want to add new algorithms it will be a nightmare to extend". There's an engineering tradeoff here, and for me Julia sits at a very good point on that curve for people who don't want to devote their whole lives to software engineering for an extra 20% of speed, and instead want to work on new algorithms and share their advances as usable software.