Julia vs Fortran complaint

I don’t think the original quote was that problematic if read in a reasonable way. Obviously, someone ported a low-level kernel from Fortran to Julia, discovered some additional optimizations along the way (possibly made easier by Julia), and ended up with a 30% speedup. This kind of thing is not unreasonable; it happens all the time when you have a language in which you can write high-performance code. In Matlab/R/Python, by contrast, it is simply not possible to port low-level kernels from Fortran and get speedups unless you find a truly fantastic algorithmic improvement [like going from O(n²) to O(n)] or find a new library routine (usually itself written in C or Fortran or similar!) that is perfectly suited to your problem. The point is that in Julia it is possible to do meaningful performance optimization of low-level kernels.
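To make that concrete, here is a toy sketch of what “writing a low-level kernel” means in Julia (the `axpy_kernel!` function is just an illustration I made up, not anything from the quoted benchmark): you write the hot loop yourself and can then profile and tune it with tools like `@inbounds` and `@simd`, instead of being forced to hand the loop off to a compiled library.

```julia
# Toy illustration (not from the original benchmark): a hand-written inner loop
# that you can measure and tune directly in Julia.
function axpy_kernel!(y, a, x)
    @inbounds @simd for i in eachindex(x, y)  # skip bounds checks, let the compiler vectorize
        y[i] += a * x[i]
    end
    return y
end

x = rand(10^6); y = rand(10^6)
axpy_kernel!(y, 2.0, x)   # compiles to native code, so the loop runs at C/Fortran-like speed
```

The analogous loop written directly in Python or R runs in the interpreter and is orders of magnitude slower, which is why in those languages the only real levers are a better algorithm or a better library call.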

The NAG comment that it is impossible for Julia code to be faster than Fortran because they are both calling the same LAPACK/BLAS is just not reasonable, in my opinion. Dense linear algebra has basically the same performance in every language for exactly this reason, and this sort of code is obviously not what any of the quotes was referring to.
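For what it’s worth, you can see this directly in Julia; the sketch below (nothing specific to any of the quoted code) shows that `*` on dense matrices forwards to the same optimized BLAS `gemm` routine that Fortran, NumPy, Matlab, etc. end up calling:

```julia
using LinearAlgebra

A = rand(1000, 1000); B = rand(1000, 1000)

C1 = A * B                            # Julia's dense matmul dispatches to the optimized BLAS
C2 = BLAS.gemm('N', 'N', 1.0, A, B)   # the same underlying gemm call, made explicitly
C1 ≈ C2                               # true: same routine, hence the same speed in every language
```

So for code that is just a thin wrapper around BLAS/LAPACK, no language has an edge; the interesting cases are the kernels you write yourself.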
