The Benchmarks category of Fortran code on GitHub lists many benchmarks in which Julia is considered; they are listed below. I will not make any generalizations about the results.
Assessment of Programming Languages for Computational Numerical Dynamics: compares the performance of different programming languages in solving CFD and heat-transfer problems, by arturofburgos
Basic Comparison of Various Computing Languages: Python, Julia, Matlab, IDL, R, Java, Scala, C, Fortran by Jules Kouatchou and Alexander Medema
bench_density_gradient_wfn: analyze the performance of evaluating the density gradient with various language implementations (C++/Fortran/Rust/Julia), by zyth0s
Comparison of Programming Languages in Economics: code referenced in the paper “A Comparison of Programming Languages in Economics” by S. Borağan Aruoba and Jesús Fernández-Villaverde
julia-numpy-fortran-test: comparing Julia vs Numpy vs Fortran for performance and code simplicity, by mdmaas
Microbenchmarks: micro benchmark comparison of Julia against other languages, including Fortran, from JuliaLang
NetworkDynamicsBenchmarks: scripts for benchmarking dynamical systems on networks, from paper “NetworkDynamics.jl – Composing and simulating complex networks in Julia”, by Michael Lindner et al.
Performance comparison R, Julia, Fortran for Bayesian binary probit by driesbenoit
raytracer: raytracer benchmark in dozens of languages, by edin.
I have read the post on the Fortran discourse forum that deals with benchmarks in Julia and Fortran.
Usually programmers who have a stake in Fortran object to any results that indicate
that Julia might be as fast or faster than Fortran.
It is certainly possible to find programs in one language that can be sped up by rewriting them in the other. This can usually be traced to some flaw in the original program that was corrected in the new one. If the two programs approached the algorithm from the same viewpoint, and if the compilers were equally good, the number crunching should ultimately be carried out at the same speed in either one.
I am convinced that this is a red herring when comparing the two languages. For me the question is:
with which language can I reach the goal of programming an algorithm faster, with more legible and maintainable code, and such that for a given-size model I get the results at roughly the same wall-clock time?
Considering that Fortran lacks generics and multiple dispatch, and is not likely to gain them in the foreseeable future, for me this is no contest.
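For readers unfamiliar with the feature, here is a minimal Julia sketch of what generics plus multiple dispatch look like in practice; all names are illustrative, not taken from any benchmark code:

```julia
# One generic function, `area`, with methods selected by the types
# of *all* arguments (multiple dispatch), not just the first one.
area(r::Real)           = r^2            # square of side r (illustrative)
area(w::Real, h::Real)  = w * h          # rectangle
area(v::AbstractVector) = sum(area, v)   # generic over any element type

area(2.0)          # 4.0
area(3, 4)         # 12
area([2.0, 3.0])   # 13.0
```

The compiler specializes each method for the concrete argument types it actually sees, so the generic code loses nothing in performance over hand-written type-specific routines.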
Sometimes the benchmark code is simply written suboptimally; such cases can be ignored.
More usefully, in some cases Julia itself may actually be missing real optimization opportunities that can be revealed by looking at why Fortran is faster in each particular case where it is. For example, the PR Add UInt/Int to _str_sizehint for printing to strings by Seelengrab · Pull Request #40718 · JuliaLang/julia · GitHub on string performance came from such a benchmark comparison.
I’m just waiting for Chris Elrod to make a PR to the Fortran-lang/benchmark with what he showed us yesterday on Slack.
Fortran is missing from this one, but the ffi-overhead test (GitHub - dyu/ffi-overhead: comparing the c ffi (foreign function interface) overhead on various programming languages) shows how JIT-compiled languages like Julia and Lua (i.e., it is not just a one-language outlier) can be faster at C calls than ahead-of-time-compiled languages, including C itself. That is one data point, or an explanation for part of the picture.
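The kind of measurement that benchmark makes can be sketched in Julia as follows. The real ffi-overhead benchmark compiles and loads its own C shared library; as a hedged stand-in, this sketch calls the libc function `labs`, which is already visible to the process on typical Unix systems:

```julia
# `ccall` lowers to a plain C function call, so per-call FFI overhead
# in Julia is essentially that of a native call.
# NOTE: `:labs` is a stand-in for the benchmark's own C function; the
# actual ffi-overhead test builds and loads its own shared library.
call_c_abs(x::Clong) = ccall(:labs, Clong, (Clong,), x)

function bench(n)
    acc = zero(Clong)
    for i in 1:n
        acc += call_c_abs(Clong(-i))   # one FFI call per iteration
    end
    return acc
end

bench(1_000)   # in a real benchmark, wrap this in @time or BenchmarkTools
```

The interesting number is not the result but the per-call cost, which is what ffi-overhead compares across languages.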
The SciMLBenchmarks.jl suite (GitHub - SciML/SciMLBenchmarks.jl: Benchmarks for scientific machine learning (SciML) software and differential equation solvers) showcases many examples where Julia libraries outperform C and Fortran libraries. The reason is mostly algorithmic, though (new methods that make use of autodiff, new heuristics, and other changes that perform fewer f evaluations), so it is not quite a language comparison in itself, but it is one factor that can be involved in many people's real-world benchmarks. In that light, the fact that someone switches to a new stiff ODE solver that takes half the actual f calls and gets a 2.5x-3x speedup from the language change plus code cleanup should not be so surprising. Real applications use real packages, and algorithms matter.
Some of the measurements are a bit suspect: I noticed some type instabilities, and some of the algorithms may actually be incorrect.
This one has been discussed here: Why is this raytracer slow?
The conclusions were:
The time reported for Julia there does not really make sense. The code that is there, as is, runs in about 2x the time of the Fortran code (including Julia startup and compilation). Thus one would expect that a time of ~320 ms would be reported in that table (not 900ms).
There are some algorithmic differences, particularly in the fact that the Fortran code has only one type of “thing”, while in the Julia code there are different types of “things” in the scene. That introduces some type instabilities and run-time dispatch issues that make the Julia code slower.
The algorithms being equal, the final running time depends on the number of objects in the scene, and the Julia code becomes as fast as the Nim one (which has very low-level optimizations) and faster than the Fortran code at about 10 objects.
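The type-instability effect described in that conclusion can be sketched in a few lines of Julia. All names here are illustrative, not taken from the actual raytracer:

```julia
# Hypothetical stand-ins for the raytracer's scene objects.
abstract type Thing end
struct Sphere <: Thing; radius::Float64; end
struct Plane  <: Thing; dist::Float64;   end

hit(s::Sphere) = s.radius   # stand-ins for real intersection tests
hit(p::Plane)  = p.dist

# Heterogeneous scene: the element type is the abstract `Thing`, so
# every `hit` call is resolved by run-time dispatch -- the slowdown
# the discussion attributes to the Julia version.
scene_mixed = Thing[Sphere(1.0), Plane(2.0)]

# Homogeneous scene: concrete element type, so `hit` is statically
# dispatched and can be inlined, matching the Fortran code's single
# "type of thing".
scene_spheres = [Sphere(1.0), Sphere(2.0)]

trace(scene) = sum(hit, scene)
```

Running `@code_warntype trace(scene_mixed)` versus `@code_warntype trace(scene_spheres)` shows the instability directly: the mixed scene forces dynamic dispatch inside the hot loop, while the homogeneous one compiles to tight, fully typed code.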