The Benchmarks category of the Fortran code on GitHub list includes several benchmarks in which Julia is considered; they are listed below. I will not make any generalizations about the results.
bench_density_gradient_wfn: analyzes the performance of evaluating the density gradient across language implementations (C++/Fortran/Rust/Julia), by zyth0s
julia-numpy-fortran-test: comparing Julia vs NumPy vs Fortran for performance and code simplicity, by mdmaas
Microbenchmarks: micro benchmark comparison of Julia against other languages, including Fortran, from JuliaLang
NetworkDynamicsBenchmarks: scripts for benchmarking dynamical systems on networks, from the paper “NetworkDynamics.jl – Composing and simulating complex networks in Julia”, by Michael Lindner et al.
I have read the post on the Fortran Discourse forum that deals with Julia and Fortran benchmarks.
Programmers with a stake in Fortran usually object to any result indicating that Julia might be as fast as, or faster than, Fortran.
It is certainly possible to find programs in one language that can be sped up by rewriting them in the other. The speedup can usually be traced to some flaw in the original program that was corrected in the new one. If the two programs approach the algorithm from the same viewpoint, and if the compilers are equally good, the number crunching should ultimately be carried out at the same speed in either language.
I am convinced that this is a red herring when comparing the two languages. For me the question is:
with which language can I program an algorithm faster, with more legible and maintainable code, and still get the results for a given-size model in roughly the same wall-clock time?
Considering that Fortran lacks generics and multiple dispatch, and is unlikely to gain them in the foreseeable future, for me this is no contest.
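To make the point concrete, here is a minimal Julia sketch of what multiple dispatch over user-defined types looks like. The Shape/Circle/Rectangle types are purely illustrative and not taken from any of the benchmarks above.

```julia
# Hypothetical types, only to illustrate what multiple dispatch buys you.
abstract type Shape end

struct Circle <: Shape
    r::Float64
end

struct Rectangle <: Shape
    w::Float64
    h::Float64
end

# One generic function, one method per concrete type; Julia picks the
# method from the run-time types of the arguments and specializes the code.
area(c::Circle)    = π * c.r^2
area(r::Rectangle) = r.w * r.h

shapes = Shape[Circle(1.0), Rectangle(2.0, 3.0)]
println(sum(area, shapes))   # dispatch happens per element
```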
The https://github.com/SciML/SciMLBenchmarks.jl suite showcases many examples where Julia libraries outperform C and Fortran libraries. The reason is mostly algorithmic, though (new methods that make use of autodiff, new heuristics, and other changes that perform fewer f evaluations), so it is not quite a language comparison in itself, but it is one factor that can be involved in many people’s real-world benchmarks. The fact that they switch to a new stiff ODE solver that takes half the actual f calls and get a 2.5x-3x speedup from the language change plus code cleanup should not be so surprising in that light. Real applications use real packages, and algorithms matter.
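As a rough illustration of what those benchmarks measure, here is a sketch of solving the stiff Van der Pol problem with a Rosenbrock method from OrdinaryDiffEq.jl and inspecting how many f evaluations the solver needed. The solver choice and tolerances here are my own, and the statistics field is named `stats` or `destats` depending on the package version.

```julia
using OrdinaryDiffEq

# Stiff Van der Pol oscillator in the scaled first-order form,
# with stiffness parameter μ passed as the problem parameter.
function vdp!(du, u, p, t)
    μ = p
    du[1] = u[2]
    du[2] = μ * ((1 - u[1]^2) * u[2] - u[1])
end

prob = ODEProblem(vdp!, [2.0, 0.0], (0.0, 6.3), 1e6)

# Rodas5 is a Rosenbrock method; Jacobians are formed with autodiff by default.
sol = solve(prob, Rodas5(); abstol = 1e-8, reltol = 1e-8)

@show sol.stats.nf      # number of right-hand-side (f) evaluations
@show sol.stats.njacs   # number of Jacobian evaluations
```

Counting `nf` for two different solvers on the same problem is exactly the kind of “fewer f evaluations” comparison the quote refers to.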
The time reported for Julia there does not really make sense. The code there, as is, runs in about 2x the time of the Fortran code (including Julia startup and compilation). Thus one would expect a time of ~320 ms to be reported in that table, not 900 ms.
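One way to separate Julia’s startup and JIT-compilation cost from the steady-state run time is to time the call twice and/or use BenchmarkTools.jl. The `render_scene` function below is a trivial placeholder workload, not the actual benchmark code.

```julia
using BenchmarkTools

# Placeholder workload standing in for whatever the benchmark actually calls.
render_scene(n) = sum(sqrt(i) for i in 1:n)

n = 10^7
@time render_scene(n)     # first call: includes JIT compilation
@time render_scene(n)     # second call: compiled code only
@btime render_scene($n)   # many samples; reports the minimum time
```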
There are some algorithmic differences, particularly in the fact that the Fortran code has only one type of “thing”, while the Julia code has different types of “things” in the scene. That introduces some type instabilities and run-time dispatch that make the Julia code slower.
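A minimal sketch of the kind of instability being described: storing heterogeneous objects behind an abstract supertype forces run-time dispatch in the hot loop, whereas a container of a single concrete type does not. The type and function names are illustrative, not taken from the actual benchmark code.

```julia
using BenchmarkTools

abstract type Hittable end

struct Sphere <: Hittable
    radius::Float64
end

struct Triangle <: Hittable
    area::Float64
end

surface(s::Sphere)   = 4π * s.radius^2
surface(t::Triangle) = t.area

# One concrete element type: calls to `surface` are resolved at compile time.
homogeneous = [Sphere(rand()) for _ in 1:10_000]

# Mixed element types behind the abstract supertype: each call to `surface`
# goes through run-time dispatch.
heterogeneous = Hittable[rand(Bool) ? Sphere(rand()) : Triangle(rand()) for _ in 1:10_000]

total_surface(objs) = sum(surface, objs)

@btime total_surface($homogeneous)     # statically dispatched, fast
@btime total_surface($heterogeneous)   # dynamic dispatch per element, slower
```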
The algorithms being equal, the final running time depends on the number of objects in the scene; at about 10 objects the Julia code becomes as fast as the Nim version (which has very low-level optimizations) and faster than the Fortran code.
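A sketch of how such a crossover could be measured, assuming hypothetical `make_scene` and `render` stand-ins for the benchmark’s actual scene construction and renderer:

```julia
using BenchmarkTools

# Hypothetical stand-ins, not the real ray-tracer.
make_scene(n) = [(rand(3), rand()) for _ in 1:n]   # n (center, radius) pairs
render(scene) = sum(r^2 for (_, r) in scene)       # placeholder workload

for n in (1, 2, 5, 10, 20, 50)
    scene = make_scene(n)
    t = @belapsed render($scene)                   # minimum time in seconds
    println("n = $n: $(round(t * 1e9, digits = 1)) ns")
end
```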