I found a benchmark set comparing Julia and other languages at https://modelingguru.nasa.gov/docs/DOC-2676
It seems that in the Gauss-Legendre Quadrature case, Julia is slower than Python.
Looking at the code, I'm suspicious of their Julia timings. They're not using BenchmarkTools.jl, or interpolating their globals, when timing the Julia code, so they may have picked up compilation time.
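For reference, proper timing looks something like this (the kernel below is just a stand-in I made up, not their actual quadrature code):

```julia
using BenchmarkTools

# Stand-in kernel; the suite's actual Gauss-Legendre code will differ.
f(x) = sin.(x) .^ 2 .+ cos.(x)

x = rand(10_000)

# @btime warms up and takes many samples, so compilation is excluded,
# and interpolating x with $ avoids timing the global-variable lookup.
@btime f($x)
```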
They are also using @njit(parallel=True) for the trigonometric Python+Numba code, while the Julia code doesn't use @threads. The purpose of this benchmark is ostensibly to measure "function evaluation", yet most of the time is spent on array allocation (see the sketch below).
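A fairer Julia version of the trigonometric test would look something like this (my own sketch, assuming the goal is to time function evaluation rather than allocation):

```julia
using Base.Threads

# Hypothetical kernel: evaluate in parallel into a preallocated buffer,
# so array allocation is excluded from the measurement.
function trig_eval!(out::Vector{Float64}, x::Vector{Float64})
    @threads for i in eachindex(x, out)
        out[i] = sin(x[i])^2 + cos(x[i])
    end
    return out
end

x = rand(1_000_000)
out = similar(x)
trig_eval!(out, x)  # start Julia with -t auto to actually use threads
```

That would at least put it on the same footing as @njit(parallel=True).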
I think the benchmarks need some thoughtful input.
Aaarg. Another complaint. They have a test_count_unique_words which, first of all, counts all words, not unique words. They also print every line, and the results, to screen, which completely dwarfs the actual counting (printing a line takes ~200x as long as counting its words, for a fast implementation). And on top of that, they use non-const globals.
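Something like this would be a more honest version (my sketch; eachsplit needs Julia 1.8+):

```julia
# Count unique words: a Set for uniqueness, no printing in the timed
# region, and no non-const globals (everything lives inside a function).
function count_unique_words(path::AbstractString)
    words = Set{String}()
    for line in eachline(path)
        for w in eachsplit(line)
            push!(words, String(w))
        end
    end
    return length(words)
end
```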
I’m afraid this benchmark suite requires a lot of input.