Gauss-Legendre Quadrature performance

Hi,

I found a benchmark set with Julia and other languages at
https://modelingguru.nasa.gov/docs/DOC-2676
It seems that in the Gauss-Legendre quadrature case Julia is slower than Python :frowning:

I also found FastGaussQuadrature.jl (GitHub - JuliaApproximation/FastGaussQuadrature.jl: Julia package for Gaussian quadrature).
Is there any updated comparison? :slight_smile: Sorry, I am too lazy to benchmark on my own…

1 Like

Looking at the code, I’m suspicious of their Julia timings. They’re not using BenchmarkTools.jl or interpolation to time the Julia code, so they might have picked up JIT compilation time in the measurements.
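For anyone unfamiliar with the pitfall: a minimal sketch of the difference (the function and array here are just illustrative, not taken from their benchmark):

```julia
using BenchmarkTools

f(x) = sum(sin, x)
x = rand(1000)

# Naive timing: the first call includes JIT compilation time.
@time f(x)   # first call is dominated by compilation
@time f(x)   # second call shows the actual runtime

# BenchmarkTools runs many samples, and `$` interpolates the global
# `x` into the benchmark so untyped-global overhead isn't measured.
@btime f($x)
```

If the benchmark authors timed a single cold call, the Julia numbers would mostly be compilation, not computation.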

You are looking at the wrong page. Those are the 2018 results, and they are clearly a mess:

IDL is 1000-2000x faster than Julia, while Python pulls further ahead for larger problems (by a factor of 10x!!)

And then, for the Jacobi iterative solver, look at this


IDL is suddenly 80x slower than Julia…

Those results are garbage. Instead, look at the 2019 results for Gauss-Legendre (https://modelingguru.nasa.gov/docs/DOC-2783):


Julia blows IDL and Python out of the water. IDL still makes no sense, though: look at how the runtimes change with problem size.

4 Likes

They are also using @njit(parallel=True) for the trigonometric Python+Numba code, while not using Julia’s @threads. The purpose of this benchmark is ostensibly to measure “function evaluation”, yet most of the time is spent on array allocation.
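To make that comparison apples-to-apples, the Julia side would need a threaded kernel too. A sketch of what the equivalent might look like (the function name and sizes are mine, not from the benchmark; Julia must be started with multiple threads, e.g. `julia -t auto`):

```julia
using Base.Threads

# Threaded analogue of Numba's @njit(parallel=True): each iteration
# writes to its own slot, so the loop is safe to parallelize.
function apply_sin!(out, x)
    @threads for i in eachindex(x, out)
        out[i] = sin(x[i])
    end
    return out
end

x = rand(10^6)
out = similar(x)          # preallocating also removes the allocation
apply_sin!(out, x)        # cost that dominates their timings
```

Preallocating the output buffer matters here precisely because of the other problem: if allocation dominates the timing, the benchmark isn’t measuring function evaluation at all.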

I think the benchmarks need some thoughtful input.

4 Likes

Aaargh. Another complaint. They have a test_count_unique_words which, first of all, counts all words, not unique words. They also print each line, and the results, to the screen, which completely dwarfs the actual counting (for a fast implementation, printing a line takes ~200x as long as counting its words). And on top of that, they use non-const globals.
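For reference, a word count that avoids both pitfalls — no printing inside the timed loop, and all state kept local to a function rather than in non-const globals — could be as simple as (my sketch, not their code):

```julia
# Count all words in a stream. Keeping `n` inside the function
# avoids the untyped-global performance penalty; nothing is printed.
function count_words(io::IO)
    n = 0
    for line in eachline(io)
        n += length(split(line))   # split on whitespace
    end
    return n
end

count_words(IOBuffer("two words\nthree more words\n"))  # returns 5
```

Time that, and you measure the counting itself instead of terminal I/O.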

I’m afraid this benchmark suite requires a lot of input.

Wow, such careless (or deliberately skewed) tests, published under a nasa.gov link? Unbelievable!