Is Julia integral calculation not as fast as MATLAB?

I found it slow when I used QuadGK.jl to compute an integral. I then ran the same calculation in MATLAB for comparison and found that the Julia integral was slower than MATLAB's. Here are my comparisons:

julia> using QuadGK

julia> f(x) = (cos(x)^5 - cos(x)^3 + cos(x))*sin(x)^5
f (generic function with 1 method)

julia> @time quadgk(f, -1, 1, rtol=1e-14)
  0.499007 seconds (609.20 k allocations: 34.800 MiB, 1.39% gc time, 99.96% compilation time)
(6.713414395092624e-19, 0.0)

julia> versioninfo()
Julia Version 1.7.2
Commit bf53498635 (2022-02-06 15:21 UTC)
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
  CPU: Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-12.0.1 (ORCJIT, skylake)

Results in MATLAB:

f = @(x) ((cos(x).^5 - cos(x).^3 + cos(x)) .* sin(x).^5);

tic
    quadgk(f, -1, 1,'AbsTol',1e-14);
toc

output: 0.145103s

The above tests were performed on the same computer.

The Julia integral is even slower on Linux:

julia> using QuadGK

julia> f(x) = (cos(x)^5 - cos(x)^3 + cos(x))*sin(x)^5
f (generic function with 1 method)

julia> @time quadgk(f, -1, 1, rtol=1e-14)
  2.708747 seconds (7.27 M allocations: 175.746 MiB, 1.04% gc time, 20.33% compilation time)
(-5.403687448405256e-18, 5.524260415282572e-29)

julia> versioninfo()
Julia Version 1.7.2
Commit bf53498635 (2022-02-06 15:21 UTC)
Platform Info:
  OS: Linux (x86_64-pc-linux-gnu)
  CPU: Intel(R) Core(TM) i9-7980XE CPU @ 2.60GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-12.0.1 (ORCJIT, skylake-avx512)

Of course, what I'm showing here is just the integration of one particular function, but when I do many integrals, Julia is clearly much slower than MATLAB. Am I using the quadgk function improperly, or is the package not yet mature? Do you have any suggestions?

You are using an absolute error tolerance in MATLAB (AbsTol) and a relative tolerance in QuadGK (rtol). Since the integral goes to zero, QuadGK won't give up until the estimated error is actually 0.0.

Use atol instead. Or, perhaps, both rtol and atol, to make sure you get convergence both in cases with a zero result and in cases with a non-zero result.
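To make the suggestion concrete, here is a minimal sketch of passing both tolerances to the same integrand as above (tolerance values are just illustrative):

```julia
using QuadGK

f(x) = (cos(x)^5 - cos(x)^3 + cos(x)) * sin(x)^5

# With both keywords, quadgk stops as soon as the error estimate
# satisfies either tolerance. A near-zero integral like this one
# converges via `atol` instead of chasing `rtol * |I|` forever.
I, E = quadgk(f, -1, 1; rtol=1e-14, atol=1e-14)
```

The same call pattern works for integrands with large results too, where `rtol` then does the work instead.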

7 Likes

After compilation Julia gets faster:

using QuadGK
using BenchmarkTools

f(x) = (cos(x)^5 - cos(x)^3 + cos(x))*sin(x)^5

@btime quadgk(f, -1, 1, rtol=1e-14)

yielding

  140.000 μs (958 allocations: 25.11 KiB)
2 Likes
julia> @btime quadgk(f, -1, 1; rtol=1e-14, atol=1e-14)
  2.822 μs (7 allocations: 192 bytes)
(-2.304928352082297e-18, 1.5934560307793533e-18)

Fast enough? (Note: microseconds.)

1 Like

Hmm. With rtol I’m actually seeing this:

1.7.2> @btime quadgk(f, -1, 1, rtol=1e-14)
  2.269 s (6666686 allocations: 141.08 MiB)
(-5.403687448405256e-18, 5.524260415282572e-29)

As you can see, the convergence criterion is crazy. This is why we should use atol. And with atol it is clearly insanely fast.
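The root cause can be checked directly: the integrand is odd (a product of even cosine powers and an odd sine power), so the true integral over the symmetric interval [-1, 1] is exactly zero, and a purely relative stopping test can never be satisfied. A quick sketch:

```julia
using QuadGK

f(x) = (cos(x)^5 - cos(x)^3 + cos(x)) * sin(x)^5

# f is odd: cos(x)^k is even, sin(x)^5 is odd, so the product is odd
# and f(x) + f(-x) cancels to roundoff level.
@show f(0.3) + f(-0.3)

# With only rtol, the error target scales with |I| ≈ 0, so quadgk
# keeps subdividing; an atol gives it an achievable stopping point.
I, E = quadgk(f, -1, 1; atol=1e-14)
```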

I’m wondering

using QuadGK
using BenchmarkTools

f(x) = (cos(x)^5 - cos(x)^3 + cos(x))*sin(x)^5

@btime quadgk(f, -1, 1, atol=1e-14)
@btime quadgk(f, -1, 1, rtol=1e-14)
versioninfo()

yielding

  1.420 μs (6 allocations: 160 bytes)
  158.400 μs (954 allocations: 25.05 KiB)
Julia Version 1.7.2
Commit bf53498635 (2022-02-06 15:21 UTC)
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
  CPU: Intel(R) Core(TM) i7-10710U CPU @ 1.10GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-12.0.1 (ORCJIT, skylake)
Environment:
  JULIA_EDITOR = code
  JULIA_NUM_THREADS = 4

Will check with newer versions…

Edit: looks even slightly better on nightly for me.

I’m also on 1.7.2. Windows.

julia> Pkg.status("QuadGK")
Status `...\Project.toml`
  [1fd47b50] QuadGK v2.4.2
1.7.2> versioninfo()
Julia Version 1.7.2
Commit bf53498635 (2022-02-06 15:21 UTC)
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
  CPU: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-12.0.1 (ORCJIT, skylake)

(@v1.7) pkg> st QuadGK
  [1fd47b50] QuadGK v2.4.2

Strange. Especially the rtol case, but the atol case is also twice as slow on my computer, and that seems odd.

I was wondering about the quoted timing above, which looks to be more than 2x slower?

Yes, especially rtol, but I’m also perplexed about the atol difference.

BTW, the Matlab timing is also off. Here’s mine:

>> timeit(@()quadgk(f, -1, 1,'AbsTol',1e-14))
ans =
   3.5025e-04

That’s 350 µs.

And the new integral function is faster yet:

>> timeit(@()integral(f, -1, 1,'AbsTol',1e-14))
ans =
   2.0296e-04

But still 100x slower than QuadGK.jl :wink:

JULIA_NUM_THREADS = 4

?

It’s actually faster.

After compilation? You mean running it a second time? The second time, it does come out faster. But if you integrate the function g(x) = sin(x) * cos(x), you'll see that no matter how many runs you do, you won't get a significant increase in speed.

What is faster?

Use atol instead, or both rtol and atol, which will be faster than using rtol only.

Second run is faster:

julia> g(x) = sin(x) * cos(x)
g (generic function with 1 method)

julia> @time quadgk(g, -1, 1; atol=1e-14)
  0.427068 seconds (462.03 k allocations: 26.545 MiB, 99.99% compilation time)
(-1.2855363932200686e-17, 1.51605683459074e-17)

julia> @time quadgk(g, -1, 1; atol=1e-14)
  0.000005 seconds (6 allocations: 160 bytes)
(-1.2855363932200686e-17, 1.51605683459074e-17)

It is faster with `atol`.

It's using `rtol` here.

(QuadGK doesn’t use multiple threads.)

1 Like