Hi,

I’m trying to compute a set of integrals, and the integration takes a lot of time and allocates a lot. Is there a way to speed it up?

MWE:

```
using Integrals, BenchmarkTools
const ω1, ω2 = 33.807, 37.822;
const Ω1, Ω2 = 0.314159, 0.41082;
function integrand(x, p)
    Ω1 * cos(ω1 * x) + Ω2 * cos(ω2 * x)
end
@btime solve(IntegralProblem(integrand, (0.0, 4500.0)), QuadGKJL(); reltol=1e-5, abstol=1e-5)
@btime solve(IntegralProblem((t, p) -> Ω1 * cos(ω1 * t) + Ω2 * cos(ω2 * t), (0.0, 4500.0)), QuadGKJL(); reltol=1e-5, abstol=1e-5)
```

Results:

```
602.150 ms (13008244 allocations: 207.57 MiB)
retcode: Success
u: -0.00500189374329392
1.216 s (26252034 allocations: 417.91 MiB)
retcode: Success
u: 0.029554584257757574
```

Is there a way to improve the performance? I’m trying to get up to a 4-dimensional integral of a product of these functions and I can’t even calculate a two-dimensional version since it takes too long.

You have a highly oscillatory integrand: the fastest term `cos(ω2 * x)` goes through about ω2 · 4500 / 2π ≈ 27,000 periods over your integration domain. Any generic quadrature scheme needs several quadrature points per oscillation, so you should figure on 100k+ quadrature points for even a 1d integral.
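A quick sanity check on that period count, using the faster frequency from the MWE:

```julia
ω2 = 37.822          # faster of the two frequencies in the MWE
T  = 4500.0          # length of the integration domain

# Number of full periods of cos(ω2 * x) on [0, T]:
nperiods = ω2 * T / (2π)   # ≈ 2.7e4
```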

Indeed, calling QuadGK.jl explicitly to get an evaluation count, I find:

```
julia> using QuadGK
julia> quadgk_count(x -> integrand(x, 0), 0, 4500, rtol=1e-5, atol=1e-5)
(0.001980687766630197, 9.987036693025001e-6, 968415)
```

i.e. nearly 10^6 evaluation points for your 1d integral. There are ways to shave this down (e.g. increasing the quadrature order or pre-allocating buffers), but fundamentally this is a hard problem for any generic quadrature routine.
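To illustrate those two knobs, here is a sketch calling QuadGK.jl directly (the `order=17` value is just an illustrative guess — tune it for your problem):

```julia
using QuadGK

# Standalone copy of the MWE integrand for this sketch:
f(x) = 0.314159 * cos(33.807 * x) + 0.41082 * cos(37.822 * x)

# A higher-order rule resolves more oscillations per panel, and a
# pre-allocated segment buffer is reused across calls instead of
# reallocating the adaptive-subdivision storage on every solve.
segbuf = QuadGK.alloc_segbuf()
I, E = quadgk(f, 0.0, 4500.0; rtol=1e-5, atol=1e-5, order=17, segbuf=segbuf)
```

Reusing `segbuf` matters most when you call the integrator many times (as an inner integral of a multidimensional problem would).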

If your multidimensional integrand oscillates similarly along each dimension, you can see the problem: a tensor-product quadrature rule in 2d would need roughly (10^6)^2 = 10^12 function evaluations.

You really need to re-think what you are doing — this is not something you are going to be able to brute force without exploiting more structure of your problem.
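For the 1d MWE specifically, no quadrature is needed at all — the antiderivative is elementary. Your real integrand presumably isn't this simple, but it shows what "exploiting structure" buys you (a sketch, with the MWE's constants inlined):

```julia
ω1, ω2 = 33.807, 37.822
Ω1, Ω2 = 0.314159, 0.41082

# ∫₀ᵀ (Ω1 cos(ω1 x) + Ω2 cos(ω2 x)) dx in closed form:
exact(T) = Ω1/ω1 * sin(ω1 * T) + Ω2/ω2 * sin(ω2 * T)

exact(4500.0)   # evaluates instantly, with no quadrature error at all
```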

In certain cases, highly oscillatory integrals can be computed very efficiently. For example, if the oscillatory factor is known and simple (e.g. a cosine) but is multiplied by some complicated smooth function (as in a Fourier transform), there are specialized methods such as Levin-type quadrature. See e.g. [RFC/ANN] OscillatoryIntegralsODE.jl: Levin method + OrdinaryDiffEq and Integrate High Dimension highly oscillating integral.
