It has excellent behavior: it's an entire function, so both its Fourier series and the trapezoidal-rule approximation thereof (the DFT) should converge *faster* than exponentially.

For example, with \alpha = \beta = 1 the Fourier series converges to machine precision after only about 30 terms (at both positive and negative frequencies), so a DFT with 100 terms should give a Fourier series that is accurate to nearly machine precision everywhere in [0,2\pi].

```
using FFTW
f(x, α, β) = cis(α*sin(x) + β*sin(2x))
n = 100
x = 2π/n * (0:n-1)
F = fft(f.(x, 1, 1) .* (1/n)) # ≈ Fourier series coefficients
using Plots
scatter(abs.(F), yscale=:log10, xlabel="index", ylabel="|Fourier coefficient|", legend=false)
```

(The `cis` function is convenient here.) Note that the frequency ordering of the DFT takes some getting used to: `F[1]` corresponds to zero frequency, and `F[2:(n+2)÷2]` correspond to "positive" frequencies while `F[n:-1:(n+3)÷2]` correspond to "negative" frequencies (depending on your sign convention). (The middle `n÷2+1` "Nyquist" element for even `n` is ambiguous, but for sufficiently large `n` this term is negligible anyway.) This is one context where zero-based indexing makes a lot of sense, which you could do (if you want) by wrapping `F` with the OffsetArrays.jl package.
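For example, here is a minimal sketch of that reindexing (the names `k`, `Fk`, and `Fs` are my own; `fftfreq` is the AbstractFFTs helper re-exported by FFTW):

```julia
using FFTW, OffsetArrays

f(x, α, β) = cis(α*sin(x) + β*sin(2x))
n = 100
F = fft(f.(2π/n * (0:n-1), 1, 1)) ./ n

k = fftfreq(n, n)                         # frequency of each bin: 0, 1, …, n÷2-1, -n÷2, …, -1
Fk = OffsetArray(F, 0:n-1)                # zero-based: Fk[0] is the zero-frequency coefficient
Fs = OffsetArray(fftshift(F), -n÷2:n÷2-1) # Fs[k] is the coefficient at frequency k
```

With `Fs`, the coefficient at frequency `k` (for one sign convention) is simply `Fs[k]` for `-n÷2 ≤ k < n÷2`, up to the Nyquist ambiguity mentioned above.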

If we compare to adaptive numerical integration via QuadGK.jl, we see that they agree to machine precision as expected:

```
using QuadGK
fourier(f, n) = quadgk(x -> f(x) * cis(-n*x), 0, 2π, rtol=1e-14)[1] / 2π
fourier.(x -> f(x, 1,1), 0:3) ./ F[1:4] # compare the first 4 coefficients
```

gives

```
4-element Vector{ComplexF64}:
0.9999999999999999 + 2.4523045117069327e-17im
0.9999999999999993 - 2.91266466383833e-16im
0.9999999999999999 - 2.969943128507037e-16im
1.0 - 1.6216552943397132e-16im
```

(i.e. about `1.0 ± Δ` where `Δ` is close to machine precision).

From the plot, you will also see that the accuracy hits a "noise floor" once the coefficients get so small that they reach machine precision. In this regime, any direct summation approach is vulnerable to cancellation errors that limit accuracy. If you actually want to calculate such tiny coefficients accurately (e.g. you are interested in high-frequency asymptotics), then you need to turn to more specialized methods (either analytical or computational) as described here: Package to evaluate oscillating integrals with increasing frequency - #10 by stevengj
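As a quick sanity check of that floor (a minimal sketch; the upper bound of 1 follows from |f(x)| = 1):

```julia
using FFTW

f(x, α, β) = cis(α*sin(x) + β*sin(2x))
n = 100
F = fft(f.(2π/n * (0:n-1), 1, 1)) ./ n

# |f(x)| = 1 everywhere, so every true coefficient is at most 1 in magnitude;
# the computed ones stop decaying once they hit the roundoff noise floor:
extrema(abs.(F))
```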

PS. Note also that your function f(x) = \overline{f(-x)} is conjugate-symmetric (assuming \alpha and \beta are real), so the Fourier-series coefficients `F` are purely real. You could therefore get another factor of ≈2 in performance by using the `brfft` function in FFTW. Of course, if you care that much about performance, you should also precompute your FFT plans, maybe work with a pre-allocated output array using `mul!`, etcetera etcetera. I didn't want to get into the complexities of performance optimization here.
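One way to verify the real-coefficient claim (a sketch that just inspects the complex FFT, without using `brfft` itself) is to look at the leftover imaginary parts:

```julia
using FFTW

f(x, α, β) = cis(α*sin(x) + β*sin(2x))
n = 100
v = f.(2π/n * (0:n-1), 1, 1)   # the samples inherit the conjugate symmetry of f
F = fft(v) ./ n

maximum(abs.(imag.(F)))        # tiny: the coefficients are real up to roundoff
```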