An FFT computes a discrete Fourier transform (DFT). You can interpret it as an approximation of a Fourier series, a continuous Fourier transform, or a DTFT, but by itself the DFT knows nothing about sample periods, "sampling rates," etc.
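Concretely, the convention used by FFTW (and hence by Julia's `fft`) for a length-$n$ input is

$$X_k = \sum_{j=0}^{n-1} x_j \, e^{-2\pi i jk/n}, \qquad k = 0, \ldots, n-1,$$

which is defined purely in terms of the array indices $j$ and $k$: no sample period or physical frequency appears anywhere.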
The normalizations of `fft` and `ifft` are somewhat arbitrary, as is the choice of ± signs in their phase exponents. The only mathematical requirement, if you want them to be inverses of one another, is that the product of their normalization factors be 1/n.
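For example, here is a minimal sketch (assuming the FFTW.jl package; `bfft`, Julia's unnormalized backward transform, is discussed further below) showing that any split of the 1/n between the two directions gives a valid transform pair:

```julia
using FFTW

x = rand(ComplexF64, 8)
n = length(x)

# Any pair (a, b) with a*b == 1/n yields mutually inverse transforms.
# a = 1, b = 1/n is the fft/ifft convention; a = b = 1/sqrt(n) is unitary.
a, b = 1/sqrt(n), 1/sqrt(n)

X = a .* fft(x)     # fft is unnormalized in FFTW's convention
y = b .* bfft(X)    # bfft is the unnormalized backward transform

y ≈ x               # true for any a, b whose product is 1/n
```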
Conceptually, it can be attractive to give them both a normalization of $1/\sqrt{n}$, so that each transform is unitary. However, this has a couple of disadvantages: first, you pay the (slight) cost of a normalization pass twice, while in practice the unitary normalization is rarely needed; second, the convolution theorem then picks up an additional factor of $\sqrt{n}$.
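To see that $\sqrt{n}$ factor concretely, here is a small check; `ufft` and `uifft` are hypothetical helper names for the unitary transform pair, not library functions:

```julia
using FFTW

# Hypothetical unitary transform pair, each normalized by 1/sqrt(n):
ufft(x)  = fft(x)  ./ sqrt(length(x))
uifft(X) = bfft(X) ./ sqrt(length(X))

x, y = rand(ComplexF64, 16), rand(ComplexF64, 16)
n = length(x)

conv_std     = ifft(fft(x) .* fft(y))                 # cyclic convolution
conv_unitary = sqrt(n) .* uifft(ufft(x) .* ufft(y))   # needs the extra √n

conv_std ≈ conv_unitary  # true
```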
I’m not sure where the convention that `ifft` gets the factor of 1/n originated, but it seems likely to have been popularized by Matlab, from which it was copied into Julia, SciPy, and probably others. This choice has two advantages: the normalization is done only once, and the convolution theorem takes a simple form: `ifft(fft(x) .* fft(y))` is the cyclic convolution of `x` and `y` (whereas other normalizations would require an additional scaling).
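As a sanity check, here is a sketch comparing `ifft(fft(x) .* fft(y))` against the cyclic convolution computed directly from its definition:

```julia
using FFTW

n = 8
x, y = rand(ComplexF64, n), rand(ComplexF64, n)

# Direct cyclic convolution: z_m = sum_j x_j * y_{(m-j) mod n},
# written with 0-based index arithmetic shifted to Julia's 1-based arrays.
z_direct = [sum(x[j+1] * y[mod(m - j, n) + 1] for j in 0:n-1) for m in 0:n-1]

z_fft = ifft(fft(x) .* fft(y))

z_direct ≈ z_fft  # true
```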
The underlying FFTW library (used by both Matlab and Julia) computes transforms with no normalization factor at all. (In Julia, you can compute the lower-level unnormalized “inverse” transform with `bfft`.) The rationale was that (a) many normalization conventions are in common use, and we didn’t want to pay the price of picking one (possibly wrong) for the user, and (b) in practice you invariably do some pre- or post-processing of the FFT inputs/outputs, and it is more efficient to combine the scale factor into that computation rather than doing a separate scaling loop, as `ifft` does.
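For instance, suppose you apply some gain after an inverse transform (`g` below is just an illustrative post-processing factor): rather than letting `ifft` do its own 1/n pass and then scaling again, you can fold both factors into a single pass over the unnormalized `bfft` output:

```julia
using FFTW

X = fft(rand(ComplexF64, 1024))
n = length(X)
g = 2.5   # hypothetical post-processing gain

y1 = g .* ifft(X)         # two scaling passes: 1/n inside ifft, then g
y2 = (g / n) .* bfft(X)   # one combined pass over the unnormalized output

y1 ≈ y2  # true
```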