No, at the Nyquist frequency it simply adds the aliased amplitudes like at every other frequency. There’s no “averaging”.
For example, take a signal whose frequency content is a “delta” function, i.e. a pure complex exponential at a frequency of 2\pi*1/n (not the Nyquist frequency) with an amplitude of 1: the (length-normalized) DFT outputs an amplitude of 1. If we add an alias at a frequency of 2\pi*(1+n)/n, then it outputs an amplitude of 2:
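Something along these lines (a minimal Julia sketch using FFTW.jl; n = 8 is just an arbitrary even length, and dividing by n converts the raw DFT to amplitudes):

```julia
using FFTW

n = 8
m = 0:n-1
x1 = cis.(2π .* m ./ n)              # complex exponential at frequency 2π*1/n, amplitude 1
x2 = cis.(2π*(1 + n) .* m ./ n)      # aliased exponential: identical at the sample points

abs.(fft(x1)) ./ n                   # amplitude 1.0 in the k = 1 bin (index 2)
abs.(fft(x1 .+ x2)) ./ n             # amplitude 2.0 in the same bin
```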
This shouldn’t be surprising since (a) the DFT definition does not have a different scale factor for the Nyquist term and (b) the aliased signal is identical at the sample points (this is the definition of “aliasing”) so it is literally impossible for the transform to detect how many aliased terms were summed in order to “average” them.
Maybe you are confused by the fact that if you put in a cosine (not a complex exponential) at a lower frequency, e.g. 2\pi/n, then a DFT is able to distinguish both the positive and negative frequency components as separate amplitudes:
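For example (again a sketch with an assumed length n = 8, amplitudes obtained by dividing by n):

```julia
using FFTW

n = 8
m = 0:n-1
abs.(fft(cos.(2π .* m ./ n))) ./ n   # 0.5 at k = +1 (index 2) and 0.5 at k = -1 (index n)
abs.(fft(cos.(π .* m))) ./ n         # 1.0 at the single Nyquist bin (index n÷2 + 1)
```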
so the cosine seems to be “normalized differently” at the Nyquist frequency. This is just a consequence of the facts that the DFT basis functions are complex exponentials and a pure cosine consists of two complex exponentials, but at the Nyquist frequency those two complex exponentials are aliased.
You don’t need to do simulations, you can do this analytically. It simplifies a lot because (a) the DFT is linear and (b) the frequency components are orthogonal (when integrated over a common period), and the upshot is that you can analyze just the Nyquist term by itself:
In particular, take a function f(x)=a_{+}e^{i\pi x}+a_{-}e^{-i\pi x}, sampled at the Nyquist frequency f(n)=(a_{+}+a_{-})(-1)^{n} so we can only measure 2a=a_{+}+a_{-}. We reconstruct/interpolate it as a signal \tilde{f}(x)=2a\left[c_{+}e^{i\pi x}+c_{-}e^{-i\pi x}\right]. Question: what coefficients c_{\pm} minimize the expected mean-square error E[\int|f(x)-\tilde{f}(x)|^{2}dx] if a_{\pm} are i.i.d. random numbers with zero mean and some distribution (e.g. Gaussian)?
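Expanding the squared error and integrating over one period, the cross terms between e^{i\pi x} and e^{-i\pi x} drop out by orthogonality, leaving

E\left[\int|f(x)-\tilde{f}(x)|^{2}dx\right]\propto E\left[|a_{+}-2ac_{+}|^{2}\right]+E\left[|a_{-}-2ac_{-}|^{2}\right]=E\left[|a_{+}|^{2}\right]\left(|1-c_{+}|^{2}+|c_{+}|^{2}\right)+E\left[|a_{-}|^{2}\right]\left(|1-c_{-}|^{2}+|c_{-}|^{2}\right)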
(using the facts that E[a_{+}\overline{a_{-}}]=0 and E[|a_{+}|^{2}]=E[|a_{-}|^{2}]), which is minimized for \boxed{c_{\pm}=\frac{1}{2}}.
That is, the optimal interpolant is \boxed{\tilde{f}(x)=a\left[e^{i\pi x}+e^{-i\pi x}\right]}, which corresponds to splitting the Nyquist amplitude 2a equally between the positive- and negative-frequency terms. This also coincides with the minimal mean-square slope interpolant from above, and has the nice property that it interpolates real signals f (a_- = \overline{a_+} \implies a is purely real) with real interpolants \tilde{f}. (The same analysis also works, with the same optimal interpolant \tilde{f}, if we average over purely real signals f with random a_+=\overline{a_-}, since in that case we still have E[a_{+}\overline{a_{-}}]=E[a_+^2]=0 if the real and imaginary parts of a_+ are i.i.d. with zero mean.)
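As a quick numerical sanity check (a sketch, assuming complex-Gaussian a_\pm with i.i.d. real and imaginary parts, and setting c_{+}=c_{-}=c):

```julia
# expected squared L² error over one period, estimated by Monte Carlo
function mse(c; trials = 10^5)
    err = 0.0
    for _ in 1:trials
        ap = randn() + im*randn()          # a₊
        am = randn() + im*randn()          # a₋
        a = (ap + am)/2                    # the only quantity the samples can measure
        # ∫₀²|f - f̃|² dx = 2(|a₊ - 2ac|² + |a₋ - 2ac|²) by orthogonality
        err += 2*(abs2(ap - 2a*c) + abs2(am - 2a*c))
    end
    return err/trials
end

mse(0.5), mse(0.0), mse(1.0)   # c = 1/2 gives the smallest expected error
```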
Thanks for your nice examples (you may want to edit the result of abs. in line 3, block one). Yes, I do agree that it is the sum, as I was previously thinking (Sinc Interpolation based on FFT - #72 by RainerHeintzmann) and as we first implemented it in FourierTools.jl. Yet what threw us off was the DFT of a delta function placed at coordinate zero (or one, in Julia notation).
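For reference, the example in question looks roughly like this (a reconstruction, not the original code block):

```julia
using FFTW

δ = zeros(8); δ[1] = 1.0     # delta at coordinate zero (index 1 in Julia)
fft(δ)                       # every entry is 1.0 + 0.0im, including the 5th (Nyquist) bin
```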
Due to the aliasing, I would have expected a 2.0 at the 5th entry, but it is only a 1.0 (independent of whether n is even or odd). I am still not sure how to understand this effect, but it must have a very simple explanation. It looks a bit as if, when we downsample, we have the choice of either preserving deltas or preserving cosine functions correctly? But both the delta and the cosine are real and symmetric (leading to real and symmetric DFTs), so aliasing should act the same way on both. So what is going on here?
I think your confusion is that you expect there to be both a +180 and a -180 degree component, and thus that the amplitude of that coefficient should be doubled. This is not the case, because +180 and -180 degrees are the same frequency. If we shift the delta distribution over by one discrete unit, we see that the 5th coefficient is real. In fact, that will always be the case for a real input.
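For instance (a sketch with n = 8; not the original post's exact code):

```julia
using FFTW

x = zeros(8); x[2] = 1.0     # delta shifted over by one discrete unit
F = fft(x)
rad2deg.(angle.(F))          # phase decreases by 45° per bin (mod 360°); the 5th bin is at 180°
F[5]                         # ≈ -1.0 + 0.0im, i.e. purely real
```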
Odd Case (n = 9)
In the odd case +160 and -160 degrees are not the same and thus have distinct coefficients that are complex conjugates of each other.
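For example (a sketch; n = 9, with the delta again shifted by one sample):

```julia
using FFTW

x = zeros(9); x[2] = 1.0
F = fft(x)
rad2deg(angle(F[5])), rad2deg(angle(F[6]))   # ≈ -160° and +160°: two distinct bins
F[5] ≈ conj(F[6])                            # true: the pair is complex-conjugate
```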
Discrete/sampled signals are assumed by the FFT technique to be periodic and band-limited.
An impulse at time zero is assumed to repeat every N samples (i.e. every N*dt in time), and its FFT, from the formula definition, will be constant for all frequencies (in agreement with theory for the continuous case).
On the other hand, if a cosine signal is truncated in the time domain such that it does not complete an integer number of periods within the N*dt window, then there will be additional frequency components (the spectral-leakage effect). In other words, truncating a periodic function over an interval that is not a multiple of its period produces a sharp discontinuity in the time domain and side-lobes in the frequency domain.
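For instance (a sketch; 64 samples, comparing a cosine with an integer vs. a non-integer number of periods in the window):

```julia
using FFTW

n = 64
m = 0:n-1
abs.(fft(cos.(2π*5.0 .* m ./ n)))   # integer number of periods: energy only in the k = ±5 bins
abs.(fft(cos.(2π*5.5 .* m ./ n)))   # non-integer number of periods: side-lobes leak into all bins
```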
NB (a reminder for those who need it, including me): in the continuous case, the Fourier transform of c(t) = A\cos(2\pi f_0 t) has two impulses with amplitude A/2 at f = \pm f_0. If we let f_0 be 0, then c(t) = A (a constant) and in the Fourier domain we get one single impulse of amplitude A at f = 0.
Yes, indeed. However, here both examples (the delta comb as well as the cosine) repeat periodically without any jumps.
Both examples are undersampled by the even-sized FFT. Both are real at the Nyquist frequency.
Yet one undersampling can easily be reverted, the other cannot. The problem is that the delta has many more undersampled frequencies, whereas the cosine does not. Recovering the delta relies on the false impression that we can also recover the higher frequencies, but this is of course not true.
If you first band-limit the delta, then that band-limited signal behaves as you would expect.
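One way to make that concrete (a sketch; the grid sizes 16 → 8 and the convention of splitting the coarse-grid Nyquist frequency equally between k = +4 and k = -4 are illustrative assumptions in the spirit of the discussion above):

```julia
using FFTW

N = 16
x = zeros(N); x[1] = 1.0                 # delta at coordinate zero on the fine grid
X = fft(x)

# band-limit to what an n = 8 grid can represent: keep |k| ≤ 3 fully and
# split the coarse Nyquist frequency |k| = 4 equally between k = +4 and k = -4
W = zeros(N)
W[1:4] .= 1.0                            # k = 0, 1, 2, 3
W[14:16] .= 1.0                          # k = -3, -2, -1
W[5] = 0.5; W[13] = 0.5                  # k = +4 and k = -4 (the coarse Nyquist)
xbl = real(ifft(X .* W))                 # band-limited delta on the fine grid

y = xbl[1:2:end]                         # downsample to n = 8
fft(y)                                   # flat spectrum of ≈ 0.5 in every bin, incl. the Nyquist bin
```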
The example and the statement that goes with it are extremely puzzling:
Why would a time series built from a single impulse at t=0 with a total of 8 samples be aliased? And if it were, why would it be aliased only in position 5 of the FFT and not in additional positions?
Perhaps this is being taken out of its proper context, so I would appreciate it if you could elaborate on the rationale here. Thank you.
Indeed, it is so puzzling, which is why I posted it.
The cosine example posted before showed the factor of 2.0 for the frequency aliased onto Nyquist. Naive thinking made me expect the same for the delta peak, which also contains such an aliased frequency.
Yet it is not seen.
Remember that this “one delta pulse” really is a series of periodic delta pulses that do contain the frequency that is aliased.
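To make that explicit (a small sketch; the periodic delta is the equal-weight sum of all n discrete frequencies, with the Nyquist exponential appearing only once):

```julia
n = 8
m = 0:n-1
# sum of all n complex exponentials, each with coefficient 1/n; for even n the
# Nyquist frequency k = n/2 contributes a single term, not a ± pair
δ = sum(cis.(2π*k .* m ./ n) for k in 0:n-1) ./ n
round.(real(δ); digits = 12)   # ≈ [1, 0, 0, 0, 0, 0, 0, 0]
```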
Thank you for the pointer, but I still do not get how this is related to the discrete Fourier transform. How is the Dirichlet function D_n(x) used in the discrete Fourier transform?
You could probably post the simple case where M = N. Also, over the range of indices considered, k' = k, so the formulas will simplify greatly and it will be easier to reason about them.
It comes from the DFT of a rectangular window. Zero-padding can be thought of as multiplying a longer signal (which you don’t have) by a square window (resulting in the data you have), and hence by the convolution theorem this takes the spectrum of the longer signal and convolves it with a Dirichlet kernel, which is a form of trigonometric interpolation.
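A sketch of that relationship (the sizes n = 8 and N = 32 are just example values):

```julia
using FFTW

n, N = 8, 32
x = randn(n)
X = fft(x)
Xpad = fft([x; zeros(N - n)])             # zero-pad in time → denser sampling in frequency

# the same spectrum written as the original DFT bins convolved with a
# (phase-shifted) Dirichlet kernel D(θ) = Σ_{m=0}^{n-1} exp(2πi m θ)
D(θ) = sum(cis(2π*m*θ) for m in 0:n-1)
Xconv = [sum(X[k+1] * D(k/n - j/N) for k in 0:n-1) / n for j in 0:N-1]

maximum(abs.(Xpad .- Xconv))              # ≈ 0 (up to roundoff)
```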