No, it’s because you’re not computing an interpolation polynomial at all; you’re computing a kind of Fourier series, except that only nonnegative frequencies are allowed. As you can see, the points where this Fourier series matches the original function are indeed equidistant, but there’s no Runge phenomenon, because the Runge phenomenon is a property of polynomial interpolation at equidistant points, and a Fourier series is not a polynomial.
(Well, actually, it is a polynomial in the variable z = e^{i\pi (x + 1)}, so in that sense what you point out is correct: in terms of z, the sample points are not equidistant in a real interval; they are roots of unity, equally spaced around the unit circle. But the point remains that a Fourier series is not a polynomial in the original variable x.)
The rightmost point is missing because a Fourier series is periodic, so the right endpoint is considered the same as the left endpoint, not a distinct point. (Once again, in terms of z = e^{i\pi(x + 1)}, observe that e^{i\pi(-1 + 1)} = e^{i\pi (1 + 1)}.)
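To make this concrete, here is a minimal NumPy sketch (not ApproxFun itself; the function f and the number of points N are arbitrary choices for illustration) of the construction described above: sample at equidistant points of [-1, 1), which become roots of unity in z = e^{i\pi(x + 1)}, and fit a polynomial in z, i.e. a Fourier series in x with only nonnegative frequencies.

```python
import numpy as np

# Sample at N equidistant points of [-1, 1); the right endpoint is omitted
# because the series is periodic and identifies x = 1 with x = -1.
N = 16
x = -1 + 2 * np.arange(N) / N        # equidistant nodes in x
z = np.exp(1j * np.pi * (x + 1))     # the same nodes are N-th roots of unity in z

f = lambda t: np.exp(np.sin(np.pi * t))   # arbitrary smooth test function
c = np.fft.fft(f(x)) / N             # coefficients of p(z) = sum_k c_k z^k

# p is a degree-(N-1) polynomial in z, that is, a Fourier series in x
# containing only the nonnegative frequencies 0, 1, ..., N-1.
p = lambda t: np.polyval(c[::-1], np.exp(1j * np.pi * (t + 1)))

assert np.allclose(p(x), f(x))       # matches f exactly at the sample points
```

At the sample points the match is exact, which is why the plot shows agreement at equidistant points even though no interpolation polynomial (in x) is involved.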
Yeah, the problem is that negative frequencies are missing. Keeping only nonnegative frequencies results in a function whose value rotates monotonically counterclockwise in the complex plane as x increases, so unless the function is zero everywhere, it cannot be real everywhere. To obtain a real-valued function, you need positive and negative frequencies that form complex conjugate pairs such that the imaginary parts cancel.
You’re right that Chebyshev polynomials are also related to Fourier series, but the Chebyshev construction is built on a real-valued cosine series, which, written in exponential form, always has complex conjugate pairs of positive and negative frequencies, so it doesn’t run into this problem.
The distribution of interpolation points is what sets Chebyshev interpolants apart from other interpolation polynomials (trivially, since an interpolation polynomial is uniquely determined by its interpolation points). However, once again, `Fun(f, Taylor(-1..1))` does not compute an interpolation polynomial in the usual sense; it computes a Fourier series with a particularly unfortunate frequency-space constraint, and that is why it can never converge to any nonzero real-valued function.
You may rightly wonder whether `Fun(f, Taylor(-1..1))` is ever useful at all. Maybe, if you know your function f(x) is complex-valued and consists only of nonnegative frequency components. Equivalently, f(x) must have the property that, when mapped from [-1, 1) onto the unit circle by z = e^{i\pi(x + 1)} and analytically continued into the complex plane, the resulting function is holomorphic on the unit disc. I’m sure this class of functions is relevant in some contexts, such as signal analysis.
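For example (a NumPy sketch with a test function I made up): g(x) = 1/(2 − e^{i\pi(x + 1)}) is the restriction to the unit circle of 1/(2 − z), which is holomorphic on the unit disc with Taylor series \sum_k z^k / 2^{k+1}. For such a function the one-sided construction recovers the Taylor coefficients and converges between the sample points as well.

```python
import numpy as np

# g has only nonnegative frequencies: it is the boundary value on |z| = 1
# of 1/(2 - z), holomorphic on the unit disc, with coefficients 1/2^(k+1).
g = lambda t: 1 / (2 - np.exp(1j * np.pi * (t + 1)))

N = 32
x = -1 + 2 * np.arange(N) / N
c = np.fft.fft(g(x)) / N

# The recovered coefficients match the Taylor coefficients 1/2^(k+1)...
assert np.allclose(c[:8], 1 / 2.0 ** (np.arange(8) + 1))

# ...and the one-sided series now converges off the grid, too.
t = 0.3
approx = sum(c[k] * np.exp(1j * np.pi * (t + 1)) ** k for k in range(N))
assert np.isclose(approx, g(t))
```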
However, in most cases, you’re probably more interested in `Fun(f, Taylor(Circle(c, r)))`. As long as f(z) is holomorphic on the disc of radius `r` centered on `c`, this computes a Taylor expansion of f(x) around x = c. If a Taylor approximant is what you want, this is what you should use. But, once again, don’t forget that Taylor expansions are completely different from interpolation polynomials; `Fun(f, Taylor(Circle(c, r)))` does not give you an interpolation polynomial.
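The underlying computation can be sketched in a few lines of NumPy (this is my reconstruction of the idea, not ApproxFun's actual code): sample f on the circle of radius r around c, take a DFT, and rescale by r^{-k}; here checked against the known Taylor coefficients 1/k! of exp around 0.

```python
import numpy as np
from math import factorial

# Taylor coefficients of f around c0 from samples on a circle of radius r:
# a_k = (1/N) * sum_j f(c0 + r*w_j) * w_j^(-k) / r^k, with w_j the N-th
# roots of unity. The sum over j is a plain DFT of the circle samples.
f = np.exp
c0, r, N = 0.0, 1.0, 32
w = np.exp(2j * np.pi * np.arange(N) / N)
coeffs = np.fft.fft(f(c0 + r * w)) / N / r ** np.arange(N)

# For f = exp and c0 = 0 the coefficients are 1/k!:
assert np.allclose(coeffs[:10].real, [1 / factorial(k) for k in range(10)])
```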