Replicating Runge's phenomenon with ApproxFun

Hi there, I want to replicate Runge’s phenomenon (Runge's phenomenon - Wikipedia).

Specifically, I want to interpolate Runge’s function with a Taylor series. I expect that the polynomial I fit oscillates like crazy at the boundaries and fits the function well in the middle. But it oscillates everywhere!

using ApproxFun, Plots

f(x) = 1 / (1 + 25x^2) ## runge's function
tay5 = Fun(f, Taylor(-1..1), 6)
tay9 = Fun(f, Taylor(-1..1), 10)

x = -1 : .01 : 1
p1 = plot(x, f.(x), label = "Runge's function")
# degree = ncoefs - 1
plot!(x, real.(tay5.(x)), label = "degree 5", color = "red",
      title = "ApproxFun Taylor approximation\nwith interpolation points")
scatter!(points(tay5), zeros(6), label = "degree 5", color = "red")
plot!(x, real.(tay9.(x)), label = "degree 9", color = "green")
scatter!(points(tay9), zeros(10), label = "degree 9", color = "green")

points(Fun(f, Taylor(-1..1+1/11), 10))

The problem is that Fun(f, Taylor(-1..1), n) does not approximate f as an n-th order polynomial as you might expect:

f(x) \approx \sum_{k = 0}^n f_k x^k \, ,

but rather as a truncated Fourier series with only non-negative frequency components:

f(x) \approx \sum_{k = 0}^n f_k e^{i \pi k (x + 1)} \, .

What you’re plotting is the real part of that approximation:

\sum_{k = 0}^n f_k \cos(\pi k (x + 1)) \, ,

(As it turns out, you end up with all real f_k here, so only \cos terms appear in the real part.) This is dominated by oscillations at the highest frequency, n / 2, and is not subject to the Runge phenomenon.
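
If you want to check this against what ApproxFun actually computed, here’s a minimal sketch that rebuilds the series by hand from coefficients(tay9). The phase convention z = e^{i\pi(x+1)} is the one stated above; if ApproxFun uses a different mapping internally, the two values will disagree by a phase:

using ApproxFun

f(x) = 1 / (1 + 25x^2)
tay9 = Fun(f, Taylor(-1..1), 10)
c = coefficients(tay9)   # the f_k above

# Rebuild the truncated series by hand, assuming the mapping z = exp(im*π*(x + 1))
# stated above (this convention is an assumption; compare against tay9 to check).
series(x) = sum(c[k+1] * exp(im * π * k * (x + 1)) for k in 0:length(c)-1)

tay9(0.3), series(0.3)   # should agree if the convention matches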

The documentation is a bit sparse on this: Spaces · ApproxFun.jl. It does introduce Taylor as a space closely related to Fourier, but doesn’t spell out how various domains are mapped to the unit circle when using these spaces.


Oh snap… so what the docs say here (Library · ApproxFun.jl) could be made clearer, I guess.

Perhaps some of the confusion is rooted here: an interpolation polynomial is not a (truncated) Taylor series. Both are polynomials, but they are calculated in different ways. An n-th order interpolation polynomial is calculated from the function’s value at n + 1 different points. An n-th order truncated Taylor series is calculated from the values of the function and its first n derivatives at a single point.
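
To make the distinction concrete, here’s a small hand-worked sketch for the Runge function itself, comparing a degree-2 interpolation polynomial with a degree-2 truncated Taylor series (the derivative values are worked out by hand):

f(x) = 1 / (1 + 25x^2)

# Degree-2 interpolation polynomial through x = -1, 0, 1 (Lagrange form),
# built from three function values:
xs = [-1.0, 0.0, 1.0]
ys = f.(xs)
interp2(x) = ys[1] * (x - xs[2]) * (x - xs[3]) / ((xs[1] - xs[2]) * (xs[1] - xs[3])) +
             ys[2] * (x - xs[1]) * (x - xs[3]) / ((xs[2] - xs[1]) * (xs[2] - xs[3])) +
             ys[3] * (x - xs[1]) * (x - xs[2]) / ((xs[3] - xs[1]) * (xs[3] - xs[2]))

# Degree-2 truncated Taylor series around 0, built from f(0), f'(0), f''(0)
# (by hand: f(0) = 1, f'(0) = 0, f''(0) = -50):
taylor2(x) = 1 - 25x^2

interp2(0.5), taylor2(0.5)   # ≈ (0.76, -5.25): two very different polynomials

They agree at x = 0 but nowhere else in general, because they are built from different data.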


If the Taylor series of your function has convergence radius greater than 1, there’s a connection between Taylor and Fourier series that lets you calculate a truncated Taylor series without using derivatives, by evaluating the function on the unit circle in the complex plane. This is effectively what Fun(f, Taylor()) can accomplish (note: no -1..1; we need to use the default domain for this, i.e., the unit circle; see the sketch after the list below). See, e.g., “Connection between Fourier transform and Taylor series” on Mathematics Stack Exchange. But this is irrelevant to the Runge phenomenon for two reasons:

  • The Runge function has poles at \pm i / 5, so the radius of convergence of its Taylor series doesn’t extend to the unit circle (I suppose you could work around this by rescaling the method to use a smaller circle).
  • The Runge phenomenon isn’t a property of Taylor series anyway—it’s a property of interpolation polynomials.
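
For a function whose Taylor series does converge on the unit circle, the trick works out nicely. Here’s a minimal sketch with exp (I’d expect the coefficients to approximate 1/k!, since that’s the Taylor series of exp):

using ApproxFun

F = Fun(exp, Taylor())          # default domain: the unit circle
c = coefficients(F)
[c[k+1] - 1 / factorial(k) for k in 0:5]   # should all be ≈ 0 (perhaps up to tiny imaginary parts)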

Yeah for sure. What it says is true for Taylor(), but misleading for Taylor(a..b). Even for Taylor() it wouldn’t hurt to point out that the interpolation nodes are the n-th roots of unity, and explain that this is exactly what you need to obtain a Taylor series approximation, but that this only works if the Taylor series has sufficiently large convergence radius.

Theory aside, if you want to visualize the Runge phenomenon, use something like BasicInterpolators.jl instead of ApproxFun.jl. Here’s an example:

using BasicInterpolators, Plots

f(x) = 1 / (1 + 25x^2)

const x5 = range(-1, 1, 5)
const x9 = range(-1, 1, 9)

const y5 = f.(x5)
const y9 = f.(x9)

# BasicInterpolators.neville(x, xn, yn) calculates the value at x
# of the interpolation polynomial passing through (xn, yn)
f5(x) = neville(x, x5, y5)
f9(x) = neville(x, x9, y9)

x = range(-1, 1, 100)
plot(x, f.(x); label="Runge function")
plot!(x, f5.(x); label="degree 5", color="red")
plot!(x, f9.(x); label="degree 9", color="green")
scatter!(x5, 0.05oneunit.(x5); label="degree 5", color="red")
scatter!(x9, -0.05oneunit.(x9); label="degree 9", color="green")


@danielwe, many thanks for your detailed answers, I appreciate it.

So just to clarify some things: with Fun(f, Taylor(-1..1), 10) we don’t see Runge’s phenomenon because the interpolation points are transformed by \cos, so they are in fact not equidistant? That’s what confused me: points(Fun(f, Taylor(-1..1), 10)) reveals equidistant points. Also, do you know why the rightmost point is always missing? The interpolation points of Taylor always start at the leftmost point of the interval but never end on the rightmost point.

The other confusion for me is the relation of Chebyshev polynomials to the truncated Fourier series with only non-negative frequency components, as you write. Chebyshev will converge to Runge’s function with increasing N, but this particular Fourier series won’t. Is that because the negative frequencies are missing? Or because the interpolation points are distributed differently?

using ApproxFun, Plots

f(x) = 1 / (1 + 25x^2)   ## Runge's function
tay5 = Fun(f, Taylor(-1..1), 6)
tay9 = Fun(f, Taylor(-1..1), 10)
x = -1 : .01 : 1

p1 = plot(x, f.(x), label = "Runge's function")
# degree = ncoefs - 1
plot!(x, real.(tay5.(x)), label = "degree 5", color = "red")
scatter!(points(tay5), zeros(6), label = "degree 5", color = "red")
plot!(x, real.(tay9.(x)), label = "degree 9", color = "green")
scatter!(points(tay9), zeros(10), label = "degree 9", color = "green")

No, it’s because you’re not computing an interpolation polynomial at all, you’re computing a kind of Fourier series, except you only allow nonnegative frequencies. As you can see, the points where this Fourier series matches the original function are indeed equidistant, but there’s no Runge phenomenon because that’s a property of interpolation polynomials, and a Fourier series is not a polynomial.

(Well, actually, it is a polynomial in the variable z = e^{i\pi (x + 1)}, so in that sense what you point out is correct: in terms of z, the sample points are not equidistant in a real interval; they are roots of unity, equally spaced around the unit circle. But the point remains that a Fourier series is not a polynomial in the original variable x.)
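
You can see this directly by pushing the sample points through that map (again assuming the z = e^{i\pi(x+1)} convention stated earlier):

using ApproxFun

f(x) = 1 / (1 + 25x^2)
tay9 = Fun(f, Taylor(-1..1), 10)

# Map the equidistant real sample points into the complex plane; under the
# convention above they should land equally spaced on the unit circle.
z = exp.(im * π * (points(tay9) .+ 1))
abs.(z)     # all ≈ 1
angle.(z)   # equally spaced angles (up to ordering and 2π wrapping)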

The rightmost point is missing because a Fourier series is periodic, so the right endpoint is considered the same as the left endpoint, not a distinct point. (Once again, in terms of z = e^{i\pi(x + 1)}, observe that e^{i\pi(-1 + 1)} = e^{i\pi (1 + 1)}.)

Yeah, the problem is that negative frequencies are missing. Keeping only nonnegative frequencies results in a function whose value rotates monotonically counterclockwise in the complex plane as x increases, so unless the function is zero everywhere, it cannot be real everywhere. To obtain a real-valued function, you need positive and negative frequencies that form complex conjugate pairs such that the imaginary parts cancel.
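
To make that concrete: a conjugate pair of frequencies combines into a purely real cosine term,

f_k e^{i \pi k (x + 1)} + \bar{f}_k e^{-i \pi k (x + 1)} = 2 \, \mathrm{Re}\!\left[ f_k e^{i \pi k (x + 1)} \right] = 2 |f_k| \cos(\pi k (x + 1) + \arg f_k) \, ,

which is real for every x, whereas a lone nonzero-frequency term on its own is not.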

You’re right that Chebyshev polynomials are also related to Fourier series, but this construction is built on a real-valued cosine series, which, if you write it in exponential form, always has complex conjugate pairs of positive and negative frequencies, so you don’t run into this problem.

The distribution of interpolation points is what sets Chebyshev interpolants apart from other interpolation polynomials (trivially, since interpolation polynomials are uniquely defined by their interpolation points). However, once again, Fun(f, Taylor(-1..1)) does not compute an interpolation polynomial in the usual sense; it computes a Fourier series with a particularly unfortunate frequency-space constraint, and this is the reason it can’t ever converge to any real-valued function.
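
A quick numerical check of that claim (a sketch; the exact error values depend on how many coefficients you use):

using ApproxFun

f(x) = 1 / (1 + 25x^2)
xs = range(-1, 1, 201)

cheb = Fun(f, Chebyshev(-1..1), 50)
tay  = Fun(f, Taylor(-1..1), 50)

maximum(abs.(f.(xs) .- cheb.(xs)))         # small: the Chebyshev expansion converges
maximum(abs.(f.(xs) .- real.(tay.(xs))))   # order 1: the Taylor(-1..1) series does not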

You may rightly wonder if Fun(f, Taylor(-1..1)) is ever useful at all. Maybe, if you know your function f(x) is complex-valued and consists only of nonnegative frequency components. Equivalently, f(x) must have the property that, when mapped from [-1, 1) onto the unit circle by z = e^{i\pi(x + 1)}, and analytically continued into the complex plane, the resulting function is holomorphic on the unit disc. I’m sure this class of functions is relevant in some contexts, like signal analysis.

However, in most cases, you’re probably more interested in Fun(f, Taylor(Circle(c, r))). As long as f(z) is holomorphic on the disc of radius r centered on c, this computes a Taylor expansion of f(x) around x = c. If a Taylor approximant is what you want, this is what you should use. But, once again, don’t forget that Taylor expansions are completely different from interpolation polynomials; Fun(f, Taylor(Circle(c, r))) does not give you an interpolation polynomial.
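
Here’s a minimal sketch of that route for the Runge function. Its poles are at \pm i/5, so the circle needs radius r < 1/5:

using ApproxFun

f(z) = 1 / (1 + 25z^2)
F = Fun(f, Taylor(Circle(0.0, 0.1)))   # expand around c = 0 on a circle of radius 0.1

F(0.1) ≈ f(0.1)      # F reproduces f on the circle
ncoefficients(F)     # how many terms ApproxFun decided it needed
# coefficients(F) should correspond to the Taylor coefficients of f around 0,
# up to whatever normalization ApproxFun uses for the Taylor space on a circle.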
