Truncated power series in ApproxFun.jl

I gave ApproxFun.jl another look.

As far as I can see, truncated power series in ApproxFun.jl (required for, e.g., the Duffing equation) are poorly documented.

Am I overlooking something?

Do you mean approximation in the monomial basis \{1,x,x^2,\ldots\} (the Taylor() basis in ApproxFun)? I was surprised to see that this was included at all in ApproxFun.jl, to be honest — the monomial basis is very badly behaved at high degrees (leads to ill conditioning and numerical instability), which is why high-degree polynomial approximation tends to be in terms of orthogonal polynomials such as Chebyshev polynomials instead. This is why Chebyshev(a .. b) is the default basis used in ApproxFun for approximating functions on finite intervals [a, b].
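For example, the default construction already picks the Chebyshev basis (a minimal sketch; the particular function and interval are arbitrary):

using ApproxFun
f = Fun(x -> cos(20x) + exp(x), 0..1)  # equivalent to Fun(..., Chebyshev(0..1))
space(f), ncoefficients(f)             # Chebyshev space, adaptively chosen length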

If you mean specifically truncated Taylor series and similar expansions, such as in series methods to solve ODEs (the Frobenius method and friends), however, this is fundamentally not what ApproxFun is about. A Taylor series approximates a function near a single point, and becomes more and more accurate as you get closer to that point. Polynomial approximation à la ApproxFun (or chebfun etc.) is about approximating a function on a whole domain, ideally with exponentially (in the degree) good accuracy everywhere in the domain.

(You can still solve boundary-value ODEs and PDEs in ApproxFun, but it is a spectral method that tries to get exponential accuracy everywhere in a domain. A larger-scale software package for this sort of method, alas with a Python interface, is Dedalus.)

PS. I moved your post to a new thread since it was off-topic for the original thread. In general, you should try to start new threads for new questions, linking the original thread if you want to cross-reference anything there.

1 Like

One thing I always wonder when you say this is what qualifies as high degree in this context.

Also, I’d like to note that in practice it often makes sense to consider custom bases, such as the even monomials \{ 1, x^2, x^4, x^6 \}, depending on the function and the interval.
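As a concrete illustration (plain least squares, nothing package-specific; the sample points and target function are arbitrary choices):

using LinearAlgebra
# fit an even function in the even-monomial basis {1, x^2, x^4, x^6} on [-1, 1]
x = range(-1, 1; length = 200)
A = [xi^(2j) for xi in x, j in 0:3]  # columns: 1, x^2, x^4, x^6
coeffs = A \ cos.(x)                 # least-squares coefficients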

Lastly, I have a relevant package, in case someone finds it useful: FindMinimaxPolynomial.jl (no support for orthogonal bases yet, though).

I have not used it personally, but if you are looking for a truncated power series, GTPSA.jl seems to be a pretty directly relevant option to consider as well.

1 Like

Thank you @stevengj for moving my post to a new thread. Much appreciated.

My query is not about Taylor approximations. I was not clear enough in my previous post. Thanks @nsajko and @cgeoga for your input.

My query is about the harmonic balance method as implemented in, e.g., HarmonicBalance.jl and illustrated in an MWE for the Duffing equation at BifurcationKit: Duffing Equation: Time-Harmonic: Single Harmonic

One way to explain the harmonic balance method is to consider the ApproxFun.jl linear-system example at Linear Equations · ApproxFun.jl. It is wonderful to see how succinct the code is! I imagine that under the hood ApproxFun.jl assembles and solves a 2-by-2 linear system for the amplitudes of cos(t) and sin(t).
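For reference, that kind of periodic linear solve looks roughly like this (an untested sketch closely following the linked docs; the particular coefficient and right-hand side are arbitrary):

using ApproxFun
# u'(θ) + a(θ) u(θ) = f(θ) with 2π-periodic u, in the Fourier basis
a = Fun(θ -> 1 + sin(cos(2θ)), Fourier())
L = Derivative() + a
f = Fun(θ -> exp(sin(θ)), Fourier())
u = L \ f  # ApproxFun assembles and solves the (adaptively truncated) linear system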

Suppose now that we replace the zeroth-order term c u(\theta) by the quadratic (thus non-linear) term c [u(\theta)]^2, or by c [u(\theta)]^3 to make it more Duffing-like, and that we again look for a 2\pi-periodic solution. Can this non-linear problem be solved inside ApproxFun.jl?

If not directly, can ApproxFun.jl be instrumented to assemble the non-linear system for the expansion coefficients? This would then allow using ApproxFun.jl as a preprocessor for a non-linear solver, or for BifurcationKit.jl, as the MWE shows.

In practice, I've found that degree 40–50 is about where one runs into big trouble with monomials (in double precision).

Basically, the insurmountable problems occur when the condition number of the Vandermonde matrix (which grows exponentially ill-conditioned with degree) becomes on the order of 1/eps(). Until then, monomials can sometimes work surprisingly well — even though the polynomial coefficients become increasingly inaccurate due to the ill-conditioning, the error in the approximated function is a backwards error and remains small, as discussed in Shen and Serkh (2022).

Assuming you use Chebyshev points or similar (note that equally spaced points are a disaster in any polynomial basis) on [-1, 1], the condition number of the Vandermonde matrix hits 1/eps(Float64) at about degree 44. That means that if your function needs much higher degree than that to approximate in your interval (which happens for highly oscillatory functions, functions with near singularities, etcetera), then the monomial basis will be problematic (its backwards error stagnates before convergence is achieved).

In contrast, the condition number of the “Chebyshev–Vandermonde” matrix (whose rows still correspond to Chebyshev points, but whose columns are Chebyshev polynomials rather than monomials) stays bounded below 2 for arbitrarily high degrees.
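For anyone who wants to check this numerically, here is a small self-contained sketch (the chebpoints helper and the chosen degrees are my own, not from any package):

using LinearAlgebra
chebpoints(n) = [cos(π * (2k - 1) / (2n)) for k in 1:n]  # first-kind Chebyshev points
for degree in (10, 20, 44, 80)
    x = chebpoints(degree + 1)
    V = [xi^j for xi in x, j in 0:degree]               # monomial Vandermonde
    T = [cos(j * acos(xi)) for xi in x, j in 0:degree]  # Chebyshev “Vandermonde”
    println("degree $degree: cond(V) ≈ ", cond(V), ", cond(T) ≈ ", cond(T))
end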

2 Likes

I think you could do it straightforwardly by a Newton iteration (or a related sequence of linearizations).

For example, suppose you are solving u' + cu^2 = f on some function space (e.g. periodic functions as in the ApproxFun example). Start with a guess u_k, and construct an improved guess u_{k+1} by linearizing u_{k+1} = u_k + \delta u in \delta u and solving for \delta u:

(u_k + \delta u)' + c (u_k + \delta u)^2 = f
\implies \delta u' + 2 c u_k \delta u \approx f - u_k' - c u_k^2

which you can then solve as a linear equation in ApproxFun, with something like (untested):

δu = (𝒟 + Multiplication(2c*uₖ)) \ (f - uₖ' - c*uₖ^2)  # linearized solve for the correction δu
uₖ₊₁ = uₖ + δu  # Newton update

and then repeat.

Of course, like all Newton iterations, this may converge unpredictably unless you start out with a sufficiently good guess, e.g. if you turn on the nonlinearity slowly and perform continuation.
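To make the iteration concrete, here is a minimal untested sketch of the full loop for u' + cu^2 = f on periodic functions (the name newton_periodic, the starting guess, and the tolerances are illustrative choices, not ApproxFun API):

using ApproxFun, LinearAlgebra

function newton_periodic(f, c; u = 0.1 * f, tol = 1e-12, maxiter = 20)
    𝒟 = Derivative(space(f))
    for _ in 1:maxiter
        r = f - u' - c * u^2                   # residual of the nonlinear equation
        norm(r) ≤ tol && break
        u += (𝒟 + Multiplication(2c * u)) \ r  # linearized (Newton) step
    end
    return u
end

f = Fun(θ -> cos(θ), Fourier())  # 2π-periodic right-hand side
u = newton_periodic(f, 0.1)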

The tricky part of doing this in a conventional nonlinear-solver package is that the iteration state (u) is not a fixed-length array — ApproxFun will adaptively adjust the number of polynomial terms as needed to maintain accuracy. So you need to support a more general state vector space. Fortunately, it seems that BifurcationKit.jl supports exactly this feature, and even has a tutorial example using ApproxFun.

Thanks. I will think this over.

Can ApproxFun be instrumented to keep the polynomial degree fixed?

I don’t know, but it’s not really in the spirit of the package — the whole point is to give the illusion that you are doing infinite-dimensional linear algebra on functions, by internally keeping every term to nearly machine precision (via whatever number of basis terms is required at each step) or whatever your desired tolerance is.

(You can specify a relative-error tolerance, though, as a keyword parameter to \.)
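For example (a sketch using the tolerance keyword; the operator and right-hand side here are just placeholders):

using ApproxFun
S = Fourier()
L = Derivative(S) + Fun(θ -> 1 + sin(cos(2θ)), S)
f = Fun(θ -> exp(sin(θ)), S)
u = \(L, f; tolerance = 1e-8)  # solve to ~1e-8 instead of near machine precision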

2 Likes

Taylor series are completely stable and well-conditioned if you sample on the (complex) unit circle, which is the canonical domain for Taylor(), provided your function is analytic in the disk.
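For instance (an untested sketch; this just expands exp in the Taylor() basis by sampling on the circle):

using ApproxFun
f = Fun(exp, Taylor())  # Taylor() lives on the unit circle by default
coefficients(f)[1:5]    # ≈ [1.0, 1.0, 0.5, 1/6, 1/24], the Taylor coefficients 1/n!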

3 Likes

My newer package ClassicalOrthogonalPolynomials.jl does support fixed degrees a bit better. Though it’s even more poorly documented than ApproxFun.jl :joy: And certainly more buggy.

In ApproxFun.jl you can force the number of coefficients in an expansion via Fun(f, Chebyshev(), n).
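For example (a minimal sketch of that method):

using ApproxFun
u = Fun(exp, Chebyshev(), 20)  # force exactly 20 Chebyshev coefficients
ncoefficients(u)               # == 20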

5 Likes