 # Inverse functions using TaylorSeries.jl

I’m interested in using AD, and have been something like 90% successful in using TaylorSeries.jl to generate the inverse function of almost any arbitrary function. For example, TaylorSeries.jl can invert `asin` from `sin`, `sin` from `asin`, etc., but, strangely enough, not `log`/`ln` from `exp`. Might you have an insight as to why the MWE snippet below works for everything except the `exp` function?

``````julia
using MicroLogging
using TaylorSeries

# NOTE from the TaylorSeries.jl docs, https://www.juliadiff.org/TaylorSeries.jl/v0.5/api.html :
#   inverse(f)
#   Return the Taylor expansion of f⁻¹(t), of order N = f.order, for
#   f::Taylor1 polynomial, if the first coefficient of f is zero.
#   Otherwise, an ArgumentError is thrown.
#   The algorithm implements Lagrange inversion at t = 0 if f(0) = 0:
#     f⁻¹(t) = ∑_{n=1}^{N} (tⁿ/n!) dⁿ⁻¹/dzⁿ⁻¹ (z/f(z))ⁿ |_{z=0}
sin(asin(0.9))
# 0.9
asin(sin(0.9))
# 0.9
t = Taylor1(15)
@debug "t = Taylor1(15) ; 1.0 t + 𝒪(t¹⁶)"
p = sin(t)
TaylorSeries.evaluate(inverse(p), TaylorSeries.evaluate(p, 0.9))
# 0.8999969178405351

tBig = Taylor1(BigFloat, 50) # independent variable with BigFloats, order 50
# 1.0 t + 𝒪(t⁵¹)
TaylorSeries.evaluate(inverse(exp(tBig)), TaylorSeries.evaluate(exp(tBig), 0.9))
# ERROR: ArgumentError: Evaluation of Taylor1 series at 0 is non-zero.
# For high accuracy, revert a Taylor1 series with first coefficient 0
# and re-expand about f(0).
``````

At first glance it seems like inverting `exp` with a Taylor series would be the easiest case, not the hardest. So hopefully you or someone on this thread might have some hints about that. TY

The error message is giving you the clue: the value of `exp(t)` at `t=0` is `1`.
When you reflect this in the line `y=x` to find the inverse function, that function (`log`) cannot be expanded in a power series around `t=0`.
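To see concretely why the zero constant term matters, here is a minimal plain-Julia sketch of order-by-order series reversion (no TaylorSeries.jl required; `series_mul`, `series_compose`, and `revert` are hypothetical helper names, not the library’s API): reverting the coefficients of `exp(t) - 1` recovers those of `log(1+t)`, while a nonzero constant term makes `f(g(t)) = t` unsolvable at order zero.

```julia
# Coefficient vectors are indexed so that a[k] is the coefficient of t^(k-1).

# truncated product of two series of equal length
function series_mul(a, b)
    N = length(a)
    c = zeros(N)
    for i in 1:N, j in 1:N-i+1
        c[i+j-1] += a[i] * b[j]
    end
    return c
end

# truncated composition f(g(t)); requires g[1] == 0
function series_compose(f, g)
    N = length(f)
    acc = zeros(N); acc[1] = f[1]
    gp = zeros(N);  gp[1] = 1.0            # running power g^k
    for k in 2:N
        gp = series_mul(gp, g)
        acc .+= f[k] .* gp
    end
    return acc
end

# compositional inverse, one coefficient at a time: the t^n coefficient of
# f(g(t)) depends on g[n] only through the linear term f[2]*g[n]
function revert(f)
    f[1] == 0 || throw(ArgumentError("constant term must be zero"))
    N = length(f)
    g = zeros(N); g[2] = 1 / f[2]
    for n in 3:N
        c = series_compose(f, g)[n]        # computed with g[n] still zero
        g[n] = -c / f[2]
    end
    return g
end

f = [0.0; [1 / factorial(n) for n in 1:7]]  # exp(t) - 1
g = revert(f)                               # ≈ [0, 1, -1/2, 1/3, -1/4, …]
```

The triangular structure is exactly what fails for `exp(t)` itself: with `f[1] == 1` there is no power series `g` with `g(0) = 0` satisfying `f(g(t)) = t`.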

Also, please prefer to start a new thread for questions that are not really related to the original one.

cc @lbenet


Thank you for your response; it spurred me on, and below is my (initial) solution to an issue I was having with TaylorSeries.jl, which leads to another question. More generally, I’m interested in using TaylorSeries.jl as an implementation of high-order automatic differentiation / AD, as presented in the book by W. Tucker and described more completely in a separate thread.

One partial solution: using TaylorSeries.jl to get the inverse function `exp` by defining the Taylor1 series of `log` now works:

``````julia
julia> tBig = Taylor1(BigFloat, 50)  # independent variable with BigFloats, order 50
1.0 t + 𝒪(t⁵¹)

julia> p = log(tBig + 1.0)

julia> TaylorSeries.evaluate(inverse(p), TaylorSeries.evaluate(p, 0.9))
8.999082e-01

julia> TaylorSeries.evaluate(inverse(p), 0.9)
1.459603

julia> exp(0.9)
2.45960311115695

julia> TaylorSeries.evaluate(inverse(p), 0.9) + 1.0
2.459603111156949
``````

cc @lbenet

I have moved this discussion to a new thread.


As noted by the error message, to invert a series using the Lagrange inversion formula, you need it to have a zero constant term. The following, I think, is what you are looking for:

``````julia
julia> using TaylorSeries

julia> t = Taylor1(5)  # independent variable
1.0 t + 𝒪(t⁶)

julia> p = exp(t)  # Taylor series of `exp` around zero
1.0 + 1.0 t + 0.5 t² + 0.16666666666666666 t³ + 0.041666666666666664 t⁴ + 0.008333333333333333 t⁵ + 𝒪(t⁶)

julia> inverse(p-1)  # t -> t-1, so the new expansion is around 1
1.0 t - 0.5 t² + 0.3333333333333333 t³ - 0.25 t⁴ + 0.2 t⁵ + 𝒪(t⁶)

julia> log(1+t)  # Taylor series of `log` around 1
1.0 t - 0.5 t² + 0.3333333333333333 t³ - 0.25 t⁴ + 0.2 t⁵ + 𝒪(t⁶)
``````

Hi Luis @lbenet ,

Here’s a summary of what we have figured out so far:

``````julia
# Tip: how to use a Taylor series polynomial to automatically compute the
# inverse function of an arbitrary function.  This example uses the Taylor
# series inverse of exp to compute the natural log.
shift_taylor(a) = a + Taylor1(typeof(a), 127)
p = exp(shift_taylor(0.0))  # Taylor series of `exp` around zero
exp(TaylorSeries.evaluate(inverse(p - 1.0), 1.0))
# 2.00785841499612
exp(TaylorSeries.evaluate(inverse(p - 1.0), 0.9))
# 1.9000000109002069
exp(TaylorSeries.evaluate(inverse(p - 1.0), 0.5))
# 1.5
exp(TaylorSeries.evaluate(inverse(p - 1.0), 0.0))
# 1.0
``````

Next issue to tackle, with some observations that may lead to useful techniques for improving accuracy near the endpoint value 2.0. Issue example code:

``````julia
shift_taylor(a) = a + Taylor1(typeof(a), 127)
p = exp(shift_taylor(0.0))  # Taylor series of `exp` around zero
exp(TaylorSeries.evaluate(inverse(p - 1.0), 1.0))
# 2.00785841499612
``````

Issue description: despite the very high order (127) of the Taylor1 polynomial, there is a reduction of accuracy / loss of significant figures near the endpoint at 2.0. In other words, truncated Taylor series polynomials appear subject to Runge’s phenomenon, a.k.a. ringing at endpoint discontinuities.

Solution tip: visualize the issue and possible mitigations.
Details: Runge’s function is given by … When this function is sampled at equally spaced points, the difference between the interpolating polynomial and the function grows without bound as the number of sample points increases. When Chebyshev points are used instead, the error diminishes as the number of points increases.
https://demonstrations.wolfram.com/RungesPhenomenon/
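The equispaced-vs-Chebyshev effect is easy to reproduce. Here is a small self-contained sketch (plain Julia with a naive Vandermonde fit; the node count and grid size are arbitrary choices, and `interp_error` is a hypothetical helper) comparing the maximum interpolation error for Runge’s function 1/(1+25x²):

```julia
using LinearAlgebra

runge(x) = 1 / (1 + 25x^2)

# Fit the degree n-1 interpolating polynomial through nodes xs by solving a
# Vandermonde system, then report the max error on a fine grid over [-1, 1].
function interp_error(xs)
    n = length(xs)
    V = [x^j for x in xs, j in 0:n-1]   # Vandermonde matrix
    c = V \ runge.(xs)                  # monomial coefficients
    grid = range(-1, 1, length=1001)
    maximum(abs(sum(c[j+1] * x^j for j in 0:n-1) - runge(x)) for x in grid)
end

n = 21
equi = collect(range(-1, 1, length=n))
cheb = [cos((2k - 1) * π / (2n)) for k in 1:n]   # Chebyshev nodes

interp_error(equi)   # large, and grows with n (Runge's phenomenon)
interp_error(cheb)   # small, and shrinks with n
```

The Vandermonde solve is numerically crude but adequate at this size; barycentric interpolation would be the robust choice for larger `n`.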

Solution tip: operate strictly in frequency space using the continuous Fourier transform / FFT. Truncated polynomial series for trigonometric functions such as sin(x) and cos(x) are subject to the Gibbs phenomenon, a.k.a. ringing at endpoint discontinuities.

In the polynomial interpolation setting, the Gibbs phenomenon can be mitigated using the S-Gibbs algorithm. A Python implementation of this procedure can be found here.

https://mathworld.wolfram.com/GibbsPhenomenon.html
The Gibbs phenomenon is an overshoot (or “ringing”) of Fourier series and other eigenfunction series occurring at simple discontinuities. It can be reduced with the Lanczos sigma factor.
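To illustrate sigma-approximation, here is a small plain-Julia sketch (the square wave, the cutoff `N = 64`, and the grid are arbitrary choices): damping the partial Fourier sum of a square wave with Lanczos sigma factors shrinks the Gibbs overshoot.

```julia
# Partial Fourier sum of the square wave sign(sin(x)) up to harmonic N,
# optionally damped by the Lanczos sigma factors sin(πk/N)/(πk/N).
function square_partial(x, N; sigma=false)
    s = 0.0
    for k in 1:2:N-1                             # odd harmonics only
        factor = sigma ? sin(π*k/N) / (π*k/N) : 1.0
        s += factor * 4 / (π*k) * sin(k*x)
    end
    return s
end

N = 64
grid = range(0, π, length=2001)
maximum(square_partial.(grid, N))                # overshoots 1 (Gibbs, ≈ 9%)
maximum(square_partial.(grid, N, sigma=true))    # overshoot largely removed
```

The sigma factors trade the sharp corner for a slightly wider transition band, which is why the damped sum also looks smoother near the jump.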

Book: Numerical Methods that Work (Spectrum), 1st edition, by Forman S. Acton; a classic originally published in 1970. Part II presents basic tools.

=============
Moving a compiled-code vs REPL issue (Julia v1.0.4) here for now; it will probably become a separate post later.

Issue example description: when you execute all three lines at once in the REPL, the answer is different than if you pause before running `TaylorSeries.evaluate` (a v1.0.4-specific bug?). This points to a “race condition”, a common programming issue, described after the example code here:

``````julia
shift_taylor(a) = a + Taylor1(typeof(a), 127)
p = exp(shift_taylor(0.0))  # Taylor series of `exp` around zero

julia> exp(TaylorSeries.evaluate(inverse(p - 1.0), 1.0))
2.9131380935020

julia> exp(TaylorSeries.evaluate(inverse(p - 1.0), 1.0))
2.00785841499612
``````

Race Condition Issue Description: One definition of race condition describes it as “anomalous behavior due to unexpected critical dependence on the relative timing of events.” https://devopedia.org/race-condition-software
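For what it’s worth, the textbook flavor of a race condition is easy to demonstrate in Julia itself (this is only a generic illustration of the concept, not a reproduction of the REPL behavior above; `racy_count` and `safe_count` are hypothetical names): unsynchronized read-modify-write updates from several threads can lose counts, while an atomic counter cannot.

```julia
using Base.Threads

# Unsynchronized counter: n[] += 1 is a read-modify-write, so two threads
# can read the same value and one of the increments is lost.
function racy_count(total)
    n = Ref(0)
    @threads for i in 1:total
        n[] += 1
    end
    return n[]
end

# Atomic counter: each increment happens indivisibly.
function safe_count(total)
    n = Atomic{Int}(0)
    @threads for i in 1:total
        atomic_add!(n, 1)
    end
    return n[]
end

safe_count(100_000)   # always 100000
racy_count(100_000)   # can be less than 100000 when run with several threads
```

Run Julia with `JULIA_NUM_THREADS` greater than 1 to see the racy version actually drop updates.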

There are a bunch of issues reported here. I’ll try to clarify some of them while I work to understand the others. Thanks a lot for the reference. As for the race condition you report, I can’t check it now; please do open an issue in TaylorSeries.jl to follow it up.

What I understand from the code with `exp` and its inverse (`log`), and the fact that you are using a quite high order, is that you are interested in checking how the series behaves as you approach the radius of convergence. The following shows that the series does converge, though slowly:

``````julia
for ord = 100:100:500
    t = Taylor1(ord)   # independent variable of order `ord`
    p = exp(t)
    q = inverse(p - 1)
    l2 = q(1.0)        # equivalent to `evaluate(q, 1.0)`
    @show(ord, l2 - log(2.0))
    println()
end

ord = 100
l2 - log(2.0) = -0.004975001249750366

ord = 200
l2 - log(2.0) = -0.0024937500781213595

ord = 300
l2 - log(2.0) = -0.0016638889043206762

ord = 400
l2 - log(2.0) = -0.0012484375048821272

ord = 500
l2 - log(2.0) = -0.0009990000019985956
``````

As can be seen, the difference between the evaluated Taylor polynomial and `log(2)` gets smaller as the order is increased.
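This rate can be checked without TaylorSeries.jl at all: the reverted series is the Maclaurin series of log(1+t), whose t^n coefficient is (-1)^(n+1)/n, so evaluating at t = 1.0 is just a partial sum of the alternating harmonic series (error about -1/(2N)), while inside the radius of convergence the same series converges geometrically. A minimal sketch (`log1p_partial` is a hypothetical helper):

```julia
# Partial sum of the Maclaurin series of log(1+t), truncated at order N
log1p_partial(t, N) = sum((-1)^(n+1) * t^n / n for n in 1:N)

log1p_partial(1.0, 100) - log(2.0)   # ≈ -0.004975, matching ord = 100 above
log1p_partial(0.5, 100) - log(1.5)   # ≈ 0 to machine precision
```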

I should clarify that `evaluate` uses Horner’s method, so there is no interpolation for computing the evaluation of the polynomial itself, nor in the computation of the coefficients of the series. Certainly, there is a loss of accuracy, which is due to the fact that the coefficients have finite precision, and that you are evaluating the series at the radius of convergence.
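For reference, Horner’s method evaluates a degree-N polynomial with N multiplications and N additions; a one-line sketch (the `horner` helper is hypothetical, not the library’s internal code):

```julia
# Horner evaluation of a polynomial with coefficients c (c[1] = constant term):
# c[1] + x*(c[2] + x*(c[3] + ...))
horner(c, x) = foldr((ck, acc) -> ck + x * acc, c)

horner([1.0, 2.0, 3.0], 2.0)   # 1 + 2*2 + 3*4 = 17.0
```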
