Seeking efficient, fast manipulation of power series

I am investigating changing from the Reduce computer algebra system to Julia's Symbolics. My first main need is efficient manipulation of power series (aiming for multiple small variables, with coefficients that are complicated algebraic and/or vector expressions). How can this be done?

A simple example. In Reduce I simply do the following
1: on div,revpri;
2: let a^5=>0;
3: (1+a/2)^9;
1 + 9/2*a + 9*a^2 + 21/2*a^3 + 63/8*a^4
4: ws*(1-a/2);
1 + 4*a + 27/4*a^2 + 6*a^3 + 21/8*a^4
to indicate how all subsequent algebraic steps are automatically and seamlessly computed to truncations in "a", with error O(a^5).

In Julia, all I can see to do is
julia> using Symbolics
julia> @variables a
julia> taylor((1+a/2)^9,a,0:4)
(1//1) + (9//2)a + (9//1)(a^2) + (21//2)(a^3) + (63//8)(a^4)
julia> taylor(ans*(1-a/2),a,0:4)
(1//1) + (4//1)a + (27//4)(a^2) + (6//1)(a^3) + (21//8)(a^4)
Unfortunately, this code seems computationally slow because the taylor() function seems slow (correct me if I am wrong). It is also slow for human coding because one has to repeatedly and explicitly invoke taylor() (and also explicitly invoke expand() for anything that does not need immediate truncation).
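The best I can do is hide the repetition in a small helper (a sketch only; the name cut is mine, nothing standard in Symbolics):

using Symbolics
@variables a
cut(ex) = taylor(ex, a, 0:4)   # hypothetical helper: truncate to error O(a^5)
s = cut((1 + a/2)^9)
s = cut(s * (1 - a/2))

but that still invokes taylor() at every step, so it does not address the speed concern.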

Please advise what could be efficient and fast in Julia.

Tony Roberts

A good way out is TaylorDiff.jl

You can simply define a as a TaylorScalar{5} (not sure that's the exact right syntax, see their docs) and it will apply the FaĆ  di Bruno formula at compile time to expand f(a) for any Julia function f.

Edit:


using TaylorDiff
a = 0.367294
f(x) = (1 + x/2)^9
t = TaylorScalar{5}(a, 1.0)
f(t)

This might not work, I did not test it, but it should.

Sorry, I have no idea what the results of this are. They seem to have nothing whatsoever to do with the power series expansion of (1+a/2)^9 to errors O(a^5).

Divide them by factorial.(0:5) and you'll have the coefficients you want. Now that I think about it, you might want to set a=0 instead of 0.36…

Not quite. All I need to do for this simple example is, after 'using TaylorDiff',

julia> a=TaylorScalar{5}(0.0,1.0)
TaylorScalar{Float64, 5}(0.0, (1.0, 0.0, 0.0, 0.0, 0.0))
julia> (1+a/2)^9
TaylorScalar{Float64, 5}(1.0, (4.5, 9.0, 10.5, 7.875, 3.9375))
julia> ans*(1-a/2)
TaylorScalar{Float64, 5}(1.0, (4.0, 6.75, 6.0, 2.625, 0.0))

This sets up the 'variable' a to be special in that it carries all its derivatives up to and including 5th order, but no more, so the error here is O(a^6). Then subsequent expressions inherit all those derivatives and their respective computation. Very nice.
Three questions:

  • Although for physically complicated problems I will be happy with floating point coefficients, for many physically basic problems I want the coefficients as rational numbers, with numerator and denominator computed exactly to arbitrarily high precision. How can that be done?
  • Many times the coefficients are complicated algebraic expressions: in basic PDE examples the coefficients might each be a finite sum of sin(n*x). Does TaylorDiff handle those?
  • This example indicates human coding could be simple, but what about computational efficiency? How can I time a chunk of such code?

Thanks, Tony

  1. Try with a=TaylorScalar{5}(0//1,1//1)?

  2. Don't know; the easiest way to find out would be to try.

  3. The expansion happens at compile time, so depending on your use case it is either the most efficient way of doing it or one of the worst: if you want to reuse the same expansion over and over again on different values, then this is perfect (see the sketch below). If, on the other hand, you want to expand only once per distinct expression, then this might not be the best code performance-wise.
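To illustrate the favourable case in point 3, here is a sketch (untested, following the syntax above) where the expansion rules for f are compiled once and then reused at several base points:

using TaylorDiff
f(x) = (1 + x/2)^9
for a0 in (0.0, 0.5, 1.0)
    t = TaylorScalar{5}(a0, 1.0)   # expand f about a0, derivatives to 5th order
    println(f(t))
end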

I suggest you read a bit about Julia, maybe the getting started page of the manual? I hadn't seen that this was your first post, welcome by the way!

OK, this approach can indeed solve the simplest perturbation problems via the code below, although the equality test to terminate the iteration does not work: any suggestions?

# This is a Julia attempt to find perturbation solution to
# x^2+eps*x-1=0 by translating Algorithm 1.1 in Chapter 1 of
# my book Model Emergent Dynamics in Complex Systems. 
# Execute with include("quadc.jl")  AJR, 6/9/2025
using TaylorDiff
# OK up to order 31, errors O(eps^32); any higher overflows Int64
eps=TaylorScalar{5}(0//1,1//1)         # truncate power series
zero=0*eps; println("zero = ",zero)    # a zero power series (shadows Base.zero)
x=1          # set initial approx to root
for iter=1:6         # iterative refinement
    global x
    Res=x^2+eps*x-1  # current residual of quadratic
    println(iter," Res = ",Res)
    x=x-Res/2        # correct approx using residual
    if Res==zero; println(iter," test says Res=0??"); end
end      
println("\nresult x = ",x)

which produces the following correct iteration to the power series solution (apart from not terminating when Res=0)

julia> include("quadc.jl")
zero = TaylorScalar{Rational{Int64}, 5}(0//1, (0//1, 0//1, 0//1, 0//1, 0//1))
1 Res = TaylorScalar{Rational{Int64}, 5}(0//1, (1//1, 0//1, 0//1, 0//1, 0//1))
1 test says Res=0??
2 Res = TaylorScalar{Rational{Int64}, 5}(0//1, (0//1, -1//4, 0//1, 0//1, 0//1))
2 test says Res=0??
3 Res = TaylorScalar{Rational{Int64}, 5}(0//1, (0//1, 0//1, 0//1, 1//64, 0//1))
3 test says Res=0??
4 Res = TaylorScalar{Rational{Int64}, 5}(0//1, (0//1, 0//1, 0//1, 0//1, 0//1))
4 test says Res=0??
5 Res = TaylorScalar{Rational{Int64}, 5}(0//1, (0//1, 0//1, 0//1, 0//1, 0//1))
5 test says Res=0??
6 Res = TaylorScalar{Rational{Int64}, 5}(0//1, (0//1, 0//1, 0//1, 0//1, 0//1))
6 test says Res=0??

result x = TaylorScalar{Rational{Int64}, 5}(1//1, (-1//2, 1//8, 0//1, -1//128, 0//1))

On execution time, this simple problem is only suggestive, but here this approach in Julia is roughly three times slower than the computer algebra of Reduce. To compute to high order, O(eps^32), on my Mac: Reduce takes 2–4 ms, whereas Julia takes 9–11 ms (I specified 17 iterations for both).

Additionally, Julia takes an initial 6 secs (6000 ms) of compile time, whereas Reduce's parse and interpretation takes roughly 10 ms.

How did you measure the execution time? You should put your code in a function and use BenchmarkTools.jl.
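For example (a minimal sketch; the function g and its argument are arbitrary stand-ins for your code):

using BenchmarkTools
g(n) = sum(inv(k^2) for k in 1:n)   # stand-in for the code being timed
@btime g(10_000)                    # repeats the call many times, reports the fastest
@benchmark g(10_000)                # full statistics: samples, allocations, histogram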

To reduce the initial computation time you can create a custom system image with PackageCompiler.jl.
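Something like the following, going by the PackageCompiler documentation (untested here):

using PackageCompiler
# bake the packages into a system image so they need not recompile each session
create_sysimage([:TaylorDiff]; sysimage_path="taylor_sysimage.so")
# then start Julia with:  julia --sysimage taylor_sysimage.so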

Finally, given that Reduce has been in development since 1968, I would not expect the same level of maturity from Julia and its packages. But improving issues in Julia packages should be much easier.

I am not much concerned about execution time yet; after all, this problem is still the simplest example of any interest whatsoever. At least the times are as yet both 'in the same ballpark', albeit with this Julia approach seeming slower here.

Also, I wanted this trial time comparison to be as fair as possible: since I do not care to spend time coding in Reduce the equivalent of BenchmarkTools' @btime (whatever that does), I assume the standard Julia @time is more-or-less equivalent to Reduce's "showtime".

The outstanding question is, can anyone suggest an alternative Julia approach that should be as quick, both human and computer, for much more complicated algebraic perturbation problems?

Just to preface, I'm not familiar with the math, Reduce, or any possible Julia implementations, so I can't really offer any actionable solutions, just broad comments.

Julia has a reputation as a "fast language", but that's really only true relative to some languages with designs or implementations that obstruct optimizations. From what I could read, one Reduce implementation uses a Lisp that can be optimized and compiled to native code, which qualifies as a "fast language", and another implementation that doesn't compile everything to native code may only be up to a few times slower because it compiled thousands of performance-critical subroutines. Aggressive compiler optimizations also have the most benefit for hardware number crunching, somewhat less so for higher-level structures like symbolics.

Besides language design and application domain, the most important factor in performance is algorithm choice. As ufechner7 said, Reduce has been around for over half a century, so it would be expected that its authors made its algorithms very efficient in that time. That may be why someone made a Julia interface, Reduce.jl. There's no issue with reimplementing solutions in other languages, but language choice usually can't compensate for suboptimal algorithms, so interfaces to mature software are pretty common.

You don't give enough details on what implementations you're using or how you're benchmarking, so I can't comment on that exactly. I can at least explain some of the reasons slowing your Julia code down:

  1. Your code is in the global scope, which Julia's method-wise compiler won't optimize much. It could potentially be adding to the observed compile time for not much benefit. The individual method calls would be optimized, though. If you want more optimizations, and to open up the possibility of saving your compiled code (though that's enough work that I don't think it's worth doing until you learn enough of Julia), write your program in a method (see the sketch after this list).
  2. You're using many untyped global variables, which Julia's compiler can't infer and optimize; that'll add overhead to each call. Writing the program in a method helps avoid global variables, though methods can still suffer if they reference globals directly.
  3. Many I/O actions like println take up time; it's usually <1 ms, but that exceeds the cost of many method calls, so I/O is usually avoided unless necessary. Of course, step-wise reports could be what you need, in which case there's no avoiding it.
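To make points 1 and 2 concrete, here is the earlier quadratic iteration moved into a function (a sketch only: printing removed, the function name is mine), so nothing is an untyped global and the compiler can specialize the whole loop:

using TaylorDiff
function solve_quadratic(iters)
    eps = TaylorScalar{5}(0//1, 1//1)   # truncated power series in eps
    x = 1 + 0*eps                       # initial approximation to the root
    for _ in 1:iters
        Res = x^2 + eps*x - 1           # residual of the quadratic
        x = x - Res/2                   # correct the approximation
    end
    return x
end
solve_quadratic(6)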

It's unclear to me whether making these everyday performance adjustments would bring performance up to or beyond Reduce's, or whether the Julia ecosystem can be a worthy alternative. I'm sure you can figure that out in practice.

I gather from this that you don't have much experience with imperative programming. Reduce does have it, but evidently you can do a lot of work with a rule-based/declarative style by design. Here your for loop doesn't terminate at the equality test simply because you didn't make it so; the equality test runs and the loop just moves on to the next iteration. Typically, if you want a loop to end under a specific condition, you use a while loop specified with the opposite condition; if you want a hard limit on iterations, you can include that as a subcondition, e.g. while !(Res == zero) && iter <= 6. You could also terminate a while or for loop at any point in its body with break, so just adding that to your if statement would work (see the sketch below).
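A toy illustration of break (a plain Newton iteration on numbers, nothing to do with TaylorScalar):

function refine(x0)
    x = x0
    for iter in 1:6
        res = x^2 - 2              # stand-in residual
        if abs(res) < 1e-12        # for an exact series, the test would be Res == zero
            println("converged at iteration ", iter)
            break                  # leaves the for loop immediately
        end
        x -= res / (2x)            # Newton correction
    end
    return x
end
refine(1.5)                        # converges toward sqrt(2)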

First, put your code in backticks like ` `; the plain @ is attempting to tag a user, which is unfortunately succeeding. Second, @time may be equivalent, but the issue is that running very short programs once generally can't be measured accurately, due to timer resolution limits, no matter the language. Julia has BenchmarkTools.jl precisely to repeat programs and gather statistics for more accuracy. I think you might be fine with several milliseconds, but not if it gets <1 ms.

That could be a couple of problems. The one I can comment on is that I read Reduce natively uses arbitrary-precision integers (I'm not clear on the precision options of its integer or other numerical types). The Julia equivalent is BigInt, though I will warn that it wraps a C library without exposing its efficient mutation, so algorithms written with plain operations on usually-immutable types carry the overhead of instantiating BigInts a lot (for example, a=a+b instantiates a separate object and reassigns a to it, whereas GMP's mpz_add(a, a, b) can mutate a directly). Optimized algorithms would leverage efficient mutation when possible, and some arbitrary-precision implementations avoid the overhead of small operations by behaving as a hybrid of arbitrary-precision and fixed 32/64-bit types.
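A sketch of that difference in Julia; note that Base.GMP.MPZ is internal and unexported, so this illustrates the idea rather than a recommended public API:

a = big(2)^1000
b = big(3)^1000
a = a + b                     # allocates a fresh BigInt and rebinds a
Base.GMP.MPZ.add!(a, a, b)    # internal: mutates a in place, like C's mpz_add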


If Reduce can solve your problems, then try out Reduce.jl, which provides a nice Julia interface to Reduce.
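For example, going by the Reduce.jl README (untested here), rcall passes expressions or raw command strings through to a Reduce process:

using Reduce
rcall(:((1 + a/2)^9))     # send a Julia expression to Reduce, parse the result back
rcall("on div,revpri")    # raw Reduce commands can be sent as strings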

Did you look at Symbolics.jl (GitHub - JuliaSymbolics/Symbolics.jl: Symbolic programming for the next generation of numerical software)?

It will also mature over time.

Reduce is fantastic for computer algebra … except for problems involving algebra and numerical computations with large matrices and vectors, say over 1000x1000. The pointer to Reduce.jl looks promising.

Julia Symbolics is where I started. See the very first post and its issues with taylor(), and similar issues with the Symbolics perturbation tutorial symbolics.juliasymbolics.org/stable/tutorials/perturbation/

The AD approach suggested by lrnv works for the simplest perturbation problems. However, I do not see how to combine it with Symbolics. For example, in constructing perturbation solutions to Burgers' PDE I need to evaluate the RHS of the PDE for expressions of the following form, which fails with the AD approach.

using TaylorDiff
eps = TaylorScalar{5}(0//1,1//1)         # AD truncated power series in eps
using Symbolics
@variables a x
u = eps*a*sin(x)-eps^2*a^2*sin(2*x)/6    # AD fails with this symbolics??
# D = Differential(x)
# rhs = -u*D(u)+(1+eps)*u+D(D(u))

I get the mysterious error

ERROR: LoadError: MethodError: *(::TaylorScalar{Rational{Int64}, 5}, ::Num) is ambiguous.

You don't, independent packages don't inherently work together.

Well, most of the time independent Julia packages work together well. If not, it should not be too difficult to write some glue code.

Yes, but not inherently. There's subtyping and interface compliance to let packages compose by design even if not explicitly planned and tested, and TaylorDiff.TaylorScalar and Symbolics.taylor are not interchangeable because that didn't apply here. There wouldn't be any benefit to adding glue code to make TaylorScalar effectively convert to taylor to work in Symbolics; it would be like implementing addition of strings and numbers, "1"+1.

I'd suggest again to learn writing imperative code in methods (and perhaps more of Julia's performance tips), try Symbolics.jl and BigInt to see if that can serve as an alternative computer algebra system in both capability and performance, and try Reduce.jl for an interface between Julia and Reduce. Julia has its strengths, but it doesn't automatically overcome the limitations of other software. Reduce's struggle with matrices of >=10^6 arbitrary-precision, symbolic elements could easily just be a hardware limitation.


Thanks for the thoughts. Recall that Symbolics.jl is where I started with all this. What I need from Symbolics is a way to efficiently truncate power series on the fly.

Having to do such truncation via taylor() is far too slow: any other suggestions within Symbolics? For the simplest example, the following code in Julia Symbolics takes about one second to find the series solution x(a) to 20th order, whereas the corresponding code in Reduce takes about 2 ms to find it to 31st order.

using Symbolics
@variables a
oErr=20       # >20 fails on factorial(21)
x=1           # set initial approx to root
for iter=1:99 # iterative refinement
    global x,Res
    Res=taylor(x^2+a*x-1,a,0:oErr)  # current residual of quadratic
    x=expand(x-Res/2)               # correct approx using residual
    if isequal(Res,0); println("Success: iter=",iter); break; end
end      

Just create an issue at Symbolics.jl/issues.

OK, it might take a few years until it is fixed, but it is still important to track such issues. And perhaps AI can speed up the development of this package? Who knows.

Have you looked at the performance tips: Performance Tips · The Julia Language

In particular, I recommend the very first two tips. Any hope of decent performance is out the window if those are not followed.

Since over 99.9% of the execution time will be within the Symbolics functions taylor() and expand(), I can conceive of no revision of my outer code that would significantly reduce execution time.

Again, the need is for Symbolics to have an efficient mechanism for truncating asymptotic computations (efficient for both humans and computers).
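To make that request concrete, here is a minimal sketch (my own illustration, not Symbolics API) of the kind of mechanism meant: a fixed-length coefficient tuple whose arithmetic silently discards every term beyond the truncation order:

struct Trunc{N,T}              # c[k] holds the coefficient of a^(k-1)
    c::NTuple{N,T}
end
Base.:+(p::Trunc{N,T}, q::Trunc{N,T}) where {N,T} = Trunc{N,T}(p.c .+ q.c)
Base.:*(p::Trunc{N,T}, q::Trunc{N,T}) where {N,T} =
    Trunc{N,T}(ntuple(k -> sum(p.c[j] * q.c[k+1-j] for j in 1:k), N))

p = Trunc{5,Rational{Int}}((1//1, 1//2, 0//1, 0//1, 0//1))   # 1 + a/2, error O(a^5)
nine = reduce(*, ntuple(_ -> p, 9))                          # (1 + a/2)^9, truncated on the fly
# nine.c == (1//1, 9//2, 9//1, 21//2, 63//8), matching the Reduce output at the top

Symbolic coefficients would need T to be a symbolic type, which is where the hard work, and the request to the Symbolics developers, lies.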