I finally found time to benchmark Symbolics.jl against SymEngine.jl. Here are the results:
Am I doing anything wrong with how I run them? So far, on my computer, Symbolics.jl seems slower. I want to make sure that I am benchmarking it correctly. Thanks for any tips. I tried both Julia 1.8 and 1.9.3, and both global scope and inside a function.
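Roughly, the benchmark is an expand of a 20th power, along these lines (a minimal sketch using BenchmarkTools and the (1 + x + y + z)^20 expression that comes up later in the thread, not the exact script behind my numbers):

using BenchmarkTools

# Symbolics.jl (assumed setup)
using Symbolics
@variables x y z
ex = (1 + x + y + z)^20            # stays unexpanded until expand is called
@btime Symbolics.expand($ex)

# SymEngine.jl (assumed setup)
using SymEngine
@vars a b c
se = (1 + a + b + c)^20
@btime SymEngine.expand($se)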
Hi! This is good to know. Is expand the only thing you’re benchmarking?
Most of expand's time is spent in DynamicPolynomials.jl; there are faster alternatives we could use, though. AbstractAlgebra.jl could be faster, but it needs more conversion code. Cc @blegat
IIRC AbstractAlgebra uses a representation with a fixed number of possible variables; if I'm right, it maps each variable to an integer and represents the monomials as a matrix of exponents.
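As a toy illustration of that kind of layout (a sketch of the general idea only, not AbstractAlgebra's actual internals or storage orientation):

# Variables x, y, z are just the indices 1, 2, 3; each monomial is one
# column of an exponent matrix, with a parallel coefficient vector.
vars   = ["x", "y", "z"]
exps   = [2 0 0;        # exponent of x in each monomial
          1 1 0;        # exponent of y
          0 1 3]        # exponent of z
coeffs = [5, -1, 7]     # together: 5x²y - yz + 7z³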
julia> using AbstractAlgebra
julia> R, (x, y, z) = polynomial_ring(ZZ, ["x", "y", "z"])
(Multivariate polynomial ring in 3 variables over integers, AbstractAlgebra.Generic.MPoly{BigInt}[x, y, z])
julia> f = x + y + z + 1
x + y + z + 1
julia> p = f^20;
julia> @time q = p*(p+1);
0.510858 seconds (15.66 M allocations: 294.774 MiB, 28.83% gc time)
# Same thing with DynamicPolynomials:
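# (setup assumed here; the original timing does not show it)
julia> using DynamicPolynomials
julia> @polyvar x y z
julia> f = x + y + z + 1;
julia> p = f^20;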
julia> @time p*(p+1);
16.334984 seconds (3.14 M allocations: 335.826 MiB, 0.54% gc time)
julia> p.x
1771-element MonomialVector{DynamicPolynomials.Commutative{DynamicPolynomials.CreationOrder}, Graded{LexOrder}}:
1
z
y
x
z²
yz
y²
xz
xy
x²
⋮
x¹⁷yz²
x¹⁷y²z
x¹⁷y³
# vs. the exponent-matrix representation in AbstractAlgebra described above
Oh not bad, TypedPolynomials beats AbstractAlgebra:
using TypedPolynomials
@polyvar x y z
f = x + y + z + 1
p = f^20;
@time q = p*(p+1);
0.279255 seconds (23 allocations: 242.545 MiB)
We have to be very careful about compile time in Symbolics, though, so it might make sense to allow switching between the two, with TypedPolynomials as the default.
We can do other benchmarks too; we have quite a few here: https://github.com/symengine/symengine/tree/master/benchmarks. I just started with the one above. If you agree that the timings I got are accurate, we can move on to the other benchmarks. I just wasn't sure whether or not I was doing something wrong.
We could switch to a matrix representation for DynamicPolynomials; it shouldn't be too hard.
However, I think I would use the transpose of what AbstractAlgebra does, so that the exponents of the same monomial are contiguous in memory.
One thing that would be annoying is that you can't push a new term at the end anymore, since I never found a way to append a new column in place to a Julia Matrix. Do you know a way?
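The only workaround I can think of is to back the matrix with a flat Vector, grow that with append!, and reshape into a Matrix when one is actually needed; a rough Base-Julia sketch (not DynamicPolynomials internals):

nvars = 3
exps  = Int[]                 # one column (= one term) every nvars entries
append!(exps, [2, 0, 1])      # append the term x²z
append!(exps, [0, 1, 1])      # append the term yz
E = reshape(exps, nvars, :)   # 3×2 exponent matrix sharing exps' memory
# Reshape again after further growth, since E itself has a fixed size.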
This is actually very puzzling. I would guess SymEngine's manual memory management is a big contributor to its performance, but that can't account for all of the 100x slowdown in Symbolics/DynamicPolynomials/TypedPolynomials. To benchmark only term construction rather than the expansion of a 20th power, I ran the following benchmark:
julia> p(acc, x, y, z) = acc * (x + y + z + 1)
p (generic function with 1 method)
julia> q(x, y, z) = (acc=1; for i=1:20; acc = expand(p(acc, x, y, z)); end; acc)
q (generic function with 1 method)
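Timed roughly like this (a sketch, assuming BenchmarkTools; the Symbolics and SymEngine runs are done in separate sessions so that expand resolves to each package's own function):

using BenchmarkTools

# Symbolics.jl session (assumed setup)
using Symbolics
@variables x y z
p(acc, x, y, z) = acc * (x + y + z + 1)
q(x, y, z) = (acc=1; for i=1:20; acc = expand(p(acc, x, y, z)); end; acc)
@btime q($x, $y, $z)

# SymEngine.jl session (assumed setup): same p and q definitions, but with
#   using SymEngine
#   @vars x y z
# so that expand resolves to SymEngine.expand.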
Symbolics (using DynamicPolynomials) is 100x slower than SymEngine, and TypedPolynomials is 150x slower.