Help accelerating construction of a symbolic Jacobian?

Can anyone help me accelerate/optimize this symbolic computation of a Jacobian?
I am on the latest version of Symbolics and Julia 1.9.3.

using Symbolics
using ToeplitzMatrices

N= 280;

a  = Symbolics.variables(:a, 0:N);

A = SymmetricToeplitz(a);

b = [1; (1:N).*a[2:end]];

F_expr = A*b;

julia> @time jacob = Symbolics.jacobian(F_expr, a);

208.583873 seconds (1.10 G allocations: 63.029 GiB, 8.76% gc time)

Try a symbolic array instead of Symbolics.variables, I think:
@variables a[0:N]. Creating N+1 individual variables puts a lot of stress on the compiler, which might be what you’re seeing.
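
For concreteness, a minimal sketch of that suggestion applied to your setup (untested here; only the variable definition changes, and collect gives the individual entries where scalars are needed):

using Symbolics

N = 280
@variables a[0:N]      # one symbolic array variable instead of N+1 scalar variables
a_sc = collect(a)      # Vector{Num} of the individual entries, for places that need scalars
b = [1; (1:N) .* a_sc[2:end]]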

I’m also running Julia 1.9.3 and the latest version of Symbolics (5.8.0). But your code crashes on my machine. Is there missing code that makes this work?

julia> using Symbolics

julia> using ToeplitzMatrices

julia> 

julia> N= 280;

julia> 

julia> a  = Symbolics.variables(:a, 0:N);

julia> 

julia> A = SymmetricToeplitz(a);

julia> 

julia> b = [1; (1:N).*a[2:end]];

julia> 

julia> F_expr = A*b;
ERROR: MethodError: no method matching plan_fft!(::Vector{Complex{Num}}, ::UnitRange{Int64})

Closest candidates are:
  plan_fft!(::StridedArray{T, N}, ::Any; flags, timelimit, num_threads) where {T<:Union{ComplexF32, ComplexF64}, N}
   @ FFTW C:\Users\seatt\.julia\packages\FFTW\HfEjB\src\fft.jl:786
  plan_fft!(::AbstractArray; kws...)
   @ AbstractFFTs C:\Users\seatt\.julia\packages\AbstractFFTs\4iQz5\src\definitions.jl:68

Stacktrace:
 [1] plan_fft!(x::Vector{Complex{Num}}; kws::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
   @ AbstractFFTs C:\Users\seatt\.julia\packages\AbstractFFTs\4iQz5\src\definitions.jl:68
 [2] plan_fft!(x::Vector{Complex{Num}})
   @ AbstractFFTs C:\Users\seatt\.julia\packages\AbstractFFTs\4iQz5\src\definitions.jl:68
 [3] factorize(A::SymmetricToeplitz{Num, Vector{Num}})
   @ ToeplitzMatrices C:\Users\seatt\.julia\packages\ToeplitzMatrices\uvasQ\src\linearalgebra.jl:148
 [4] mul!(y::Vector{Num}, A::SymmetricToeplitz{Num, Vector{Num}}, x::Vector{Num}, Ξ±::Bool, Ξ²::Bool)
   @ ToeplitzMatrices C:\Users\seatt\.julia\packages\ToeplitzMatrices\uvasQ\src\linearalgebra.jl:37
 [5] mul!
   @ C:\Users\seatt\.julia\juliaup\julia-1.9.3+0.x64.w64.mingw32\share\julia\stdlib\v1.9\LinearAlgebra\src\matmul.jl:276 [inlined]
 [6] *(A::SymmetricToeplitz{Num, Vector{Num}}, x::Vector{Num})
   @ LinearAlgebra C:\Users\seatt\.julia\juliaup\julia-1.9.3+0.x64.w64.mingw32\share\julia\stdlib\v1.9\LinearAlgebra\src\matmul.jl:56
 [7] top-level scope
   @ REPL[7]:1

@brianguenter
Right, sorry, I forgot to mention that I commented out lines 20 and 35-38 in ToeplitzMatrices.jl: https://github.com/JuliaLinearAlgebra/ToeplitzMatrices.jl/blob/7288838f517516fd63ae5c42b23c01d894ce7b47/src/linearalgebra.jl#L20

This is to make sure matrix-vector multiplication follows the usual rules rather than going through FFTW, which I guess is not supported for symbolic operations.
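
A less invasive alternative (just a sketch, not timed on this problem) would be to materialize the Toeplitz matrix as a dense Matrix{Num} before multiplying, so the generic matrix-vector product is used and the package does not need to be patched:

using Symbolics
using ToeplitzMatrices

N = 280
a = Symbolics.variables(:a, 0:N)

A = Matrix(SymmetricToeplitz(a))   # dense Matrix{Num}: generic matmul, no FFT plan needed
b = [1; (1:N) .* a[2:end]]
F_expr = A * b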

@Salmon Can you please explain the difference between @variables and Symbolics.variables? And how do I define a symbolic array?

This is the explanation from the docstring:

help?> Symbolics.variables
  variables(name::Symbol, indices...)

  Create a multi-dimensional array of individual variables named with subscript notation. Use @variables instead to create
  symbolic array variables (as opposed to array of variables). See variable to create one variable with subscripts.

  julia> Symbolics.variables(:x, 1:3, 3:6)
  3Γ—4 Matrix{Num}:
   x₁ˏ₃  x₁ˏ₄  x₁ˏ₅  x₁ˏ₆
   x₂ˏ₃  x₂ˏ₄  x₂ˏ₅  x₂ˏ₆
   x₃ˏ₃  x₃ˏ₄  x₃ˏ₅  x₃ˏ₆

So Symbolics.variables creates an array of scalar symbolic variables, whereas @Salmon’s suggestion creates a symbolic array (a single symbol that represents the whole array and can be indexed):

julia> @variables a[0:10]
1-element Vector{Symbolics.Arr{Num, 1}}:
 a[1:11]

# It can (but probably shouldn't) be turned into individual symbols as well:
julia> collect(a)
11-element Vector{Num}:
  a[1]
  a[2]
  a[3]
  a[4]
  a[5]
  a[6]
  a[7]
  a[8]
  a[9]
 a[10]
 a[11]

Haven’t tried using the array for your symbolic calculation, but hope it helps.

PS: Here is some more documentation on how to use the array variables.
https://symbolics.juliasymbolics.org/dev/manual/arrays/


Thanks for replying, @Sevi. But at some point Symbolics.scalarize does come into the picture (for example inside the call to Symbolics.jacobian, or when creating F_expr = A*b). So if the delay is due to scalarization, I still don’t understand how to bypass it.
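
To illustrate where that happens, here is a toy sketch (made-up size, not the real problem) showing the array variable being scalarized before the Jacobian is taken:

using Symbolics

@variables a[1:4]

expr = Symbolics.scalarize(2 .* a)        # the array expression is expanded into a Vector{Num}
J = Symbolics.jacobian(expr, collect(a))  # jacobian then differentiates the scalar expressions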

You can differentiate this expression using FastDifferentiation.jl. Here’s the code:

using BenchmarkTools
using FastDifferentiation
using ToeplitzMatrices

function J3()
    N = 280

    a = make_variables(:a, N + 1)

    A = SymmetricToeplitz(a)

    b = [1; (1:N) .* a[2:end]]

    F_expr = A * b

    println(size(F_expr))

    println("Symbolic Jacobian time")
    @time jacob = jacobian(FastDifferentiation.Node.(F_expr), a)
    println("\n")
    println("make_function time")
    @time tfunc! = make_function(jacob, a, in_place=true)
    println("\n")

    tmp = similar(jacob, Float64)
    input = rand(N + 1)
    println("executable compilation time")
    @time tfunc!(tmp, input)
    println("\n")
    println("executable evaluation time")
    @benchmark $tfunc!($tmp, $input)
end

Here’s the output from running J3 on my laptop. The compilation time is almost entirely due to LLVM, which runs very slowly when programs get this big.

julia> J3()
(281,)
Symbolic Jacobian time
162.030855 seconds (1.08 G allocations: 55.232 GiB, 7.15% gc time, 0.25% compilation time)


make_function time
  1.260080 seconds (8.69 M allocations: 415.564 MiB, 9.07% gc time)


executable compilation time
369.170204 seconds (66.84 M allocations: 2.926 GiB, 0.21% gc time, 100.00% compilation time)


executable evaluation time
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
 Range (min … max):  70.800 ΞΌs … 255.600 ΞΌs  β”Š GC (min … max): 0.00% … 0.00%
 Time  (median):     71.600 ΞΌs               β”Š GC (median):    0.00%
 Time  (mean Β± Οƒ):   72.505 ΞΌs Β±   3.619 ΞΌs  β”Š GC (mean Β± Οƒ):  0.00% Β± 0.00%

   β–†β–ˆβ–…  ▅▆▅▃▁▁▁▁                                               β–‚
  β–ˆβ–ˆβ–ˆβ–ˆβ–†β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‡β–‡β–†β–†β–„β–β–„β–„β–„β–β–β–ƒβ–ƒβ–ƒβ–ƒβ–ƒβ–…β–‡β–†β–‡β–†β–‡β–†β–†β–†β–…β–†β–…β–…β–ƒβ–„β–β–ƒβ–…β–„β–„β–„β–„β–„β–„β–ƒβ–…β–…β–†β–† β–ˆ
  70.8 ΞΌs       Histogram: log(frequency) by time      88.5 ΞΌs <

 Memory estimate: 0 bytes, allocs estimate: 0.

Computing the symbolic derivative takes a while (it’s a large expression) and compiling the derivative into an efficient executable takes longer, entirely because of LLVM.

But, evaluation time is reasonably fast. If you only need to do the symbolic computation once and then evaluate the derivative many times then FastDifferentiation.jl might work for you.

This is impressive (160 secs), @brianguenter. I will check this out, thanks!

Is the calculation of the symbolic Jacobian parallelised? I checked in Symbolics.jl but it doesn’t seem so. Is it even possible to parallelise it, though?

The calculation of the Jacobian is not parallelized in FastDifferentiation.jl. It would be possible, and I might do it in a future version. However, there are several higher-priority work items that must be done first, so it will not happen soon.
