Try a Symbolics array instead of Symbolics.variables, I think: @variables a[0:N]. The individual variables put a lot of stress on the compiler, which might be what you're seeing.
I'm also running Julia 1.9.3 and the latest version of Symbolics (5.8.0), but your code crashes on my machine. Is there missing code that makes this work?
This is to make sure the matrix-vector multiplication follows the usual rules rather than going through FFTW, which I guess is not supported for symbolic operations.
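To illustrate the point, here is a minimal sketch of the "usual rules" product: a direct O(N²) matrix-vector multiply for a symmetric Toeplitz matrix given only its first column. Because it works element by element, it also goes through with symbolic entries, which an FFT-based product would not. The function name `toeplitz_matvec` is hypothetical, not part of ToeplitzMatrices.jl.

```julia
# Direct O(N^2) product y = A*x where A is the symmetric Toeplitz matrix
# whose first column is `c`, i.e. A[i, j] == c[abs(i - j) + 1].
# Works for any element type (Float64, Num, ...) since it only uses * and +.
function toeplitz_matvec(c::AbstractVector, x::AbstractVector)
    n = length(c)
    @assert length(x) == n
    y = similar(x)
    for i in 1:n
        s = zero(eltype(x))
        for j in 1:n
            s += c[abs(i - j) + 1] * x[j]
        end
        y[i] = s
    end
    return y
end
```

For example, with `c = [1.0, 2.0, 3.0]` the matrix is `[1 2 3; 2 1 2; 3 2 1]`, so multiplying by a vector of ones sums each row.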
help?> Symbolics.variables
variables(name::Symbol, indices...)
Create a multi-dimensional array of individual variables named with subscript notation. Use @variables instead to create
symbolic array variables (as opposed to array of variables). See variable to create one variable with subscripts.
julia> Symbolics.variables(:x, 1:3, 3:6)
3×4 Matrix{Num}:
 x₁ˏ₃  x₁ˏ₄  x₁ˏ₅  x₁ˏ₆
 x₂ˏ₃  x₂ˏ₄  x₂ˏ₅  x₂ˏ₆
 x₃ˏ₃  x₃ˏ₄  x₃ˏ₅  x₃ˏ₆
So Symbolics.variables creates an array of symbols, whereas @Salmon's suggestion creates a symbolic array (a symbol that represents the whole array and can be indexed):
julia> @variables a[0:10]
1-element Vector{Symbolics.Arr{Num, 1}}:
a[1:11]
# It can (but probably shouldn't) be turned into individual symbols as well:
julia> collect(a)
11-element Vector{Num}:
a[1]
a[2]
a[3]
a[4]
a[5]
a[6]
a[7]
a[8]
a[9]
a[10]
a[11]
Haven't tried using the array for your symbolic calculation, but hope it helps.
Thanks for replying @Sevi. But at some point Symbolics.scalarize does come into the picture (e.g. in the call to Symbolics.jacobian, or when creating F_expr = A*b). So if the delay is due to scalarize, I still don't understand how to bypass it.
using BenchmarkTools
using FastDifferentiation
using ToeplitzMatrices

function J3()
    N = 280
    a = make_variables(:a, N + 1)
    A = SymmetricToeplitz(a)
    b = [1; (1:N) .* a[2:end]]
    F_expr = A * b
    println(size(F_expr))

    println("Symbolic Jacobian time")
    @time jacob = jacobian(FastDifferentiation.Node.(F_expr), a)
    println("\n")

    println("make_function time")
    @time tfunc! = make_function(jacob, a, in_place=true)
    println("\n")

    tmp = similar(jacob, Float64)
    input = rand(N + 1)
    println("executable compilation time")
    @time tfunc!(tmp, input)
    println("\n")

    println("executable evaluation time")
    @benchmark $tfunc!($tmp, $input)
end
Here's the output from running J3 on my laptop. The compilation time is almost entirely due to LLVM. It runs very slowly when programs get big.
julia> J3()
(281,)
Symbolic Jacobian time
162.030855 seconds (1.08 G allocations: 55.232 GiB, 7.15% gc time, 0.25% compilation time)
make_function time
1.260080 seconds (8.69 M allocations: 415.564 MiB, 9.07% gc time)
executable compilation time
369.170204 seconds (66.84 M allocations: 2.926 GiB, 0.21% gc time, 100.00% compilation time)
executable evaluation time
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
Range (min … max):  70.800 μs … 255.600 μs  | GC (min … max): 0.00% … 0.00%
Time  (median):     71.600 μs               | GC (median):    0.00%
Time  (mean ± σ):   72.505 μs ±   3.619 μs  | GC (mean ± σ):  0.00% ± 0.00%
[histogram omitted]
70.8 μs          Histogram: log(frequency) by time          88.5 μs <
Memory estimate: 0 bytes, allocs estimate: 0.
Computing the symbolic derivative takes a while (it's a large expression) and compiling the derivative into an efficient executable takes longer, entirely because of LLVM.
But, evaluation time is reasonably fast. If you only need to do the symbolic computation once and then evaluate the derivative many times then FastDifferentiation.jl might work for you.
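As a side note, the J3 result can be sanity-checked without any packages. Below is a hypothetical plain-Julia version of the same F (with the symmetric Toeplitz product written out by hand) plus a central-difference Jacobian; both names (`F_plain`, `fd_jacobian`) are mine, not from FastDifferentiation.jl. Comparing this against the compiled `tfunc!` output is a cheap correctness check.

```julia
# Plain-Julia version of F(a) = SymmetricToeplitz(a) * [1; (1:N) .* a[2:end]],
# with A[i, j] == a[abs(i - j) + 1] expanded by hand (no ToeplitzMatrices needed).
function F_plain(a)
    N = length(a) - 1
    b = [1.0; (1:N) .* a[2:end]]
    return [sum(a[abs(i - j) + 1] * b[j] for j in 1:N+1) for i in 1:N+1]
end

# Central-difference Jacobian, one column per input component.
function fd_jacobian(f, x; h=1e-6)
    y = f(x)
    J = zeros(length(y), length(x))
    for k in eachindex(x)
        xp = copy(x); xp[k] += h
        xm = copy(x); xm[k] -= h
        J[:, k] = (f(xp) - f(xm)) / (2h)
    end
    return J
end
```

For a small case like `a = [1.0, 2.0, 3.0]` (N = 2), one can verify F and a few Jacobian entries by hand: F₁ = a₁ + a₂² + 2a₃², so ∂F₁/∂a₁ = 1 and ∂F₁/∂a₂ = 2a₂.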
This is impressive (160 secs) @brianguenter. I will check this, thanks!
Is the calculation of the symbolic Jacobian parallelized? I checked in Symbolics.jl but it doesn't seem so! Is it even possible to parallelize it, though?
The calculation of the Jacobian is not parallelized in FastDifferentiation.jl. It would be possible and I might do it in a future version. However, there are several higher priority work items that must be done first so it will not be done soon.