Improving performance of calculating Symbolics Jacobians?

I am trying to calculate the observability matrix of a nonlinear system. To do this I need to compute increasingly complicated Lie derivatives, which requires calculating Jacobians recursively.

Very quickly, Symbolics seems to develop huge memory issues and bogs down. I tried multithreading the calculation, but now that I've narrowed the problem down to excessive memory usage, it's clear to me that I need to solve this a different way.

When running the code below, @time shows huge memory allocations in both the Jacobian calculation and the substitution step:

4.346713 seconds (28.54 M allocations: 1.347 GiB, 6.56% gc time, 17.72% compilation time)
2.319133 seconds (12.11 M allocations: 400.201 MiB, 2.95% gc time, 4.24% compilation time)
Iteration 6 took 6.666589s. Rank: 7
33.084276 seconds (239.31 M allocations: 10.981 GiB, 4.90% gc time, 6.32% compilation time)
20.436380 seconds (107.05 M allocations: 3.397 GiB, 3.12% gc time, 0.24% compilation time)
Iteration 7 took 53.521444s. Rank: 7

If anyone has any tips on how to get this to work, please let me know. I am recreating this problem in Matlab to see how the computation fares in that language. I've shared a minimal example below. Ideally, I need to do this for systems even bigger than 7 states.

```julia
using Symbolics, LinearAlgebra, Printf, TickTock

function compute(f, h, X, U, value_dict)
    n_states = length(X)

    # First Jacobian of the measurement equations h
    Hx = Symbolics.jacobian(h, X)
    Ox_symb = Hx

    # Check whether the initial observability matrix already has full rank
    Ox = Symbolics.value.(substitute.(Ox_symb, (value_dict,)))
    Ox_rank = LinearAlgebra.rank(Ox)
    if Ox_rank >= n_states
        @printf "Initial observability matrix is fully observable\n"
        return Ox_rank
    end

    # Begin building later rows
    LfHx = simplify(Hx * f)
    for i in 2:n_states
        tick()
        @time LfHx = Symbolics.jacobian(LfHx, X)
        Ox_symb = vcat(Ox_symb, LfHx)

        @time Ox = vcat(Ox, Symbolics.value.(substitute.(LfHx, (value_dict,))))
        Ox_rank = LinearAlgebra.rank(Ox)

        # Don't check this for now, to show how bad things get
        # if Ox_rank >= n_states
        #     @printf "Fully observable on iteration: %2i \n" i
        #     return Ox_rank
        # end

        LfHx = LfHx * f

        @printf "Iteration %2i took %2fs. Rank: %2i \n" i tok() Ox_rank
    end

    # If we make it out of the loop, Ox_rank < n_states
    @printf "Not fully observable. Final rank: %2i \n" Ox_rank

    return Ox_rank
end

@variables Ax Az θ q Vx Vz Ze Wx Wz Wq λx λz λq
g = 9.81
# Augmented state vector
X = [Vx, Vz, θ, Ze, λx, λz, λq]

# Transition equations
V̇x = Ax - λx - g * sin(θ) - (q - λq) * Vz
V̇z = Az - λz + g * cos(θ) + (q - λq) * Vx
θ̇ = q - λq
Że = -Vx * sin(θ) + Vz * cos(θ)
λ̇x = 0
λ̇z = 0
λ̇q = 0
f = [V̇x; V̇z; θ̇; Że; λ̇x; λ̇z; λ̇q]

# Augmented state observation equation Zm = h + V
Vm = sqrt(Vx^2 + Vz^2)
θm = θ
Δhm = -Ze
h = [Vm; θm; Δhm]

values = Dict(Vx => 100, Vz => 20, θ => 0.01, Ze => 1240, Ax => 0, Az => 0, q => 0, λx => 0, λz => 0, λq => 0)
U = [Ax, Az, q]
compute(f, h, X, U, values)
```

Do you need a symbolic Jacobian at all costs, or would a numerical Jacobian do the trick?

If you need symbolics, you could try

I only need to calculate the rank of the final matrix.

So, I can probably get away with numerical Jacobians. I’m not sure how that would work recursively, though.

I don’t fully understand your code, so I can’t clearly see the recursive aspect.

- You have a nonlinear system function f with states X.
- You have a nonlinear measurement function h.
- First, you calculate the Jacobian of h w.r.t. X. This gives you the first Lie derivative term, LfHx.
- The second Lie derivative is the Jacobian of (the first Lie derivative * f). Or, put another way:

LfHx2 = jacobian(LfHx * f, X) = jacobian(jacobian(h, X) * f, X)
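In code, the recursion looks like this on a made-up toy system (the dynamics and measurement here are arbitrary, just to illustrate the pattern):

```julia
using Symbolics

@variables x1 x2
X = [x1, x2]
f = [x2, -sin(x1)]      # toy dynamics (pendulum-like, for illustration only)
h = [x1]                # toy measurement

Hx    = Symbolics.jacobian(h, X)          # Jacobian of h
LfHx  = Symbolics.jacobian(Hx * f, X)     # Jacobian of the 1st Lie derivative
Lf2Hx = Symbolics.jacobian(LfHx * f, X)   # Jacobian of the 2nd Lie derivative
```

Each level multiplies the previous Jacobian by f and differentiates again, which is where the expression swell comes from.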

My guess is that algorithmic differentiation would be better behaved than symbolic differentiation, thanks to the lack of expression swell. I would recommend trying ForwardDiff.
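In case it helps, a rough sketch of what that could look like. This is my own minimal reconstruction, not tested against your full problem: `f`, `h`, and `obs_rank` are names I made up, the inputs (Ax, Az, q) are frozen at the same values as in your `value_dict`, and the loop stops as soon as full rank is reached, which keeps the nesting depth of the dual numbers small:

```julia
using ForwardDiff, LinearAlgebra

g = 9.81

# Plain numeric versions of the symbolic f and h above,
# with the inputs fixed at the evaluation point (Ax = Az = q = 0).
function f(x)
    Vx, Vz, θ, Ze, λx, λz, λq = x
    Ax, Az, q = 0.0, 0.0, 0.0
    [Ax - λx - g * sin(θ) - (q - λq) * Vz,
     Az - λz + g * cos(θ) + (q - λq) * Vx,
     q - λq,
     -Vx * sin(θ) + Vz * cos(θ),
     0.0, 0.0, 0.0]
end

h(x) = [sqrt(x[1]^2 + x[2]^2), x[3], -x[4]]

# Stack the Jacobians of the iterated Lie derivatives at x0 and stop
# as soon as the observability matrix reaches full rank.
function obs_rank(f, h, x0)
    n = length(x0)
    ℓ = h                                  # current Lie derivative, starting at h
    Ox = ForwardDiff.jacobian(ℓ, x0)
    r = rank(Ox)
    k = 1
    while r < n && k < n
        ℓprev = ℓ                          # capture the current level for the closure
        ℓ = x -> ForwardDiff.jacobian(ℓprev, x) * f(x)   # next Lie derivative
        Ox = vcat(Ox, ForwardDiff.jacobian(ℓ, x0))       # nested-dual differentiation
        r = rank(Ox)
        k += 1
    end
    return r
end

x0 = [100.0, 20.0, 0.01, 1240.0, 0.0, 0.0, 0.0]   # same point as value_dict
obs_rank(f, h, x0)
```

Note that each extra Lie derivative adds one level of nested duals, so the cost still grows quickly with depth — checking the rank and stopping early matters here.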


Yeah, I think I'd agree. It gets really out of hand on the real system I'm analyzing, which has 18 states…

For reference, the Matlab Symbolic Math Toolbox has pretty similar performance:

Loop 5 took 1.14 seconds to complete
Loop 6 took 6.84 seconds to complete
Loop 7 took 46.23 seconds to complete

I will give ForwardDiff a shot, thanks!

Yes, this is a better job for AD.
