Hessian is not jacobian-compose-gradient in Symbolics.jl

I wonder why that is (hence the title). My results are shown below. Did I use the API correctly?

julia> using Symbolics

julia> @variables p[1:10]
1-element Vector{Symbolics.Arr{Num, 1}}:
 p[1:10]

julia> f = bobj(e, p) # some scalar function
-1.5log(7 + p[4]) - 1.5log(5 + p[3]) - 1.5log(3 - p[9]) - 1.5log(3 - p[7]) - 1.5log(7 + p[1]) - 1.5log(8 - p[2]) - 1.5log(9 - p[4]) - 1.5log(9 - p[1]) - 1.5log(3 + p[8]) - 1.5log(6 + p[6]) - 1.5log(6 - p[3]) - 1.5log(6 - p[6]) - 1.5log(5 - p[8]) - 1.5log(5 + p[9]) - 1.5log(3 + p[7]) - 1.5log(5 + p[5]) - 1.5log(5 + p[2]) - 1.5log(5 - p[5]) - 1.5log(6 + p[10]) - 1.5log(6 - p[10]) - 0.08p[1]*p[6] + 0.09p[1]*p[7] + 0.07p[1]*p[8] - 0.02p[10]*p[3] - 0.07p[10]*p[4] - 0.02p[10]*p[5] - 0.03p[2]*p[7] - 0.07p[2]*p[8] - 0.04p[2]*p[9] + 0.02p[3]*p[8] + 0.06p[3]*p[9] + 0.08p[4]*p[9]

julia> Symbolics.jacobian(Symbolics.gradient(f, p), p)
10×10 OffsetArray(::Matrix{Num}, 1:10, 1:10) with eltype Num with indices 1:10×1:10:
 1.5 / ((7 + p[1])^2) + 1.5 / ((9 - p[1])^2)                                            0  …                                            0                                              0
                                           0  1.5 / ((8 - p[2])^2) + 1.5 / ((5 + p[2])^2)                                           -0.04                                              0
                                           0                                            0                                            0.06                                          -0.02
                                           0                                            0                                            0.08                                          -0.07
                                           0                                            0                                               0                                          -0.02
                                       -0.08                                            0  …                                            0                                              0
                                        0.09                                        -0.03                                               0                                              0
                                        0.07                                        -0.07                                               0                                              0
                                           0                                        -0.04     1.5 / ((3 - p[9])^2) + 1.5 / ((5 + p[9])^2)                                              0
                                           0                                            0                                               0  1.5 / ((6 + p[10])^2) + 1.5 / ((6 - p[10])^2)

julia> Symbolics.hessian(f, p)
10×10 Matrix{Num}:
 0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0
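
Since bobj is somewhat long, here is a smaller stand-in (a hypothetical toy function, not the original) that should hit the same symbolic-array code path:

julia> @variables q[1:2]
1-element Vector{Symbolics.Arr{Num, 1}}:
 q[1:2]

julia> g = q[1]^2 + 3q[1]*q[2];

julia> Symbolics.hessian(g, q)  # all zeros again on this version

julia> Symbolics.jacobian(Symbolics.gradient(g, q), q)  # correct: [2 3; 3 0]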

(If you need the source code, I’ll append it then.) Edit: it was already in the post (though a bit lengthy).

The source code would be helpful indeed.

I think this is a bug with symbolic arrays. Check Symbolics.hessian(f, collect(p)): since p is not an array of symbolic values but a symbolic array, it might be hitting the wrong dispatch.
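
For illustration, the workaround on the toy above (and analogously Symbolics.hessian(f, collect(p)) for the original):

julia> Symbolics.hessian(g, collect(q))
2×2 Matrix{Num}:
 2  3
 3  0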

It works. I’ve opened an issue.

This is unreasonable. Since the package invents a specialized vector type, I guess there are motivations for it (speed, etc.), but then the specialized method should have been at least as reliable. Otherwise there is no need to invent new types of containers.

As a comparison:

julia> using JuMP

julia> m = JuMP.Model(); @variable(m, x[1:3]) # the default container
3-element Vector{VariableRef}:
 x[1]
 x[2]
 x[3]

julia> exit()
PS K:\uc24> julia

julia> using Symbolics

julia> @variables x[1:3]
1-element Vector{Symbolics.Arr{Num, 1}}:
 x[1:3]

julia> typeof(x) # the special container
Symbolics.Arr{Num, 1}

julia> collect(x)
3-element Vector{Num}:
 x[1]
 x[2]
 x[3]

Yes, it has an O(1) representation. But there isn’t necessarily an O(1) representation of the Hessian, so the safe thing to do is just unroll here. If we end up with block-vector things and someone wants to specialize it to improve performance, then we can get to that. For now, “Fix and better test derivatives w.r.t. array variables” (JuliaSymbolics/Symbolics.jl PR #1536) should fix this case. For the record, I put some tests on gradient and jacobian as well and those seemed to work, so it’s just hessian that got missed for some reason; the PR now tests this more comprehensively.
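
To make “unroll” concrete, a quick sketch (Symbolics.scalarize is the documented way to expand a symbolic array into its scalar elements; collect does the same here):

julia> xs = Symbolics.scalarize(x)
3-element Vector{Num}:
 x[1]
 x[2]
 x[3]

julia> Symbolics.hessian(sum(xs .^ 3), xs)  # diagonal Matrix{Num} with entries 6x[i]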
