I am looking to compute the Hessian of a vector-valued in-place function that looks like
```julia
function eom!(dx, x, p, t)
    dx[1:3] = x[4:6]
    dx[4:6] = -p[1] / norm(x[1:3])^3 * x[1:3]
end
```
using ForwardDiff.jl, but I am struggling to find the correct way to implement a `vector_hessian()` function, as suggested by the ForwardDiff.jl tutorial page:

> you could write a version of `vector_hessian` which supports functions of the form `f!(y, x)`, or perhaps an in-place Jacobian with `ForwardDiff.jacobian!`.
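If I remember correctly, the out-of-place `vector_hessian` shown in that part of the docs is along these lines (reproduced here for context):

```julia
using ForwardDiff

# Out-of-place version: f maps a vector to a vector and allocates its own output.
function vector_hessian(f, x)
    n = length(x)
    out = ForwardDiff.jacobian(x -> ForwardDiff.jacobian(f, x), x)
    return reshape(out, n, n, n)
end
```

but it is not obvious to me how to adapt this to the in-place `f!(y, x)` signature.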
So far, to compute the Jacobian and Hessian, I have something like
```julia
using LinearAlgebra
using ForwardDiff

p = [1.0]

function eom!(y, x, p, t)
    y[1:3] = x[4:6]
    y[4:6] = -p[1] / norm(x[1:3])^3 * x[1:3]
    return
end

t0 = 0.0
x0 = [1.0, 0.1, 0.2, 0.3, 1.0, -0.1]
nx = length(x0)

# Jacobian (works)
A1 = ForwardDiff.jacobian((y, x) -> eom!(y, x, p, t0), zeros(nx), x0)

# Hessian (doesn't work)
A2 = ForwardDiff.jacobian(x -> ForwardDiff.jacobian((y, x) -> eom!(y, x, p, t0), zeros(nx), x), x0)
A2 = reshape(A2, nx, nx, nx)
```
where I get the following error on the second-to-last line:

```
ERROR: LoadError: MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{var"#2124#2125", Float64}, Float64, 6})
The type `Float64` exists, but no method is defined for this combination of argument types when trying to construct it.
```
As I understand it, the issue is with how I am passing the dummy output storage `y`: a `Float64` vector (as I am providing via `zeros(nx)`) works fine for a single call to `ForwardDiff.jacobian` (as when calculating `A1`), but not when the calls are nested (as when calculating `A2`), presumably because the inner function is then evaluated with `ForwardDiff.Dual` inputs whose outputs cannot be stored in a `Float64` array.
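One workaround I am considering (not sure if it is the idiomatic one) is to allocate the output buffer from the input's element type, so the inner Jacobian can store `Dual` numbers when nested; a sketch of what I mean:

```julia
using LinearAlgebra
using ForwardDiff

p = [1.0]
t0 = 0.0
x0 = [1.0, 0.1, 0.2, 0.3, 1.0, -0.1]
nx = length(x0)

function eom!(y, x, p, t)
    y[1:3] = x[4:6]
    y[4:6] = -p[1] / norm(x[1:3])^3 * x[1:3]
    return
end

# Allocate the output buffer with the same element type as the (possibly Dual-valued)
# input, so the inner Jacobian call can hold Duals during the outer differentiation.
eom_jac(x) = ForwardDiff.jacobian((y, xx) -> eom!(y, xx, p, t0), zeros(eltype(x), nx), x)

A1 = eom_jac(x0)                          # Jacobian, nx × nx
A2 = ForwardDiff.jacobian(eom_jac, x0)    # stacked Hessian, (nx*nx) × nx
A2 = reshape(A2, nx, nx, nx)              # Hessian, nx × nx × nx
```

but I am unsure whether re-allocating the buffer on every call like this is the intended pattern, or whether there is a better way with `ForwardDiff.jacobian!` and preallocated storage.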
I’d appreciate any help or suggestions, including tips on implementing both the Jacobian and Hessian calculations in this context in the most “appropriate” way.
Thank you!