I’ve got a log-probability density function of multiple variables, and would like to integrate it using a Laplace approximation. To do this I need to find the function’s maximum and calculate the Hessian at that point.
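(For context, by Laplace approximation I mean the standard second-order expansion of the log-density $\ell$ around its mode $x^*$, which gives

$$\int e^{\ell(x)}\,dx \;\approx\; e^{\ell(x^*)}\,(2\pi)^{d/2}\,\det(-H)^{-1/2},$$

where $H$ is the Hessian of $\ell$ at $x^*$ and $d$ the dimension — hence the need for the mode and the Hessian.)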

That’s easy enough using Optim.jl and AD, but when the Hessian is sparse I’d like to detect the sparsity automatically and exploit it. That should be possible using e.g. SparseDiffTools.jl and/or ModelingToolkit.jl, but it isn’t clear to me from the documentation what the recommended approach is. The method below works, but seems hacky. Is there a better way?

```
using Distributions, ForwardDiff

# Toy log-density with block structure, so the Hessian is sparse
d = MvNormal([1.0, 2.0], [1.0 0.2; 0.2 1.0])
f(x) = logpdf(d, x[1:2]) + logpdf(Normal(), x[3])

x0 = [1.0, 2.0, 0.0]
f(x0)

# Dense Hessian for reference (does not exploit sparsity)
ForwardDiff.hessian(f, x0)

using SparseDiffTools, LinearAlgebra, SparseArrays

# Lazy Hessian-vector-product operator
Hv = HesVec(f, x0)

# Hacky: reassemble the full Hessian column by column as H*e_i
H = hcat([sparse(Hv * e) for e in eachcol(I(3))]...)
```
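For reference, here’s the direction my (possibly wrong) reading of the docs suggests: detect the pattern with `Symbolics.hessian_sparsity`, color it with `matrix_colors`, and then compute the Hessian as the Jacobian of the gradient via `forwarddiff_color_jacobian!`. This is only a sketch — I’m not sure symbolic tracing will succeed through Distributions’ `logpdf`, and I may be misusing the coloring API:

```julia
using Distributions, ForwardDiff, SparseDiffTools, SparseArrays, Symbolics

d = MvNormal([1.0, 2.0], [1.0 0.2; 0.2 1.0])
f(x) = logpdf(d, x[1:2]) + logpdf(Normal(), x[3])
x0 = [1.0, 2.0, 0.0]

# Detect the Hessian sparsity pattern by tracing f symbolically
# (assumes logpdf can be traced with Symbolics variables)
pattern = Symbolics.hessian_sparsity(f, x0)

# Color the pattern so structurally orthogonal columns share a dual
colors = matrix_colors(pattern)

# The Hessian of f is the Jacobian of its gradient
g!(dx, x) = ForwardDiff.gradient!(dx, f, x)

# Preallocate a sparse matrix with the detected structure and fill it
H = Float64.(sparse(pattern))
forwarddiff_color_jacobian!(H, g!, x0, colorvec = colors, sparsity = pattern)
```

Is something like this the intended workflow, or is there a higher-level entry point that does the detect-color-differentiate steps in one call?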