I am using the following interface from NLSolversBase to minimize a function:

```julia
function fg!(F, G, x)
    common_calc(…)  # computations shared by f and g!
    if G !== nothing
        # mutating calculations specific to g!
    end
    if F !== nothing
        # calculations specific to f
        return f
    end
end
```
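For concreteness, here is a minimal instance of that pattern for a toy least-squares objective (the names `fg_example!`, `A`, and `b` are illustrative stand-ins, not from my actual model):

```julia
using LinearAlgebra

# Toy objective: f(x) = 0.5 * ||A*x - b||^2, with gradient A' * (A*x - b)
function fg_example!(F, G, x, A, b)
    r = A * x - b              # shared computation used by both f and g!
    if G !== nothing
        mul!(G, A', r)         # write the gradient into G in place
    end
    if F !== nothing
        return 0.5 * dot(r, r) # return the objective value
    end
end
```

The point of the pattern is that the residual `r` is computed once and reused whether the caller asks for the value, the gradient, or both.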

In particular, my code that uses Optim.jl is:

```julia
ODJ = OnceDifferentiable(only_fg!((F, G, theta) -> fg!(F, G, theta, x1, x2)), θ)
results = optimize(ODJ, θ, LBFGS(), Optim.Options(store_trace = true))
```
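As a self-contained sanity check of this wiring (with a toy quadratic `fg_toy!` standing in for my actual objective, which depends on `x1` and `x2`):

```julia
using Optim  # re-exports only_fg! and OnceDifferentiable from NLSolversBase

# Toy quadratic with minimum at theta = [1, 2]
function fg_toy!(F, G, theta)
    d = theta .- [1.0, 2.0]
    if G !== nothing
        G .= 2 .* d            # gradient written in place
    end
    if F !== nothing
        return sum(abs2, d)    # objective value
    end
end

θ0 = zeros(2)
od = OnceDifferentiable(only_fg!(fg_toy!), θ0)
res = optimize(od, θ0, LBFGS(), Optim.Options(store_trace = true))
```

Here `Optim.minimizer(res)` should be close to `[1.0, 2.0]`.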

How can I get the Hessian to estimate standard errors, reusing the gradient I already defined in only_fg!? Or is there a faster way? Since the problem has almost 100 parameters, efficiency matters.

Currently, I duplicate the gradient in a separate function to obtain the Hessian, and then the standard errors:

```julia
grad = theta -> gradient_function!(theta, x1, x2)  # returns the gradient at theta
H = ForwardDiff.jacobian(grad, results.minimizer)  # Jacobian of the gradient = Hessian
V = inv(H) / n
```
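To show the last step end to end, here is a runnable sketch with a hypothetical closed-form gradient standing in for `gradient_function!` (the function `grad`, the point `[0, 0]`, and the sample size `n = 100` are all made-up stand-ins):

```julia
using ForwardDiff, LinearAlgebra

# Stand-in gradient of f(theta) = theta[1]^2 + 4*theta[2]^2
grad = theta -> [2.0 * theta[1], 8.0 * theta[2]]
n = 100                                      # hypothetical sample size

H = ForwardDiff.jacobian(grad, [0.0, 0.0])   # Jacobian of the gradient = Hessian
V = inv(H) / n                               # covariance estimate
se = sqrt.(diag(V))                          # standard errors from the diagonal
```

The standard errors are then just the square roots of the diagonal of `V`.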