Is it possible to obtain the gradient of the objective function at the point where the solver hit the iteration limit? The returned object has a field named g_residual that gives me the max norm of the gradient, but I am interested in the gradient vector itself. Is there a way to obtain it?
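Concretely, this is roughly what I am looking at right now (a sketch; f, x0, and the iteration cap are placeholders for my actual setup):

using Optim

# hypothetical setup: the solver stops at the iteration cap
r = optimize(f, x0, BFGS(), Optim.Options(iterations = 100))

# the field mentioned above: max norm of the gradient at the last iterate,
# but I would like the full gradient vector instead
r.g_residual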
Assuming you didn’t pass an analytic gradient, just use ForwardDiff:
using ForwardDiff
# x is the point of interest, e.g. Optim.minimizer(result)
ForwardDiff.gradient(f, x)
It’s what Optim uses internally when you pass autodiff = :forward (by default it falls back to finite differences).
I was hoping to piggyback on the work already done by Optim. The way you propose, I would have to pay for the compilation of ForwardDiff.gradient. I guess if I had supplied the gradient myself, I could do that without penalty; maybe I should do that.
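Something like this, I suppose (a sketch with a toy objective; my real gradient would replace g!, and the in-place g!(G, x) form is the signature optimize accepts):

using Optim

f(x) = (x[1] - 2)^2 + 3

# analytic in-place gradient: ∇f(x) = [2(x₁ - 2)]
function g!(G, x)
    G[1] = 2 * (x[1] - 2)
    return G
end

res = optimize(f, g!, [1.0], BFGS())

# the same g! can be reused afterwards to get the gradient vector
G = similar(Optim.minimizer(res))
g!(G, Optim.minimizer(res))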
Is compilation really an issue?
julia> using Optim, ForwardDiff
julia> f(x) = (x[1] - 2)^2 + 3
f (generic function with 1 method)
julia> r = optimize(f, [1.0], BFGS())
 * Status: success

 * Candidate solution
    Final objective value:     3.000000e+00

 * Found with
    Algorithm:     BFGS

 * Convergence measures
    |x - x'|               = 1.00e+00 ≰ 0.0e+00
    |x - x'|/|x'|          = 5.00e-01 ≰ 0.0e+00
    |f(x) - f(x')|         = 1.00e+00 ≰ 0.0e+00
    |f(x) - f(x')|/|f(x')| = 3.33e-01 ≰ 0.0e+00
    |g(x)|                 = 1.83e-11 ≤ 1.0e-08

 * Work counters
    Seconds run:   1  (vs limit Inf)
    Iterations:    1
    f(x) calls:    3
    ∇f(x) calls:   3
julia> xstar = Optim.minimizer(r)
1-element Vector{Float64}:
2.0000000000069598
julia> @time ForwardDiff.gradient(f, xstar)
0.840569 seconds (2.92 M allocations: 176.158 MiB, 7.48% gc time, 99.96% compilation time)
1-element Vector{Float64}:
1.3919532193540363e-11
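If the goal really is to reuse Optim's own machinery, my understanding (treat the exact calls as an assumption on my part) is that you can build the OnceDifferentiable wrapper from NLSolversBase yourself with autodiff = :forward, hand it to optimize, and then query it afterwards, so the ForwardDiff code is already compiled by the time the solver returns:

using Optim, NLSolversBase

f(x) = (x[1] - 2)^2 + 3

# wrap the objective with ForwardDiff-based gradients
od = OnceDifferentiable(f, [1.0]; autodiff = :forward)

r = optimize(od, [1.0], BFGS())

# re-evaluate the gradient at the minimizer and read the stored vector
NLSolversBase.gradient!(od, Optim.minimizer(r))
g_at_min = NLSolversBase.gradient(od)

That way the only extra cost after the solve is a single gradient evaluation.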