I am optimizing a function with Optim.jl. The solver reports success, but the gradient at the candidate solution is nowhere near zero: its norm is about 2e7, even though I set g_tol = 1e-6. What can be done in that case? The call is:
sol3 = optimize(θ -> -obj(θ), sol1.minimizer, NewtonTrustRegion(),
                Optim.Options(extended_trace=true, store_trace=true, show_trace=true,
                              iterations=10_000, g_tol=1e-6);
                autodiff=:finite)
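For reference, a self-contained version of the call shape I am using (the real obj and starting point sol1.minimizer come from earlier in my script; here they are replaced by a stand-in quadratic objective and a fixed start, so this is only a sketch of the setup, not my actual problem):

using Optim

# Stand-in for my real objective: any smooth scalar function of a parameter
# vector θ; the real obj is much more expensive to evaluate.
obj_demo(θ) = -sum(abs2, θ .- 1.0)
θ0 = zeros(3)

sol_demo = optimize(θ -> -obj_demo(θ), θ0, NewtonTrustRegion(),
                    Optim.Options(iterations = 10_000, g_tol = 1e-6);
                    autodiff = :finite)   # derivatives by finite differencing, as in my real call

With my real obj and starting point, the run finishes and the result object reports: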
julia> sol3
 * Status: success
 * Candidate solution
    Final objective value:     2.505956e+05
 * Found with
    Algorithm:     Newton's Method (Trust Region)
 * Convergence measures
    |x - x'|               = 0.00e+00 ≤ 0.0e+00
    |x - x'|/|x'|          = 0.00e+00 ≤ 0.0e+00
    |f(x) - f(x')|         = 0.00e+00 ≤ 0.0e+00
    |f(x) - f(x')|/|f(x')| = 0.00e+00 ≤ 0.0e+00
    |g(x)|                 = 1.70e+07 ≰ 1.0e-06
 * Work counters
    Seconds run:   4957  (vs limit Inf)
    Iterations:    42
    f(x) calls:    33
    ∇f(x) calls:   35
    ∇²f(x) calls:  3
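From those convergence measures, the success status apparently comes from the step-size and objective-change tests (both shown with ≤ and zero tolerances), not from the gradient test. The individual flags on the result object, using the standard Optim.jl accessors, should confirm which test fired:

Optim.converged(sol3)    # overall flag behind "Status: success"
Optim.x_converged(sol3)  # the |x - x'| test
Optim.f_converged(sol3)  # the |f(x) - f(x')| test
Optim.g_converged(sol3)  # the |g(x)| ≤ g_tol test (shown as ≰ above)

Checking the gradient norm at the returned point directly gives the same picture: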
julia> FiniteDiff.finite_difference_gradient(obj, sol3.minimizer) |> norm
2.1403981052614696e7
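(That call assumes using FiniteDiff and using LinearAlgebra earlier in the session.) To rule out finite-difference noise as the explanation for the large norm, here is a sketch of the same check with forward-mode AD, under the assumption that obj accepts generic (dual) number types:

using ForwardDiff, LinearAlgebra

# Cross-check with forward-mode AD: if this norm is also ~1e7, the large
# gradient at sol3.minimizer is real, not a finite-difference artifact.
g_ad = ForwardDiff.gradient(obj, sol3.minimizer)
norm(g_ad)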