OptimizationState in Optim.jl does not have the expected fields

Hello. I am trying to use the callback functionality of Optim.jl’s solvers (CG and LBFGS). However, the state passed to the callback function does not seem to have the expected fields. In fact, the state only has the fields `iteration` (the iteration number), `value` (presumably the function value at the current iterate), `g_norm` (presumably the gradient norm at the current iterate), and `metadata` (a dictionary that only seems to have the entry `time`).

I would like to access the current iterate and the objective gradient at that iterate. Is that possible?

To verify the behavior described above, you can run the following code:

using Optim
# function my_callback(state)
#     print(" Objective Value: ", state.f_x)
#     println(" at state x: ", state.x)
#     return false  # Return true to stop the optimization
# end
function my_callback(state)
    @show propertynames(state)
    @show keys(state.metadata)
    return true
end
function objective(x)
    return (x[1]-2)^2 + (x[2]-3)^2
end

initial_x = [0.0, 0.0]
method = BFGS()
options = Optim.Options(callback=my_callback)
d = OnceDifferentiable(objective, initial_x)

optstate = Optim.initial_state(method, options, d, initial_x)
result = optimize(d, initial_x, method, options, optstate)

This code was adapted from the Optim.jl documentation; I changed the definition of my_callback to show the fields of the state and the keys of the metadata. The original callback (commented out above) ends with an error saying that the field `f_x` is not available.

Which version of Optim.jl are you using? I think there was a major rework of this towards version 2.0.
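To check which version is installed, the standard Pkg status query works (this assumes Optim is installed in your active environment):

```julia
using Pkg

# Prints the installed version of Optim.jl in the active environment
Pkg.status("Optim")
```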

When I try to run your code on Optim 2.0.1, the second line of your callback fails because the state has no field `metadata`, i.e.

julia> result = optimize(d, initial_x, method, options, optstate)
propertynames(state) = (:x, :g_x, :f_x, :x_previous, :g_x_previous, :f_x_previous, :dx, :dg, :u, :invH, :s, :x_ls, :alpha)
ERROR: FieldError: type Optim.BFGSState has no field `metadata`, available fields: `x`, `g_x`, `f_x`, `x_previous`, `g_x_previous`, `f_x_previous`, `dx`, `dg`, `u`, `invH`, `s`, `x_ls`, `alpha`
Stacktrace:

But one can see that there are fields like `x` for the current iterate; `f_x` is probably the cost and `g_x` is probably the current gradient, …
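So on Optim 2.x your original idea should work directly, since the callback apparently receives the solver state itself. A sketch, mirroring your setup (the field names `x`, `f_x`, and `g_x` are taken from the propertynames output above and may differ between solvers):

```julia
using Optim

# The callback receives the solver state (here a BFGSState), so the
# current iterate, objective value, and gradient are plain fields.
function my_callback(state)
    println("x = ", state.x, "  f_x = ", state.f_x, "  g_x = ", state.g_x)
    return false  # return true to stop the optimization early
end

objective(x) = (x[1] - 2)^2 + (x[2] - 3)^2

initial_x = [0.0, 0.0]
method = BFGS()
options = Optim.Options(callback = my_callback)
d = OnceDifferentiable(objective, initial_x)

optstate = Optim.initial_state(method, options, d, initial_x)
result = optimize(d, initial_x, method, options, optstate)
```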

Running the example you linked also works fine on Optim 2.0.1 and prints something like

julia> result = optimize(d, initial_x, method, options, optstate)
 Objective Value: 13.0 at state x: [0.0, 0.0]
 Objective Value: 4.1479071968772985e-21 at state x: [1.9999999999606561, 3.00000000005099]
 * Status: success
[ ... rest of status print skipped here for space reasons ... ]