Hello. I am trying to use the callback functionality of Optim.jl's solvers (CG and LBFGS). However, the state passed to the callback function does not have the fields I expected. In fact, the state only has the fields iteration (the iteration number), value (presumably the function value at the current iterate), g_norm (presumably the gradient norm at the current iterate), and metadata (a dictionary that only seems to contain the entry time).
I would like to access the current iterate and the objective gradient at that iterate. Is that possible?
To verify the behavior described above, you can run the following code:
using Optim

# Original callback from the documentation; it errors because the
# state has no field f_x:
# function my_callback(state)
#     print(" Objective Value: ", state.f_x)
#     println(" at state x: ", state.x)
#     return false  # Return true to stop the optimization
# end

# Modified callback that inspects what the state actually contains:
function my_callback(state)
    @show propertynames(state)
    @show keys(state.metadata)
    return true  # stop after the first iteration
end

function objective(x)
    return (x[1] - 2)^2 + (x[2] - 3)^2
end

initial_x = [0.0, 0.0]
method = BFGS()
options = Optim.Options(callback = my_callback)
d = OnceDifferentiable(objective, initial_x)
optstate = Optim.initial_state(method, options, d, initial_x)
result = optimize(d, initial_x, method, options, optstate)
This code was adapted from the Optim.jl documentation; I changed the definition of my_callback to show the fields of the state and the keys of the metadata. The original code fails with an error saying that the field f_x is not available.
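For reference, here is a minimal sketch of what I had hoped would work. It assumes (based on the extended_trace option described in the Options docs) that setting extended_trace = true makes the iterate and gradient available under the metadata keys "x" and "g(x)", and that with tracing enabled the callback receives the whole trace (a vector of states) rather than a single state:

```julia
using Optim

# Assumption: with extended_trace = true, each trace entry's metadata
# holds "x" (current iterate) and "g(x)" (gradient at that iterate),
# and the callback is passed the full trace.
function trace_callback(trace)
    last = trace[end]  # most recent OptimizationState
    println("iter ", last.iteration,
            ": x = ", last.metadata["x"],
            ", g(x) = ", last.metadata["g(x)"])
    return false  # returning true would stop the optimization
end

objective(x) = (x[1] - 2)^2 + (x[2] - 3)^2

result = optimize(objective, [0.0, 0.0], BFGS(),
                  Optim.Options(callback = trace_callback,
                                extended_trace = true,
                                store_trace = true))
```

Is this the intended way to get at the iterate from a callback, or is there a supported field on the state itself that I am missing?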