I’m using Optim.jl to solve an optimization problem and I want to plot its trace/error convergence. However, I’ve run into some issues and couldn’t find much information on this so far.
Just to make things clearer, I want something like this:
So my first problem is: I’m dealing with a box-constrained optimization problem. I’m calling the optimize function as follows
res = opt.optimize(od, lb, ub, p0, opt.Fminbox(inner_optimizer), Grad_options)
and it seems that the store_trace parameter is not available for this kind of optimization. Is there any way around this?
The other thing is, if I get rid of the box constraints I can do something along the lines of
import Optim
using Plots   # for the `plot` call below
const opt = Optim
res = opt.optimize((x-> sum(x.^2)), [100.0, 200.0], store_trace=true)
trace = opt.trace(res)
trace_err = Float64[]
trace_time = Float64[]
for i in 1:length(trace)
    append!(trace_err, parse(Float64, split(string(trace[i]))[2]))
    append!(trace_time, parse(Float64, split(string(trace[i]))[end]))
end
plot(log10.(trace_time), log10.(trace_err/trace_err[end]))
and then I get back something close to what I want.
But it really feels like I’m hacking my way through this, and not in a good way. Is there any standard way of plotting the trace?
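For reference, I can get roughly the same arrays without string-parsing by reading the trace entries’ fields directly. This is a sketch assuming each trace entry is an `OptimizationState` with a `value` field and a `"time"` key in its `metadata` dict — I’m not certain these are stable public API, and the `"time"` entry may not be recorded for every method:

```julia
trace = opt.trace(res)                             # vector of OptimizationState
trace_err  = [t.value for t in trace]              # objective value per iteration
trace_time = [t.metadata["time"] for t in trace]   # wall-clock time, if recorded
plot(log10.(trace_time), log10.(trace_err ./ trace_err[end]))
```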
That’s because you’re seeing the output of the inner optimizer’s trace. I have never gotten around to addressing it, but you have to be aware that for box-constrained optimization what you’re seeing is not really the objective itself: it includes a penalty (barrier) term. Is your objective expensive to evaluate?
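If evaluations are cheap, one simple workaround is to wrap the objective in a closure that logs every call, so you record the true objective values yourself rather than the barrier-augmented values in Fminbox’s trace. A sketch (the objective, bounds, and starting point here are placeholders, not from your problem):

```julia
import Optim
const opt = Optim

# Log every objective evaluation ourselves, so we see the true
# objective rather than the barrier-augmented value.
f_history = Float64[]
function f(x)
    val = sum(x.^2)        # your real objective goes here
    push!(f_history, val)  # record each evaluation
    return val
end

lb = [-10.0, -10.0]; ub = [10.0, 10.0]; p0 = [1.0, 2.0]
inner_optimizer = opt.GradientDescent()
res = opt.optimize(f, lb, ub, p0, opt.Fminbox(inner_optimizer))

# f_history now holds the objective at every function evaluation
# (including line-search evaluations), which you can plot directly.
```

Note this records one entry per function *evaluation*, not per iteration, so the curve will be denser than an iteration trace.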