Can I "warm start" Optim.jl's BFGS more cleanly?

Hey,

I am solving a series of problems that are, in some sense, very small modifications of one another.
At each step, I modify my loss using information from the inverse Hessian obtained by BFGS, and then I restart BFGS on the new loss. I was wondering whether there is a way to keep the BFGS instance I have at each step (and not just hand its inverse Hessian to a new one)?

I have something that looks like the following (not an MWE, sadly):


```julia
using Optim

opts = Optim.Options(
    show_trace = true,
    show_every = 100,
    allow_f_increases = true,
    allow_outer_f_increases = true,
    iterations = 10000,
)

my_loss(p, λ) = ...

par = randn(10)
λ = 0
for i in 1:100
    println("------------------------------")
    invH = one(zeros(10, 10))           # 10×10 identity matrix
    m = BFGS(initial_invH = x -> invH)
    obj = OnceDifferentiable(p -> my_loss(p, λ), par; autodiff = :forward)
    bfgsstate = Optim.initial_state(m, opts, obj, par)  # keep the state so we can read the inverse Hessian back
    res = optimize(obj, par, m, opts, bfgsstate)
    par = Optim.minimizer(res)
    invH = bfgsstate.invH
    λ += some_correction_computed_from(invH)
end
```

This looks wasteful. Is there a way to keep the bfgsstate altogether and only change the loss inside it? I cannot come up with a good MWE, sorry :confused:
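
One half-idea I had for the "change the loss in place" part (a sketch only, assuming λ enters the loss purely through its current value; `λref` is my own name, not anything from Optim): close over a `Ref` so the same `OnceDifferentiable` picks up new λ values without being rebuilt:

```julia
# Sketch only: the closure reads λref[] at call time, so updating the Ref
# changes what obj computes without reallocating the OnceDifferentiable.
λref = Ref(0.0)
obj = OnceDifferentiable(p -> my_loss(p, λref[]), par; autodiff = :forward)
λref[] += some_correction_computed_from(invH)   # obj now evaluates with the new λ
```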

Using the MWE (:grimacing:), the following seems to set the initial state (par and invH) correctly in my test:

```julia
invH = one(zeros(10, 10))               # 10×10 identity matrix
m = BFGS(initial_invH = x -> invH)
par = zeros(10)
obj = OnceDifferentiable(p -> my_loss(p, λ), par; autodiff = :forward)
bfgsstate = Optim.initial_state(m, opts, obj, par)
for i in 1:100
    println("------------------------------")
    res = optimize(obj, par, m, opts, bfgsstate)
    par = Optim.minimizer(res)
    invH .= bfgsstate.invH              # in-place, so the initial_invH closure sees it
    λ += some_correction_computed_from(invH)
    obj = OnceDifferentiable(p -> my_loss(p, λ), par; autodiff = :forward)  # rebuild with the updated λ
    Optim.reset!(m, bfgsstate, obj, par)
end
```

Optim.reset! is helpful in this case.
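
For what it's worth, a minimal sketch of what I would try next (untested; it assumes `Optim.reset!` re-initializes the existing state for the given objective, and `λref`/`some_correction_computed_from` are placeholders as above), combining `reset!` with the `Ref` idea so that neither the objective nor the state is reallocated inside the loop:

```julia
using Optim

λref = Ref(0.0)
par = zeros(10)
invH = one(zeros(10, 10))               # 10×10 identity matrix
m = BFGS(initial_invH = x -> invH)
# The closure reads λref[] at call time, so updating the Ref changes the
# loss without rebuilding the OnceDifferentiable.
obj = OnceDifferentiable(p -> my_loss(p, λref[]), par; autodiff = :forward)
bfgsstate = Optim.initial_state(m, opts, obj, par)
for i in 1:100
    res = optimize(obj, par, m, opts, bfgsstate)
    par = Optim.minimizer(res)
    invH .= bfgsstate.invH              # warm-start the next run's initial_invH
    λref[] += some_correction_computed_from(invH)
    Optim.reset!(m, bfgsstate, obj, par)  # re-initialize the state in place
end
```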
