Early optimizer termination

Hi, I am using NLopt with the MMA algorithm for optimization. The optimizer stops early by reaching xtol_rel while I still have large gradients, which should make the optimizer keep going. Could you please help me understand what the issue is and what changes I should make? Thank you

function gf_p_optimize(p_init; r, β, η, TOL=1e-4, MAX_ITER, fem_params, global_iter_start)
    ##################### Optimize #####################
    opt = Opt(:LD_MMA, fem_params.np)           # MMA with analytic gradients
    opt.lower_bounds = 0.001
    opt.upper_bounds = 1
    opt.xtol_rel = TOL                          # stop when the relative change in p falls below TOL
    opt.maxeval = MAX_ITER
    global_iter = global_iter_start
    iteration_solutions = Any[]
    function objective_fn(p0, grad)
        global_iter += 1
        push!(iteration_solutions, copy(p0))    # store a copy: NLopt may reuse this buffer between calls
        pf_vec = pf_p0(p0; r, fem_params)
        pfh = FEFunction(fem_params.Pf, pf_vec)
        pth = (pf -> Threshold(pf; β, η)) ∘ pfh
        writevtk(Ω, "ldtwrl$global_iter", cellfields=["p_opt"=>p0, "pfh"=>pfh, "pth"=>pth])
        println("called from iteration $global_iter")
        return gf_p(p0, grad; iteration_counter=global_iter, r, β, η, fem_params)
    end
    opt.min_objective = (p0, grad) -> objective_fn(p0, grad)
    (g_opt, p_opt, ret) = optimize(opt, p_init)
    @show numevals = opt.numevals               # the number of function evaluations
    println("got $g_opt at $p_opt after $numevals iterations (returned $ret)")
    open("gopt.txt", "a") do io
        write(io, "$g_opt \n")
    end
    return g_opt, p_opt, global_iter
end

Hi @mary,

It’s hard to say what is happening without a reproducible example, but xtol_rel is one of the convergence criteria. If you don’t want the optimizer to stop at the point it reaches, choose a smaller value for TOL.

You can read more about NLopt’s stopping criteria at General reference - NLopt Documentation
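
For example, you can effectively disable the x-tolerance and rely on a different criterion, then look at the return code to see which criterion actually stopped the run. A minimal sketch with a toy quadratic objective (not your FEM problem):

using NLopt

# Sketch only: disable xtol_rel, stop on ftol_rel or maxeval instead,
# and inspect the return code.
opt = Opt(:LD_MMA, 2)
opt.lower_bounds = 0.001
opt.upper_bounds = 1
opt.xtol_rel = 0.0      # non-positive tolerances disable this criterion
opt.ftol_rel = 1e-8     # stop on a small relative change of the objective instead
opt.maxeval = 500
opt.min_objective = (x, grad) -> begin
    if length(grad) > 0
        grad .= 2 .* (x .- 0.5)   # analytic gradient of the toy objective
    end
    sum((x .- 0.5) .^ 2)
end
g_opt, x_opt, ret = optimize(opt, [0.9, 0.9])
println("stopped with $ret")      # e.g. :FTOL_REACHED or :MAXEVAL_REACHED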

Thank you @odow. I tried with a smaller value but it is not working. My question is about the basics of optimization: the optimizer should keep going while the gradients are large enough, right? Why can't it be guided by the gradients?


It’s impossible to say without a reproducible example. For example, you might have a large gradient but the optimizer might be blocked from stepping there because of variable bounds.

Have you looked at iteration_solutions? Is there anything weird about the solution that it converges to?
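
One quick check, assuming you capture or return iteration_solutions from the closure above (names as in your code): measure how much the design actually changes between successive iterations. If those steps collapse to something tiny while the gradient stays large, xtol_rel is the criterion that fires.

# Assuming `iteration_solutions` holds copies of the design at each call:
# maximum change of the design vector between consecutive iterations.
steps = [maximum(abs.(iteration_solutions[k+1] .- iteration_solutions[k]))
         for k in 1:length(iteration_solutions)-1]
println("max |Δp| per iteration: ", steps)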

I understand it is hard.
No, there isn't anything weird in iteration_solutions. The result is within the variable bounds, but because the optimizer stops the optimization early, the solution has not converged.

If you take a small step from the final result in the direction of the gradient, is it still within the variable bounds?
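
In code, that check could look something like this (a sketch, assuming p_opt is the returned design and grad is the gradient evaluated at it from your own functions; the step size is a hypothetical value):

# Take a small descent step from the final design and see whether it leaves
# the box [0.001, 1]. Components pinned at a bound with the gradient pushing
# further outward are blocked, no matter how large the gradient is.
α = 1e-3                        # small trial step size (hypothetical)
p_trial = p_opt .- α .* grad    # descent direction for a minimization problem
lb, ub = 0.001, 1.0
println("trial step stays inside the bounds: ", all(lb .<= p_trial .<= ub))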

Yes. I also changed xtol to ftol; the optimizer then keeps going until max_eval, but in that case the result stops changing after some iterations and gets stuck at the solution that xtol gave me.