How to get evaluation number in optimization using NLopt

Hi, I am using NLopt to do the optimization. I have defined a function for the optimization; inside this function the optimizer calls other functions to calculate the objective and the gradients. I also want to generate a random number in each iteration, so I need the iteration counter to be available in my objective and gradient functions. Can you please help me with how I can get the iteration number at each iteration and use it in the other functions?
So, in a nutshell, I need the optimizer iteration number to:

1. generate an identifiable random number in each iteration,
2. save the forward analysis results in each iteration,
3. save the optimized design in each iteration.

Thank you!

This is the optimization function:

using NLopt

function gf_p_optimize(p_init; r, β, η, TOL=1e-4, MAX_ITER, fem_params)
    ##################### Optimize #################
    opt = Opt(:LD_MMA, fem_params.np)
    opt.lower_bounds = 0
    opt.upper_bounds = 1
    opt.xtol_rel = TOL
    opt.maxeval = MAX_ITER
    opt.max_objective = (p0, grad) -> gf_p(p0, grad; r, β, η, fem_params)
    (g_opt, p_opt, ret) = optimize(opt, p_init)
    @show numevals = opt.numevals # the number of function evaluations
    println("got $g_opt at $p_opt after $numevals iterations (returned $ret)")
    return g_opt, p_opt
end

And gf_p is the function for the objective and gradient calculation, based on an rrule:

using Random, Distributions

function gf_pf(pf_vec; β, η, fem_params)
    # generate the random material samples for iteration m
    sample = 2
    Random.seed!(1234)
    diste = LogNormal(log(E_mat), (log(E_mat * 0.1))^2)  # E_mat defined elsewhere
    Es = rand(diste, sample)
    # forward analysis: solve the equations based on the random values
    # ...
    # save the result for this iteration (m is the iteration counter I need)
    writevtk(Ω, "resultscellelasti $m", cellfields=["uh" => uh, "sh" => sh])
    return obj
end

import ChainRulesCore: rrule, NO_FIELDS  # NO_FIELDS is NoTangent() in newer ChainRulesCore versions

function rrule(::typeof(gf_pf), pf_vec; β, η, fem_params)
    function U_Disp_pullback(dgdg)
        NO_FIELDS, dgdg * Dgfdpf(pf_vec; β, η, fem_params)
    end
    return gf_pf(pf_vec; β, η, fem_params), U_Disp_pullback
end
function Dgfdpf(pf_vec; β, η, fem_params)
    # solve the forward problem again with the random values from iteration m
    # solve the backward (adjoint) problem
    # calculate the gradient
    return dgfdpfl
end

Hi @Mary,

You can do something like this:

function gf_p_optimize(p_init; r, β, η, TOL=1e-4, MAX_ITER, fem_params)
    opt = Opt(:LD_MMA, fem_params.np)
    opt.lower_bounds = 0
    opt.upper_bounds = 1
    opt.xtol_rel = TOL
    opt.maxeval = MAX_ITER
    iteration_counter = 0  # <-- new
    iteration_solutions = Any[]  # <-- new
    function objective_fn(p0, grad)
        iteration_counter += 1  # <-- new
        push!(iteration_solutions, copy(p0))  # <-- new (copy, in case the optimizer reuses the buffer)
        println("called from iteration $iteration_counter")
        return gf_p(p0, grad; r, β, η, fem_params)
    end
    opt.max_objective = objective_fn
    (g_opt, p_opt, ret) = optimize(opt, p_init)
    println("got $g_opt at $p_opt after $numevals iterations (returned $ret)")
    return g_opt, p_opt
end

Hi @odow,
Thank you for your great response. That should work for saving the design (p0) at each iteration.
Do you have any idea how I can use iteration_counter in the gf_pf function and use it to generate a random number in each iteration?

You could pass iteration_counter through to your gf_p function.
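For example, a minimal sketch, assuming you add a hypothetical iter keyword to gf_p (and forward it to gf_pf):

function objective_fn(p0, grad)
    iteration_counter += 1
    push!(iteration_solutions, copy(p0))
    # pass the counter down to the objective
    return gf_p(p0, grad; r, β, η, fem_params, iter = iteration_counter)
end

# then inside gf_pf you could seed per iteration:
# Random.seed!(iter)
# Es = rand(diste, sample)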

But you should be very careful. Do you really want to use a random number? NLopt assumes your function is deterministic, that is, it will return the same objective value for repeated calls with the same value of p0.
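For instance, if the objective reseeds on every call, two evaluations at the identical point disagree, which breaks that assumption (a toy illustration):

using Random

noisy(p) = sum(p) + randn()  # stand-in for an objective with per-call randomness

Random.seed!(1); a = noisy([1.0, 2.0])
Random.seed!(2); b = noisy([1.0, 2.0])
a == b  # false: the optimizer sees different values at the same p0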


I want to use iteration_counter in Random.seed!(iteration_counter) to generate the number.

I want to do stochastic optimization: I am assuming uncertainty in the material properties, so at each iteration it should sample the material properties, solve the forward problem, and compute the gradients.

You should reconsider how you intend to optimize your parameters. Using NLopt with a stochastic function is likely not what you need.

You might want to:

  • Use some sort of sample average approximation, where you optimize across a sample of different random values (say, 10 or 20) at each iteration, but where you keep the random values constant between iterations (see the sketch after this list).
  • Use some sort of algorithm that supports noisy objective functions, such as Bayesian Optimization.
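For the first option, here is a minimal sketch (hypothetical names; it assumes gf_p accepts the sampled Young's modulus as an extra keyword E):

using Random, Distributions

Random.seed!(1234)                               # draw the sample ONCE, outside the optimizer
diste = LogNormal(log(E_mat), (log(E_mat * 0.1))^2)
Es = rand(diste, 20)                             # fixed sample, reused at every iteration

function objective_fn(p0, grad)
    obj = 0.0
    fill!(grad, 0.0)
    gE = similar(grad)                           # per-sample gradient buffer, filled by gf_p
    for E in Es
        obj += gf_p(p0, gE; r, β, η, fem_params, E = E)
        grad .+= gE
    end
    grad ./= length(Es)
    return obj / length(Es)                      # deterministic: same p0 always gives the same value
end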

Other people on this forum might chime in with some other ideas.


For local optimization of expected values of noisy objectives, machine-learning people like the Adam algorithm and its cousins. Several of these stochastic gradient-descent types of algorithms can be found in Optimization.jl for example.
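As a generic illustration of the idea (a hand-rolled sketch, not Optimization.jl's API; grad_fn is a hypothetical function that returns a stochastic gradient estimate at step t):

# minimal Adam ascent loop for maximizing a noisy objective
function adam_ascent!(p, grad_fn; steps = 200, lr = 0.01, β1 = 0.9, β2 = 0.999, ϵ = 1e-8)
    m = zero(p)                            # first-moment estimate
    v = zero(p)                            # second-moment estimate
    for t in 1:steps
        g = grad_fn(p, t)                  # fresh stochastic gradient each step
        m .= β1 .* m .+ (1 - β1) .* g
        v .= β2 .* v .+ (1 - β2) .* g .^ 2
        m̂ = m ./ (1 - β1^t)                # bias corrections
        v̂ = v ./ (1 - β2^t)
        p .+= lr .* m̂ ./ (sqrt.(v̂) .+ ϵ)   # ascent step (maximization)
    end
    return p
end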

Uncertainty does not necessarily imply stochastic optimization; often, if you have uncertainty, you want to do some form of robust optimization, for which there are a wide variety of algorithms.
