Need help debugging an NLopt optimization

I’m trying to use the LD_MMA algorithm in NLopt to fit my model.

I get XTOL_REACHED after 2 iterations, and the values of the variables don’t seem to change from the starting values. Even when I leave out the xtol restriction and set maxeval to 200, the parameter values still don’t change after the second iteration.
Do you know what the problem might be?

I tested all the functions I implemented, and all of them work outside the NLopt optimization. For reference, here is the relevant part of the model-fitting code:

observed_accuracy = zeros(3,2,4);
observed_accuracy[1,:,:] = [0.964  0.958  0.920  0.828;
                            0.971  0.965  0.936  0.852];
observed_accuracy[2,:,1:3] = [0.888  0.626  0.155;
                              0.806  0.789  0.221];
observed_accuracy[3,:,1:3] = [0.863  0.583  0.862;
                              0.786  0.768  0.938];
bdd = (
    (.001, 20.000),  # between
    (.001, 2.000),   # within
    (.001, 5.000),   # sensitivity
    (.000, 20.000),  # noise
    (.001, 5.000),   # response_scaling
    (.001, 30.000),  # crit2_rep
    (.001, 30.000),  # crit2_nrep
    (.001, 30.000),  # crit3_rep
    (.001, 30.000),  # crit3_nrep
);
bdd_lower = [i[1] for i in bdd];
bdd_upper = [i[2] for i in bdd];
init_guess() = [rand(Uniform(bdd_lower[i],bdd_upper[i])) for i in 1:length(bdd)];

function myobjective(x::Vector, grad::Vector)
    criterion = zeros(3,2);
    between, within, sensitivity, noise,
    response_scaling, criterion[2,1],criterion[2,2],
    criterion[3,1], criterion[3,2] = x;
    all_vars = [between, within, sensitivity, noise, response_scaling, criterion];
    WSSD = calc_wssd(observed_accuracy,all_vars,nsim = 1000);

    return(WSSD)
end

function myconstraint(x::Vector, grad::Vector)
    return(x[2]-x[1])                
end

function optim_all()
    println("\n ************start optimization now****************")

    opt = Opt(:LD_MMA, length(init_guess()));
    opt.lower_bounds = bdd_lower;
    opt.upper_bounds = bdd_upper;
    opt.xtol_rel = 1e-32;
    opt.maxtime = 60*10;
    #opt.initial_step = .1;
    #opt.maxeval = 200;
    opt.min_objective = myobjective; # set the objective function
    inequality_constraint!(opt, myconstraint, 1e-8);
    time1=Dates.now();
    (minf,minx,ret) = optimize(opt, init_guess());
    println(Dates.now()-time1);
    numevals = opt.numevals;
    println("got $minf at $minx after $numevals iterations (returned $ret)");

    return(minf, minx)
end

Thank you in advance for your support!
Mingjia

LD_MMA requires the gradient of the objective function, but your objective function never sets it (the “D” in LD_MMA stands for derivative-based optimization). You must either fill in the grad argument or switch to a derivative-free algorithm (for instance LN_NELDERMEAD).
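
For reference, NLopt.jl expects the objective to write the gradient into grad in place whenever it is non-empty (derivative-free algorithms call it with an empty grad). A minimal self-contained sketch with a toy quadratic objective, not your model, just to show the pattern:

using NLopt

# Toy objective: f(x) = (x1 - 1)^2 + (x2 - 2)^2.
# NLopt passes grad by reference; a gradient-based algorithm
# like LD_MMA only makes progress if we write the gradient into it.
function toy_objective(x::Vector, grad::Vector)
    if length(grad) > 0
        grad[1] = 2 * (x[1] - 1);
        grad[2] = 2 * (x[2] - 2);
    end
    return (x[1] - 1)^2 + (x[2] - 2)^2
end

opt = Opt(:LD_MMA, 2);
opt.lower_bounds = [-5.0, -5.0];
opt.upper_bounds = [5.0, 5.0];
opt.xtol_rel = 1e-8;
opt.min_objective = toy_objective;
(minf, minx, ret) = optimize(opt, [0.0, 0.0]);
println("got $minf at $minx (returned $ret)");

Also, if your objective is itself a stochastic simulation (calc_wssd with nsim = 1000 looks like one), an analytic gradient may not even exist, which is another reason to prefer a derivative-free method.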

As to why the optimization stops after two iterations: my guess is that, because grad is never written, the algorithm sees no direction to move in, the iterates stay put, and the xtol check concludes it has converged.
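
One caveat if you go the derivative-free route: your optim_all registers an inequality constraint via inequality_constraint!, and in NLopt LN_NELDERMEAD only supports bound constraints, whereas LN_COBYLA also handles nonlinear inequality constraints. A sketch of that swap against your setup (everything else unchanged, so this assumes the myobjective, myconstraint, and bounds from your post):

opt = Opt(:LN_COBYLA, length(init_guess()));
opt.lower_bounds = bdd_lower;
opt.upper_bounds = bdd_upper;
opt.xtol_rel = 1e-6;              # a less extreme tolerance than 1e-32
opt.maxeval = 200;
opt.min_objective = myobjective;  # grad arrives empty and can be ignored
inequality_constraint!(opt, myconstraint, 1e-8);
(minf, minx, ret) = optimize(opt, init_guess());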
