I am trying to replicate, in Julia, a nonlinear unconstrained dynamic optimization problem originally coded in MATLAB. The algorithm converges in MATLAB, but I cannot achieve convergence in Julia.

In MATLAB, fminsearch() is used while in Julia I am using the Nelder-Mead algorithm from the package Optim.jl.

Convergence criteria and starting values are the same in both cases.

The only (potentially relevant) difference between the two models that I can identify is that the initial simplex is not the same, as the Optim.jl documentation describes. Moreover, random numbers are introduced in the model, so perhaps this leads to different optimization results in each run.
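For what it's worth, MATLAB documents fminsearch's initial simplex rule: each coordinate of the starting point is perturbed by 5%, with 0.00025 used for coordinates that are exactly zero. A minimal sketch of that rule in Julia, which could be used to rule out the simplex as the source of the difference (the function name here is mine, not from either package):

```julia
# Sketch of fminsearch's documented initial-simplex rule: start from x0 and,
# for each coordinate i, add a vertex with x0[i] scaled by 1.05
# (or set to 0.00025 when x0[i] == 0).
function fminsearch_simplex(x0::Vector{Float64})
    simplex = [copy(x0)]
    for i in eachindex(x0)
        v = copy(x0)
        v[i] = v[i] != 0 ? 1.05 * v[i] : 0.00025
        push!(simplex, v)
    end
    return simplex  # length(x0) + 1 vertices
end
```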

I am a bit stuck because, assuming everything is coded properly (which I believe it is), I cannot make the algorithm converge as the original model does.

If I'm reading that correctly, it looks like the tolerance you set for Optim.jl is 10x smaller than the tolerance for MATLAB (0.0001 vs 0.001). Could that be part of the difference?

I should have corrected that. The tolerances differed above because I was trying different values to see if that was part of the issue. I just edited the code above. Thanks

If you have verified that the objective functions are equivalent and the optimization configurations are equivalent, then would comparing the traces help? If not, I think a simple reproducible example would be helpful.
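In Optim.jl the trace can be enabled through Optim.Options; a sketch, assuming obj and x0 are your objective and starting point:

```julia
using Optim

# Sketch: enable tracing so each iteration's state is printed and stored.
res = optimize(obj, x0, NelderMead(),
               Optim.Options(show_trace = true,     # print progress per iteration
                             extended_trace = true, # include per-iteration state details
                             store_trace = true))   # keep the trace inside `res`
```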

Following up on this, I reached convergence in Julia using NLopt.jl instead of Optim.jl. The former package uses a different version of the Nelder-Mead algorithm than the latter package.

using NLopt

param_start = vec(ep.param_start_flat)  # NLopt expects a 1D vector

# NLopt objective functions must accept two arguments: the parameter vector
# and a gradient vector (unused by derivative-free algorithms).
function obj_funct_est2(param_start, dummy_gradient!)
    return obj_funct_est(param_start, sv, rm)
end

opt = Opt(:LN_NELDERMEAD, length(param_start))
opt.ftol_rel = 0.001
opt.xtol_rel = 0.001
opt.maxeval = 500_000
opt.min_objective = obj_funct_est2
(minf, minx, ret) = NLopt.optimize(opt, param_start)

Aside comment: I wasn't able to find a way to add a trace to the NLopt algorithm (similar to show_trace or extended_trace in Optim.jl). Suggestions are welcome.

Maybe one way to debug is to take the parameters that gave you the optimal solution in MATLAB and evaluate the Julia objective function at them, to see whether the values differ.
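A sketch of that comparison, using a toy objective as a stand-in for the thread's obj_funct_est (the MATLAB numbers here are hypothetical placeholders):

```julia
# Toy stand-in for the actual objective obj_funct_est(x, sv, rm).
rosen(x) = (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2

x_matlab = [1.0, 1.0]  # hypothetical minimizer exported from MATLAB
f_matlab = 0.0         # hypothetical objective value reported by MATLAB

f_julia = rosen(x_matlab)  # evaluate the Julia objective at MATLAB's optimum
reldiff = abs(f_julia - f_matlab) / max(abs(f_matlab), 1e-12)
println("Julia value: ", f_julia, ", relative difference: ", reldiff)
```

If the two values disagree by more than roundoff, the objectives are not equivalent and the optimizers are solving different problems.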

Do both MATLAB and Julia use double precision in this case?

I will try your suggestion, thanks.
Apparently, MATLAB's fminsearch, Optim.jl, and NLopt.jl all use double precision as the default floating-point precision.
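A quick way to confirm the precision on the Julia side (MATLAB's default double has the same machine epsilon):

```julia
# Julia numeric literals like 1.0 are Float64 (IEEE double) by default.
x = 1.0
println(typeof(x))     # Float64
println(eps(Float64))  # ~2.22e-16, matching MATLAB's eps for doubles
```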

You may be interested to know that the author of NLopt has stated on more than one occasion that he considers Nelder-Mead to be obsolete, and that other algorithms should typically be selected.

For NLopt, I believe that the recommended way to trace the calls to your objective is to pass a "wrapper" function to the optimizer that calls your actual objective function, then prints the desired information before returning the objective function value.

I am aware of the discussion around the Nelder-Mead algorithm, thanks. I am using it just for replication purposes, as MATLAB's fminsearch uses this algorithm. Now that I have convergence with NLopt, I am exploring other gradient-free algorithms that the package includes (e.g., BOBYQA or Sbplx). But the first round of optimized parameter values with these alternative algorithms looks a bit odd.
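For reference, switching algorithms in NLopt only requires changing the symbol passed to Opt; the rest of the setup from the earlier snippet stays the same:

```julia
using NLopt

# Same setup as before, but with a different derivative-free algorithm.
opt = Opt(:LN_BOBYQA, length(param_start))  # or :LN_SBPLX for Sbplx/Subplex
opt.ftol_rel = 0.001
opt.xtol_rel = 0.001
opt.min_objective = obj_funct_est2
(minf, minx, ret) = NLopt.optimize(opt, param_start)
```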

Many thanks for pointing out a way to trace calls; what you suggested did the trick:

function obj_funct_est2(param_start, dummy_gradient!)
    println("Calling objective function with parameters: ", param_start)
    result = obj_funct_est(param_start, sv, rm)
    println("Objective function value: ", result)
    return result
end

If you are getting questionable results with BOBYQA or other Powell algorithms from NLopt, you might want to try the versions in PRIMA.jl. PRIMA contains the original Fortran 77 codes updated to modern Fortran, with bug fixes and improved performance.
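If I remember the PRIMA.jl interface correctly, it takes an objective of the parameter vector alone; a sketch (please check the package README, this is not verified):

```julia
using PRIMA

# Sketch: BOBYQA from PRIMA.jl; wrap the thread's objective so it takes
# only the parameter vector.
f(x) = obj_funct_est(x, sv, rm)
x, info = PRIMA.bobyqa(f, param_start)
```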

I'm not sure how familiar you are with the Julia landscape of derivative-free optimizers, so I'll add a few comments…

For global optimization, I've had very good results with CMAEvolutionStrategy, though it is relatively slow in the end game.
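As far as I recall, the CMAEvolutionStrategy interface looks roughly like this; a sketch under that assumption (check the package README for the exact signature):

```julia
using CMAEvolutionStrategy

# Sketch: minimize takes the objective, a starting point, and an initial
# step size (sigma0), which sets the scale of the initial search.
result = minimize(x -> obj_funct_est(x, sv, rm), param_start, 0.5)
```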

Optimization.jl wraps a huge number of other packages, including many derivative-free algorithms, making it easy to try different algorithms without major changes to your code.
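A sketch of the Optimization.jl pattern for this problem (objectives there take `(x, p)`, where `p` carries fixed parameters and can be ignored; swapping solvers means changing only the argument to `solve`):

```julia
using Optimization, OptimizationOptimJL

# Wrap the thread's objective in the (x, p) signature Optimization.jl expects.
f = OptimizationFunction((x, p) -> obj_funct_est(x, sv, rm))
prob = OptimizationProblem(f, param_start)
sol = solve(prob, OptimizationOptimJL.NelderMead())
```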

I didn't know any of the packages you mentioned, so thanks a lot. I will go over them carefully and see if I can apply them to my setting (the Optimization.jl package looks like an ambitious and promising project!).