Nonconvex.jl documentation


This may be a silly question, but where can I find the applicable args and kwargs for the function optimize() in Nonconvex.jl? Are these the same as in NLopt or other packages like JuMP?

Additionally, based on "Various optimization algorithms from NLopt.jl" in the Nonconvex.jl docs, I should be able to use NLopt's options for each algorithm. I tried to use verbosity to get internal status, but it did not work (the solve works without it). Also, how can I get solver details such as the time to solve?

using Nonconvex, NonconvexMMA, LinearAlgebra, Printf

n = 10;
A = 10*rand(Float64, (n, n));
P = A'*A;                       # random symmetric positive semidefinite matrix

f(x) = x'*P*x;                  # objective
f1(x) = 1 - x'*x;               # constraint f1(x) <= 0, i.e. x'x >= 1
lx = -500*ones(n); ux = 500*ones(n);
model = Model(f)
addvar!(model, lx, ux)
add_ineq_constraint!(model, x -> f1(x))

x0 = 1.5*ones(n);
alg = NLoptAlg(:LD_MMA)
# alg = MMA02()
options = NLoptOptions(ftol_rel = 1e-10, verbosity = 1)
# options = MMAOptions()
res = optimize(model, alg, x0, options = options, convcriteria = KKTCriteria())

V = eigvals(P);
@printf("Answer should be = %0.4f\n",V[1])
@printf("Answer is = %0.4f\n", res.minimum)
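On the timing question: solve time can be measured on the caller side with Julia's stdlib @timed macro, independent of the solver. A minimal stand-alone sketch (a dummy computation stands in for the optimize call, which is shown only in a comment):

```julia
# @timed returns a NamedTuple with the call's value, elapsed time, and allocations.
stats = @timed sum(abs2, randn(1000))   # stand-in for the optimize(...) call
stats.time    # elapsed seconds
stats.bytes   # allocated bytes

# In the example above this would be:
#   stats = @timed optimize(model, alg, x0, options = options)
#   res = stats.value
```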


See the docstring (type ? and then the name you want to look up):

help?> Nonconvex.optimize
      optimizer::AbstractOptimizer = MMA02(),
      convcriteria::ConvergenceCriteria = KKTCriteria(),
      plot_trace::Bool = false,
      callback::Function = plot_trace ? LazyPlottingCallback() : NoCallback(),

  Optimizes model using the algorithm optimizer, e.g. an instance of MMA87 or MMA02. x0 is the
  initial solution. The keyword arguments are:

    β€’  options: used to set the optimization options. It is an instance of MMAOptions for
       MMA87 and MMA02.

    β€’  convcriteria: an instance of ConvergenceCriteria that specifies the convergence
       criteria of the MMA algorithm.

    β€’  plot_trace: a Boolean that if true specifies the callback to be an instance of
       PlottingCallback and plots a live trace of the last 50 solutions.

    β€’  callback: a function that is called on solution in every iteration of the algorithm.
       This can be used to store information about the optimization process.

  The details of the MMA optimization algorithms can be found in the original 1987 MMA paper
  and the 2002 paper.


  Other methods listed in the same help entry:

    β€’  optimize without x0

    β€’  Generic optimize for VecModel

    β€’  Workspace constructor without x0
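The callback keyword described in the docstring can be used to store information from each iteration. A minimal sketch; exactly what object the algorithm passes to the callback is an assumption here, so check the Nonconvex docs or source before relying on its fields:

```julia
# Collect whatever the algorithm hands the callback each iteration:
history = Any[]
record! = sol -> push!(history, sol)

# In the MMA example this would be (not executed here):
#   res = optimize(model, MMA02(), x0, options = MMAOptions(), callback = record!)

# Stand-alone demonstration with dummy "solutions":
record!([1.0, 2.0])
record!([0.5, 1.5])
length(history)   # 2
```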

Are these the same as NLopt or other packages like JuMP?

Not really. JuMP, for example, does not pass options via keyword args.
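To illustrate the difference, a hedged sketch of the two styles. The JuMP lines are illustrative comments (they assume JuMP and NLopt are installed); only the NamedTuple demo at the end is live code:

```julia
# Nonconvex.jl: options travel as a value through a keyword argument:
#   res = optimize(model, NLoptAlg(:LD_MMA), x0, options = NLoptOptions(ftol_rel = 1e-10))
#
# JuMP: options are attached to the model before calling optimize!:
#   using JuMP, NLopt
#   m = JuMP.Model(NLopt.Optimizer)
#   set_optimizer_attribute(m, "ftol_rel", 1e-10)
#   optimize!(m)
#
# Because Nonconvex options are plain values, they can be built once and reused;
# the NamedTuple below is illustrative, not an actual NLoptOptions object:
opts = (ftol_rel = 1e-10, maxeval = 1000)
opts.ftol_rel
```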



In general, all the NLopt options should go into the options NamedTuple in optimize. Any option that is valid for the algorithm you are using in the NLopt C API should work. If it doesn't, please open an issue and I can look into it. I intend to document the exhaustive list of options per algorithm in the docs at some point, but digging up this information from the NLopt docs and source takes a bit of time.

Nonconvex.jl uses the Julia wrapper of NLopt (NLopt.jl) so if an option is not supported, it could be an issue in Nonconvex or the Julia wrapper. Either way, please open an issue to track it. I hope this helps.


Thank you