# Nonlinear constrained optimization without gradients

Hi everyone.
I’m trying to do constrained optimization like:
\min_{x} f(x) \quad \text{s.t.} \quad c(x) \le u_c

I would like to solve the problem using only the objective and constraint functions, without supplying derivatives. I tried Optim and NLopt, but the examples I found always require a gradient or a constraint Jacobian. For example:

Optim

```julia
using Optim

# Objective: 2-D Rosenbrock
fun(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2

# Constraint c(x) = x1^2 + x2^2 (stay inside a disc of radius 0.5)
con_c!(c, x) = (c[1] = x[1]^2 + x[2]^2; c)

function con_jacobian!(J, x)
    J[1, 1] = 2 * x[1]
    J[1, 2] = 2 * x[2]
    J
end

function con_h!(h, x, λ)
    h[1, 1] += λ[1] * 2
    h[2, 2] += λ[1] * 2
end

x0 = [0.0, 0.0]
df = TwiceDifferentiable(fun, x0)
lx = Float64[]; ux = Float64[]      # no box bounds on x
lc = [-Inf]; uc = [0.5^2]           # c(x) <= 0.25
dfc = TwiceDifferentiableConstraints(con_c!, con_jacobian!, con_h!,
                                     lx, ux, lc, uc)
res = optimize(df, dfc, x0, IPNewton())
```

The TwiceDifferentiableConstraints() constructor needs the constraint Jacobian and Hessian.
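(For reference, a rough sketch of how those derivatives could be generated automatically instead of written by hand, assuming ForwardDiff.jl is available; the `con_c` helper below is just for illustration:)

```julia
using Optim, ForwardDiff

fun(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2

# Constraint as a plain (allocating) function so ForwardDiff can differentiate it
con_c(x) = [x[1]^2 + x[2]^2]
con_c!(c, x) = (c .= con_c(x); c)

# Jacobian and Hessian contributions filled in by automatic differentiation
con_jacobian!(J, x) = ForwardDiff.jacobian!(J, con_c, x)
con_h!(h, x, λ) = (h .+= λ[1] .* ForwardDiff.hessian(z -> con_c(z)[1], x); h)

x0 = [0.0, 0.0]
df  = TwiceDifferentiable(fun, x0; autodiff = :forward)
dfc = TwiceDifferentiableConstraints(con_c!, con_jacobian!, con_h!,
                                     Float64[], Float64[], [-Inf], [0.5^2])
res = optimize(df, dfc, x0, IPNewton())
```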

NLopt

```julia
using NLopt

function rosenbrockf(x::Vector, grad::Vector)
    if length(grad) > 0
        grad[1] = -2.0 * (1.0 - x[1]) - 400.0 * (x[2] - x[1]^2) * x[1]
        grad[2] = 200.0 * (x[2] - x[1]^2)
    end
    return (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
end

function r_constraint(x::Vector, grad::Vector, radius)
    if length(grad) > 0
        grad[1] = 2 * x[1]
        grad[2] = 2 * x[2]
    end
    return x[1]^2 + x[2]^2 - radius
end

opt = Opt(:LD_MMA, 2)
lower_bounds!(opt, [-5.0, -5.0])
min_objective!(opt, (x, g) -> rosenbrockf(x, g))
inequality_constraint!(opt, (x, g) -> r_constraint(x, g, 0.5^2))
ftol_rel!(opt, 1e-9)
(minfunc, minx, ret) = optimize(opt, [-1.0, 0.0])
```

The LD_MMA algorithm needs the gradient. I tried other solvers, but I get: `(0.0, [-1.0, 0.0], :FORCED_STOP)`

Do you know a way to solve this problem without using gradients?

For NLopt solvers that don’t use gradients, the objective function simply ignores the second argument. Cf the NLopt docs:

> The return value should be the value of the function at the point `x`, where `x` is a (`Float64`) array of length `n` of the optimization parameters (the same as the dimension passed to the `Opt` constructor).
>
> In addition, if the argument `grad` is not empty [i.e. `length(grad) > 0`], then `grad` is a (`Float64`) array of length `n` which should (upon return) be set to the gradient of the function with respect to the optimization parameters at `x`. That is, `grad[i]` should upon return contain the partial derivative ∂f/∂xᵢ, for 1 ≤ i ≤ n, if `grad` is non-empty. Not all of the optimization algorithms (below) use the gradient information: for algorithms listed as “derivative-free,” the `grad` argument will always be empty and need never be computed. (For algorithms that do use gradient information, however, `grad` may still be empty for some calls.)
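In practice that means you can write the objective without touching `grad` at all and pick an `LN_*` (derivative-free) algorithm. A minimal sketch for the unconstrained objective (Nelder-Mead here purely as an illustration; it does not handle the nonlinear constraint, for which see the COBYLA example further down):

```julia
using NLopt

# With a derivative-free algorithm `grad` is always empty,
# so the objective can simply ignore it.
rosenbrock_nograd(x, grad) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2

opt = Opt(:LN_NELDERMEAD, 2)    # LN_* = local, no derivatives
min_objective!(opt, rosenbrock_nograd)
ftol_rel!(opt, 1e-9)
(minfunc, minx, ret) = optimize(opt, [-1.0, 0.0])
```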


Check the second table in the README of GitHub - JuliaNonconvex/Nonconvex.jl: Toolbox for non-convex constrained optimization.
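If you go that route, a rough sketch of gradient-free usage based on my reading of the Nonconvex.jl docs (the `Model`/`addvar!`/`add_ineq_constraint!` calls and the `NLoptAlg`/`NLoptOptions` names should be verified against the README):

```julia
using Nonconvex
Nonconvex.@load NLopt

# Objective and constraint as plain functions of x; no derivatives supplied.
f(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
c(x) = x[1]^2 + x[2]^2 - 0.5^2          # feasible when c(x) <= 0

model = Model(f)
addvar!(model, [-5.0, -5.0], [5.0, 5.0])  # box bounds on x
add_ineq_constraint!(model, c)

result = optimize(model, NLoptAlg(:LN_COBYLA), [-1.0, 0.0],
                  options = NLoptOptions())
result.minimum, result.minimizer
```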

Right. For example, you can use the COBYLA algorithm in NLopt for nonlinear constraints without derivatives.
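Reusing the functions from the question, a sketch of the derivative-free constrained solve (the `0.5^2` radius matches the bound used above):

```julia
using NLopt

# Derivative-free objective and constraint: the `grad` argument is ignored.
rosenbrockf(x, grad) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
r_constraint(x, grad, radius) = x[1]^2 + x[2]^2 - radius   # feasible when <= 0

opt = Opt(:LN_COBYLA, 2)                  # COBYLA handles nonlinear constraints
lower_bounds!(opt, [-5.0, -5.0])
min_objective!(opt, rosenbrockf)
inequality_constraint!(opt, (x, g) -> r_constraint(x, g, 0.5^2), 1e-8)
ftol_rel!(opt, 1e-9)
(minfunc, minx, ret) = optimize(opt, [-1.0, 0.0])
```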