NLopt: FORCED_STOP after one iteration (gradient-based optimization)

Hello,
I am using NLopt to solve an optimization problem with nonlinear constraints, and I supply gradients for both the objective function and the constraint function. Both return correct gradient values; I verified them against earlier Matlab results. However, the optimization stops after one iteration, returning the initial point and the corresponding objective value with the message FORCED_STOP. The examples in the documentation work fine. Here is the code I am using.
After running the code I get:
got -0.6640272512608545 after 1 iterations (returned FORCED_STOP)

using NLopt

begin
    # M, k, xf, psi, a, R2 and the lower bounds xl, yl are all previously
    # defined; xf is a 1×2 vector.
    al = minimum(a) * ones(M, 1)
    xu = R2 * ones(M, 1)
    yu = R2 * ones(M, 1)
    au = maximum(a) * ones(M, 1)
    lb = reshape([xl yl al]', M * 3, 1)
    ub = reshape([xu yu au]', M * 3, 1)

    # Objective: returns a Float64 value and a 1×3M vector of gradients
    # (one entry per variable being optimized)
    function f(x, grad)
        Pres = FP_Multi_Freq_Multi_IncAngle_Radius_Position(x, M, k, xf, psi)
        if length(grad) > 0
            grad[:] = vec(Pres[2])
        end
        return Pres[1]
    end

    function myConst(x, grad)
        C = nlcon_grad_Radius_Position(x, R2, M, delta)
        if length(grad) > 0
            grad[:] = C[2]
        end
        return C[1]
    end

    opt = Opt(:LD_SLSQP, M * 3)

    opt.lower_bounds = lb[:, 1]
    opt.upper_bounds = ub[:, 1]
    opt.xtol_abs = 1e-11

    inequality_constraint!(opt, myConst)
    opt.min_objective = f
    opt.maxtime = 10

    (minf, minx, ret) = optimize(opt, vec(xx))  # xx is a previously defined initial vector

    numevals = opt.numevals  # the number of function evaluations
    println("got $minf at $minx after $numevals iterations (returned $ret)")
end

Thank you in advance.

I can’t run your code because I don’t have the data or the functions. But typically a FORCED_STOP means there is an unrelated error somewhere in your code. What happens if you call

g = zeros(length(xx))
myConst(vec(xx), g)
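
If that call throws, the exception is the real cause that NLopt is reporting as FORCED_STOP. A minimal sketch to surface it, using the names from your post:

try
    myConst(vec(xx), g)
catch err
    @show err  # the underlying error that otherwise gets swallowed as FORCED_STOP
end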

See also: Rethrow errors instead of returning FORCED_STOP by odow · Pull Request #194 · JuliaOpt/NLopt.jl · GitHub

This is what I get when I call it:
1×1275 adjoint(::Vector{Float64}) with eltype Float64:
-0.00763178 -0.00462144 -0.00469838 … -0.000935341 -0.00198357 -0.00633272
These match the constraint values that I get in Matlab.

Is xx a feasible point? Some optimization algorithms (those with barrier functions) do not like an initial point that is not feasible.
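
You can check this directly; a minimal sketch using the functions from your post (NLopt uses the c(x) ≤ 0 convention for inequality constraints):

cvals = nlcon_grad_Radius_Position(vec(xx), R2, M, delta)[1]
println(all(vec(cvals) .<= 0))  # true if xx satisfies every inequality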

xx is a feasible solution that satisfies all the constraints.

So your constraint returns a vector? If so, you need to use a different syntax: GitHub - JuliaOpt/NLopt.jl: Package to call the NLopt nonlinear-optimization library from the Julia language

The function you used assumed the constraint returned a scalar.
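
For reference, the two registration forms differ in both the callback signature and the tolerance argument (c, tol, and tol_vector below are placeholders):

# Scalar constraint: c(x, grad) returns a single number
inequality_constraint!(opt, c, tol)

# Vector constraint: c(result, x, grad) writes the m constraint values
# into `result` in place, and tol is a vector of length m
inequality_constraint!(opt, c, tol_vector)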

function myConst(result, x, grad)
    C = nlcon_grad_Radius_Position(x, R2, M, delta)
    println(length(grad))
    println(1)
    if length(grad) > 0
        println(2)
        grad[:, :] = C[2]  # assign in place; `grad = C[2]` would only rebind the local
        println(size(grad))
        println(grad)
        println(3)
    end
    result[:] = vec(C[1])  # likewise, fill `result` in place
end

I used println() to track the flow of the code; it showed that the constraint function was never called, and I got exactly the same result as before.

It worked. I needed to pass the value of tol in
inequality_constraint!(opt::Opt, c, tol::AbstractVector), like this:

inequality_constraint!(opt, myConst, vec(1e-8 * ones(sum(1:M), 1)))

The key was @odow's suggestion to use the constraint-function syntax for vector output.
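
If I read the API right, the length of that tolerance vector is also what tells NLopt.jl how many constraint values the callback writes into result, so an equivalent and slightly more direct call would be:

# length(tol) must equal the number of constraint values
inequality_constraint!(opt, myConst, fill(1e-8, sum(1:M)))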

Thank you very much.
