NLopt not optimising

Hi
I am rather new to Julia and I am experimenting with NLopt. I ran the tests on GitHub and they worked fine, but then I tried my own objective and constraints:
```julia
function ps(x, grad)
    return x[1]
end

function ps_con(x, grad, w)
    f = zeros(2)
    f[1] = x[2]^2 - 1 + x[3]
    f[2] = -10x[2]^2 + 0.1x[3]
    z = -f + w * x[1]
    return z
end
```

I then followed the same procedure as in the test examples, but with a derivative-free optimiser:
```julia
opt = Opt(:LN_COBYLA, 3)
opt.lower_bounds = [0, -Inf, -Inf]
opt.upper_bounds = [1, Inf, Inf]
opt.xtol_rel = 1e-4
opt.min_objective = ps
opt.inequality_constraint = (x, g) -> ps_con(x, g, [1, 1])
(minf, minx, ret) = optimize(opt, [1, 1, 1])
```

No error occurs, but the optimiser does not do anything: it exits with either `FORCED_STOP` or `XTOL_REACHED` at the first iteration.

Note that calling the objective and constraint functions individually with random inputs does not produce any error.

What am I doing wrong?
Mx

Try to call the function and constraint with the values you supplied to see if there is an error in them.
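
For example, something like this at your start point (a quick sanity check, assuming the definitions above):

```julia
x0 = [1.0, 1.0, 1.0]
g  = zeros(3)            # dummy gradient; COBYLA never asks for it
ps(x0, g)                # expect 1.0
ps_con(x0, g, [1, 1])    # expect a 2-element vector, here [0.0, 10.9]
```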

See the vector-valued constraints section of the NLopt.jl docs. These (in)equality constraints expect the result vector to be passed as the first argument and filled in place.
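
Per the NLopt.jl documentation, a vector-valued constraint fills its `result` argument in place, and `grad` (when non-empty) is n×m with `grad[j, i] = ∂cᵢ/∂xⱼ`. A minimal sketch with two toy constraints (the names and constraints here are illustrative, not your problem):

```julia
using NLopt

# Documented form: the result vector comes first and is filled in place.
function toy_con(result::Vector, x::Vector, grad::Matrix)
    if length(grad) > 0          # grad is n×m: grad[j, i] = ∂c_i/∂x_j
        grad[1, 1] = 2x[1]       # ∂c₁/∂x₁
        grad[2, 1] = 0.0         # ∂c₁/∂x₂
        grad[1, 2] = -1.0        # ∂c₂/∂x₁
        grad[2, 2] = 1.0         # ∂c₂/∂x₂
    end
    result[1] = x[1]^2 - 1       # c₁(x) ≤ 0
    result[2] = x[2] - x[1]      # c₂(x) ≤ 0
    return nothing
end

opt = Opt(:LD_MMA, 2)
inequality_constraint!(opt, toy_con, [1e-8, 1e-8])  # one tolerance per constraint
```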

Already done that; the functions output the expected values with no error.

OK, I can try; this is the part I was not sure about, in fact.
I thought the syntax I was using was equivalent to declaring a vector constraint, but I will try to follow the example you suggested.

I tested these new functions:
```julia
function ps(x::Vector, grad::Vector)
    if length(grad) > 0
        grad[1] = 1
        grad[2] = 0
        grad[3] = 0
    end
    return x[1]
end

function ps_con(z::Vector, x::Vector, grad::Matrix, w::Vector)
    if length(grad) > 0
        grad[1,1] = w[1]
        grad[2,2] = 20x[2]
        grad[1,2] = -2x[2]
        grad[1,3] = -1
        grad[2,1] = w[2]
        grad[2,3] = -0.1
    end
    z[1] = -(x[2]^2 - 1 + x[3]) + w[1]*x[1]
    z[2] = -(-10x[2]^2 + 0.1x[3]) + w[2]*x[1]
end
```
and inserted the lines
```julia
inequality_constraint!(opt, (z,x,g) -> ps_con(z,x,g,[1,1]), [1e-8,1e-8]::AbstractVector)
(minf, minx, ret) = optimize(opt, [1, 1, 1])
```
but now I get:
```julia
(1.0, [1.0, 1.0, 1.0], :FORCED_STOP)
```

Please quote your code using triple backticks; it's quite hard to read your posts (see Please read: make it easier to help you).

Can you read it better now?

Yes, much better, thanks!

NLopt is pretty aggressive about swallowing errors, in my experience. The fact that you get `:FORCED_STOP` and just your initial values back leads me to believe there is an error happening somewhere. I haven't looked at your code in detail and am not familiar with the constraint interface in NLopt, but in the interest of a quick response: you are almost certainly hitting an error that NLopt is trying to handle gracefully without crashing everything.
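
One way to surface the swallowed error is to wrap your callbacks so any exception gets printed before NLopt force-stops. A debugging sketch (`with_reporting` is just an illustrative name):

```julia
# Print any exception raised inside an NLopt callback before rethrowing,
# since NLopt may otherwise just hand back :FORCED_STOP.
with_reporting(f) = (args...) -> try
    f(args...)
catch err
    @error "callback threw" exception = (err, catch_backtrace())
    rethrow()
end

opt.min_objective = with_reporting(ps)
inequality_constraint!(opt, with_reporting((z, x, g) -> ps_con(z, x, g, [1, 1])), [1e-8, 1e-8])
```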

Yeah, I suspected that, but I tested each function individually with no error, and given that I started programming in Julia on Saturday, I am not that skilled yet.
Any advice on where to look for the bug would be precious.

It seems grad should be 3×2, as the documentation says, instead of 2×3, if it is ever requested by the optimizer.

```julia
julia> function ps(x::Vector,grad::Vector)
       if length(grad)>0
       grad[1]=1
       grad[2]=0
       grad[3]=0
       end
       return x[1]
       end
ps (generic function with 1 method)

julia> using NLopt

julia> function ps_con(z::Vector,x::Vector,grad::Matrix,w::Vector)
       if length(grad)>0
       grad[1,1]=w[1]
       grad[2,2]=20x[2]
       grad[2,1]=-2x[2]
       grad[3,1]=-1
       grad[1,2]=w[2]
       grad[3,2]=-0.1
       end
       z[1]=-(x[2]^2-1+x[3])+w[1]*x[1]
       z[2]=-(-10x[2]^2+0.1*x[3])+w[2]*x[1]
       end
ps_con (generic function with 1 method)

julia> opt=Opt(:LD_MMA,3)
Opt(LD_MMA, 3)

julia> opt.min_objective=ps
ps (generic function with 1 method)

julia> opt.lower_bounds=[0,-Inf,-Inf];

julia> opt.upper_bounds=[1,Inf,Inf];

julia> inequality_constraint!(opt, (z,x,g) -> ps_con(z,x,g,[1,1]), [1e-8,1e-8])

julia> (minf,minx,ret) = optimize(opt, Float64[1,1, 1])
^C(0.0, [0.0, -0.01634360284914689, 2.11468681873786], :FORCED_STOP)

julia> opt.maxeval = 1000
1000

julia> (minf,minx,ret) = optimize(opt, Float64[1,1, 1])
(0.0, [0.0, -0.01634360284914689, 2.11468681873786], :MAXEVAL_REACHED)
```

It seems that without a stopping criterion it never stops; that's why the first try had to be interrupted.
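
For example, any of these stopping criteria would do (a sketch; the values are arbitrary):

```julia
opt.xtol_rel = 1e-4    # stop when x stops changing (relative tolerance)
opt.maxeval  = 1000    # or cap the number of function evaluations
opt.maxtime  = 10.0    # or cap the wall-clock time in seconds
```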

Great, thank you. I included a tolerance on x at convergence and now it returns a solution. So, just to be clear, you did the following:

1. transposed the constraint gradient;
2. used `inequality_constraint!(opt, (z,x,g) -> ps_con(z,x,g,[1,1]), [1e-8,1e-8])`.

Question: if the solver does not need gradients, does it matter how I define the gradient? I guess not.
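
For reference, with a derivative-free algorithm NLopt passes an empty `grad`, so with the usual `length(grad) > 0` guard the gradient code is simply never executed, e.g. (a sketch reusing `ps` from above):

```julia
opt = Opt(:LN_COBYLA, 3)   # derivative-free: grad arrives empty
opt.min_objective = ps     # the length(grad) > 0 branch in ps never runs
opt.xtol_rel = 1e-4
(minf, minx, ret) = optimize(opt, [1.0, 1.0, 1.0])
```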