# NLopt `:FORCED_STOP` for vector-valued constraint

The following code tries to solve Ax = b in the least-squares sense, subject to x >= 0, using the NLopt package.

``````using NLopt

A = [0.0372 0.2869; 0.6861 0.7071; 0.6233 0.6245; 0.6344 0.6170]
b = [0.8587; 0.1781; 0.0747; 0.8405]

function myfunc(x::Vector, grad::Vector)
    return sum((A*x - b).^2)
end

function myconstraint(result::Vector, x::Vector, grad::Matrix)
    result .= -x
end

opt = Opt(:LN_COBYLA, 2)
opt.min_objective = myfunc
inequality_constraint!(opt, (r, x, g) -> myconstraint(r, x, g), [1e-8, 1e-8])
minf, minx, ret = optimize!(opt, [0.5, 0.5])
``````

The function `myfunc` computes ||Ax-b||_2^2, and `myconstraint` imposes the vector-valued constraint [-x_1, -x_2] <= 0 by modifying `result` in place. When this runs, I get a `:FORCED_STOP` error:

``````julia> minf, minx, ret = optimize!(opt, [0.5, 0.5])
(7.59933607e-315, [0.5, 0.5], :FORCED_STOP)
``````

Why might this be? If I were to instead create scalar-valued constraints for x_1 and x_2 individually:

``````function myconstraint1(x::Vector, grad::Vector)
    return -x[1]
end

function myconstraint2(x::Vector, grad::Vector)
    return -x[2]
end

opt = Opt(:LN_COBYLA, 2)
opt.min_objective = myfunc
inequality_constraint!(opt, (x, g) -> myconstraint1(x, g), 1e-8)
inequality_constraint!(opt, (x, g) -> myconstraint2(x, g), 1e-8)
minf, minx, ret = optimize!(opt, [0.5, 0.5])
``````

I get the correct result:

``````julia> minf, minx, ret = optimize!(opt, [0.5, 0.5])
(0.831455951263312, [1.4907648850642344e-17, 0.6929343970280805], :ROUNDOFF_LIMITED)
``````

But I would like to implement more complicated constraints later on (not just bound constraints, and for many variables using matrix-vector products), so adding each constraint scalar by scalar is not feasible.
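For concreteness, here is a sketch of the kind of constraint I have in mind eventually: an affine constraint C*x - d <= 0 written in the same in-place vector form. The names `C`, `d`, and `matconstraint` are purely illustrative, not part of the code above.

``````# Illustrative affine constraint C*x - d <= 0 in the in-place vector form
# (the `result` vector is filled; `grad` is ignored by derivative-free
# algorithms such as COBYLA).
C = [1.0 2.0; 3.0 4.0]   # hypothetical constraint matrix
d = [1.0, 1.0]           # hypothetical right-hand side
function matconstraint(result::Vector, x::Vector, grad::Matrix)
    result .= C * x .- d
end
``````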

What is wrong with the first code that prevents me from obtaining any result from `optimize!`? Surely the two formulations should be equivalent?

One other thing I notice: after putting `println` calls inside the functions, the constraint does not seem to be evaluated at all, which is even more confusing.
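In case the callback is throwing and the exception is being converted into `:FORCED_STOP` before any output appears (a guess on my part, not something I have confirmed), a small wrapper like the sketch below could surface the error. `debug_wrap` is just a helper name I made up for illustration.

``````# Hypothetical debugging helper: wrap a callback so that any exception it
# throws is printed before being rethrown, rather than disappearing into
# an opaque :FORCED_STOP return code.
function debug_wrap(f)
    return (args...) -> try
        f(args...)
    catch err
        println("callback threw: ", err)
        rethrow()
    end
end

# Usage sketch:
# inequality_constraint!(opt, debug_wrap((r, x, g) -> myconstraint(r, x, g)), [1e-8, 1e-8])
``````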