### Problem

I’m having problems porting `fmincon()` statements that *appear* to require “generic nonlinear constraints” (using `nonlcon`). Here are the references I have used:

- Find minimum of constrained nonlinear multivariable function - MATLAB fmincon
- https://julianlsolvers.github.io/Optim.jl/stable/#examples/generated/ipnewton_basics/#generic-nonlinear-constraints

*Background*: I just recently figured out how to port `fmincon()` statements successfully (I think), but I can’t quite figure out how to port “*generic nonlinear constraints*” like these:

```
options = optimset('TolX',0.001, 'TolFun',1, 'MaxIter',1000 );
options = optimset(options,'LargeScale','off');
x = fmincon(@LCObj1a,x,[],[],[],[],[],[],@LCObj1b,options,param,0.97,dbg);
```

With the callback functions being:

```
function f = LCObj1a(x,param,max_radius,dbg)
    % The objective function for the initial optimization process used to put
    % the roots of the denominator of the LCBP NTF inside the unit circle.
    f = 1;  % No actual objective
return

function [C,Ceq] = LCObj1b(x,param,max_radius,dbg)
    % The constraints function for the initial optimization process used to put
    % the roots of the denominator of the LCBP NTF inside the unit circle.
    H = LCoptparam2tf(x,param);
    objective = 1;  % No actual objective
    rmax = max(abs(H.p{:}));
    C = rmax - max_radius;
    Ceq = [];
    if dbg
        fprintf(1,'x = [ ');
        fprintf(1,'%.4f ',x);
        fprintf(1,']\n');
        fprintf(1,'rmax = %f\n\n',rmax);
    end
return
```

### What I could find in `Optim.jl`

Under “Generic nonlinear constraints”, it appears I have to generate “generic” `con_jacobian!()` & `con_h!()` functions in addition to `con_c!()` (which apparently corresponds to my own `LCObj1b()`):

```
con_c!(c, x) = (c[1] = x[1]^2 + x[2]^2; c)
function con_jacobian!(J, x)
    J[1,1] = 2*x[1]
    J[1,2] = 2*x[2]
    J
end
function con_h!(h, x, λ)
    h[1,1] += λ[1]*2
    h[2,2] += λ[1]*2
end;
```
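As a sanity check on my understanding, the docs' `con_jacobian!()` can be verified against a forward finite difference. This snippet is my own addition; it just re-states the two functions from above so it runs on its own:

```julia
# Finite-difference sanity check for con_jacobian! (my own addition,
# not from the Optim.jl docs). Re-states the docs' two functions.
con_c!(c, x) = (c[1] = x[1]^2 + x[2]^2; c)
function con_jacobian!(J, x)
    J[1,1] = 2*x[1]
    J[1,2] = 2*x[2]
    J
end

x      = [0.3, -0.7]
J      = con_jacobian!(zeros(1, 2), x)
eps_fd = 1e-6
for i in 1:2
    xp = copy(x); xp[i] += eps_fd
    # forward difference of the constraint value in direction i
    fd = (con_c!([0.0], xp)[1] - con_c!([0.0], x)[1]) / eps_fd
    @assert isapprox(J[1, i], fd; atol=1e-4)
end
```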

Then, I have to specify the following parameters (which look magical to me due to my own ignorance w.r.t. optimization algorithms):

```
lx = Float64[]; ux = Float64[]
lc = [-Inf]; uc = [0.5^2]
dfc = TwiceDifferentiableConstraints(con_c!, con_jacobian!, con_h!,
                                     lx, ux, lc, uc)
res = optimize(df, dfc, x0, IPNewton())
```

where `uc` should be replaced by my own radius (`uc = [0.97^2]`).

Which is great!.. except I’m still missing `df`. AFAICT, I have to grab this snippet from a section above:

```
fun(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
function fun_grad!(g, x)
    g[1] = -2.0 * (1.0 - x[1]) - 400.0 * (x[2] - x[1]^2) * x[1]
    g[2] = 200.0 * (x[2] - x[1]^2)
end
function fun_hess!(h, x)
    h[1, 1] = 2.0 - 400.0 * x[2] + 1200.0 * x[1]^2
    h[1, 2] = -400.0 * x[1]
    h[2, 1] = -400.0 * x[1]
    h[2, 2] = 200.0
end;
```
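For reference, here is how I understand the docs' pieces fit together end to end. This is my own assembly; the starting point `x0 = [0.25, 0.25]` is my assumption, everything else is straight from the docs' snippets:

```julia
using Optim

# Rosenbrock objective with analytic gradient and Hessian (from the docs)
fun(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
function fun_grad!(g, x)
    g[1] = -2.0 * (1.0 - x[1]) - 400.0 * (x[2] - x[1]^2) * x[1]
    g[2] = 200.0 * (x[2] - x[1]^2)
end
function fun_hess!(h, x)
    h[1, 1] = 2.0 - 400.0 * x[2] + 1200.0 * x[1]^2
    h[1, 2] = -400.0 * x[1]
    h[2, 1] = -400.0 * x[1]
    h[2, 2] = 200.0
end

# Constraint x1^2 + x2^2 <= 0.5^2 (from the docs)
con_c!(c, x) = (c[1] = x[1]^2 + x[2]^2; c)
function con_jacobian!(J, x)
    J[1,1] = 2*x[1]
    J[1,2] = 2*x[2]
    J
end
function con_h!(h, x, λ)
    h[1,1] += λ[1]*2
    h[2,2] += λ[1]*2
end

x0  = [0.25, 0.25]                 # my assumption: a feasible starting point
df  = TwiceDifferentiable(fun, fun_grad!, fun_hess!, x0)
dfc = TwiceDifferentiableConstraints(con_c!, con_jacobian!, con_h!,
                                     Float64[], Float64[], [-Inf], [0.5^2])
res  = optimize(df, dfc, x0, IPNewton())
xmin = Optim.minimizer(res)        # should satisfy the radius-0.5 constraint
```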

Ok great: I have my own `fun(x)` (which is `LCObj1a()`), and I might be able to find the gradient (which is 0, isn’t it?).

### Where I stopped

But without further research, I don’t really know what a Hessian is. Plus, I don’t really understand how you can converge on a solution when the gradient is 0 😕 - so I’m not very confident I’m even going in the right direction.

Oh, yeah: and I’m not really sure whether `con_jacobian!()` & `con_h!()` are really generic functions or whether they need to be specialized for my particular application!
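To make my question concrete, here is my tentative sketch of how I *think* the constraint part might port. `poles_of_ntf()` is a made-up placeholder standing in for the real `LCoptparam2tf(x, param)` → `H.p` chain, so this is an assumption rather than working code for my app:

```julia
# Tentative port of LCObj1b (my own sketch). poles_of_ntf is a placeholder
# for the real MATLAB chain LCoptparam2tf(x,param) -> H.p; here it just
# returns fixed dummy poles so the snippet runs.
poles_of_ntf(x) = [0.5 + 0.3im, 0.5 - 0.3im]

function con_c!(c, x)
    rmax = maximum(abs.(poles_of_ntf(x)))   # largest pole magnitude
    c[1] = rmax                             # MATLAB used C = rmax - max_radius
    c
end

# Instead of C = rmax - max_radius <= 0, put max_radius into the bound:
lc = [-Inf]
uc = [0.97]    # max_radius
```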

### Help would be appreciated

Thanks in advance!