How to pass constraints in NLsolve?

I am solving a 2-variable system of first-order conditions for an optimization problem with NLsolve. I know that the optimal values of the optimization problem satisfy some conditions; for example, they should satisfy T_1 = H_1^{-1}(K_1, x_1) < T_2 \leq H_2^{-1}(K_2/\delta, x_2). How can I supply this information about the inequalities to nlsolve? I want to enforce that T_1 < T_2 and, ideally, also speed up the computation of T_2 by providing bounds on it.

To provide a starting point, consider:

function foc!(res, t; x=x, K1=K1, K2=K2, θ=θ, δ=δ)
    x1 = x[1]
    x2 = x[2]
    t1 = t[1]
    t2 = t[2]
   
    res[1] = K1 - Hi(1, t1, x1; θ=θ) # foc wrt t1
    res[2] = Hi(1, t2, x1; θ=θ)*(1 - δ)*termII(t, x2; θ=θ, K2=K2) +  # foc wrt t2
             termI(t, x1; θ=θ, K1=K1)*(K2 - Hi(2, t2, x2; θ=θ)*δ)
end

I would then call nlsolve(foc!, t0), so I suppose I'd have to include those constraints in the definition of foc! somehow, but how?
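
For reference, the unconstrained version I have in mind is roughly the following (Hi, termI, termII and the parameters x, K1, K2, θ, δ are defined elsewhere in my code; the starting point t0 is just an illustration):

using NLsolve

t0  = [0.5, 1.0]          # illustrative starting point
sol = nlsolve(foc!, t0)   # unconstrained: nothing prevents t[1] >= t[2]
t1, t2 = sol.zero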

I’m not an expert on this!
As far as I know, NLsolve doesn’t support nonlinear constraints. (As a hack, you could define your own Jacobian, and maybe the solvers would then never step into the infeasible region. But I think that comes with the risk of breaking the methods, since they are not meant to be used that way.)
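
Another possibility, sketched below under the assumption that your foc! above works as-is, is a change of variables that makes every candidate point feasible by construction, e.g. solving in (t1, s) with t2 = t1 + exp(s):

using NLsolve

# Change-of-variables sketch: solve in u = (t1, s) with t2 = t1 + exp(s),
# so t1 < t2 holds for every point the solver evaluates. Reuses foc! above.
function foc_reparam!(res, u)
    t1, s = u
    foc!(res, [t1, t1 + exp(s)])
end

sol = nlsolve(foc_reparam!, [0.5, 0.0])   # starting values are illustrative
t1  = sol.zero[1]
t2  = t1 + exp(sol.zero[2])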

Have you considered trying another package?

(I’m not sure which one is best for you, maybe JuMP? Nonlinear Modeling · JuMP )
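
Very roughly, and untested against your actual functions, a JuMP version might look like the sketch below; the wrappers hi1, hi2, tI, tII around your Hi, termI, termII are my own placeholders, and Ipopt is just one possible NLP solver:

using JuMP, Ipopt

model = Model(Ipopt.Optimizer)
@variable(model, t1, start = 0.5)   # start values are illustrative
@variable(model, t2, start = 1.0)
@constraint(model, t1 <= t2)        # the inequality, imposed directly

# Register placeholder wrappers so they can appear in @NLconstraint.
register(model, :hi1, 1, t -> Hi(1, t, x[1]; θ=θ); autodiff = true)
register(model, :hi2, 1, t -> Hi(2, t, x[2]; θ=θ); autodiff = true)
register(model, :tI,  2, (a, b) -> termI([a, b], x[1]; θ=θ, K1=K1); autodiff = true)
register(model, :tII, 2, (a, b) -> termII([a, b], x[2]; θ=θ, K2=K2); autodiff = true)

@NLconstraint(model, K1 - hi1(t1) == 0)                          # foc wrt t1
@NLconstraint(model, hi1(t2)*(1 - δ)*tII(t1, t2) +
                     tI(t1, t2)*(K2 - hi2(t2)*δ) == 0)           # foc wrt t2

optimize!(model)   # no objective: a pure feasibility problem
value(t1), value(t2)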


I am afraid that solving it through JuMP may be slower; remember that I will have to solve the same problem many times. I guess I can try it as a last resort.

How about NLopt.jl (GitHub - JuliaOpt/NLopt.jl: Package to call the NLopt nonlinear-optimization library from the Julia language) or Optim.jl’s interior-point Newton method (https://julianlsolvers.github.io/Optim.jl/stable/#examples/generated/ipnewton_basics/#_top)?
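
With NLopt, one (untested) route is to recast the square system as a least-squares problem and impose t1 <= t2 as an inequality constraint; foc! is the function from the first post, and the algorithm, tolerances, and starting point below are only illustrative:

using NLopt

# Minimize the sum of squared residuals of the two FOCs; a minimum near zero
# is a solution of the original system.
function ssq(t, grad)            # NLopt objective signature is (x, grad)
    res = zeros(2)
    foc!(res, t)
    return res[1]^2 + res[2]^2
end

opt = Opt(:LN_COBYLA, 2)                                   # derivative-free, allows nonlinear constraints
opt.min_objective = ssq
inequality_constraint!(opt, (t, g) -> t[1] - t[2], 1e-8)   # enforce t1 - t2 <= 0
opt.xtol_rel = 1e-10
minf, mint, ret = NLopt.optimize(opt, [0.5, 1.0])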


Yes, I’m trying NLopt now.

Actually, the solution for t1 is independent of the solution for t2, so I can use the (faster?) univariate methods in Roots.jl. Then I can find t2 with the same univariate approach.
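
For completeness, a sketch of that sequential approach (the initial guess and the upper bound t_upper are placeholders; a natural choice for t_upper would be the known bound H_2^{-1}(K_2/δ, x_2)):

using Roots

# The first FOC involves only t1, so solve it on its own.
foc1(t1) = K1 - Hi(1, t1, x[1]; θ=θ)
t1 = find_zero(foc1, 0.5)                        # 0.5 is an illustrative guess

# With t1 fixed, the second FOC is univariate in t2; bracketing it on
# (t1, t_upper) enforces t1 < t2 and the upper bound at the same time.
function foc2(t2)
    t = [t1, t2]
    Hi(1, t2, x[1]; θ=θ)*(1 - δ)*termII(t, x[2]; θ=θ, K2=K2) +
        termI(t, x[1]; θ=θ, K1=K1)*(K2 - Hi(2, t2, x[2]; θ=θ)*δ)
end
t2 = find_zero(foc2, (t1, t_upper), Bisection())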