Performance of log in nonlinear optimization

I’m having some trouble with the function “log” in an optimization problem.

For instance:
I create a variable, and then the following constraint:

@NLconstraint(ModelName, log(x) <= 100). I use Ipopt to solve it (a feasible problem), and it gives me x = 0 (which I think is infeasible). But if I use this constraint instead: @NLconstraint(ModelName, x <= exp(100)), it returns a feasible solution. This is a simpler version of my issue…

The thing is, I really need to use log in some constraints and in the objective function of the optimization problem. How can I solve this?
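Roughly, this is what I mean (a minimal sketch, using the same JuMP/Ipopt syntax as the code later in this thread; ModelName, the dummy objective, and the solve call are just placeholders to make it self-contained):

using JuMP, Ipopt

ModelName = Model(solver = IpoptSolver())
@variable(ModelName, x)

# Version 1: with this constraint Ipopt returns x = 0, where log is not defined
@NLconstraint(ModelName, log(x) <= 100)

# Version 2: the mathematically equivalent constraint returns a feasible solution
# @NLconstraint(ModelName, x <= exp(100))

@NLobjective(ModelName, Min, (x - 1)^2)
solve(ModelName)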

Do you also need to constrain x to be greater than 0? Otherwise the solver might try, e.g., x = -0.1, which is fine for exp() but not so good for log().

Added: if negative x is allowable, then maybe log(abs(x)) or something along those lines.
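A quick sketch of both options (the bound 1e-6 and the start value are just examples):

using JuMP, Ipopt

m = Model(solver = IpoptSolver())

# Option 1: keep x strictly positive so log(x) is always defined
@variable(m, x >= 1e-6, start = 1)
@NLconstraint(m, log(x) <= 100)

# Option 2: if negative x must be allowed, guard the argument instead
# @variable(m, x, start = 1)
# @NLconstraint(m, log(abs(x)) <= 100)

solve(m)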

Can you provide a minimum working example?


Yes, of course.

using JuMP, Ipopt

m = Model(solver = IpoptSolver())

@variable(m, x[i = 1:n])

for i in 1:n
    @NLconstraint(m, x[i] <= 1)
    @NLconstraint(m, x[i] >= exp(-gamma[i]))
end
@NLconstraint(m, -sum(log(x[i]) / gamma[i] for i in 1:n) >= -mm)

The last constraint exhibits the infeasibility I described above. I'm using the change of variables x[i] = exp(-y[i] * gamma[i]), because with it the problem becomes convex.

The last constraint is probably an issue when x[i] = 0. Try adding bounds on x rather than imposing them via @NLconstraint. You probably want to set the start keyword as well, since it defaults to 0.0.

model = Model(solver = IpoptSolver())
@variable(model, 0.01 <= x[i = 1:n] <= 1, start = 1)
@NLconstraint(model, sum(log(x[i]) for i in 1:n) >= -1)
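For example, with some made-up data so the snippet runs end to end (n = 3 and the solve call are not part of the original problem):

using JuMP, Ipopt

n = 3  # hypothetical size, just to make the example runnable

model = Model(solver = IpoptSolver())
@variable(model, 0.01 <= x[i = 1:n] <= 1, start = 1)
@NLconstraint(model, sum(log(x[i]) for i in 1:n) >= -1)
solve(model)
getvalue(x)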

P.S. For future reference, take a look at the post I linked to. It demonstrates how to format code nicely. I can't copy and paste your example because gamma, n, and mm aren't defined.


Ipopt is not a feasible-point method. The first thing it does is add slack variables to turn your constraint into an equality: log(x) + s = 100 with s ≥ 0. Ipopt only ensures that s > 0 during the iterations; the equality constraint is only guaranteed to be satisfied in the limit. The problem is that your constraint function isn't defined everywhere, and Ipopt is likely to venture outside of its domain. If you have access to KNITRO, it has a "feasible" option to ensure that nonlinear inequality constraints remain strictly satisfied. If you don't have access to KNITRO, your best bet is to rework your model.
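If you do rework the model, one common pattern (a sketch, not something from the posts above) is to move the domain restriction into a variable bound: Ipopt keeps its iterates strictly inside the variable bounds, so log is never evaluated outside its domain. The standard Ipopt option bound_relax_factor, which by default relaxes bounds very slightly, can be set to zero to keep the bound exact:

using JuMP, Ipopt

# Ask Ipopt not to relax variable bounds at all
m = Model(solver = IpoptSolver(bound_relax_factor = 0.0))

# The domain restriction lives in the bound; interior-point iterates stay inside it
@variable(m, x >= 1e-8, start = 1)
@NLconstraint(m, log(x) <= 100)
solve(m)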


Thanks a lot, odow; setting start = 1 solved the problem. I didn't know that much about the Ipopt solver.

And I will take a look at that post so that I can format code better in the future.

Status: Problem solved