I have a nonlinear program for which Ipopt finds the optimal solution. After making some changes to the problem, the solver can no longer find the optimal solution and iterates until the maximum number of iterations is reached.
Is there any option in Ipopt that I can change or tune to solve the problem?
There is a whole bunch of Ipopt options that you could change (Ipopt: Ipopt Options) from JuMP (Models · JuMP). It is difficult to say which options would be useful without knowing the problem.
Some ideas:
You can always start with increasing the number of iterations - maybe Ipopt just needs more iterations to find a solution.
You can also look at more detailed output from Ipopt by changing the print_level option - maybe that would give some guidance. You could, for instance, look at what is happening to your objective - does it keep changing, or does it get stuck somewhere?
You can also try giving Ipopt a different initial guess (Variables · JuMP) - in nonlinear optimisation a good starting point can help a lot.
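As a minimal sketch, the three ideas above look something like this in JuMP (max_iter and print_level are standard Ipopt options; the variable here is just a placeholder):

```julia
using JuMP, Ipopt

model = Model(Ipopt.Optimizer)

# Allow more iterations than the default of 3000.
set_optimizer_attribute(model, "max_iter", 10_000)

# Ask for more detailed per-iteration output (0 = silent, 12 = most verbose).
set_optimizer_attribute(model, "print_level", 5)

@variable(model, x >= 0)

# Supply an initial guess instead of letting Ipopt pick one.
set_start_value(x, 1.0)
```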
At the same time - what do the changes do? Do you add more constraints? Are you sure that your problem has a solution after the changes are introduced?
Thanks for your attention
The objective seems to be “oscillating” …
The changes turn a linear constraint into a nonlinear one, and I also add a regularization term to the objective function to perform feature selection.
Does Ipopt have different methods or something like that among the options?
Does Ipopt have different methods or something like that among the options?
Not really. Ipopt assumes problems are smooth and twice-differentiable. If your problem violates those assumptions, things still generally work, but sometimes it won’t converge (try minimizing abs(x) starting at x=1, for example).
You can reformulate your problem into a smooth problem.
Replace \min_{x, y} f(x) + |y| with \min_{x, y, a} f(x) + a subject to a \ge y, a \ge -y.
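As a sketch, that epigraph reformulation could be written in JuMP like this (the quadratic stands in for your smooth f(x); it is only a placeholder):

```julia
using JuMP, Ipopt

model = Model(Ipopt.Optimizer)
@variable(model, x)
@variable(model, y)
@variable(model, a)

# a >= y and a >= -y together imply a >= |y|; since we minimize,
# a is pushed down to exactly |y| at the optimum.
@constraint(model, a >= y)
@constraint(model, a >= -y)

# (x - 1)^2 is a placeholder for the smooth part f(x) of the objective.
@objective(model, Min, (x - 1)^2 + a)

optimize!(model)
```

The reformulated problem is smooth everywhere, so Ipopt's assumptions hold even though the original objective contained a nondifferentiable |y| term.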
Keep Ipopt