Nonlinear solver other than IPOPT

I am solving a feasibility problem with a lot of constraints. I tried Ipopt. It is great, but it couldn’t solve my whole problem with all constraints: it says that I have too few degrees of freedom. My constraints are not redundant, though.

What other nonlinear solver could I try? I wanted to try KNITRO, but it is not free. Any suggestions?


Can you provide a reproducible example?

Do you have constant constraints like `2 >= 1`?

@odow There are absolutely no trivial or redundant constraints. Here is the output:

```
This is Ipopt version 3.14.4, running with linear solver MUMPS 5.4.1.

Number of nonzeros in equality constraint Jacobian...:     1951
Number of nonzeros in inequality constraint Jacobian.:       12
Number of nonzeros in Lagrangian Hessian.............:    11942

Exception of type: TOO_FEW_DOF in file "Interfaces/IpIpoptApplication.cpp" at line 655:
Exception message: status != TOO_FEW_DEGREES_OF_FREEDOM evaluated false: Too few degrees of freedom (rethrown)!

EXIT: Problem has too few degrees of freedom.
```

Do you have lots of fixed variables?
Lots of equality constraints?

Try:

```
set_optimizer_attribute(model, "fixed_variable_treatment", "relax_bounds")
```

@odow Thanks. Yes, a lot of equality constraints. I tried the relaxation; it helped overcome the error message, but now the results are inaccurate. Any additional suggestions to try?

> now the results are inaccurate

How inaccurate?

Take a look through the Ipopt options.

Hi @horvetz!

The tutorial Solve a PDE-constrained optimization problem deals with a large constrained optimization problem, and it has a section on finding a feasible point using `trunk` from JSOSolvers.jl.


Hello @odow , I know this is an old post but I think my problem is related to the one you were discussing here.

I think I can provide an example:

```
using JuMP, Ipopt

function testManualIntegration()
    model = Model(Ipopt.Optimizer)

    @variable(model, w >= 0)
    @variable(model, -1 <= x <= 1)

    @NLobjective(model, Min, w)

    @constraint(model, 2 * w == 2)
    @NLconstraint(model, x * w + (1 - x) * w == 0)
    @NLconstraint(model, x^2 * w + (-x)^2 * w == 2 / 3)

    # set_optimizer_attribute(model, "fixed_variable_treatment", "make_parameter_nodual")

    JuMP.optimize!(model)

    println("""
    termination_status = $(termination_status(model))
    primal_status      = $(primal_status(model))
    objective_value    = $(objective_value(model))
    """)

    println("w = ", value(w), "\nx = ", value(x))

    solution_summary(model; verbose = true)

    return
end
```

This code should recover the Gaussian quadrature points in 1D. There are two points and two weights, but since the problem is symmetric, only two variables are needed. The constraints require that the polynomials 1, x, and x^2 be integrated exactly.

If one tries to solve the problem by hand, the first constraint gives w = 1. Then the second constraint becomes 0 = 0, and the third reduces to a second-order equation in x with solutions x = 1/sqrt(3) and x = -1/sqrt(3); by symmetry both describe the same rule, so the problem has a unique feasible solution (which agrees with the Gauss integration rule).
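The hand derivation can be verified numerically without any solver; here is a quick standalone sketch in plain Julia, using the corrected second constraint `x*w + (-x)*w == 0` from the EDIT below:

```
# Check that w = 1, x = 1/sqrt(3) satisfies the three moment constraints
# of the symmetric two-point rule on [-1, 1].
w = 1.0
x = 1 / sqrt(3)

@assert isapprox(2 * w, 2)                           # exactness for 1
@assert isapprox(x * w + (-x) * w, 0; atol = 1e-12)  # exactness for x
@assert isapprox(x^2 * w + (-x)^2 * w, 2 / 3)        # exactness for x^2

println("All three moment constraints hold.")
```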

However, Ipopt throws the error “too few degrees of freedom”. I have tried all the possible `fixed_variable_treatment` options, but the problem persists. Is it possible to solve this issue? Is it possible to disable the check comparing the number of variables with the number of constraints?

Thank you very much,

SHCE

EDIT: There was a copy-paste error detected by odow. The second constraint should read `@NLconstraint(model, x*w + (-x)*w == 0)`. Unfortunately, this change does not solve the problem.


Try setting `bound_relax_factor` to something appropriate, e.g. `1e-4`; it has helped with similar issues I had in the past.

Thank you. Unfortunately, it does not solve the problem in this case.

There are too few degrees of freedom because you have two variables and three equality constraints. I think you need to reconsider your formulation.

For example, the second constraint does not reduce to 0 = 0. Substituting w = 1 from the first constraint:

```
@NLconstraint(model, x*w + (1-x)*w == 0)
@NLconstraint(model, x*1 + (1-x)*1 == 0)
@NLconstraint(model, x + 1 - x == 0)
@NLconstraint(model, 1 == 0) # <- this is the problem
```

I’m sorry. I was working with the [0,1] interval but changed it to the [-1,1] interval for the sake of clarity and ended up mixing both formulations. My apologies.

`@NLconstraint(model, x*w + (-x)*w == 0)`

Now, indeed, it becomes 0=0. However, the problem persists and the same error is thrown.

It doesn’t work for the same reason: Ipopt doesn’t support solving models with more equality constraints than there are variables.

Perhaps try not adding trivial constraints like `0=0`?

Is it possible to overcome this limitation, maybe with a flag that disables the check? I was wondering whether the algorithm still works in such cases. Consider the case where one adds the same constraint multiple times: the number of equality constraints increases, but the “span” of the whole set of constraints stays the same. What are your thoughts on this?
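The intuition about duplicated constraints can be made concrete with the constraint Jacobian: duplicating a row increases the row count but not the rank, so the span of the constraints is unchanged. A small sketch in plain Julia (standard library only; the 2×2 Jacobian is made up for illustration):

```
using LinearAlgebra

# A made-up Jacobian of two independent equality constraints in two variables.
J = [1.0 2.0;
     3.0 4.0]

# Duplicate the first constraint: one more row, same span.
J_dup = vcat(J, J[1:1, :])

println(size(J_dup, 1))  # 3 rows
println(rank(J_dup))     # still rank 2
```

So a naive row count overestimates the number of independent equality constraints whenever the Jacobian is rank-deficient.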

I think it is very difficult to detect such constraints in real-case scenarios.

See my ticket here.

Thank you. I have tried the option suggested there. It detects one linearly dependent equality constraint and removes it. However, the solver converges to “a point of local infeasibility”, which is not the expected feasible solution.

Unfortunately, as mentioned in the issue, it seems it is not possible to “run Ipopt anyway”.

Did you find any solver that can handle this situation? I saw your comparison, and filterSQP performs well. However, as far as I know, SQP is gradient-based and does not use second-derivative information from the constraints. Am I right? I am not familiar with the details of filterSQP, though.
I have tried the SLSQP implementation from the NLopt library, but it throws `ArgumentError: invalid NLopt arguments: too many equality constraints`.

On the other hand, do you have a Julia interface for your work?

Thank you.

SQP (Sequential Quadratic Programming) does use second-order information. If you have an AMPL model, you can try out filterSQP on the NEOS server.

Convergence to a point of local infeasibility (that is, a minimum of the constraint violation) is unfortunate, but detecting it is a very good feature of robust codes. You can try providing a different initial point.
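In JuMP, a different initial point can be supplied with `set_start_value` before optimizing; a sketch against the quadrature model from the earlier example (this fragment assumes `model`, `w`, and `x` from that code, with JuMP and Ipopt loaded):

```
# Not standalone: assumes `model`, `w`, and `x` from testManualIntegration().
set_start_value(w, rand())          # random initial weight
set_start_value(x, 2 * rand() - 1)  # random initial node in [-1, 1]
optimize!(model)
```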

My C++ solver Uno doesn’t have bindings to other languages yet, but it’s on my to-do list.

From the Wikipedia page, I thought SQP used second-order information of the objective function, though only the gradient from the constraints. Am I missing anything?

Interestingly, providing random weights and a random point as initial values, it managed to find the optimal solution. Thank you very much. I will try it with my real-case problem.

Thank you! It looks promising.


> From the Wikipedia page, I thought SQP used second-order information of the objective function, though only the gradient from the constraints. Am I missing anything?

Great remark: the Wikipedia page is wrong! I fixed it. You take the Hessian of the Lagrangian, not of the objective; it captures the curvature of both the objective and the constraints.
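In symbols, for minimizing f(x) subject to c(x) = 0, the quadratic subproblem in SQP uses the Hessian of the Lagrangian (sign conventions for the multipliers vary between references):

$$
\mathcal{L}(x,\lambda) = f(x) - \lambda^\top c(x),
\qquad
\nabla^2_{xx}\,\mathcal{L}(x,\lambda) = \nabla^2 f(x) - \sum_i \lambda_i \, \nabla^2 c_i(x)
$$

so the constraint Hessians \(\nabla^2 c_i\) enter the model, not just the constraint gradients.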


OMG! I was living a lie! That took some time to digest… Thank you very much for pointing that out.
