Wrong answer with nonlinear optimization in JuMP

Hi, I am trying to maximize a very simple function in JuMP that I can solve analytically. Ipopt reports an optimal solution that is different from the analytic one. I was hoping someone could point me to my error.

Here is an MWE:

using JuMP, Ipopt, Optim, LinearAlgebra
C1 = 50
K1 = 1
K2 = 0
K3 = 0
σ = 5
problem_test = Model(Ipopt.Optimizer)
#set_silent(problem_test)
@variable(problem_test, Q[1:3] >= 0)
@NLobjective(problem_test, Max, Q[1]^((σ-1)/σ)*K1 + Q[2]^((σ-1)/σ)*K2 + Q[3]^((σ-1)/σ)*K3 - 1.0*Q[1] - 1.1*Q[2] - 1.1*Q[3])
@constraint(problem_test, c1, 1.0*Q[1] + 1.1*Q[2] + 1.1*Q[3] <= C1)
optimize!(problem_test)
objective_value(problem_test)
sol = value.(Q)
sum(sol)
dual(c1)

soltrue1 = ((σ-1)/σ)^σ*(K1/1.0)^σ
soltrue2 = ((σ-1)/σ)^σ*(K2/1.1)^σ
soltrue3 = ((σ-1)/σ)^σ*(K3/1.1)^σ

The soltrue variables are the optimal values of Q computed with pencil and paper.
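For reference, a sketch of that pencil-and-paper derivation (writing $p_1 = 1.0$ and $p_2 = p_3 = 1.1$ for the linear coefficients): the interior first-order condition for each $Q_i$ is

$$K_i \,\frac{\sigma-1}{\sigma}\, Q_i^{-1/\sigma} = p_i \quad\Longrightarrow\quad Q_i^{\ast} = \left(\frac{\sigma-1}{\sigma}\right)^{\sigma} \left(\frac{K_i}{p_i}\right)^{\sigma},$$

which is exactly what the soltrue lines compute. The constraint c1 is slack at that point, since 1.0*Q[1] + 1.1*Q[2] + 1.1*Q[3] is far below C1 = 50.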

julia> sol = value.(Q)
3-element Vector{Float64}:
 0.327742785832787
 1.1393908780473464e-5
 1.1393908780473464e-5

julia> soltrue1 = ((σ-1)/σ)^σ*(K1/1.0)^σ
0.3276800000000001

julia> soltrue2 = ((σ-1)/σ)^σ*(K2/1.1)^σ
0.0

julia> soltrue3 = ((σ-1)/σ)^σ*(K3/1.1)^σ
0.0

They seem pretty similar to me? Ipopt uses tolerances, so you should expect some (small) differences between the analytic solution and the computed optimum.
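If you want the computed point even closer to the analytic one, one option (a sketch on top of the MWE above, not required for correctness) is to tighten Ipopt's convergence tolerance before calling optimize!:

# "tol" is Ipopt's overall NLP convergence tolerance (default 1e-8);
# tightening it makes the reported optimum match the analytic values more closely.
set_optimizer_attribute(problem_test, "tol", 1e-12)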

You can get a better solution by turning off scaling:

set_optimizer_attribute(problem_test, "nlp_scaling_method", "none")

But the underlying problem is that x^0.8 is only defined for x >= 0, and this can cause problems when Ipopt attempts to evaluate the objective at a point that is slightly negative (e.g., Q[1] = -1e-8).
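Another knob worth trying (a sketch based on that explanation; by default Ipopt relaxes variable bounds by a small epsilon, which is how slightly negative trial points arise):

# "bound_relax_factor" controls how much Ipopt relaxes Q >= 0 internally (default 1e-8).
# Setting it to 0 keeps the iterates at Q >= 0, so Q^0.8 is always well defined.
set_optimizer_attribute(problem_test, "bound_relax_factor", 0.0)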


Thanks so much for the reply! Now, I am even more worried!

See my results, which I should have included in the original post:

julia> sol = value.(Q)
3-element Vector{Float64}:
 1.38696926506704
 0.20409119357963545
 0.204091193579635

julia> soltrue1 = ((σ-1)/σ)^σ*(K1/1.0)^σ
0.3276800000000001

julia> soltrue2 = ((σ-1)/σ)^σ*(K2/1.1)^σ
0.0

julia> soltrue3 = ((σ-1)/σ)^σ*(K3/1.1)^σ
0.0

In case it is helpful, I am using

VERSION
v"1.6.1"

and

(@v1.6) pkg> status JuMP
      Status `C:\Users\juan\.julia\environments\v1.6\Project.toml`
  [4076af6c] JuMP v0.21.10

(@v1.6) pkg> status Ipopt
      Status `C:\Users\juan\.julia\environments\v1.6\Project.toml`
  [b6b21f68] Ipopt v0.7.0

Actually, I have a much larger problem, and I do not know if this is a symptom of the same issue. The solution of this optimization problem is interior (in the sense that the constraint c1 is not binding at the optimum), yet when I change the value of the constant C1, the optimizer changes.
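One way to check whether c1 actually binds at the reported solution (a small sketch using the model above):

# Evaluate the left-hand side of c1 at the solution and inspect its shadow price;
# if the LHS is strictly below C1 and the dual is ~0, C1 should not affect the optimum.
value(c1)   # 1.0*Q[1] + 1.1*Q[2] + 1.1*Q[3] at the reported solution
dual(c1)    # ~0 for a non-binding inequality constraint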

Edit: your analytical solution is correct, provided it satisfies the constraint (which of course should be checked).
Are you sure Ipopt converged properly? When I ran your example, it failed with “EXIT: Restoration Failed”.
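You can check this programmatically instead of scanning the Ipopt log, e.g. (a small sketch):

# LOCALLY_SOLVED / FEASIBLE_POINT is what you want to see before trusting value.(Q).
termination_status(problem_test)
primal_status(problem_test)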

Hi! Thanks for the answer.

Edit: Sorry, I had replied to the previous post, where I said that since this function is globally concave, there should be no local optima other than the global one.

The exit from Ipopt says EXIT: Optimal Solution Found.

Did you mean to set K2 and K3 to 0?

I just set K2 and K3 to 0 to make the error more apparent, but if I set them to something else, say 1, I still get the error.

using JuMP, Ipopt, Optim, LinearAlgebra
C1 = 50
K1 = 1
K2 = 1
K3 = 1
σ = 5
problem_test = Model(Ipopt.Optimizer)
@variable(problem_test, Q[1:3] >= 0)
@NLobjective(problem_test, Max, Q[1]^((σ-1)/σ)*K1 + Q[2]^((σ-1)/σ)*K2 + Q[3]^((σ-1)/σ)*K3 - 1.0*Q[1] - 1.1*Q[2] - 1.1*Q[3])
@constraint(problem_test, c1, 1.0*Q[1] + 1.1*Q[2] + 1.1*Q[3] <= C1)
optimize!(problem_test)
objective_value(problem_test)
sol = value.(Q)
sum(sol)
dual(c1)

soltrue1 = ((σ-1)/σ)^σ*(K1/1.0)^σ
soltrue2 = ((σ-1)/σ)^σ*(K2/1.1)^σ
soltrue3 = ((σ-1)/σ)^σ*(K3/1.1)^σ

julia> value.(Q)
3-element Vector{Float64}:
 0.9511299412613234
 0.7518509419158073
 0.7518509419158073

julia> soltrue1 = ((σ-1)/σ)^σ*(K1/1.0)^σ
0.3276800000000001

julia> soltrue2 = ((σ-1)/σ)^σ*(K2/1.1)^σ
0.20346349914002398

julia> soltrue3 = ((σ-1)/σ)^σ*(K3/1.1)^σ
0.20346349914002398

Weird. Can you try minimizing -f(x)?
(Edit: the nonlinear solver I’m developing indeed returned 0.32768, 0.203464, 0.203464.)
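Concretely, that just means using the negated objective with a Min sense instead, e.g. (a sketch; everything else in the MWE stays the same):

# Same problem posed as a minimization of -f(Q).
@NLobjective(problem_test, Min,
    -(Q[1]^((σ-1)/σ)*K1 + Q[2]^((σ-1)/σ)*K2 + Q[3]^((σ-1)/σ)*K3)
    + 1.0*Q[1] + 1.1*Q[2] + 1.1*Q[3])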


This is unlikely to be relevant here, but you probably shouldn’t use 1.6.1: the current stable release is 1.7.1, and even if for some reason you need to stay on 1.6, the latest patch release is 1.6.5.


The minimization does work! I find it very puzzling, though.


Thanks. I will update it.

Awesome. There has to be some kind of bug in JuMP/Ipopt then :slight_smile:


Installing Julia 1.7.1 or minimizing -f seems to have done the trick.

Thanks so much for your time @cvanaret , @odow, @nilshg


Please update your packages. I think Ipopt.jl v0.7 had a bug. The latest release is v0.9.1.
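In the package REPL that is just (a sketch; the versions you actually get depend on the rest of your environment):

(@v1.7) pkg> update JuMP Ipopt
(@v1.7) pkg> status Ipopt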
